Earlier this week, the New York Times published an article detailing a case of malicious tampering with the Wikipedia bio of John Seigenthaler, Sr., former editor of the Nashville Tennessean newspaper. According to the article, an anonymous contributor to Wikipedia added that Mr. Seigenthaler had been “thought to be directly involved in the Kennedy assassinations of both John and his brother Bobby”.
While the episode triggered much debate about the reliability of Wikipedia, it really raised a much larger question: can you trust information from an undetermined source?
I am sure that many executives from content businesses got excited when they read that story, saying, “I knew our value proposition was solid; the Internet can’t compete with us on quality”. From my standpoint as an information consumer, it’s not so clear.
Users have various content needs. For some of those needs, particularly among business users, they require a “trusted source”. For other needs, even business users can trust “any” source. For example, if I need the direct-dial phone number of a senior executive at Cargill in Wayzata, MN, I’ll use a paid source. If I simply need the area code for Wayzata, I’ll Google it and assume that the correct answer will be among the first few results.
On the practical side, consider the role of a recruiter. If they are sourcing a C-level role at a Fortune 1000 company, they might use a subscription database to identify the handful of candidates within that industry with comparable experience and financial responsibility. However, if they are simply looking to identify director-level financial executives in a given industry or geographic market, they might be better served by a service like ZoomInfo, which compiles millions of names from various web sources. The accuracy will be much lower than that of a high-quality information service, but they are bound to identify 10-20 people to speak to, and that group will likely generate enough referrals to surface strong candidates.
The reality is that some tasks require content from a “trusted source”, while others just need an answer that is likely correct. Using my Cargill example, even if the first site I clicked on had the wrong area code for Wayzata (showing the old 612 rather than the newer 952), the second one would likely be accurate. Content publishers should look at their client base to identify the critical business processes that rely upon their data. Applications like compliance, legal research and medical information will always provide markets for trusted sources, while a growing number of less critical processes will make do with whatever is freely available.
At the same time, information users will have to learn how to delineate trusted sources from the unknown. Users today do this for certain types of information – they know to trust Consumer Reports more than a single Amazon review – but it will become increasingly important to filter any critical information through this process. Ratings and rankings from unknown sources can be manipulated, while trusted sources tend to be more reliable.
Content providers, both "trusted sources" and those that strive to become trusted, must take measures to earn that trust. Traditional brands such as Consumer Reports and Dun & Bradstreet should be careful not to dilute the power of their brands, even if it means forgoing revenue opportunities. Meanwhile, newer companies should take steps to reduce the ease with which their systems can be manipulated. Wikipedia, which relies upon its huge community of users and editors to challenge false information, has temporarily locked the Seigenthaler page so that it cannot be edited. Amazon's delays in posting new reviews allow it to scan for potential problems.
Going forward, there will likely be more opportunities for publishers, intermediaries and technology providers to help users gauge the validity of content. In the meantime, the message of the Internet may be “Caveat Lector” — reader beware.