The debate was prompted by an article in the scientific journal Nature last December. Nature set out to use peer review to compare the accuracy of Wikipedia with that of Encyclopedia Britannica. To conduct the test, 42 domain experts each analyzed a single topic from both encyclopedias, covering a wide range of scientific disciplines. Reviewers were asked to check the articles for three types of errors: factual errors, critical omissions and misleading statements. The tests were blind (i.e. reviewers did not know the source of the entry they were reviewing).
The results were quite interesting. Not surprisingly, Britannica had fewer errors in the overall survey, but not by much. Across the 42 topics, reviewers uncovered 162 errors in the Wikipedia entries vs. 123 in Britannica's, an average of roughly four errors per article against three. Interestingly, of eight "serious" errors, four each were found in Wikipedia and Britannica.
Last week, Britannica struck back with a response to the Nature article. Britannica did not focus its attack on the results, but instead on the methodology of the study, for example complaining that Nature used supplemental Britannica content from outside the Encyclopedia Britannica itself. In addition, much of the response dwelled on wording, arguing that where Nature said "Wikipedia comes close to Britannica", one-third more inaccuracies (162 vs. 123) was not that close.
This week, Nature published a brief response to the Britannica article on its blog, defending its methodology and results.
What Britannica seems to be missing is that this is a public relations battle that it cannot win. For an organization selling premium content compiled by experts, splitting hairs over whether its content is slightly better than a free source compiled by the unwashed masses is a losing battle. Rather than hiding behind definitions of what "is" is, the team at Britannica should take this as a siren call to look at its products and value proposition. Long-term, for Britannica to remain a viable business, it needs to better understand the needs of its users and develop products that uniquely address those needs.
Perhaps the most interesting aspect is that it took Britannica more than three months to respond to the Nature article. Considering that Nature published the specific findings for each of the 42 topics in the original article, it shouldn't have taken Britannica long to fact-check those findings and put together a response. (In contrast, the team at South Park rewrote an entire script and produced a new episode in less than a week after Isaac Hayes quit as Chef due to pressure from Scientology.) With Wikipedia's ability to respond to changing information within seconds or minutes, Britannica's slow-footed response seems telling.
Many content providers continue to live in denial, hiding behind claims that "our quality will win out" over Internet upstarts. But the quality differential is rapidly diminishing. Whether comparing U.S.-based editors to outsourcing ("they'll never understand our market"), domain experts to the wisdom of crowds ("our PhDs have knowledge no one else can match"), or manual vs. automated tagging ("a computer can't understand the nuances of our taxonomy"), the gap is disappearing.
Traditional content providers hoping to still be around 5-10 years from now need to rethink their strategy. Rather than relying upon domain knowledge to compile information, they should focus that knowledge on understanding how information is consumed. That will enable them to build the vertical-market, workflow-based applications that will continue to command premium value, while their basic content becomes commoditized.