
December 27, 2006

Public Library of Science Launches PLoS ONE

While Nature may be going back to the drawing board on its wiki-based peer review test, the Public Library of Science officially unveiled the beta of PLoS ONE last week.

PLoS ONE, which was first announced in June, is a peer review platform that combines traditional peer review with web-based commentary.  Each article is peer-reviewed with oversight by an academic board, much like at other scientific journals.  After publication, however, articles on PLoS ONE are opened up for reader annotation, discussion and rating, creating a dialogue between author and reader.

As a Public Library of Science offering, the journal is completely open access and published under a Creative Commons license.  Rather than charging users for the content, PLoS charges authors a nominal fee ($1,250) to have their articles reviewed.

For various reasons, this blog focuses largely on business and financial content.  However, there is quite a bit of innovation going on today in the STM (scientific, technical and medical) market.  Both Nature and the Public Library of Science have previously been named to the "50 Content Companies that Matter" list.  I have no doubt that these two organizations will continue to push the envelope in 2007.


December 21, 2006

Nature Cancels Wiki-based Peer Review

Scientific journal Nature announced this week that it will cancel the open peer review process it began this past June.

Open peer review, using a wiki-based environment, was an experiment to improve the peer review of submitted papers by posting them to an open server for public comment.  The traditional peer review process relies upon two or more "referees" who anonymously review each paper so that Nature's editors can decide whether to accept or reject the paper. 

According to the Wall Street Journal, the goal of open peer review was to see if public review might help uncover potential fraud, following the retraction of two papers earlier in the year by a South Korean scientist researching cloning.

According to Nature, the open peer review process gained a lot of interest, but little participation.  Participation among authors was voluntary and only a small percentage of them chose to do so (perhaps out of concern that the public might not treat their papers fairly).  Of those who did participate, the number of comments received was minimal.  The editors at Nature speculate that the limited response might be due to concerns about providing comments in an open forum.

While it's disappointing that this experiment did not succeed, the fact that they tried this open peer review process is a testament to the team at Nature.  I have previously posted about Nature's position as an innovator among content providers.  While this experiment may not have produced the results they'd hoped for, I have no doubt they will continue to push the envelope in testing new technologies.

Clarification: On Nature's Peer to Peer blog, Maxine Clarke clarifies that they did not shut down the open peer review site per se, but rather completed the three-month (extended to four-month) trial of the process.

December 20, 2006

Place your laptop, shoes, carry-on and baby (?) on the conveyor for screening

Thanks to Paul Kedrosky for sharing this story from the LA Times.

Earlier this week at LAX, a 56-year-old woman placed her one-month-old grandson into one of those gray plastic bins and slid him into the X-ray machine.  Luckily, the screener was paying attention, noticed the shape of a human in the bin and extricated the child from the machine.

I think that retired FAA security agent Brian Sullivan says it best: "If a baby can get through, what the hell else can get through?"

Apparently this is not the first such incident.  In fact, the TSA finds it necessary to put the following in bold blue type in the "traveling with children" section of its website: NEVER leave babies in an infant carrier while it goes through the X-ray machine.




December 19, 2006

The Year in Search: Dweebs, horndogs and geezers

Nick Carr, at Rough Type, has posted the top 10 search terms for the year from Google, Yahoo and AOL.
The amazing part is the lack of overlap among the three.  The only term that appears on more than one list is "American Idol", which ranks 6th on Yahoo and 4th on AOL.

I'm kind of surprised that Bebo is #1 on Google.  Not so much that it outranked MySpace, whose traffic dwarfs Bebo's, but that Google users would bother to use a search engine to locate a four-letter URL.  Also interesting is that video provider Metacafe is on the list, but not YouTube.  Is it the opposite of the Bebo issue?  Has everyone already bookmarked YouTube or figured out how to type the domain name directly?  Or is the fact that Time Magazine has taken notice an indication that YouTube has jumped the shark?

Nick's comments hit it pretty much on the head: Yahoo's searches suggest 13-year-old boys looking for half-naked photos of celebrities; Google's are led by geeks searching for videos and social networking sites; and AOL's users search for categories of information, like horoscopes or dogs.

The full lists:
Google:
1. Bebo
2. Myspace
3. World Cup
4. Metacafe
5. Radioblog
6. Wikipedia
7. Video
8. Rebelde
9. Mininova
10. Wiki

Yahoo:
1. Britney Spears
2. WWE
3. Shakira
4. Jessica Simpson
5. Paris Hilton
6. American Idol
7. Beyonce Knowles
8. Chris Brown
9. Pamela Anderson
10. Lindsay Lohan

AOL:
1. Weather
2. Dictionary
3. Dogs
4. American Idol
5. Maps
6. Cars
7. Games
8. Tattoo
9. Horoscopes
10. Lyrics
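The overlap claim is easy to verify.  A quick sketch that intersects the three lists above (terms lowercased so capitalization differences don't matter):

```python
# The three top-10 lists from the post, normalized to lowercase.
google = ["bebo", "myspace", "world cup", "metacafe", "radioblog",
          "wikipedia", "video", "rebelde", "mininova", "wiki"]
yahoo = ["britney spears", "wwe", "shakira", "jessica simpson",
         "paris hilton", "american idol", "beyonce knowles",
         "chris brown", "pamela anderson", "lindsay lohan"]
aol = ["weather", "dictionary", "dogs", "american idol", "maps",
       "cars", "games", "tattoo", "horoscopes", "lyrics"]

# Union of the pairwise intersections: every term on more than one list.
shared = ((set(google) & set(yahoo)) |
          (set(google) & set(aol)) |
          (set(yahoo) & set(aol)))
print(shared)  # {'american idol'}
```

Sure enough, "American Idol" is the only term the lists have in common.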

Nstein Acquires Eurocortex

Nstein has announced its acquisition of Eurocortex, a content management provider headquartered in Toulouse, France.
The deal, valued at C$1.4 million, extends Nstein's reach into the publishing space.  Eurocortex claims more than 30 publishing/media clients, including Agence France Presse, Reed Business Information, Hachette Group and Groupe Lagardere.

In the early days of semantic tagging, many of the leading software companies, like Inxight, ClearForest and Nstein, secured early-adopter sales to content providers and aggregators.  These software companies saw the publishing market as an entry point, but saw their long-term opportunity in other, larger markets.  Unfortunately, due to the relative immaturity of their offerings and the difficulty of communicating (and demonstrating) a strong ROI, semantic tagging companies have struggled to make inroads elsewhere.  As a result, the publishing market, which adopted the technology to automate some of its manual tagging work, has remained a critical part of their business.

I believe this market segment is ripe for a rollup, creating a single provider with revenues of $25-35M.  In the meantime, the acquisition makes sense for Nstein: it adds a range of complementary products to sell and provides a foot in the door with many European publishers.

December 15, 2006

IBM, Yahoo Launch Free Enterprise Search

IBM and Yahoo have introduced a new entry in the enterprise search market, and the best part is, it's free.

The new IBM OmniFind Yahoo! Edition is a downloadable software application that can run on a server or a standalone PC.  The application puts Yahoo's search interface on top of the IBM OmniFind platform.  The free version can index up to 500k documents (making it more like workgroup search than enterprise search), but you can expand its capabilities by upgrading to IBM's Enterprise Search Starter Edition.

Until recently, enterprise search remained an expensive and complex undertaking.  Only five years ago, a typical Verity (now Autonomy) search implementation might have cost $400-750k (including professional services) and required 3-6 months to implement.

Yahoo and Google are both going after the enterprise search market with a compelling message: why is enterprise search so complex and expensive when Internet search works well for free?  Of course, there are differences between web search and enterprise search.  Google's PageRank system assigns a value to each page based upon links to that page from other popular sites.  For complex searching inside the enterprise, the most valuable documents might not be the most popular.
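For readers curious how link-based ranking actually works, here is a minimal sketch of PageRank-style power iteration over a tiny, hypothetical link graph (the graph and function are illustrative only, not Google's actual implementation):

```python
# Minimal PageRank power iteration. links[page] lists the pages it
# links to; damping factor 0.85 as in the original formulation.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with uniform rank
    for _ in range(iterations):
        # Everyone gets the "random jump" share...
        new = {p: (1 - damping) / n for p in pages}
        # ...plus a share of the rank of every page linking to them.
        for p, outgoing in links.items():
            share = rank[p] / len(outgoing) if outgoing else 0
            for q in outgoing:
                new[q] += damping * share
        rank = new
    return rank

# Hypothetical graph: "a" links to "b" and "c", "b" to "c", "c" to "a".
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(links)
# "c" is linked to by both "a" and "b", so it comes out ranked highest.
```

The sketch also hints at why the signal transfers poorly inside the firewall: enterprise documents rarely link to one another, so there is little link graph to iterate over.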

That being said, basic search can address the needs of most enterprise users.  And with Yahoo joining Google in pursuing this market, the days of half-million-dollar search implementations are gone.  As ZDNet points out, the target of their efforts is not so much the Autonomys of the world as Microsoft and hierarchical file organization.

You can download or learn more about the free IBM/Yahoo software here.
For more, read posts from Searchblog or ZDNet, or this Reuters article via CNet.

December 14, 2006

Does Scale Matter?

Scott Karp has penned a thought-provoking post on his Publishing 2.0 blog, entitled "Content Businesses Don't Scale Anymore".  The basic premise of the post is that the content businesses that have scaled dramatically in recent years are all content platforms (YouTube, MySpace, Google), not creators of original content.  Karp further notes that even long-tail revenue tends to reward the aggregators at the head, with more modest benefits trickling down to the tail.
Factually, I think Scott's points are accurate.  My question, however, is whether scale matters in the content industry.

The content business has long been a market of small companies focused on niches.  Interestingly, many niche content providers are profitable, particularly in the b2b space.  By contrast, many other industries require scale to become profitable: in the enterprise software space, for example, a company really isn't considered viable without revenues of $80-100M.  Yet I can think of dozens of content businesses with revenues of $10-25M and 20% margins.

On the b2b side, most of the dominant "traditional" publishers, such as McGraw-Hill, Thomson, Primedia and Reed Elsevier, were built through the acquisition of small, niche players.  With the possible exception of the newspaper industry, there are few content companies that scaled well, even in the old days.

Scale may matter in the consumer space, where you need millions of page views in order to compete for mainstream advertising dollars.  In the b2b space, however, I believe that growth and profitability are the key metrics for content companies.  A lot depends on why you are building your business.  Is your goal to get page views high enough to be acquired, or to build a profitable business with sustainable growth?  If it's the latter, then scale shouldn't be your key measure of success.



December 12, 2006

The Suburbanization of the Internet

Yesterday, Fred Wilson posed the question of what to call the de-portalization of the Internet: the move from central portals like Yahoo to the search engine-driven Web 2.0 world.  A robust ecosystem has developed around Google, enabling numerous sites to interact with Google while the user interaction occurs on those sites, not on a central Google site.

Fred, not liking his “de-portalization” term, put out a request for a better one.  Alacra’s Jarid Lukin, not content to rest on his laurels after naming the freemium business model, has proposed we call this the Suburban Web.  It’s an interesting analogy.  The details can be found here.

December 11, 2006

Fortent Acquires AML Newsletters

Fortent, the Warburg Pincus-backed provider of anti-money laundering content, has acquired Alert Global Media, publisher of Money Laundering Alert and moneylaundering.com.
Fortent is a rollup, established earlier this year through the acquisition of AML vendor SearchSpace and technology provider Semagix.
The AML software and content market has been strong as financial institutions look for ways to comply with regulations stemming from the Patriot Act, the Bank Secrecy Act and Financial Services Authority requirements.



December 08, 2006

Swivel: YouTube for Data

Swivel launched earlier this week, positioning itself as "YouTube for Data".
Swivel is an open site that allows users to post any type of data, then run various cross-tabulations and statistical analyses to be shared with the overall community.

Swivel makes it easy to instantly create charts and graphs that mash up disparate data series.  For example, one posted chart shows the (inverse) correlation between wine consumption and crime in the U.S.  Is it statistically significant?  Probably not, but I'll pour myself an extra glass of zin at dinner tonight just in case.
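The kind of mashup behind that chart boils down to a Pearson correlation between two data series.  A minimal sketch with made-up numbers (not Swivel's actual wine or crime data):

```python
# Pearson correlation coefficient between two equal-length series.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

wine = [1.8, 1.9, 2.0, 2.2, 2.3]    # hypothetical consumption per capita
crime = [5.9, 5.6, 5.2, 4.8, 4.5]   # hypothetical offenses per capita
r = pearson(wine, crime)            # strongly negative (near -1)
```

A coefficient near -1 is exactly the "inverse correlation" the chart shows; as the post notes, correlation on a handful of points says nothing about significance, let alone causation.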

Swivel has a community-oriented freemium business model.  You can upload any data that you'd like for free, as long as you make it accessible to other users.  They will soon launch a professional (paid) version where your data will be secure and private.  The data sets can be tagged, making it easier to find what you're looking for.  Users can also rate and make comments on the various charts created, so you can view the highest rated or most commented on graphs.

Will Swivel be the next YouTube?  Its audience is obviously more narrowly focused.  After all, anyone can enjoy watching a guy in a Santa suit fall off the garage roof onto his truck, but only data geeks get their kicks from mashing up time-series data.  So, while I'd consider it more of a playground for stats junkies, the "YouTube for Data" comparison is probably better for its valuation (I'm sure VCs are getting inundated with business plans from companies calling themselves "YouTube for X").  As of now, Swivel has only about 1,150 data sets, but that number is sure to grow.

What's the impact for publishers?
Database and information publishers are great at showing micro-level data, but often ignore the macro-level trends that can be compiled from it.  If you're tracking revenues, budgets, employee counts, product prices, weather, exports or any other quantitative data, you can probably create some interesting analyses.  These cross-tabulations are great press-release fodder, and by giving your users access to the Swivel platform, you let them create their own unique content.  Many publishers' instinct will be to keep their data private so they control the analysis.  But the smart publishers will open their content up to the community and see the creative reports generated by users.

For more on Swivel, take a look at posts by TechCrunch and Open...

December 05, 2006

Can Web 2.0 penetrate the Intelligence Community?

Clive Thompson’s article “Open Source Spying” in this Sunday’s New York Times Magazine is a fascinating look at how Web 2.0 is creeping its way into the least open IT environment imaginable: the US intelligence community.

The article starts by describing problems well known to anyone who has tried to provide technical solutions to the major TLA agencies: outmoded technologies, reinforced by technical and policy walls, prevent any meaningful sharing of information.

I spent time in 2003-2004 delivering analytic technology solutions to the intelligence community.  While advanced technologies had a few advocates in high places (among others, former Navy Admiral and “Total Information Awareness” sponsor John Poindexter was a big fan of our technology), the hurdles in place were too high to clear.

Probably the greatest hurdle was the lack of sophistication of the actual agency employees, combined with the greed and arrogance of the systems integrators who “served” them.  The defense and intelligence communities have been outsourced to private contractors for the past 25 years.  This started under Reagan, who believed you could downsize government by cutting a $40k-per-year civil servant and replacing them with an $80k-per-year private contractor (often the same person, now on a different payroll).  Over time, the government lost its ability to recruit strong IT minds, and those who did join would quickly shift to the private sector, where they could get a huge raise for performing the same job.

Since the agencies were left with little IT expertise, they became highly dependent upon contractors: large systems integrators like Lockheed, Northrop Grumman, SAIC and others.  Unfortunately, for these firms efficiency and public safety are lower priorities than revenue growth.  During my time in Washington, I found that the systems integrators put up walls to keep commercial off-the-shelf (COTS) software out, even as Corporate America embraced COTS solutions over custom software.  COTS software is less expensive, easier to support, easier to integrate and carries a much lower implementation risk than a custom solution, but systems integrators can make more money having a team of 50 people spend two years building something than having a team of 10 implement something that already exists.

The Times article describes recent efforts to change the system, led by Dale Meyerrose, CIO for the new Office of the Director of National Intelligence.  Meyerrose has mandated the use of COTS solutions (mainly for compatibility reasons) and has also begun to tackle some of the cultural obstacles.

At the same time, the DNI took over a CIA program that explored new methods of gathering and sharing intelligence.  Among the first methods considered was the use of wikis and blogs.  Could decentralized tools like these survive in the most centralized of all IT environments?

Tests are still underway, but early results are compelling.  As an example of how this could work, the article describes how a wiki was used to capture and update information about the crash of a private commuter plane into a Manhattan apartment building.  Over the course of about two hours, that page was updated 80 times by analysts from nine different agencies.  They were quickly able to reach the conclusion that this was not a terrorist-related incident.  How long might that have taken in the traditional model, where analysts at different agencies were unable to share information?

Will Web 2.0 applications solve the many woes of the intelligence community?  I think that it will take more than a few innovative programs to weaken the multi-billion dollar grip the contractor community has over intelligence.  But, with the right leadership (and perhaps some committee hearings on war profiteering), it’s possible that more user-generated content initiatives will displace the failed knowledge management projects of the past.  And if these efforts can begin to take hold in the command-and-control centric world of the intelligence community, just think what impact they might have in a more flexible organization like your own.

December 01, 2006

TechCrunch Insights on the Answer Wars

TechCrunch’s Michael Arrington has written a great post about the victory of the Yahoo Answers model over the failed Google Answers.  The post uses this case to show that Web 2.0 and the idea of community are more than simply marketing hype.

Google Answers was launched in 2002, when ad revenues were down, and used a more traditional business model: users pay for answers, with Google taking a cut.  During its entire run, only 800 “experts” answered questions.  Yahoo launched its service last year, as social media was beginning to take hold.  In the Yahoo model, questions are posted for free and anyone can respond with an answer.  Readers vote on the answers, enabling the “wisdom of crowds” to determine whether an answer is compelling.

Under Yahoo’s approach, as Michael describes it, the “network effect kicked in big time” and Yahoo Answers gets a tremendous number of page views.

It’s still very early in the social networking space and models are still evolving.  Yahoo Answers makes it clear that community participation, when leveraged correctly, can generate big results.
