Keeping Ratings Trustworthy
Ratings and reviews, such as those offered by Amazon, TripAdvisor, Zagat, and ePinions, are one of the most compelling forms of user-generated content. When done well, ratings help readers make better purchase decisions at a time when unlimited choice creates confusion.
But what happens when the ratings cannot be trusted?
This week, the Dixie Chicks released a new CD, Taking the Long Way. The Dixie Chicks, originally a popular country band, caught the attention of many of us three years ago, just before the start of the Iraq War, when they told a UK audience that “we’re ashamed that the President is from (our home state of) Texas”. At a time when questioning the administration was tantamount to actively supporting terrorists, they became immediate pariahs. Country music stations refused to play their music, and they received numerous death threats.
Three years later, the Dixie Chicks have released their new CD. The music style itself has changed: traditional country twang has been replaced with a more folk-pop sound, reminiscent of Rosanne Cash. That, in itself, might result in some unusual ratings, as their traditional listeners might be disappointed, while listeners who never gave them a chance before might find themselves fans. But what’s driving the Amazon ratings of this CD is less the music than the politics. A one-star review on Amazon starts with “Lets get up and show your back side and talk trash, just to keep your self in the spot light, the CD sucks and I wouldn't buy it, I don't have anything by them and turn them off every time they come on the radio.” Well, if you don’t listen, how can you review it?
The Dixie Chicks are just the latest example of this. Take a look at the ratings for books by Al Franken or Ann Coulter. I would guess that three-quarters of the reviewers have not read the books. For those authors, five stars means “I love your viewpoint, even if I've never read this book”, while one star means “I hate your politics”. Meanwhile, e-tailers have to contend with suppliers trying to game the system, giving their own products and services high ratings while bashing the competition.
What can be done to address this? First, we can look at whether the “star system” is the best way to show ratings. Sure, it’s easy for users to view the stars or sort by them, but statisticians have long known that the mean is a weak summary statistic on its own: it hides the shape of the distribution. A simple distribution of ratings might tell a better story.
For example, let's look at the distribution of stars for two books that each have an average rating of 3.5 stars. The green chart, on the left, has most of its reviews in the 3-4 star range. The blue chart, on the right, has very little in the mid-range, but has a lot of 1-star and 5-star ratings. By reading a few reviews, you’d get the context of why the love-hate relationship exists (is it the product, or something else?).
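To make the point concrete, here is a minimal sketch of that comparison in Python. The review counts are invented for illustration; the point is simply that two very different books can share the same 3.5-star average.

```python
# Two hypothetical rating distributions with the same 3.5-star average.
# Counts are made up for illustration; index 0 maps to 1 star, index 4 to 5 stars.

def mean_stars(counts):
    """Average star rating from a list of counts for 1..5 stars."""
    total = sum(counts)
    return sum((star + 1) * n for star, n in enumerate(counts)) / total

def histogram(counts):
    """Crude text histogram: one '#' per review at each star level."""
    for star, n in enumerate(counts, start=1):
        print(f"{star} star: {'#' * n}")

consensus = [2, 5, 16, 20, 7]   # clustered around 3-4 stars
polarized = [16, 2, 2, 1, 29]   # love-it-or-hate-it

for name, counts in [("consensus", consensus), ("polarized", polarized)]:
    print(f"{name}: mean = {mean_stars(counts):.1f} stars over {sum(counts)} reviews")
    histogram(counts)
```

Both lists average exactly 3.5 stars over 50 reviews, yet the histograms look nothing alike, which is exactly the information a single star average throws away.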
Another option is to force users to provide their real names, or at least validate their registration by email. This would reduce the number of fake or duplicate entries, although it might also reduce overall participation.
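As a rough sketch of what email validation might look like, the snippet below issues a one-time token per registration and counts each verified address as a single reviewer identity. The in-memory stores and function names are hypothetical, not any real e-commerce API.

```python
# Minimal sketch of email-validated reviewer registration (hypothetical design).
import hashlib
import secrets

pending = {}                 # token -> email awaiting confirmation
verified_reviewers = set()   # hashes of emails that have confirmed

def request_registration(email):
    """Issue a one-time token; in practice this would be emailed as a link."""
    token = secrets.token_urlsafe(16)
    pending[token] = email.strip().lower()
    return token

def confirm(token):
    """Mark the email verified; one verified email = one reviewer identity."""
    email = pending.pop(token, None)
    if email is None:
        return False
    # Store a hash rather than the raw address, so duplicates are detectable
    # without keeping the email itself alongside the reviews.
    verified_reviewers.add(hashlib.sha256(email.encode()).hexdigest())
    return True
```

Deduplicating on the verified address is what blocks one person from posting the same one-star review under a dozen throwaway accounts.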
What ideas do you have for making ratings and reviews more trustworthy? Please post your thoughts in the comments.