Online reviews, also known as ratings and reviews, are just one form of user-generated content, but they seem to be the most abused. Users generally trust other users more than they trust brands, yet fake reviews continue to plague these communities. Spam, fake reviews, and blatant astroturfing pollute this very powerful form of content. My gut reaction to fake reviews and astroturfing is that the practice is so blatantly wrong on its face, why would anyone risk regulatory action? The reason is that it seems to work.
If your campaign looks real to other users, they will take it at face value. The average user is only now building up a filter against these kinds of abuse, and plenty of users still get taken in. I am amazed that the Nigerian prince scam still fools people. It is so well known that it has even spawned its own parody video:
If we all know about this, how are people still taken in? As children we all start out naturally kind, caring, and trusting. Some of us lose that, but at our core I think we want to trust. I don't have a psychological study to back this up, but I think that is why scams like this work. I also think it is why we tend to trust reviews that seem authentic. As the purveyors of these fake reviews (whether blatant fakes or bribed real consumers) get more sophisticated in their methods, it will only get worse. If it looks and sounds like another person, we will trust it.
What is a user to do?
This is where I draw a blank. I don't think there is much a general user can do. If you are on Yelp, you might consult Wired's useful flowchart. You could do what some of my friends do: look for the spelling errors and trust those reviews. Or maybe borrow from figure skating, drop the highest and lowest scores, and look at the middle.
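The figure-skating approach amounts to a trimmed mean. A minimal sketch, assuming ratings arrive as a plain list of 1–5 star scores (the list and function name here are hypothetical, not from any review platform's API):

```python
def trimmed_mean(ratings):
    """Drop the single highest and lowest rating, then average the rest."""
    if len(ratings) <= 2:
        # Too few ratings to trim anything; fall back to a plain average.
        return sum(ratings) / len(ratings)
    trimmed = sorted(ratings)[1:-1]
    return sum(trimmed) / len(trimmed)

# Hypothetical star ratings for a product, including one suspicious low score
ratings = [5, 5, 5, 1, 4, 3, 4]
print(trimmed_mean(ratings))  # averages the middle five scores
```

Dropping only one score from each end is crude, but it blunts the effect of a single planted rave or a single revenge one-star.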
You could also file complaints with your local Better Business Bureau, but what can you prove? Regulators are trying to crack down on this, though. In New York, the Attorney General has taken action, with a whopping $350,000 in fines spread across 19 companies. By my math, that is about $18,000 per company. Hardly more than a slap on the wrist.
The responsibility is ours – the brands and the platforms
The reality is that platform developers need to work on this problem. Why do I put this on the platforms and not on the companies that are engaging in this kind of activity? Because there will always be a company trying to get an edge. Whether it is the ubiquitous 0% financing, or the new car for $4,999 (which is no longer available when you get there), there will always be a way to entice a customer into buying. Some of them are accepted and are a play on psychology, some are winked at, and some are just downright illegal. In the US, states may be stepping in, but we have always had the Federal Trade Commission (FTC).
The FTC’s own endorsement guidelines prohibit both fake online reviews and astroturfing (generally yes, but what if your astroturfing had a disclosure on it?). It is considered deceptive advertising, because it is. If you have engaged in this kind of activity do you think it is fair and not deceptive? Really?
Platforms face huge challenges in tackling this problem. Amazon has been working on the issue for years. They have tried programmatic approaches, but what they have not tried is any social engineering. I just went to the site and began the review process for something I recently bought. I saw nothing telling me that if I have any connection to the author or publisher, I should disclose it. Not even a simple checkbox to say that I do. There is no notice to the writer that they are required to disclose. Sometimes the simplest solutions work best.
Researchers have been trying to write software that reviews the reviews for fakes; one such program was presented at the World Wide Web 2012 conference. While aimed at spam, it wasn't tuned for fake reviews written by humans, which is what the New York AG was going after.
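One simple signal such software can look for is near-duplicate review text, since copy-pasted or lightly edited copy is a common fingerprint of a paid campaign. A toy sketch of that one heuristic, using Python's standard-library string matcher (the review texts and the 0.9 threshold are illustrative assumptions, not any vendor's actual detector):

```python
from difflib import SequenceMatcher

def flag_near_duplicates(reviews, threshold=0.9):
    """Return index pairs of reviews whose text is suspiciously similar.

    This is only a toy heuristic for one fake-review signal, not a
    production detection system.
    """
    flagged = []
    for i in range(len(reviews)):
        for j in range(i + 1, len(reviews)):
            # ratio() gives a 0.0-1.0 similarity score for the two strings
            ratio = SequenceMatcher(None, reviews[i], reviews[j]).ratio()
            if ratio >= threshold:
                flagged.append((i, j))
    return flagged

# Hypothetical review texts: the first two read like the same planted blurb
reviews = [
    "Great product, changed my life, five stars!",
    "Great product, changed my life, 5 stars!",
    "Arrived late and broke after a week.",
]
print(flag_near_duplicates(reviews))
```

Real systems layer many such signals (reviewer history, timing bursts, IP overlap), which is exactly why human-written fakes from bribed real consumers are so much harder to catch.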
If you have a ratings and reviews section on your products or services, consider some simple process changes to address fake reviews, and monitor the other review sites. That monitoring should be handled by teams not connected to the product-marketing people who are measured on product lift. Simple checks and balances, folks.
It’s getting worse
Gartner predicts that by 2014 as many as 15% of online reviews will be fake or paid for. What I find troubling is that, despite clear guidance, people on the brand side are still doing it. Who is teaching your marketing people the rules? Why are they not learning them?
Do the right thing, folks
I understand you are trying to get an edge. I understand that you want your users to talk about your stuff. Make it easy for them to do so, but be careful of the rules. The ultra-conservative may argue that any system that treats one customer preferentially over another is enough to create the material connection the FTC is worried about. I don't buy that argument. A super-fan community that encourages folks to spread the good word without expectation of compensation is a good start. If you start giving away product or other tangible things, you start walking into problems. The nice thing about super-fan communities is that you can tell members what their endorsement obligations are, and you know who they are, so you can monitor. With those two things in place, I think you have a good defense if the FTC or another regulator comes a-knockin'.