Assume, for instance, that a restaurant owner wants to plant an enthusiastic review of his establishment on Yelp, and asks a friend to compose and submit a recommendation of his fine dishes, his broad menu selection, his punctilious service. Yelp, it was claimed, can now -- using a remarkable scientific device -- spot and prevent the publication of such a contrived review, without interfering with honest expressions about the same restaurant.
"In 2005," said Mr. Stoppelman in the course of the interview, "we created a review filter. It's automated and algorithmic and screens out certain reviews ..."
"How does the filter determine whether a review is trustworthy?" asked the Times.
"I really can't be very specific," answered Stoppelman.
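For readers wondering what an "automated and algorithmic" filter might even mean in practice, the usual textbook approach is a word-frequency text classifier. The sketch below is entirely my own assumption for illustration -- the tiny hand-labeled training set, the Naive Bayes method, all of it is invented here; Yelp has disclosed nothing about how its filter actually works:

```python
from collections import Counter
import math

# Hypothetical hand-labeled examples (pure invention for illustration;
# a real system would need many thousands of labeled reviews).
TRAINING = [
    ("honest", "the soup was cold and the service slow"),
    ("honest", "decent food but the wait was too long"),
    ("fake",   "absolutely amazing best restaurant ever must visit"),
    ("fake",   "incredible perfect wonderful amazing experience highly recommend"),
]

def train(examples):
    """Count word frequencies per label (a bag-of-words model)."""
    counts = {"honest": Counter(), "fake": Counter()}
    for label, text in examples:
        counts[label].update(text.split())
    return counts

def classify(model, text):
    """Return the label with the higher log-likelihood, with add-one smoothing
    so unseen words do not zero out a score."""
    scores = {}
    for label, words in model.items():
        total = sum(words.values())
        vocab = len(words)  # number of distinct words seen for this label
        scores[label] = sum(
            math.log((words[w] + 1) / (total + vocab + 1)) for w in text.split()
        )
    return max(scores, key=scores.get)
```

Note how little this proves: the classifier merely learns that phony reviews, in this toy sample, overuse superlatives. A friend told to write "the soup was a bit salty but we enjoyed it" sails straight through -- which is rather the point of my skepticism below.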
A somewhat similar proposition has been voiced on numerous occasions by executives of TripAdvisor (www.tripadvisor.com), although I can't recall any of them claiming to have a "mathematical algorithm" that performs the screening task. Rather, as I recall, TripAdvisor hints that it maintains a staff of human beings who go over the thousands of comments it receives each day and eliminate the obviously phony ones. How a human being can detect a false opinion -- can distinguish an honest review from one composed by a friend of the hotel or restaurant in question who lives down the block -- is never explained.
But my question is: does anyone really believe this nonsense? Is there anyone out there gullible enough to agree that a "mathematical algorithm" can reliably distinguish between the honest and the phony recommendation? Has anyone from the user-generated sites ever supplied us with an iota of convincing evidence that this can be done?
I have said before, and repeat it now, that the user-generated sites, in my opinion and experience, contain within themselves the germ of their own undoing. All over the world, public relations agents boast that they send in self-manufactured reviews to the user-generated sites. Throughout the media, people discuss urging friends and associates to manufacture positive opinions for the user-generated sites.
I'd be interested to know whether any of our readers believe in the ability of "mathematical algorithms" to analyze the text of submissions to the user-generated sites. Do any of you accept the claim of these sites to employ human or mechanical filters to weed out improper recommendations or critiques?