How many times have you chosen a restaurant or a place to stay based on reviews you read on the internet? If so, be careful: with the emergence of artificial intelligence, fake reviews are becoming increasingly common everywhere.
Everything 5 stars...
The emergence of generative artificial intelligence tools such as OpenAI's ChatGPT has made it possible to create detailed, seemingly authentic online reviews with little or no human effort. Despite their potential usefulness, these tools are being exploited to generate fake reviews on a large scale, creating challenges for merchants, consumers, and digital platforms.
Fake reviews are nothing new on platforms like Amazon and Yelp, but the introduction of AI tools has generated a wave of fraud on a scale never seen before. Often purchased through social media or offered by companies in exchange for incentives, these reviews can now be created faster and in greater detail thanks to AI.
Experts point out that this practice, which is illegal in the United States, for example, worsens during periods of heavier consumption, such as the Christmas shopping season, when consumers rely heavily on reviews to make purchasing decisions.
Where are more AI-generated reviews emerging?
According to the Transparency Company, which uses software to detect fake reviews, AI-generated reviews are found across several industries, including e-commerce, hotels, restaurants, and services such as home repairs and medical care.
In a recent report, the company analyzed 73 million reviews across three industries and found that 14% were likely fake, with around 2.3 million generated partially or entirely by AI.
The most vulnerable platforms include mobile apps and smart TVs, where AI is used to deceive consumers and promote fraudulent apps. Even companies that offer AI-assisted writing tools have been accused of making it easier to create fraudulent reviews.
The challenges of detection
Identifying AI-generated reviews is a growing challenge. According to Max Spero, CEO of Pangram Labs, a company that is also dedicated to detecting fake reviews, some of these reviews are so well written and structured that they reach the top of search results on platforms like Amazon.
While there are indicators, such as long texts, generic descriptions, and clichéd phrases like "it changed my life", even experts admit that distinguishing between real and fake reviews is becoming more difficult.
Companies like Amazon, Yelp, and Trustpilot are developing policies for handling AI-generated content in their fraud detection systems. Amazon and Trustpilot, for example, allow AI-assisted reviews as long as they reflect genuine experiences. Yelp, on the other hand, requires reviews to be written entirely by users, without resorting to AI.
The Coalition for Trusted Reviews, which includes platforms like Tripadvisor and Booking.com, is developing advanced AI detection systems to combat fraud and protect the integrity of online reviews.
Consumers can adopt some strategies to identify possible fake reviews: be wary of overly enthusiastic or overly negative texts, watch for repetitive use of specific terms, such as the full name of the product, and pay attention to clichés and generic descriptions.
Still, research indicates that even the most attentive consumers have difficulty distinguishing between AI-generated and human-generated text.
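To make the idea concrete, here is a minimal sketch of how the heuristics described above (clichéd phrases, repeated product names, unusually long text) could be expressed in code. This is purely illustrative and hypothetical; it is not the detection software used by the Transparency Company, Pangram Labs, or any platform, and the phrase list, thresholds, and function names are assumptions for the example.

import re

# Hypothetical list of clichéd phrases often cited as red flags.
CLICHES = ["changed my life", "game changer", "highly recommend to everyone"]

def suspicious_signals(review: str, product_name: str) -> list[str]:
    """Return a list of heuristic red flags found in a review (illustrative only)."""
    text = review.lower()
    signals = []

    # Clichéd, generic phrasing is one of the indicators experts mention.
    for phrase in CLICHES:
        if phrase in text:
            signals.append(f"cliché: '{phrase}'")

    # Repeating the product's full name many times reads like ad copy.
    mentions = len(re.findall(re.escape(product_name.lower()), text))
    if mentions >= 3:
        signals.append(f"product name repeated {mentions} times")

    # Unusually long, detailed text can also be a weak signal.
    if len(text.split()) > 300:
        signals.append("very long review")

    return signals

if __name__ == "__main__":
    sample = ("The UltraBlend 3000 changed my life! The UltraBlend 3000 is perfect, "
              "and the UltraBlend 3000 exceeded all my expectations.")
    print(suspicious_signals(sample, "UltraBlend 3000"))
    # -> ["cliché: 'changed my life'", 'product name repeated 3 times']

As the article notes, rules like these catch only the crudest cases; well-written AI text avoids obvious clichés, which is why even dedicated detection companies find the problem hard.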
Source: pplware.sapo.pt