The emergence of generative artificial intelligence tools that allow people to produce novel and detailed online reviews with almost no work has put merchants, service providers and consumers in uncharted territory, watchdog groups and researchers say.

Phony reviews have long plagued many popular consumer websites, such as Amazon and Yelp. They are typically traded on private social media groups between fake review brokers and businesses willing to pay. Sometimes, such reviews are initiated by businesses that offer customers incentives such as gift cards for positive feedback.

But AI-infused text generation tools, popularized by OpenAI's ChatGPT, enable fraudsters to produce reviews faster and in greater volume, according to tech industry experts.

The deceptive practice, which is illegal in the U.S., is carried out year-round but becomes a bigger problem for consumers during the holiday shopping season, when many people rely on reviews to help them purchase gifts.

Where are AI-generated reviews showing up?

Fake reviews are found across a wide range of industries, from e-commerce, lodging and restaurants to services such as home repairs, medical care and piano lessons.

The Transparency Company, a tech company and watchdog group that uses software to detect fake reviews, said it started to see AI-generated reviews show up in large numbers in mid-2023 and they have multiplied ever since.

For a report released this month, The Transparency Company analyzed 73 million reviews in three sectors: home, legal and medical services. Nearly 14% of the reviews were likely fake, and the company expressed a "high degree of confidence" that 2.3 million reviews were partly or entirely AI-generated.

"It's just a really, really good tool for these review scammers," said Maury Blackman, an investor and adviser to tech startups, who reviewed The Transparency Company's work and is set to lead the organization starting Jan. 1.
In August, software company DoubleVerify said it was observing a "significant increase" in mobile phone and smart TV apps with reviews crafted by generative AI. The reviews often were used to deceive customers into installing apps that could hijack devices or run ads constantly, the company said.

The following month, the Federal Trade Commission sued the company behind an AI writing tool and content generator called Rytr, accusing it of offering a service that could pollute the marketplace with fraudulent reviews.

The FTC, which this year banned the sale or purchase of fake reviews, said some of Rytr's subscribers used the tool to produce hundreds and perhaps thousands of reviews for garage door repair companies, sellers of "replica" designer handbags and other businesses.

It's likely on prominent online sites, too

Max Spero, CEO of AI detection company Pangram Labs, said the software his company uses has detected with almost certainty that some AI-generated appraisals posted on Amazon bubbled up to the top of review search results because they were so detailed and appeared to be well thought out.

But determining what is fake or not can be challenging. External parties can fall short because they don't have "access to data signals that indicate patterns of abuse," Amazon has said.

Pangram Labs has done detection for some prominent online sites, which Spero declined to name due to non-disclosure agreements. He said he evaluated Amazon and Yelp independently.

Many of the AI-generated comments on Yelp appeared to be posted by individuals who were trying to publish enough reviews to earn an "Elite" badge, which is intended to […]