Using advanced AI to spot & remove fake customer reviews: Amazon

Amazon has said it is using advanced artificial intelligence (AI) to spot inauthentic product reviews on its platform ahead of Black Friday and the ongoing holiday shopping season.

The company said late on Friday that, before a review is published online, it uses AI to analyse the review for known indicators that it is fake.

“The vast majority of reviews pass Amazon’s high bar for authenticity and get posted right away. However, if potential review abuse is detected, there are several paths the company takes,” said the company.

If Amazon is confident a review is fake, it moves quickly to block or remove it and takes further action when necessary, including revoking the customer’s review permissions, blocking bad-actor accounts, and even litigating against the parties involved.

“If a review is suspicious but additional evidence is needed, Amazon’s expert investigators who are specially trained to identify abusive behaviour look for other signals before taking action,” the e-commerce giant noted.

Last year, Amazon observed and proactively blocked more than 200 million suspected fake reviews in its stores worldwide.

“Fake reviews intentionally mislead customers by providing information that is not impartial, authentic, or intended for that product or service,” said Josh Meek, senior data science manager on Amazon’s Fraud Abuse and Prevention team.

“We work hard to responsibly monitor and enforce our policies to ensure reviews reflect the views of real customers, and protect honest sellers who rely on us to get it right,” Meek added.

Among other measures, Amazon uses the latest advancements in AI to stop hundreds of millions of suspected fake online reviews, manipulated ratings, fake customer accounts, and other abuses before customers see them.

Machine learning models analyse a multitude of proprietary data points, including whether the seller has invested in ads (which may be driving additional reviews), customer-submitted reports of abuse, risky behavioural patterns, review history, and more.
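Amazon has not published the details of these models, but the general approach, scoring each incoming review against behavioural and account-level signals, can be illustrated with a minimal, hypothetical sketch. The feature names, training data, and classifier below are assumptions chosen for illustration only; they are not Amazon's proprietary signals or system.

```python
# Hypothetical sketch: score an incoming review against behavioural signals
# with a gradient-boosted classifier. Feature names and data are invented
# for illustration; they are not Amazon's proprietary signals or model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

FEATURES = [
    "seller_has_active_ads",   # ads may legitimately drive extra reviews
    "abuse_reports_30d",       # customer-submitted reports of abuse
    "reviews_posted_24h",      # burst posting is a risky behavioural pattern
    "account_age_days",        # very new accounts carry more risk
    "verified_purchase",       # whether the reviewer actually bought the item
]

# Tiny synthetic training set: one row per review, label 1 = suspected fake.
X_train = np.array([
    [1, 0,  1,  900, 1],
    [0, 0,  2,  450, 1],
    [1, 1,  3, 1200, 1],
    [0, 4, 12,    3, 0],
    [0, 2, 30,    1, 0],
    [0, 5, 25,    2, 0],
])
y_train = np.array([0, 0, 0, 1, 1, 1])

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

def suspicion_score(signals: dict) -> float:
    """Return a probability that the review is abusive (illustrative only)."""
    row = np.array([[signals[name] for name in FEATURES]])
    return float(model.predict_proba(row)[0, 1])

incoming = {
    "seller_has_active_ads": 0,
    "abuse_reports_30d": 3,
    "reviews_posted_24h": 20,
    "account_age_days": 2,
    "verified_purchase": 0,
}
print(f"suspicion score: {suspicion_score(incoming):.2f}")
```

In practice, a score like this would feed a decision pipeline of the kind the article describes: high-confidence cases are blocked or removed automatically, while borderline cases are routed to human investigators for additional evidence.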

Large language models are used alongside natural language processing techniques to analyse anomalies in this data that might indicate that a review is fake or incentivised with a gift card, free product, or some other form of reimbursement.
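The article does not say how these models work internally, but one simple text-level signal, language hinting that a review was rewarded with a gift card or free product, can be sketched as follows. The phrase list and matching logic are purely illustrative assumptions, not Amazon's detection logic.

```python
# Hypothetical sketch: flag review text that hints it was incentivised with a
# gift card, free product, or other reimbursement. The phrase list is an
# illustrative assumption, not Amazon's actual detection logic.
import re

INCENTIVE_PATTERNS = [
    r"\bin exchange for\b",
    r"\breceived (this|the) (product|item) (for free|at a discount)\b",
    r"\bgift card\b",
    r"\bfree (product|sample) for (my|an honest) review\b",
    r"\brefund(ed)? after (my|the) review\b",
]

def incentive_signals(review_text: str) -> list[str]:
    """Return any incentive-related phrases found in the review text."""
    text = review_text.lower()
    return [pattern for pattern in INCENTIVE_PATTERNS if re.search(pattern, text)]

sample = "Great blender! The seller promised a gift card in exchange for a 5-star review."
hits = incentive_signals(sample)
print("flagged" if hits else "clean", hits)
```

A keyword match of this kind would only be one weak signal among many; as the article notes, it is anomalies across behavioural data and text together, rather than any single phrase, that lead to a review being blocked or escalated to investigators.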

“The difference between an authentic and fake review is not always clear for someone outside of Amazon to spot,” Meek said.
