E-commerce fraud is expected to surge over the next five years thanks to AI, and merchants are advised to respond with … AI.
Juniper Research, a Hampshire, UK-based consultancy, released a report on Monday predicting that the value of e-commerce fraud will rise from $44.3 billion in 2024 to $107 billion in 2029 – an increase of 141 percent.
The firm says AI tools have allowed hackers to stay ahead of security measures and to mount attacks of greater scale, scope, and frequency. It points to the ease with which fake accounts and fake identities can be created, automating the process of defrauding consumers. These attacks, it says, can overwhelm rules-based defense systems.
Thomas Wilson, the author of the report, said in a statement: “E-commerce sellers should seek to integrate a fraud prevention system that offers AI capabilities to quickly identify emerging trends. This will prove especially important in developed markets, where large retailers are at high risk of being targeted for fraud, such as attempts to use stolen credit cards.”
The potential for AI to help craft scams has become a matter of public concern. In May, California Attorney General Rob Bonta warned Californians about AI-powered hoaxes that rely on “deepfakes” to impersonate family members and government officials. And last month the FTC announced Operation AI Comply, a set of five legal actions against companies making exaggerated AI claims or selling AI technology that can be used to deceive.
Academics studying AI security have also sounded the alarm about AI’s deceptive capabilities. In a paper published last year, researchers from MIT, the Australian Catholic University, and the Center for AI Safety wrote: “Various AI systems have learned to deceive people. This capability poses risks, but those risks can be reduced by applying strict regulatory requirements to AI systems capable of deception and by developing technical tools to prevent AI deception.”
Political leaders, however, have resisted stricter regulatory policies out of concern about economic damage. Last month in California, for example, Governor Gavin Newsom vetoed SB 1047, considered one of the most ambitious attempts to regulate AI to date. AI companies had lobbied against the bill.
However, other laws aimed at addressing AI-enabled fraud, such as the No AI FRAUD Act, remain pending before US lawmakers. Europe’s Artificial Intelligence Act, a comprehensive legal framework for AI transparency and accountability, entered into force in August, and most of its provisions will become enforceable in August 2026.
Juniper’s contribution to these concerns includes encouraging retailers to fight fire with fire, so to speak, because AI-based fraud detection can help address first-party fraud – where consumers knowingly defraud retailers for financial gain – as well as other forms of fraud. “For example, AI can detect unusual spending patterns, unexpected changes in customer behavior, or multiple accounts linked to a single device,” the company explains in a white paper.
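As a rough sketch of the kind of signals such a system might weigh – not a description of Juniper's or any vendor's actual methodology – the example below runs a generic unsupervised anomaly detector over made-up transaction features: spend amount, time of day, and the number of accounts sharing a device. All column names and figures are hypothetical.

```python
# Illustrative sketch only: a generic anomaly detector over hypothetical transaction data.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Made-up transactions: d1 is a device shared by several accounts, a3 is an outsized late-night spend.
transactions = pd.DataFrame({
    "account_id":  ["a1", "a2", "a3", "a4", "a5"],
    "device_id":   ["d1", "d1", "d1", "d2", "d3"],
    "amount":      [25.0, 30.0, 2900.0, 40.0, 35.0],
    "hour_of_day": [14, 15, 3, 12, 18],
})

# One signal mentioned in the report: multiple accounts linked to a single device.
transactions["accounts_on_device"] = (
    transactions.groupby("device_id")["account_id"].transform("nunique")
)

features = transactions[["amount", "hour_of_day", "accounts_on_device"]]

# Unsupervised model that flags transactions unlike the bulk of traffic (-1 means anomaly).
model = IsolationForest(contamination=0.2, random_state=0)
transactions["flagged"] = model.fit_predict(features) == -1

print(transactions[["account_id", "amount", "accounts_on_device", "flagged"]])
```

In practice a score like this would typically feed a review queue rather than trigger an automatic block, which is where the false-positive problem described below comes in.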
There are challenges, though: more data is needed, and the infrastructure and talent required to run these systems come at a cost. AI fraud detection can also produce false positives. “Genuine customers who use unusual browsers and VPNs (Virtual Private Networks) are likely to be classified as fraudulent users, reducing customer satisfaction and costing the seller revenue,” Juniper Research explains.
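To make that trade-off concrete, here is a toy calculation with assumed fraud scores, not Juniper's data: raising the blocking threshold spares genuine customers who merely look unusual, such as VPN users, but lets more fraud slip through.

```python
# Toy illustration of the false-positive trade-off; all scores are invented.
import numpy as np

# Hypothetical fraud scores from some model: genuine customers (two on VPNs score high)
# and actual fraudsters.
genuine_scores = np.array([0.05, 0.10, 0.55, 0.60, 0.15])
fraud_scores = np.array([0.70, 0.85, 0.65, 0.90])

for threshold in (0.5, 0.7):
    blocked_genuine = int((genuine_scores >= threshold).sum())  # false positives
    missed_fraud = int((fraud_scores < threshold).sum())        # false negatives
    print(f"threshold={threshold}: genuine customers blocked={blocked_genuine}, fraud missed={missed_fraud}")
```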
In addition, the AI involved – machine learning – often works in ways that are not easily explained, which makes it difficult to adjust fraud detection algorithms when such errors are detected.
However, Juniper’s answer to AI is more AI, which doesn’t seem like it will end well. ®