
Visa's AI and Machine Learning Take On Fraud, Preventing $40B in Losses
Visa used AI and machine learning to prevent $40 billion in fraudulent activity in a single year, countering threats that include AI-generated card numbers and voice deepfakes.

Visa is at the forefront of using advanced technology, in particular artificial intelligence (AI) and machine learning, to combat payment fraud. James Mirfin, Visa's global head of risk and identity solutions, described the company's approach in a recent interview with CNBC.
Significant Increase in Fraud Prevention
Visa's adoption of AI and machine learning has delivered measurable results in the battle against fraud. The company reports that it prevented $40 billion in fraudulent activity between October 2022 and September 2023, nearly double the amount it blocked the year before.
Emerging Fraudulent Tactics
As fraud evolves, perpetrators are deploying increasingly sophisticated methods to exploit vulnerabilities in the payment system. Mirfin described some of the tactics scammers now use. One involves using AI to generate primary account numbers (PANs) and then testing them repeatedly until one works.
The PAN, typically 16 digits but sometimes up to 19, is the number that identifies a payment card. Criminals deploy AI-driven bots that fire off repeated online transaction attempts, cycling through combinations of PANs, card verification values (CVVs), and expiration dates. Visa calls this an enumeration attack, and it causes $1.1 billion in fraud losses annually, a notable share of global fraud losses.
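Visa has not detailed how these bots work, but the Luhn checksum that card numbers carry helps explain why machine-generated PANs can look plausible enough to test. The Python sketch below is purely illustrative; it shows the structural check a generated number must pass before an attacker even attempts a live transaction.

```python
def luhn_valid(pan: str) -> bool:
    """Check a card number against the Luhn checksum.

    Every valid PAN carries a Luhn check digit, so bot-generated
    numbers that pass this test look superficially plausible;
    enumeration attacks then use live transaction attempts to find
    which ones belong to real, active cards.
    """
    digits = [int(ch) for ch in pan if ch.isdigit()]
    if not 13 <= len(digits) <= 19:  # PAN lengths vary; 16 is typical
        return False
    total = 0
    # Walk digits right to left, doubling every second digit and
    # subtracting 9 whenever the doubled value exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0


# The well-known test number 4111 1111 1111 1111 passes the check.
assert luhn_valid("4111111111111111")
assert not luhn_valid("4111111111111112")
```

Passing the Luhn check only means a number is well-formed; the $1.1 billion in annual losses comes from bots probing, at scale, which well-formed numbers map to live cards.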
AI-Powered Risk Analysis and Prevention
A core element of Visa's anti-fraud strategy is the use of AI and machine learning for real-time risk analysis. Mirfin said that every transaction is scored against more than 500 different attributes. This lets Visa quickly identify and block enumeration attacks, particularly in card-not-present transactions.
Visa's models also adapt quickly to new fraud patterns. When a novel scheme appears, the system detects it, flags the affected transactions, and assigns them high risk scores, so that Visa's clients can make an informed decision about whether to approve them.
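Visa has not published its model internals, but the general pattern the article describes (score each transaction against many attributes, then surface anything above a threshold for the issuer to decide) can be sketched in a few lines. Everything in the toy Python example below, from the feature names to the weights and threshold, is hypothetical:

```python
from dataclasses import dataclass


@dataclass
class Transaction:
    amount_usd: float
    card_present: bool       # card-not-present carries more risk
    attempts_last_hour: int  # bursts of attempts suggest enumeration
    merchant_risk: float     # 0.0 (trusted) .. 1.0 (high risk)


def risk_score(tx: Transaction) -> float:
    """Toy weighted score in [0, 1]; a real system weighs 500+ attributes."""
    score = 0.0
    score += 0.3 * min(tx.amount_usd / 5_000, 1.0)
    score += 0.2 * (not tx.card_present)
    score += 0.3 * min(tx.attempts_last_hour / 50, 1.0)
    score += 0.2 * tx.merchant_risk
    return score


def route(tx: Transaction, threshold: float = 0.8) -> str:
    """Flag high-risk transactions so the issuing client can decide."""
    return "flag: high risk" if risk_score(tx) >= threshold else "approve"
```

A production system would learn these weights from data rather than hand-pick them, and would score far more than four signals.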
Targeting Token Provisioning Fraud
Visa's AI capabilities are not confined to transaction scoring. The company also uses AI to assess the fraud risk of token provisioning requests, targeting fraudsters who rely on social engineering and other deceptions to provision tokens for stolen card credentials and transact with them.
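The article does not say which signals feed that assessment, but one commonly used class of signal is request velocity, such as a burst of provisioning requests tied to a single identity or device. The following toy sketch uses an entirely hypothetical window and limit:

```python
import time
from collections import defaultdict, deque


class ProvisioningVelocityCheck:
    """Flag identities that request too many tokens in a short window.

    The window and limit here are hypothetical; a real system would
    combine many signals rather than rely on request velocity alone.
    """

    def __init__(self, window_seconds: int = 3600, max_requests: int = 3):
        self.window_seconds = window_seconds
        self.max_requests = max_requests
        self._history: dict[str, deque] = defaultdict(deque)

    def allow(self, identity: str, now: float | None = None) -> bool:
        """Record a provisioning request; return False if it should be flagged."""
        now = time.time() if now is None else now
        q = self._history[identity]
        while q and now - q[0] > self.window_seconds:  # drop stale entries
            q.popleft()
        q.append(now)
        return len(q) <= self.max_requests
```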
Strategic Technological Investments
Recognizing that cybercriminals are adopting ever more advanced technology, Visa has invested heavily in its defenses: $10 billion over the past five years on technology to reduce fraud and strengthen the security of its network.
AI in the Hands of Fraudsters
While Visa continues to lean on AI for fraud prevention, Mirfin cautioned that the same tools are proliferating among cybercriminals. Fraudsters increasingly use generative AI, voice cloning, and deepfakes in schemes including romance scams, investment scams, and "pig butchering," in which victims are lured into investing in fictitious cryptocurrency ventures.
Amid the Digital Arms Race
Fraud has become a digital arms race, with cybercriminals continuously working to outmaneuver existing security measures. Generative AI tools such as ChatGPT let fraudsters craft far more persuasive phishing messages, raising the risk to unsuspecting individuals and institutions.
Generative AI can also replicate voices with alarming accuracy. Criminals reportedly need as little as three seconds of audio to fabricate a convincing voice clone, which they then use to talk victims into transferring money. As the technology improves, this form of fraud is only becoming more effective.
Industry Warnings and Forecasts
The criminal use of generative AI has drawn warnings from across the industry about potentially unprecedented financial losses. Paul Fabara, chief risk and client services officer at Visa, stressed that today's scams are far more persuasive thanks to generative AI and related technologies.
Deloitte's Center for Financial Services shares that concern. Its report projects that bad actors will adopt increasingly sophisticated and inexpensive generative AI techniques, driving a surge in fraud losses: in the United States, losses could reach $40 billion by 2027, up from $12.3 billion in 2023.
Real-World Implications
The damage deepfakes and voice cloning can do is already visible in real incidents. In one case, criminals used a deepfake to impersonate a company executive and trick an employee into transferring $25 million. Similar deepfake-enabled transfers have been documented in a variety of settings, forcing organizations and individuals to exercise heightened vigilance.
China has also grappled with deepfake-enabled fraud: in one case in Shanxi province, an employee was deceived into transferring a substantial sum after a deepfake was used during a video call. These incidents underscore the need to strengthen safeguards and authentication protocols against evolving technological deception.