
Businesses brace for fraudulent payments
How to identify and combat AI-enabled fraud
KEY POINTS
- AI-enhanced fraud: AI tools are making fraudulent activities more sophisticated and harder to detect, posing significant risks to businesses.
- Proactive measures: Implementing AI-powered fraud detection systems, multifactor authentication and continuous transaction monitoring are crucial.
- Employee training: Regular training on social engineering tactics and verification protocols is essential to prevent fraud.
When was the last time you wrote a physical check? Once the norm, payments by cash and check have largely been replaced by electronic payments, which are convenient but also susceptible to AI-enhanced fraud.
“In the early 2000s, the internet and the influence of mobile technology started shifting a lot of payments online, making it faster to pay bills and make purchases,” said Tammy Foy, director of treasury sales at BOK Financial®. “Now, people expect these payments to be fast, easy, and secure, which also places a bigger emphasis on protecting people—and companies—from fraud and scams.”
Alongside this transition to predominantly digital payment systems, the threat of fraud has risen correspondingly, especially as artificial intelligence (AI) has been integrated into so many facets of business.
AI scams targeting consumers and businesses alike
One report showed that generative AI scams rose 456% year over year, on top of a 78% increase from 2022 to 2023. The Deloitte Center for Financial Services estimates that generative AI will enable $40 billion in fraud losses by 2027, up from $12.3 billion in 2023 (a 32% compound annual growth rate).
“We’re seeing a significant rise in intelligent—and believable—AI-generated scams targeted at not only individuals, but also businesses and financial institutions to try and gain access to critical financial information,” said Paul Tucker, chief information security and privacy officer at BOK Financial. “While businesses of all sizes are being targeted, small businesses are especially vulnerable due to their size and the fewer protections they have compared to larger-scale organizations.”
These threats and the challenges they present to businesses were a consistent theme throughout the recent TEXPO Treasury and Financial Conference, where experts like Tucker discussed how businesses can protect themselves from the changing threat landscape.
In his session, “AI and the Threat to Payment Security,” Tucker outlined some of the emerging threats he’s seeing in his role, including the use of deep fakes, social engineering and more to gain access to critical financial information.
“One of the biggest challenges in our industry is that, while we’re implementing AI into our systems to better protect them, scammers are using the same tools to defraud businesses, making it critical to understand what we’re up against,” Tucker said.

The AI scam threat to businesses
According to Tucker, AI tools are making fraudulent activities more sophisticated and harder to detect.
“Criminals are using machine learning to analyze transaction patterns and identify vulnerabilities in payment systems, allowing them to better mimic legitimate user behavior,” he said. In these scenarios, AI-powered deep learning models help fraudsters create more convincing fake identities by generating realistic personal information, fake documents and even deep fake images to bypass identity verification.
A deep fake is AI-generated media that uses deep learning algorithms to replace a person’s likeness with someone else’s, creating realistic but fake videos, images or audio recordings. “The technology can make it appear that someone said or did something that they never actually did, often making the fake content nearly indistinguishable from authentic media,” Tucker said.
Voice cloning and deep fake technologies enable social engineering attacks where fraudsters impersonate executives or customers to authorize fraudulent transactions. AI also helps criminals optimize the timing and amounts of fraudulent transactions to avoid triggering security alerts due to unusual patterns.
On the larger business level, AI-powered bots can execute account takeovers at scale by testing stolen credentials across multiple platforms simultaneously, while also automating the creation of numerous fake accounts. “In these instances, businesses are tasked with increasing their due diligence around detection and addressing issues as they arise, which puts a lot of burden on internal information security teams,” Tucker said.
Employee training critical
Tucker recommends that organizations invest time and resources in training staff on how to detect and address potential fraud before falling victim. These training programs should focus on social engineering tactics, including deep fake awareness and verification protocols for high-value transactions.
“The more your staff knows, the more they can help the organization identify potential threats and address them,” Tucker said. This might include requiring additional verification steps for high-value transactions—especially wire transfers or account changes—from the accounts payable staff or financial professionals processing them.
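The verification step Tucker describes can be expressed as a simple policy rule: hold any payment for out-of-band confirmation when it is a large wire or goes to a recently changed payee. The sketch below is a hypothetical illustration of that idea, not any institution's actual system; the threshold, field names and `PaymentRequest` type are all assumptions for the example.

```python
from dataclasses import dataclass

# Hypothetical review threshold -- a real value would come from company policy.
WIRE_REVIEW_THRESHOLD = 10_000.00

@dataclass
class PaymentRequest:
    amount: float
    payment_type: str        # e.g. "wire", "ach", "card"
    payee_is_new: bool       # payee account added or changed recently
    callback_verified: bool  # requester confirmed via a known phone number

def needs_secondary_approval(req: PaymentRequest) -> bool:
    """Flag payments that should be held for out-of-band verification."""
    # Large wires are held unless the request was confirmed by callback.
    if req.payment_type == "wire" and req.amount >= WIRE_REVIEW_THRESHOLD:
        return not req.callback_verified
    # New or recently changed payees are a common account-change fraud signal.
    if req.payee_is_new:
        return not req.callback_verified
    return False
```

Under these assumptions, a $50,000 wire with no callback confirmation would be held for review, while a routine small payment to an established payee passes through. The point of the callback field is that confirmation happens over a channel the attacker does not control, which is exactly what deep fake impersonation tries to defeat.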
Foy added that accounts payable teams, and anyone else handling payment systems, should be required to take ongoing training to stay up to date on emerging threats.
“Investing in individuals from both the corporate and financial services sectors to attend conferences and training focused on identifying and addressing fraud in payment systems is a vital part of the solution,” Foy said. “This investment empowers them to better protect and educate organizations about the real-world threats posed by increasingly sophisticated, AI-enabled fraud tactics.”
Protecting your business
Tucker and Foy recommend businesses follow six steps to protect themselves:
- Implement AI-powered fraud detection systems that use machine learning to identify unusual patterns and adapt to new fraud techniques in real time, going beyond traditional rules-based systems.
- Deploy multifactor authentication (MFA), including biometric verification, device fingerprinting and behavioral authentication, to make account takeovers significantly more difficult.
- Monitor transaction patterns continuously with AI systems that can detect subtle anomalies in user behavior, spending patterns and transaction timing that might indicate fraud.
- Create layered security architecture with encryption, tokenization, secure APIs and network segmentation to limit the impact of any single point of failure.
- Implement velocity controls and spending limits that automatically flag or block transaction volumes or amounts that deviate from established patterns.
- Conduct regular security audits and penetration testing specifically designed to test defenses against AI-enabled attacks and social engineering attempts.
“At the end of the day, combining the education and training of your workforce and investing in systems that can help identify and thwart potential fraud is your best bet in protecting your business from advanced techniques aimed at compromising data,” Tucker said.