Picture this: you receive a text notification and a call from the credit union where your car loan is housed saying your monthly loan payment hasn't been received, but you know you paid it. This is a common approach by scammers to sow seeds of doubt in an attempt to gain access to important account or payment information.
Another scenario: you receive an email from what looks like a vendor you work with in your job on a regular basis. However, it's actually an email from a scammer; you click through and compromise your company's network.
What makes both of these phishing attacks so dangerous is that the messages look so real—and you can thank artificial intelligence (AI) for that. The use of AI in fraud has surged over the past year: since the end of 2022, malicious phishing emails are up 1,265%, according to cybersecurity firm SlashNext, which counts roughly 31,000 phishing attacks sent daily.
"We're seeing cyber criminals use a combination of tools to attempt to gain access to personal data and accounts, including phone, text and email, in an effort to make consumers believe they should be following up on their demands," said Jay Bouche, vice president of channel and customer success for Lumifi, a cybersecurity solutions provider.
"These cybercriminals make everything look so much more real than it did previously—and that's possible because of the AI-generated text, emails and other communications," Bouche said. What's new is the multi-layered approach and complexity. Previously, scammers sent emails that prompted victims to click on an attachment or a link that was clearly not going where the email claimed it would.
"Now, criminals are masking things so well that in a lot of situations, people are falling prey to very real-looking phishing attempts," Bouche said.
A rise in scams targeting consumers
Paul Tucker, chief information security and privacy officer at BOK Financial®, said that impersonation attempts are on the rise, particularly those that rely on a tactic called social engineering: the use of deception to manipulate individuals into sharing confidential or personal information that can be used for fraudulent purposes.
Business email compromise, or BEC, accounted for approximately $2.7 billion in victim losses in 2022, with another $52 million lost to other types of phishing, according to the FBI's Internet Crime Report.
"A lot of this increase can be attributed to the rise in tools like ChatGPT that allow criminals to quickly write convincing messages to target individuals and businesses," Tucker said.
Consumers are also falling prey to:
- Scams involving images. "We're starting to see more threats around comprehensive neural networks, which is the technology that drives the realistic imaging that we're seeing from these platforms," Tucker said. Scammers are now better able to replicate documents such as driver's licenses, which is why it's so critical to protect personal information.
- Spear phishing. This kind of attack involves emails that are highly customized to the individual, making them appear to come from one of your providers or vendors.
- Credential theft. Password guessing has been on the rise, Tucker said. That's when scammers have enough information on you that they can use AI to crack your passwords. "We're going to continue to see these attacks become more relevant and used more often as scammers increasingly use AI," he said.
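The math behind password guessing helps explain Tucker's warning. The sketch below is purely illustrative (the 10 billion guesses-per-second rate is an assumed figure for automated cracking, not a measured benchmark), but it shows why length and character variety matter so much:

```python
def search_space(length: int, charset_size: int) -> int:
    """Total number of candidate passwords of a given length and alphabet."""
    return charset_size ** length

def years_to_exhaust(length: int, charset_size: int,
                     guesses_per_second: float = 1e10) -> float:
    """Worst-case time to try every candidate at an assumed guess rate."""
    seconds = search_space(length, charset_size) / guesses_per_second
    return seconds / (60 * 60 * 24 * 365)

# An 8-character lowercase-only password vs. a 14-character mixed one
# (94 = printable ASCII characters):
short_weak = years_to_exhaust(8, 26)
long_mixed = years_to_exhaust(14, 94)
```

Under these assumptions, the short lowercase password falls in seconds, while the longer mixed-character one would take billions of years to exhaust — which is why scammers lean on stolen personal details to narrow the guessing instead.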
"The bottom line is: If you're not expecting a specific message from someone, you need to take your time and double check that it's valid," Bouche said.
"We don't get 'prince in Nigeria' emails anymore; we're getting emails that look like things we use to support our daily lives." - Jay Bouche, vice president of channel and customer success for Lumifi
AI as a double-edged sword
"We look at AI as a hammer," Tucker said. "A hammer can build—but it can also destroy."
Tucker explained that as generative AI tools become more sophisticated, so too will the attacks. Generative AI is a term used to describe how users can quickly generate new content (text, images, etc.) based on a variety of inputs into programs such as ChatGPT. This can be extremely useful. However, cybercriminals can now mimic the style of a boss or supervisor to an employee, perhaps alluding to a specific event to make the request appear legitimate. The criminals might also use AI to generate calls that copy someone's voice in an effort to gain more personal information.
While generative AI technology like ChatGPT has been a game-changer for the "bad actors," Tucker said it has also been a tool for cybersecurity teams to identify new threat vectors and how these criminals are getting into systems to gain access to sensitive data. AI-driven technology is being used to look for anomalies across incoming emails at scale, helping cybersecurity teams identify threats and learn patterns in criminal behavior.
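The defensive tooling Tucker describes typically relies on machine-learned models trained over huge email volumes. As a minimal sketch of the underlying idea — with features, phrases and weights that are illustrative assumptions, not any vendor's actual rules — an anomaly score over a few common red flags might look like:

```python
import re

# Illustrative pressure phrases; real systems learn such signals at scale.
SUSPICIOUS_PHRASES = ("verify your account", "urgent", "payment overdue",
                      "click here immediately")

def phishing_score(sender: str, claimed_name: str, body: str) -> int:
    """Score an email on simple heuristics; higher means more suspicious."""
    score = 0
    # The display name claims a brand the sender's domain doesn't match.
    domain = sender.rsplit("@", 1)[-1].lower()
    if claimed_name.lower() not in domain:
        score += 2
    # Pressure language commonly seen in phishing.
    lowered = body.lower()
    score += sum(1 for phrase in SUSPICIOUS_PHRASES if phrase in lowered)
    # Raw IP-address links instead of named hosts.
    if re.search(r"https?://\d{1,3}(?:\.\d{1,3}){3}", body):
        score += 2
    return score
```

For example, a message from `alerts@secure-pay.example` claiming to be "Apple," urging you to "verify your account" at a bare IP-address link, would trip several of these rules at once, while a routine newsletter from a matching domain would score zero.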
"The more that people understand what's possible for AI to do and how it works, the more educated they will be about using the tool and embracing how it can be used to put protections in place." - Paul Tucker, chief information security and privacy officer at BOK Financial
How consumers and companies can protect themselves
When it comes to scams, there's a common misconception that older generations fall victim far more, but recent studies show that it's actually Gen Z (those born in the late 1990s to early 2010s) that is more likely to succumb to these scams. In fact, Gen Z Americans are three times more likely to get caught up in an online scam than Baby Boomers are, according to Deloitte.
One thing that spans generations is the need for tools and knowledge to protect yourself and your company from scams. For businesses, Bouche recommends investing in basic protections, such as an endpoint detection and response (EDR) tool and email protection software that can better detect possible instances of phishing and scams.
"But the biggest component of protection is the human element," Bouche said. According to Deloitte, approximately 91% of cyberattacks begin with a phishing email, a staggering number for businesses. One way to boost protections is a focus on strong training protocols that establish security awareness across the organization.
"Having good hygiene around what personal information you share on the internet is one way to take control over your own protection—and the vulnerability of your company," Tucker said.
Best practices
Here are seven ways you can protect yourself and your company from falling victim:
- Verify, verify, verify. If you get an email from an individual or what appears to be a business, do everything you can to verify that it's authentic. This includes going to the vendor's actual website, calling directly and confirming the message is legitimate.
- Be careful whom you let in. When making social connections, make sure you're only connecting with people you know, and limit the personal information you share publicly so criminals can't build a dossier on your life.
- Check the details. Bouche recommends hovering over links to see where clicking will take you to help verify whether an email is legitimate.
- Be discerning when receiving unexpected emails. Tucker suggests erring on the side of caution when opening and interacting with incoming emails. "Have a good understanding where it's coming from. If it says 'Apple Support,' but it's not a familiar email address, then that's a red flag," he said.
- Avoid downloading software or other tools when prompted by an unexpected email, pop-up or message.
- Keep track of your subscriptions. It's hard to get away from emails from brands that offer discounts, but Bouche said it's critical to keep track of your subscriptions so you know what's legitimate in your email inbox.
- Trust your instincts. "If something doesn't feel right, trust that instinct," Tucker suggests.
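Bouche's hover-and-check tip can be partially automated. This sketch (the helper and the domains shown are hypothetical examples, not a real email client's logic) compares the domain a link displays with the domain its href actually points to:

```python
from urllib.parse import urlparse

def hostname(url_or_text: str) -> str:
    """Extract a hostname, tolerating bare text like 'mybank.example'."""
    if "//" not in url_or_text:
        url_or_text = "https://" + url_or_text
    host = urlparse(url_or_text).hostname or ""
    return host.lower().removeprefix("www.")

def link_mismatch(display_text: str, href: str) -> bool:
    """True when the visible text names one domain but the href goes elsewhere."""
    shown, actual = hostname(display_text), hostname(href)
    # A legitimate href's host is the shown domain or a true subdomain of it.
    return not (actual == shown or actual.endswith("." + shown))
```

Note the classic trick this catches: `mybank.example.attacker.test` *starts* with the bank's name, but the suffix check reveals that the real destination is `attacker.test`.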
"This problem isn't going away; it boils down to users and consumers being as educated as humanly possible as to what those threats look like because they're going to look more and more real," Bouche said.