How Cybercriminals Are Using AI Tools In Recent Phishing Attacks

Written by Yuval Zantkeren | Mar 13, 2023 1:03:46 PM

With just a few prompts, AI solutions enable cybercriminals to improve their phishing attacks and generate malicious code. Here's how to protect your brand from this growing threat.

AI Innovation: Progress Comes With Risk

Organizations are thrilled by new AI tools, which make coding, writing, and other functions as simple as a few clicks. However, these solutions aren’t exclusively beneficial for businesses. Cybercriminals are taking advantage of this wave of innovation and leveraging AI tools for nefarious activity.

AI chatbots and text editors give cybercriminals access to a huge base of industry ‘know-how’, as these solutions provide both coding skills and language accessibility. This means phishing is far easier for cybercriminals targeting countries where the local language isn’t their mother tongue.

Before AI solutions, a cybercriminal needed competent writing skills to craft effective phishing attacks, since a convincing lure had to read as “legitimate”. But thanks to AI tools, which are more accessible and available than ever before, cybercriminals can refine their lingo and wording to create more authentic texts.

AI And Phishing: What’s The Connection?

There is a dramatic uptick in phishing attacks right now. In just the first six months of 2022, phishing attacks rose by 61% compared to 2021, with messages generated by natural language processing (NLP) solutions being the main culprit behind this large increase.

Before the emergence of these AI tools, anti-phishing best practices typically included checking for grammatical errors, incorrect spelling, strange punctuation, unusual syntax, and other written clues that a text was “weird”. But these mistakes rarely happen when cybercriminals use AI tools, so the classic warning signs of phishing are no longer reliable and these attacks are harder to identify.
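To make this concrete, here is a minimal sketch of the kind of heuristic check that classic anti-phishing training and tooling rely on: flagging common misspellings and urgency phrases. The function name and word lists below are purely illustrative, not a real product’s ruleset.

    # Illustrative sketch: the classic "written clues" heuristic.
    # The word lists are hypothetical examples, not a production ruleset.
    COMMON_MISSPELLINGS = {"recieve", "acount", "verfy", "informations"}
    URGENCY_PHRASES = ["act now", "account suspended", "verify immediately"]

    def looks_like_classic_phishing(text: str) -> bool:
        lowered = text.lower()
        words = set(lowered.replace(",", " ").replace(".", " ").split())
        has_typos = bool(words & COMMON_MISSPELLINGS)
        has_urgency = any(p in lowered for p in URGENCY_PHRASES)
        return has_typos or has_urgency

    # A clumsy, human-written lure trips the check...
    print(looks_like_classic_phishing(
        "Your acount was suspended. Act now to recieve access."))  # True
    # ...while a fluent, AI-written one sails through.
    print(looks_like_classic_phishing(
        "We noticed unusual activity on your account. Please review "
        "your recent sign-ins at your earliest convenience."))  # False

A ruleset like this quietly stops firing once attackers stop making the mistakes it looks for, which is exactly the gap that AI text generation opens.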

Recent phishing attacks have demonstrated why natural language processing in AI chatbots poses a major security concern. These technologies can expertly mimic a human voice, making it difficult for a target to understand that they’re speaking with a bot, not an actual person.

This also means that cybercriminals can utilize existing communications from your brand and replicate your voice, brand identity, and tone with ease. They can also create fraudulent communications in multiple languages, and distribute them for wider and faster traction. Essentially, your brand could be a victim of trademark infringement with just a few minutes of work.

Many businesses use anti-phishing solutions that recognize when emails are likely malicious, based on common patterns in how these messages are written. But when cybercriminals use AI text editors, “attackers could potentially have unique content for each email generated for them with the help of AI, making these attacks harder to detect,” explains Ketaki Borade, Senior Analyst at Omdia.

“Similarly, writing phishing emails may become easier, without any of the typos or unique formats that today are often critical to differentiate these attacks from legitimate emails. The scary part is it’s possible to add as many variations to the prompt, such as ‘making the email look urgent,’ ‘email with a high likelihood of recipients clicking on the link,’ ‘social engineering email requesting wire transfer,’ and so on.”

Malware And AI: Here’s What You Need To Know

We’ve seen cybercriminals use AI tools for more than persuasive writing. Many available AI solutions can create code from scratch, with just a few prompts. They can also reverse engineer code and analyze existing malware for the purpose of recreating and deploying it. If that isn’t disturbing enough, AI tools can also add anti-analysis elements to avoid detection by security programs.

The use of AI-generated malware in combination with AI-created writing means that cybercriminals can launch sophisticated phishing campaigns very quickly. Cybercriminals can use these AI tools to create code that closely replicates your website or apps. These fraudulent sites, which may appear indistinguishable from your brand’s legitimate site, may be hosted at domain squatting URLs.

These URLs often feature a common misspelling or a single letter that differs from your authentic domain, furthering the illusion that a customer is interacting with your company. Coupled with AI-generated content that matches your brand’s voice, it can be next to impossible for a victim to tell that they’re visiting a fraudulent version of your site.
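This is also why typosquats are detectable programmatically. Below is a minimal sketch of the core check many monitoring tools perform: measuring how many character edits separate a suspect domain from the legitimate one. The brand domain and candidate list here are hypothetical stand-ins.

    # Illustrative sketch: flag candidate domains within a small edit
    # distance of the real one. BRAND and the candidates are hypothetical.
    def edit_distance(a: str, b: str) -> int:
        # Classic dynamic-programming Levenshtein distance.
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(
                    prev[j] + 1,               # delete ca
                    curr[j - 1] + 1,           # insert cb
                    prev[j - 1] + (ca != cb),  # substitute ca -> cb
                ))
            prev = curr
        return prev[-1]

    BRAND = "brandshield.com"
    for domain in ["brandshie1d.com", "brandsheild.com", "example.org"]:
        distance = edit_distance(domain, BRAND)
        if 0 < distance <= 2:  # one or two edits away: likely a squat
            print(f"Possible typosquat: {domain} (distance {distance})")

A distance of one or two edits, as in the swapped or substituted letters above, is precisely the “one letter that’s different” pattern these fraudulent URLs exploit.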

Other Risks Associated With Online AI Tools

The intense hype and buzz surrounding these AI apps and solutions means that even less tech-savvy businesses have become aware of their promise. People who may not be experienced in managing technology, or in recognizing red flags, may jump into using AI tools without understanding the risks involved in doing so.

Because these highly sought-after tools are often unavailable due to high demand, organizations may end up searching for alternatives. This can lead them to fraudulent sites or apps, created by cybercriminals to trick users into believing they’re accessing a legitimate solution. Unsuspecting users may then hand critical details about their businesses, or personally identifying information, directly to these bad actors.

How Brands Can Combat The Risks Of AI Tools

Following basic security and anti-phishing best practices is critical for brands looking to keep their organizations secure in the face of the growing threat posed by these solutions.

Continuous monitoring of your brand online to identify and take down cybercriminals mirroring your brand is also extremely important. Without full-time, round-the-clock detection of fraudulent websites, social media profiles, and other assets purporting to represent your business, your brand is far more likely to suffer serious reputational damage and even financial losses. Swift takedowns make the difference between minimal damage and a catastrophe for your brand.

It’s key to train your staff, from junior employees to C-level executives, to always remain vigilant when it comes to suspicious emails, links, or websites. Should they receive strange messages or emails containing questionable content, they must immediately report them to your Security team.

Remind your teams never to click on a link in an email from an untrusted sender. You may also consider giving everyone a refresher on how to recognize whether an email really comes from within your organization or from an outsider masquerading as an employee.
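As one concrete example of what that refresher can cover: tooling (or a careful reader of raw headers) can check whether the From: domain actually matches your organization and whether the mail gateway recorded an authentication pass. Here is a minimal sketch using Python’s standard email module; the domain example.com and the sample message are hypothetical stand-ins, and real deployments rely on full SPF/DKIM/DMARC evaluation rather than this simplified header check.

    # Illustrative sketch: sanity-check a raw email's headers.
    # ORG_DOMAIN and the sample message are hypothetical stand-ins.
    from email import message_from_string
    from email.utils import parseaddr

    ORG_DOMAIN = "example.com"

    def sender_warnings(raw_message: str) -> list[str]:
        msg = message_from_string(raw_message)
        warnings = []
        _, addr = parseaddr(msg.get("From", ""))
        domain = addr.rsplit("@", 1)[-1].lower() if "@" in addr else ""
        if domain != ORG_DOMAIN:
            warnings.append(f"From: domain {domain!r} is not {ORG_DOMAIN!r}")
        auth = msg.get("Authentication-Results", "")
        if "dmarc=pass" not in auth.lower():
            warnings.append("no dmarc=pass in Authentication-Results")
        return warnings

    raw = ("From: CEO <ceo@examp1e.com>\r\n"
           "Authentication-Results: mx.example.com; dmarc=fail\r\n"
           "Subject: Urgent wire transfer\r\n\r\n"
           "Please process the attached invoice today.")
    for warning in sender_warnings(raw):
        print(warning)

Both checks fire on this sample: the lookalike domain (examp1e.com) and the failed authentication result are exactly the signals an employee masquerade tends to leave behind.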

BrandShield: Your (Not So) Secret Weapon Against AI-Generated Attacks

BrandShield is an industry leader in protecting brands from online threats, including AI-generated attacks. Our proprietary technology, coupled with our seasoned experts with decades of hard-won experience in the brand protection field, provides you with holistic, big-picture monitoring, including analysis, prioritization, detection, and takedowns - all in one intuitive platform. To learn more about how BrandShield can help safeguard your brand from AI-generated attacks, get in touch with us today.