Reality Check: AI and Its Actual Role in Cyber Crime


For several years now, AI has been plastered across the marketing materials of tech companies far and wide. It has been heralded as the companion that will get you through tedious routine work, such as writing emails – a feature I honestly appreciate. The professed development has not stopped there: AI has also been envisioned as a replacement for actual people.

Machine Hallucinations

The professions first at risk seem to be programmers, designers, video editors, musicians – the list goes on, and sometimes it feels like AI wants to put people in the kitchen washing dishes while it does all the fun stuff, such as creating art and music.

Be that as it may, the promise of AI has not yet been fulfilled, as large language models (LLMs) and other generative AI tools remain highly untrustworthy. For example, they have been observed to produce well-written answers that turn out to be riddled with nonsense – a phenomenon known as machine hallucination.

Even text summarization leaves a lot to be desired: Google’s AI has handily suggested that people put non-toxic glue on their pizza if the cheese is not sticking to the crust properly. Images and videos created with AI get better by the day, but people are quick to question and poke at their uncanniness.

Generative AI in the Hands of Criminals

Generative AI’s real impact for good is yet to be fully realized, but cyber criminals have certainly gotten a handle on how to leverage AI to their benefit.

A couple of LLM-based AI tools have emerged on shady online marketplaces over the last few years. These tools, however, are relatively expensive – easily costing some hundreds of dollars per month.

Their selling point is that they lack most of the restrictions that, for instance, OpenAI imposes on ChatGPT to prevent it from creating (or assisting in the creation of) malware, finding websites with specific vulnerabilities, or producing outright fraudulent content.


Blackhat LLM, a Reality Check

It’s difficult, if not impossible, to say how lucrative a business these blackhat LLMs are. If we apply critical thinking to get some kind of perspective on their market share, the first thing to remember is that they are more expensive than legally operating AI tools.

Furthermore, running a full-fledged generative AI tool requires continuous effort and incurs significant costs. You also need access to a good amount of high-quality, labeled data to keep the model operating at an acceptable level.

Therefore, it is likely that developing such AI tools is currently out of the question for the common cybercriminal. Things could change in the next five to ten years, as the cost of operating AI is likely to decrease and open-source machine learning models are likely to improve.

Naturally, nation-state actors are a different topic, but right now I’m talking about scammers and other financially motivated cybercriminals – who are the bane of our existence online.

Tour de Blackhat

If cybercriminals are not yet developing their own AI models and tools the way they do with malware, phishing toolkits, and email spammers, what kinds of tools are they actually using? They turn to the big AI service providers – tools you can simply Google for, tools that create text, images, video, or audio.

It’s easy to imagine where AI-generated text is used: phishing messages, translating text from one language to another, and so on. But cyber criminals also use generative AI tools to create fake personalities online – or, perhaps more dangerously, to mimic someone else’s likeness.

Deepfake Audio in Action

As Manuel Werka highlighted in his earlier post on audio deepfakes and voice cloning, we have gotten a taste of such AI-based attacks already: In Hong Kong, a finance worker sent $25 million to criminals after a deepfake version of the company’s CFO attended a video call.

Several individuals across the world have been contacted by someone they know asking for money – except that, plot twist, the caller was not who they claimed to be, but a cyber criminal who had simply cloned that person’s voice using AI.

No Social Media Presence, OPSEC Done?

I know some cyber security professionals would now say: “Hey, just stay off social media and you’ll be fine.” Unfortunately, there are several counter-arguments to that point.

First, people use social media, and for a lot of people it is the core of their Internet experience. Fighting against that is like fighting against a Finnish back winter (for those living outside of Finland, this means sudden snowfall or slush during periods when it should be warm and sunny – for instance, during midsummer).

Secondly, there are already malware strains that are not only after people’s data in the traditional sense, but also after images and videos of their victims. The GoldPickaxe malware was used by scammers to gather material from which they could create AI-generated images or videos of people and present them as proof of identity when contacting banks. You would be surprised how many institutions and online platforms rely on “selfie” videos or images as proof that a person actually is who they claim to be.

While GoldPickaxe seems to be more or less a standalone case for now, it sheds light on two worrying points:

  1. You don’t have to be visibly on social media for your likeness to be stolen.
  2. In the future, data about our likeness – images, video, audio clips – might become yet another type of data to be stolen and sold by cyber criminals.
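
To make that second point concrete, below is a minimal, hypothetical Python sketch of why a static “selfie” check can be defeated by replaying stolen images, while a one-time liveness challenge at least raises the bar. Every name here – the Submission structure, the check functions, the nonce format – is an illustrative assumption for this post, not any real bank’s verification API.

```python
# Hypothetical sketch: a static selfie check vs. a nonce-bound liveness check.
# All names and logic here are illustrative assumptions, not a production design.
import secrets
from dataclasses import dataclass
from typing import Optional

@dataclass
class Submission:
    face_matches_victim: bool    # a stolen, GoldPickaxe-style photo satisfies this
    spoken_nonce: Optional[str]  # the digits the person on camera actually read out

def static_selfie_check(sub: Submission) -> bool:
    # Weak: only asks "is this the right face?" - a question a folder of
    # stolen images answers perfectly well.
    return sub.face_matches_victim

def liveness_check(sub: Submission, expected_nonce: str) -> bool:
    # Stronger: binds the video to a fresh, unpredictable value that
    # pre-recorded stolen material cannot contain.
    return sub.face_matches_victim and sub.spoken_nonce == expected_nonce

# The server issues a one-time nonce for each verification attempt.
nonce = f"{secrets.randbelow(10**6):06d}"

replayed_stolen_photo = Submission(face_matches_victim=True, spoken_nonce=None)
live_customer = Submission(face_matches_victim=True, spoken_nonce=nonce)

assert static_selfie_check(replayed_stolen_photo)        # the replay attack succeeds
assert not liveness_check(replayed_stolen_photo, nonce)  # the replay attack fails
assert liveness_check(live_customer, nonce)              # the real customer passes
```

The obvious caveat is that a real-time deepfake – like the one on the Hong Kong video call – can read a fresh nonce out loud just fine, so a challenge like this raises the cost of an attack rather than eliminating it.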

The Likely Path of AI Evolution

AI will most likely shake up both the offensive and defensive sides of the security industry for years to come – we will see nasty technical bugs, quirks, and problems caused by AI implementations. That is the natural life cycle of cyber security and of technology in general. No matter how much security work goes into developing a system, there is a high chance that novel issues will be uncovered once things are in production.

Technology always comes with a downside. It is precisely because of technology that we have online scammers and cyber criminals today. We could uninvent email, but we would put a much larger population out of business than just the criminals.

Deepfake Revenge Porn

Lastly, I want to mention one thing that deserves a write-up of its own but needs to be mentioned nonetheless: AI can be misused by people other than scammers and those we think of as typical cyber criminals.

Thanks to the growing AI hype, AI-generated deepfake porn is ruining lives every day in schools, workplaces, and small communities. Tools for generating explicit content that mimics someone’s likeness are just one click away, and they are designed to be used primarily on pictures of women and girls. I cannot stress enough how much damage a non-consensually shared pornographic image can cause to an individual, yet these tools remain easy to find even for the non-tech-savvy.

LLMs Are Here for Good and Bad

We can conclude that leveraging AI for crime – whether for financial gain or for defamation – is already here with us. All signs point towards a future where exploitation of these tools continues, so preventing these issues and finding ways to mitigate them becomes the key.

When it comes to financially motivated crime, the good news is that beyond the exterior of the scam or attack, very little changes. The criminals still want us to click on links, hand over our sensitive data, download malware, transfer money or cryptocurrency to them, and so on.

These are all things we in the cyber security industry have been fighting against for years, and we will keep fighting them in the future. The devil, however, is in the details – the crime of tomorrow might be fueled by powerful systems able to scale and to fool people in more sophisticated ways. These AI systems could cause a lot of harm by enabling more personalized and scarier manipulation tactics.

Ultimately, it is up to the companies developing new technologies to make sure they are not overpromising, and that they are not compromising our security for the sake of monetary profit. Constant work also needs to go into making AI-based tools difficult to exploit.



