
Tuesday, November 07, 2023

ChatGPT and its variants could offer criminals new weapon

Commercial Crime International, June 2023

The public deployment of large language models such as the AI chatbot ChatGPT poses potential crime risks, particularly for online fraud and cybercrime, as Europol has warned. On the flip side, such systems are also bolstering AI's ability to ward off cybercrime and to decipher what is real and what may be a deep-fake. Paul Cochrane reports.

In the six months since the public launch of ChatGPT, there have been no publicised cases of it being used for criminal activities, but cyber security experts say it is just a matter of time. Indeed, in March, the European Union (EU) police agency Europol released a report on the impact of large language models (LLMs) on law enforcement.

Europol noted that although ChatGPT only draws on information freely available on the internet, by asking it contextual questions “it is significantly easier for malicious actors to better understand and subsequently carry out various types of crime”.

The launch of artificial intelligence (AI) chatbots such as OpenAI’s ChatGPT and Google Bard into the public realm has caused much excitement. These LLMs draw on massive amounts of data to answer a user’s questions, generate text or images, or write code.

Easier for criminals

Such systems have been available in the AI field for some time. What has made LLMs disruptive is their public availability, and the possibility of their capabilities being deployed for nefarious ends.

“In the cyber security space, it’s one of the biggest events over the past decade. Any tool we have in the security space, the adversaries have, which makes life easier for criminals,” said Anna Collard, senior vice-president of content strategy at KnowBe4 Africa, a security awareness training and simulated phishing platform.

While LLMs have content moderation policies designed to stop them answering questions classified as harmful or biased, such safety measures “can still be circumvented in some cases with the correct ‘prompt engineering’”, noted Europol. Prompt engineering involves refining the way a question is asked to influence the output generated by an AI system, said Collard. If prompts are broken down into individual steps, “it is trivial to bypass these safety measures,” she said.

Circumventing laws

An example is asking “how to circumvent sanctions on Russia” in a less direct way, said Ilya Volovik, senior manager, payment fraud intelligence at Recorded Future, a Boston-based intelligence company.

“You could ask how to ship goods to Russia via Kazakhstan through a forwarding address in the United States. ChatGPT may bring up the top companies to use, which is easier than searching online for the answer. There are many different areas ChatGPT can assist with circumventing the law,” he said.

Fraud and phishing

More direct concerns relate to cybercrime, particularly fraud and phishing. ChatGPT can, for instance, make a phishing email grammatically correct and more sophisticated. “In the past you looked out for spelling errors, but that doesn’t apply anymore,” said Collard.

KnowBe4 ran a demo of ChatGPT having a conversation with a human target on an app. “ChatGPT responds based on the stimuli it gets back. The goal is to trick the target [into providing certain information], and it does that automatically. If we can do that for good purposes – or phishing simulations – then the bad guys can definitely do it as well,” said Collard.

Other risks involve AI being used for deep-fakes, whether videos or photos, or to replicate a person’s voice for impersonation. The dangers were shown in 2019, when an unnamed British energy firm was scammed out of USD 243,000 by an AI-generated deep-fake of a chief executive’s voice.

“It was harder to do such deep-fakes a few years ago, now it is way more accessible, and will become more mainstream in the criminal world,” said Collard.

Bypass anti-fraud measures

ChatGPT has also made computer programming easier by being able to write code, which could be used for hacking and developing malware. Europol noted that “for a potential criminal with little technical knowledge, this [ChatGPT] is an invaluable resource. At the same time, a more advanced user can exploit these improved capabilities to further refine or even automate sophisticated cybercriminal modi operandi.”

Volovik said such knowledge can be useful for cyber criminals making purchases with stolen credit cards. For instance, anti-fraud measures on shopping platforms assess whether a buyer is the real card holder by checking an account’s history, such as reading reviews, writing comments or comparing similar merchandise.

“ChatGPT can create scripts to emulate human behaviour, such as a unique browsing experience to bypass certain anti-fraud measures,” he said. Criminals can also use AI to monitor whether a website’s cyber security software has detected the malicious code used to infect a system. “ChatGPT could monitor any changes in the anti-virus systems on certain pages, or how pages were set up, to give a warning to the cybercriminal,” said Volovik.

AI can be valuable

On the flip side, ChatGPT can be used to detect whether AI is being used maliciously in a system, whether a call is a deep-fake, whether an email is genuine, or whether there are security glitches. “The more advanced these AI systems become the easier it will be to decipher what is real and what is not, or if someone misleads you. ChatGPT can be a kind of assistant,” said Volovik.
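
As a concrete illustration of the “assistant” role Volovik describes, the sketch below asks an LLM to triage a suspicious email. It is a minimal example, assuming the openai Python package (v1.x) and an API key in the environment; the model name and prompt wording are illustrative, not a vetted detection method.

```python
# Minimal sketch: an LLM as a triage assistant for suspicious email,
# in the defensive spirit described above. Assumes the openai Python
# package (v1.x) with OPENAI_API_KEY set in the environment; the model
# name and prompt are illustrative, not a vetted detection method.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def triage_email(subject: str, body: str) -> str:
    """Ask the model for a phishing-risk verdict on one email."""
    prompt = (
        "You are a security analyst. Classify the email below as "
        "LIKELY_PHISHING or LIKELY_LEGITIMATE and give one sentence "
        f"of reasoning.\n\nSubject: {subject}\n\nBody:\n{body}"
    )
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative; any capable chat model works
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep verdicts consistent across runs
    )
    return response.choices[0].message.content


print(triage_email(
    "Urgent: verify your account",
    "Dear customer, your account is locked. Click here to confirm "
    "your password within 24 hours.",
))
```

Such a check would sit alongside, not replace, conventional filters; the point, as Volovik puts it, is the model acting as a second pair of eyes.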

Dr Michelle Frasher, a US-based independent financial crime and technology consultant, noted that the financial crime compliance industry has been exploring and developing machine learning and AI tools to add another layer to screening, sifting through data to spot suspicious transaction behaviour. Such systems are constantly being improved. “An emerging trend is to see what is potentially criminal; AI’s emerging ability to process these trends is valuable,” she said.
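
A minimal, hypothetical sketch of one common approach to this kind of screening layer, unsupervised anomaly detection over transaction features with scikit-learn’s IsolationForest, follows; the feature columns and contamination rate are invented for illustration.

```python
# Minimal sketch of an ML screening layer of the kind Dr Frasher
# describes: an unsupervised anomaly detector sifting transaction data
# for suspicious behaviour. Uses scikit-learn's IsolationForest; the
# feature columns and contamination rate are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy features per transaction: [amount_usd, hour_of_day, days_since_last_txn]
transactions = np.array([
    [42.50, 14, 1],
    [18.99, 10, 2],
    [55.00, 16, 1],
    [9800.00, 3, 0],  # large transfer at 3am: the kind of outlier to flag
    [23.75, 12, 3],
])

detector = IsolationForest(contamination=0.2, random_state=0)
labels = detector.fit_predict(transactions)  # -1 = anomaly, 1 = normal

for txn, label in zip(transactions, labels):
    if label == -1:
        print(f"Flag for analyst review: {txn}")
```

In practice such a detector only triages: flagged transactions go to human analysts, which is the extra “layer” on top of rules-based screening that Dr Frasher refers to.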

As for ChatGPT’s potential use by criminals, Dr Frasher questioned how prevalent it may be. “There are tonnes of corners of the internet that have information [criminals could use], and there are still problems with AI’s accuracy. Right now, ChatGPT summarises information, and sometimes it doesn’t even do that well,” she said.

Better regulation

But the fact remains that as ChatGPT and other LLMs become more accurate, the risk of their being exploited for criminal purposes is real. This is one reason why there are growing calls in certain jurisdictions to regulate AI more closely.

The European Parliament is developing an Artificial Intelligence Act that would establish a central AI authority to classify AI systems by risk, and strengthen rules around data quality, transparency, human oversight and accountability.

Such regulatory moves are expected to improve content moderation policies on ChatGPT and other prominent LLMs such as Vicuna, Koala, GPT4All, and Dolly 2.0.

However, open-source AI is developing fast and could operate under regulators’ radar, noted Volovik. Such systems may outcompete more prominent ones, while scalable personal AI is increasingly easy to develop, with advanced users able to set up such a system in an evening.

“How will we fight this? With another AI system. The question is, whose AI is better? This will become more important as we see more attacks come from the likes of ChatGPT,” said Volovik.
