
Europol experts have released a new report identifying a worrying trend: AI chatbots such as ChatGPT can be misused by criminals to facilitate fraud, spread disinformation, and commit cybercrime.
ChatGPT may be the latest craze, but law enforcement officers in Europe are concerned that criminals could exploit this type of AI to further their illicit activities.
In the report, published on 27 March, Europol voiced a stark warning about the potential for large language models – such as ChatGPT – to be abused. Europol warns that as AI systems become more sophisticated, it is increasingly important for law enforcement to stay ahead of developments in order to anticipate and prevent misuse.
Europol has identified three main areas where criminals could take advantage of AI chatbots such as ChatGPT: fraud and social engineering, disinformation, and cybercrime.
The report states that ChatGPT’s ability to generate realistic text impersonating specific individuals or groups could make it an ideal tool for phishing on a large scale. In addition, its capacity to produce believable text quickly and at volume makes it well suited to spreading false information and propaganda.
Furthermore, criminals with no technical knowledge can use AI chatbots to generate malicious code that could facilitate cybercrime.
Rachel Jones, CEO of SnapDragon Monitoring, has warned that when misused, ChatGPT can become “a cyber weapon of severe destruction”. She has urged businesses to communicate with their customers about the threat posed by AI chatbots, and to take steps to monitor for fake domains.
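As one illustration of the kind of domain monitoring Jones describes, a minimal sketch could flag lookalike (“typosquatted”) domains by their edit distance from a legitimate one. This is not how SnapDragon Monitoring works; the domain names and distance threshold below are purely hypothetical examples.

```python
# Illustrative sketch only: flag lookalike domains by edit distance.
# All domain names here are hypothetical, not real monitored brands.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution
        prev = curr
    return prev[-1]

def suspicious_domains(legit: str, candidates: list[str],
                       max_distance: int = 2) -> list[str]:
    """Return candidates within max_distance edits of the legitimate domain."""
    return [d for d in candidates
            if d != legit and levenshtein(d, legit) <= max_distance]

candidates = ["example.com", "examp1e.com", "exarnple.com", "unrelated.org"]
print(suspicious_domains("example.com", candidates))
# → ['examp1e.com', 'exarnple.com']
```

In practice such a check would run against newly registered domain feeds; a small edit distance catches common tricks such as substituting “1” for “l” or “rn” for “m”.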
On a personal level, Jones advises people to be sceptical of emails requesting personal and financial information, and to avoid clicking on links in such messages. If an email claims to be urgent, she advises calling the organisation directly, as any security-conscious business should not treat this as a nuisance.