Cybercriminals Struggle to Harness AI, Research Reveals

Alex Turner, Technology Editor


In a fascinating twist, recent research indicates that cybercriminals are finding it challenging to integrate artificial intelligence (AI) into their illicit operations. A collaborative study by experts from the universities of Edinburgh, Strathclyde, and Cambridge scrutinised approximately 100 million posts from dark web forums, uncovering a startling lack of proficiency among cybercriminals in leveraging these advanced technologies.

The Research Breakdown

The study, which tapped into the CrimeBB database, employed a mix of machine learning tools and manual analysis techniques to sift through vast amounts of data. The researchers aimed to understand how hackers have been experimenting with AI since the launch of ChatGPT in November 2022. However, the results paint a picture of a community struggling to keep pace with evolving technology.
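The study's exact pipeline has not been published, but a first pass over ~100 million posts plausibly combines automated filtering with manual review. The following is a minimal, hypothetical sketch of such a keyword filter; the keywords and sample posts are invented for illustration and do not come from the CrimeBB dataset.

```python
# Hypothetical first-pass filter: surface AI-related forum posts
# for manual analysis. Keywords and sample posts are invented.

AI_KEYWORDS = {"chatgpt", "gpt", "llm", "jailbreak", "prompt"}

def flag_ai_posts(posts):
    """Return posts that mention any AI-related keyword."""
    flagged = []
    for post in posts:
        tokens = set(post.lower().split())
        if tokens & AI_KEYWORDS:  # any keyword present
            flagged.append(post)
    return flagged

sample = [
    "selling fresh dumps, PM me",
    "anyone tried a ChatGPT jailbreak for phishing templates?",
    "new exploit kit update",
]
print(flag_ai_posts(sample))  # flags only the second post
```

In practice a filter like this would feed a machine-learning classifier and human annotators, since keyword matching alone produces many false positives at this scale.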

Rather than democratizing cybercrime, AI tools have predominantly benefited those already skilled in hacking. This means that the supposed barriers to entry for would-be cybercriminals remain dauntingly high.

AI’s Limited Impact on Cybercrime

Interestingly, the research highlighted that when AI is deployed in cybercrime, it is often used for specific nefarious activities. For instance, some have successfully employed AI to operate social media bots that spread misogynistic content or to camouflage fraud schemes from the prying eyes of cybersecurity professionals. Yet, even with these applications, the overall advantages have been minimal for less experienced offenders.

Dr Ben Collier, a senior lecturer at the University of Edinburgh, commented, “Cybercriminals are experimenting with these tools, but as far as we can tell, it’s not delivering them real benefits in their own work. Our message to industry is: don’t panic yet.” This statement underscores the current ineffectiveness of AI in empowering the less skilled within the cybercrime realm.

The Real Threat: Poorly Secured AI Systems

While cybercriminals grapple with AI, the report draws attention to a more pressing concern: the potential for poorly secured AI systems in legitimate sectors. The researchers warn that companies and individuals adopting these technologies without adequate security measures could unwittingly expose themselves to catastrophic cyberattacks. As cybercriminals home in on vulnerable systems, the risks multiply.

Furthermore, as AI continues to infiltrate mainstream software industries, many individuals within cybercrime circles are expressing fears over job security. This anxiety may push some deeper into cybercrime as they seek alternative sources of income.

Looking Ahead: The Future of Cybercrime and AI

The study’s findings will be presented at the Workshop on the Economics of Information Security in Berkeley, California, this June, where experts will delve deeper into the implications of these revelations.

The report also discusses the risks associated with “vibecoded” products, where AI-generated code is integrated into legitimate software. These developments could pose significant threats if security protocols are not tightened.
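The report does not detail specific flaws, but one class of bug that security audits commonly find in hastily generated code is SQL injection via string interpolation. The sketch below is purely illustrative, with invented names; it contrasts the vulnerable pattern with a parameterized query.

```python
import sqlite3

# Illustrative only: a flaw of the kind audits often find in
# AI-generated database code. All names here are invented.

def find_user_unsafe(conn, username):
    # Builds SQL by string interpolation -- vulnerable to injection.
    query = f"SELECT id FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver handles escaping.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

payload = "' OR '1'='1"
leaked = find_user_unsafe(conn, payload)  # injection returns every row
safe = find_user_safe(conn, payload)      # returns no rows
```

The unsafe version turns the attacker-controlled input into part of the query itself, leaking the whole table; the parameterized version treats it as plain data.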

Why it Matters

The implications of this research extend beyond the world of cybercriminals; they highlight the urgent need for robust security measures in the face of rapidly evolving technologies. As organisations increasingly adopt AI, understanding the vulnerabilities associated with these systems is crucial. The landscape of cybercrime may be shifting, but the keys to safeguarding our digital futures lie in vigilance and proactive security strategies. The message is clear: while AI may not yet be a powerful tool for cybercriminals, the real danger comes from unchecked technology in the hands of the unprepared.

Alex Turner has covered the technology industry for over a decade, specializing in artificial intelligence, cybersecurity, and Big Tech regulation. A former software engineer turned journalist, he brings technical depth to his reporting and has broken major stories on data privacy and platform accountability. His work has been cited by parliamentary committees and featured in documentaries on digital rights.

© 2026 The Update Desk. All rights reserved.