Major Security Flaw Discovered in AI Coding Platform Poses Risks to Users

Alex Turner, Technology Editor


A popular AI coding platform known as Orchids has been found to contain a significant cybersecurity vulnerability that could compromise its users. The flaw was demonstrated when a BBC reporter's laptop was hacked within minutes, underlining the risks of trusting AI tools with extensive access to our systems. The incident raises urgent questions about the safety of emerging technologies as they gain traction in the market.

The Rise of Vibe Coding

Orchids is celebrated as a “vibe-coding” tool, designed to let people without programming expertise create apps and games simply by entering text prompts into a chatbot. With over a million users, it has attracted a wide following, including major corporations like Google, Uber, and Amazon. The platform is touted as one of the leading vibe-coding applications, drawing high praise from analysts at App Bench and beyond. However, the recent security breach has cast a shadow over its reputation, prompting a closer examination of the implications of such tools.

A Flaw Exposed

Cybersecurity researcher Etizaz Mohsin demonstrated the vulnerability to the BBC by conducting a test on the Orchids desktop application. He initiated a project aimed at creating a computer game based on the BBC News website. Exploiting a previously undisclosed security weakness, Mohsin was able to infiltrate the project files and insert a line of code that allowed him to take control of the reporter’s laptop. The result? A notepad file appeared on the desktop with a chilling message: “Joe is hacked,” alongside a wallpaper featuring an AI hacker.

This incident exemplifies the potential hazards of zero-click attacks—hacks that occur without any action from the victim. Mohsin warned that a malicious actor could have easily installed harmful software or extracted sensitive information, underscoring the severity of the platform’s security shortcomings.

A Wake-Up Call for AI Safety

Mohsin, a seasoned cybersecurity expert based in the UK, has previously uncovered vulnerabilities in high-profile software, including the notorious Pegasus spyware. He began trying to alert Orchids to the flaw in December 2025 but only received a response recently, with the company attributing the delay to an overwhelming number of inquiries.

While Orchids is currently the only vibe-coding platform known to have this flaw, experts caution that this incident should serve as a wake-up call for all developers and users of AI tools. Kevin Curran, a professor of cybersecurity at Ulster University, noted that without proper discipline and oversight, code generated by these platforms can be prone to exploitation.

The Broader Implications of AI Tools

As AI agents like Orchids become more integrated into our daily lives, the potential for security vulnerabilities will only grow. One related example is the viral Clawbot, which can perform tasks on user devices with minimal human intervention. Such capabilities, while impressive, come with significant risks, according to Karolis Arbaciauskas from the cybersecurity firm NordPass. He advises users to operate these tools on dedicated machines and to employ disposable accounts for experimentation.

The term “vibe coding” has even been named word of the year by Collins Dictionary, highlighting the buzz surrounding this innovative yet precarious trend in technology.

Why It Matters

The emergence of AI coding platforms like Orchids represents a transformative shift in how we interact with technology, providing unparalleled convenience for users. However, as this incident illustrates, the integration of AI into our workflows must be approached with caution. The security vulnerabilities highlighted here not only jeopardise individual users but also pose broader risks to businesses and their confidential data. As we embrace the future of AI, we must prioritise robust security measures to safeguard our digital lives against the ever-evolving landscape of cyber threats.

Alex Turner has covered the technology industry for over a decade, specializing in artificial intelligence, cybersecurity, and Big Tech regulation. A former software engineer turned journalist, he brings technical depth to his reporting and has broken major stories on data privacy and platform accountability. His work has been cited by parliamentary committees and featured in documentaries on digital rights.