When Google's Jules AI agent added a new feature to a live codebase in under ten minutes, it initially seemed like a breakthrough. But the same capabilities that allow AI tools to scan, modify, and deploy code rapidly also introduce troubling new possibilities, particularly in the hands of malicious actors.
Experts are now voicing concern over the risks posed by hostile actors deploying AI tools with coding capabilities. If weaponised by rogue states or cybercriminals, such tools could be used to quietly embed harmful code into public or private repositories, potentially affecting millions of lines of critical software.
Even a single unnoticed line among hundreds of thousands could introduce a back door, a logic bomb, or a data leak. The danger lies in how easily AI-generated changes can slip past human review.
From modifying update mechanisms to exfiltrating sensitive data or weakening cryptographic routines, the threat is both technical and psychological.
Developers must catch every malicious change; an attacker using AI needs to succeed only once. As such tools become more advanced and more widely available, the conversation around safeguards, oversight, and secure-by-design principles is becoming increasingly urgent.