The recent hack of Amazon’s AI-based coding assistant, Q Developer, has brought up serious concerns about the security of generative AI tools in software development. Reports conducted by Bloomberg state that one of the hackers was able to gain access to the system by making a malicious pull request on GitHub. The update, which passed unnoticed, had hidden commands that would direct the AI to delete the files of users and restore the systems to their nearly factory settings.
Even though the attack was said to be aimed at finding vulnerabilities but not to cause actual damage, it still impacted Amazon customers who downloaded the infected version. Amazon said that the problem was addressed rapidly, although it is a stark reminder to engineers using AI to code and review code.
How hackers exploited Amazon Q through GitHub
The hacker used the identity of a genuine contributor to Amazon’s open-source repository on GitHub to attack Q Developer. They exploited the AI by sending instructions to do destructive tasks by inserting malicious instructions on a pull request. The instructions made the tool clear out systems, and there was a possibility of losing huge data. The threat was not detected by Amazon, as their regular review process did not help them to identify the threat, and the tainted update went live.
This was a form of social engineering where a hacker did not need to do much and was able to make the AI destructive under the name of normal functioning. This showed a simple way in which Gen-AI tools are prone to being fooled, and it illustrates how more thorough reviewing of contributions to public repositories ought to be done.
Generative AI: Helpful but vulnerable
The use of AI tools such as Q Developer, OpenAI Codex, etc. in what is now called vibe coding (programming with natural language prompts) is rapidly spreading. This will accelerate development as well as open new avenues to cyber threats. The report produced by Legit Security in 2025 states that almost fifty percent of the businesses that implemented the use of AI place themselves at risk without realising where and how AI is implemented in their work processes.
This is not the only misstep of other companies. The fast-growing startup Lovable had left its databases unsecured, and the competitor Replit found the problem. Lovable publicly acknowledged its security failures. It is clear that the safety measures required to keep up with the development with the use of AI are not keeping up with the pace of the development.
What developers can do to stay safe
Analysts indicate that developers must direct AI software to place an emphasis on secure coding. Also, in principle all the AI-generated code should be manually reviewed even at the cost of slowing down the process. The additional measures can contribute to identifying what is overlooked by automated systems at the moment. The increasing integration of AI in the daily coding practices means developers have to learn rapidly to ensure that their systems are not abused.