Amazon’s Coding Tool Hacked — Experts Warn of Bigger Risks

Hackers used Amazon's own update process to insert destructive code into the Amazon Q tool.

A recent cyber incident involving Amazon's AI-powered coding assistant, Amazon Q, has raised serious concerns about the safety of developer tools and the risks of software supply chain attacks.

The issue came to light after a hacker managed to insert harmful code into the Visual Studio Code (VS Code) extension used by developers to access Amazon Q. This tampered version of the tool was distributed as an official update on July 17 — potentially reaching thousands of users before it was caught.

According to media reports, the attacker submitted a pull request to the tool's public GitHub repository from an unverified account. The attacker somehow gained elevated access and was able to add instructions that could direct the AI assistant to delete files and cloud resources, in effect telling it to behave like a system cleaner with dangerous privileges.

The hacker later told reporters that the goal wasn’t to cause damage but to make a point about weak security practices in AI tools. They described their action as a protest against what they called Amazon’s “AI security theatre.”


Amazon’s response and the fix

Amazon acted quickly to address the breach. The company confirmed that the issue stemmed from a known vulnerability in two open-source repositories, which have since been secured. The corrupted version, 1.84.0, has been replaced with version 1.85, which includes the necessary security fixes. Amazon stated that no customer data or systems were harmed.


Bigger questions about AI security

This incident highlights a growing problem: the security of AI-based developer tools. Experts warn that when AI systems like code assistants are compromised, they can be used to inject harmful code into software projects or expose users to unseen risks.

Cybersecurity professionals say the situation also exposes gaps in how open-source contributions are reviewed and approved. Without strict checks in place, bad actors can take advantage of weak points in the software release process.


What needs to change?

Security analysts are calling for stronger DevSecOps practices — a development approach that combines software engineering, cybersecurity, and operations. This includes:

• Verifying all updates through secure hash checks (a minimal sketch follows this list),

• Monitoring tools for unusual behaviour,

• Limiting system access permissions, and

• Ensuring quick communication with users during incidents.
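To make the first point concrete, here is a minimal sketch of a hash check performed before installing an extension update. The file name and expected digest are hypothetical placeholders, not values published by Amazon; in a real workflow the expected digest would come from the vendor's signed release notes or manifest.

import hashlib
import sys

# Hypothetical file name and digest, for illustration only.
EXTENSION_FILE = "amazon-q-extension-1.85.0.vsix"
EXPECTED_SHA256 = "replace-with-the-digest-published-by-the-vendor"

def sha256_of(path):
    # Read the file in chunks so large packages do not need to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of(EXTENSION_FILE)
    if actual != EXPECTED_SHA256:
        print("Hash mismatch: refusing to install the update.")
        sys.exit(1)
    print("Checksum verified; the update matches the published digest.")

In practice, a check like this would run automatically in the release and installation pipeline, alongside signature verification, rather than being left to individual users.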

They also stress the need for AI-specific threat models, especially as AI agents begin to take on more powerful system-level tasks.

The breach is a wake-up call for companies using or building AI tools. As more businesses rely on intelligent systems to write, test, or deploy code, ensuring these tools are secure from the inside out is no longer optional; it is essential.
