
Amazon’s Coding Tool Hacked — Experts Warn of Bigger Risks

 



A recent cyber incident involving Amazon’s AI-powered coding assistant, Amazon Q, has raised serious concerns about the safety of developer tools and the risks of software supply chain attacks.

The issue came to light after a hacker managed to insert harmful code into the Visual Studio Code (VS Code) extension used by developers to access Amazon Q. This tampered version of the tool was distributed as an official update on July 17 — potentially reaching thousands of users before it was caught.

According to media reports, the attacker submitted a code change request to the public code repository on GitHub using an unverified account. Somehow, the attacker gained elevated access and was able to add commands that could instruct the AI assistant to delete files and cloud resources — essentially behaving like a system cleaner with dangerous privileges.

The hacker later told reporters that the goal wasn’t to cause damage but to make a point about weak security practices in AI tools. They described their action as a protest against what they called Amazon’s “AI security theatre.”


Amazon’s response and the fix

Amazon acted swiftly to address the breach. The company confirmed that the issue was tied to a known vulnerability in two open-source repositories, which have now been secured. The corrupted version, 1.84.0, has been replaced with version 1.85, which includes the necessary security fixes. Amazon stated that no customer data or systems were harmed.


Bigger questions about AI security

This incident highlights a growing problem: the security of AI-based developer tools. Experts warn that when AI systems like code assistants are compromised, they can be used to inject harmful code into software projects or expose users to unseen risks.

Cybersecurity professionals say the situation also exposes gaps in how open-source contributions are reviewed and approved. Without strict checks in place, bad actors can take advantage of weak points in the software release process.


What needs to change?

Security analysts are calling for stronger DevSecOps practices — a development approach that combines software engineering, cybersecurity, and operations. This includes:

• Verifying all updates through secure hash checks,

• Monitoring tools for unusual behaviour,

• Limiting system access permissions and

• Ensuring quick communication with users during incidents.
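The first practice on that list, verifying updates through secure hash checks, can be sketched in a few lines. This is an illustrative example, not Amazon's actual release pipeline; the file path and published digest are hypothetical stand-ins for an extension artifact and the checksum a vendor would publish alongside it.

```python
import hashlib
import hmac

def sha256_of(path):
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_update(path, published_digest):
    """Accept a downloaded update only if its digest matches the
    vendor's published value (constant-time comparison)."""
    return hmac.compare_digest(sha256_of(path), published_digest.lower())
```

A client that refuses to install any artifact failing this check would have rejected a tampered extension build even if it arrived through the official update channel.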

They also stress the need for AI-specific threat models, especially as AI agents begin to take on more powerful system-level tasks.

The breach is a wake-up call for companies using or building AI tools. As more businesses rely on intelligent systems to write, test, or deploy code, ensuring these tools are secure from the inside out is no longer optional: it’s essential.

AI-Driven Phishing Threats Loom After Massive Data Breach at Major Betting Platforms

 

A significant data breach impacting as many as 800,000 users from two leading online betting platforms has heightened fears over sophisticated phishing risks and the growing role of artificial intelligence in exploiting compromised personal data.

The breach, confirmed by Flutter Entertainment, the parent company behind Paddy Power and Betfair, exposed users’ IP addresses, email addresses, and activity linked to their gambling profiles.

While no payment or password information was leaked, cybersecurity experts warn that the stolen details could still enable highly targeted attacks. Flutter, which also owns brands like Sky Bet and Tombola, referred to the event as a “data incident” that has been contained. The company told affected customers there is “nothing you need to do in response to this incident,” but still advised them to stay alert.

With an average of 4.2 million monthly users across the UK and Ireland, even partial exposure poses a serious risk.

Harley Morlet, chief marketing officer at Storm Guidance, emphasized: “With the advent of AI, I think it would actually be very easy to build out a large-scale automated attack. Basically, focusing on crafting messages that look appealing to those gamblers.”

Similarly, Tim Rawlins, director and senior adviser at the NCC Group, urged users to remain cautious: “You might re-enter your credit card number, you might re-enter your bank account details, those are the sort of things people need to be on the lookout for and be conscious of that sort of threat. If it's too good to be true, it probably is a fraudster who's coming after your money.”

Rawlins also noted that AI technology is making phishing emails increasingly convincing, particularly in spear-phishing campaigns where stolen data is leveraged to mimic genuine communications.

Experts caution that relying solely on free antivirus tools or standard Android antivirus apps offers limited protection. While these can block known malware, they are less effective against deceptive emails that trick users into voluntarily revealing sensitive information.

A stronger defense involves practicing layered security: maintaining skepticism, exercising caution, and following strict cyber hygiene habits to minimize exposure.

Stop! Don’t Let That AI App Spy on Your Inbox, Photos, and Calls

 



Artificial intelligence is now part of almost everything we use — from the apps on your phone to voice assistants and even touchscreen menus at restaurants. What once felt futuristic is quickly becoming everyday reality. But as AI gets more involved in our lives, it’s also starting to ask for more access to our private information, and that should raise concerns.

Many AI-powered tools today request broad permissions, sometimes more than they truly need to function. These requests often include access to your email, contacts, calendar, messages, or even files and photos stored on your device. While the goal may be to help you save time, the trade-off could be your privacy.

This situation is similar to how people once questioned why simple mobile apps, like flashlight or calculator tools, needed access to personal data such as location or contact lists. The reason? That information could be sold or used for profit. Now, some AI tools are taking the same route, asking for access to highly personal data to improve their systems or provide services.

One example is a new web browser powered by AI. It allows users to search, summarize emails, and manage calendars. But in exchange, it asks for a wide range of permissions like sending emails on your behalf, viewing your saved contacts, reading your calendar events, and sometimes even seeing employee directories at workplaces. While companies claim this data is stored locally and not misused, giving such broad access still carries serious risks.

Other AI apps promise to take notes during calls or schedule appointments. But to do this, they often request live access to your phone conversations, calendar, contacts, and browsing history. Some even go as far as reading photos on your device that haven’t been uploaded yet. That’s a lot of personal information for one assistant to manage.

Experts warn that these apps are capable of acting independently on your behalf, which means you must trust them not just to store your data safely but also to use it responsibly. The issue is, AI can make mistakes and when that happens, real humans at these companies might look through your private information to figure out what went wrong.

So before granting an AI app permission to access your digital life, ask yourself: is the convenience really worth it? Giving these tools full access is like handing over a digital copy of your entire personal history, and once it’s done, there’s no taking it back.

Always read permission requests carefully. If an app asks for more than it needs, it’s okay to say no.

Why Policy-Driven Cryptography Matters in the AI Era

 



In this modern-day digital world, companies are under constant pressure to keep their networks secure. Traditionally, encryption systems were deeply built into applications and devices, making them hard to change or update. When a flaw was found, either in the encryption method itself or because hackers became smarter, fixing it took time, effort, and risk. Most companies chose to live with the risk because they didn’t have an easy way to fix the problem or even fully understand where it existed.

Now, with data moving across cloud servers, edge devices, and personal gadgets, it’s no longer practical to depend on rigid security setups. Businesses need flexible systems that can quickly respond to new threats, government rules, and technological changes.

According to the IBM X‑Force 2025 Threat Intelligence Index, nearly one-third (30%) of all intrusions in 2024 began with valid account credential abuse, making identity theft a top pathway for attackers.

This is where policy-driven cryptography comes in.


What Is Policy-Driven Crypto Agility?

It means building systems where encryption tools and rules can be easily updated or swapped out based on pre-defined policies, rather than making changes manually in every application or device. Think of it like setting rules in a central dashboard: when updates are needed, the changes apply across the network with a few clicks.
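The central-dashboard idea can be sketched as a small policy registry. This is a minimal illustration under assumed names: the classifications, algorithm identifiers, and `AUDIT_LOG` structure are hypothetical, not a real product's schema. The point is that swapping an algorithm is a one-line change to the table, not an edit to every application.

```python
from datetime import datetime, timezone

# Hypothetical central policy table: data classification -> required cipher.
# Crypto agility here means updating THIS table, not every application.
POLICIES = {
    "public":       {"algorithm": "AES-128-GCM", "min_key_bits": 128},
    "confidential": {"algorithm": "AES-256-GCM", "min_key_bits": 256},
    "regulated":    {"algorithm": "AES-256-GCM", "min_key_bits": 256,
                     "pqc_hybrid": "ML-KEM-768"},  # post-quantum add-on
}

AUDIT_LOG = []  # every policy decision is recorded for compliance review

def resolve_policy(classification):
    """Look up the cipher suite an application must use for this data
    class, and append the decision to the audit trail."""
    policy = POLICIES[classification]
    AUDIT_LOG.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "classification": classification,
        "algorithm": policy["algorithm"],
    })
    return policy
```

Because every lookup leaves an audit entry, the same mechanism that enforces the rules also produces the review trail that regulations like GDPR or PCI DSS expect.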

This method helps businesses react quickly to new security threats without affecting ongoing services. It also supports easier compliance with laws like GDPR, HIPAA, or PCI DSS, as rules can be built directly into the system and leave behind an audit trail for review.


Why Is This Important Today?

Artificial intelligence is making cyber threats more powerful. AI tools can now scan massive amounts of encrypted data, detect patterns, and even speed up the process of cracking codes. At the same time, quantum computing, a new kind of computing still in development, may soon be able to break the encryption methods we rely on today.

If organizations start preparing now by using policy-based encryption systems, they’ll be better positioned to add future-proof encryption methods like post-quantum cryptography without having to rebuild everything from scratch.


How Can Organizations Start?

To make this work, businesses need a strong key management system: one that handles the creation, rotation, and deactivation of encryption keys. On top of that, there must be a smart control layer that reads the rules (policies) and makes changes across the network automatically.
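The key lifecycle described above — creation, rotation, and deactivation — can be sketched as a small manager class. This is an illustrative toy, not a production KMS: real systems persist keys in an HSM or vault and rotate on schedule, whereas here rotation is triggered manually and keys live in memory.

```python
import secrets
from datetime import datetime, timedelta, timezone

class KeyManager:
    """Minimal key lifecycle sketch: create, rotate, deactivate."""

    def __init__(self, rotate_after=timedelta(days=90)):
        self.rotate_after = rotate_after   # policy: max key age
        self.keys = {}                     # key_id -> metadata + material
        self.current_id = None
        self._counter = 0
        self._create()

    def _create(self):
        """Generate a fresh 256-bit key and make it current."""
        self._counter += 1
        key_id = f"key-{self._counter}"
        self.keys[key_id] = {
            "material": secrets.token_bytes(32),
            "created": datetime.now(timezone.utc),
            "active": True,
        }
        self.current_id = key_id
        return key_id

    def rotate(self):
        """Deactivate the current key (kept for decrypting old data)
        and issue a fresh one for new encryptions."""
        self.keys[self.current_id]["active"] = False
        return self._create()

    def active_key(self):
        return self.current_id, self.keys[self.current_id]["material"]
```

The control layer the article describes would sit on top of something like this, calling `rotate()` automatically whenever a policy (key age, algorithm deprecation, incident response) demands it.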

Policies should reflect real needs, such as what kind of data is being protected, where it’s going, and what device is using it. Teams across IT, security, and compliance must work together to keep these rules updated. Developers and staff should also be trained to understand how the system works.

As more companies shift toward cloud-based networks and edge computing, policy-driven cryptography offers a smarter, faster, and safer way to manage security. It reduces the chance of human error, keeps up with fast-moving threats, and ensures compliance with strict data regulations.

In a time when hackers use AI and quantum computing is fast approaching, flexible and policy-based encryption may be the key to keeping tomorrow’s networks safe.

How Tech Democratization Is Helping SMBs Tackle 2025’s Toughest Challenges

 

Small and medium-sized businesses (SMBs) are entering 2025 grappling with familiar hurdles: tight budgets, economic uncertainty, talent shortages, and limited cybersecurity resources. A survey of 300 decision-makers highlights how these challenges are pushing SMBs to seek smarter, more affordable tech solutions.

Technology itself ranks high on the list of SMB pain points. A 2023 Mastercard report (via Digital Commerce 360) showed that two-thirds of small-business owners saw seamless digital experiences as critical—but 25% were overwhelmed by the cost and complexity. The World Economic Forum's 2025 report echoed this, noting that SMBs are often “left behind” when it comes to transformative tech.

That’s changing fast. As enterprise-grade tools become more accessible, SMBs now have affordable, powerful options to bridge the tech gap and compete effectively.

1. Stronger, Smarter Networks
Downtime is expensive—up to $427/minute, says Pingdom. SMBs now have access to fast, reliable fiber internet with backup connections that kick in automatically. These networks support AI tools, cloud apps, IoT, and more—while offering secure, segmented Wi-Fi for teams, guests, and devices.

Case in point: Albemarle, North Carolina, deployed fiber internet with a cloud-based backup, ensuring critical systems stay online 24/7.

2. Cybersecurity That Fits the SMB Budget
Cyberattacks hit 81% of small businesses in the past year (Identity Theft Resource Center, 2024). Yet under half feel ready to respond, and many hesitate to invest due to cost. The good news: built-in firewalls, multifactor authentication, and scalable security layers are now more affordable than ever.

As Checker.ai founder Anup Kayastha told StartupNation, the company started with MFA and scaled security as they grew.

3. Big Brand Experiences, Small Biz Budgets
SMBs now have the digital tools to deliver seamless, omnichannel customer experiences—just like larger players. High-performance networks and cloud-based apps enable rich e-commerce journeys and AI-driven support that build brand presence and loyalty.

4. Predictable Pricing, Maximum Value
Tech no longer requires deep pockets. Today’s solutions bundle high-speed internet, cybersecurity, compliance, and productivity tools—often with self-service options to reduce IT overhead.

5. Built-In Tech Support
Forget costly consultants. Many SMB-friendly providers now offer local, on-site support as part of their packages—helping small businesses install, manage, and maintain systems with ease.

Google Gemini Bug Exploits Summaries for Phishing Scams


False AI summaries leading to phishing attacks

Google Gemini for Workspace can be exploited to generate email summaries that appear legitimate but include malicious instructions or warnings that direct users to phishing sites without using attachments or direct links.


Similar attacks were reported in 2024, and safeguards were later pushed to stop misleading responses. However, the tactic remains a problem for security experts. 

Gemini for attack

A prompt-injection attack on the Gemini model was revealed by cybersecurity researcher Marco Figueroa through 0din, Mozilla’s bug bounty program for GenAI tools. The tactic crafts an email carrying a hidden directive for Gemini: the threat actor conceals malicious commands in the message body using CSS and HTML that set the font size to zero and the color to white. 

According to Marco, who is GenAI Bug Bounty Programs Manager at Mozilla, “Because the injected text is rendered in white-on-white (or otherwise hidden), the victim never sees the instruction in the original message, only the fabricated 'security alert' in the AI-generated summary. Similar indirect prompt attacks on Gemini were first reported in 2024, and Google has already published mitigations, but the technique remains viable today.”

Because there are no attachments or links, Gmail does not flag the message, and it may reach the victim’s inbox. If the recipient opens the email and asks Gemini to summarize it, the AI tool parses the invisible directive and folds it into the summary. Figueroa provides an example in which Gemini follows the hidden prompt and appends a fake security warning claiming the victim’s Gmail password and phone number may be compromised.
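One defensive angle is to scan inbound HTML for text styled to be invisible before it ever reaches a summarizer. The sketch below is a simplified illustration, not a production sanitizer: it only checks inline `style` attributes for zero font size or white text, assumes well-nested tags, and ignores stylesheets and void elements that real mail HTML contains.

```python
from html.parser import HTMLParser
import re

# Inline styles that render text invisible: zero font size, or white text
# (which disappears against a white background).
HIDDEN_STYLE = re.compile(
    r"font-size\s*:\s*0|color\s*:\s*(?:white|#fff\b|#ffffff)",
    re.IGNORECASE,
)

class HiddenTextFinder(HTMLParser):
    """Collects text nodes whose enclosing element is styled invisible."""

    def __init__(self):
        super().__init__()
        self._stack = []       # True where an ancestor hides its contents
        self.hidden_text = []  # candidate injected directives

    def handle_starttag(self, tag, attrs):
        style = dict(attrs).get("style") or ""
        self._stack.append(bool(HIDDEN_STYLE.search(style)))

    def handle_endtag(self, tag):
        if self._stack:
            self._stack.pop()

    def handle_data(self, data):
        if any(self._stack) and data.strip():
            self.hidden_text.append(data.strip())
```

Anything this finder collects is text a human reader would never see but an AI summarizer would happily parse — exactly the gap the attack exploits.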

Impact

Supply-chain threats: CRM systems, automated ticketing emails, and newsletters can become injection vectors, turning one exploited SaaS account into hundreds of thousands of phishing beacons.

Cross-product surface: The same tactic applies to Gemini in Slides, Drive search, Docs, and any Workspace surface where the model ingests third-party content.

According to Marco, “Security teams must treat AI assistants as part of the attack surface and instrument them, sandbox them, and never assume their output is benign.”

Latest Malware "Mamona" Attacks Locally, Hides by Self Deletion


Cybersecurity experts are tracking Mamona, a new ransomware strain notable for its stripped-down build and silent local execution. The ransomware avoids the usual command-and-control (C2) servers, choosing instead a self-contained approach that slips past tools relying on network traffic analysis.  

The malware is executed locally on a Windows system as a standalone binary. This offline approach reveals a blind spot in traditional defenses, raising questions about how even the best antivirus and detection mechanisms will fare when there is no network traffic to inspect.

Self-deletion and escape techniques make detection difficult

Once executed, the malware introduces a three-second delay via a modified ping command, "cmd.exe /C ping 127.0.0.7 -n 3 > Nul & Del /f /q". After this, it deletes itself. The self-deletion eliminates forensic artifacts, making it difficult for experts to track or examine the malware after it has run. 

The malware uses 127.0.0.7 instead of the standard 127.0.0.1, which helps it evade simple detection checks keyed to the well-known loopback address, without leaving digital traces that older file-based scanners might flag. The malware also drops a ransom note titled README.HAes.txt and renames impacted files with the .HAes extension, signaling that encryption succeeded. 
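Even without network telemetry, the ping-delay/self-delete chain leaves a recognizable pattern in process command lines. The sketch below is an illustrative detector modeled on the command reported above, not an official Wazuh or Sysmon rule; the regex is an assumption about how variants of this chain would look.

```python
import re

# Pattern after the reported Mamona behaviour: a ping to any loopback
# address used as a timer, chained with a forced, quiet self-delete.
SELF_DELETE = re.compile(
    r"ping\s+127\.0\.0\.\d+\s+-n\s+\d+.*&\s*del\s+/f\s+/q",
    re.IGNORECASE,
)

def is_suspicious(cmdline):
    """Flag process command lines matching the ping-delay/self-delete chain."""
    return bool(SELF_DELETE.search(cmdline))
```

Fed with process-creation events (e.g. Sysmon Event ID 1), a check like this fires on the delete sequence itself rather than on network traffic the malware never generates.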

“We integrated Sysmon with Wazuh to enrich logs from the infected endpoint and created Wazuh detection rules to identify malicious behaviour associated with Mamona ransomware,” said Wazuh in a blog post.

Spotting Mamona

Wazuh has warned that the “plug-and-play” nature of the malware makes it easy for cybercriminals to adopt and contributes to the commoditization of ransomware. This shift highlights an urgent need to reassess what counts as the best ransomware protection when attacks do not need remote control infrastructure. Wazuh’s method for tracking Mamona combines Sysmon for log capture with custom rules that flag specific behaviours, like ransom note creation and ping-based delays.

According to TechRadar, “Rule 100901 targets the creation of the README.HAes.txt file, while Rule 100902 confirms the presence of ransomware when both ransom note activity and the delay/self-delete sequence appear together.”

CISA Lists Citrix Bleed 2 as Exploit, Gives One Day Deadline to Patch


CISA confirms bug exploit

The US Cybersecurity & Infrastructure Security Agency (CISA) has confirmed active exploitation of the CitrixBleed 2 vulnerability (CVE-2025-5777) in Citrix NetScaler ADC and Gateway, and has given federal agencies one day to patch. This is the shortest deadline CISA has set since it launched the Known Exploited Vulnerabilities (KEV) catalog, underscoring the severity of the attacks abusing the flaw. 

About the critical vulnerability

CVE-2025-5777 is a critical memory safety bug (out-of-bounds memory read) that gives hackers unauthorized access to restricted memory parts. The flaw affects NetScaler devices that are configured as an AAA virtual server or a Gateway. Citrix patched the vulnerabilities via the June 17 updates. 

Security researcher Kevin Beaumont then warned of the flaw’s potential for exploitation if left unaddressed, dubbing it “CitrixBleed 2” because it resembled the infamous CitrixBleed bug (CVE-2023-4966), which was widely abused in the wild by threat actors.

What is the CitrixBleed 2 exploit?

According to Bleeping Computer, “The first warning of CitrixBleed 2 being exploited came from ReliaQuest on June 27. On July 7, security researchers at watchTowr and Horizon3 published proof-of-concept exploits (PoCs) for CVE-2025-5777, demonstrating how the flaw can be leveraged in attacks that steal user session tokens.”

The rise of exploits

At the time, experts could not spot signs of active exploitation. Soon, however, threat actors began exploiting the bug at scale, and afterwards became active on hacker forums, “discussing, working, testing, and publicly sharing feedback on PoCs for the Citrix Bleed 2 vulnerability,” according to Bleeping Computer. 

Hackers showed particular interest in how to use the available exploits effectively in attacks, and various exploits for the bug have since been published.

Now that CISA has confirmed widespread exploitation of CitrixBleed 2, threat actors may have developed their exploits from the recently released technical details. CISA advises organizations to “apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable.”