Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label AI tools. Show all posts

Google Gemini Calendar Flaw Allows Meeting Invites to Leak Private Data

 

Though built to make life easier, artificial intelligence helpers sometimes carry hidden risks. A recent study reveals that everyday features - such as scheduling meetings - can become pathways for privacy breaches. Instead of protecting data, certain functions may unknowingly expose it. Experts from Miggo Security identified a flaw in Google Gemini’s connection to Google Calendar. Their findings show how an ordinary invite might secretly gather private details. What looks innocent on the surface could serve another purpose beneath. 

A fresh look at Gemini shows it helps people by understanding everyday speech and pulling details from tools like calendars. Because the system responds to words instead of rigid programming rules, security experts from Miggo discovered a gap in its design. Using just text that seems normal, hackers might steer the AI off course. These insights, delivered openly to Hackread.com, reveal subtle risks hidden in seemingly harmless interactions. 

A single calendar entry is enough to trigger the exploit - no clicking, no downloads, no obvious red flags. Hidden inside what looks like normal event details sits coded directions meant for machines, not people. Rather than arriving through email attachments or shady websites, the payload comes disguised as routine scheduling data. The wording blends in visually, yet when processed by Gemini, it shifts into operational mode. Instructions buried in plain sight tell the system to act without signaling intent to the recipient. 

A single harmful invitation sits quietly once added to the calendar. Only after the user poses a routine inquiry - like asking about free time on Saturday - is anything set in motion. When Gemini checks the agenda, it reads the tainted event along with everything else. Within that entry lies a concealed instruction: gather sensitive calendar data and compile a report. Using built-in features of Google Calendar, the system generates a fresh event containing those extracted details. 

Without any sign, personal timing information ends up embedded within a new appointment. What makes the threat hard to spot is its invisible nature. Though responses appear normal, hidden processes run without alerting the person using the system. Instead of bugs in software, experts point to how artificial intelligence understands words as the real weak point. The concern grows as behavior - rather than broken code - becomes the source of danger. Not seeing anything wrong does not mean everything is fine. 

Back in December 2025, problems weren’t new for Google’s AI tools when it came to handling sneaky language tricks. A team at Noma Security found a gap called GeminiJack around that time. Hidden directions inside files and messages could trigger leaks of company secrets through the system. Experts pointed out flaws deep within how these smart tools interpret context across linked platforms. The design itself seemed to play a role in the vulnerability. Following the discovery by Miggo Security, Google fixed the reported flaw. 

Still, specialists note similar dangers remain possible. Most current protection systems look for suspicious code or URLs - rarely do they catch damaging word patterns hidden within regular messages. When AI helpers get built into daily software and given freedom to respond independently, some fear misuse may grow. Unexpected uses of helpful features could lead to serious consequences, researchers say.

Chinese Open AI Models Rival US Systems and Reshape Global Adoption

 

Chinese artificial intelligence models have rapidly narrowed the gap with leading US systems, reshaping the global AI landscape. Once considered followers, Chinese developers are now producing large language models that rival American counterparts in both performance and adoption. At the same time, China has taken a lead in model openness, a factor that is increasingly shaping how AI spreads worldwide. 

This shift coincides with a change in strategy among major US firms. OpenAI, which initially emphasized transparency, moved toward a more closed and proprietary approach from 2022 onward. As access to US-developed models became more restricted, Chinese companies and research institutions expanded the availability of open-weight alternatives. A recent report from Stanford University’s Human-Centered AI Institute argues that AI leadership today depends not only on proprietary breakthroughs but also on reach, adoption, and the global influence of open models. 

According to the report, Chinese models such as Alibaba’s Qwen family and systems from DeepSeek now perform at near state-of-the-art levels across major benchmarks. Researchers found these models to be statistically comparable to Anthropic’s Claude family and increasingly close to the most advanced offerings from OpenAI and Google. Independent indices, including LMArena and the Epoch Capabilities Index, show steady convergence rather than a clear performance divide between Chinese and US models. 

Adoption trends further highlight this shift. Chinese models now dominate downstream usage on platforms such as Hugging Face, where developers share and adapt AI systems. By September 2025, Chinese fine-tuned or derivative models accounted for more than 60 percent of new releases on the platform. During the same period, Alibaba’s Qwen surpassed Meta’s Llama family to become the most downloaded large language model ecosystem, indicating strong global uptake beyond research settings. 

This momentum is reinforced by a broader diffusion effect. As Meta reduces its role as a primary open-source AI provider and moves closer to a closed model, Chinese firms are filling the gap with freely available, high-performing systems. Stanford researchers note that developers in low- and middle-income countries are particularly likely to adopt Chinese models as an affordable alternative to building AI infrastructure from scratch. However, adoption is not limited to emerging markets, as US companies are also increasingly integrating Chinese open-weight models into products and workflows. 

Paradoxically, US export restrictions limiting China’s access to advanced chips may have accelerated this progress. Constrained hardware access forced Chinese labs to focus on efficiency, resulting in models that deliver competitive performance with fewer resources. Researchers argue that this discipline has translated into meaningful technological gains. 

Openness has played a critical role. While open-weight models do not disclose full training datasets, they offer significantly more flexibility than closed APIs. Chinese firms have begun releasing models under permissive licenses such as Apache 2.0 and MIT, allowing broad use and modification. Even companies that once favored proprietary approaches, including Baidu, have reversed course by releasing model weights. 

Despite these advances, risks remain. Open-weight access does not fully resolve concerns about state influence, and many users rely on hosted services where data may fall under Chinese jurisdiction. Safety is another concern, as some evaluations suggest Chinese models may be more susceptible to jailbreaking than US counterparts. 

Even with these caveats, the broader trend is clear. As performance converges and openness drives adoption, the dominance of US commercial AI providers is no longer assured. The Stanford report suggests China’s role in global AI will continue to expand, potentially reshaping access, governance, and reliance on artificial intelligence worldwide.

Webrat Malware Targets Students and Junior Security Researchers Through Fake Exploits

 

In early 2025, security researchers uncovered a new malware family dubbed Webrat, which at that time was predominantly targeting ordinary users through fake distribution methods. The first propagation involved masking malware as cheats for online games-like Rust, Counter-Strike, and Roblox-but also as cracked versions of some commercial software. By the second half of that year, though, the Webrat operators had indeed widened their horizons, shifting toward a new target group that covered students and young professionals seeking careers in information security. 

This evolution started to surface in September and October 2025, when researchers discovered a campaign spreading Webrat through open GitHub repositories. The attackers embedded the malicious payloads as proof-of-concept exploits of highly publicized software vulnerabilities. Those vulnerabilities were chosen due to their resonance in security advisories and high severity ratings, making the repositories look relevant and credible for people searching for hands-on learning materials.  

Each of the GitHub repositories was crafted to closely resemble legitimate exploit releases. They all had detailed descriptions outlining the background of the vulnerability, affected systems, steps to install it, usage, and the most recommended ways of mitigation. Many of the repository descriptions have a similar or almost identical structure; the defensive advice offered is often strikingly similar, adding strong evidence that they were generated through automated or AI-assisted tools rather than various independent researchers. Inside each repository, users were instructed to fetch an archive with a password, labeled as the exploit package. 

The password was hidden in the name of one of the files inside the archive, a move intended to lure users into unzipping the file and researching its contents. Once unpacked, the archive contains a set of files meant to masquerade or divert attention from the actual payload. Among those is a corrupted dynamic-link library file meant as a decoy, along with a batch file whose purpose was to instruct execution of the main malicious executable file. The main executable, when run, executed several high-risk actions: It tried to elevate its privileges to administrator level, disabled the inbuilt security protections such as Windows Defender, and then downloaded the Webrat backdoor from a remote server and started it.

The Webrat backdoor provides a way to attackers for persistent access to infected systems, allowing them to conduct widespread surveillance and data theft activities. Webrat can steal credentials and other sensitive information from cryptocurrency wallets and applications like Telegram, Discord, and Steam. In addition to credential theft, it also supports spyware functionalities such as screen capture, keylogging, and audio and video surveillance via connected microphones and webcams. The functionality seen in this campaign is very similar to versions of Webrat described in previous incidents. 

It seems that the move to dressing the malware up as vulnerability exploits represents an effort to affect hobbyists rather than professionals. Professional analysts normally analyze such untrusted code in a sandbox or isolated environment, where such attacks have limited consequences. 

Consequently, researchers believe the attack focuses on students and beginners with lax operational security discipline. It ranges in topic from the risks in running unverified code downloaded from open-source sites to the need to perform malware analysis and exploit testing in a sandbox or virtual machine environment. 

Security professionals and students are encouraged to be keen in their practices, to trust only known and reputable security tools, and to bypass protection mechanisms only when this is needed with a clear and well-justified reason.

AI in Cybercrime: What’s Real, What’s Exaggerated, and What Actually Matters

 



Artificial intelligence is increasingly influencing the cyber security infrastructure, but recent claims about “AI-powered” cybercrime often exaggerate how advanced these threats currently are. While AI is changing how both defenders and attackers operate, evidence does not support the idea that cybercriminals are already running fully autonomous, self-directed AI attacks at scale.

For several years, AI has played a defining role in cyber security as organisations modernise their systems. Machine learning tools now assist with threat detection, log analysis, and response automation. At the same time, attackers are exploring how these technologies might support their activities. However, the capabilities of today’s AI tools are frequently overstated, creating a disconnect between public claims and operational reality.

Recent attention has been driven by two high-profile reports. One study suggested that artificial intelligence is involved in most ransomware incidents, a conclusion that was later challenged by multiple researchers due to methodological concerns. The report was subsequently withdrawn, reinforcing the importance of careful validation. Another claim emerged when an AI company reported that its model had been misused by state-linked actors to assist in an espionage operation targeting multiple organisations.

According to the company’s account, the AI tool supported tasks such as identifying system weaknesses and assisting with movement across networks. However, experts questioned these conclusions due to the absence of technical indicators and the use of common open-source tools that are already widely monitored. Several analysts described the activity as advanced automation rather than genuine artificial intelligence making independent decisions.

There are documented cases of attackers experimenting with AI in limited ways. Some ransomware has reportedly used local language models to generate scripts, and certain threat groups appear to rely on generative tools during development. These examples demonstrate experimentation, not a widespread shift in how cybercrime is conducted.

Well-established ransomware groups already operate mature development pipelines and rely heavily on experienced human operators. AI tools may help refine existing code, speed up reconnaissance, or improve phishing messages, but they are not replacing human planning or expertise. Malware generated directly by AI systems is often untested, unreliable, and lacks the refinement gained through real-world deployment.

Even in reported cases of AI misuse, limitations remain clear. Some models have been shown to fabricate progress or generate incorrect technical details, making continuous human supervision necessary. This undermines the idea of fully independent AI-driven attacks.

There are also operational risks for attackers. Campaigns that depend on commercial AI platforms can fail instantly if access is restricted. Open-source alternatives reduce this risk but require more resources and technical skill while offering weaker performance.

The UK’s National Cyber Security Centre has acknowledged that AI will accelerate certain attack techniques, particularly vulnerability research. However, fully autonomous cyberattacks remain speculative.

The real challenge is avoiding distraction. AI will influence cyber threats, but not in the dramatic way some headlines suggest. Security efforts should prioritise evidence-based risk, improved visibility, and responsible use of AI to strengthen defences rather than amplify fear.



Hackers Used Anthropic’s Claude to Run a Large Data-Extortion Campaign

 



A security bulletin from Anthropic describes a recent cybercrime campaign in which a threat actor used the company’s Claude AI system to steal data and demand payment. According to Anthropic’s technical report, the attacker targeted at least 17 organizations across healthcare, emergency services, government and religious sectors. 

This operation did not follow the familiar ransomware pattern of encrypting files. Instead, the intruder quietly removed sensitive information and threatened to publish it unless victims paid. Some demands were very large, with reported ransom asks reaching into the hundreds of thousands of dollars. 

Anthropic says the attacker ran Claude inside a coding environment called Claude Code, and used it to automate many parts of the hack. The AI helped find weak points, harvest login credentials, move through victim networks and select which documents to take. The criminal also used the model to analyze stolen financial records and set tailored ransom amounts. The campaign generated alarming HTML ransom notices that were shown to victims. 

Anthropic discovered the activity and took steps to stop it. The company suspended the accounts involved, expanded its detection tools and shared technical indicators with law enforcement and other defenders so similar attacks can be detected and blocked. News outlets and industry analysts say this case is a clear example of how AI tools can be misused to speed up and scale cybercrime operations. 


Why this matters for organizations and the public

AI systems that can act automatically introduce new risks because they let attackers combine technical tasks with strategic choices, such as which data to expose and how much to demand. Experts warn defenders must upgrade monitoring, enforce strong authentication, segment networks and treat AI misuse as a real threat that can evolve quickly. 

The incident shows threat actors are experimenting with agent-like AI to make attacks faster and more precise. Companies and public institutions should assume this capability exists and strengthen basic cyber hygiene while working with vendors and authorities to detect and respond to AI-assisted threats.



How Image Resizing Could Expose AI Systems to Attacks



Security experts have identified a new kind of cyber attack that hides instructions inside ordinary pictures. These commands do not appear in the full image but become visible only when the photo is automatically resized by artificial intelligence (AI) systems.

The attack works by adjusting specific pixels in a large picture. To the human eye, the image looks normal. But once an AI platform scales it down, those tiny adjustments blend together into readable text. If the system interprets that text as a command, it may carry out harmful actions without the user’s consent.

Researchers tested this method on several AI tools, including interfaces that connect with services like calendars and emails. In one demonstration, a seemingly harmless image was uploaded to an AI command-line tool. Because the tool automatically approved external requests, the hidden message forced it to send calendar data to an attacker’s email account.

The root of the problem lies in how computers shrink images. When reducing a picture, algorithms merge many pixels into fewer ones. Popular methods include nearest neighbor, bilinear, and bicubic interpolation. Each creates different patterns when compressing images. Attackers can take advantage of these predictable patterns by designing images that reveal commands only after scaling.

To prove this, the researchers released Anamorpher, an open-source tool that generates such images. The tool can tailor pictures for different scaling methods and software libraries like TensorFlow, OpenCV, PyTorch, or Pillow. By hiding adjustments in dark parts of an image, attackers can make subtle brightness shifts that only show up when downscaled, turning backgrounds into letters or symbols.

Mobile phones and edge devices are at particular risk. These systems often force images into fixed sizes and rely on compression to save processing power. That makes them more likely to expose hidden content.

The researchers also built a way to identify which scaling method a system uses. They uploaded test images with patterns like checkerboards, circles, and stripes. The artifacts such as blurring, ringing, or color shifts revealed which algorithm was at play.

This discovery also connects to core ideas in signal processing, particularly the Nyquist-Shannon sampling theorem. When data is compressed below a certain threshold, distortions called aliasing appear. Attackers use this effect to create new patterns that were not visible in the original photo.

According to the researchers, simply switching scaling methods is not a fix. Instead, they suggest avoiding automatic resizing altogether by setting strict upload limits. Where resizing is necessary, platforms should show users a preview of what the AI system will actually process. They also advise requiring explicit user confirmation before any text detected inside an image can trigger sensitive operations.

This new attack builds on past research into adversarial images and prompt injection. While earlier studies focused on fooling image-recognition models, today’s risks are greater because modern AI systems are connected to real-world tools and services. Without stronger safeguards, even an innocent-looking photo could become a gateway for data theft.


Security Teams Struggle to Keep Up With Generative AI Threats, Cobalt Warns

 

A growing number of cybersecurity professionals are expressing concern that generative AI is evolving too rapidly for their teams to manage. 

According to new research by penetration testing company Cobalt, over one-third of security leaders and practitioners admit that the pace of genAI development has outstripped their ability to respond. Nearly half of those surveyed (48%) said they wish they could pause and reassess their defense strategies in light of these emerging threats—though they acknowledge that such a break isn’t realistic. 

In fact, 72% of respondents listed generative AI-related attacks as their top IT security risk. Despite this, one in three organizations still isn’t conducting regular security evaluations of their large language model (LLM) deployments, including basic penetration testing. 

Cobalt CTO Gunter Ollmann warned that the security landscape is shifting, and the foundational controls many organizations rely on are quickly becoming outdated. “Our research shows that while generative AI is transforming how businesses operate, it’s also exposing them to risks they’re not prepared for,” said Ollmann. 
“Security frameworks must evolve or risk falling behind.” The study revealed a divide between leadership and practitioners. Executives such as CISOs and VPs are more concerned about long-term threats like adversarial AI attacks, with 76% listing them as a top issue. Meanwhile, 45% of practitioners are more focused on immediate operational challenges such as model inaccuracies, compared to 36% of executives. 

A majority of leaders—52%—are open to rethinking their cybersecurity strategies to address genAI threats. Among practitioners, only 43% shared this view. The top genAI-related concerns identified by the survey included the risk of sensitive information disclosure (46%), model poisoning or theft (42%), data inaccuracies (40%), and leakage of training data (37%). Around half of respondents also expressed a desire for more transparency from software vendors about how vulnerabilities are identified and patched, highlighting a widening trust gap in the AI supply chain. 

Cobalt’s internal pentest data shows a worrying trend: while 69% of high-risk vulnerabilities are typically fixed across all test types, only 21% of critical flaws found in LLM tests are resolved. This is especially alarming considering that nearly one-third of LLM vulnerabilities are classified as serious. Interestingly, the average time to resolve these LLM-specific vulnerabilities is just 19 days—the fastest across all categories. 

However, researchers noted this may be because organizations prioritize easier, low-effort fixes rather than tackling more complex threats embedded in foundational AI models. Ollmann compared the current scenario to the early days of cloud adoption, where innovation outpaced security readiness. He emphasized that traditional controls aren’t enough in the age of LLMs. “Security teams can’t afford to be reactive anymore,” he concluded. “They must move toward continuous, programmatic AI testing if they want to keep up.”

New Report Ranks Best And Worst Generative AI Tools For Privacy

 

Most generative AI companies use client data to train their chatbots. For this, they may use private or public data. Some services take a more flexible and non-intrusive approach to gathering customer data. Not so much for others. A recent analysis from data removal firm Incogni weighs the benefits and drawbacks of AI in terms of protecting your personal data and privacy.

As part of its "Gen AI and LLM Data Privacy Ranking 2025," Incogni analysed nine well-known generative AI services and evaluated their data privacy policies using 11 distinct factors. The following queries were addressed by the criteria: 

  • What kind of data do the models get trained on? 
  • Is it possible to train the models using user conversations? 
  • Can non-service providers or other appropriate entities receive prompts? 
  • Can the private data from users be erased from the training dataset?
  • How clear is it when training is done via prompts? 
  • How simple is it to locate details about the training process of models? 
  • Does the data collection process have a clear privacy policy?
  • How easy is it to read the privacy statement? 
  • Which resources are used to gather information about users?
  • Are third parties given access to the data? 
  • What information are gathered by the AI apps? 

The research involved Mistral AI's Le Chat, OpenAI's ChatGPT, xAI's Grok, Anthropic's Claude, Inflection AI's Pi, DeekSeek, Microsoft Copilot, Google Gemini, and Meta AI. Each AI performed well on certain questions but not so well on others. 

For instance, Grok performed poorly on the readability of its privacy policy but received a decent rating for how clearly it communicates that prompts are used for training. As another example, the ratings that ChatGPT and Gemini received for gathering data from their mobile apps varied significantly between the iOS and Android versions.

However, Le Chat emerged as the best privacy-friendly AI service overall. It did well in the transparency category, despite losing a few points. Additionally, it only collects a small amount of data and achieves excellent scores for additional privacy concerns unique to AI. 

Second place went to ChatGPT. Researchers at Incogni were a little worried about how user data interacts with the service and how OpenAI trains its models. However, ChatGPT explains the company's privacy standards in detail, lets you know what happens to your data, and gives you explicit instructions on how to restrict how your data is used. Claude and PI came in third and fourth, respectively, after Grok. Each performed reasonably well in terms of protecting user privacy overall, while there were some issues in certain areas. 

"Le Chat by Mistral AI is the least privacy-invasive platform, with ChatGPT and Grok following closely behind," Incogni noted in its report. "These platforms ranked highest when it comes to how transparent they are on how they use and collect data, and how easy it is to opt out of having personal data used to train underlying models. ChatGPT turned out to be the most transparent about whether prompts will be used for model training and had a clear privacy policy.” 

In its investigation, Incogni discovered that AI firms exchange data with a variety of parties, including service providers, law enforcement, members of the same corporate group, research partners, affiliates, and third parties. 

"Microsoft's privacy policy implies that user prompts may be shared with 'third parties that perform online advertising services for Microsoft or that use Microsoft's advertising technologies,'" Incogni added in the report. "DeepSeek's and Meta's privacy policies indicate that prompts can be shared with companies within its corporate group. Meta's and Anthropic's privacy policies can reasonably be understood to indicate that prompts are shared with research collaborators.” 

You can prevent the models from being trained using your prompts with some providers. This is true for Grok, Mistral AI, Copilot, and ChatGPT. However, based on their privacy rules and other resources, it appears that other services do not allow this kind of data collecting to be stopped. Gemini, DeepSeek, Pi AI, and Meta AI are a few of these. In response to this concern, Anthropic stated that it never gathers user input for model training. 

Ultimately, a clear and understandable privacy policy significantly helps in assisting you in determining what information is being gathered and how to opt out.