Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Latest News

Home Renovation Choices That Often Do Not Deliver Real Value

  Home renovations are often regarded as investments; however, not every upgrade enhances a home's function, character, or resale value....

All the recent news you need to know

IDESaster Report: Severe AI Bugs Found in AI Agents Can Lead to Data Theft and Exploit


Using AI agents for data exfiltrating and RCE

A six-month research into AI-based development tools has disclosed over thirty security bugs that allow remote code execution (RCE) and data exfiltration. The findings by IDEsaster research revealed how AI agents deployed in IDEs like Visual Studio Code, Zed, JetBrains products and various commercial assistants can be tricked into leaking sensitive data or launching hacker-controlled code. 

The research reports that 100% of tested AI IDEs and coding agents were vulnerable. Impacted products include GitHub, Windsurf, Copilot, Cursor, Kiro.dev, Zed.dev, Roo Code, Junie, Cline, Gemini CLI, and Claude Code. At least twenty-four assigned CVEs and additional AWS advisories were also included. 

AI assistants exploitation 

The main problem comes from the way AI agents interact with IDE features. Autonomous components that could read, edit, and create files were never intended for these editors. Once-harmless features turned become attack surfaces when AI agents acquired these skills. In their threat model, all AI IDEs essentially disregard the base software. Since these features have been around for years, they consider them to be naturally safe. 

Attack tactic 

However, the same functionalities can be weaponized into RCE primitives and data exfiltration once autonomous AI bots are included. The research reported that this is an IDE-agnostic attack chain. 

It begins with context hacking via prompt-injection. Covert instructions can be deployed in file names, rule files, READMEs, and outputs from malicious MCP servers. When an agent reads the context, the tool can be redirected to run authorized actions that activate malicious behaviours in the core IDE. The last stage exploits built-in features to steal data or run hacker code in AI IDEs sharing core software layers.

Examples

Writing a JSON file that references a remote schema is one example. Sensitive information gathered earlier in the chain is among the parameters inserted by the agent that are leaked when the IDE automatically retrieves that schema. This behavior was seen in Zed, JetBrains IDEs, and Visual Studio Code. The outbound request was not suppressed by developer safeguards like diff previews.  

Another case study uses altered IDE settings to show complete remote code execution. An attacker can make the IDE execute arbitrary code as soon as a relevant file type is opened or created by updating an executable file that is already in the workspace and then changing configuration fields like php.validate.executablePath. Similar exposure is demonstrated by JetBrains utilities via workspace metadata.

According to the IDEsaster report, “It’s impossible to entirely prevent this vulnerability class short-term, as IDEs were not initially built following the Secure for AI principle. However, these measures can be taken to reduce risk from both a user perspective and a maintainer perspective.”


5 Critical Situations Where You Should Never Rely on ChatGPT

  •  

Just a few years after its launch, ChatGPT has evolved into a go-to digital assistant for tasks ranging from quick searches to event planning. While it undeniably offers convenience, treating it as an all-knowing authority can be risky. ChatGPT is a large language model, not an infallible source of truth, and it is prone to misinformation and fabricated responses. Understanding where its usefulness ends is crucial.

Here are five important areas where experts strongly advise turning to real people, not AI chatbots:

  • Medical advice
ChatGPT cannot be trusted with health-related decisions. It is known to provide confident yet inaccurate information, and it may even acknowledge errors only after being corrected. Even healthcare professionals experimenting with AI agree that it can offer only broad, generic insights — not tailored guidance based on individual symptoms.

Despite this, the chatbot can still respond if you ask, "Hey, what's that sharp pain in my side?", instead of urging you to seek urgent medical care. The core issue is that chatbots cannot distinguish fact from fiction. They generate responses by blending massive amounts of data, regardless of accuracy.

ChatGPT is not, and likely never will be, a licensed medical professional. While it may provide references if asked, those sources must be carefully verified. In several cases, people have reported real harm after following chatbot-generated health advice.

  • Therapy
Mental health support is essential, yet often expensive. Even so-called "cheap" online therapy platforms can cost around $65 per session, and insurance coverage remains limited. While it may be tempting to confide in a chatbot, this can be dangerous.

One major concern is ChatGPT’s tendency toward agreement and validation. In therapy, this can be harmful, as it may encourage behaviors or beliefs that are objectively damaging. Effective mental health care requires an external, trained professional who can challenge harmful thought patterns rather than reinforce them.

There is also an ongoing lawsuit alleging that ChatGPT contributed to a teen’s suicide — a claim OpenAI denies. Regardless of the legal outcome, the case highlights the risks of relying on AI for mental health support. Even advocates of AI-assisted therapy admit that its limitations are significant.

  • Advice during emergencies
In emergencies, every second counts. Whether it’s a fire, accident, or medical crisis, turning to ChatGPT for instructions is a gamble. Incorrect advice in such situations can lead to severe injury or death.

Preparation is far more reliable than last-minute AI guidance. Learning basic skills like CPR or the Heimlich maneuver, participating in fire drills, and keeping emergency equipment on hand can save lives. If possible, always call emergency services rather than relying on a chatbot. This is one scenario where AI is least dependable.

  • Password generation
Using ChatGPT to create passwords may seem harmless, but it carries serious security risks. There is a strong possibility that the chatbot could generate identical or predictable passwords for multiple users. Without precise instructions, the suggested passwords may also lack sufficient complexity.

Additionally, chatbots often struggle with basic constraints, such as character counts. More importantly, ChatGPT stores prompts and outputs to improve its systems, raising concerns about sensitive data being reused or exposed.

Instead, experts recommend dedicated password generators offered by trusted password managers or reputable online tools, which are specifically designed with security in mind.
  • Future predictions
If even leading experts struggle to predict the future accurately, it’s unrealistic to expect ChatGPT to do better. Since AI models frequently get present-day facts wrong, their long-term forecasts are even less reliable.

Using ChatGPT to decide which stocks to buy, which team will win, or which career path will be most profitable is unwise. While it can be entertaining to ask speculative questions about humanity centuries from now, such responses should be treated as curiosity-driven thought experiments — not actionable guidance.

ChatGPT can be a helpful tool when used appropriately, but knowing its limitations is essential. For critical decisions involving health, safety, security, or mental well-being, real professionals remain irreplaceable.


Apple Addresses Two Actively Exploited Zero-Day Security Flaws


Following confirmation that two previously unknown security flaws had been actively exploited in the wild on Friday, Apple rolled out a series of security updates across its entire software ecosystem to address this issue, further demonstrating the continued use of high-end exploit chains against some targets. This is a major security update that is being released by Apple today across a wide range of iOS, iPadOS, macOS, watchOS, tvOS, visionOS, and the Safari browser. This fix addresses flaws that could have led attackers to execute malicious code in the past using specially crafted web content.


There are a number of vulnerabilities that are reminiscent of one of the ones Google patched earlier this week in Chrome, highlighting cross-platform vulnerability within shared graphics components. A report released by Apple indicated that at least one of the flaws may have been exploited as part of what it described as an "extremely sophisticated attack" targeting individuals who were running older versions of iOS before iOS 26, indicating that rather than an opportunistic abuse, this was a targeted exploitation campaign. 

Using a coordinated effort between Apple Security Engineering and Architecture and Google's Threat Analysis Group, the vulnerabilities were identified as CVE-2025-14174, a high severity memory corruption flaw, and as CVE-2025-43529, a use-after-free flaw. The two vulnerabilities were tracked as CVE-2025-43529, a use-after-free bug. 

In response to advanced threat activity, major vendors are continuing to collaborate together. Separately, Apple has released a new round of emergency patches after confirming that two more vulnerabilities have also been exploited in a real-world attack in a separate advisory. 

Apple has released a new update to address the flaws that could allow attackers to gain deeper control over their affected devices under carefully crafted conditions, and this update is applicable to iOS, iPadOS, macOS Sequoia, tvOS, and visionOS. 

A memory corruption issue in Apple's Core Audio framework has led to an issue named CVE-2025-31200 which could result in arbitrary code execution on a device when it processes a specially designed audio stream embedded within a malicious media file. The second issue is CVE-2025-31201. This flaw affects Apple's RPAC component, which could be exploited by an attacker with existing read and write capabilities in order to bypass the protections for Pointer Authentication.

In an attempt to mitigate the risks, Apple said it strengthened bounds checks and removed the vulnerable code path altogether. According to Apple's engineers, Google's Threat Analysis Group as well as the company's own engineers were the ones who identified the Core Audio vulnerability. According to the company's earlier disclosures, the bugs have been leveraged to launch what it calls "extremely sophisticated" attacks targeting a very specific group of iOS users. 

With the latest fix from Apple, the number of zero-day vulnerabilities Apple has patched in the past year has reached five, following earlier updates addressing actively exploited flaws in Core Media, Accessibility, and WebKit—a combination of high-risk issues that indicates a sustained focus by advanced threat actors on Apple's software stack, demonstrating that Apple's software stack has been the target of sophisticated attack actors. 

The company claims the vulnerabilities have been addressed across its latest software releases, including iOS 26.2, iOS and iPad OS 18.7.3, macOS Tahoe 26.2, tvOS 26.2, watchOS 26.2, visionOS 26.2, and Safari 26.2, making sure that both current and legacy platforms are protected from these threats.

Following the disclosure, Google quietly patched a previously undisclosed Chrome zero-day that had been labelled only as a high-severity issue "under coordination" earlier in the week, which was close in nature. After updating its advisory to CVE-2025-14174, Google confirmed that the flaw is an out-of-bounds memory access bug in the ANGLE graphics layer, which was the same issue that was addressed by Apple earlier this week. 

It indicates that Google and Apple handled vulnerabilities together in a coordinated manner. In the absence of further technical insight into the attacks themselves, Apple has refused to provide any further technical information, other than to note that the attacks were directed at a single group of individuals running older versions of iOS prior to iOS 26, which can be correlated with using exploits that are spyware-grade in nature. 

Since the problems both originate in WebKit, the browser engine that runs all iOS browsers, including Chrome, the researchers believe the activity represents a narrowly targeted campaign rather than an indiscriminate exploitation of the platform. 

Even though Apple emphasised that these attacks were targeted and very specific, the company strongly urged its users to update their operating systems without delay in order to prevent any further damage to their systems. 

Apple has patched seven zero-day vulnerabilities during 2025 with these updates. There have been a number of exploits that have been addressed in the wild throughout the year, from January and February until April, as well as a noteworthy backport that was implemented in September that provided protection against CVE-2025-43300 on older iPhone and iPad models still running iOS or iOSOS 15 and 16.

Apple's platforms have increasingly been discovered to be a high-value target for well-resourced threat actors, with the capability of exploiting browser and system weaknesses in a way that allows them to reach carefully selected victims using a chain of attacks on the platforms. 

It is evident that the company's rapid patching cadence, along with coordinated efforts with external researchers, indicates the company's maturing response to advanced exploitation; however, the frequency of zero-day fixes this year highlights the importance of timely updates across all supported devices in order to safeguard consumers.

Specifically, security experts recommend that users, especially those who perform high risk functions like journalists, executives, and public figures, enable automatic updates, limit the amount of untrusted web content they view, and review device security settings in order to reduce potential attack surfaces. 

Enterprises that manage Apple hardware at scale should also accelerate patch deployments and keep an eye out for signs of compromise associated with WebKit-based attacks. A growing number of targeted surveillance tools and commercial spyware continue to emerge, and Apple’s latest fixes serve to remind us of the fact that platform security is more of a process than it is a static guarantee. 

For a company to stay ahead of sophisticated adversaries, collaboration, transparency, and user awareness are increasingly critical to ensuring platform security.

AI Browsers Raise Privacy and Security Risks as Prompt Injection Attacks Grow

 

A new wave of competition is stirring in the browser market as companies like OpenAI, Perplexity, and The Browser Company aggressively push to redefine how humans interact with the web. Rather than merely displaying pages, these AI browsers will be engineered to reason, take action independently, and execute tasks on behalf of end users. At least four such products, including ChatGPT's Atlas, Perplexity's Comet, and The Browser Company's Dia, represent a transition reminiscent of the early browser wars, when Netscape and Internet Explorer battled to compete for a role in the shaping of the future of the Internet. 

Whereas the other browsers rely on search results and manual navigation, an AI browser is designed to understand natural language instructions and perform multi-step actions. For instance, a user can ask an AI browser to find a restaurant nearby, compare options, and make a reservation without the user opening the booking page themselves. In this context, the browser has to process both user instructions and the content of each of the webpages it accesses, intertwining decision-making with automation. 

But this capability also creates a serious security risk that's inherent in the way large language models work. AI systems cannot be sure whether a command comes from a trusted user or comes with general text on an untrusted web page. Malicious actors may now inject malicious instructions within webpages, which can include uses of invisible text, HTML comments, and image-based prompts. Unbeknownst to them, that might get processed by an AI browser along with the user's original request-a type of attack now called prompt injection. 

The consequence of such attacks could be dire, since AI browsers are designed to gain access to sensitive data in order to function effectively. Many ask for permission to emails, calendars, contacts, payment information, and browsing histories. If compromised, those very integrations become conduits for data exfiltration. Security researchers have shown just how prompt injections can trick AI browsers into forwarding emails, extracting stored credentials, making unauthorized purchases, or downloading malware without explicit user interaction. One such neat proof-of-concept was that of Perplexity's Comet browser, wherein the researchers had embedded command instructions in a Reddit comment, hidden behind a spoiler tag. When the browser arrived and was asked to summarise the page, it obediently followed the buried commands and tried to scrape email data. The user did nothing more than request a summary; passive interactions indeed are enough to get someone compromised. 

More recently, researchers detailed a method called HashJack, which abuses the way web browsers process URL fragments. Everything that appears after the “#” in a URL never actually makes it to the server of a given website and is only accessible to the browser. An attacker can embed nefarious commands in this fragment, and AI-powered browsers may read and act upon it without the hosting site detecting such commands. Researchers have already demonstrated that this method can make AI browsers show the wrong information, such as incorrect dosages of medication on well-known medical websites. Though vendors are experimenting with mitigations, such as reinforcement learning to detect suspicious prompts or restricting access during logged-out browsing sessions, these remain imperfect. 

The flexibility that makes AI browsers useful also makes them vulnerable. As the technology is still in development, it shows great convenience, but the security risks raise questions of whether fully trustworthy AI browsing is an unsolved problem.

Fake GitHub OSINT Tools Spread PyStoreRAT Malware

 

Attackers are using GitHub as part of a campaign to spread a novel JavaScript-based RAT called PyStoreRAT, masquerading as widely used OSINT, GPT, and security utilities targeting developers and analysts. The malware campaign leverages small pieces of Python or JavaScript loader code hosted on fake GitHub repositories, which silently fetch and execute remote HTML Application (HTA) files via mshta.exe, initiating a multi-stage infection chain. 

PyStoreRAT is said to be a modular, multi-stage implant that can load and execute a wide range of payload formats, including EXE, DLL, PowerShell, MSI, Python, JavaScript, and HTA modules, making it highly versatile once a breach has been established. One of the most prominent follow-on payloads is the Rhadamanthys information stealer, which specializes in the exfiltration of sensitive information, including credentials and financial data. The loaders arrive embedded in repositories branded as OSINT frameworks, DeFi trading bots, GPT wrappers, or security tools; many of these hardly work past statically showing menus or other placeholder behavior to appear legitimate.

It is believed the campaign started at around mid-June 2025, with the attackers publishing new repositories at a steady pace, and then artificially inflating stars and forks by promoting those on YouTube, X, and other platforms. When these tools started gaining traction and hit GitHub's trending lists, the threat actors slipped in malicious "maintenance" commits in October and November, quietly swapping or augmenting the code to insert the loader logic. This factor of abusing GitHub's trust model and popularity signals echoes a trend seen in supply chain-like gimmicks such as Stargazers Ghost Network tactic.

Subsequently, the loader retrieves a distant HTA, which installs PyStoreRAT, a tool that profiles the system, identifies whether it has administrator privileges, and searches for cryptocurrency wallet artifacts involving services such as Ledger Live, Trezor, Exodus, Atomic, Guarda, and BitBox02. It also identifies installed anti-virus software and searches for strings such as “Falcon” and “Reason,” which are attributed to CrowdStrike and Cybereason/ReasonLabs, with what appears to be a modification of the path used to execute mshta.exe to avoid detection. 

It uses a scheduled task, which is disguised as an NVIDIA self-update, with the RAT communicating with a distant server for command execution, which includes but is not limited to downloading and executing EXE payloads, delivering Rhadamanthys, unzip archives, loading malicious DLLs via rundll32.exe, unpacking MSI packages, executing PowerShell payloads within a suspended process, instantiating additional mshta.exe, and propagate via portable storage devices by embedding armed LNK documents. 

Additionally, it has the capacity to eliminate its own scheduled tasks, which is attributed to making reverse-engineering even more complicated. The Python-based weapons have revealed Russian language artifacts as well as programming conventions that indicate a probable Eastern European adversary, who has described PyStoreRAT as part of a growth toward adaptable, script-based implants that avoid common detection on a targeted environment until a very late stage in the fight.

Indian Government Proposes Compulsory Location Tracking in Smartphones, Faces Backlash


Government faces backlash over location-tracking proposal

The Indian government is pushing a telecom industry proposal that will compel smartphone companies to allow satellite location tracking that will be activated 24x7 for surveillance. 

Tech giants Samsung, Google, and Apple have opposed this move due to privacy concerns. Privacy debates have stirred in India after the government was forced to repeal an order that mandated smartphone companies to pre-install a state run cyber safety application on all devices. Activists and opposition raised concerns about possible spying. 

About the proposal 

Recently, the government had been concerned that agencies didn't get accurate locations when legal requests were sent to telecom companies during investigations. Currently, the firm only uses cellular tower data that provides estimated area location, this can be sometimes inaccurate.

The Cellular Operators Association of India (COAI) representing Bharti Airtel and Reliance Jio suggested accurate user locations be provided if the government mandates smartphone firms to turn on A-GPS technology which uses cellular data and satellite signals.

Strong opposition from tech giants 

If this is implemented, location services will be activated in smartphones with no disable option. Samsung, Google, and Apple strongly oppose this proposal. A proposal to track user location is not present anywhere else in the world, according to lobbying group India Cellular & Electronics Association (ICEA), representing Google and Apple. 

Reuters reached out to the India's IT and home ministries for clarity on the telecom industry's proposal but have received no replies. According to digital forensics expert Junade Ali, the "proposal would see phones operate as a dedicated surveillance device." 

According to technology experts, utilizing A-GPS technology, which is normally only activated when specific apps are operating or emergency calls are being made, might give authorities location data accurate enough to follow a person to within a meter.  

Telecom vs government 

Globally, governments are constantly looking for new ways to improve in tracking the movements or data of mobile users. All Russian mobile phones are mandated to have a state-sponsored communications app installed. With 735 million smartphones as of mid-2025, India is the second-largest mobile market in the world. 

According to Counterpoint Research, more than 95% of these gadgets are running Google's Android operating system, while the remaining phones are running Apple's iOS. 

Apple and Google cautioned that their user base will include members of the armed forces, judges, business executives, and journalists, and that the proposed location tracking would jeopardize their security because they store sensitive data.

According to the telecom industry, even the outdated method of location tracking is becoming troublesome because smartphone manufacturers notify users via pop-up messages that their "carrier is trying to access your location."



Featured