
OpenAI’s Codex Security Flags Over 10,000 High-Risk Vulnerabilities in Code Scan

 



Artificial intelligence is increasingly being used to help developers identify security weaknesses in software, and a new tool from OpenAI reflects that shift.

The company has introduced Codex Security, an automated security assistant designed to examine software projects, detect vulnerabilities, confirm whether they can actually be exploited, and recommend ways to fix them.

The feature is currently being released as a research preview and can be accessed through the Codex interface by users subscribed to ChatGPT Pro, Enterprise, Business, and Edu plans. OpenAI said customers will be able to use the capability without cost during its first month of availability.

According to the company, the system studies how a codebase functions as a whole before attempting to locate security flaws. By building a detailed understanding of how the software operates, the tool aims to detect complicated vulnerabilities that may escape conventional automated scanners while filtering out minor or irrelevant issues that can overwhelm security teams.
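For contrast, the kind of conventional scanner the article alludes to often works by simple pattern matching, with no understanding of how the program actually behaves. Below is a minimal, purely illustrative sketch of that approach; it is not OpenAI's implementation, and the patterns and warning messages are invented for this example:

```python
import re

# Toy pattern-based scanner: flags any line matching a known-risky call.
# Because it has no model of how the program behaves, it misses
# context-dependent flaws and raises alerts even when a hit is harmless.
RISKY_PATTERNS = {
    r"\beval\s*\(": "eval() on untrusted input can execute arbitrary code",
    r"\bpickle\.loads\s*\(": "unpickling untrusted data can execute arbitrary code",
    r"shell\s*=\s*True": "subprocess with shell=True risks command injection",
}

def naive_scan(source: str) -> list[tuple[int, str]]:
    """Return (line_number, warning) pairs for every pattern match."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, warning in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append((lineno, warning))
    return findings

sample = "result = eval(user_input)\nsubprocess.run(cmd, shell=True)\n"
for lineno, warning in naive_scan(sample):
    print(f"line {lineno}: {warning}")
```

A behavior-aware system of the kind described would instead reason about whether untrusted input can actually reach a dangerous call before raising an alert, which is how it filters out the noise that overwhelms security teams.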

The technology is an advancement of Aardvark, an internal project that entered private testing in October 2025 to help development and security teams locate and resolve weaknesses across large collections of source code.

During the last month of beta testing, Codex Security examined more than 1.2 million individual code commits across publicly accessible repositories. The analysis surfaced 792 critical vulnerabilities and 10,561 issues classified as high severity.

Several well-known open-source projects were affected, including OpenSSH, GnuTLS, GOGS, Thorium, libssh, PHP, and Chromium.

Some of the identified weaknesses were assigned official vulnerability identifiers. These included CVE-2026-24881 and CVE-2026-24882 linked to GnuPG, CVE-2025-32988 and CVE-2025-32989 affecting GnuTLS, and CVE-2025-64175 along with CVE-2026-25242 associated with GOGS. In the Thorium browser project, researchers also reported seven separate issues ranging from CVE-2025-35430 through CVE-2025-35436.

OpenAI explained that the system relies on advanced reasoning capabilities from its latest AI models together with automated verification techniques. This combination is intended to reduce the number of incorrect alerts while producing remediation guidance that developers can apply directly.

Repeated scans of the same repositories during testing also showed measurable improvements in accuracy. The company reported that the number of false alarms declined by more than 50 percent while the precision of vulnerability detection increased.

The platform operates through a multi-step process. It begins by examining a repository in order to understand the structure of the application and map areas where security risks are most likely to appear. From this analysis, the system produces an editable threat model describing the software’s behavior and potential attack surfaces.

Using that model as a reference point, the tool searches for weaknesses and evaluates how serious they could be in real-world scenarios. Suspected vulnerabilities are then executed in a sandbox environment to determine whether they can actually be exploited.

When configured with a project-specific runtime environment, the system can test potential vulnerabilities directly against a functioning version of the software. In some cases it can also generate proof-of-concept exploits, allowing security teams to confirm the problem before deploying a fix.

Once validation is complete, the tool suggests code changes designed to address the weakness while preserving the original behavior of the application. This approach is intended to reduce the risk that security patches introduce new software defects.

The launch of Codex Security follows the introduction of Claude Code Security by Anthropic, another system that analyzes software repositories to uncover vulnerabilities and propose remediation steps.

The emergence of these tools reflects a broader trend within cybersecurity: using artificial intelligence to review vast amounts of software code, detect vulnerabilities earlier in the development cycle, and assist developers in securing critical digital infrastructure.

Microsoft Report Reveals Hackers Exploit AI In Cyberattacks


According to Microsoft, hackers are increasingly using AI to intensify their attacks, scale cyberattack activity, and lower technical barriers at every stage of an intrusion.

Microsoft’s new Threat Intelligence report reveals that threat actors are using generative AI tools for tasks including phishing, surveillance, malware development, infrastructure setup, and post-compromise activity.

About the report

In various incidents, AI has helped attackers create phishing emails, summarize stolen information, debug malware, translate content, and configure infrastructure. “Microsoft Threat Intelligence has observed that most malicious use of AI today centers on using language models for producing text, code, or media. Threat actors use generative AI to draft phishing lures, translate content, summarize stolen data, generate or debug malware, and scaffold scripts or infrastructure,” the report said.

"For these uses, AI functions as a force multiplier that reduces technical friction and accelerates execution, while human operators retain control over objectives, targeting, and deployment decisions,’ warns Microsoft.

AI in cyberattacks 

Microsoft identified several hacking groups using AI in their operations, including the North Korean actors tracked as Coral Sleet (Storm-1877) and Jasper Sleet (Storm-0287), which use AI in their remote IT worker schemes.

The AI helps them fabricate realistic identities, communications, and resumes to land jobs at Western companies and gain insider access once hired. Microsoft also described how AI is being exploited for malware development and infrastructure creation: threat actors use AI coding tools to create and refine malicious code, fix errors, and port malware components to different programming languages.

The impact

A few malware experiments showed traces of AI-enabled malware that generates scripts or configures its behaviour at runtime. Microsoft also found Coral Sleet using AI to build fake company sites, manage infrastructure, and troubleshoot its installations.

When defenders work to block malicious use of AI services, Microsoft says, hackers turn to jailbreaking techniques to trick models into producing malicious code or content anyway.

Beyond generative AI, the report found that hackers are experimenting with agentic AI that performs tasks autonomously, although key decisions are still made by human operators for now. Because IT worker campaigns depend on exploiting legitimate access, experts advise organizations to treat these attacks as insider risks.

Anthropic AI Model Finds 22 Security Flaws in Firefox

 

Anthropic said its artificial intelligence model Claude Opus 4.6 helped uncover 22 previously unknown security vulnerabilities in the Firefox web browser as part of a collaboration with Mozilla.

The company said the issues were discovered during a two-week analysis conducted in January 2026.

The findings include 14 vulnerabilities rated as high severity, seven categorized as moderate, and one considered low severity.

Most of the flaws were addressed in Firefox version 148, which was released late last month, while the remaining fixes are expected in upcoming updates. 

Anthropic said the number of high severity bugs discovered by its AI model represents a notable share of the browser’s serious vulnerabilities reported over the past year. 

During the research, Claude Opus 4.6 scanned roughly 6,000 C++ files in the Firefox codebase and generated 112 unique vulnerability reports. 

Human researchers reviewed the results to confirm the findings and rule out false positives before reporting them. One issue identified by the model involved a use-after-free vulnerability in Firefox’s JavaScript engine. 

According to Anthropic, the AI located the flaw within about 20 minutes of examining the code, after which a security researcher validated the finding in a controlled testing environment. 

Researchers also tested whether the AI model could go beyond identifying flaws and attempt to build exploits from them. Anthropic said it provided Claude access to the list of vulnerabilities reported to Mozilla and asked it to develop working exploits. 

After hundreds of test runs and about $4,000 worth of API usage, the model succeeded in producing a working exploit in only two cases. 

Anthropic said the results suggest that finding vulnerabilities may be easier for AI systems than turning those flaws into functioning exploits. 

“However, the fact that Claude could succeed at automatically developing a crude browser exploit, even if only in a few cases, is concerning,” the company said. 

It added that the exploit tests were performed in a restricted research environment where some protections, such as sandboxing, were deliberately removed. 

One exploit generated by the model targeted a vulnerability tracked as CVE-2026-2796, which involves a miscompilation issue in the JavaScript WebAssembly component of Firefox’s just-in-time compilation system. 

Anthropic said the testing process included a verification system designed to check whether the AI-generated exploit actually worked. 

The system provided real-time feedback, allowing the model to refine its attempts until it produced a functioning proof of concept. The research comes shortly after Anthropic introduced Claude Code Security in a limited preview. 

The tool is designed to help developers identify and fix software vulnerabilities with the assistance of AI agents. Mozilla said in a separate statement that the collaboration produced additional findings beyond the 22 vulnerabilities. 

According to the company, the AI-assisted analysis uncovered about 90 other bugs, including assertion failures typically identified through fuzzing as well as logic errors that traditional testing tools had missed. 

“The scale of findings reflects the power of combining rigorous engineering with new analysis tools for continuous improvement,” Mozilla said. 

“We view this as clear evidence that large-scale, AI-assisted analysis is a powerful new addition to security engineers’ toolbox.”

DeepMind Chief Sounds Alarm on AI's Dual Threats

 

Google DeepMind CEO Sir Demis Hassabis has issued a stark warning on the escalating threats posed by artificial intelligence, urging immediate action from governments and tech firms. In an exclusive BBC interview at the AI Impact Summit in Delhi, he emphasized that more research into AI risks "needs to be done urgently," rather than waiting years. Hassabis highlighted the industry's push for "smart regulation" targeting genuine dangers from increasingly autonomous systems.

The AI pioneer identified two primary threats: malicious exploitation by bad actors and the potential loss of human control over super-capable AI systems. He stressed that current fragmented efforts in safety research are insufficient, with massive investments in AI development far outpacing those in oversight and evaluation. As AI models grow more powerful, Hassabis warned of a "narrow window" to implement robust safeguards before existing institutions are overwhelmed.

Speaking at the summit, which concluded recently in India's capital, Hassabis called for scaled-up funding and talent in AI safety science. He compared the challenge to nuclear safety protocols, arguing that advanced AI now demands societal-level treatment with rigorous testing before widespread deployment. The event brought together global leaders to discuss AI's societal impacts amid rapid advancements.

Hassabis advocated for international cooperation, noting AI's borderless nature means it affects everyone worldwide. He praised forums like those in the UK, Paris, and Seoul for uniting technologists and policymakers, while pushing for minimum global standards on AI deployment. However, tensions exist, as the US delegation at the Delhi summit rejected global AI governance outright.

This comes as AI capabilities surge, with systems learning to model physical reality and, by Hassabis's estimate, approaching artificial general intelligence (AGI) within five to ten years. He acknowledged that natural constraints such as hardware shortages may slow progress, buying time for safeguards, but stressed that proactive measures are essential. Industry leaders, he argued, must balance innovation with risk mitigation to harness AI's potential safely.

Safety recommendations 

To counter AI threats, organizations should prioritize independent safety evaluations and red-teaming exercises before deploying models. Governments must fund public AI safety research grants and enforce "smart regulations" focused on real risks like misuse and loss of control. Individuals can stay vigilant by verifying AI-generated content, using tools like watermark detectors, limiting data shared with AI systems, and supporting ethical AI policies through advocacy.

FBI Warns Outdated Wi-Fi Routers Are Being Targeted in Malware and Botnet Attacks

 

Cybersecurity risks rise when outdated home routers stop receiving manufacturer support, federal agents warn. Devices from the late 2000s and early 2010s have often fallen out of update cycles, leaving home networks exposed. Without patches, known vulnerabilities remain unaddressed, making intrusion more likely over time. Models that have reached end-of-life no longer receive the protection upgrades they once did, a gap that has drawn the attention of officials tracking digital threats to household systems.

Once a manufacturer stops releasing updates for older network equipment, previously discovered weaknesses stay open indefinitely, and hackers can break in far more easily. Obsolete routers now attract criminals who deploy malicious code, seizing administrator-level access without owners noticing. Infected devices may then be conscripted into hidden networks controlled remotely, a risk law enforcement has warned about repeatedly.

Botnets are built from such hijacked devices and answer to remote operators. These collections of infected machines frequently power massive digital assaults, routing harmful traffic across the web instead of serving legitimate users. Criminals rely on them to mask where attacks originate and to stay anonymous during operations.

Several Linksys routers released around 2011, including the E1200, E2500, and E4200, were later flagged as vulnerable by the FBI. Earlier models also appear on the list, such as the WRT320N, launched in 2009, and the M10, which hit shelves a year later. Many of these routers ship with remote management options that let owners adjust settings through web-connected interfaces.

Though useful, such remote access becomes a liability when flaws go unfixed. Hackers routinely scan the internet for devices with open management ports, particularly ones stuck on old software versions. They start by spotting weak routers, then slip through software gaps to plant malicious code directly on the device. Once inside, that code gives intruders complete control and establishes covert communications with remote command-and-control servers.
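As a rough illustration of the first step described above, scanning for exposed management ports, the sketch below checks whether a TCP port accepts connections. The gateway address and port list are placeholders; the same check can be used defensively to verify whether your own router's management interface is reachable.

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False

# Example: probe common web-management ports on a router.
# "192.168.1.1" is a placeholder for your own gateway address.
for port in (80, 443, 8080):
    status = "open" if is_port_open("192.168.1.1", port, timeout=1.0) else "closed/filtered"
    print(f"port {port}: {status}")
```

A port answering here only means the management interface is reachable; whether it is exploitable depends on the firmware version behind it, which is exactly why end-of-life devices are the ones attackers target.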

Some compromised devices check in with those command servers every minute simply to report that they are still online and waiting. Open network ports can also let malware turn routers into proxies: attackers route harmful traffic through infected networks instead of launching attacks directly, and some even sell that access to third parties looking to mask where they operate from. What makes router-based infections especially tricky is how hard they are for most people to spot.

Since standard antivirus tools target laptops and phones, routers often fall outside their scope. Because the malware runs within the router's own firmware, it stays hidden even when everything seems to work fine; the network keeps running smoothly, masking the harmful code tucked deep inside. Older routers that no longer receive updates only grow weaker over time.

Because of this, specialists suggest swapping them out. A modern replacement brings continued protection through active maintenance. This shift lowers chances of intrusions via obsolete equipment found in personal setups.

ExpressVPN Expands Privacy Tools with Launch of Hybrid Browser Extension


 

Immersive technologies are increasingly moving from novelty to everyday digital infrastructure, raising questions about privacy within virtual environments. Activities previously conducted on conventional screens now occur within headsets that process vast streams of personal data, including browsing behavior, location signals, and device interactions.

In recognition of this emerging privacy frontier, ExpressVPN has partnered with Meta to integrate its security tools directly into Meta Quest. A dedicated application, distributed through the Meta App Store, will let headset users activate full-device VPN protection within the virtual reality environment.

ExpressVPN has also released a hybrid browser extension that combines VPN and proxy functionality into a single privacy tool, part of an ongoing effort to adapt traditional internet security models to the increasingly complex environment of immersive computing. Central to the new extension is Smart Routing, which gives users granular control over how browser traffic interacts with the VPN network.

With the system, specific websites can be automatically tied to predefined VPN endpoints or routing preferences, so users no longer have to switch server locations repeatedly when navigating between services hosted in different regions. This streamlines the management of geographically sensitive connections while maintaining a consistent level of privacy protection.

Further safeguards strengthen protection at the browser level. The extension blocks WebRTC leaks, a well-known way IP addresses can be exposed despite the use of a VPN, and restricts the transmission of HTML5 geolocation data. Together, these controls limit websites' ability to infer a user's physical location from browser-based signals.

Because most digital activity now takes place within web environments, the company has focused on browser-centric protection. Browser interfaces are increasingly replacing standalone software applications for streaming media, electronic commerce transactions, and collaborative work platforms.

By concentrating security controls at this layer while still offering a primary application that encrypts traffic device-wide, the company positions the hybrid extension as a flexible bridge between lightweight web privacy and comprehensive network protection. At the same time, it is extending its privacy infrastructure beyond traditional computing devices into immersive technology, which is rapidly gaining popularity.

Alongside the Meta Quest platform support, ExpressVPN is introducing a dedicated VPN application, downloadable directly from the Meta App Store, that enables encrypted connectivity across the headset's system environment. A browser-specific version of the hybrid extension is also expected on the platform, adding a further layer of security for virtual reality activities.

Deploying conventional VPNs in VR ecosystems has historically been difficult, requiring complex network workarounds or external device configuration, so native integration marks a significant change in how privacy tools adapt to these environments. The development is part of a broader shift within the VPN industry as internet usage expands across a growing variety of connected hardware.

Browsing increasingly happens within headsets and other immersive devices, not just laptops or smartphones. As that shift continues, flexible routing and layered protection for safeguarding user data across emerging digital interfaces may become more prominent.

The collaboration with Meta reflects a broader shift in how virtual reality headsets are regarded: no longer mere entertainment devices, they are becoming full-featured computing platforms that support communication, content consumption, and collaboration.

By deploying a native VPN application within the device environment, ExpressVPN routes the entire headset's network traffic through encrypted channels rather than protecting only individual applications or browsing sessions. Such system-wide coverage is especially useful for bandwidth-heavy applications like VR streaming and multiplayer gaming, where unprotected traffic can be subject to network throttling.

The company also stated that its new hybrid extension will soon be extended to the headset's native browsing environment. Once implemented, VR browser users will be able to secure web traffic through a streamlined protection mode without keeping a full VPN running in the background.

Beyond adding privacy for browser-based activity, this lighter configuration preserves system resources during performance-sensitive applications, where computational overhead and frame stability directly affect the immersive experience.

The extension also uses the provider's proprietary Lightway protocol, which has been updated to incorporate post-quantum cryptographic protections. The strengthened protocol is positioned as a forward-looking safeguard against the possibility that future advances in quantum computing could undermine conventional encryption algorithms.

The extension is currently available for popular browsers including Google Chrome and Mozilla Firefox, with Meta Quest integration expected in the near future. Together, the developments show how privacy infrastructure is gradually evolving to accommodate new digital interfaces, extending encrypted connectivity beyond traditional desktop and mobile ecosystems into immersive computing environments.

As digital interaction increasingly centers on browsers, applications, and immersive devices, security strategies that once focused on a single device or network layer are becoming more adaptable to meet changing requirements.

Organizations and individual users should examine how data flows through emerging platforms and ensure that encryption and routing controls evolve in step. As the internet extends beyond conventional computing interfaces, solutions that combine flexible browser-level safeguards with device-wide encryption offer a practical way to maintain consistent privacy standards.
