Search This Blog

Powered by Blogger.

Blog Archive

Labels

About Me

Showing posts with label Technology. Show all posts

Alaska Airlines Grounds All Flights Amid System-Wide IT Outage, Passengers Face Major Disruptions

 


Alaska Airlines was forced to implement a full nationwide ground stop for its mainline and Horizon Air flights on Sunday evening due to a significant IT system outage. The disruption began around 8 p.m. Pacific Time and affected the entire network of the Seattle-based airline, as reflected on the Federal Aviation Administration’s (FAA) status dashboard.

The technical failure led to a temporary suspension of flight operations, causing widespread delays and cancellations across multiple airports in the U.S. Seattle-Tacoma International Airport, Alaska Airlines’ central hub, was among the worst affected. As reported by The Economic Times, hundreds of flights were either grounded or delayed, leaving travelers stranded and uncertain about their schedules.

In a statement to CBS News, the airline confirmed it had "experienced an IT outage that's impacting our operations" and had issued "a temporary, system-wide ground stop for Alaska and Horizon Air flights until the issue is resolved". The airline also warned of lingering operational impacts, noting passengers should expect ongoing disruptions into the night.

While the root cause of the outage has not been revealed, Alaska Airlines assured that its technical teams were actively working to restore normal service. A message posted on the carrier’s website stated: "We are experiencing issues with our IT systems. We apologize for the inconvenience and are working to resolve the issues."

As reported by KIRO 7, some services began resuming gradually by late Sunday. However, neither the airline nor the FAA provided a clear timeline for when full service would be reinstated.

Passengers took to social media to voice frustration over long wait times, non-functional customer support apps, and confusion at airport gates. Meanwhile, Portland International Airport reported minimal delays, with only a few Alaska Airlines flights affected by 9:15 p.m., according to KATU.

This outage is among the most significant operational disruptions for Alaska Airlines in recent years. The last major glitch of this scale occurred in 2022, resulting in widespread delays due to a similar system malfunction.

As of now, Alaska Airlines has advised passengers to regularly check their flight status before heading to the airport and has not confirmed any details regarding rebooking assistance or compensation. The airline continues to be a major player in both domestic and international air travel, especially across the U.S. West Coast.

Stop! Don’t Let That AI App Spy on Your Inbox, Photos, and Calls

 



Artificial intelligence is now part of almost everything we use — from the apps on your phone to voice assistants and even touchscreen menus at restaurants. What once felt futuristic is quickly becoming everyday reality. But as AI gets more involved in our lives, it’s also starting to ask for more access to our private information, and that should raise concerns.

Many AI-powered tools today request broad permissions, sometimes more than they truly need to function. These requests often include access to your email, contacts, calendar, messages, or even files and photos stored on your device. While the goal may be to help you save time, the trade-off could be your privacy.

This situation is similar to how people once questioned why simple mobile apps like flashlight or calculator apps — needed access to personal data such as location or contact lists. The reason? That information could be sold or used for profit. Now, some AI tools are taking the same route, asking for access to highly personal data to improve their systems or provide services.

One example is a new web browser powered by AI. It allows users to search, summarize emails, and manage calendars. But in exchange, it asks for a wide range of permissions like sending emails on your behalf, viewing your saved contacts, reading your calendar events, and sometimes even seeing employee directories at workplaces. While companies claim this data is stored locally and not misused, giving such broad access still carries serious risks.

Other AI apps promise to take notes during calls or schedule appointments. But to do this, they often request live access to your phone conversations, calendar, contacts, and browsing history. Some even go as far as reading photos on your device that haven’t been uploaded yet. That’s a lot of personal information for one assistant to manage.

Experts warn that these apps are capable of acting independently on your behalf, which means you must trust them not just to store your data safely but also to use it responsibly. The issue is, AI can make mistakes and when that happens, real humans at these companies might look through your private information to figure out what went wrong.

So before granting an AI app permission to access your digital life, ask yourself: is the convenience really worth it? Giving these tools full access is like handing over a digital copy of your entire personal history, and once it’s done, there’s no taking it back.

Always read permission requests carefully. If an app asks for more than it needs, it’s okay to say no.

Why Policy-Driven Cryptography Matters in the AI Era

 



In this modern-day digital world, companies are under constant pressure to keep their networks secure. Traditionally, encryption systems were deeply built into applications and devices, making them hard to change or update. When a flaw was found, either in the encryption method itself or because hackers became smarter, fixing it took time, effort, and risk. Most companies chose to live with the risk because they didn’t have an easy way to fix the problem or even fully understand where it existed.

Now, with data moving across various platforms, for instance cloud servers, edge devices, and personal gadgets — it’s no longer practical to depend on rigid security setups. Businesses need flexible systems that can quickly respond to new threats, government rules, and technological changes.

According to the IBM X‑Force 2025 Threat Intelligence Index, nearly one-third (30 %) of all intrusions in 2024 began with valid account credential abuse, making identity theft a top pathway for attackers.

This is where policy-driven cryptography comes in.


What Is Policy-Driven Crypto Agility?

It means building systems where encryption tools and rules can be easily updated or swapped out based on pre-defined policies, rather than making changes manually in every application or device. Think of it like setting rules in a central dashboard: when updates are needed, the changes apply across the network with a few clicks.

This method helps businesses react quickly to new security threats without affecting ongoing services. It also supports easier compliance with laws like GDPR, HIPAA, or PCI DSS, as rules can be built directly into the system and leave behind an audit trail for review.


Why Is This Important Today?

Artificial intelligence is making cyber threats more powerful. AI tools can now scan massive amounts of encrypted data, detect patterns, and even speed up the process of cracking codes. At the same time, quantum computing; a new kind of computing still in development, may soon be able to break the encryption methods we rely on today.

If organizations start preparing now by using policy-based encryption systems, they’ll be better positioned to add future-proof encryption methods like post-quantum cryptography without having to rebuild everything from scratch.


How Can Organizations Start?

To make this work, businesses need a strong key management system: one that handles the creation, rotation, and deactivation of encryption keys. On top of that, there must be a smart control layer that reads the rules (policies) and makes changes across the network automatically.

Policies should reflect real needs, such as what kind of data is being protected, where it’s going, and what device is using it. Teams across IT, security, and compliance must work together to keep these rules updated. Developers and staff should also be trained to understand how the system works.

As more companies shift toward cloud-based networks and edge computing, policy-driven cryptography offers a smarter, faster, and safer way to manage security. It reduces the chance of human error, keeps up with fast-moving threats, and ensures compliance with strict data regulations.

In a time when hackers use AI and quantum computing is fast approaching, flexible and policy-based encryption may be the key to keeping tomorrow’s networks safe.

Britons Risk Privacy by Sharing Sensitive Data with AI Chatbots Despite Security Concerns

 

Nearly one in three individuals in the UK admits to sharing confidential personal details with AI chatbots, such as OpenAI’s ChatGPT, according to new research by cybersecurity firm NymVPN. The study reveals that 30% of Britons have disclosed sensitive data—including banking information and health records—to AI tools, potentially endangering their own privacy and that of others.

Despite 48% of respondents expressing concerns over the safety of AI chatbots, many continue to reveal private details. This habit extends to professional settings, where employees are reportedly sharing internal company and customer information with these platforms.

The findings come amid a wave of high-profile cyberattacks, including the recent breach at Marks & Spencer, which underscores how easily confidential data can be compromised. NymVPN reports that 26% of survey participants have entered financial details related to salaries, mortgages, and investments, while 18% have exposed credit card or bank account numbers. Additionally, 24% acknowledged sharing customer data—such as names and email addresses—and 16% uploaded company financial records and contracts.

“AI tools have rapidly become part of how people work, but we’re seeing a worrying trend where convenience is being prioritized over security,” said Harry Halpin, CEO of NymVPN.

Organizations such as M&S, Co-op, and Adidas have already made headlines for data breaches. “High-profile breaches show how vulnerable even major organizations can be, and the more personal and corporate data that is fed into AI, the bigger the target becomes for cybercriminals,” Halpin added.

With nearly a quarter of people admitting to sharing customer data with AI tools, experts emphasize the urgent need for businesses to establish strict policies governing AI usage at work.

“Employees and businesses urgently need to think about how they’re protecting both personal privacy and company data when using AI tools,” Halpin warned.

Completely avoiding AI chatbots might be the safest option, but it’s not always realistic. Users are advised to refrain from entering sensitive information, adjust privacy settings by disabling chat history, or opt out of model training.

Using a VPN can provide an additional layer of online privacy by encrypting internet traffic and masking IP addresses when accessing AI chatbots like ChatGPT. However, even with a VPN, risks remain if individuals continue to input confidential data.

How Tech Democratization Is Helping SMBs Tackle 2025’s Toughest Challenges

 

Small and medium-sized businesses (SMBs) are entering 2025 grappling with familiar hurdles: tight budgets, economic uncertainty, talent shortages, and limited cybersecurity resources. A survey of 300 decision-makers highlights how these challenges are pushing SMBs to seek smarter, more affordable tech solutions.

Technology itself ranks high on the list of SMB pain points. A 2023 Mastercard report (via Digital Commerce 360) showed that two-thirds of small-business owners saw seamless digital experiences as critical—but 25% were overwhelmed by the cost and complexity. The World Economic Forum's 2025 report echoed this, noting that SMBs are often “left behind” when it comes to transformative tech.

That’s changing fast. As enterprise-grade tools become more accessible, SMBs now have affordable, powerful options to bridge the tech gap and compete effectively.

1. Stronger, Smarter Networks
Downtime is expensive—up to $427/minute, says Pingdom. SMBs now have access to fast, reliable fiber internet with backup connections that kick in automatically. These networks support AI tools, cloud apps, IoT, and more—while offering secure, segmented Wi-Fi for teams, guests, and devices.

Case in point: Albemarle, North Carolina, deployed fiber internet with a cloud-based backup, ensuring critical systems stay online 24/7.

2. Cybersecurity That Fits the SMB Budget
Cyberattacks hit 81% of small businesses in the past year (Identity Theft Resource Center, 2024). Yet under half feel ready to respond, and many hesitate to invest due to cost. The good news: built-in firewalls, multifactor authentication, and scalable security layers are now more affordable than ever.

As Checker.ai founder Anup Kayastha told StartupNation, the company started with MFA and scaled security as they grew.

3. Big Brand Experiences, Small Biz Budgets
SMBs now have the digital tools to deliver seamless, omnichannel customer experiences—just like larger players. High-performance networks and cloud-based apps enable rich e-commerce journeys and AI-driven support that build brand presence and loyalty.

4. Predictable Pricing, Maximum Value
Tech no longer requires deep pockets. Today’s solutions bundle high-speed internet, cybersecurity, compliance, and productivity tools—often with self-service options to reduce IT overhead.

5. Built-In Tech Support
Forget costly consultants. Many SMB-friendly providers now offer local, on-site support as part of their packages—helping small businesses install, manage, and maintain systems with ease.

Here's How Everyday Tech Is Being Weaponized to Deploy Trojan

 

The technology that facilitates your daily life, from the smartphone in your hand to the car in your garage, may simultaneously be detrimental to you. Once the stuff of spy thrillers, consumer electronics can today be used as tools of control, tracking, or even warfare if they are manufactured in adversarial countries or linked to opaque systems. 

Mandatory usage and dependence on technology in all facets of our lives has led to risks and vulnerabilities that are no longer hypothetical. In addition to being found in your appliances, phone, internet, electricity, and other utility services, connected technology is also integrated in your firmware, transmitted through your cloud services, and magnified over your social media feeds. 

China's dominance in electronics manufacturing, which gives it enormous influence over the global tech supply chain, is a major cause for concern. Malware has been found pre-installed on electronic equipment exported from Chinese manufacturing. These flaws are frequently built into the hardware and cannot be fixed with a simple update. 

These risks are genuine and cause for concern, according to former NSA director Mike Rogers: We know that China sees value in putting at least some of our key infrastructure at risk of disruption or destruction. I believe that the Chinese are partially hoping that the West's options for handling the security issue will be limited due to the widespread use of inverters.

A new level of complexity is introduced by autonomous cars. These rolling data centres have sensors, cameras, GPS tracking, and cloud connectivity, allowing for remote monitoring and deactivation. Physical safety and national infrastructure are at risk if parts or software come from unreliable sources. Even seemingly innocuous gadgets like fitness trackers, smart TVs, and baby monitors might have security flaws. 

They continuously gather and send data, frequently with little security or user supervision. The Electronic Privacy Information Center's counsel, Suzanne Bernstein, stated that "HIPAA does not apply to health data collected by many wearable devices and health and wellness apps.”

The message is clear: even low-tech tools can become high-risk in a tech-driven environment. Foreign intelligence services do not need to sneak agents into enemy territory; they simply require access to the software supply chain. Malware campaigns such as China's APT41 and Russia's NotPetya demonstrate how compromised consumer and business software can be used for espionage and sabotage. Worse, these attacks are sometimes unnoticed for months or years before being activated—either during conflict or at times of strategic strain.

China Hacks Seized Phones Using Advanced Forensics Tool

 


There has been a significant concern raised regarding digital privacy and the practices of state surveillance as a result of an investigation conducted by mobile security firm Lookout. Police departments across China are using a sophisticated surveillance system, raising serious concerns about the state's surveillance policies. 

According to Chinese cybersecurity and surveillance technology company Xiamen Meiya Pico, Massistant, the system is referred to as Massistant. It has been reported that Lookout's analysis indicates that Massistant is geared toward extracting a lot of sensitive data from confiscated smartphones, which could help authorities perform comprehensive digital forensics on the seized devices. This advanced software can be used to retrieve a broad range of information, including private messages, call records, contact lists, media files, GPS locations, audio records, and even encrypted messages from secure messaging applications like Signal. 

A notable leap in surveillance capabilities has been demonstrated by this system, as it has been able to access protected platforms which were once considered secure, potentially bypassing encryption safeguards that were once considered secure. This discovery indicates the increasing state control over personal data in China, and it underscores how increasingly intrusive digital tools are being used to support law enforcement operations within the country. 

With the advent of sophisticated and widespread technologies such as these, there will be an increasing need for human rights protection, privacy protection, and oversight on the global stage as they become more sophisticated. It has been reported that Chinese law enforcement agencies are using a powerful mobile forensic tool known as Massistant to extract sensitive information from confiscated smartphones, a powerful mobile forensic tool known as Massistant. 

In the history of digital surveillance, Massistant represents a significant advance in digital surveillance technology. Massistant was developed by SDIC Intelligence Xiamen Information Co., Ltd., which was previously known as Meiya Pico. To use this tool, authorities can gain direct access to a wide range of personal data stored on mobile devices, such as SMS messages, call histories, contact lists, GPS location records, multimedia files and audio recordings, as well as messages from encrypted messaging apps like Signal, to the data. 

A report by Lookout, a mobile security firm, states that Massistant is a desktop-based forensic analysis tool designed to work in conjunction with Massistant, creating a comprehensive system of obtaining digital evidence, in combination with desktop-based forensic analysis software. In order to install and operate the tool, the device must be physically accessed—usually during security checkpoints, border crossings, or police inspections on the spot. 

When deployed, the system allows officials to conduct a detailed examination of the contents of the phone, bypassing conventional privacy protections and encryption protocols in order to examine the contents in detail. In the absence of transparent oversight, the emergence of these tools illustrates the growing sophistication of state surveillance capabilities and raises serious concerns over user privacy, data security, and the possibility of abuse. 

The further investigation of Massistant revealed that the deployment and functionality of the system are closely related to the efforts that Chinese authorities are putting into increasing digital surveillance by using hardware and software tools. It has been reported that Kristina Balaam, a Lookout security researcher, has discovered that the tool's developer, Meiya Pico, currently operating under the name SDIC Intelligence Xiamen Information Co., Ltd., maintains active partnerships with domestic and foreign law enforcement agencies alike. 

In addition to product development, these collaborations extend to specialised training programs designed to help law enforcement personnel become proficient in advanced technical surveillance techniques. According to the research conducted by Lookout, which included analysing multiple Massistant samples collected between mid-2019 and early 2023, the tool is directly related to Meiya Pico as a signatory certificate referencing the company can be found in the tool. 

For Massistant to work, it requires direct access to a smartphone - usually a smartphone during border inspections or police encounters - to facilitate its installation. In addition, once the tool has been installed, it is integrated with a desktop forensics platform, enabling investigators to extract large amounts of sensitive user information using a systematic approach. In addition to text messages, contact information, and location history, secure communication platforms provide protected content, as well. 

As its predecessor, MFSocket, Massistant is a program that connects mobile devices to desktops in order to extract data from them. Upon activation, the application prompts the user to grant the necessary permissions to access private data held by the mobile device. Despite the fact that the device owner does not require any further interaction once the initial authorisation is complete, the application does not require any further interaction once it has been launched. 

Upon closing the application, the user is presented with a warning indicating that the software is in the “get data” mode and that exiting will result in an error, and this message is available only in Simplified Chinese and American English, indicating the application’s dual-target audience. In addition, Massistant has introduced several new enhancements over MFSocket, namely the ability to connect to users' Android device using the Android Debug Bridge (ADB) over WiFi, so they can engage wirelessly and access additional data without having to use direct cable connections. 

In addition to the application's ability to remain undetected, it is also designed to automatically uninstall itself once users disconnect their USB cable, so that no trace of the surveillance operation remains. It is evident that these capabilities position Massistant as a powerful weapon in the arsenal of government-controlled digital forensics and surveillance tools, underlining growing concerns about privacy violations and a lack of transparency when it comes to the deployment of such tools.

Kristina Balaam, a security researcher, notes that despite Massistant's intrusive capabilities that it does not operate in complete stealth, so users have a good chance of detecting and removing it from compromised computers, even though it is invasive. It's important to know that the tool can appear on users' phone as a visible application, which can alert them to the presence of this application. 

Alternatively, technically proficient individuals could identify and remove the application using advanced utilities such as Android Debug Bridge (ADB), which enables direct communication between users' smartphone and their computer by providing a command-line interface. According to Balaam, it is important to note that the data exfiltration process can be almost complete by the time Massistant is installed, which means authorities may already have accessed and extracted all important personal information from the device by the time Massistant is installed. 

Xiamen Meiya Pico's MSSocket mobile forensics tool, which was also developed by the company Xiamen Meiya Pico, was the subject of cybersecurity scrutiny a couple of years ago, and Massistant was regarded as a successor tool by the company in 2019. In developing surveillance solutions tailored for forensic investigations, the evolution from MSSocket to Massistant demonstrates the company's continued innovation. 

Xiamen Meiya Pico, according to industry data, controls around 40 per cent of the Chinese digital forensics market, demonstrating its position as the market leader in the provision of data extraction technologies to law enforcement. However, this company is not to be overlooked internationally as its activities have not gone unnoticed. For the first time in 2021, the U.S. government imposed sanctions against Meiya Pico, allegedly supplying surveillance tools to Chinese authorities. 

It has been reported that these surveillance tools have been used in ways that are causing serious human rights and privacy violations. Despite the fact that media outlets, including TechCrunch, have inquired about the company's role in mass instant development and distribution, it has declined to respond to these inquiries. 

It was Balaam who pointed out that Massistant is just a tiny portion of a much larger and more rapidly growing ecosystem of surveillance software developed by Chinese companies. At the moment, Lookout is tracking over fifteen distinct families of spyware and malware that originated from China. Many of these programs are thought to be specifically designed for state surveillance and digital forensics purposes. 

Having seen this trend in action, it is apparent that the surveillance industry is both large and mature in the region, which exacerbates global concerns regarding unchecked data collection and misuse of intrusive technologies. A critical inflexion point has been reached in the global conversation surrounding privacy, state surveillance, and digital autonomy, because tools like Massistant are becoming increasingly common. 

Mobile forensic technology has become increasingly powerful and accessible to government entities, which has led to an alarming blurring of the lines between lawful investigation and invasive overreach. Not only does this trend threaten individual privacy rights, but it also threatens to undermine trust in the digital ecosystem when transparency and accountability are lacking, especially when they are lacking in both. 

Consequently, it highlights the urgency of adopting stronger device security practices for individuals, staying informed about the risks associated with physical device access, and advocating for encrypted platforms that are resistant to unauthorized exploits, as well as advocating for stronger security practices for individuals. 

For policymakers and technology companies around the world, the report highlights the imperative need to develop and enforce robust regulatory frameworks that govern the ethical use of surveillance tools, both domestically and internationally. It is important to keep in mind that if these technologies are not regulated and monitored adequately, then they may set a dangerous precedent, enabling abuses that extend much beyond their intended scope. 

The Massistant case serves as a powerful reminder that the protection of digital rights is a central component of modern governance and civic responsibility in an age defined by data.

Google Gemini Exploit Enables Covert Delivery of Phishing Content

 


An AI-powered automation system in professional environments, such as Google Gemini for Workspace, is vulnerable to a new security flaw. Using Google’s advanced large language model (LLM) integration within its ecosystem, Gemini enables the use of artificial intelligence (AI) directly with a wide range of user tools, including Gmail, to simplify workplace tasks. 

A key feature of the app is the ability to request concise summaries of emails, which are intended to save users time and prevent them from becoming fatigued in their inboxes by reducing the amount of time they spend in it. Security researchers have, however identified a significant flaw in this feature which appears to be so helpful. 

As Mozilla bug bounty experts pointed out, malicious actors can take advantage of the trust users place in Gemini's automated responses by manipulating email content so that the AI is misled into creating misleading summaries by manipulating the content. As a result of the fact that Gemini operates within Google's trusted environment, users are likely to accept its interpretations without question, giving hackers a prime opportunity. This finding highlights what is becoming increasingly apparent in the cybersecurity landscape: when powerful artificial intelligence tools are embedded within widely used platforms, even minor vulnerabilities can be exploited by sophisticated social engineers. 

It is the vulnerability at the root of this problem that Gemini can generate e-mail summaries that seem legitimate but can be manipulated so as to include deceptive or malicious content without having to rely on conventional red flags, such as suspicious links or file attachments, to detect it. 

An attack can be embedded within an email body as an indirect prompt injection by attackers, according to cybersecurity researchers. When Gemini's language model interprets these hidden instructions during thesummarisationn process, it causes the AI to unintentionally include misleading messages in the summary that it delivers to the user, unknowingly. 

As an example, a summary can falsely inform the recipient that there has been a problem with their account, advising them to act right away, and subtly direct them to a phishing site that appears to be reliable and trustworthy. 

While prompt injection attacks on LLMs have been documented since the year 2024, and despite the implementation of numerous safeguards by developers to prevent these manipulations from occurring, this method continues to be effective even today. This tactic is persisting because of the growing sophistication of threat actors as well as the challenge of fully securing generative artificial intelligence systems that are embedded in critical communication platforms. 

There is also a need to be more vigilant when developing artificial intelligence and making sure users are aware of it, as traditional cybersecurity cues may no longer apply to these AI-driven environments. In order to find these vulnerabilities, a cybersecurity researcher, Marco Figueroa, identified them and responsibly disclosed them through Mozilla's 0Din bug bounty program, which specialises in finding vulnerabilities in generative artificial intelligence. 

There is a clever but deeply concerning method of exploitation demonstrated in Figueroa's proof-of-concept. The attack begins with a seemingly harmless e-mail sent to the intended victim that appears harmless at first glance. A phishing prompt disguised in white font on a white background is hidden in a secondary, malicious component of the message, which conceals benign information so as to avoid suspicion of the message.

When viewed in a standard email client, it is completely invisible to the human eye and is hidden behind benign content. The malicious message is strategically embedded within custom tags, which are not standard HTML elements, but which appear to be interpreted in a privileged manner by Gemini's summarization function, as they are not standard HTML elements. 

By activating the "Summarise this email" feature in Google Gemini, a machine learning algorithm takes into account both visible and hidden text within the email. Due to the way Gemini handles input wrapped in tags, it prioritises and reproduces the hidden message verbatim within the summary, placing it at the end of the response, as it should. 

In consequence, what appears to be a trustworthy, AI-generated summary now contains manipulative instructions which can be used to entice people to click on phishing websites, effectively bypassing traditional security measures. A demonstration of the ease with which generative AI tools can be exploited when trust in the system is assumed is demonstrated in this attack method, and it further demonstrates the importance of robust sanitisation protocols as well as input validation protocols for prompt sanitisation. 

It is alarming how effectively the exploitation technique is despite its technical simplicity. An invisible formatting technique enables the embedding of hidden prompts into an email, leveraging Google Gemini's interpretation of raw content to capitalise on its ability to comprehend the content. In the documented attack, a malicious actor inserts a command inside a span element with font-size: 0 and colour: white, effectively rendering the content invisible to the recipient who is viewing the message in a standard email client. 

Unlike a browser, which renders only what can be seen by the user, Gemini process the entire raw HTML document, including all hidden elements. As a consequence, Gemini's summary feature, which is available to the user when they invoke it, interprets and includes the hidden instruction as though it were part of the legitimate message in the generated summary.

It is important to note that this flaw has significant implications for services that operate at scale, as well as for those who use them regularly. A summary tool that is capable of analysing HTML inline styles, such as font-size:0, colour: white, and opacity:0, should be instructed to ignore or neutralise these styles, which render text visually hidden. 

The development team can also integrate guard prompts into LLM behaviour, instructing models not to ignore invisible content, for example. In terms of user education, he recommends that organisations make sure their employees are aware that AI-generated summaries, including those generated by Gemini, serve only as informational aids and should not be treated as authoritative sources when it comes to urgent or security-related instructions. 

A vulnerability of this magnitude has been discovered at a crucial time, as more and more tech companies are increasingly integrating LLMs into their platforms to automate productivity. In contrast to previous models, where users would manually trigger AI tools, the new paradigm is a shift to automated AI tools that will run in the background instead.

It is for this reason that Google introduced the Gemini side panel last year in Gmail, Docs, Sheets, and other Workspace apps to help users summarise and create content within their workflow seamlessly. A noteworthy change in Gmail's functionality is that on May 29, Google enabled automatic email summarisation for users whose organisations have enabled smart features across Gmail, Chat, Meet, and other Workspace tools by activating a default personalisation setting. 

As generative artificial intelligence becomes increasingly integrated into everyday communication systems, robust security protocols will become increasingly important as this move enhances convenience. This vulnerability exposes an issue of fundamental inadequacy in the current guardrails used for LLM, primarily focusing on filtering or flagging content that is visible to the user. 

A significant number of AI models, including the Google Gemini AI model, continue to use raw HTML markup, making them susceptible to obfuscation techniques such as zero-font text and white-on-white formatting. Despite being invisible to users, these techniques are still considered valid input to the model by the model-thereby creating a blind spot for attackers that can easily be exploited by attackers. 

Mozilla's 0Din program classified the issue as a moderately serious vulnerability by Mozilla, and said that the flaw could be exploited by hackers to harvest credential information, use vishing (voice-phishing), and perform other social engineering attacks by exploiting trust in artificial intelligence-generated content in order to gain access to information. 

In addition to the output filter, a post-processing filter can also function as an additional safeguard by inspecting artificial intelligence-generated summaries for signs of manipulation, such as embedded URLs, telephone numbers, or language that implies urgency, flagging these suspicious summaries for human review. This layered defence strategy is especially vital in environments where AI operates at scale. 

As well as protecting against individual attacks, there is also a broader supply chain risk to consider. It is clear that mass communication systems, such as CRM platforms, newsletters, and automated support ticketing services, are potential vectors for injection, according to researcher Marco Figueroa. There is a possibility that a single compromised account on any of these SaaS systems can be used to spread hidden prompt injections across thousands of recipients, turning otherwise legitimate SaaS services into large-scale phishing attacks. 

There is an apt term to describe "prompt injections", which have become the new email macros according to the research. The exploit exhibited by Phishing for Gemini significantly underscores a fundamental truth: even apparently minor, invisible code can be weaponised and used for malicious purposes. 

As long as language models don't contain robust context isolation that ensures third-party content is sandboxed or subjected to appropriate scrutiny, each piece of input should be viewed as potentially executable code, regardless of whether it is encoded correctly or not. In light of this, security teams should start to understand that AI systems are no longer just productivity tools, but rather components of a threat surface that need to be actively monitored, measured, and contained. 

The risk landscape of today does not allow organisations to blindly trust AI output. Because generative artificial intelligence is being integrated into enterprise ecosystems in ever greater numbers, organisations must reevaluate their security frameworks in order to address the emerging risks that arise from machine learning systems in the future. 

Considering the findings regarding Google Gemini, it is urgent to consider AI-generated outputs as potential threat vectors, as they are capable of being manipulated in subtle but impactful ways. A security protocol based on AI needs to be implemented by enterprises to prevent such exploitations from occurring, robust validation mechanisms for automated content need to be established, and a collaborative oversight system between development, IT, and security teams must be established to ensure this doesn't happen again. 

Moreover, it is imperative that AI-driven tools, especially those embedded within communication workflows, be made accessible to end users so that they can understand their capabilities and limitations. In light of the increasing ease and pervasiveness of automation in digital operations, it will become increasingly essential to maintain a culture of informed vigilance across all layers of the organisation to maintain trust and integrity.

How to Protect Your eSIM from Hacks: Essential Tips for Safe Digital Connectivity

 

As mobile technology evolves, the eSIM (Embedded SIM) has emerged as a smarter alternative to traditional SIM cards. It offers seamless setup, easier number management, and smoother international travel. But while eSIMs are more secure than physical SIMs, they aren’t completely immune to hacking.

Why eSIMs Are Generally More Secure

Unlike old-school SIM cards, eSIMs are embedded directly into your device. There's no card to physically remove and misuse. All of your mobile credentials—including your number and network configuration—are securely stored on a programmable chip within the phone.

This setup eliminates one of the key risks of traditional SIMs: theft. A criminal can’t just pop your SIM into another phone and hijack your identity.

Moreover, eSIM data is encrypted, which makes it exceptionally difficult for hackers to manipulate or clone. The activation process is tightly regulated—telecom providers carry out identity verification to ensure the eSIM is linked to the correct user and device.

Remote management adds another layer of safety. You can usually monitor and control your eSIM directly through your carrier’s app. If you suspect misuse, disabling the line remotely is often just a few taps away.

Another underrated benefit is reduced reliance on public Wi-Fi networks. Travelers using eSIMs can activate data plans instantly without seeking out insecure Wi-Fi hotspots—long considered a major cyber risk.

But Here’s Why You Should Still Be Cautious

Despite stronger safeguards, eSIMs aren’t invincible. SIM swapping attacks are still a concern. In these cases, cybercriminals impersonate you to transfer your number to their device, potentially cutting off your service and hijacking your online accounts.

Another risk vector is malware—often delivered through deceptive links in messages or emails. Once infected, your device and eSIM could be vulnerable to unauthorized access.

As the article notes, “Be wary of phishing attempts, for example, and think twice about following links unless you’re absolutely sure they’re genuine.” Double-check with the source if something feels suspicious.

Also, weak login credentials can lead to account breaches. Anyone who gains access to your eSIM provider account could tamper with your mobile identity. To defend against this, always use strong, unique passwords and enable two-factor authentication (2FA) wherever possible.

Finally, stay updated. Regularly installing the latest system updates and app versions will help fix security holes and enhance protection. Use biometric authentication (like fingerprint or face unlock) along with a strong PIN to keep your phone locked down if lost or stolen.

While eSIMs represent a leap forward in mobile security, they require smart digital habits to stay truly secure. Protect your identity by being vigilant, updating your device, and securing your accounts with robust passwords and two-factor authentication.

Malicious Firefox Extension Steals Verification Tokens: Update to stay safe


Credential theft and browser security were commonly found in Google Chrome browsers due to its wide popularity and usage. Recently, however, cyber criminals have started targeting Mozilla Firefox users. A recent report disclosed a total of eight malicious Firefox extensions that could spy on users and even steal verification tokens.

About the malicious extension

Regardless of the web browser we use, criminals are always on the hunt. Threat actors generally prefer malicious extensions or add-ons; therefore, browser vendors like Mozilla offer background protections and public support to minimize these threats as much as possible. Despite such a measure, on July 4th, the Socket Threat Research Team's report revealed that threat actors are still targeting Firefox users. 

According to Kush Pandya, security engineer at Socket Threat Research Team, said that while the “investigation focuses on Firefox extensions, these threats span the entire browser ecosystem.” However, the particular Firefox investigation revealed a total of eight potentially harmful extensions, including user session hijacking to earn commissions on websites, redirection to scam sites, surveillance via an invisible iframe tracking method, and the most serious: authentication theft.

How to mitigate the Firefox attack threat

Users are advised to read the technical details of the extensions. According to Forbes, Mozilla is taking positive action to protect Firefox users from such threats. The company has taken care of the extensions mentioned in the report. According to Mozilla, the malicious extension impacted a very small number of users; some of the extensions have been shut down. 

“We help users customize their browsing experience by featuring a variety of add-ons, manually reviewed by our Firefox Add-ons team, on our Recommended Extensions page,” said a Firefox spokesperson. To protect the users, Mozilla has disabled “extensions that compromise their safety or privacy, or violate its policies, and continuously works to improve its malicious add-on detection tools and processes.”

How to stay safe?

To protect against these threats, Mozilla has advised users to Firefox users to take further steps, cautioning that such extensions are made by third parties. Users should check the extension rating and reviews, and be extra careful of extensions that need excessive permissions that are not compatible with what the extension claims to do. If any extension seems to be malicious, “users should report it for review,” a Firefox spokesperson said. 

Germany’s Warmwind May Be the First True AI Operating System — But It’s Not What You Expect

 



Artificial intelligence is starting to change how we interact with computers. Since advanced chatbots like ChatGPT gained popularity, the idea of AI systems that can understand natural language and perform tasks for us has been gaining ground. Many have imagined a future where we simply tell our computer what to do, and it just gets done, like the assistants we’ve seen in science fiction movies.

Tech giants like OpenAI, Google, and Apple have already taken early steps. AI tools can now understand voice commands, control some apps, and even help automate tasks. But while these efforts are still in progress, the first real AI operating system appears to be coming from a small German company called Jena, not from Silicon Valley.

Their product is called Warmwind, and it’s currently in beta testing. Though it’s not widely available yet, over 12,000 people have already joined the waitlist to try it.


What exactly is Warmwind?

Warmwind is an AI-powered system designed to work like a “digital employee.” Instead of being a voice assistant or chatbot, Warmwind watches how users perform digital tasks like filling out forms, creating reports, or managing software, and then learns to do those tasks itself. Once trained, it can carry out the same work over and over again without any help.

Unlike traditional operating systems, Warmwind doesn’t run on your computer. It operates remotely through cloud servers based in Germany, following the strict privacy rules under the EU’s GDPR. You access it through your browser, but the system keeps running even if you close the window.

The AI behaves much like a person using a computer. It clicks buttons, types, navigates through screens, and reads information — all without needing special APIs or coding integrations. In short, it automates your digital tasks the same way a human would, but much faster and without tiring.

Warmwind is mainly aimed at businesses that want to reduce time spent on repetitive computer work. While it’s not the futuristic AI companion from the movies, it’s a step in that direction, making software more hands-free and automated.

Technically, Warmwind runs on a customized version of Linux built specifically for automation. It uses remote streaming technology to show you the user interface while the AI works in the background.

Jena, the company behind Warmwind, says calling it an “AI operating system” is symbolic. The name helps people understand the concept quickly, it’s an operating system, not for people, but for digital AI workers.

While it’s still early days for AI OS platforms, Warmwind might be showing us what the future of work could look like, where computers no longer wait for instructions but get things done on their own.

Tallento.ai Crosses 1 Million Users, Disrupts Recruitment with AI-Powered Instant Hiring

 

Tallento.ai, an AI-driven recruitment platform built without external funding, has surpassed 1 million registered professionals and joined forces with more than 5,500 employers nationwide. By fusing artificial intelligence, gamification, and a mobile-first experience, Tallento.ai is transforming the way talent is sourced, verified, and hired—positioning itself as "India's Quick Commerce of Hiring" that compresses job matching timelines from weeks to minutes.

Founded by a team of IIT Guwahati, NIT, and IIM Bangalore alumni, Tallento.ai stands out as a purpose-led alternative to conventional job portals. The platform leverages smart algorithms, gamified application journeys, and an intuitive mobile design to help companies onboard pre-verified candidates faster than ever before.

"We asked a simple question: if groceries and cabs can arrive in 10 minutes, why does hiring still take 30 days?" said Sandeep Boora, Co-founder. "We're solving for speed, relevance, and dignity—especially for young professionals entering the workforce."

Originally focused on recruitment in the EdTech sector, the company now partners with leading brands such as Allen, Aakash Institute, PhysicsWallah, and Byju’s to scale educator and operational hiring across India. Operating with a 120-member team and remaining profitable without raising outside capital, Tallento.ai has demonstrated strong demand and trust among employers and job seekers alike.

Looking ahead, the platform plans to roll out several new features, including:

  • AI Mentorship Modules to deliver personalized upskilling recommendations
  • Video-first Talent Showcases replacing static resumes with dynamic storytelling
  • Voice and regional language search to improve access for blue- and grey-collar workers
  • Emotional wellness support tools to ease job transitions
  • One-click verified hiring backed by AI-generated trust scores

"Hiring is no longer just transactional," said Neha Gopal Thakur, Co-founder. "We are building an ecosystem that empowers individuals, supports mental well-being, and ensures companies find the right talent, faster."

With a clear mission to make hiring accessible in Tier 2 and Tier 3 cities, Tallento.ai is bridging the gap for job seekers in semi-urban regions. The company envisions becoming the backbone of recruitment in fast-growing sectors such as IT, healthcare, BFSI, and retail.

"India's youth need fast, fair, and future-ready hiring," said Tushar Saraf, Co-founder. "Tallento.ai is here to deliver that—without friction, delay, or exclusion." 

Google Gemini Bug Exploits Summaries for Phishing Scams


False AI summaries leading to phishing attacks

Google Gemini for Workspace can be exploited to generate email summaries that appear legitimate but include malicious instructions or warnings that direct users to phishing sites without using attachments or direct links.

Google Gemini for Workplace can be compromised to create email summaries that look real but contain harmful instructions or warnings that redirect users to phishing websites without using direct links or attachments. 

Similar attacks were reported in 2024 and afterwards; safeguards were pushed to stop misleading responses. However, the tactic remains a problem for security experts. 

Gemini for attack

A prompt-injection attack on the Gemini model was revealed via cybersecurity researcher Marco Figueoa, at 0din, Mozilla’s bug bounty program for GenAI tools. The tactic creates an email with a hidden directive for Gemini. The threat actor can hide malicious commands in the message body text at the end via CSS and HTML, which changes the font size to zero and color to white. 

According to Marco, who is GenAI Bug Bounty Programs Manager at Mozilla, “Because the injected text is rendered in white-on-white (or otherwise hidden), the victim never sees the instruction in the original message, only the fabricated 'security alert' in the AI-generated summary. Similar indirect prompt attacks on Gemini were first reported in 2024, and Google has already published mitigations, but the technique remains viable today.”

Gmail does not render the malicious instruction as there are no attachments or links present, and the message may reach the victim’s inbox. If the receiver opens the email and asks Gemini to make a summary of the received mail, the AI tool will parse the invisible directive and create the summary. Figueroa provides an example of Gemini following hidden prompts, accompanied by a security warning that the victim’s Gmail password and phone number may be compromised.

Impact

Supply-chain threats: CRM systems, automated ticketing emails, and newsletters can become injection vectors, changing one exploited SaaS account into hundreds of thousands of phishing beacons.

Cross-product surface: The same tactics applies to Gemini in Slides, Drive search, Docs and any workplace where the model is getting third-party content.

According to Marco, “Security teams must treat AI assistants as part of the attack surface and instrument them, sandbox them, and never assume their output is benign.”

Can AI Be Trusted With Sensitive Business Data?

 



As artificial intelligence becomes more common in businesses, from retail to finance to technology— it’s helping teams make faster decisions. But behind these smart predictions is a growing problem: how do you make sure employees only see what they’re allowed to, especially when AI mixes information from many different places?

Take this example: A retail company’s AI tool predicts upcoming sales trends. To do this, it uses both public market data and private customer records. The output looks clean and useful but what if that forecast is shown to someone who isn’t supposed to access sensitive customer details? That’s where access control becomes tricky.


Why Traditional Access Rules Don’t Work for AI

In older systems, access control was straightforward. Each person had certain permissions: developers accessed code, managers viewed reports, and so on. But AI changes the game. These systems pull data from multiple sources, internal files, external APIs, sensor feeds, and combine everything to create insights. That means even if a person only has permission for public data, they might end up seeing results that are based, in part, on private or restricted information.


Why It Matters

Security Concerns: If sensitive data ends up in the wrong hands even indirectly, it can lead to data leaks. A 2025 study showed that over two-thirds of companies had AI-related security issues due to weak access controls.

Legal Risks: Privacy laws like the GDPR require clear separation of data. If a prediction includes restricted inputs and is shown to the wrong person, companies can face heavy fines.

Trust Issues: When employees or clients feel their data isn’t safe, they lose trust in the system, and the business.


What’s Making This So Difficult?

1. AI systems often blend data so deeply that it’s hard to tell what came from where.

2. Access rules are usually fixed, but AI relies on fast-changing data.

3. Companies have many users with different roles and permissions, making enforcement complicated.

4. Permissions are often too broad, for example, someone allowed to "view reports" might accidentally access sensitive content.


How Can Businesses Fix This?

• Track Data Origins: Label data as "public" or "restricted" and monitor where it ends up.

• Flexible Access Rules: Adjust permissions based on user roles and context.

• Filter Outputs: Build AI to hide or mask parts of its response that come from private sources.

• Separate Models: Train different AI models for different user groups, each with its own safe data.

• Monitor Usage: Keep logs of who accessed what, and use alerts to catch suspicious activity.


As AI tools grow more advanced and rely on live data from many sources, managing access will only get harder. Businesses must modernize their security strategies to protect sensitive information without slowing down innovation.

NVIDIA Urges Users to Enable ECC to Defend GDDR6 GPUs Against Rowhammer Threats

  

NVIDIA has issued a renewed advisory encouraging customers to activate System Level Error-Correcting Code (ECC) protections to defend against Rowhammer attacks targeting GPUs equipped with GDDR6 memory.

This heightened warning follows recent research from the University of Toronto demonstrating how practical Rowhammer attacks can be on NVIDIA’s A6000 graphics processor.

“We ran GPUHammer on an NVIDIA RTX A6000 (48 GB GDDR6) across four DRAM banks and observed 8 distinct single-bit flips, and bit-flips across all tested banks,” the researchers explained. “The minimum activation count (TRH) to induce a flip was ~12K, consistent with prior DDR4 findings.”

Using these induced bit flips, the researchers performed what they described as the first machine learning accuracy degradation attack leveraging Rowhammer on a GPU.

Rowhammer exploits a hardware vulnerability where repeatedly accessing a memory row can cause adjacent memory cells to change state, flipping bits from 1 to 0 or vice versa. This can lead to denial-of-service issues, corrupted data, or even potential privilege escalation.

System Level ECC combats such risks by introducing redundant bits that can automatically detect and correct single-bit memory errors, ensuring data remains intact.

NVIDIA emphasized that enabling ECC is particularly critical for workstation and data center GPUs, which handle sensitive workloads like AI training and inference, to prevent serious computational errors.

The company’s security bulletin confirmed that researchers “showed a potential Rowhammer attack against an NVIDIA A6000 GPU with GDDR6 Memory” in scenarios where ECC had not been turned on.

The GPUHammer technique developed by the academic team successfully induced bit flips despite GDDR6’s higher latency and faster refresh rates, which generally make Rowhammer attacks more challenging compared to older DDR4 memory.

Researcher Gururaj Saileshwar told BleepingComputer that their demonstration could drop an AI model’s accuracy from 80% to below 1% with just a single bit flip on the A6000.

In addition to the RTX A6000, NVIDIA strongly recommends enabling ECC on the following GPU product lines:

Data Center GPUs:
  • Ampere: A100, A40, A30, A16, A10, A2, A800
  • Ada: L40S, L40, L4
  • Hopper: H100, H200, GH200, H20, H800
  • Blackwell: GB200, B200, B100
  • Turing: T1000, T600, T400, T4
  • Volta: Tesla V100, Tesla V100S
Workstation GPUs:
  • Ampere RTX: A6000, A5000, A4500, A4000, A2000, A1000, A400
  • Ada RTX: 6000, 5000, 4500, 4000, 4000 SFF, 2000
  • Blackwell RTX PRO (latest workstation line)
  • Turing RTX: 8000, 6000, 5000, 4000
  • Volta: Quadro GV100
Embedded/Industrial:
  • Jetson AGX Orin Industrial
  • IGX Orin
Newer GPUs—including Blackwell RTX 50 Series, Blackwell Data Center chips, and Hopper Data Center GPUs—feature built-in on-die ECC protection that requires no manual configuration.

To verify whether ECC is active, administrators can use an out-of-band method through the Baseboard Management Controller (BMC) and Redfish API to check the “ECCModeEnabled” status. NVIDIA’s NSM Type 3 and SMBPBI tools also allow ECC configuration, but these require NVIDIA Partner Portal access.

Alternatively, ECC can be checked or enabled in-band using the nvidia-smi command-line tool from the system CPU.

Saileshwar noted that enabling these safeguards could reduce machine learning inference performance by around 10% and reduce available memory capacity by 6.5% across workloads.

While Rowhammer remains a significant security concern, its exploitation in real-world scenarios is complex. An attack requires highly specific conditions, intensive memory access, and precise control, making it difficult to carry out reliably, especially in production environments.

CISA Lists Citrix Bleed 2 as Exploit, Gives One Day Deadline to Patch

CISA Lists Citrix Bleed 2 as Exploit, Gives One Day Deadline to Patch

CISA confirms bug exploit

The US Cybersecurity & Infrastructure Security Agency (CISA) confirms active exploitation of the CitrixBleed 2 vulnerability (CVE-2025-5777 in Citrix NetScaler ADC and Gateway. It has given federal parties one day to patch the bugs. This unrealistic deadline for deploying the patches is the first since CISA issued the Known Exploited Vulnerabilities (KEV) catalog, highlighting the severity of attacks abusing the security gaps. 

About the critical vulnerability

CVE-2025-5777 is a critical memory safety bug (out-of-bounds memory read) that gives hackers unauthorized access to restricted memory parts. The flaw affects NetScaler devices that are configured as an AAA virtual server or a Gateway. Citrix patched the vulnerabilities via the June 17 updates. 

After that, expert Kevin Beaumont alerted about the flaw’s capability for exploitation if left unaddressed, terming the bug as ‘CitrixBleed 2’ because it shared similarities with the infamous CitrixBleed bug (CVE-2023-4966), which was widely abused in the wild by threat actors.

What is the CitrixBleed 2 exploit?

According to Bleeping Computer, “The first warning of CitrixBleed 2 being exploited came from ReliaQuest on June 27. On July 7, security researchers at watchTowr and Horizon3 published proof-of-concept exploits (PoCs) for CVE-2025-5777, demonstrating how the flaw can be leveraged in attacks that steal user session tokens.”

The rise of exploits

During that time, experts could not spot the signs of active exploitation. Soon, the threat actors started to exploit the bug on a larger scale, and after the attack, they became active on hacker forums, “discussing, working, testing, and publicly sharing feedback on PoCs for the Citrix Bleed 2 vulnerability,” according to Bleeping Computers. 

Hackers showed interest in how to use the available exploits in attacks effectively. The hackers have become more active, and various exploits for the bug have been published.

Now that CISA has confirmed the widespread exploitation of CitrixBleed 2 in attacks, threat actors may have developed their exploits based on the recently released technical information. CISA has suggested to “apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable.”

Security Breach Reveals "Catwatchful" Spyware is Snooping on Users

Security Breach Reveals "Catwatchful" Spyware is Snooping on Users

A security bug in a stealthy Android spyware operation, “Catwatchful,” has exposed full user databases affecting its 62,000 customers and also its app admin. The vulnerability was found by cybersecurity expert Eric Daigle reported about the spyware app’s full database of email IDs and plaintext passwords used by Catwatchful customers to access stolen data from the devices of their victims. 

Most of the victims were based in India, Argentina, Peru, Mexico, Colombia, Bolivia, and Ecuador. A few records date back to 2018. The leaked database also revealed the identity of the Catwatchful admin called Omar Soca Char.

The Catwatchful database also revealed the identity of the spyware operation’s administrator, Omar Soca Charcov, a developer based in Uruguay.

About Catwatchful

Catwatchful is a spyware that pretends to be a child monitoring app, claiming to be “invisible and can not be detected,” while it uploads the victim’s data to a dashboard accessible to the person who planted the app. The stolen data includes real-time location data, victims’ photos, and messages.  The app can also track live ambient audio from the device’s mic and access the phone camera (both front and rear).

Catwatchful and similar apps are banned on app stores, and depend on being downloaded and deployed by someone having physical access to a victim’s phone. These apps are famous as “stalkerware” or “spouseware” as they are capable of unauthorized and illegal non-consensual surveillance of romantic partners and spouses. 

Rise of spyware apps

The Catwatchful incident is the fifth and latest in this year’s growing list of stalkerware scams that have been breached, hacked, or had their data exposed. 

How was the spyware found?

Daigle has previously discovered stalkerware exploits. Catwatchful uses a custom-made API, which the planted app uses to communicate to send data back to Catwatchful servers. The stalkerware also uses Google Firebase to host and store stolen data. 

According to Techradar, the “data was stored on Google Firebase, sent via a custom API that was unauthenticated, resulting in open access to user and victim data. The report also confirms that, although hosting had initially been suspended by HostGator, it had been restored via another temporary domain."

Ditch Passwords, Use Passkeys to Secure Your Account

Ditch Passwords, Use Passkeys to Secure Your Account

Ditch passwords, use passkeys

Microsoft and Google users, in particular, have been warned about ditching passwords for passkeys. Passwords are easy to steal and can unlock your digital life. Microsoft has been at the forefront, confirming it will delete passwords for more than a billion users. Google, too, has warned that most of its users will have to add passkeys to their accounts. 

What are passkeys?

Instead of a username and password, passkeys use our device security to log into our account. This means that there is no password to hack and no two-factor authentication codes to bypass, making it phishing-resistant.

At the same time, the Okta team warned that it found threat actors exploiting v0, an advanced GenAI tool made by Vercelopens, to create phishing websites that mimic real sign-in webpages

Okta warns users to not use passwords

A video shows how this works, raising concerns about users still using passwords to sign into their accounts, even when backed by multi-factor authentication, and “especially if that 2FA is nothing better than SMS, which is now little better than nothing at all,” according to Forbes. 

According to Okta, “This signals a new evolution in the weaponization of GenAI by threat actors who have demonstrated an ability to generate a functional phishing site from simple text prompts. The technology is being used to build replicas of the legitimate sign-in pages of multiple brands, including an Okta customer.”

Why are passwords not safe?

It is shocking how easy a login webpage can be mimicked. Users should not be surprised that today’s cyber criminals are exploiting and weaponizing GenAI features to advance and streamline their phishing attacks. AI in the wrong hands can have massive repercussions for the cybersecurity industry.

According to Forbes, “Gone are the days of clumsy imagery and texts and fake sign-in pages that can be detected in an instant. These latest attacks need a technical solution.”

Users are advised to add passkeys to their accounts if available and stop using passwords when signing in to their accounts. Users should also ensure that if they use passwords, they should be long and unique, and not backed up by SMS 2-factor authentication. 

Microsoft Phases Out Password Autofill in Authenticator App, Urges Move to Passkeys for Stronger Security

 

Microsoft is ushering in major changes to how users secure their accounts, declaring that “the password era is ending” and warning that “bad actors know it” and are “desperately accelerating password-related attacks while they still can.”

These updates, rolling out immediately, impact the Microsoft Authenticator app. Previously, the app let users securely store and autofill passwords on apps and websites you visit on your phone. However, starting this month, “you will not be able to use autofill with Authenticator.”

A more significant shift is just weeks away. “From August,” Microsoft cautions, “your saved passwords will no longer be accessible in Authenticator.” Users have until August 2025 to transfer their stored passwords elsewhere, or risk losing access altogether. As the company emphasized, “any generated passwords not saved will be deleted.”

These moves are part of Microsoft’s broader initiative to phase out traditional passwords in favor of passkeys. The tech giant, alongside Google and other industry leaders, points out that passwords represent a major security vulnerability. Despite common safeguards like two-factor authentication (2FA), account credentials can still be intercepted or compromised.

Passkeys, by contrast, bind account access to device-level security, requiring biometrics or a PIN to log in. This means there’s no password to steal, phish, or share. The FIDO Alliance explains: “passkeys are phishing resistant and secure by design. They inherently help reduce attacks from cybercriminals such as phishing, credential stuffing, and other remote attacks. With passkeys there are no passwords to steal and there is no sign-in data that can be used to perpetuate attacks.”

For users currently relying on Authenticator’s password storage, Microsoft advises moving credentials to the Edge browser or exporting them to another password manager. But more importantly, this is a chance to upgrade your key accounts to passkeys.

Authenticator will continue to support passkeys going forward. Microsoft advises: “If you have set up Passkeys for your Microsoft Account, ensure that Authenticator remains enabled as your Passkey Provider. Disabling Authenticator will disable your passkeys.”