Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Latest News

Sweden Confirms Power Grid Breach Amid Growing Ransomware Concerns

  Swedish power grid operator, Suderland, has confirmed it is investigating a security incident related to a potential ransomware attack aim...

All the recent news you need to know

Privacy Laws Struggle to Keep Up with Meta’s ‘Luxury Surveillance’ Glasses


Meta’s newest smart glasses have reignited concerns about privacy, as many believe the company is inching toward a world where constant surveillance becomes ordinary. 

Introduced at Meta’s recent Connect event, the glasses reflect the kind of future that science fiction has long warned about, where everyone can record anyone at any moment and privacy nearly disappears. This is not the first time the tech industry has tried to make wearable cameras mainstream. 

More than ten years ago, Google launched Google Glass, which quickly became a public failure. People mocked its users as “Glassholes,” criticizing how easily the device could invade personal space. The backlash revealed that society was not ready for technology that quietly records others without their consent. 

Meta appears to have taken a different approach. By partnering with Ray-Ban, the company has created glasses that look fashionable and ordinary. Small cameras are placed near the nose bridge or along the outer rims, and a faint LED light is the only sign that recording is taking place. 

The glasses include a built-in display, voice-controlled artificial intelligence, and a wristband that lets the wearer start filming or livestreaming with a simple gesture. All recorded footage is instantly uploaded to Meta’s servers. 

Even with these improvements in design, the legal and ethical issues remain. Current privacy regulations are too outdated to deal with the challenges that come with such advanced wearable devices. 

Experts believe that social pressure and public disapproval may still be stronger than any law in discouraging misuse. As Meta promotes its vision of smart eyewear, critics warn that what is really being made normal is a culture of surveillance. 

The sleek design and luxury branding may make the technology more appealing, but the real risk lies in how easily people may accept being watched everywhere they go.

Afghans Report Killings After British Ministry of Defence Data Leak

 

Dozens of Afghans whose personal information was exposed in a British Ministry of Defence (MoD) data breach have reported that their relatives or colleagues were killed because of the leak, according to new research submitted to a UK parliamentary inquiry. The breach, which occurred in February 2022, revealed the identities of nearly 19,000 Afghans who had worked with the UK government during the war in Afghanistan. It happened just six months after the Taliban regained control of Kabul, leaving many of those listed in grave danger. 

The study, conducted by Refugee Legal Support in partnership with Lancaster University and the University of York, surveyed 350 individuals affected by the breach. Of those, 231 said the MoD had directly informed them that their data had been compromised. Nearly 50 respondents said their family members or colleagues were killed as a result, while over 40 percent reported receiving death threats. At least half said their relatives or friends had been targeted by the Taliban following the exposure of their details. 

One participant, a former Afghan special forces member, described how his family suffered extreme violence after the leak. “My father was brutally beaten until his toenails were torn off, and my parents remain under constant threat,” he said, adding that his family continues to face harassment and repeated house searches. Others criticized the British government for waiting too long to alert them, saying the delay had endangered lives unnecessarily.  

According to several accounts, while the MoD discovered the breach in 2023, many affected Afghans were only notified in mid-2025. “Waiting nearly two years to learn that our personal data was exposed placed many of us in serious jeopardy,” said a former Afghan National Army officer still living in Afghanistan. “If we had been told sooner, we could have taken steps to protect our families.”  

Olivia Clark, Executive Director of Refugee Legal Support, said the findings revealed the “devastating human consequences” of the government’s failure to protect sensitive information. “Afghans who risked their lives working alongside British forces have faced renewed threats, violent assaults, and even killings of their loved ones after their identities were exposed,” she said. 

Clark added that only a small portion of those affected have been offered relocation to the UK. The government estimates that more than 7,300 Afghans qualify for resettlement under a program launched in 2024 to assist those placed at risk by the data breach. However, rights organizations say the scheme has been too slow and insufficient compared to the magnitude of the crisis.

The breach has raised significant concerns about how the UK manages sensitive defense data and its responsibilities toward Afghans who supported British missions. For many of those affected, the consequences of the exposure remain deeply personal and ongoing, with families still living under threat while waiting for promised protection or safe passage to the UK.

Tech Giants Pour Billions Into AI Race for Market Dominance

 

Tech giants are intensifying their investments in artificial intelligence, fueling an industry boom that has driven stock markets to unprecedented heights. Fresh earnings reports from Meta, Alphabet, and Microsoft underscore the immense sums being poured into AI infrastructure—from data centers to advanced chips—despite lingering doubts about the speed of returns.

Meta announced that its 2025 capital expenditures will range between $70 billion and $72 billion, slightly higher than its earlier forecast. The company also revealed plans for substantially larger spending growth in 2026 as it seeks to compete more aggressively with players like OpenAI.

During a call with analysts, CEO Mark Zuckerberg defended Meta’s aggressive investment strategy, emphasizing AI’s transformative potential in driving both new product development and enhancing its core advertising business. He described the firm’s infrastructure as operating in a “compute-starved” state and argued that accelerating spending was essential to unlocking future growth.

Alphabet, parent to Google and YouTube, also raised its annual capital spending outlook to between $91 billion and $93 billion—up from $85 billion earlier this year. This nearly doubles what the company spent in 2024 and highlights its determination to stay at the forefront of large-scale AI development.

Microsoft’s quarterly report similarly showcased its expanding investment efforts. The company disclosed $34.9 billion in capital expenditures through September 30, surpassing analyst expectations and climbing from $24 billion in the previous quarter. CEO Satya Nadella said Microsoft continues to ramp up AI spending in both infrastructure and talent to seize what he called a “massive opportunity.” He noted that Azure and the company’s broader portfolio of AI tools are already having tangible real-world effects.

Investor enthusiasm surrounding these bold AI commitments has helped lift the share prices of all three firms above the broader S&P 500 index. Still, Wall Street remains keenly interested in seeing whether these heavy capital outlays will translate into measurable profits.

Bank of America senior economist Aditya Bhave observed that robust consumer activity and AI-driven business investment have been the key pillars supporting U.S. economic resilience. As long as the latter remains strong, he said, it signals continued GDP growth. Despite an 83 percent profit drop for Meta due to a one-time tax charge, Microsoft and Alphabet reported profit increases of 12 percent and 33 percent, respectively.

Atroposia Malware Offers Attackers Built-In Tools to Spy, Steal, and Scan Systems

 




Cybersecurity researchers have recently discovered a new malware platform known as Atroposia, which is being promoted on dark web forums as a subscription-based hacking toolkit. The platform offers cybercriminals a remote access trojan (RAT) that can secretly control computers, steal sensitive data, and even scan the infected system for security flaws, all for a monthly payment.

Researchers from Varonis, a data protection firm, explained that Atroposia is the latest example of a growing trend where ready-to-use malware services make advanced hacking tools affordable and accessible, even to attackers with little technical expertise.


How Atroposia Works

Atroposia operates as a modular program, meaning its users can turn individual features on or off depending on what they want to achieve. Once installed on a device, it connects back to the attacker’s command-and-control (C2) server using encrypted communication, making it difficult for defenders to detect its activity.

The malware can also bypass User Account Control (UAC), a security layer in Windows designed to prevent unauthorized changes, allowing it to gain full system privileges and remain active in the background.

Those who purchase access, reportedly priced at around $200 per month unlock a wide set of tools. These include the ability to open a hidden remote desktop, steal files, exfiltrate data, capture copied text, harvest credentials, and even interfere with internet settings through DNS hijacking.

One of the most distinctive parts of Atroposia is its HRDP Connect module, which secretly creates a secondary desktop session. Through this, attackers can explore a victim’s computer, read emails, open apps, or view documents without the user noticing anything unusual. Because the interaction happens invisibly, traditional monitoring systems often fail to recognize it as remote access.

The malware also provides an Explorer-style file manager, which lets attackers browse, copy, or delete files remotely. It includes a “grabber” feature that can search for specific file types or keywords, automatically compress the selected items into password-protected ZIP archives, and transmit them directly from memory leaving little trace on the device.


Theft and Manipulation Features

Atroposia’s data-theft tools are extensive. Its stealer module targets saved logins from browsers, chat records, and even cryptocurrency wallets. A clipboard monitor records everything a user copies, such as passwords, private keys, or wallet addresses, storing them in an easily accessible list for the attacker.

The RAT also uses DNS hijacking at the local machine level. This technique silently redirects web traffic to malicious sites controlled by the attacker, making it possible to trick victims into entering credentials on fake websites, download malware updates, or expose their data through man-in-the-middle attacks.


A Built-In Vulnerability Scanner

Unlike typical RATs, Atroposia comes with a local vulnerability scanner that automatically checks the system for weak spots, such as missing security patches, outdated software, or unsafe configurations. It generates a score to show which issues are easiest to exploit.

Researchers have warned that this function poses a major threat to corporate networks, since it can reveal unpatched VPN clients or privilege escalation flaws that allow attackers to deepen their access or spread across connected systems.

Security experts view Atroposia as part of a larger movement in the cybercrime ecosystem. Services like SpamGPT and MatrixPDF have already shown how subscription-based hacking tools lower the technical barrier for attackers. Atroposia extends that trend by bundling reconnaissance, exploitation, and data theft into one easy-to-use toolkit.


How Users Can Stay Protected

Analysts recommend taking preventive steps to reduce exposure to such threats.

Users should:

• Keep all software and operating systems updated.

• Download programs only from verified and official sources.

• Avoid pirated or torrent-based software.

• Be cautious of unfamiliar commands or links found online.

Companies are also urged to monitor for signs such as hidden desktop sessions, unusual DNS modifications, and data being sent directly from memory, as these can indicate the presence of sophisticated RATs like Atroposia.

Atroposia’s discovery highlights the growing ease with which advanced hacking tools are becoming available. What once required high-level expertise can now be rented online, posing a serious challenge to both individual users and large organizations trying to protect their digital environments.



Cybercriminals Target Fans Ahead of 2026 FIFA World Cup, Norton Warns

 

Cybercriminals Target Fans Ahead of 2026 FIFA World Cup, Norton Warns With the 2026 FIFA World Cup still months away, cybersecurity experts are already warning fans to stay alert as fraudsters begin exploiting the global excitement surrounding the tournament. According to cybersecurity firm Norton, a wave of early scams is emerging aimed at deceiving soccer enthusiasts and stealing their money and personal data. 

The tournament, set to take place across the United States, Canada, and Mexico next summer, presents a lucrative opportunity for cybercriminals. 

“Every major event attracts cybercriminals. They exploit the distraction and excitement of fans to make them more vulnerable,” said Iskander Sanchez-Rola, Director of AI and Innovation at Norton. 

Experts say online threats range from counterfeit ticket offers and phishing campaigns to fake sweepstakes and manipulated search results. Fraudsters are reportedly creating fake websites that mimic official World Cup pages to distribute malware or collect sensitive information. 

Others are setting up bogus social media accounts promoting exclusive ticket deals or giveaways to lure victims. 

Norton’s analysis highlights several prevalent scam types: 

Manipulated Search Results: Fake ticketing and merchandise sites appearing high in search results to spread malware. 

Fake Sweepstakes and Promotions: Fraudulent offers designed to capture personal data under the guise of contests. 

Counterfeit Tickets: Illegitimate sales on social media or private channels that leave fans without valid entry after payment. 

Phishing Emails: Messages imitating FIFA or partner brands to trick users into downloading malicious files. 

Travel Booking Scams: Sham websites offering discounted accommodations that disappear after receiving payments. 

Security professionals urge fans to exercise caution. Norton advises checking URLs carefully for misspellings or strange domain names, purchasing tickets only through verified platforms, and avoiding money transfers to private accounts. 

Users are also encouraged to enable two-factor authentication and use password managers for added protection. Authorities warn that such scams will likely escalate as the tournament nears. Fans are urged to remain vigilant, verify every offer, and immediately report any suspected fraud to official channels or local law enforcement.

AIjacking Threat Exposed: How Hackers Hijacked Microsoft’s Copilot Agent Without a Single Click

 

Imagine this — a customer service AI agent receives an email and, within seconds, secretly extracts your entire customer database and sends it to a hacker. No clicks, no downloads, no alerts.

Security researchers recently showcased this chilling scenario with a Microsoft Copilot Studio agent. The exploit worked through prompt injection, a manipulation technique where attackers hide malicious instructions in ordinary-looking text inputs.

As companies rush to integrate AI agents into customer service, analytics, and software development, they’re opening up new risks that traditional cybersecurity tools can’t fully protect against. For developers and data teams, understanding AIjacking — the hijacking of AI systems through deceptive prompts — has become crucial.

In simple terms, AIjacking occurs when attackers use natural language to trick AI systems into executing commands that bypass their programmed restrictions. These malicious prompts can be buried in anything the AI reads — an email, a chat message, a document — and the system can’t reliably tell the difference between a real instruction and a hidden attack.

Unlike conventional hacks that exploit software bugs, AIjacking leverages the very nature of large language models. These models follow contextual language instructions — whether those instructions come from a legitimate user or a hacker.

The Microsoft Copilot Studio incident illustrates the stakes clearly. Researchers sent emails embedded with hidden prompt injections to an AI-powered customer service agent that had CRM access. Once the agent read the emails, it followed the instructions, extracted sensitive data, and emailed it back to the attacker — all autonomously. This was a true zero-click exploit.

Traditional cyberattacks often rely on tricking users into clicking malicious links or opening dangerous attachments. AIjacking requires no such action — the AI processes inputs automatically, which is both its greatest strength and its biggest vulnerability.

Old-school defenses like firewalls, antivirus software, and input validation protect against code-level threats like SQL injection or XSS attacks. But AIjacking is a different beast — it targets the language understanding capability itself, not the code.

Because malicious prompts can be written in infinite variations — in different tones, formats, or even languages — it’s impossible to build a simple “bad input” blacklist that prevents all attacks.

When Microsoft fixed the Copilot Studio flaw, they added prompt injection classifiers, but these have limitations. Block one phrasing, and attackers simply reword their prompts.

AI agents are typically granted broad permissions to perform useful tasks — querying databases, sending emails, and calling APIs. But when hijacked, those same permissions become a weapon, allowing the agent to carry out unauthorized operations in seconds.

Security tools can’t easily detect a well-crafted malicious prompt that looks like normal text. Antivirus programs don’t recognize adversarial inputs that exploit AI behavior. What’s needed are new defense strategies tailored to AI systems.

The biggest risk lies in data exfiltration. In Microsoft’s test, the hijacked AI extracted entire customer records from the CRM. Scaled up, that could mean millions of records lost in moments.

Beyond data theft, hijacked agents could send fake emails from your company, initiate fraudulent transactions, or abuse APIs — all using legitimate credentials. Because the AI acts within its normal permissions, the attack is almost indistinguishable from authorized activity.

Privilege escalation amplifies the damage. Since most AI agents need elevated access — for instance, customer service bots read user data, while dev assistants access codebases — a single hijack can expose multiple internal systems.

Many organizations wrongly assume that existing cybersecurity systems already protect them. But prompt injection bypasses these controls entirely. Any text input the AI processes can serve as an attack vector.

To defend against AIjacking, a multi-layered security strategy is essential:

  1. Input validation & authentication: Don’t let AI agents auto-respond to unverified external inputs. Only allow trusted senders and authenticated users.
  2. Least privilege access: Give agents only the permissions necessary for their task — never full database or write access unless essential.
  3. Human-in-the-loop approval: Require manual confirmation before agents perform sensitive tasks like large data exports or financial transactions.
  4. Logging & monitoring: Track agent behavior and flag unusual actions, such as accessing large volumes of data or contacting new external addresses.
  5. System design & isolation: Keep AI agents away from production databases, use read-only replicas, and apply rate limits to contain damage.

Security testing should also include adversarial prompt testing, where developers actively try to manipulate the AI to find weaknesses before attackers do.

AIjacking marks a new era in cybersecurity. It’s not hypothetical — it’s happening now. But layered defense strategies — from input authentication to human oversight — can help organizations deploy AI safely. Those who take action now will be better equipped to protect both their systems and their users.

Featured