
Here's How to Safeguard Your Smartphone Against Zero-Click Attacks

 

Spyware tools have been discovered on the phones of politicians, journalists, and activists on numerous occasions over the past decade, prompting concerns about an unprecedented expansion of spyware technology and the lack of protections in the tech industry. 

Meta's WhatsApp recently stated that it has detected a hacking campaign aimed at roughly ninety users, the majority of whom were journalists and civil society activists from two dozen countries. 

According to a WhatsApp representative, the attack was carried out by the Israeli spyware company Paragon Solutions, now controlled by the Florida-based private equity firm AE Industrial Partners. Graphite, Paragon's spyware, infiltrated WhatsApp groups via a malicious PDF attachment sent to members; once on a device, it can read messages from encrypted apps such as WhatsApp and Signal without the user's knowledge. 

What is a zero-click attack? 

A zero-click attack, such as the one on WhatsApp, compromises a device without requiring any action from the user. Unlike phishing or one-click attacks, which rely on the victim clicking a malicious link or opening an attachment, a zero-click attack exploits a security flaw to infect the device and stealthily gain complete access. 

"In the case of graphite, via WhatsApp, some kind of payload, like a PDF or an image, [was sent to the victims' devices] and the underlying processes that receive and handle those packages have vulnerabilities that the attackers exploit [to] infect the phone,” Rocky Cole, co-founder of mobile threat protection company iVerify, noted.

While reports do not indicate "whether Graphite can engage in privilege escalation [vulnerability] and operate outside WhatsApp or even move into the iOS kernel itself, we do know from our own detections and other work with customers, that privilege escalation via WhatsApp in order to gain kernel access is indeed possible," Cole added. 

The iVerify team believes the attacks are "potentially more widespread" than the 90 individuals reported to have been infected by Graphite, because it has found cases where WhatsApp crashes on mobile devices monitored with iVerify appeared to be malicious in nature.

While the WhatsApp hack primarily targeted civil society activists, Cole believes mobile spyware is a rising threat to everyone, since mobile exploitation is more pervasive than many people realise. The result is an emerging ecosystem of mobile spyware developers, with a growing number of VC-backed spyware companies under pressure to become viable businesses. That competition among spyware vendors lowers the barriers that might otherwise deter these attacks. 

Mitigation tips

Cole recommends that users treat their phones as computers: just as best practices are used to safeguard traditional endpoints such as laptops from exploitation and compromise, the same should apply to phones. That includes rebooting daily, because most of these exploits live in memory rather than in files, so a reboot should, in theory, wipe the malware out as well, he said. 

If you have an Apple device, you can also enable Lockdown Mode. As indicated by Cole, "lockdown mode has the effect of reducing some functionality of internet-facing applications [which can] in some ways reduce the attack surface to some degree."

Ultimately, the only way to properly safeguard against zero-click capabilities is to fix the underlying flaws, and Cole emphasised that only Apple, Google, and app developers can do that. "So as an end user, it's critically important that when a new security patch is available, you apply it as soon as you possibly can," the researcher added.

Microsoft Alerts Users About Password-spraying Attack

Microsoft has warned users about a new password-spraying attack by the hacking group Storm-1977 that targets cloud users. The Microsoft Threat Intelligence team issued the warning after discovering that threat actors are abusing unsecured workload identities to access restricted resources. 

According to Microsoft, “Container technology has become essential for modern application development and deployment. It's a critical component for over 90% of cloud-native organizations, facilitating swift, reliable, and flexible processes that drive digital transformation.” 

Hackers target containers-as-a-service

According to the research, 51% of such workload identities were inactive over the past year, which is why attackers are exploiting this attack surface. The report notes that the exposure grows as the "adoption of containers-as-a-service among organizations rises," and Microsoft says it continues to look out for unique security threats affecting "containerized environments." 

In the password-spraying attack, the threat actor used a command-line interface tool called "AzureChecker" to download AES-encrypted data that, once decrypted, revealed the list of password-spray targets. To make things worse, the "threat actor then used the information from both files and posted the credentials to the target tenants for validation."

The attack allowed the Storm-1977 hackers to leverage a guest account to create a resource group within a compromised subscription and deploy over 200 containers, which were used for crypto mining. 
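
Password spraying leaves a recognisable signature in authentication logs: failed sign-ins across many distinct accounts from the same source, using a small set of passwords, within a short window. The following is a minimal Python sketch of that detection heuristic; the log format, field names, and thresholds are illustrative assumptions, not part of Microsoft's tooling.

    from collections import defaultdict
    from datetime import datetime, timedelta

    WINDOW = timedelta(minutes=30)   # assumed observation window
    THRESHOLD = 20                   # assumed distinct-account threshold

    def detect_spray(events):
        """Flag source IPs that fail logins against many distinct accounts.

        events: iterable of (timestamp, source_ip, username, success) tuples.
        """
        failures = defaultdict(list)
        for ts, ip, user, success in sorted(events):
            if not success:
                failures[ip].append((ts, user))

        suspicious = []
        for ip, attempts in failures.items():
            for i, (start, _) in enumerate(attempts):
                # Count distinct usernames attempted within one window
                users = {u for t, u in attempts[i:] if t - start <= WINDOW}
                if len(users) >= THRESHOLD:
                    suspicious.append(ip)
                    break
        return suspicious

    events = [(datetime(2025, 4, 1, 9, i), "203.0.113.7", f"user{i}", False)
              for i in range(25)]
    print(detect_spray(events))  # ['203.0.113.7']

In production this logic would typically live in a SIEM query rather than application code, but the signature being matched is the same.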

Mitigating password-spraying attacks

The most effective answer to password spraying is eliminating passwords altogether. That can be done by moving towards passkeys, which many organisations are already doing. 

Microsoft has suggested these steps to mitigate the issue:

  • Use strong authentication when exposing sensitive interfaces to the internet. 
  • Use strong verification methods for the Kubernetes API to stop attackers from gaining access to the cluster even when valid credentials such as kubeconfig are obtained.  
  • Avoid exposing the Kubelet read-only endpoint on port 10255, which requires no authentication. 

  • Scope the Kubernetes role-based access controls for every user and service account down to only the permissions that are actually required; a quick audit sketch follows below.
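
To act on the last point, a first step is simply enumerating who holds the broadest roles in the cluster. The sketch below uses the official Kubernetes Python client (the kubernetes package); checking for cluster-admin is just one assumed example of an over-broad grant.

    from kubernetes import client, config

    # Load credentials from the local kubeconfig (assumes kubectl access)
    config.load_kube_config()
    rbac = client.RbacAuthorizationV1Api()

    # Walk every cluster-wide binding and flag the broadest built-in role
    for binding in rbac.list_cluster_role_binding().items:
        if binding.role_ref.name == "cluster-admin":
            for subject in (binding.subjects or []):
                print(f"{subject.kind} '{subject.name}' holds cluster-admin "
                      f"via binding '{binding.metadata.name}'")

Any user or service account that shows up here without strictly needing full control is a candidate for a narrower Role and RoleBinding.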

According to Microsoft, “Recent updates to Microsoft Defender for Cloud enhance its container security capabilities from development to runtime. Defender for Cloud now offers enhanced discovery, providing agentless visibility into Kubernetes environments, tracking containers, pods, and applications.” These updates strengthen security through continuous, granular scanning. 

Pentagon Director Hegseth Revealed Key Yemen War Plans in Second Signal Chat, Source Claims

 

In a chat group that included his wife, brother, and personal attorney, U.S. Defence Secretary Pete Hegseth provided specifics of a strike on Yemen's Iran-aligned Houthis in March, a person familiar with the situation told Reuters earlier this week. 

The disclosure of a second Signal chat raises fresh questions about Hegseth's use of an unclassified messaging system to share extremely sensitive security details. It comes at a particularly sensitive time for him, after senior officials were removed from the Pentagon last week as part of an internal leak investigation. 

In the second chat, Hegseth shared details of the attack similar to those revealed last month by The Atlantic magazine, whose editor-in-chief, Jeffrey Goldberg, had been accidentally included in a separate Signal chat, an embarrassing incident that involved all of President Donald Trump's most senior national security officials.

The individual familiar with the situation, who spoke on condition of anonymity, said the second chat, which comprised around a dozen people, was set up during Hegseth's confirmation process to discuss administrative concerns rather than actual military planning. According to the insider, the chat included details of the air attack schedule. 

Jennifer, Hegseth's wife and a former Fox News producer, has attended classified meetings with foreign military counterparts, according to photographs released by the Pentagon. During a meeting with his British counterpart at the Pentagon in March, she was seen sitting behind him. Hegseth's brother serves as a Department of Homeland Security liaison to the Pentagon.

The Trump administration has aggressively pursued leak investigations, an effort Hegseth has enthusiastically backed at the Pentagon. Pentagon spokesperson Sean Parnell said, without offering evidence, that the media was "enthusiastically taking the grievances of disgruntled former employees as the sole sources for their article.” 

Hegseth's tumultuous moment 

Democratic lawmakers stated Hegseth could no longer continue in his position. "We keep learning how Pete Hegseth put lives at risk," Senate Minority Leader Chuck Schumer said in a post to X. "But Trump is still too weak to fire him. Pete Hegseth must be fired.”

Senator Tammy Duckworth, an Iraq War veteran who was severely injured in combat in 2004, stated that Hegseth "must resign in disgrace.” 

The latest disclosure comes just days after Dan Caldwell, one of Hegseth's top aides, was removed from the Pentagon after being identified during an investigation into leaks at the Department of Defence. Although Caldwell is not as well-known as other senior Pentagon officials, he has played an important role for Hegseth, who designated him the Pentagon's point of contact in the first Signal chat.

Security Analysts Express Concerns Over AI-Generated Doll Trend

 

If you've been scrolling through social media recently, you've probably seen a lot of... dolls. There are dolls all over X and on Facebook feeds. Instagram? Dolls. TikTok?

You guessed it: dolls, as well as doll-making techniques. There are even dolls on LinkedIn, undoubtedly the most serious and least entertaining member of the club. You can refer to it as the Barbie AI treatment or the Barbie box trend. If Barbie isn't your thing, you can try AI action figures, action figure starter packs, or the ChatGPT action figure fad. However, regardless of the hashtag, dolls appear to be everywhere. 

And, while they share some similarities (boxes and packaging resembling Mattel's Barbie, personality-driven accessories, a plastic-looking smile), they're all as unique as the people who post them, with the exception of one key common feature: they're not real. 

In the emerging trend, users are turning to generative AI tools like ChatGPT to envision themselves as dolls or action figures, complete with accessories. It has proven quite popular, and not just among influencers.

Politicians, celebrities, and major brands have all joined in. Journalists covering the trend have created images of themselves with cameras and microphones (though this journalist won't put you through that). Users have created renditions of almost every well-known figure, including billionaire Elon Musk and actress and singer Ariana Grande. 

The Verge, a tech media outlet, says the trend started on LinkedIn, the professional social networking site popular with marketers seeking engagement. Because of this, many of the dolls you see are trying to advertise a company or business. (Think "social media marketer doll," or even "SEO manager doll.") 

Privacy concerns

From a social perspective, the popularity of the doll-generating trend isn't surprising at all, according to Matthew Guzdial, an assistant professor of computing science at the University of Alberta.

"This is the kind of internet trend we've had since we've had social media. Maybe it used to be things like a forwarded email or a quiz where you'd share the results," Guzdial noted. 

But as with any AI trend, there are some concerns over its data use. Generative AI in general poses substantial data privacy challenges. As the Stanford University Institute for Human-Centered Artificial Intelligence (Stanford HAI) points out, data privacy concerns and the internet are nothing new, but AI is so "data-hungry" that it magnifies the risk. 

Safety tips 

As we have seen, one of the major risks of participating in viral AI trends is the potential for your conversation history to be compromised by unauthorised or malicious parties. To stay safe, researchers recommend taking the following steps: 

Protect your account: This includes enabling 2FA, creating secure and unique passwords for each service (see the sketch after these tips), and avoiding logging in on shared computers.

Minimise the real data you give to the AI model: Fornés suggests using nicknames or other data instead. You should also consider utilising a different ID solely for interactions with AI models.

Use the tool cautiously and properly: When feasible, use the AI model in incognito mode and without activating the history or conversational memory functions.
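
On the unique-passwords tip: a password manager handles this for you, but the underlying idea is just drawing enough randomness per service. Here is a minimal Python sketch using the standard secrets module; the length and character set are arbitrary illustrative choices.

    import secrets
    import string

    # Pool of characters to draw from: letters, digits, punctuation
    ALPHABET = string.ascii_letters + string.digits + string.punctuation

    def generate_password(length: int = 20) -> str:
        """Return a cryptographically random password of the given length."""
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    # One distinct password per service, never reused across accounts
    for service in ("chatgpt", "instagram", "linkedin"):
        print(f"{service}: {generate_password()}")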

Black Basta: Exposing the Ransomware Outfit Through Leaked Chat Logs

 

The cybersecurity sector experienced an extraordinary breach in February 2025 that revealed the inner workings of the well-known ransomware gang Black Basta. 

Trustwave SpiderLabs researchers have now taken an in-depth look at the disclosed contents, which show how the gang thinks and operates, including discussions about tactics, the effectiveness of various attack tools, and even the ethical and legal implications of targeting Ascension Health. 

The messages were initially posted to MEGA before being reuploaded straight to Telegram on February 11 by the online identity ExploitWhispers. The JSON-based dataset contained over 190,000 messages allegedly sent by group members between September 18, 2023 and September 28, 2024. 

This data dump provides rare insight into the group's infrastructure, tactics, internal decision-making, and team dynamics, with clear parallels to the infamous Conti leaks of 2022. It does not document every detail of the gang's inner workings, but it offers an unfiltered view of how one of the most active and financially successful ransomware operations of recent years functions behind the scenes. Black Basta has been operating since 2022. 
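
A dump of this size is usually triaged programmatically before anyone reads individual messages. Below is a minimal Python sketch of such a first pass; the field names (timestamp, sender, text) are hypothetical, since the leak's actual schema is not described here.

    import json
    from collections import Counter
    from datetime import datetime

    # Load the leaked dataset (assumed: one JSON array of message objects)
    with open("blackbasta_leak.json", encoding="utf-8") as f:
        messages = json.load(f)

    # Keep messages inside the reported Sept 2023 - Sept 2024 window
    start, end = datetime(2023, 9, 18), datetime(2024, 9, 28)
    in_range = [m for m in messages
                if start <= datetime.fromisoformat(m["timestamp"]) <= end]

    # Message volume per sender often separates operators from managers
    by_sender = Counter(m["sender"] for m in in_range)
    for sender, count in by_sender.most_common(10):
        print(f"{sender}: {count} messages")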

The outfit normally keeps a low profile while carrying out operations that target organisations across a variety of sectors and demand millions in ransom payments. The messages demonstrate members' remarkable autonomy and ingenuity in adjusting quickly to changing security situations, and the leak revealed Black Basta's reliance on social engineering: while traditional phishing efforts remain common, the gang can take a more personable approach when it suits. 

The chat logs provide greater insight into Black Basta's strategic approach to vulnerability exploitation. The group actively seeks common and unique vulnerabilities, acquiring zero-day exploits to gain a competitive advantage. 

Its weaponization policy reveals a deliberate effort to increase the impact of its attacks, with Cobalt Strike frequently deployed for command and control operations. Notably, Black Basta created a custom proxy architecture dubbed "Coba PROXY" to manage massive amounts of C2 traffic, which improved both stealth and resilience. Beyond its technological expertise, the leak provides insight into Black Basta's negotiation strategies. 

The gang uses aggressive and psychologically manipulative tactics to coerce victims into paying ransoms, with strategic delays and coercive rhetoric as standard techniques for extracting the maximum financial return. Even more alarming is its expansion into previously off-limits targets, such as CIS-based financial institutions.

While the immediate impact of the breach is unknown, the disclosure of Black Basta's inner workings gives cybersecurity specialists a rare chance to adapt and respond. Understanding the group's methodology supports the creation of more effective defensive strategies, increasing resilience against future ransomware attacks.

AI and Privacy – Issues and Challenges

 

Artificial intelligence is changing cybersecurity and digital privacy. It promises better security but also raises concerns about ethical boundaries, data exploitation, and surveillance. From facial recognition software to predictive crime prevention, consumers are left wondering where to draw the line between safety and overreach as AI-driven systems become ever more integrated into daily life.

The same artificial intelligence (AI) tools that aid in spotting online threats, optimising security procedures, and stopping fraud can also be used for intrusive data collection, behavioural tracking, and mass surveillance. The use of AI-powered surveillance in corporate data mining, law enforcement profiling, and government tracking has drawn criticism in recent years. Without clear regulations and transparency, AI risks undermining rather than defending basic rights. 

AI and data ethics

Despite encouraging developments, there are numerous instances of AI-driven inventions going awry, raising serious questions. The facial recognition company Clearview AI amassed one of the largest facial recognition databases in the world by scraping billions of photos from social media without consent. Clearview's technology was employed by governments and law enforcement organisations across the globe, prompting lawsuits and regulatory action over mass surveillance. 

The UK Department for Work and Pensions used an AI system to detect welfare fraud. An internal investigation suggested that the system disproportionately targeted people based on their age, disability, marital status, and nationality. This bias resulted in certain groups being unfairly singled out for fraud investigations, raising questions about discrimination and the ethical use of artificial intelligence in public services. Despite earlier assurances of impartiality, the findings have fuelled calls for greater openness and oversight of government AI use. 

Regulations and consumer protection

The ethical use of AI is being regulated by governments worldwide, with a number of significant regulations having an immediate impact on consumers. The AI Act of the European Union, which is scheduled to go into force in 2025, divides AI applications into risk categories. 

Strict regulations will apply to high-risk technologies, such as biometric surveillance and facial recognition, to guarantee transparency and ethical deployment. The EU's commitment to responsible AI governance is further reinforced by the prospect of severe sanctions for non-compliant companies. 

In the United States, California's Consumer Privacy Act gives individuals more control over their personal data. Consumers have the right to know what information firms gather about them, to request its erasure, and to opt out of data sales. This law adds an important layer of privacy protection in an era when AI-powered data processing is becoming more common. 

The White House has recently introduced the AI Bill of Rights, a framework aimed at encouraging responsible AI practices. While not legally enforceable, it emphasises the need for privacy, transparency, and algorithmic fairness, pointing to a broader push for ethical AI development in policymaking.

Nearly Half of Companies Lack AI-driven Cyber Threat Plans, Report Finds

 

Mimecast has found that over 55% of organisations do not have specific plans in place to deal with AI-driven cyberthreats. The cybersecurity company's latest "State of Human Risk" report, based on a global survey of 1,100 IT security professionals, highlights growing concerns about insider threats, cybersecurity budget shortfalls, and vulnerabilities related to artificial intelligence. 

According to the report, establishing a structured cybersecurity strategy has improved the risk posture of 96% of organisations. The threat landscape is still becoming more complicated, though, and insider threats and AI-driven attacks are posing new challenges for security leaders. 

“Despite the complexity of challenges facing organisations—including increased insider risk, larger attack surfaces from collaboration tools, and sophisticated AI attacks—organisations are still too eager to simply throw point solutions at the problem,” stated Mimecast’s human risk strategist VP, Masha Sedova. “With short-staffed IT and security teams and an unrelenting threat landscape, organisations must shift to a human-centric platform approach that connects the dots between employees and technology to keep the business secure.” 

According to the survey, 95% of organisations use AI for insider risk assessments, endpoint security, and threat detection, yet 81% are concerned about data leakage from generative AI (GenAI) technology. More than half lack defined tactics for resisting AI-driven attacks, and 46% are not confident in their ability to defend against AI-powered phishing and deepfake threats.

Insider security incidents have increased by 43%, and 66% of IT leaders expect data loss from internal sources to grow over the next year. The average cost of insider-driven data breaches, leaks, or theft is $13.9 million per incident, according to the research. Furthermore, 79% of organisations believe the increased use of collaboration tools has heightened security concerns, leaving them more vulnerable to both deliberate and accidental data breaches. 

With only 8% of employees responsible for 80% of security incidents, the report highlights a move away from traditional security awareness training towards proactive human risk management. To identify and eliminate threats early, organisations are implementing behavioural analytics and AI-driven monitoring. That 72% of security leaders believe human-centric cybersecurity solutions will be essential over the next five years signals a shift towards more sophisticated threat detection and risk mitigation techniques.

Terror Outfits Are Using Crypto Funds For Donations in India: TRM Labs

 

TRM Labs, a blockchain intelligence firm based in San Francisco and recognised by the World Economic Forum, recently published a report revealing links between the Islamic State Khorasan Province (ISKP) and ISIS-affiliated fund-collecting networks in India. ISKP, an Afghanistan-based terrorist outfit, is reportedly using the cryptocurrency Monero (XMR) to gather funds.

Following the departure of US forces from Afghanistan, the ISKP terrorist group garnered significant attention. The "TRM Labs 2025 Crypto Crime Report," published on February 10th, focuses on illicit cryptocurrency transactions in 2024. According to the report, illicit transactions fell by 24% compared to 2023; however, it also emphasises the evolving techniques employed by terrorist organisations. 

TRM Labs' report uncovered on-chain ties between ISKP-affiliated addresses and covert fundraising campaigns in India; in other words, connections between wallet addresses that are visible in the transaction records of the blockchain itself. The TRM report states that the ISKP has begun receiving donations in Monero (XMR). 
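
In its simplest form, this kind of on-chain analysis treats observed transfers as edges in a graph and walks outward from a known address. The Python sketch below illustrates the idea with entirely hypothetical addresses; note that Monero's privacy features hide exactly this data, so in practice such graphs are built from transparent chains, exchange records, and off-chain intelligence.

    from collections import deque

    # Hypothetical edge list of observed transfers: (sender, receiver)
    transfers = [
        ("donor_wallet_1", "collector_A"),
        ("donor_wallet_2", "collector_A"),
        ("collector_A", "treasury_X"),
        ("unrelated_1", "unrelated_2"),
    ]

    def connected_addresses(seed, edges):
        """Breadth-first walk over the transfer graph from a seed address."""
        neighbours = {}
        for src, dst in edges:
            neighbours.setdefault(src, set()).add(dst)
            neighbours.setdefault(dst, set()).add(src)  # treat links as undirected
        seen, queue = {seed}, deque([seed])
        while queue:
            addr = queue.popleft()
            for nxt in neighbours.get(addr, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return seen - {seed}

    print(connected_addresses("treasury_X", transfers))
    # {'collector_A', 'donor_wallet_1', 'donor_wallet_2'}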

News reports state that Voice of Khorasan, a periodical produced by ISKP's media branch, al-Azaim, announced the organisation's first donation drive soliciting Monero. Since then, the group's fundraising appeals have consistently included requests for Monero donations. 

According to the report, ISKP and other terrorist organisations increasingly favour Monero because its blockchain anonymity capabilities help conceal transactions. Monero currently trades at around ₹19,017.77. However, the report emphasises that terrorist groups are likely to prefer more stable cryptocurrencies over Monero for the foreseeable future, given its volatility and the possibility of crackdowns. 

Furthermore, reliance on cryptocurrency mixers and anonymous wallets has risen. Online forums have become the primary venues for exchanging guidance on best practices and for locating providers with the strongest security guarantees. People are also using fake proofs of identity to get around the Know Your Customer (KYC) rules enforced by exchanges, making it challenging for law enforcement to trace illicit transactions. 

In contrast to Bitcoin and other well-known digital assets, Monero has gained attention for sophisticated privacy features that make transactions harder to trace, which makes it a tempting option for those engaged in illicit financial activity.