
Attackers Hijack Microsoft Email Accounts to Launch Phishing Campaign Against Energy Firms

 


Cybercriminals have compromised Microsoft email accounts belonging to organizations in the energy sector and used those trusted inboxes to distribute large volumes of phishing emails. In at least one confirmed incident, more than 600 malicious messages were sent from a single hijacked account.

Microsoft security researchers explained that the attackers did not rely on technical exploits or system vulnerabilities. Instead, they gained access by using legitimate login credentials that were likely stolen earlier through unknown means. This allowed them to sign in as real users, making the activity harder to detect.

The attack began with emails that appeared routine and business-related. These messages included Microsoft SharePoint links and subject lines suggesting formal documents, such as proposals or confidentiality agreements. To view the files, recipients were asked to authenticate their accounts.

When users clicked the SharePoint link, they were redirected to a fraudulent website designed to look legitimate. The site prompted them to enter their Microsoft login details. By doing so, victims unknowingly handed over valid usernames and passwords to the attackers.

After collecting credentials, the attackers accessed the compromised email accounts from different IP addresses. They then created inbox rules that automatically deleted incoming emails and marked messages as read. This step helped conceal the intrusion and prevented account owners from noticing unusual activity.
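Defenders can hunt for this tradecraft directly. As a rough illustration (not part of Microsoft's published guidance), the Python sketch below queries the Microsoft Graph messageRules endpoint and flags inbox rules that silently delete mail or mark it as read; the token, permissions, and mailbox list are assumptions to adapt to your own tenant.

```python
"""
Sketch: flag suspicious Exchange Online inbox rules via Microsoft Graph.
Assumes an app registration with mailbox-settings read permissions and a
bearer token acquired out of band (e.g., via MSAL); mailbox list is illustrative.
"""
import os
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}"}

mailboxes = ["alice@example.com", "bob@example.com"]   # hypothetical accounts

for upn in mailboxes:
    url = f"{GRAPH}/users/{upn}/mailFolders/inbox/messageRules"
    resp = requests.get(url, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    for rule in resp.json().get("value", []):
        actions = rule.get("actions") or {}
        # Rules that silently delete mail or mark it as read are a common
        # post-compromise tactic for hiding phishing replies and bounce notices.
        if actions.get("delete") or actions.get("markAsRead"):
            print(f"[!] {upn}: rule '{rule.get('displayName')}' "
                  f"delete={actions.get('delete')} markAsRead={actions.get('markAsRead')}")
```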

Using these compromised inboxes, the attackers launched a second wave of phishing emails. These messages were sent not only to external contacts but also to colleagues and internal distribution lists. Recipients were selected based on recent email conversations found in the victim’s inbox, increasing the likelihood that the messages would appear trustworthy.

In this campaign, the attackers actively monitored inbox responses. They removed automated replies such as out-of-office messages and undeliverable notices. They also read replies from recipients and responded to questions about the legitimacy of the emails. All such exchanges were later deleted to erase evidence.

Any employee within an energy organization who interacted with the malicious links was also targeted for credential theft, allowing the attackers to expand their access further.

Microsoft confirmed that the activity began in January and described it as a short-duration, multi-stage phishing operation that was quickly disrupted. The company did not disclose how many organizations were affected, identify the attackers, or confirm whether the campaign is still active.

Security experts warn that simply resetting passwords may not be enough in these attacks. Because attackers can interfere with multi-factor authentication settings, they may maintain access even after credentials are changed. For example, attackers can register their own device to receive one-time authentication codes.
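Reviewing which authentication methods are registered to an account is a useful follow-up after a suspected compromise, since an attacker-added authenticator can survive a password reset. The sketch below is a minimal example against the Microsoft Graph authentication-methods endpoint; the bearer token and the account name are placeholders.

```python
"""
Sketch: list the MFA methods registered for a user so an admin can spot
unexpected, attacker-added authenticators. Assumes a token with the
UserAuthenticationMethod.Read.All permission; the account is hypothetical.
"""
import os
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}"}

def list_auth_methods(upn: str) -> None:
    resp = requests.get(f"{GRAPH}/users/{upn}/authentication/methods",
                        headers=HEADERS, timeout=30)
    resp.raise_for_status()
    for method in resp.json().get("value", []):
        # The @odata.type field identifies the method (Authenticator app,
        # phone number, FIDO2 key, ...); entries the user cannot account for
        # should be removed before the account is considered recovered.
        print(upn, method.get("@odata.type"), method.get("id"))

list_auth_methods("alice@example.com")
```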

Despite these risks, multi-factor authentication remains a critical defense against account compromise. Microsoft also recommends using conditional access controls that assess login attempts based on factors such as location, device health, and user role. Suspicious sign-ins can then be blocked automatically.
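What such a policy looks like varies by tenant, but as a rough sketch the snippet below creates a report-only conditional access policy through Microsoft Graph that requires MFA for medium- and high-risk sign-ins outside trusted locations; treat the values as illustrative starting points rather than a recommended baseline.

```python
"""
Sketch: create a report-only conditional access policy via Microsoft Graph.
Property names follow the v1.0 conditionalAccessPolicy resource; requires a
token with Policy.ReadWrite.ConditionalAccess. Values are illustrative.
"""
import os
import requests

policy = {
    "displayName": "Require MFA for risky sign-ins outside trusted locations",
    "state": "enabledForReportingButNotEnforced",   # report-only while tuning
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["All"]},
        "signInRiskLevels": ["medium", "high"],
        "locations": {"includeLocations": ["All"],
                      "excludeLocations": ["AllTrusted"]},
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}"},
    json=policy,
    timeout=30,
)
resp.raise_for_status()
print("Created policy:", resp.json().get("id"))
```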

Additional protection can be achieved by deploying anti-phishing solutions that scan emails and websites for malicious activity. These measures, combined with user awareness, are essential as attackers increasingly rely on stolen identities rather than software flaws.


Cisco Patches ISE XML Flaw with Public Exploit Code

 

Cisco has recently addressed a significant security vulnerability in its Identity Services Engine (ISE) and ISE Passive Identity Connector (ISE-PIC), tracked as CVE-2026-20029. This medium-severity issue, scored at 4.9 out of 10, stems from improper XML parsing in the web-based management interface. Attackers with valid admin credentials could upload malicious XML files, enabling arbitrary file reads from the underlying operating system and exposing sensitive data.
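Cisco has not published the precise parsing weakness, but attacker-supplied XML that yields arbitrary file reads is the classic signature of XML External Entity (XXE) style flaws. The generic Python illustration below - a hypothetical example, not Cisco's code - shows how a parser that resolves external entities can disclose a local file, while a hardened parser rejects the same document outright.

```python
"""
Generic XXE illustration (hypothetical; unrelated to Cisco's implementation):
a parser that resolves external entities leaks a local file on a Linux host,
while defusedxml refuses the document before any file is read.
"""
from lxml import etree
import defusedxml.ElementTree as safe_et

MALICIOUS = b"""<?xml version="1.0"?>
<!DOCTYPE data [
  <!ENTITY leak SYSTEM "file:///etc/hostname">
]>
<data>&leak;</data>"""

# Unsafe: external entities are resolved, so the file contents end up in the tree.
unsafe_parser = etree.XMLParser(resolve_entities=True)
doc = etree.fromstring(MALICIOUS, unsafe_parser)
print("Unsafe parser leaked:", doc.text.strip())

# Safe: defusedxml raises EntitiesForbidden as soon as it sees the entity declaration.
try:
    safe_et.fromstring(MALICIOUS)
except Exception as exc:
    print("Hardened parser blocked the document:", type(exc).__name__)
```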

The flaw poses a substantial risk to enterprise networks, where ISE is widely deployed for centralized access control. Enterprises rely on ISE to manage who and what accesses their infrastructure, making it a prime target for cybercriminals seeking to steal credentials or configuration files. Although no in-the-wild exploitation has been confirmed, public proof-of-concept (PoC) exploit code heightens the urgency, echoing patterns from prior ISE vulnerabilities.

Past incidents underscore ISE's appeal to threat actors. In November 2025, sophisticated attackers exploited a maximum-severity zero-day (CVSS 10/10) to deploy custom backdoor malware, bypassing authentication entirely. Similarly, June 2025 patches fixed critical flaws with public PoCs, including arbitrary code execution risks in ISE and related platforms. These events highlight persistent scrutiny on Cisco's network access tools.

Mitigation demands immediate patching, as no workarounds exist. Affected versions require specific updates: migrate pre-3.2 releases to fixed ones; apply Patch 8 for 3.2 and 3.3; use Patch 4 for 3.4; and note that 3.5 is unaffected. Administrators must verify their ISE version and apply the precise patch to prevent data leaks, especially given the admin-credential prerequisite that insiders or compromised accounts could fulfill.

Organizations should prioritize auditing ISE deployments amid rising enterprise-targeted attacks. Regular vulnerability scans, credential hygiene, and monitoring for anomalous XML uploads are essential defenses. As PoC code circulates, patching remains the sole bulwark, reinforcing the need for swift action in securing network identities.

Online Misinformation and AI-Driven Fake Content Raise Concerns for Election Integrity

 

With elections drawing near, unease is spreading about how digital falsehoods might influence voter behavior. False narratives on social platforms may skew perception, according to officials and scholars alike. As artificial intelligence advances, deceptive content grows more convincing, slipping past scrutiny. Trust in core societal structures risks erosion under such pressure. Warnings come not just from academics but also from community leaders watching real-time shifts in public sentiment.  

Fake messages have recently circulated online, pretending to be from the City of York Council. Though they looked real, officials later stated these ads were entirely false. One showed a request for people willing to host asylum seekers; another asked volunteers to take down St George flags. A third offered work fixing road damage across neighborhoods. What made them convincing was their design - complete with official logos, formatting, and contact information typical of genuine notices. 

Without close inspection, someone scrolling quickly might believe them. Despite their authentic appearance, none of the programs mentioned were active or approved by local government. The resemblance to actual council material caused confusion until authorities stepped in to clarify. Blurred logos stood out immediately when BBC Verify examined the pictures. Wrong fonts appeared alongside misspelled words, often pointing toward artificial creation. 

Details such as fingers appeared twisted or incomplete - a frequent issue in computer-made visuals. One poster included an email address tied to a real council employee, though that person had no knowledge of the material. Websites referenced in some flyers simply did not exist online. Even so, plenty of individuals passed the content along without questioning its truth. A single fabricated post managed to spread through networks totaling over 500,000 followers. False appearances held strong appeal despite clear warning signs.

What spreads fast online is not always true. Clare Douglas, head of City of York Council, pointed out how today's technology amplifies old problems in new ways. False stories once moved slowly; now they race across devices at a pace that overwhelms fact-checking efforts. Trust fades when people see conflicting claims everywhere, especially around health or voting matters. Institutions lose ground not because facts disappear, but because attention scatters too widely. When doubt sticks longer than corrections, participation dips quietly over time.

Ahead of public meetings, tensions surfaced in various regions. Misinformation targeting asylum seekers and councils emerged online in Barnsley, according to Sir Steve Houghton, the council's leader. False stories spread further thanks to influencers who keep sharing them - profit often outweighs correction. Although government outlets issued clarifications, distorted messages continue to flood digital spaces. Their sheer number, combined with how long they linger, threatens trust between groups and raises risks for everyday security. Not everyone checks facts these days, according to Ilya Yablokov from the University of Sheffield's Disinformation Research Cluster. And because AI makes it easier than ever, faking believable content now takes little effort.

With just a small setup, someone can flood online spaces fast. What helps spread falsehoods is how busy people are - they skip checking details before passing things along. Instead, gut feelings or existing opinions shape what gets shared. Fabricated stories spreading locally might cost almost nothing to create, yet their impact on democracy can be deep. 

When misleading accounts reach more voters, specialists emphasize skills like questioning sources, checking facts, or understanding media messages - these help preserve confidence in public processes while supporting thoughtful engagement during voting events.

OpenAI Faces Court Order to Disclose 20 Million Anonymized ChatGPT Chats


OpenAI is challenging a sweeping discovery order in the current legal battle over artificial intelligence and intellectual property, a fight that is pushing courts to redefine how they balance innovation, privacy, and the enforcement of copyright.

On Wednesday, the artificial intelligence company asked a federal judge to overturn a ruling that requires it to disclose 20 million anonymized ChatGPT conversation logs, warning that even de-identified records may reveal sensitive information about users.

In the underlying dispute, the New York Times and several other news organizations allege that OpenAI infringed their copyrights by using their content, without authorization, to train its large language models.

On January 5, 2026, a federal district court in New York upheld two discovery orders requiring OpenAI to produce a substantial sample of ChatGPT interactions by the end of the year, a consequential milestone in litigation that sits at the intersection of copyright law, data privacy, and the rise of artificial intelligence.

The decision signals a growing willingness by courts to critically examine the internal data practices of AI developers, even as companies argue that disclosure of this sort could have far-reaching implications for user trust and for the confidentiality of the platforms themselves. At the center of the controversy are ChatGPT's conversation logs, which record both user prompts and the system's responses.

The plaintiffs argue that those logs are crucial to evaluating their infringement claims as well as OpenAI's asserted defenses, including fair use, precisely because they capture both prompts and responses. In July 2025, the plaintiffs sought production of a 120-million-log sample; OpenAI refused, citing the scale of the request and the privacy concerns involved.

OpenAI, which maintains billions of logs as part of its normal operations, initially resisted the request. It responded by proposing to produce 20 million conversations, stripped of all personally identifiable information and sensitive information, using a proprietary process that would ensure the data would not be manipulated. 

Plaintiffs agreed to this reduced sample as an interim measure, while reserving the right to pursue a broader one if the data proved insufficient. Tensions escalated in October 2025 when OpenAI changed its position, offering instead to run targeted keyword searches across the 20-million-log dataset and produce only the conversations those searches showed to directly implicate the plaintiffs' works.

In OpenAI's view, limiting disclosure to filtered results would better safeguard user privacy by preventing the exposure of unrelated communications. Plaintiffs swiftly rejected this approach and filed a new motion demanding release of the entire de-identified dataset.

On November 7, 2025, U.S. Magistrate Judge Ona Wang sided with the plaintiffs, ordering OpenAI to provide the full sample and denying the company's request for reconsideration. The judge ruled that access to both relevant and ostensibly irrelevant logs was necessary for a comprehensive and fair analysis of OpenAI's claims.

Even conversations that do not directly reference copyrighted material may bear on OpenAI's attempt to prove fair use. In assessing privacy risks, the court found that reducing the dataset from billions of records to 20 million, applying de-identification measures, and enforcing a standing protective order were together adequate to mitigate them.

As the litigation enters a more consequential phase and court-imposed production deadlines approach, Keker Van Nest, Latham & Watkins, and Morrison & Foerster are representing OpenAI in the matter.

Legal observers note that the order reflects a broader judicial posture toward artificial intelligence disputes: courts are increasingly willing to compel extensive discovery, even of anonymized data, to examine how large language models are trained and whether copyrighted material is involved.

A crucial aspect of this ruling is that it strengthens the procedural avenues for publishers and other content owners to challenge alleged copyright violations by AI developers. The ruling highlights the need for technology companies to be vigilant with their stewardship of large repositories of user-generated data, and the legal risks associated with retaining, processing, and releasing such data. 

The dispute has also intensified over allegations that OpenAI failed to suspend certain data deletion practices after the litigation commenced, potentially destroying evidence relevant to claims that some users bypassed publisher paywalls through OpenAI products.

Plaintiffs claim the deletions disproportionately affected free and subscription-tier user records, raising concerns about whether evidence preservation obligations were fully met. Microsoft, which is named as a co-defendant in the case, has been required to produce more than eight million anonymized Copilot interaction logs and has not faced similar data preservation complaints.

Dr. Ilia Kolochenko, CEO of ImmuniWeb, told CybersecurityNews that while the ruling represents a significant legal setback for OpenAI, it could also embolden other plaintiffs to pursue similar discovery strategies or to press stronger settlement positions in parallel proceedings.

In response to the allegations, the court has been asked to look more deeply into OpenAI's internal data governance practices, including through injunctions preventing further deletions until it is clear what remains and what can potentially be recovered. Beyond the courtroom, the case has been accompanied by intensifying investor scrutiny across the artificial intelligence industry.

With companies such as SpaceX and Anthropic preparing for possible public offerings at valuations that could reach hundreds of billions of dollars, market confidence increasingly depends on firms' ability to manage regulatory exposure, rising operational costs, and the competitive pressures of rapid artificial intelligence development.

Meanwhile, speculation about strategic acquisitions that could reshape the competitive landscape continues. Reports that OpenAI is exploring Pinterest highlight the strategic value of large volumes of user interaction data for improving product search and increasing ad revenue, both increasingly critical as major technology companies compete for real-time consumer engagement and data-driven growth.

The litigation has gained added urgency from the news organizations' detailed allegations that a significant volume of potentially relevant data was destroyed because OpenAI failed to preserve key evidence after the lawsuit was filed.

A court filing indicates that plaintiffs learned nearly 11 months ago that large quantities of ChatGPT output logs, covering a considerable number of Free, Pro, and Plus user conversations, had been deleted after the suit was filed, and at a disproportionately high rate.

Plaintiffs argue that users trying to circumvent paywalls were more likely to enable chat deletion, making this category of data the most likely to contain infringing material. The filings further assert that OpenAI has offered no rationale for the deletion of approximately one-third of all user conversations after the New York Times' complaint beyond citing what appeared to be an anomalous drop in usage around the New Year of 2024.

The news organizations also allege that OpenAI has continued routine deletion practices without implementing litigation holds, despite two additional spikes in mass deletions attributed to technical issues, while selectively retaining outputs relating to accounts mentioned in the publishers' complaints.

Citing testimony from OpenAI's associate general counsel, Mike Trinh, plaintiffs argue that OpenAI preserved the documents that substantiate its own defenses, while the records that could substantiate other parties' claims were not preserved.

According to the filings, the precise extent of the data loss remains unclear because OpenAI still refuses to disclose even basic details about what it does and does not erase, an approach that plaintiffs say contrasts with Microsoft's ability to preserve Copilot logs without similar difficulties.

Given Microsoft's failure so far to produce searchable Copilot logs, and in light of OpenAI's mass deletion of data, the news organizations are seeking a court order compelling Microsoft to produce searchable Copilot logs as soon as possible.

They have also asked the court to maintain the existing preservation orders preventing further permanent deletion of output data, to compel OpenAI to account accurately for how much output data has been destroyed across its products, and to clarify whether any of that information can be restored and examined for the litigation.

How Generative AI Is Accelerating Password Attacks on Active Directory

 

Active Directory remains the backbone of identity management for most organizations, which is why it continues to be a prime target for cyberattacks. What has shifted is not the focus on Active Directory itself, but the speed and efficiency with which attackers can now compromise it.

The rise of generative AI has dramatically reduced the cost and complexity of password-based attacks. Tasks that once demanded advanced expertise and substantial computing resources can now be executed far more easily and at scale.

Tools such as PassGAN mark a significant evolution in password-cracking techniques. Instead of relying on static wordlists or random brute-force attempts, these systems use adversarial learning to understand how people actually create passwords. With every iteration, the model refines its predictions based on real-world behavior.

The impact is concerning. Research indicates that PassGAN can crack 51% of commonly used passwords in under one minute and 81% within a month. The pace at which these models improve only increases the risk.

When trained using organization-specific breach data, public social media activity, or information from company websites, AI models can produce highly targeted password guesses that closely mirror employee habits.

How generative AI is reshaping password attack methods

Earlier password attacks followed predictable workflows. Attackers relied on dictionary lists, applied rule-based tweaks—such as replacing letters with symbols or appending numbers—and waited for successful matches. This approach was slow and computationally expensive.
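A minimal sketch of that older workflow is shown below: a tiny wordlist expanded with hard-coded leetspeak substitutions and appended digits and symbols (the words and rules are purely illustrative).

```python
"""
Sketch of the classic dictionary-plus-rules workflow: expand a small wordlist
with fixed character substitutions and common suffixes. Purely illustrative.
"""
from itertools import product

WORDLIST = ["summer", "dragon", "acme"]                 # toy dictionary
SUBS = {"a": ["a", "@"], "o": ["o", "0"], "s": ["s", "$"], "e": ["e", "3"]}
SUFFIXES = ["", "1", "123", "2024", "!", "2024!"]

def mangle(word: str):
    """Yield every substitution/suffix combination for one word."""
    options = [SUBS.get(ch, [ch]) for ch in word]
    for combo in product(*options):
        for suffix in SUFFIXES:
            yield "".join(combo).capitalize() + suffix

candidates = [c for w in WORDLIST for c in mangle(w)]
print(len(candidates), "guesses generated from", len(WORDLIST), "words")
```

Generative models replace these hand-written rules with learned, ranked guesses, which changes the attack in several ways: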
  • Pattern recognition at scale: Machine learning systems identify nuanced behaviors in password creation, including keyboard habits, substitutions, and the use of personal references. Instead of wasting resources on random guesses, attackers concentrate computing power on the most statistically likely passwords.
  • Smart credential variation: When leaked credentials are obtained from external breaches, AI can generate environment-specific variations. If “Summer2024!” worked elsewhere, the model can intelligently test related versions such as “Winter2025!” or “Spring2025!” rather than guessing blindly.
  • Automated intelligence gathering: Large language models can rapidly process publicly available data—press releases, LinkedIn profiles, product names—and weave that context into phishing campaigns and password spray attacks. What once took hours of manual research can now be completed in minutes.
  • Reduced technical barriers: Pre-trained AI models and accessible cloud infrastructure mean attackers no longer need specialized skills or costly hardware. The increased availability of high-performance consumer GPUs has unintentionally strengthened attackers’ capabilities, especially when organizations rent out unused GPU capacity.
Today, for roughly $5 per hour, attackers can rent eight RTX 5090 GPUs capable of cracking bcrypt hashes about 65% faster than previous generations.

Even when strong hashing algorithms and elevated cost factors are used, the sheer volume of password guesses now possible far exceeds what was realistic just a few years ago. Combined with AI-generated, high-probability guesses, the time needed to break weak or moderately strong passwords has dropped significantly.

Why traditional password policies are no longer enough

Many Active Directory password rules were designed before AI-driven threats became mainstream. Common complexity requirements—uppercase letters, lowercase letters, numbers, and symbols—often result in predictable structures that AI models are well-equipped to exploit.

"Password123!" meets complexity rules but follows a pattern that generative models can instantly recognize.

Similarly, enforced 90-day password rotations have lost much of their defensive value. Users frequently make minor, predictable changes such as adjusting numbers or referencing seasons. AI systems trained on breach data can anticipate these habits and test them during credential stuffing attacks.
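The predictability is easy to demonstrate. As a purely illustrative sketch, the few lines below take one leaked season-plus-year password and enumerate the obvious "rotated" successors that an attacker's tooling would try first:

```python
"""
Sketch: given one leaked password built on a season + year pattern, list the
small set of rotated successors a user is likely to pick next. Illustrative
of why minor, predictable changes add little real security.
"""
import re

SEASONS = ["Spring", "Summer", "Autumn", "Fall", "Winter"]

def rotation_guesses(leaked: str, years=(2024, 2025, 2026)):
    match = re.match(r"([A-Za-z]+)(\d{4})(\W*)$", leaked)
    if not match:
        return []
    word, _, tail = match.groups()
    base_words = SEASONS if word in SEASONS else [word]
    return [f"{w}{y}{tail}" for w in base_words for y in years]

print(rotation_guesses("Summer2024!"))   # just 15 guesses cover the likely rotations
```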

While basic multi-factor authentication (MFA) adds protection, it does not eliminate the risks posed by compromised passwords. If attackers bypass MFA through tactics like social engineering, session hijacking, or MFA fatigue, access to Active Directory may still be possible.

Defending Active Directory against AI-assisted attacks

Countering AI-enhanced threats requires moving beyond compliance-driven controls and focusing on how passwords fail in real-world attacks. Password length is often more effective than complexity alone.

AI models struggle more with long, random passphrases than with short, symbol-heavy strings. An 18-character passphrase built from unrelated words presents a much stronger defense than an 8-character complex password.
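The back-of-the-envelope arithmetic behind that claim, assuming a roughly 94-character printable set for the complex password and counting the passphrase as lowercase letters only, looks like this:

```python
"""
Rough guess-space comparison: 8 characters from ~94 printable symbols versus
18 characters even if the attacker knows the phrase is lowercase only.
Character-set sizes are illustrative assumptions.
"""
import math

complex_8 = 94 ** 8        # 8-character "complex" password, full printable ASCII
passphrase_18 = 26 ** 18   # 18-character lowercase passphrase

print(f"8-char complex password : ~2^{math.log2(complex_8):.0f} possibilities")
print(f"18-char lowercase phrase: ~2^{math.log2(passphrase_18):.0f} possibilities")
# Roughly 2^52 vs 2^85. The longer phrase wins on raw keyspace alone, and unlike
# typical human-chosen 8-character passwords it does not collapse into the
# predictable patterns generative models are trained to exploit.
```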

Equally critical is visibility into whether employee passwords have already appeared in breach datasets. If a password exists in an attacker’s training data, hashing strength becomes irrelevant—the attacker simply uses the known credential.
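Specops checks candidates against its own breach corpus, but the general technique can be illustrated with the free Have I Been Pwned "Pwned Passwords" range API, which uses k-anonymity so that only the first five characters of a password's SHA-1 hash ever leave your network:

```python
"""
Sketch: check whether a candidate password appears in known breach data via
the public Have I Been Pwned range API (k-anonymity: only the 5-character
SHA-1 prefix is sent). Illustrates the technique, not Specops' own service.
"""
import hashlib
import requests

def breach_count(password: str) -> int:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

print(breach_count("Password123!"))   # any non-zero count means the password is already burned
```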

Specops Password Policy and Breached Password Protection help organizations defend against over 4 billion known unique compromised passwords, including those that technically meet complexity rules but have already been stolen by malware.

The solution updates daily using real-world attack intelligence, ensuring protection against newly exposed credentials. Custom dictionaries that block company-specific terminology—such as product names, internal jargon, and brand references—further reduce the effectiveness of AI-driven reconnaissance.

When combined with passphrase support and robust length requirements, these measures significantly increase resistance to AI-generated password guessing.

Before applying new controls, organizations should assess their existing exposure. Specops Password Auditor provides a free, read-only scan of Active Directory to identify weak passwords, compromised credentials, and policy gaps—without altering the environment.

This assessment helps pinpoint where AI-powered attacks are most likely to succeed.

Generative AI has fundamentally shifted the balance of effort in password attacks, giving adversaries a clear advantage.

The real question is no longer whether defenses need to be strengthened, but whether organizations will act before their credentials appear in the next breach.

Suspicious Polymarket Bets Spark Insider Trading Fears After Maduro’s Capture

 

A sudden, massive bet surfaced just ahead of a major political development involving Venezuela’s leader. Days prior to Donald Trump revealing that Nicolás Maduro had been seized by U.S. authorities, an individual on Polymarket placed a highly profitable position. That trade turned a substantial gain almost instantly after the news broke. Suspicion now centers on how the timing could have been so precise. Information not yet public might have influenced the decision. The incident casts doubt on who truly knows what - and when - in digital betting arenas. Profits like these do not typically emerge without some edge. 

Hours before Trump spoke on Saturday, predictions about Maduro losing control by late January jumped fast on Polymarket. A single user, active for less than a month, made four distinct moves tied to Venezuela's political situation. That player started with $32,537 and ended with over $436,000 in returns. Instead of a name, only a digital wallet marks the profile. Who actually placed those bets has not come to light. 

That Friday afternoon, market signals began shifting - quietly at first. By late evening, the odds of Maduro being ousted had edged up to 11% from only 6.5% earlier. Then, overnight into January 3, something sharper unfolded: activity picked up fast just before word arrived via a post in which Trump claimed Maduro was under U.S. arrest, and the trades in question were placed only moments prior. The pattern hints at advance awareness - or sharp guesswork - since prices reacted well before confirmation surfaced. Despite repeated attempts, Polymarket offered no prompt reply regarding the odd betting patterns.

Still, unease is growing among regulators and lawmakers. According to Dennis Kelleher - who leads Better Markets, an independent organization focused on financial oversight - the bet carries every sign of being rooted in privileged knowledge. Nor was it just one trader who walked away with gains: others on Polymarket also pulled in sizable returns - tens of thousands of dollars - in the window before the news broke. That timing suggests information spread earlier than expected, with some clues likely slipping out ahead of formal announcements. The episode has sparked concern among American legislators.

On Monday, New York's Representative Ritchie Torres - affiliated with the Democratic Party - filed a bill targeting insider activity by public officials in forecast-based trading platforms. Should such individuals hold significant details not yet disclosed, involvement in these wagers would be prohibited under his plan. This move surfaces amid broader scrutiny over how loosely governed these speculative arenas remain. Prediction markets like Polymarket and Kalshi gained traction fast across the U.S., letting people bet on politics, economies, or world events. 

When the 2024 presidential race heated up, millions of dollars flowed into these sites. Insider trades face strict rules on Wall Street, yet forecasting platforms often escape similar control. Under Biden, authorities turned closer attention to these markets, increasing pressure across the sector. When Trump returned to power, conditions shifted, opening space for lighter supervision. Donald Trump Jr. serves in behind-the-scenes advisory roles at both Kalshi and Polymarket.

Though Kalshi explicitly prohibits insider trading - even by government staff using classified information - the Maduro wagering controversy exposes how regulators are struggling to keep up. Prediction platforms increasingly blur the lines between speculation, information asymmetry, and outright ethical breaches.
