In a likely phishing attempt, more than four employees of the Kasaragod and Wayanad Collectorates received WhatsApp messages from accounts impersonating their district Collectors and asking for urgent money transfers. The numbers have since been reported to the cyber police, according to Collectorate officials.
The texts came from Vietnam-based numbers but displayed the profile pictures of the respective Collectors, Inbasekar K in Kasaragod and D R Meghasree in Wayanad.
In one incident, the scammers also shared a Google Pay number, but the targeted employee did not proceed with the payment. According to an official, "the employees who received the messages were saved simply because they recognised the Collector’s tone and style of communication."
Two employees from Wayanad received the texts, each from a different Vietnam-based number. In Kasaragod, Collector Inbasekar said several employees received the phishing texts on WhatsApp, and two of them reported the incident. No employee lost money.
The scam used a similar script in both districts. The first text read: “Hello, how are you? Where are you currently?” In Wayanad, the first message was sent around 4 pm, and in Kasaragod, around 5:30 pm. When an employee replied, a follow-up text arrived: “Very good. Please do something urgently.” The exchange followed the pitch typically used in such scams.
The numbers have been reported to the cyber police. According to Wayanad officials, "Once the messages were identified as fake, screenshots were immediately circulated across all internal WhatsApp groups." The cyber unit has blocked both the Vietnam-linked numbers and the Google Pay number.
The Kasaragod Collector cautioned the public and staff to be careful when receiving messages asking for money transfers. Coincidentally, in both incidents the texts went to staff working on the Special Intensive Revision of electoral rolls, suggesting the scammers sought to exploit the pressure under which booth-level employees are working.
According to cybersecurity experts, impersonation scams are increasingly targeting senior government officials, with scammers exploiting hierarchical structures to push officials into acting quickly. “Police have urged government employees and the public to avoid responding to unsolicited WhatsApp messages requesting money, verify communication through official phone numbers or email, and report suspicious messages immediately to cybercrime authorities,” the New Indian Express reported.
An ongoing security incident involving Gainsight's customer-management platform has raised fresh alarms about how deeply third-party integrations can affect cloud environments. The breach centers on compromised OAuth tokens associated with Gainsight's Salesforce connectors, and it remains unclear how many organizations were affected or what type of information was accessed.
Salesforce was the first to flag suspicious activity originating from Gainsight's connected applications. As a precautionary measure, Salesforce revoked all associated access tokens and temporarily disabled the affected integrations. The company also released detailed indicators of compromise, timelines of malicious activity, and guidance urging customers to review authentication logs and API usage within their own environments.
Gainsight later confirmed that unauthorized parties misused certain OAuth tokens linked to its Salesforce-connected app. According to its leadership, only a small number of customers have so far reported confirmed data impact. However, several independent security teams, including Google's Threat Intelligence Group, reported signs that the intrusion may have reached far more Salesforce instances than initially acknowledged. These differing numbers are not unusual: supply-chain incidents often reveal their full extent only after weeks of log analysis and correlation.
At this time, investigators understand the attack as a case of token abuse rather than a failure of Salesforce's underlying platform. OAuth tokens are long-lived credentials that let approved applications make API calls on a customer's behalf. Once attackers obtain them, they can pull CRM records through legitimate channels, which makes detection far harder. Because the approach bypasses common login checks, Salesforce has made log review and token rotation its immediate priorities.
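To make the mechanism concrete, the minimal Python sketch below shows how any holder of a valid access token can query Salesforce's standard REST API; the instance URL, token, API version, and query are placeholder assumptions. From the platform's point of view, such a request looks like ordinary traffic from an approved connected app, which is why token theft is hard to spot at login time.

```python
import requests

# Hypothetical values for illustration only.
INSTANCE_URL = "https://example.my.salesforce.com"   # tenant-specific instance URL
ACCESS_TOKEN = "<oauth-access-token>"                # token issued to a connected app
API_VERSION = "v58.0"                                # any supported REST API version

def run_soql(query: str) -> dict:
    """Run a SOQL query with a bearer token, exactly as a connected app would."""
    resp = requests.get(
        f"{INSTANCE_URL}/services/data/{API_VERSION}/query/",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        params={"q": query},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # A routine-looking CRM read; nothing here triggers an interactive login.
    records = run_soql("SELECT Id, Name FROM Account LIMIT 10")
    print(records.get("totalSize"))
```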
To enhance visibility, Gainsight has engaged Mandiant to conduct a forensic investigation into the incident. Investigators are examining historical logs, token behavior, connector activity, and cross-platform data flows to trace the attacker's movements and determine whether other services were affected. As a precaution, Gainsight has also worked with platforms including HubSpot, Zendesk, and Gong to temporarily revoke related tokens until they can be confirmed safe to restore.
The incident resembles other attacks this year in which Salesforce integrations were abused to siphon customer records without exploiting any direct vulnerability in Salesforce itself. The repeated pattern illustrates a structural challenge: an organization may secure its main cloud platform rigorously, yet a single compromised integration can open a path to much wider unauthorized access.
For customers, the recommended steps are as straightforward as ever: monitor Salesforce authentication and API logs for anomalous access patterns; revoke or rotate existing OAuth tokens; reduce third-party app permissions to the bare minimum; and, where possible, apply IP restrictions or allowlists to limit the sources from which API calls can be made.
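As a rough sketch of the first two recommendations, the snippet below pulls recent LoginHistory records over the REST API and flags logins from unexpected source networks. LoginHistory and its fields are standard Salesforce objects, but the instance URL, token, and expected network ranges are assumptions to replace with your own; a fuller deployment would also review connected-app token activity and feed results into a SIEM.

```python
import ipaddress
import requests

# Hypothetical values; replace with your org's details.
INSTANCE_URL = "https://example.my.salesforce.com"
ACCESS_TOKEN = "<oauth-access-token>"
API_VERSION = "v58.0"

# Networks you expect logins and integrations to come from (assumption for this sketch).
EXPECTED_NETS = [ipaddress.ip_network("203.0.113.0/24")]

SOQL = (
    "SELECT UserId, LoginTime, SourceIp, Status, Application "
    "FROM LoginHistory WHERE LoginTime = LAST_N_DAYS:7"
)

resp = requests.get(
    f"{INSTANCE_URL}/services/data/{API_VERSION}/query/",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    params={"q": SOQL},
    timeout=30,
)
resp.raise_for_status()

for rec in resp.json().get("records", []):
    ip = rec.get("SourceIp")
    if not ip:
        continue
    try:
        addr = ipaddress.ip_address(ip)
    except ValueError:
        continue  # some rows contain non-IP placeholder values
    if not any(addr in net for net in EXPECTED_NETS):
        # Unexpected source network: worth correlating with token and API activity.
        print(rec["LoginTime"], rec["UserId"], ip, rec["Status"], rec.get("Application"))
```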
Both companies say they will provide further updates and support affected customers. The incident serves as yet another reminder that in modern cloud ecosystems, the security of one vendor often depends on the security practices of every vendor in its integration chain.
A new phishing operation is misleading users through an extremely subtle visual technique that alters the appearance of Microsoft’s domain name. Attackers have registered the look-alike address “rnicrosoft(.)com,” which replaces the single letter m with the characters r and n positioned closely together. The small difference is enough to trick many people into believing they are interacting with the legitimate site.
This method is a form of typosquatting where criminals depend on how modern screens display text. Email clients and browsers often place r and n so closely that the pair resembles an m, leading the human eye to automatically correct the mistake. The result is a domain that appears trustworthy at first glance although it has no association with the actual company.
Experts note that phishing messages built around this tactic often copy Microsoft’s familiar presentation style. Everything from symbols to formatting is imitated to encourage users to act without closely checking the URL. The campaign takes advantage of predictable reading patterns where the brain prioritizes recognition over detail, particularly when the user is scanning quickly.
The deception becomes stronger on mobile screens. Limited display space can hide the entire web address and the address bar may shorten or disguise the domain. Criminals use this opportunity to push malicious links, deliver invoices that look genuine, or impersonate internal departments such as HR teams. Once a victim believes the message is legitimate, they are more likely to follow the link or download a harmful attachment.
The “rn” substitution is only one example of a broader pattern. Typosquatting groups also replace the letter o with the number zero, add hyphens to create official-sounding variations, or register sites with different top-level domains that resemble the original brand. All of these are intended to mislead users into entering passwords or sending sensitive information.
Security specialists advise users to verify every unexpected message before interacting with it. Expanding the full sender address exposes inconsistencies that the display name may hide. Checking links by hovering over them, or using long-press previews on mobile devices, can reveal whether the destination is legitimate. Reviewing email headers, especially the Reply-To field, can also uncover signs that responses are being redirected to an external mailbox controlled by attackers.
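As a simple illustration of the header checks described above, the Python sketch below (standard library only) parses a raw .eml file and flags messages whose Reply-To domain differs from the From domain, or whose sender domain uses the "rn"-for-"m" look-alike. The expected brand domain and the look-alike folding are assumptions for illustration, not an exhaustive detection rule.

```python
import sys
from email import policy
from email.parser import BytesParser
from email.utils import parseaddr

def domain_of(header_value) -> str:
    """Extract the domain part of an address header, lower-cased."""
    _, addr = parseaddr(str(header_value or ""))
    return addr.rsplit("@", 1)[-1].lower() if "@" in addr else ""

def check_message(path: str) -> None:
    with open(path, "rb") as fh:
        msg = BytesParser(policy=policy.default).parse(fh)

    from_domain = domain_of(msg.get("From", ""))
    reply_domain = domain_of(msg.get("Reply-To", ""))

    if reply_domain and reply_domain != from_domain:
        print(f"[!] Reply-To domain ({reply_domain}) differs from From domain ({from_domain})")

    # Fold the common 'rn' -> 'm' look-alike before comparing to the expected brand domain.
    if from_domain != "microsoft.com" and from_domain.replace("rn", "m") == "microsoft.com":
        print(f"[!] Sender domain {from_domain} looks like a Microsoft look-alike")

if __name__ == "__main__":
    check_message(sys.argv[1])
```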
When an email claims that a password reset or account change is required, the safest approach is to ignore the provided link. Instead, users should manually open a new browser tab and visit the official website. Organisations are encouraged to conduct repeated security awareness exercises so employees do not react instinctively to familiar-looking alerts.
Below are common variations used in these attacks; a simple detection sketch follows the list:
• Letter Pairing: r and n are combined to imitate m as seen in rnicrosoft(.)com.
• Number Replacement: the letter o is switched with the number zero in addresses like micros0ft(.)com.
• Added Hyphens: attackers introduce hyphens to create domains that appear official, such as microsoft-support(.)com.
• Domain Substitution: similar names are created by altering only the top-level domain, for example microsoft(.)co.
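To show how the first three variations can be caught automatically, the short Python sketch below folds common look-alike substitutions back to their originals and compares the result against a protected brand name. The brand list and substitution table are small assumed samples; production tooling relies on curated homoglyph data, and catching top-level-domain swaps additionally requires a list of the brand's legitimately registered domains.

```python
# Hypothetical brands to protect; extend as needed.
PROTECTED_BRANDS = {"microsoft"}

# Common look-alike substitutions seen in typosquatting (assumed sample, not exhaustive).
SUBSTITUTIONS = {"rn": "m", "0": "o", "1": "l", "vv": "w"}

def normalise(label: str) -> str:
    """Fold common look-alike character sequences back to their ASCII originals."""
    label = label.lower()
    for fake, real in SUBSTITUTIONS.items():
        label = label.replace(fake, real)
    return label

def looks_like_typosquat(domain: str) -> bool:
    """Return True if the domain imitates a protected brand without being that brand."""
    # Take the leftmost label, e.g. 'rnicrosoft' from 'rnicrosoft.com'.
    label = domain.lower().split(".")[0]
    folded = normalise(label.replace("-", ""))
    for brand in PROTECTED_BRANDS:
        if brand in folded and label != brand:
            return True
    return False

for candidate in ["rnicrosoft.com", "micros0ft.com", "microsoft-support.com", "microsoft.com"]:
    print(candidate, "->", "suspicious" if looks_like_typosquat(candidate) else "ok")
```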
This phishing strategy succeeds because it relies on human perception rather than technical flaws. Recognising these small changes and adopting consistent verification habits remain the most effective protections against such attacks.
Many organisations are starting to recognise a security problem that has been forming silently in the background. Conversations employees hold with public AI chatbots can accumulate into a long-term record of sensitive information, behavioural patterns, and internal decision-making. As reliance on AI tools increases, these stored interactions may become a serious vulnerability that companies have not fully accounted for.
The concern resurfaced after a viral trend in late 2024 in which social media users asked AI models to highlight things they “might not know” about themselves. Most treated it as a novelty, but the trend revealed a larger issue. Major AI providers routinely retain prompts, responses, and related metadata unless users disable retention or use enterprise controls. Over extended periods, these stored exchanges can unintentionally reveal how employees think, communicate, and handle confidential tasks.
This risk becomes more severe when considering the rise of unapproved AI use at work. Recent business research shows that while the majority of employees rely on consumer AI tools to automate or speed up tasks, only a fraction of companies officially track or authorise such usage. This gap means workers frequently insert sensitive data into external platforms without proper safeguards, enlarging the exposure surface beyond what internal security teams can monitor.
Vendor assurances do not fully eliminate the risk. Although companies like OpenAI, Google, and others emphasize encryption and temporary chat options, their systems still operate within legal and regulatory environments. One widely discussed court order in 2025 required the preservation of AI chat logs, including previously deleted exchanges. Even though the order was later withdrawn and the company resumed standard deletion timelines, the case reminded businesses that stored conversations can resurface unexpectedly.
Technical weaknesses also contribute to the threat. Security researchers have uncovered misconfigured databases operated by AI firms that contained user conversations, internal keys, and operational details. Other investigations have demonstrated that prompt-based manipulation in certain workplace AI features can cause private channel messages to leak. These findings show that vulnerabilities do not always come from user mistakes; sometimes the supporting AI infrastructure itself becomes an entry point.
Criminals have already shown how AI-generated impersonation can be exploited. A notable example involved attackers using synthetic voice technology to imitate an executive, tricking an employee into transferring funds. As AI models absorb years of prompt history, attackers could use stylistic and behavioural patterns to impersonate employees, tailor phishing messages, or replicate internal documents.
Despite these risks, many companies still lack comprehensive AI governance. Studies reveal that employees continue to insert confidential data into AI systems, sometimes knowingly, because it speeds up their work. Compliance requirements such as GDPR’s strict data minimisation rules make this behaviour even more dangerous, given the penalties for mishandling personal information.
Experts advise organisations to adopt structured controls. This includes building an inventory of approved AI tools, monitoring for unsanctioned usage, conducting risk assessments, and providing regular training so staff understand what should never be shared with external systems. Some analysts also suggest that instead of banning shadow AI outright, companies should guide employees toward secure, enterprise-level AI platforms.
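One concrete way to begin the monitoring step is to scan outbound web-proxy logs for consumer AI endpoints. The sketch below is a minimal example: the watched domain list is an assumed, non-exhaustive sample, and the log is assumed to be a CSV export with 'user' and 'host' columns, which would need adapting to your proxy's actual format.

```python
import csv
from collections import Counter

# Consumer AI endpoints to watch for (assumed, non-exhaustive sample).
WATCHED_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
    "copilot.microsoft.com",
}

def scan_proxy_log(path: str) -> Counter:
    """Count requests per (user, host) to watched AI domains in a CSV proxy export."""
    hits = Counter()
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            host = (row.get("host") or "").lower()
            if any(host == d or host.endswith("." + d) for d in WATCHED_DOMAINS):
                hits[(row.get("user", "unknown"), host)] += 1
    return hits

if __name__ == "__main__":
    # Hypothetical export file name; replace with your proxy's log export.
    for (user, host), count in scan_proxy_log("proxy_export.csv").most_common(20):
        print(f"{user:<20} {host:<28} {count}")
```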
If companies fail to act, each casual AI conversation can slowly accumulate into a dataset capable of exposing confidential operations. While AI brings clear productivity benefits, unmanaged use may convert everyday workplace conversations into one of the most overlooked security liabilities of the decade.
Akira, one of the most active ransomware operations this year, has expanded its capabilities and increased the scale of its attacks, according to new threat intelligence shared by global security agencies. The group’s operators have upgraded their ransomware toolkit, continued to target a broad range of sectors, and sharply increased the financial impact of their attacks.
Data collected from public extortion portals shows that, by the end of September 2025, the group had claimed approximately $244.17 million in ransom proceeds. Analysts note that this figure represents a steep rise compared to estimates released in early 2024. Current tracking data places Akira second in overall activity among hundreds of monitored ransomware groups, with more than 620 victim organisations listed this year.
The growing number of incidents has prompted an updated joint advisory from international cyber authorities. The latest report outlines newly observed techniques, warns of the group’s expanded targeting, and urges all organisations to review their defensive posture.
Researchers confirm that Akira has introduced a new ransomware strain, commonly referenced as Akira v2. This version is designed to encrypt files at higher speeds and make data recovery significantly harder. Systems affected by the new variant often show one of several extensions, which include akira, powerranges, akiranew, and aki. Victims typically find ransom instructions stored as text files in both the main system directory and user folders.
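A quick triage sketch based on the indicators above: it walks a directory tree looking for the reported encrypted-file extensions and for text files whose names suggest ransom notes. The note-name fragments are an assumption (the advisory describes text files in the system root and user folders rather than fixed names), so treat matches as leads to investigate, not proof of compromise.

```python
import os
import sys

# File extensions reported for Akira-encrypted files.
ENCRYPTED_EXTS = {".akira", ".powerranges", ".akiranew", ".aki"}

# Assumed name fragments for ransom-note text files; adjust to your own intelligence.
NOTE_HINTS = ("akira", "ransom", "recover")

def triage(root: str) -> None:
    """Report files matching Akira-style extensions or likely ransom notes."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            lower = name.lower()
            if os.path.splitext(lower)[1] in ENCRYPTED_EXTS:
                print(f"[encrypted?] {path}")
            elif lower.endswith(".txt") and any(hint in lower for hint in NOTE_HINTS):
                print(f"[possible note] {path}")

if __name__ == "__main__":
    triage(sys.argv[1] if len(sys.argv) > 1 else ".")
```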
Investigations show that Akira actors gain entry through several familiar but effective routes. These include exploiting security gaps in edge devices and backup servers, taking advantage of authentication bypass and scripting flaws, and using buffer overflow vulnerabilities to run malicious code. Stolen or brute-forced credentials remain a common factor, especially when multi-factor authentication is disabled.
Once inside a network, the attackers quickly establish long-term access. They generate new domain accounts, including administrative profiles, and have repeatedly created an account named itadm during intrusions. The group also uses legitimate system tools to explore networks and identify sensitive assets. This includes commands used for domain discovery and open-source frameworks designed for remote execution. In many cases, the attackers uninstall endpoint detection products, change firewall rules, and disable antivirus tools to remain unnoticed.
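For the account-creation behaviour specifically, defenders on Windows can watch Security event ID 4720 (user account created) and alert on names such as itadm. The sketch below shells out to the built-in wevtutil tool and does a crude string match over the text output; in practice you would parse the event XML properly and route alerts into a SIEM rather than printing them.

```python
import subprocess

# Account names reported in Akira intrusions (itadm), plus any local watchlist additions.
SUSPICIOUS_NAMES = {"itadm"}

def recent_account_creations(max_events: int = 200) -> list[str]:
    """Fetch recent 'user account created' events (ID 4720) from the Security log."""
    output = subprocess.run(
        [
            "wevtutil", "qe", "Security",
            "/q:*[System[(EventID=4720)]]",
            "/f:text",
            f"/c:{max_events}",
            "/rd:true",  # newest events first
        ],
        capture_output=True,
        text=True,
        check=True,
    ).stdout
    return output.split("Event[")  # crude split into per-event chunks

if __name__ == "__main__":
    for chunk in recent_account_creations():
        if any(name in chunk.lower() for name in SUSPICIOUS_NAMES):
            print("[!] Possible suspicious account creation:\n", chunk[:400])
```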
The group has also expanded its focus to virtual and cloud-based environments. Security teams recently observed the encryption of virtual machine disk files on Nutanix AHV, in addition to previous activity on VMware ESXi and Hyper-V platforms. In one incident, operators temporarily powered down a domain controller to copy protected virtual disk files and load them onto a new virtual machine, allowing them to access privileged credentials.
Command and control activity is often routed through encrypted tunnels, and recent intrusions show the use of tunnelling services to mask traffic. Authorities warn that data theft can occur within hours of initial access.
Security agencies stress that the most effective defence remains prompt patching of known exploited vulnerabilities, enforcing multi-factor authentication on all remote services, monitoring for unusual account creation, and ensuring that backup systems are fully secured and tested.