CISA, the NSA, and the FBI have teamed up with cybersecurity agencies from the UK, Australia, and New Zealand to publish best-practice guidance for secure AI development. The principles laid out in the document offer a strong foundation for protecting AI data and safeguarding the reliability and accuracy of AI-driven outcomes.
The advisory comes at a crucial moment, as many businesses rush to integrate AI into their workplaces, often without fully weighing the risks. Western governments have grown cautious, believing that China, Russia, and other actors will find ways to abuse AI vulnerabilities in unexpected ways.
The risks are growing quickly as critical infrastructure operators embed AI in the operational technology that controls important parts of daily life, from scheduling meetings to paying bills to filing taxes.
The document outlines ways to protect data at every stage of the AI life cycle, from planning and data collection through model development, deployment, and operations.
It urges organizations to adopt digital signatures that authenticate modifications, secure infrastructure that blocks suspicious access, and ongoing risk assessments that track emerging threats.
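As an illustration of what such signing could look like in practice, here is a minimal Python sketch, not taken from the advisory itself, that uses the third-party cryptography package to sign a dataset snapshot so that any later modification fails verification; the dataset contents shown are placeholders.

```python
# Minimal sketch of signing a dataset snapshot so later modifications can be detected.
# Requires the third-party "cryptography" package; the data below stands in for a real dataset file.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In practice the private key is generated once and kept in a secure key store.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

dataset_bytes = b"label,feature1,feature2\n1,0.42,0.91\n"  # placeholder for real dataset contents
signature = private_key.sign(dataset_bytes)

# Anyone holding the public key can later confirm the data has not been altered since signing.
try:
    public_key.verify(signature, dataset_bytes)
    print("Signature valid: dataset unmodified since signing.")
except InvalidSignature:
    print("Dataset was modified after it was signed.")
```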
The document addresses ways to prevent data quality issues, whether intentional or accidental, from compromising the reliability and safety of AI models.
Cryptographic hashes ensure that raw data is not altered once it is incorporated into a model, according to the document, and frequent curation can offset problems with data sets scraped from the web. The document also recommends anomaly detection algorithms that can remove “malicious or suspicious data points before training.”
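The advisory stops short of prescribing tooling, but the hash-checking idea can be sketched with the Python standard library alone; the file name and manifest layout below are assumptions for illustration, not part of the guidance.

```python
# Minimal sketch of hash-based integrity checking for raw training data.
# Uses only the Python standard library; file names and the manifest format are illustrative.
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Record a digest when the data is first ingested ...
data_path = "train_records.csv"  # illustrative path
with open(data_path, "w", encoding="utf-8") as f:
    f.write("id,value\n1,0.5\n")  # stand-in for the real dataset
manifest = {data_path: sha256_of_file(data_path)}

# ... and re-verify it before every training run.
if sha256_of_file(data_path) != manifest[data_path]:
    raise RuntimeError("Raw data changed after ingestion; stopping the training run.")
print("Integrity check passed: raw data matches the recorded hash.")
```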
The joint guidance also highlights issues such as incorrect information, duplicate records, “data drift,” and statistical bias, a natural limitation in the characteristics of the input data.
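To make the notion of data drift concrete, one common and simple check compares a feature's distribution in newly collected data against a reference sample using a two-sample Kolmogorov-Smirnov test; the synthetic data and significance threshold in the sketch below are illustrative choices, not recommendations from the advisory.

```python
# Illustrative data-drift check: compare a feature's distribution in new data
# against a reference sample with a two-sample Kolmogorov-Smirnov test.
# Requires numpy and scipy; the synthetic data and the 0.01 threshold are arbitrary choices.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=0)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)  # feature values the model was trained on
incoming = rng.normal(loc=0.4, scale=1.0, size=5_000)   # newly collected values with a shifted mean

result = ks_2samp(reference, incoming)
if result.pvalue < 0.01:
    print(f"Possible data drift (KS statistic = {result.statistic:.3f}); review before retraining.")
else:
    print("No significant drift detected in this feature.")
```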
The flaws in question are CVE-2025-24061 (Mark of the Web bypass) and CVE-2025-24071 (File Explorer spoofing), which Microsoft fixed in its March 2025 Patch Tuesday updates, crediting the reporter as ‘SkorikARI.’ In a strange twist, that reporter and the threat actor EncryptHub turned out to be the same person operating under two identities. The case paints a picture of an individual working in both cybersecurity and cybercrime.
Outpost24 linked SkorikARI and EncryptHub through a security lapse in which the actor accidentally exposed their own credentials, tying together multiple accounts. The resulting profile showed the actor swinging between malicious activity and legitimate cybersecurity work.
Outpost24 security researcher Hector Garcia said the “hardest evidence was from the fact that the password files EncryptHub exfiltrated from his system had accounts linked to both EncryptHub”, such as credentials for EncryptRAT, which is still in development, or “his account on xss.is, and to SkorikARI, like accesses to freelance sites or his own Gmail account.”
Garcia also said there was a login to “hxxps://github[.]com/SkorikJR,” which was reported in Fortinet’s July story about Fickle Stealer; this helped the researchers put the pieces together. Another major clue tying the two identities together was a set of ChatGPT conversations in which activity from both SkorikARI and EncryptHub could be found.
Evidence suggests this was not EncryptHub's first involvement with zero-day flaws: the actor has previously tried to sell zero-day vulnerabilities to other cybercriminals on hacking forums.
Outpost24 highlighted EncryptHub's pattern of oscillating between cybercrime and freelance work. Despite the actor's technical expertise, an accidental operational security (OPSEC) failure exposed their personal information.
Outpost24 found EncryptHub using ChatGPT to build phishing sites, develop malware, integrate code, and conduct vulnerability research. One ChatGPT conversation included a self-assessment showing their conflicted nature: “40% black hat, 30% grey hat, 20% white hat, and 10% uncertain.” The conversation also showed plans for massive (although harmless) publicity stunts affecting tens of thousands of computers.
EncryptHub has ties to ransomware groups such as BlackSuit and RansomHub and is known for phishing attacks, advanced social engineering campaigns, and the development of Fickle Stealer, a custom PowerShell-based infostealer.
For years, companies protected sensitive data by securing emails, devices, and internal networks. But work habits have changed. Now, most of the data moves through web browsers.
Employees often copy, paste, upload, or transfer information online without realizing the risks. Web apps, personal accounts, AI tools, and browser extensions have made it harder to track where the data goes. Old security methods can no longer catch these new risks.
How Data Slips Out Through Browsers
Data leaks no longer happen only through obvious channels like USB drives or emails. Today, normal work tasks done inside browsers cause unintentional leaks.
For example, a developer might paste confidential source code into an AI chatbot. A salesperson could move customer details into a personal cloud account. A manager might grant an online tool access to company data without realizing it.
Because these activities happen inside approved apps, companies often miss the risks. Different platforms also store data differently, making it harder to apply the same safety rules everywhere.
Simple actions like copying text, using extensions, or uploading files now create new ways for data to leak. Cloud services like AWS or Microsoft add another layer of confusion, as it becomes unclear where the data is stored.
The use of multiple browsers (Chrome, Safari, Firefox) makes it even harder for security teams to keep an eye on everything.
Personal Accounts Add to the Risk
Switching between work and personal accounts during the same browser session is very common. People use services like Gmail, Google Drive, ChatGPT, and others without separating personal and office work.
As a result, important company data often ends up in personal cloud drives, emails, or messaging apps without any bad intention from employees.
Studies show that nearly 40% of web use in Google apps involves personal accounts. Blocking personal uploads is not a solution. Instead, companies need smart browser rules to separate work from personal use without affecting productivity.
Moving Data Is the Most Dangerous Moment
Data is most vulnerable when it is being shared or transferred — what experts call "data in motion." Even though companies try to label sensitive information, most protections work only when data is stored, not when it moves.
Popular apps like Google Drive, Slack, and ChatGPT make sharing easy but also increase the risk of leaks. Old security systems fail because the biggest threats now come from tools employees use every day.
Extensions and Unknown Apps — The Hidden Threat
Browser extensions and third-party apps are another weak spot. Employees often install them without knowing how much access they give away.
Some of these tools can record keystrokes, collect login details, or keep pulling data even after they are no longer in use. Since these risks often stay hidden, security teams struggle to control them.
Today, browsers are the biggest weak spot in protecting company data. Businesses need better tools that control data flow inside the browser, keeping information safe without slowing down work.
The Ministry of Finance, under Nirmala Sitharaman’s leadership, has issued a directive prohibiting employees from using artificial intelligence (AI) tools such as ChatGPT and DeepSeek for official work. The decision comes over concerns about data security as these AI-powered platforms process and store information externally, potentially putting confidential government data at risk.
Why Has the Finance Ministry Banned AI Tools?
AI chatbots and virtual assistants have gained popularity for their ability to generate text, answer questions, and assist with tasks. However, since these tools rely on cloud-based processing, there is a risk that sensitive government information could be exposed or accessed by unauthorized parties.
The ministry’s concern is that official documents, financial records, and policy decisions could unintentionally be shared with external AI systems, making them vulnerable to cyber threats or misuse. By restricting their use, the government aims to safeguard national data and prevent potential security breaches.
Public Reactions and Social Media Buzz
The announcement quickly sparked discussions online, with many users sharing humorous takes on the decision. Some questioned how government employees would manage their workload without AI assistance, while others speculated whether Indian AI tools like Ola Krutrim might be an approved alternative.
A few of the popular reactions included:
1. "How will they complete work on time now?"
2. "So, only Ola Krutrim is allowed?"
3. "The Finance Ministry is switching back to traditional methods."
4. "India should develop its own AI instead of relying on foreign tools."
India’s Position in the Global AI Race
With AI development accelerating worldwide, several countries are striving to build their own advanced models. China’s DeepSeek has emerged as a major competitor to OpenAI’s ChatGPT and Google’s Gemini, increasing the competition in the field.
The U.S. has imposed trade restrictions on Chinese AI technology, leading to growing tensions in the tech industry. Meanwhile, India has yet to launch an AI model capable of competing globally, but the government’s interest in regulating AI suggests that future developments could be on the horizon.
While the Finance Ministry’s move prioritizes data security, it also raises questions about efficiency. AI tools help streamline work processes, and their restriction could lead to slower operations in certain departments.
Experts suggest that India should focus on developing AI models that are secure and optimized for government use, ensuring that innovation continues without compromising confidential information.
For now, the Finance Ministry’s stance reinforces the need for careful regulation of AI technologies, ensuring that security remains a top priority in government operations.