"KLM has reported the incident to the Dutch Data Protection Authority; Air France has done the same in France with the CNIL. Customers whose data may have been accessed are currently being informed and advised to be extra alert to suspicious emails or phone calls," the group said.
With 78,000 employees and a fleet of 564 aircraft, Air France-KLM serves 300 destinations in 90 countries worldwide; the group carried 98 million passengers in 2024. The airlines said they cut off the threat actors' access to the compromised systems once the breach was discovered, and that the attack did not affect their internal networks.
"Air France and KLM have detected unusual activity on an external platform we use for customer service. This activity resulted in unauthorized access to customer data. Our IT security teams, along with the relevant external party, took immediate action to stop the unauthorized access. Measures have also been implemented to prevent recurrence. Internal Air France and KLM systems were not affected," the group said.
The attackers stole data including names, email addresses, contact numbers, transaction records, and details of rewards programs. However, the group said that sensitive personal and financial data was not compromised. The airlines have notified the relevant authorities in each country and informed the impacted individuals about the breach.
Experts have discovered a new prompt injection attack that can turn ChatGPT into a hacker's best friend for data theft. Known as AgentFlayer, the exploit uses a single document to hide "secret" prompt instructions that target OpenAI's chatbot. An attacker can share a seemingly harmless document with victims through Google Drive; no clicks are required.
AgentFlayer is a "zero-click" threat because it abuses a vulnerability in Connectors, a ChatGPT feature that links the assistant to other applications, websites, and services. According to OpenAI, Connectors supports some of the world's most widely used platforms, including cloud storage services such as Microsoft OneDrive and Google Drive.
Experts used Google Drive to demonstrate the threats posed by chatbots and hidden prompts.
The malicious document carries a hidden prompt of roughly 300 words, rendered in one-point white text: invisible to human readers but perfectly legible to the chatbot.
The prompt used to demonstrate AgentFlayer instructs ChatGPT to search the victim's Google Drive for API keys and append them to an attacker-controlled URL pointing at an external server. Sharing the malicious document launches the attack: the threat actor receives the harvested API keys the next time the target uses ChatGPT (provided the Connectors feature is enabled).
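Documents weaponized this way are mechanically detectable: in a .docx, for instance, the hidden run is just WordprocessingML markup combining a tiny font size with a white color. A minimal detection sketch (element and attribute names follow the OOXML spec; the size threshold is illustrative):

```python
import xml.etree.ElementTree as ET

# WordprocessingML namespace used inside .docx document.xml
W = "http://schemas.openxmlformats.org/wordprocessingml/2006/main"

def find_hidden_runs(document_xml: str, max_half_points: int = 4):
    """Return the text of runs that are both tiny (<= 2pt) and white."""
    root = ET.fromstring(document_xml)
    hidden = []
    for run in root.iter(f"{{{W}}}r"):
        props = run.find(f"{{{W}}}rPr")
        if props is None:
            continue
        sz = props.find(f"{{{W}}}sz")        # font size, in half-points
        color = props.find(f"{{{W}}}color")
        tiny = sz is not None and int(sz.get(f"{{{W}}}val", "0")) <= max_half_points
        white = color is not None and color.get(f"{{{W}}}val", "").upper() == "FFFFFF"
        if tiny and white:
            hidden.append("".join(t.text or "" for t in run.iter(f"{{{W}}}t")))
    return hidden
```

Running this over the `word/document.xml` entry of a shared .docx (a zip archive) would surface any one-point white text before the file ever reaches an AI assistant.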
AgentFlayer is not a bug that affects Google Drive alone. "As with any indirect prompt injection attack, we need a way into the LLM's context. And luckily for us, people upload untrusted documents into their ChatGPT all the time. This is usually done to summarize files or data, or leverage the LLM to ask specific questions about the document's content instead of parsing through the entire thing by themselves," said expert Tamir Ishay Sharbat from Zenity Labs.
"OpenAI is already aware of the vulnerability and has mitigations in place. But unfortunately, these mitigations aren't enough. Even safe-looking URLs can be used for malicious purposes. If a URL is considered safe, you can be sure an attacker will find a creative way to take advantage of it," Zenity Labs said in the report.
The network is built by the National Data Association and managed by the Ministry of Public Security's Data Innovation and Exploitation Center. It will serve as the primary verification layer for data such as supply-chain logs, school transcripts, and hospital records.
According to experts, NDAChain is based on a hybrid model: a Proof-of-Authority consensus mechanism ensures that only authorized nodes can validate transactions, while Zero-Knowledge Proofs protect sensitive data as its authenticity is verified. Officials say NDAChain can process between 1,200 and 3,600 transactions per second, throughput intended to support faster verification in logistics, e-government, and other areas.
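The Proof-of-Authority idea can be sketched in a few lines: unlike proof-of-work, a block is valid only if it carries a valid signature from a pre-approved validator node. A toy illustration (HMAC stands in for real digital signatures; the node names and keys are invented):

```python
import hashlib
import hmac

# Pre-approved validator nodes and their signing keys (invented for illustration).
AUTHORIZED_NODES = {
    "ministry-node-1": b"key-1",
    "university-node-2": b"key-2",
}

def sign_block(node: str, payload: bytes) -> bytes:
    """A validator signs the block payload with its own key."""
    return hmac.new(AUTHORIZED_NODES[node], payload, hashlib.sha256).digest()

def validate_block(node: str, payload: bytes, signature: bytes) -> bool:
    """Proof-of-Authority check: only a known node with a valid signature passes."""
    key = AUTHORIZED_NODES.get(node)
    if key is None:  # unknown node: rejected outright
        return False
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)
```

The design trade-off is the one the article describes: throughput and control come from trusting a fixed validator set, which is why the network stays permissioned.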
The network has two main components. NDA DID offers digital IDs that integrate with Vietnam's current VNeID framework, allowing users to verify their identity online when signing documents or using services. NDATrace, meanwhile, provides end-to-end product tracking via the GS1 and EBSI Trace standards: items are tagged with unique identifiers readable from RFID chips or QR codes, helping businesses prove authenticity to overseas buyers and simplify recalls when problems arise.
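The NDATrace idea, a unique per-item identifier whose custody history can be verified end to end, can be sketched as a simple hash chain in which each link commits to everything before it (the identifier and event formats below are invented; real deployments use GS1 keys):

```python
import hashlib

def chain_step(prev_hash: str, event: str) -> str:
    """Append one custody event; the new link commits to the full prior history."""
    return hashlib.sha256((prev_hash + event).encode()).hexdigest()

def trace(item_id, events):
    """Fold all custody events for an item into one verifiable digest."""
    digest = hashlib.sha256(item_id.encode()).hexdigest()
    for event in events:
        digest = chain_step(digest, event)
    return digest
```

A buyer scanning the item's QR code can recompute the digest from the logged events and compare it with the value anchored on-chain; altering any single event changes every later link.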
NDAChain works as a "protective layer" for Vietnam's digital infrastructure, built to scale as data volumes expand. Thanks to the added privacy tools, digital records can be verified without exposing personal details, and the permissioned setup gives authorities more control over who joins the network. According to reports, full integration with the National Data Center will be completed this year; the focus will then shift to local agencies and universities, with industry-specific Layer 3 apps planned for 2026.
According to Vietnam Briefing, "in sectors such as food, pharmaceuticals, and health supplements, where counterfeit goods remain a persistent threat, NDAChain enables end-to-end product origin authentication. By tracing a product’s whole journey from manufacturer to end-consumer, businesses can enhance brand trust, reduce legal risk, and meet rising regulatory demands for transparency."
Most of the victims were based in India, Argentina, Peru, Mexico, Colombia, Bolivia, and Ecuador. Some of the records date back to 2018.
The Catwatchful database also revealed the identity of the spyware operation’s administrator, Omar Soca Charcov, a developer based in Uruguay.
Catwatchful is spyware that poses as a child-monitoring app, claiming to be "invisible and can not be detected," while uploading the victim's data to a dashboard accessible to the person who planted it. The stolen data includes real-time location, the victim's photos, and messages. The app can also stream live ambient audio from the device's microphone and access both the front and rear cameras.
Catwatchful and similar apps are banned from app stores and depend on being downloaded and deployed by someone with physical access to the victim's phone. Such apps are known as "stalkerware" or "spouseware" because they enable illegal, non-consensual surveillance of romantic partners and spouses.
The Catwatchful incident is the fifth this year in a growing list of stalkerware operations that have been breached, hacked, or otherwise had their data exposed.
Daigle has previously uncovered stalkerware flaws. Catwatchful uses a custom-made API, which the planted app uses to send data back to Catwatchful's servers, and relies on Google Firebase to host and store the stolen data.
According to TechRadar, the "data was stored on Google Firebase, sent via a custom API that was unauthenticated, resulting in open access to user and victim data." The report also confirms that, although hosting had initially been suspended by HostGator, it was restored via another temporary domain.
Further investigation by the US government revealed that these actors were funneling money to the North Korean government, which uses the funds to run state operations and its weapons program.
The US has imposed strict sanctions on North Korea that bar US companies from hiring North Korean nationals. This has led threat actors to fabricate identities and use all manner of tricks (such as VPNs) to obscure their real identities and locations, helping them avoid detection and get hired.
Recently, the threat actors have adopted spoofing tactics such as voice-changing tools and AI-generated documents to appear credible. In one incident, the scammers worked with an individual residing in New Jersey who set up shell companies to fool victims into believing they were paying a legitimate local business. The same individual also helped overseas accomplices get recruited.
The clever campaign has now come to an end: the US Department of Justice (DoJ) arrested and charged a US national, Zhenxing "Danny" Wang, with operating a "year-long" scam that earned over $5 million. The agency also arrested eight more people, six Chinese and two Taiwanese nationals. The arrested individuals are charged with money laundering, identity theft, hacking, sanctions violations, and conspiring to commit wire fraud.
In addition to drawing salaries from these jobs, which Microsoft says are hefty, these individuals gain access to private organizational data. They exploit this access by stealing sensitive information and blackmailing the company.
One of the largest and most infamous hacking gangs worldwide is the North Korean state-sponsored group Lazarus. According to experts, the gang has stolen billions of dollars for the North Korean government through similar scams. One such campaign became widely known as "Operation DreamJob".
"To disrupt this activity and protect our customers, we’ve suspended 3,000 known Microsoft consumer accounts (Outlook/Hotmail) created by North Korean IT workers," said Microsoft.
WhatsApp has introduced 'Private Processing,' new technology that lets users access advanced AI features by offloading tasks to privacy-preserving cloud servers without exposing their chats to Meta. The system combines encrypted cloud infrastructure with hardware-based isolation so that no one, not even Meta, can see the data while it is being processed.
For those who opt in to Private Processing, the system first performs an anonymous verification step via the user's WhatsApp client to confirm the request comes from a legitimate user.
Meta claims this system keeps WhatsApp’s end-to-end encryption intact while offering AI features in chats. However, the feature currently applies only to select use cases and excludes Meta’s broader AI deployments, including those used in India’s public service systems.
Private Processing employs Trusted Execution Environments (TEEs): secure, isolated virtual machines on cloud infrastructure that keep AI requests hidden even from the host.
According to Meta, Private Processing is a response to privacy questions around AI and messaging. WhatsApp joins companies like Apple, which introduced a confidential AI computing model last year. "To validate our implementation of these and other security principles, independent security researchers will be able to continuously verify our privacy and security architecture and its integrity," Meta said.
The approach is similar to Apple's Private Cloud Compute in its public transparency and stateless processing. Currently, however, WhatsApp uses these protections only for select features, whereas Apple has declared plans to extend the model across all of its AI tools; WhatsApp has made no such commitment yet.
WhatsApp says, “Private Processing uses anonymous credentials to authenticate users over OHTTP. This way, Private Processing can authenticate users to the Private Processing system but remains unable to identify them.”
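The property described here, authenticating users without identifying them, can be illustrated by splitting the roles: a relay sees the client's network address but only an opaque payload, while the gateway validates a bearer token that carries no account identity. A conceptual toy (this is not OHTTP or a real anonymous-credential scheme, which additionally make tokens unlinkable via blind issuance):

```python
import secrets

# An issuer hands out single-use tokens in advance, without tying them to accounts.
VALID_TOKENS = {secrets.token_hex(16) for _ in range(3)}

def relay_forward(client_ip: str, opaque_request: bytes) -> bytes:
    """The relay learns who is connecting (the IP) but sees only an opaque blob;
    it forwards the blob without attaching the client's identity."""
    return opaque_request

def gateway_handle(token: str, opaque_request: bytes) -> str:
    """The gateway sees what is asked but not who asked: the token carries no identity."""
    if token not in VALID_TOKENS:
        return "rejected"
    VALID_TOKENS.discard(token)  # single-use: prevents replay
    return "processed"
```

The split is the core design choice: no single party ever holds both the user's identity and the content of the AI request.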
Microsoft has warned users about a new password-spraying attack by a hacking group tracked as Storm-1977 that targets cloud users. The Microsoft Threat Intelligence team issued the warning after discovering threat actors abusing unsecured workload identities to access restricted resources.
According to Microsoft, “Container technology has become essential for modern application development and deployment. It's a critical component for over 90% of cloud-native organizations, facilitating swift, reliable, and flexible processes that drive digital transformation.”
Research says 51% of such workload identities have been inactive for a year, which is why attackers are exploiting this attack surface. The report notes that the exposure grows as "adoption of containers-as-a-service among organizations rises," and Microsoft says it continues to watch for unique security threats affecting "containerized environments."
In the password-spraying attack, the threat actor used a command-line interface tool, "AzureChecker", to download AES-encrypted data that, once decoded, revealed the list of password-spray targets. Worse, the "threat actor then used the information from both files and posted the credentials to the target tenants for validation."
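Password spraying inverts classic brute force: instead of many passwords against one account, the actor tries one or two common passwords against many accounts, staying under per-account lockout thresholds. That shape is detectable in authentication logs. A minimal detector sketch (field names and thresholds are illustrative):

```python
from collections import defaultdict

def detect_spray(failed_logins, min_accounts=10, max_passwords=3):
    """Flag source IPs that failed against many accounts with few distinct guesses.

    failed_logins: iterable of (source_ip, username, password_hash) tuples.
    """
    accounts = defaultdict(set)   # source IP -> accounts targeted
    passwords = defaultdict(set)  # source IP -> distinct guesses observed
    for ip, user, pw_hash in failed_logins:
        accounts[ip].add(user)
        passwords[ip].add(pw_hash)
    return [
        ip for ip in accounts
        if len(accounts[ip]) >= min_accounts and len(passwords[ip]) <= max_passwords
    ]
```

A normal user who fat-fingers a password hits one account with several guesses; a sprayer hits many accounts with one guess, so the two signatures separate cleanly.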
The attack allowed the Storm-1977 hackers to leverage a guest account to create a resource group in the compromised subscription and spin up more than 200 containers that were used for crypto mining.
The solution to password-spraying attacks is to eliminate passwords altogether by moving to passkeys, as many organizations are already doing.
Modify the Kubernetes role-based access controls for every user and service account so that they retain only the permissions they actually require.
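Least privilege in Kubernetes means replacing broad cluster-wide bindings with narrowly scoped Roles. A minimal example of the pattern (the namespace, role name, and service account are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: payments        # scoped to one namespace, not cluster-wide
  name: pod-reader
rules:
- apiGroups: [""]            # "" = core API group
  resources: ["pods"]
  verbs: ["get", "list"]     # read-only: no create/delete/exec
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: payments
  name: ci-bot-pod-reader
subjects:
- kind: ServiceAccount
  name: ci-bot
  namespace: payments
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

A service account bound this way cannot create workloads at all, which directly blocks the container-deployment step Storm-1977 relied on.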
According to Microsoft, “Recent updates to Microsoft Defender for Cloud enhance its container security capabilities from development to runtime. Defender for Cloud now offers enhanced discovery, providing agentless visibility into Kubernetes environments, tracking containers, pods, and applications.” These updates upgrade security via continuous granular scanning.
The main highlight of the M-Trends report is that hackers are using every opportunity to advance their goals, such as using infostealer malware to steal credentials. Another trend is attacking unsecured data repositories due to poor security hygiene.
Hackers are also exploiting fractures and risks that surface when an organization takes its data to the cloud. “In 2024, Mandiant initiated 83 campaigns and five global events and continued to track activity identified in previous years. These campaigns affected every industry vertical and 73 countries across six continents,” the report said.
Ransomware-related attacks accounted for 21% of all intrusions in 2024 and made up almost two-thirds of monetization-related cases. That comes on top of data theft, email hacks, cryptocurrency scams, and North Korean fake-job campaigns, all attempts to extract money from targets.
Exploits remained the most common initial infection vector at 33%, followed by stolen credentials at 16%, phishing at 14%, web compromises at 9%, and prior compromises at 8%.
Finance was the most targeted industry, drawing more than 17% of attacks, followed by business and professional services (11%), high tech (10%), government (10%), and healthcare (9%).
Experts highlighted the broad spread of targeted industries, suggesting that any organization can be hit by state-sponsored attacks, whether politically or financially motivated.
Stuart McKenzie, Managing Director of Mandiant Consulting EMEA, said: "Financially motivated attacks are still the leading category. While ransomware, data theft, and multifaceted extortion are and will continue to be significant global cybercrime concerns, we are also tracking the rise in the adoption of infostealer malware and the developing exploitation of Web3 technologies, including cryptocurrencies."
He also stressed that the “increasing sophistication and automation offered by artificial intelligence are further exacerbating these threats by enabling more targeted, evasive, and widespread attacks. Organizations need to proactively gather insights to stay ahead of these trends and implement processes and tools to continuously collect and analyze threat intelligence from diverse sources.”
The hackers have attacked over 140 Netskope customers across Asia, North America, and Southern Europe, spanning multiple segments but concentrated in the financial and technology sectors.
Netskope has been examining various phishing and malware campaigns targeting users who search for PDF documents online. Hackers embed deceptive elements in these PDFs to redirect victims to malicious websites or lure them into downloading malware. In the newly found campaign, they used fake CAPTCHAs and Cloudflare Turnstile to distribute the LegionLoader payload.
The infection begins with a drive-by download when a target looks for a particular document and is baited to a malicious site.
The downloaded file contains a fake CAPTCHA. If clicked, it redirects the user via a Cloudflare Turnstile CAPTCHA to a notification page.
In the last step, victims are urged to allow browser notifications.
When a user blocks the browser notification prompt or uses a browser that doesn’t support notifications, they are redirected to download harmless apps like Opera or 7-Zip. However, if the user agrees to receive browser notifications, they are redirected to another Cloudflare Turnstile CAPTCHA. Once this is done, they are sent to a page with instructions on how to download their file.
The download process requires the victim to open the Windows Run dialog (Win + R), paste content copied to the clipboard (Ctrl + V), and "execute it by pressing enter (we described a similar approach in a post about Lumma Stealer)," Netskope said. In this incident, the command in the clipboard uses the "command prompt to run cURL and download an MSI file." After this, the "command opens File Explorer, where the MSI file has been downloaded. When the victim runs the MSI file, it will execute the initial payload."
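This paste-and-run trick leaves a recognizable fingerprint: a clipboard one-liner that invokes a downloader or installer. A rough heuristic for scoring such strings can be sketched as follows (the patterns are illustrative, not exhaustive):

```python
import re

# Indicators commonly seen in paste-and-run lures (illustrative, not exhaustive).
SUSPICIOUS_PATTERNS = [
    r"\bcurl\b.+https?://",    # command-line downloader fetching a URL
    r"\bmsiexec\b|\.msi\b",    # MSI installer invocation or artifact
    r"\bpowershell\b.+-enc",   # encoded PowerShell
    r"\bcmd(\.exe)?\s*/c\b",   # shell one-liner wrapper
]

def clipboard_risk(command: str) -> int:
    """Count how many downloader/installer indicators appear in a pasted command."""
    return sum(bool(re.search(p, command, re.IGNORECASE)) for p in SUSPICIOUS_PATTERNS)
```

Such heuristics are no substitute for user awareness, but endpoint tools can apply them to Run-dialog history or clipboard contents to flag campaigns of this style.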
To avoid detection, the campaign uses a legitimate VMware-signed app that sideloads a malicious DLL to load and run the LegionLoader payload. A custom algorithm is then used to decode the LegionLoader shellcode loader.
In the final stage, the hackers install a malicious browser extension that can steal sensitive information across different browsers, including Opera, Chrome, Brave, and Edge. Netskope warns of an alarming trend in which hackers use sophisticated tactics to push malware at users searching for PDF documents online.
The latest "Qwen2.5-Omni-7B" is a multimodal model: it can process inputs such as audio, video, text, and images while generating real-time text and natural speech responses, according to Alibaba's cloud website. The company also said the model can run on edge devices such as smartphones, providing higher efficiency without giving up performance.
According to Alibaba, the “unique combination makes it the perfect foundation for developing agile, cost-effective AI agents that deliver tangible value, especially intelligent voice applications.” For instance, the AI can be used to assist visually impaired individuals to navigate their environment via real-time audio description.
The latest model is open-sourced on GitHub and Hugging Face, continuing a trend in China that has accelerated since DeepSeek's breakthrough open-source R1 model. Open source means the software's source code is made freely available on the web for modification and redistribution.
Alibaba says it has open-sourced more than 200 generative AI models in recent years. Amid China's intensifying AI race, stoked by DeepSeek's shoestring budget and strong capabilities, Alibaba and its genAI competitors are releasing new, cost-cutting models and services at an exceptional pace.
Last week, Chinese tech mammoth Baidu launched a new multimodal foundational model and its first reasoning-based model. Likewise, Alibaba introduced its updated Qwen 2.5 AI model in January and also launched a new variant of its AI assistant tool Quark this month.
Alibaba has also made strong commitments to its AI plans: it recently announced it will invest $53 billion in cloud computing and AI infrastructure over the next three years, surpassing its spending in the space over the past decade.
CNBC spoke with Kai Wang, a senior equity analyst for Asia at Morningstar, who said that "large Chinese tech players such as Alibaba, which build data centers to meet the computing needs of AI in addition to building their own LLMs, are well positioned to benefit from China's post-DeepSeek AI boom." According to CNBC, "Alibaba secured a major win for its AI business last month when it confirmed that the company was partnering with Apple to roll out AI integration for iPhones sold in China."
Under a change to Google Maps Timeline, from December Google will save user location data for a maximum of 180 days. After that period ends, the data will be erased from Google's cloud servers.
The new policy means Google can only save a user's movements and whereabouts for six months. Users have the option to store the data on a personal device, but the cloud copy will be permanently deleted from Google's servers.
The privacy change is welcome: smartphones constantly balance privacy against convenience in how they store data, and few categories of data are more sensitive than location history.
Users can adjust the settings to suit them, but the majority stick with the defaults. Problems arise when Google uses this data to generate insights (based on anonymized location data) or to improve Google services such as its ads products.
The Google Maps Timeline change raises familiar questions about data privacy and security. The benefits include:
Better privacy: By limiting how long location data is stored in the cloud, Google reduces the scope for data misuse. A shorter retention window also means less historical data is exposed to threat actors if there is a breach.
More control to users: When users have the option to retain location data on their devices, it gives them ownership over their personal data. Users can choose whether to delete their location history or keep the data.
Accountability from Google: The move is a positive sign toward building transparency and trust, showing a commitment to user privacy.
The potential downsides include:
Impact on services: Google features that use location history for tailored suggestions might be affected, and users may notice changes in location-based suggestions and targeted ads.
Harder data recovery: For users who like to keep their data longer, the change can be a problem. They will have to back up data themselves if they want to keep it for more than 180 days.