WhatsApp has introduced ‘Private Processing,’ a new technology that lets users access advanced AI features by offloading tasks to privacy-preserving cloud servers without exposing their chats to Meta. Meta claims even it cannot see the messages while they are being processed. The system relies on encrypted cloud infrastructure and hardware-based isolation so that requests remain invisible to anyone, including Meta, during processing.
For those who opt in to Private Processing, the system runs an anonymous verification step through the user’s WhatsApp client to confirm that the request comes from a legitimate user.
Meta claims this system keeps WhatsApp’s end-to-end encryption intact while offering AI features in chats. However, the feature currently applies only to select use cases and excludes Meta’s broader AI deployments, including those used in India’s public service systems.
Private Processing relies on Trusted Execution Environments (TEEs), hardware-isolated, confidential virtual machines running on cloud infrastructure, which keep AI requests hidden while they are processed.
According to Meta, Private Processing is a response to privacy questions around AI and messaging. WhatsApp joins companies like Apple that have introduced confidential AI computing models over the past year. “To validate our implementation of these and other security principles, independent security researchers will be able to continuously verify our privacy and security architecture and its integrity,” Meta said.
The approach is similar to Apple’s Private Cloud Compute in its emphasis on public transparency and stateless processing. For now, however, WhatsApp is applying these protections only to select features, whereas Apple has said it plans to extend its model across all of its AI tools; WhatsApp has made no such commitment yet.
WhatsApp says, “Private Processing uses anonymous credentials to authenticate users over OHTTP. This way, Private Processing can authenticate users to the Private Processing system but remains unable to identify them.”
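WhatsApp’s actual implementation is not public in code form, but the split-trust idea behind OHTTP that the quote describes can be illustrated with a toy Python sketch. Here, RSA-OAEP from the `cryptography` package stands in for OHTTP’s real HPKE encapsulation, and the function names (`client_encapsulate`, `relay_forward`, `gateway_process`) are hypothetical: the relay sees who sent the request but not its contents, while the gateway sees the contents but not the sender.

```python
# Toy sketch of OHTTP-style split trust (illustration only; RSA-OAEP stands
# in for HPKE, and all names here are hypothetical, not Meta's code).
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Gateway key pair (in OHTTP this is published as a key configuration).
gateway_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
gateway_pub = gateway_key.public_key()

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def client_encapsulate(request: bytes) -> bytes:
    """Client: encrypt the AI request so only the gateway can read it."""
    return gateway_pub.encrypt(request, OAEP)

def relay_forward(sender_id: str, blob: bytes) -> bytes:
    """Relay: drops the sender's identity and forwards only the ciphertext."""
    # The relay holds no key, so it cannot inspect `blob`.
    return blob

def gateway_process(blob: bytes) -> bytes:
    """Gateway: decrypts and processes a request without knowing who sent it."""
    return gateway_key.decrypt(blob, OAEP)

ciphertext = client_encapsulate(b"summarize this chat thread")
print(gateway_process(relay_forward("user-123", ciphertext)))
```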
Microsoft has warned cloud users about a new password-spraying attack by the hacking group Storm-1977. The Microsoft Threat Intelligence team issued the warning after discovering that threat actors are abusing unsecured workload identities to access restricted resources.
According to Microsoft, “Container technology has become essential for modern application development and deployment. It's a critical component for over 90% of cloud-native organizations, facilitating swift, reliable, and flexible processes that drive digital transformation.”
Research indicates that 51% of such workload identities were inactive over the past year, which is why attackers are exploiting this attack surface. The report notes that these risks grow as the “adoption of containers-as-a-service among organizations rises,” and Microsoft says it continues to watch for unique security threats affecting “containerized environments.”
The password-spraying attack used a command-line interface tool, “AzureChecker,” to download AES-encrypted data that, once decoded, revealed the list of password-spray targets. To make things worse, the “threat actor then used the information from both files and posted the credentials to the target tenants for validation.”
The attack allowed the Storm-1977 hackers to leverage a guest account to create a resource group in a compromised subscription and spin up more than 200 containers that were used for crypto mining.
The most effective defense against password-spraying attacks is eliminating passwords altogether, for example by moving to passkeys, something many organizations are already doing.
Review Kubernetes role-based access controls for every user and service account so that each retains only the permissions it actually needs; a minimal sketch follows.
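As a hedged illustration of that recommendation (not taken from Microsoft’s guidance), the snippet below uses the official Kubernetes Python client to create a least-privilege, namespaced Role. The namespace and role name ("apps", "pod-reader") are placeholders; adjust the verbs and resources to what each account genuinely requires.

```python
# Minimal sketch: a read-only Role scoped to one namespace, created with the
# official Kubernetes Python client. Names are placeholders.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running in-cluster
rbac_api = client.RbacAuthorizationV1Api()

pod_reader = client.V1Role(
    metadata=client.V1ObjectMeta(name="pod-reader", namespace="apps"),
    rules=[
        client.V1PolicyRule(
            api_groups=[""],                  # "" is the core API group
            resources=["pods"],
            verbs=["get", "list", "watch"],   # read-only; no create/delete
        )
    ],
)
rbac_api.create_namespaced_role(namespace="apps", body=pod_reader)
```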
According to Microsoft, “Recent updates to Microsoft Defender for Cloud enhance its container security capabilities from development to runtime. Defender for Cloud now offers enhanced discovery, providing agentless visibility into Kubernetes environments, tracking containers, pods, and applications.” These updates improve security through continuous, granular scanning.
The main highlight of Mandiant’s M-Trends report is that hackers are seizing every opportunity to advance their goals, for example by using infostealer malware to steal credentials. Another trend is the targeting of unsecured data repositories left exposed by poor security hygiene.
Hackers are also exploiting the gaps and risks that surface when an organization moves its data to the cloud. “In 2024, Mandiant initiated 83 campaigns and five global events and continued to track activity identified in previous years. These campaigns affected every industry vertical and 73 countries across six continents,” the report said.
Ransomware-related attacks accounted for 21% of all intrusions in 2024 and made up almost two-thirds of cases tied to monetization tactics. That is in addition to data theft, email hacks, cryptocurrency scams, and North Korean fake job campaigns, all aimed at extracting money from targets.
Exploits remained the most common initial infection vector at 33%, followed by stolen credentials at 16%, phishing at 14%, web compromises at 9%, and prior compromises at 8%.
Finance was the most targeted industry, accounting for more than 17% of attacks, followed closely by business and professional services (11%), critical industries such as high tech (10%), government (10%), and healthcare (9%).
Experts highlighted the broad spread of targeted industries, noting that any organization can be hit by state-sponsored attacks, whether politically or financially motivated.
Stuart McKenzie, Managing Director of Mandiant Consulting EMEA, said: “Financially motivated attacks are still the leading category. While ransomware, data theft, and multifaceted extortion are and will continue to be significant global cybercrime concerns, we are also tracking the rise in the adoption of infostealer malware and the developing exploitation of Web3 technologies, including cryptocurrencies.”
He also stressed that the “increasing sophistication and automation offered by artificial intelligence are further exacerbating these threats by enabling more targeted, evasive, and widespread attacks. Organizations need to proactively gather insights to stay ahead of these trends and implement processes and tools to continuously collect and analyze threat intelligence from diverse sources.”
The hackers have attacked over 140 Netskope customers located in Asia, North America, and Southern Europe, across different industry segments led by the financial and technology sectors.
Netskope has been examining various phishing and malware campaigns targeting users who search for PDF documents online. Hackers embed deceptive elements in these PDFs to redirect victims to malicious websites or lure them into downloading malware. In the newly discovered campaign, they used fake CAPTCHAs and Cloudflare Turnstile to distribute the LegionLoader payload.
The infection begins with a drive-by download when a target searches for a particular document and is lured to a malicious site.
The downloaded file contains a fake CAPTCHA. If clicked, it redirects the user via a Cloudflare Turnstile CAPTCHA to a notification page.
In the last step, victims are urged to allow browser notifications.
When a user blocks the browser notification prompt or uses a browser that doesn’t support notifications, they are redirected to download harmless apps like Opera or 7-Zip. However, if the user agrees to receive browser notifications, they are redirected to another Cloudflare Turnstile CAPTCHA. Once this is done, they are sent to a page with instructions on how to download their file.
The download process requires the victim to open the Windows Run window (Win + R), paste content already copied to the clipboard (Ctrl + V), and “execute it by pressing enter (we described a similar approach in a post about Lumma Stealer),” Netskope said. In this incident, the command on the clipboard uses the “command prompt to run cURL and download an MSI file.” After this, the “command opens File Explorer, where the MSI file has been downloaded. When the victim runs the MSI file, it will execute the initial payload.”
To avoid detection, the campaign uses a legitimate VMware-signed app that sideloads a malicious DLL to run and load the LegionLoader payload. A new custom algorithm is then used to unpack the LegionLoader shellcode loader.
In the final stage, the hackers install a malicious browser extension that can steal sensitive information across different browsers, including Opera, Chrome, Brave, and Edge. Netskope warns of an alarming trend in which hackers use increasingly sophisticated tactics to push malware onto users searching for PDF documents online.
The latest "Qwen2.5-Omni-7B" is a multimodal model- it can process inputs like audio/video, text, and images- while also creating real-time text and natural speech responses, Alibaba’s cloud website reports. It also said that the model can be used on edge devices such as smartphones, providing higher efficiency without giving up on performance.
According to Alibaba, this “unique combination makes it the perfect foundation for developing agile, cost-effective AI agents that deliver tangible value, especially intelligent voice applications.” For instance, the AI could be used to help visually impaired individuals navigate their environment through real-time audio descriptions.
The latest model is open-sourced on GitHub and Hugging Face, continuing a trend in China that has grown since DeepSeek open-sourced its breakthrough R1 model. Open source means the software’s source code is made freely available on the web for modification and redistribution.
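In practice, an open-weight release like this can be fetched directly from the Hub. The sketch below uses the `huggingface_hub` client; the repository id "Qwen/Qwen2.5-Omni-7B" is assumed from the model name reported above, so verify the exact id and license on the Hub before downloading.

```python
# Minimal sketch: download the open-weight release from Hugging Face.
# The repo id is an assumption based on the reported model name.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="Qwen/Qwen2.5-Omni-7B")
print("Weights downloaded to:", local_dir)
```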
Alibaba claims it has open-sourced more than 200 generative AI models in recent years. Amid the intensifying attention on China’s AI ambitions, fueled by DeepSeek’s shoestring budget and strong capabilities, Alibaba and its generative AI competitors have been releasing new, cost-cutting models and services at an exceptional pace.
Last week, Chinese tech mammoth Baidu launched a new multimodal foundational model and its first reasoning-based model. Likewise, Alibaba introduced its updated Qwen 2.5 AI model in January and also launched a new variant of its AI assistant tool Quark this month.
Alibaba has also made strong commitments to its AI plans: it recently announced it would invest $53 billion in cloud computing and AI infrastructure over the next three years, surpassing its spending in the space over the past decade.
CNBC spoke with Kai Wang, senior equity analyst for Asia at Morningstar, who said that “large Chinese tech players such as Alibaba, which build data centers to meet the computing needs of AI in addition to building their own LLMs, are well positioned to benefit from China's post-DeepSeek AI boom.” According to CNBC, “Alibaba secured a major win for its AI business last month when it confirmed that the company was partnering with Apple to roll out AI integration for iPhones sold in China.”
Under a change to Google Maps Timeline, starting in December Google will save user location data for a maximum of 180 days. Once that period ends, the data will be erased from Google’s cloud servers.
The new policy means Google will retain a user’s movements and whereabouts for only six months. Users have the option to store the data on a personal device, but the cloud copy will be permanently deleted from Google’s servers.
The privacy change is a welcome one. Smartphones constantly trade privacy against convenience when it comes to data storage, and few categories of data are as sensitive as location history.
Users can adjust the settings that suit them best, but the majority stick with the defaults. The concern arises when Google uses this data to generate insights (based on anonymized location data) or to improve services such as its ads products.
The change to the Google Maps Timeline feature raises questions about data privacy and security. The benefits include:
Better privacy: By restricting how long location data is kept in the cloud, Google reduces the potential for misuse. A shorter retention period means less historical data is exposed to threat actors if there is a breach.
More control to users: When users have the option to retain location data on their devices, it gives them ownership over their personal data. Users can choose whether to delete their location history or keep the data.
Accountability from Google: The move is a positive sign toward building transparency and trust, showing a commitment to user privacy.
Impact on services: On the downside, Google features that rely on location history for tailored suggestions may be affected, and users may notice less accurate location-based recommendations and targeted ads.
Harder long-term retention: For users who like to keep their data for longer, the change can be a problem; they will have to back up the data themselves if they want to retain it for more than 180 days.
According to Forrester, the public cloud market is set to reach $1 trillion by 2026, with the lion’s share of investment directed to the big four: Alibaba, Amazon Web Services, Google Cloud, and Microsoft.
In the wake of the pandemic, businesses hastened their cloud migrations and reaped the rewards as cloud services sped up innovation, offered elasticity to adjust to changing demand, and scaled with expansion. Even as the C-suite reduces spending in other areas, there is no going back. Demand is particularly high for platform-as-a-service (PaaS), expected to reach $136 billion in 2023, and infrastructure-as-a-service (IaaS), expected to reach $150 billion.
Still, this rapid growth, which caught business strategists and technologists by surprise, has its downsides. If organizations do not take the essential steps to secure public cloud data, the risks are likely to grow considerably.
The challenges posed by "shadow data," that is, unknown or uncontrolled public cloud data, stem from a number of issues. Business users stand up their own applications, and developers constantly spin up new instances of their own code to build and test new applications. Many of these services retain and use critical data without the knowledge of IT and security staff. Versioning, which allows several versions of data to be stored in the same cloud bucket, adds risk if policies are not configured correctly.
Unmanaged data repositories are frequently overlooked as the pace of innovation quickens. In addition, if third parties or unrelated individuals are granted excessive access privileges, sensitive data that is otherwise adequately secured can be moved or copied to an unsafe location, or otherwise exposed.
A large share of security professionals (82%) are aware of, and concerned about, the growing public cloud data security problem. These professionals can quickly help minimize the hazards by doing the following:
Teams can automatically find all of their cloud data, not just known or tagged assets, thanks to a next-generation public cloud data security platform. All cloud data stores are detected, including managed and unmanaged assets, virtual machines, shadow data stores, data caches and pipelines, and big data. The platform uses this data to build an extensive, unified data catalog for the multi-cloud environments enterprises use. All sensitive data, including PII, PHI, and payment card industry (PCI) transaction data, is carefully identified and categorized in the catalog; a simplified sketch of that classification step follows.
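The sketch below is illustrative only, not any vendor’s actual engine: it scans discovered objects for sensitive patterns (card-number-like digit runs, SSN-style identifiers, email addresses) and records the labels in a simple catalog. The patterns, labels, and store names are placeholder assumptions.

```python
# Illustrative classification pass over discovered cloud objects.
import re

PATTERNS = {
    "PCI":   re.compile(r"\b(?:\d[ -]?){13,16}\b"),     # card-number-like digit runs
    "PII":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),      # US SSN-style identifiers
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify_object(name: str, content: str) -> dict:
    """Return one catalog entry listing the sensitivity labels found in an object."""
    labels = [label for label, rx in PATTERNS.items() if rx.search(content)]
    return {"object": name, "labels": labels or ["NON_SENSITIVE"]}

# Entries built from discovered stores, including unmanaged "shadow" ones.
catalog = [
    classify_object("s3://reports/q3.csv", "4111 1111 1111 1111, jane@example.com"),
    classify_object("s3://tmp/debug.log", "request completed in 120ms"),
]
print(catalog)
```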
With full insight into their sensitive cloud data, security teams can apply and enforce the proper security policies and verify data settings against their organization’s specified guardrails. Public cloud data security tooling can also expose complex policy violations, which helps prioritize remediation on a risk basis, according to data sensitivity, security posture, volume, and exposure.
This process, known as data security posture management (DSPM), offers recommendations customized for each cloud environment, making them more effective and relevant; a toy illustration follows.
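As a hedged sketch of that posture check (not a specific product’s policy model), the snippet below compares each catalog entry’s settings against simple guardrails and ranks violations by data sensitivity and exposure. The field names, weights, and guardrails are illustrative assumptions.

```python
# Toy DSPM-style check: flag guardrail violations and rank stores by risk.
SENSITIVITY_WEIGHT = {"PCI": 3, "PHI": 3, "PII": 2, "EMAIL": 1, "NON_SENSITIVE": 0}

def evaluate(entry: dict) -> dict:
    """Flag guardrail violations and compute a rough risk score for one store."""
    sensitive = any(label != "NON_SENSITIVE" for label in entry["labels"])
    violations = []
    if sensitive and not entry["encrypted"]:
        violations.append("sensitive data in an unencrypted store")
    if entry["public"]:
        violations.append("publicly exposed store")
    weight = sum(SENSITIVITY_WEIGHT.get(label, 1) for label in entry["labels"])
    return {**entry, "violations": violations, "risk": weight * (1 + len(violations))}

stores = [
    {"name": "s3://reports/q3.csv", "labels": ["PCI"], "encrypted": False, "public": False},
    {"name": "s3://tmp/debug.log", "labels": ["NON_SENSITIVE"], "encrypted": True, "public": True},
]
for result in sorted((evaluate(s) for s in stores), key=lambda r: -r["risk"]):
    print(result["name"], result["violations"], result["risk"])
```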
Teams can then begin remediating sensitive data without interfering with corporate operations. A public cloud data security platform will prompt them to implement best practices, such as enabling encryption and restricting third-party access, and to practice better data hygiene by eliminating unnecessary sensitive data from the environment.
Moreover, security teams can use the platform for continuous monitoring of data. This way, security experts can efficiently identify policy violations and ensure that public cloud data complies with the firm’s stated guidelines and security posture, no matter where it is stored, used, or transferred in the cloud.
Advances in technology have made it possible for malicious actors to invade users’ systems in just a few steps. Cloud security is one of the technologies that has increasingly worked to protect users’ data from threat actors.