
WhatsApp Reveals "Private Processing" Feature for Cloud-Based AI Features

WhatsApp claims even it cannot read users' private data

WhatsApp has introduced ‘Private Processing,’ a new technology that lets users access advanced AI features by offloading tasks to privacy-preserving cloud servers, without exposing their chats to Meta. The system combines encrypted cloud infrastructure with hardware-based isolation so that the data being processed is visible to no one, not even Meta.

About Private Processing

For those who choose to use Private Processing, the system first performs an anonymous verification via the user’s WhatsApp client to confirm the user’s validity.

Meta claims this system keeps WhatsApp’s end-to-end encryption intact while offering AI features in chats. However, the feature currently applies only to select use cases and excludes Meta’s broader AI deployments, including those used in India’s public service systems.

Private Processing employs Trusted Execution Environments (TEEs): secure, isolated virtual machines running on cloud infrastructure that keep AI requests hidden even from the host.

About the system

  • Encrypts user requests from the device to the TEE using end-to-end encryption
  • Restricts storage or logging of messages after processing
  • Makes logs and binary images available for external verification and audits
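
Taken together, the request path can be sketched conceptually. Below is a minimal, illustrative Python sketch (requiring the third-party cryptography package): the client encrypts a request that only the TEE can decrypt, a relay forwards it without holding any key, and the enclave returns an encrypted response without persisting anything. The class names are assumptions for illustration, and Fernet merely stands in for whatever end-to-end scheme WhatsApp actually uses.

```python
# Conceptual sketch of the Private Processing request path (illustrative only;
# Fernet stands in for whatever key exchange / AEAD scheme WhatsApp really uses).
from cryptography.fernet import Fernet


class TrustedExecutionEnvironment:
    """Isolated enclave: decrypts, runs the AI task, keeps nothing afterwards."""

    def __init__(self, session_key: bytes):
        self._cipher = Fernet(session_key)

    def process(self, ciphertext: bytes) -> bytes:
        request = self._cipher.decrypt(ciphertext)        # plaintext exists only inside the enclave
        result = f"summary of: {request.decode()[:40]}"   # placeholder for the AI workload
        return self._cipher.encrypt(result.encode())      # nothing is stored or logged


class Relay:
    """Intermediary that forwards opaque bytes; it never holds a decryption key."""

    @staticmethod
    def forward(ciphertext: bytes, tee: TrustedExecutionEnvironment) -> bytes:
        return tee.process(ciphertext)


# Client side: the session key is negotiated with the enclave itself
# (attestation and key exchange omitted), never shared with the relay.
session_key = Fernet.generate_key()
client = Fernet(session_key)

reply = Relay.forward(client.encrypt(b"Summarize my unread messages"),
                      TrustedExecutionEnvironment(session_key))
print(client.decrypt(reply).decode())
```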

WhatsApp builds AI amid wider privacy concerns

According to Meta, Private Processing is a response to privacy questions around AI and messaging. WhatsApp now joins companies like Apple that introduced confidential AI computing models over the past year. “To validate our implementation of these and other security principles, independent security researchers will be able to continuously verify our privacy and security architecture and its integrity,” Meta said.

It is similar to Apple’s Private Cloud Compute in terms of public transparency and stateless processing. Currently, however, WhatsApp is applying the approach only to select features. Apple, by contrast, has declared plans to implement this model across all of its AI tools, whereas WhatsApp has made no such commitment yet.

WhatsApp says, “Private Processing uses anonymous credentials to authenticate users over OHTTP. This way, Private Processing can authenticate users to the Private Processing system but remains unable to identify them.”
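
Meta has not published the exact credential scheme, but a classic way to "authenticate without identifying" is a blind signature: the issuer signs a token it cannot later recognize or link back to the user. The sketch below is a textbook RSA blind-signature toy, not WhatsApp's implementation, with deliberately tiny key sizes for readability.

```python
# Toy RSA blind-signature flow, one classic way to authenticate a user without
# being able to identify them later. Textbook-sized key (p=61, q=53); real
# schemes use far larger parameters and dedicated anonymous-credential designs.
import secrets
from math import gcd

n, e, d = 3233, 17, 2753               # toy RSA modulus and key pair

# Client: create a random credential and blind it.
token = secrets.randbelow(n - 2) + 2
while True:
    r = secrets.randbelow(n - 2) + 2
    if gcd(r, n) == 1:                 # blinding factor must be invertible mod n
        break
blinded = (token * pow(r, e, n)) % n   # this is all the issuer ever sees

# Issuer: verifies the WhatsApp account, then signs the blinded value.
blind_sig = pow(blinded, d, n)

# Client: unblind. sig is a valid signature on token, yet the issuer never saw
# token and cannot link sig back to the signing session.
sig = (blind_sig * pow(r, -1, n)) % n

# Later, the Private Processing gateway verifies the credential anonymously.
assert pow(sig, e, n) == token
print("credential accepted without identifying the user")
```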

Investigating the Role of DarkStorm Team in the Recent X Outage

Elon Musk’s social media platform X, formerly known as Twitter, was severely disrupted on Monday by a widespread cyberattack that caused multiple service outages. Data from outage-monitoring service Downdetector indicates the platform suffered at least three significant disruptions over the course of the day, affecting millions of users. During this time, over 41,000 users across Europe, North America, the Middle East, and Asia reported outages.
 
The most common technical difficulties users encountered were prolonged connection failures and an inability to fully load the platform. A preliminary assessment suggests the disruptions were caused by a coordinated, large-scale cyberattack. While cybersecurity experts are still investigating the extent and origin of the incident, they point to a worrying trend of organised cyberattacks targeting high-profile digital infrastructure. The incident has raised a number of concerns about X's security framework, especially given the platform's prominent role in global communications and information dissemination. Authorities and independent cybersecurity analysts continue to analyze data logs and attack signatures to identify the perpetrators and to better understand the attack methodology.

One group under scrutiny is the Dark Storm Team, a pro-Palestinian hacktivist collective that has emerged as a significant player in the cyberwarfare landscape and has a record of orchestrating targeted cyberattacks against entities it perceives as supportive of Israel.
 
Motivated by a combination of political ideology and financial gain, the group is well known for aggressive tactics: Distributed Denial-of-Service (DDoS) attacks, database intrusions, and other disruptive operations against government agencies, public infrastructure, and organizations perceived to be aligned with Israeli interests.
 
The group is reportedly more than an ideological movement; it is also a cybercrime operation that advertises itself openly on encrypted messaging platforms such as Telegram, where it is said to sell coordinated DDoS attacks, data breaches, and hacking tools to a wide range of clients. Its operations are sophisticated and resourceful, striking both vulnerable and well-protected targets. Recent activity suggests the group has escalated in both scale and ambition. In February 2024, the Dark Storm Team threatened imminent cyberattacks against NATO member states, Israel, and countries supporting Israel; documented incidents disrupting critical government and digital infrastructure followed, reinforcing the group's capacity to follow through on its threats.
 
According to intelligence reports, Dark Storm has also built ties with pro-Russian cyber collectives, broadening the scope of its operations and giving it access to advanced hacking tools. Beyond extending the group's technical reach, the collaboration signals an alignment of geopolitical interests.

Among the most prominent incidents attributed to the group is the October 2024 DDoS attack on John F. Kennedy International Airport's online systems. The group justified the attack by citing the airport's perceived support for Israeli policies, demonstrating its willingness to strike essential infrastructure in pursuit of its wider agenda. Analysts say Dark Storm combines ideological motivations with profit-driven cybercrime, making it a particularly potent threat in today's cyber environment.
 
An investigation is currently underway to determine whether the group was involved in the recent service disruptions at X. To achieve its objectives, the DarkStorm Team employs a range of sophisticated cyber tactics that blend ideological activism with financially motivated cybercrime. Its main methods include Distributed Denial-of-Service (DDoS) attacks, ransomware campaigns, and the leaking of sensitive information, activities designed both to advance specific political narratives and to generate illicit revenue. For internal coordination, recruitment, and operational updates, the group relies heavily on encrypted communication channels, particularly Telegram. These secure platforms give it a degree of anonymity that complicates efforts by law enforcement and cybersecurity firms to track and dismantle its networks.

Alongside its direct cyberattacks, DarkStorm actively monetizes stolen data, selling compromised databases, personal information, and hacking tools on the darknet. And although the group presents itself as a grassroots hacktivist organization, cybersecurity analysts increasingly suspect it receives covert support from nation-state actors, particularly Russia. That suspicion rests on the complexity and scale of its operations, its strategic choice of targets, and the technical sophistication evident in its attacks. Such patterns of coordinated, well-resourced activity raise concerns that the group may be acting as a proxy in broader geopolitical conflicts.
 
The rising threat posed by groups like DarkStorm shows how the cyber warfare landscape is evolving: ideological, financial, and geopolitical motivations are increasingly intertwined, making it significantly harder for targeted organisations and governments to attribute attacks and defend themselves. Elon Musk's deepening involvement in geopolitical affairs adds a further layer of complexity to the narrative around the X cyberattack. Since Russian troops invaded Ukraine in February 2022, Musk has been criticized for publicly mocking Ukrainian President Volodymyr Zelensky and for remarks considered dismissive of Ukraine's plight. Musk also heads the Department of Government Efficiency (DOGE), an entity created under the Trump administration that has been cutting U.S. federal employment at an unprecedented rate since Trump returned to office. The administration's foreign policy stance has shifted markedly away from longstanding US support for Ukraine and toward a more conciliatory posture with Russia. And Musk's geopolitical entanglements extend beyond his role at X.
 
Through his aerospace company SpaceX, Musk operates the Starlink satellite internet network, which has kept a significant portion of Ukraine's digital communications running during the war. These intersecting spheres of influence, spanning national security, communication infrastructure, and social media, have come under heightened scrutiny, particularly as X remains a central node in global politics. Cybersecurity firms delving into the technical aspects of the Distributed Denial-of-Service (DDoS) attack report little evidence of Ukrainian involvement.
 
A senior analyst at a leading cybersecurity firm, speaking on condition of anonymity because of restrictions on discussing X publicly, reported that no significant attack traffic originated from Ukraine, which was absent from the top 20 sources of malicious IP addresses linked to the incident. Source addresses must be read with caution, since IP spoofing is widespread and compromised devices are distributed around the world; even so, the absence of Ukrainian IPs helps direct attention toward more likely culprits, such as organized cybercrime groups and state-linked actors.
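
As an illustration of the kind of triage the analyst describes, attack traffic is typically aggregated by origin before any conclusions are drawn. The log format below is an invented stand-in, and the usual caveat applies: DDoS source addresses are frequently spoofed.

```python
# Hypothetical triage of DDoS flow records: rank the origin countries of
# malicious source IPs. The record layout is illustrative, with the country
# field assumed to come from an offline GeoIP lookup.
from collections import Counter

attack_log = [
    {"src_ip": "203.0.113.7",   "country": "US"},
    {"src_ip": "198.51.100.23", "country": "RU"},
    {"src_ip": "192.0.2.99",    "country": "RU"},
    # ...millions of flow records in a real incident
]

by_country = Counter(record["country"] for record in attack_log)
for country, flows in by_country.most_common(20):   # the "top 20 sources" above
    print(f"{country}: {flows} malicious flows")
```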
 
The incident underscores the fragility of digital infrastructure in a politically polarized world where geopolitical tensions, corporate influence, and cyberwarfare converge. As investigations continue, experts expect the role of actors such as the DarkStorm Team, and the broader implications for global cybersecurity policy, to remain subjects of close scrutiny.

WhatsApp Windows Vulnerability CVE-2025-30401 Could Let Hackers Deliver Malware via Fake Images

Meta has issued a high-priority warning about a critical vulnerability in the Windows version of WhatsApp, tracked as CVE-2025-30401, which could be exploited to deliver malware under the guise of image files. This flaw affects WhatsApp versions prior to 2.2450.6 and could expose users to phishing, ransomware, or remote code execution attacks. The issue lies in how WhatsApp handles file attachments on Windows. 

The platform displays files based on their MIME type but opens them according to the true file extension. This inconsistency creates a dangerous opportunity for hackers: they can disguise executable files as harmless-looking images like .jpeg files. When a user manually opens the file within WhatsApp, they could unknowingly launch a .exe file containing malicious code. Meta’s disclosure arrives just as new data from online bank Revolut reveals that WhatsApp was the source of one in five online scams in the UK during 2024, with scam attempts growing by 67% between June and December. 
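
The flaw, in other words, is a disagreement between the type used for display and the type implied by the real extension. A defensive check along the following lines can flag such spoofed attachments; this is an illustrative sketch, not WhatsApp's actual code.

```python
# Flag attachments whose declared MIME type disagrees with the type implied by
# the real file extension, the inconsistency at the heart of CVE-2025-30401.
import mimetypes

def looks_spoofed(filename: str, declared_mime: str) -> bool:
    """True when the extension-derived type contradicts the declared MIME type."""
    guessed, _ = mimetypes.guess_type(filename)
    return guessed is not None and guessed != declared_mime

# An executable masquerading as a picture: rendered as image/jpeg in the chat,
# but Windows would launch it as a program when opened.
print(looks_spoofed("holiday_photo.exe", "image/jpeg"))    # True  -> warn the user
print(looks_spoofed("holiday_photo.jpeg", "image/jpeg"))   # False -> consistent
```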

Cybersecurity experts warn that WhatsApp’s broad reach and user familiarity make it a prime target for exploitation. Adam Pilton, senior cybersecurity consultant at CyberSmart, cautioned that this vulnerability is especially dangerous in group chats. “If a cybercriminal shares the malicious file in a trusted group or through a mutual contact, anyone in that group might unknowingly execute malware just by opening what looks like a regular image,” he explained. 

Martin Kraemer, a security awareness advocate at KnowBe4, highlighted the platform’s deep integration into daily routines—from casual chats to job applications. “WhatsApp’s widespread use means users have developed a level of trust and automation that attackers exploit. This vulnerability must not be underestimated,” Kraemer said. Until users update to the latest version, experts urge WhatsApp users to treat the app like email—avoid opening unexpected attachments, especially from unknown senders or new contacts. 

The good news is that Meta has already issued a fix, and updating the app resolves the vulnerability. Pilton emphasized the importance of patch management, noting, “Cybercriminals will always seek to exploit software flaws, and providers will keep issuing patches. Keeping your software updated is the simplest and most effective protection.” For now, users should update WhatsApp for Windows immediately to mitigate the risk posed by CVE-2025-30401 and remain cautious with all incoming files.

Meta Launches New Llama 4 AI Models

Meta has introduced a fresh set of artificial intelligence models under the name Llama 4. This release includes three new versions: Scout, Maverick, and Behemoth. Each one has been designed to better understand and respond to a mix of text, images, and videos.

The reason behind this launch seems to be rising competition, especially from Chinese companies like DeepSeek. Their recent models have been doing so well that Meta rushed to improve its own tools to keep up.


Where You Can Access Llama 4

The Scout and Maverick models are now available online through Meta’s official site and other developer platforms like Hugging Face. However, Behemoth is still in the testing phase and hasn’t been released yet.

Meta has already added Llama 4 to its own digital assistant, which is built into apps like WhatsApp, Instagram, and Messenger in several countries. However, some special features are only available in the U.S. and only in English for now.


Who Can and Can’t Use It

Meta has placed some limits on who can access Llama 4. People and companies based in the European Union are not allowed to use or share these models, likely due to strict data rules in that region. Also, very large companies, meaning those with over 700 million monthly users, must first get permission from Meta.


Smarter Design, Better Performance

Llama 4 is Meta’s first release using a new design method called "Mixture of Experts." This means the model can divide big tasks into smaller parts and assign each part to a different “expert” inside the system. This makes it faster and more efficient.

For example, the Maverick model has 400 billion total "parameters" (the internal values a model learns during training), but it only uses a small fraction of them at a time. Scout, the lighter model, is great for reading long documents or big sections of code and can run on a single high-powered computer chip. Maverick needs a more advanced system to function properly.
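
To make the "Mixture of Experts" idea concrete, here is a toy sketch of top-k routing: a small gate scores every expert, and only the best-scoring ones actually run, so most parameters sit idle for any given input. The sizes and expert count are arbitrary illustrations, not Llama 4's real configuration.

```python
# Toy Mixture-of-Experts layer: a gate picks the top-k experts per input, so
# only a fraction of the model's total parameters does work for any one token.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 16, 8, 2

gate_w = rng.normal(size=(d_model, n_experts))                  # router weights
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_forward(x: np.ndarray) -> np.ndarray:
    scores = x @ gate_w                          # one routing score per expert
    chosen = np.argsort(scores)[-top_k:]         # indices of the top-k experts
    weights = np.exp(scores[chosen])
    weights /= weights.sum()                     # softmax over the chosen experts
    # Only the chosen experts compute anything; the other six stay idle.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

token = rng.normal(size=d_model)
print(moe_forward(token).shape)                  # (16,): 2 of 8 experts were used
```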


Behemoth: The Most Advanced One Yet

Behemoth, which is still being developed, will be the most powerful version. It will have a huge amount of learning data and is expected to perform better than many leading models in science and math-based tasks. But it will also need very strong computing systems to work.

One big change in this new version is how it handles sensitive topics. Previous models often avoided difficult questions. Now, Llama 4 is trained to give clearer, fairer answers on political or controversial issues. Meta says the goal is to make the AI more helpful to users, no matter what their views are.

Meta's AI Bots on WhatsApp Spark Privacy and Usability Concerns

WhatsApp, the world's most widely used messaging app, is celebrated for its simplicity, privacy, and user-friendly design. However, upcoming changes could drastically reshape the app. Meta, WhatsApp's parent company, is testing a new feature: AI bots. While some view this as a groundbreaking innovation, others question its necessity and raise concerns about privacy, clutter, and added complexity. 
 
Meta is introducing a new "AI" tab in WhatsApp, currently in beta testing for Android users. This feature will allow users to interact with AI-powered chatbots on various topics. These bots include both third-party models and Meta’s in-house virtual assistant, "Meta AI." To make room for this update, the existing "Communities" tab will merge with the "Chats" section, with the AI tab taking its place. Although Meta presents this as an upgrade, many users feel it disrupts WhatsApp's clean and straightforward design. 
 
Meta’s strategy seems focused on expanding its AI ecosystem across its platforms—Instagram, Facebook, and now WhatsApp. By introducing AI bots, Meta aims to boost user engagement and explore new revenue opportunities. However, this shift risks undermining WhatsApp’s core values of simplicity and secure communication. The addition of AI could clutter the interface and complicate user experience. 

Key Concerns Among Users 
 
1. Loss of Simplicity: WhatsApp’s minimalistic design has been central to its popularity. Adding AI features could make the app feel overloaded and detract from its primary function as a messaging platform. 
 
2. Privacy and Security Risks: Known for its end-to-end encryption, WhatsApp prioritizes user privacy. Introducing AI bots raises questions about data security and how Meta will prevent misuse of these bots. 
 
3. Unwanted Features: Many users believe AI bots are unnecessary for a messaging app. Unlike optional AI tools on platforms like ChatGPT or Google Gemini, Meta's integration feels forced.
 
4. Cluttered Interface: Replacing the "Communities" tab with the AI tab consumes valuable space, potentially disrupting how users navigate the app. 

The Bigger Picture 

Meta may eventually allow users to create custom AI bots within WhatsApp, a feature already available on Instagram. However, this could introduce significant risks. Poorly moderated bots might spread harmful or misleading content, threatening user trust and safety. 

WhatsApp users value its security and simplicity. While some might welcome AI bots, most prefer such features to remain optional and unobtrusive. Since the AI bot feature is still in testing, it’s unclear whether Meta will implement it globally. Many hope WhatsApp will stay true to its core strengths—simplicity, privacy, and reliability—rather than adopting features that could alienate its loyal user base. Will this AI integration enhance the platform or compromise its identity? Only time will tell.

Meta Removes Independent Fact Checkers, Replaces With "Community Notes"


Meta to remove fact-checkers

Meta is dropping independent fact-checkers on Instagram and Facebook, following the lead of X (formerly Twitter), and replacing them with “community notes,” in which users’ comments decide the accuracy of a post.

On Tuesday, Mark Zuckerberg said in a video that third-party moderators were "too politically biased" and that it was "time to get back to our roots around free expression".

Tech executives are trying to build better relations with incoming US President Donald Trump, who takes the oath of office this month, and the new move is a step in that direction.

Republican Party and Meta

The Republican Party and Trump have criticized Meta over its fact-checking policies, arguing that it censors right-wing voices on its platforms.

After the new policy was announced, Trump said in a news conference that he was pleased with Meta’s decision, remarking that the company had "come a long way".

Online anti-hate speech activists expressed disappointment with the shift, claiming it was motivated by a desire to align with Trump.

“Zuckerberg's announcement is a blatant attempt to cozy up to the incoming Trump administration – with harmful implications. Claiming to avoid "censorship" is a political move to avoid taking responsibility for hate and disinformation that platforms encourage and facilitate,” said Ava Lee of Global Witness, an organization that campaigns to hold big tech companies like Meta accountable.

Copying X

Meta’s current fact-checking program, introduced in 2016, refers posts that appear false or misleading to independent fact-checking organizations to assess their credibility.

Posts marked as misleading have labels attached to them, giving users more information, and are demoted in viewers’ social media feeds. This system will now be replaced by community notes, starting in the US. Meta says it has no “immediate plans” to remove third-party fact-checkers in the EU or the UK.

The community notes model is copied from X, where it was rolled out after Elon Musk bought Twitter.

It relies on people with opposing opinions agreeing on notes that add context or explanation to disputed posts.

In its announcement, Meta said: “We will allow more speech by lifting restrictions on some topics that are part of mainstream discourse and focusing our enforcement on illegal and high-severity violations. We will take a more personalized approach to political content, so that people who want to see more of it in their feeds can.”

Are You Using AI in Marketing? Here's How to Do It Responsibly

Artificial Intelligence (AI) has emerged as a transformative force, reshaping industries and delivering unprecedented value to businesses worldwide. From automating mundane tasks to offering predictive insights, AI has catalyzed innovation on a massive scale. However, its rapid adoption raises significant concerns about privacy, data ethics, and transparency, prompting urgent discussions on regulation. The need for robust frameworks has grown even more critical as AI technologies become deeply entrenched in everyday operations.

Data Use and the Push for Regulation

During the early development stages of AI, major tech players such as Meta and OpenAI often used public and private datasets without clear guidelines in place. This unregulated experimentation highlighted glaring gaps in data ethics, leading to calls for significant regulatory oversight. The absence of structured frameworks not only undermined public trust but also raised legal and ethical questions about the use of sensitive information.

Today, the regulatory landscape is evolving to address these issues. Europe has taken a pioneering role with the EU AI Act, which came into effect on August 1, 2024. This legislation classifies AI applications based on their level of risk and enforces stricter controls on higher-risk systems to ensure public safety and confidence. By categorizing AI into levels such as minimal, limited, and high risk, the Act provides a comprehensive framework for accountability. On the other hand, the United States is still in the early stages of federal discussions, though states like California and Colorado have enacted targeted laws emphasizing transparency and user privacy in AI applications.

Why Marketing Teams Should Stay Vigilant

AI’s impact on marketing is undeniable, with tools revolutionizing how teams create content, interact with customers, and analyze data. According to a survey, 93% of marketers using AI rely on it to accelerate content creation, optimize campaigns, and deliver personalized experiences. However, this reliance comes with challenges such as intellectual property infringement, algorithmic biases, and ethical dilemmas surrounding AI-generated material.

As regulatory frameworks mature, marketing professionals must align their practices with emerging compliance standards. Proactively adopting ethical AI usage not only mitigates risks but also prepares businesses for stricter regulations. Ethical practices can safeguard brand reputation, ensuring that marketing teams remain compliant and trusted by their audiences.

Best Practices for Responsible AI Use

  1. Maintain Human Oversight
    While AI can streamline workflows, it should not replace human intervention. Marketing teams must rigorously review AI-generated content to ensure originality, eliminate biases, and avoid plagiarism. This approach not only improves content quality but also aligns with ethical standards.
  2. Promote Transparency
    Transparency builds trust. Businesses should be open about their use of AI, particularly when collecting data or making automated decisions. Clear communication about AI processes fosters customer confidence and adheres to evolving legal requirements focused on explainability.
  3. Implement Ethical Data Practices
    Ensure that all data used for AI training complies with privacy laws and ethical guidelines. Avoid using data without proper consent, and regularly audit datasets to prevent misuse or bias; a minimal audit sketch follows this list.
  4. Educate Teams
    Equip employees with knowledge about AI technologies and the implications of their use. Training programs can help teams stay informed about regulatory changes and ethical considerations, promoting responsible practices across the organization.
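
As a concrete illustration of point 3, routine consent and balance checks can be automated before data ever reaches an AI training pipeline. The record layout and field names below are hypothetical; a real audit would adapt the checks to its own schema.

```python
# Hypothetical dataset audit: drop records lacking consent and warn when one
# group dominates the data, a crude first check against sampling bias.
from collections import Counter

dataset = [
    {"user_id": 1, "consent": True,  "region": "EU"},
    {"user_id": 2, "consent": False, "region": "US"},
    {"user_id": 3, "consent": True,  "region": "US"},
    {"user_id": 4, "consent": True,  "region": "US"},
]

lacking_consent = [rec["user_id"] for rec in dataset if not rec["consent"]]
if lacking_consent:
    print(f"exclude before training: users {lacking_consent}")

cleared = [rec for rec in dataset if rec["consent"]]
regions = Counter(rec["region"] for rec in cleared)
for region, count in regions.items():
    share = count / len(cleared)
    if share > 0.6:                  # arbitrary threshold for this sketch
        print(f"warning: {region} is {share:.0%} of the post-consent data")
```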

Preparing for the Future

AI regulation is not just a passing concern but a critical element in shaping its responsible use. By embracing transparency, accountability, and secure data practices, businesses can stay ahead of legal changes while fostering trust with customers and stakeholders. Adopting ethical AI practices ensures that organizations are future-proof, resilient, and prepared to navigate the complexities of the evolving regulatory landscape.

As AI continues to advance, the onus is on businesses to balance innovation with responsibility. Marketing teams, in particular, have an opportunity to demonstrate leadership by integrating AI in ways that enhance customer relationships while upholding ethical and legal standards. By doing so, organizations can not only thrive in an AI-driven world but also set an example for others to follow.

Meta Introduces AI Features For Ray-Ban Glasses in Europe

Meta has officially introduced certain AI functions for its Ray-Ban Meta augmented reality (AR) glasses in France, Italy, and Spain, marking a significant step in the company's rollout of its wearable technology across Europe.

Starting earlier this week, customers in these nations have been able to interact with Meta's AI assistant solely through their voice, allowing them to ask general questions and receive responses through the glasses.

As part of Meta's larger initiative to make its AI assistant more widely available, this latest deployment covers French, Italian, and Spanish in addition to English. The announcement was made nearly a year after the Ray-Ban Meta spectacles were first released in September 2023.

In a blog post outlining the update, Meta stated, "We are thrilled to introduce Meta AI and its cutting-edge features to regions of the EU, and we look forward to expanding to more European countries soon.” However, not all of the features accessible in other regions will be included in the European rollout. 

While customers in the United States, Canada, and Australia benefit from multimodal AI capabilities on their Ray-Ban Meta glasses, such as the ability to gain information about objects in view of the glasses' camera, these functions will not be included in the European update at present.

For example, users in the United States can ask their glasses to identify landmarks in their surroundings, such as "Tell me more about this landmark," but these functionalities are not available in Europe due to ongoing regulatory issues. 

Meta has stated its commitment to dealing with Europe's complicated legal environment, specifically the EU's AI Act and the General Data Protection Regulation (GDPR). The company indicated that it is aiming to offer multimodal capabilities to more countries in the future, but there is no set date. 

While the rollout in France, Italy, and Spain marks a significant milestone, Meta's journey in the European market is far from over. As the firm navigates the regulatory landscape and expands its AI solutions, users in Europe can expect more updates and new features for their Ray-Ban Meta glasses in the coming months.

As Meta continues to grow its devices and expand its AI capabilities, all eyes will be on how the firm adjusts to Europe's legal system and how this will impact the future of AR technology worldwide.