
Tech Giants Face Backlash Over AI Privacy Concerns






Microsoft recently faced significant backlash over its new AI tool, Recall, leading to a delayed release. Recall, introduced last month as a feature of Microsoft's new AI companion, captures screen images every few seconds to create a searchable library, which can include sensitive information such as passwords and private conversations. The tool's release was postponed indefinitely after criticism from data privacy experts, including the UK's Information Commissioner's Office (ICO).

In response, Microsoft announced changes to Recall. Initially planned for a broad release on June 18, 2024, it will first be available to Windows Insider Program users. The company assured that Recall would be turned off by default and emphasised its commitment to privacy and security. Despite these assurances, Microsoft declined to comment on claims that the tool posed a security risk.

Recall was showcased during Microsoft's developer conference, with Yusuf Mehdi, Corporate Vice President, highlighting its ability to access virtually anything on a user's PC. Following its debut, the ICO vowed to investigate privacy concerns. On June 13, Microsoft announced updates to Recall, reinforcing its "commitment to responsible AI" and privacy principles.

Adobe Overhauls Terms of Service 

Adobe faced a wave of criticism after updating its terms of service, which many users interpreted as allowing the company to use their work for AI training without proper consent. Users were required to agree to a clause granting Adobe a broad licence over their content, leading to suspicions that Adobe was using this content to train generative AI models like Firefly.

Adobe officials, including President David Wadhwani and Chief Trust Officer Dana Rao, denied these claims and clarified that the terms were misinterpreted. They reassured users that their content would not be used for AI training without explicit permission, except for submissions to the Adobe Stock marketplace. The company acknowledged the need for clearer communication and has since updated its terms to explicitly state these protections.

The controversy began with Firefly's release in March 2023, when artists noticed AI-generated imagery mimicking their styles. Users like YouTuber Sasha Yanshin cancelled their Adobe subscriptions in protest. Adobe's Chief Product Officer, Scott Belsky, admitted the wording was unclear and emphasised the importance of trust and transparency.

Meta Faces Scrutiny Over AI Training Practices

Meta, the parent company of Facebook and Instagram, has also been criticised for using user data to train its AI tools. Concerns were raised when Martin Keary, Vice President of Product Design at Muse Group, revealed that Meta planned to use public content from social media for AI training.

Meta responded by assuring users that it only used public content and did not access private messages or information from users under 18. An opt-out form was introduced for EU users, but U.S. users have limited options due to the lack of national privacy laws. Meta emphasised that its latest AI model, Llama 2, was not trained on user data, but users remain concerned about their privacy.

Suspicion arose in May 2023, with users questioning Meta's security policy changes. Meta's official statement to European users clarified its practices, but the opt-out form, available under Privacy Policy settings, remains a complex process. The company can only address user requests if they demonstrate that the AI "has knowledge" of them.

The recent actions by Microsoft, Adobe, and Meta highlight the growing tensions between tech giants and their users over data privacy and AI development. As these companies navigate user concerns and regulatory scrutiny, the debate over how AI tools should handle personal data continues to intensify. The tech industry's future will heavily depend on balancing innovation with ethical considerations and user trust.


Nvidia Climbs to Second Place in Global Market Value, Surpassing Apple

 


This month, Nvidia reached a historic milestone by overtaking Apple to become the world's second most valuable company, driven by overwhelming demand for the advanced chips used to handle artificial intelligence workloads. The Santa Clara, California-based company has added roughly $1.8 trillion in market value over the past year, and its shares are up 147% so far this year.

That surge has pushed Nvidia's market capitalisation above $3 trillion, making it the first semiconductor company to reach the milestone and lifting it past Apple to become the second most valuable company in the world. The excitement around artificial intelligence, which runs largely on Nvidia chips, has driven the shares sharply higher over the past few years.

The company's rise has made it the most valuable firm in Silicon Valley, displacing Apple, whose share price has slipped on concerns about iPhone sales in China, among other issues. In a few weeks, Nvidia will carry out a ten-for-one stock split, a move that could make its stock considerably more appealing to retail investors. Nvidia's surge past Apple's market value signals a shift in Silicon Valley, where the company co-founded by Steve Jobs has dominated since the iPhone launched in 2007. Apple's shares gained 0.78 per cent, while Microsoft, the world's most valuable company, gained 1.91 per cent.

Nvidia's rally extends an extraordinary streak of gains fuelled by its graphics processing units, which power the boom in artificial intelligence (AI). The company's revenue has recently surged roughly 260 per cent as tech titans such as Microsoft, Meta, Google, and Amazon race to deploy artificial intelligence.

Last month, Nvidia announced a 10-for-1 stock split to make stock ownership more accessible to employees and investors. Nvidia shares have more than doubled in the first half of this year after almost tripling in value in 2023. Once the split takes effect on Friday, the lower per-share price should make the stock even more attractive to small-scale investors.
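To make the mechanics concrete, here is a minimal sketch of the split arithmetic using hypothetical numbers (the share price below is a placeholder, not a quote): a 10-for-1 split multiplies the share count by ten and divides the per-share price by ten, leaving the value of a holding, and the company's market capitalisation, unchanged.

    # Illustrative 10-for-1 stock split arithmetic (hypothetical pre-split price).
    shares_held = 5
    pre_split_price = 1_200.00                      # hypothetical price per share, USD

    post_split_shares = shares_held * 10
    post_split_price = pre_split_price / 10

    # The value of the holding is unchanged by the split.
    assert shares_held * pre_split_price == post_split_shares * post_split_price
    print(f"Before: {shares_held} shares @ ${pre_split_price:,.2f}")
    print(f"After:  {post_split_shares} shares @ ${post_split_price:,.2f}")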

With Microsoft, Meta Platforms, and Alphabet all eager to expand their artificial intelligence capabilities, Nvidia's stock price has surged 147% in 2024. Buoyed by its latest revenue forecasts, the company added close to $150 billion in market capitalisation in a single day, more than the entire market capitalisation of AT&T. The PHLX chip index rose 4.5%, and other companies have benefited from the optimism surrounding artificial intelligence, including Super Micro Computer, which builds AI-optimised servers using Nvidia chips.

During his visit to the Computex tech fair in Taiwan, Taiwan-born Jensen Huang, Nvidia's chairman and CEO, received extensive media coverage highlighting both his influence and the company's growing importance. Apple, by contrast, faces challenges from weak iPhone demand in China and stiff competition from Chinese rivals, and some analysts argue it has been slower than other tech giants to incorporate AI features.

According to LSEG data, Nvidia's stock now trades at about 39 times expected earnings, still considerably cheaper than a year ago, when it traded at more than 70 times expected earnings.
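For readers unfamiliar with the metric, the forward price-to-earnings ratio simply divides the share price by the earnings per share analysts expect over the coming year. The sketch below uses hypothetical placeholder numbers chosen to produce a 39x multiple; they are not LSEG figures.

    # Forward P/E = share price / expected earnings per share (EPS).
    # Hypothetical inputs for illustration only.
    share_price = 1_170.00        # assumed share price, USD
    expected_eps = 30.00          # assumed next-twelve-months EPS, USD

    forward_pe = share_price / expected_eps
    print(f"Forward P/E: {forward_pe:.1f}x")   # -> Forward P/E: 39.0x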

Navigating Meta’s AI Data Training: Opt-Out Challenges and Privacy Considerations


The privacy policy update

Meta will reportedly amend its privacy policy beginning June 26 to allow its AI to be trained on your data.

The story spread on social media after Meta sent emails and notifications to users in the United Kingdom and the European Union informing them of the change and offering them the option to opt out of the data collection.

One UK-based user, Phillip Bloom, publicly shared the message, alerting others to the impending changes, which appear to also affect Instagram users.

The AI training process

These changes give Meta permission to use your information and personal content from Meta-related services to train its AI. This means the social media giant will be able to use public Facebook posts, Instagram photographs and captions, and messages to Meta's AI chatbots to train its large language models and other AI capabilities.

Meta states that private messages will not be included in the training data, and the business emphasizes in its emails and notifications that each user (in a protected region) has the "right to object" to the data being utilized. 

Once implemented, the new policy will allow Meta to automatically start drawing on the affected types of content. To prevent Meta from using your content, you can opt out right now by going to the Facebook help page.

Keep in mind that this page will only load if you are in the European Union, the United Kingdom, or any country where Meta is required by law to provide an opt-out option.

Opting out: EU and UK users

If you live in the European Union, the United Kingdom, or another country with data protection regulations strict enough to oblige Meta to offer an opt-out, go to the support page mentioned above, fill out the form, and submit it.

You'll need to select your country and explain in a text box why you're opting out, with the option to provide more information below that. You should then receive a response indicating whether Meta will honor your request to opt out of having your data used.

Prepare to push back: some users report that their requests are being denied, even though in countries covered by legislation such as the European Union's GDPR, Meta should be required to honor your request.

Challenges for users outside the EU and UK

There are a few caveats to consider. While the opt-out protects you, it does not guarantee that your posts will be protected if they are shared by friends or family members who have not opted out of having their data used for AI training.

Make sure that any family members who use Facebook or other Meta services opt out, if possible. This move isn't surprising given that Meta has been gradually expanding its AI offerings on its platforms. 

So the use of user data, particularly across Meta's services, was always to be expected; there is simply too much data for the corporation to pass up as training material for its numerous AI programs.

Meta to Train AI with Public Facebook and Instagram Posts

 


 

Meta, the company behind Facebook and Instagram, is set to begin using public posts from European users to train its artificial intelligence (AI) systems starting June 26. This decision has sparked discussions about privacy and GDPR compliance.

Utilising Public Data for AI

European users of Facebook and Instagram have recently been notified that their public posts could be used to help develop Meta's AI technologies. The information that might be utilised includes posts, photos, captions, and messages sent to an AI, but private messages are excluded. Meta has emphasised that only public data from user profiles will be used, and data from users under 18 will not be included.

GDPR Compliance and Legitimate Interest

Under the General Data Protection Regulation (GDPR), companies can process personal data if they demonstrate a legitimate interest. Meta argues that improving AI systems constitutes such an interest. Despite this, users have the right to opt out of having their data used for this purpose by submitting a form through Facebook or Instagram, although these forms are currently unavailable.

Even if users opt out, their data may still be used if they are featured in another user's public posts or images. Meta has provided a four-week notice period before collecting data to comply with privacy regulations.

Regulatory Concerns and Delays

The Irish Data Protection Commission (DPC) intervened following Meta's announcement, resulting in a temporary delay. The DPC requested clarifications from Meta, which the company has addressed. Meta assured that only public data from EU users would be utilized and confirmed that data from minors would not be included.

Meta’s AI Development Efforts

Meta is heavily investing in AI research and development. The company’s latest large language model, Llama 3, released in April, powers its Meta AI assistant, though it is not yet available in Europe. Meta has previously used public posts to train its AI assistant but did not include this data in training the Llama 2 model.

In addition to developing AI software, Meta is also working on the hardware needed for AI operations, introducing custom-made chips last month.

Meta's initiative to use public posts for AI training highlights the ongoing balance between innovation and privacy. While an opt-out option is provided, its current unavailability and the potential use of data from non-consenting users underscore the complexities of data privacy.

European users should remain informed about their rights under GDPR and utilize the opt-out process when available. Despite some limitations, Meta's efforts to notify users and offer an opt-out reflect a step towards balancing technological advancement with privacy concerns.

This development marks a significant step in Meta's AI journey and underscores the critical role of transparency and regulatory oversight in handling personal data responsibly.


Facebook Account Takeovers: Can the Tech Giant Stop Hijacking Scams?

 

A Go Public investigation found that Meta has allowed a scam campaign to flourish on Facebook, with fraudsters locking users out of their accounts and impersonating them.

According to the CBC, Lesa Lowery is one of the many victims. Her Facebook account was taken over in early March, and for three days she watched helplessly as scammers duped her friends out of thousands of dollars for counterfeit goods.

Lowery's account was hijacked after she changed her password in response to an email that looked like it came from Facebook. The scammer locked her out, and the fraud ultimately cost her friends $2,500. Many of Lowery's friends reported the incident to Facebook, but Meta did not act; the scammer removed warning comments and blocked friends. Lowery's former neighbour, Carol Stevens, lost $250 in the scam.

Are Meta’s efforts enough? 

Claudiu Popa, author of "The Canadian Cyberfraud Handbook," lambasted Meta for generating billions while failing to protect users; Meta's revenue rose 16% to roughly $135 billion last year.

Meta told Go Public in a written statement that it has "over 15,000 reviewers across the globe" to address breaches, but did not explain why the retirement home fraud was allowed to proceed.

Popa, a cybercrime specialist, believes fraudsters use AI to identify victims and craft convincing emails. According to Sapio Research, 85% of cybersecurity professionals believe AI-powered attacks have increased.

In March, 41 US state attorneys general wrote to Meta complaining that it had failed to assist users as the number of Facebook account takeovers increased. Meta indicated that it was working on the issue but did not disclose specifics. Credential-stuffing attacks and data breaches can also lead to account takeovers and the sale of stolen credentials.

According to The Register, Facebook accounts have also been taken over through phone number recycling in the US. Telecom carriers reassign abandoned numbers to new customers without those numbers being unlinked from the previous owner's accounts, so a recycled number may still receive a password reset request or a two-factor authentication token, potentially allowing unauthorised access.
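The risk is easiest to see in a simplified model of an SMS-based account-recovery flow. This is an illustrative sketch, not Meta's actual recovery logic: if the service looks up accounts by phone number alone, the reset code goes to whoever currently holds the recycled number.

    # Simplified model of why phone-number recycling enables account takeovers.
    import secrets

    # Account records still reference the old owner's phone number.
    accounts = {"+15551230000": {"owner": "original_user"}}
    # The carrier has since reassigned that number to a new subscriber.
    current_number_holder = {"+15551230000": "new_subscriber"}

    def request_password_reset(phone_number: str) -> None:
        account = accounts.get(phone_number)
        if account is None:
            return
        code = secrets.token_hex(3)
        recipient = current_number_holder[phone_number]   # SMS goes to whoever holds the SIM now
        print(f"Reset code {code} for {account['owner']}'s account delivered to {recipient}")

    request_password_reset("+15551230000")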

Meta is aware of account takeovers linked to phone number recycling; however, the social media giant noted that it "does not have control over telecom providers" reissuing phone numbers, and advised users to remove phone numbers from their Facebook accounts once those numbers are no longer registered to them.

Meanwhile, cybersecurity experts argue that governments should step in to address Facebook account takeovers. According to Popa, legislation is needed to make companies like Meta protect users and respond quickly to fraud.

Are Big Tech Companies Getting Rich from AI?

 


Big Tech companies like Amazon, Microsoft, and Alphabet have showcased impressive earnings, with a substantial boost from their advancements in artificial intelligence (AI) technology. Amazon's quarterly report revealed a 13% increase in net sales, primarily attributed to its AWS cloud computing segment, which saw a 17% sales boost, fueled by new AI functions like Amazon Q AI assistant and Amazon Bedrock generative AI service. Similarly, Alphabet's stock price surged nearly 10% following its robust earnings report, emphasising its AI-driven results. Microsoft also exceeded expectations, with its AI-heavy intelligent cloud division witnessing a 21% increase in revenue.

The Federal Communications Commission (FCC) has reinstated net neutrality rules, ensuring equal treatment of internet content by service providers. This move aims to prevent blocking, slowing down, or charging more for faster service for certain content, reinstating regulations repealed in 2017. Advocates argue that net neutrality preserves fair access, while opponents express concerns over regulatory burdens on broadband providers.

Strategies for Addressing Ransomware Threats

Ransomware attacks continue to pose a considerable threat to businesses, highlighting the unavoidable need for proactive measures. Halcyon CEO Jon Miller emphasises the importance of understanding ransomware risks and implementing robust backup systems. Having a clear plan of action in case of an attack is essential, including measures to minimise disruption and restore systems efficiently. While paying ransom may be a last resort in certain scenarios, it often leads to repeated targeting and underscores the necessity of enhancing overall security posture. Collaboration among companies and sharing of threat intelligence can also strengthen defences against ransomware attacks.

Meta's AI-enabled Smart Glasses

Meta's collaboration with Ray-Ban resulted in AI-enabled smart glasses, offering a seamless interface between the physical and online world. Priced at $299, these glasses provide enhanced functionalities like connecting with smartphones, music streaming, and camera features. Despite some limitations in identifying objects, these glasses signify a potential gateway to widespread adoption of virtual reality (VR) technology.

IBM and Nvidia Announce Major Acquisitions

IBM's acquisition of HashiCorp for $6.4 billion aims to bolster its cloud solutions with HashiCorp's expertise in managing cloud systems and applications. Similarly, Nvidia's purchase of GPU orchestrator Run:ai enhances its capabilities in efficiently utilising chips for processing needs, further solidifying its competitive edge.

As businesses increasingly adopt AI technology, collaborative decision-making and comprehensive training initiatives are essential for successful implementation. IBM's survey suggests that 40% of employees will require AI-related training and reskilling in the next three years, emphasising the urgency of investing in workforce development.

In essence, the recent earnings reports and strategic moves by tech giants underscore the decisive role of AI in driving innovation and financial growth. However, amid these technological advances, addressing cybersecurity threats like ransomware and ensuring equitable access to the internet remain crucial considerations for businesses and policymakers alike.


Technical Glitch Causes Global Disruption for Meta Users

 


In a recent setback for Meta users, a widespread service outage occurred on March 5th, affecting hundreds of thousands worldwide. Meta's spokesperson, Andy Stone, attributed the disruption to a "technical issue," apologising for any inconvenience caused.

Shortly after the incident, multiple hacktivist groups, including Skynet, Godzilla, and Anonymous Sudan, claimed responsibility. Cybersecurity firm Cyberint said the disruption might have been the result of a cyberattack, as abnormal traffic patterns consistent with a DDoS attack were detected.

The outage left Facebook and Instagram users unable to access the platforms, with many being inexplicably logged out. Some users, despite entering correct credentials, received "incorrect password" messages, raising concerns about a potential hacking event. Both desktop and mobile users, totaling over 550,000 on Facebook and 90,000 on Instagram globally, were impacted.

This isn't the first time Meta (formerly Facebook) has faced such issues. In late 2021, a six-hour outage occurred when the Border Gateway Protocol (BGP) routes were withdrawn, effectively making Facebook's servers unreachable. BGP functions like a railroad switchman, directing data packets along their paths, and the absence of those routes caused a complete communication breakdown.
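A toy model helps illustrate that failure mode. The sketch below is a conceptual simplification, not real BGP (the prefix and AS numbers are illustrative): once the route announcements for a prefix are withdrawn, the rest of the internet simply has no path to the servers behind it.

    # Toy illustration of BGP route withdrawal (conceptual only, not a real BGP implementation).
    import ipaddress

    routing_table = {}   # prefix -> AS path used to reach it

    def announce(prefix: str, as_path: list[int]) -> None:
        routing_table[ipaddress.ip_network(prefix)] = as_path

    def withdraw(prefix: str) -> None:
        routing_table.pop(ipaddress.ip_network(prefix), None)

    def route_to(address: str):
        ip = ipaddress.ip_address(address)
        for prefix, as_path in routing_table.items():
            if ip in prefix:
                return as_path
        return None   # no route left: the destination is unreachable

    announce("198.51.100.0/24", [64501, 64502])   # illustrative prefix and AS path
    print(route_to("198.51.100.10"))              # -> [64501, 64502]: a path exists
    withdraw("198.51.100.0/24")                   # routes withdrawn, as in the 2021 outage
    print(route_to("198.51.100.10"))              # -> None: servers unreachable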

As the outage unfolded, users found themselves abruptly logged out of the platform, exacerbating the inconvenience. The disruption's ripple effect triggered concerns among users, with fears of a potential cyberattack amplifying the chaos.

It's worth noting that hacktivist groups often claim responsibility for disruptions they may not have caused, aiming to boost their perceived significance and capabilities. In this case, the true source of the disruption remains under investigation, and Meta continues to work on strengthening its systems against potential cyber threats.

In today's technology landscape, where service interruptions have become more common, it is vital for online platforms to invest in cybersecurity, and for users to stay vigilant and follow online security best practices to mitigate the impact of such incidents.

This incident serves as a reminder of the interconnected nature of online platforms and the potential vulnerabilities that arise from technical glitches or malicious activities. Meta assures users that they are addressing the issue promptly and implementing measures to prevent future disruptions.

As the digital world persists in evolution, users and platforms alike must adapt to the dynamic landscape, emphasising the importance of cybersecurity awareness and resilient systems to ensure a secure online experience for all.




Signal Protocol Links WhatsApp, Messenger in DMA-Compliant Fusion

 


With the EU's new rules for digital "gatekeepers" taking effect, Meta has set out how WhatsApp and Messenger will keep offering end-to-end encryption (E2EE) while complying with the requirements of the Digital Markets Act (DMA). A blog post published by Meta on Wednesday detailed how it plans to enable interoperability for Facebook Messenger and WhatsApp in the EU, meaning users of third-party messaging platforms will be able to exchange messages with Meta's apps, provided those platforms use the Signal encryption protocol that underpins them.

As enforcement of Europe's Digital Markets Act ramps up, big tech companies are getting ready to comply. In response to the new competition rules that took effect on March 6, Google, Meta, and other companies have begun laying out how they will comply and what the changes will mean for end users.

The change was not entirely WhatsApp's own decision. European lawmakers have designated WhatsApp's parent company Meta as one of six influential "gatekeeper" companies under the sweeping Digital Markets Act, giving it six months to let others into its walled garden.

With the deadline for WhatsApp interoperability with other apps only weeks away, the company is describing its plans. In the regulation's first year, the requirements are designed to support one-to-one chats and file sharing such as images, videos, and voice messages, with plans to expand them to group chats and calls in the coming years.

In December, Meta stopped allowing Instagram to communicate with Messenger, presumably as part of its DMA strategy. The EU has designated the parent companies behind Facebook, Google, and TikTok, among others, as "gatekeepers," although Apple's iMessage app and Microsoft's Edge web browser were ultimately excluded from the gatekeeper obligations.

Meta stated that before it can work with third-party providers to implement the service, they need to sign an agreement covering interoperability with Messenger and WhatsApp. To ensure that other providers meet the same security standards as WhatsApp, the company requires them to use the Signal protocol.

However, Meta will accept other protocols if they can be shown to meet those standards. Once another service submits an interoperability request, Meta has a three-month window in which to enable it, although the company warns that the functionality may not be immediately available to the general public.

Meta's approach to interoperability is designed to meet the DMA requirements while giving third-party providers a feasible way to maximize security and privacy for their users. For privacy and security, Meta will use the Signal Protocol to ensure end-to-end encrypted communication; the protocol is widely considered the gold standard for E2EE messaging.

Meta’s Facebook, Instagram Back Online After Two-Hour Outage

 

On March 5, a technical failure resulted in widespread login issues across Meta's Facebook, Instagram, Threads, and Messenger platforms.

Meta's head of communications, Andy Stone, confirmed the issues on X, formerly known as Twitter, and stated that the company "resolved the issue as quickly as possible for everyone who was impacted, and we apologise for any inconvenience." 

Users reported being locked out of their Facebook accounts, and the platform's feeds, as well as Threads and Instagram, did not refresh. WhatsApp, which is also owned by Meta, appeared to be unaffected.

A senior official from the United States Cybersecurity and Infrastructure Security Agency told reporters Tuesday that the agency was "not cognizant of any specific election nexus nor any specific malicious cyber activity nexus to the outage.” 

The outage occurred just ahead of the March 7 deadline for Big Tech firms to comply with the European Union's new Digital Markets Act. To comply, Meta is making modifications, including allowing users to separate their Facebook and Instagram accounts and preventing personal information from being pooled to target them with online adverts. It is unclear whether the downtime was related to Meta's preparations for the DMA.

Facebook, Instagram, and WhatsApp went down for hours in 2021, an outage the firm blamed on faulty configuration changes to the routers that coordinate network traffic between its data centres. The following year, WhatsApp experienced another brief outage.

Facebook engineers were dispatched to one of its key US data centres in California to restore service, indicating that the fix could not be done remotely. Further complicating matters, the outage briefly prevented some employees from using their badges to access offices and conference rooms, according to The New York Times, which first reported that engineers had been sent to the data centre.

Meta's AI Ambitions Raised Privacy and Toxicity Concerns

Following Meta CEO Mark Zuckerberg's latest earnings report, concerns have been raised over the company's intention to use vast troves of user data from Facebook and Instagram to train its own AI systems, potentially to create a competing chatbot.

Zuckerberg's revelation that Meta possesses more user data than what was employed in training ChatGPT has sparked widespread apprehension regarding privacy and toxicity issues. The decision to harness personal data from Facebook and Instagram posts and comments for the development of a rival chatbot has drawn scrutiny from both privacy advocates and industry observers. 

This move, unveiled by Zuckerberg, has intensified anxieties surrounding the handling of sensitive user information within Meta's ecosystem. As reported by Bloomberg, the disclosure of Meta's strategic shift towards leveraging its extensive user data for AI development has set off a wave of concerns regarding the implications for user privacy and the potential amplification of toxic behaviour within online interactions. 

Additionally, Meta may offer the resulting system free of charge to the public, which has raised further concerns in the tech community. While the prospect of freely accessible AI technology may seem promising, critics argue that Zuckerberg's ambitious plans lack adequate consideration of the potential consequences and ethical implications.

Following the announcement, Mark Zuckerberg said publicly that he sees Facebook's continued user growth as an opportunity to leverage data from Facebook and Instagram to develop powerful, general-purpose artificial intelligence. With hundreds of billions of publicly shared images and tens of billions of public videos on these platforms, along with a significant volume of public text posts, Zuckerberg believes this data can provide unique insights and feedback loops to advance AI technology.

Furthermore, as per Zuckerberg, Meta has access to an even larger dataset than Common Crawl, comprised of user-generated content from Facebook and Instagram, which could potentially enable the development of a more sophisticated chatbot. This advantage extends beyond sheer volume; the interactive nature of the data, particularly from comment threads, is invaluable for training conversational AI agents. This strategy mirrors OpenAI's approach of mining dialogue-rich platforms like Reddit to enhance the capabilities of its chatbot. 

What is Threatening? 

Meta's plan to train its AI on personal posts and conversations from Facebook comments raises significant privacy concerns. Additionally, the internet is rife with toxic content, including personal attacks, insults, racism, and sexism, which poses a challenge for any chatbot training system. Apple, known for its cautious approach, has faced delays in its Siri relaunch due to these issues. However, Meta's situation may be particularly problematic given the nature of its data sources. 

Mark Zuckerberg Apologizes to Families in Fiery US Senate Hearing


In a recent US Senate hearing, Mark Zuckerberg, the CEO of Meta (formerly Facebook), faced intense scrutiny over the impact of social media platforms on children. Families who claimed their children had been harmed by online content were present, and emotions ran high throughout the proceedings.

The Apology and Its Context

Zuckerberg's apology came after families shared heartbreaking stories of self-harm and suicide related to social media content. The hearing focused on protecting children online, and it provided a rare opportunity for US senators to question tech executives directly. Other CEOs, including those from TikTok, Snap, X (formerly Twitter), and Discord, were also in the hot seat.

The central theme was clear: How can we ensure the safety and well-being of young users in the digital age? The families' pain and frustration underscored the urgency of this question.

The Instagram Prompt and Child Sexual Abuse Material

One important topic during the hearing was an Instagram prompt related to child sexual abuse material. Zuckerberg acknowledged that the prompt was a mistake and expressed regret. The prompt mistakenly directed users to search for explicit content when they typed certain keywords. This incident raised concerns about the effectiveness of content moderation algorithms and the need for continuous improvement.

Zuckerberg defended the importance of free expression but also recognized the responsibility that comes with it. He emphasized the need to strike a balance between allowing diverse viewpoints and preventing harm. The challenge lies in identifying harmful content without stifling legitimate discourse.

Directing Users Toward Helpful Resources

During his testimony, Zuckerberg highlighted efforts to guide users toward helpful resources. When someone searches for self-harm-related content, Instagram now directs them to resources that promote mental health and well-being. While imperfect, this approach reflects a commitment to mitigating harm.

The Role of Parents and Educators

Zuckerberg encouraged parents to engage with their children about online safety and set boundaries. He acknowledged that technology companies cannot solve these issues alone; collaboration with schools and communities is essential.

Mark Zuckerberg's apology was a significant moment, but it cannot be the end. Protecting children online requires collective action from tech companies, policymakers, parents, and educators. We must continue to address the challenges posed by social media while fostering a healthy digital environment for the next generation.

As the hearing concluded, the families' pain remained palpable. Their stories serve as a stark reminder that behind every statistic and algorithm lies a real person—a child seeking connection, validation, and safety. 

Privacy at Stake: Meta's AI-Enabled Ray-Bans Garner Mixed Reactions

 



Meta is launching a new version of its Ray-Ban glasses with an embedded artificial intelligence assistant, aiming to revolutionize wearable technology. The assistant can process audio and visual cues and produce text or audio responses to what the wearer is doing.

Among the top features is "Look and Ask," which lets the wearer snap a picture and ask a question about it instantly, speeding up tasks such as language translation and enriching the interaction between the user and the environment.

For its upcoming AI-integrated smart glasses, Meta has announced an early access program that gives users a host of new features, though it also raises privacy concerns. Meta AI, the company's proprietary multimodal AI assistant, will be available as part of the second generation of Meta Ray-Bans.

Using the wake phrase "Hey Meta," wearers can control features and get real-time information about what they are seeing. In doing so, however, the company gathers an extensive amount of personal information, and it leaves room for interpretation as to how this data is used.

Currently in beta, the glasses come with an artificial intelligence assistant that can process video and audio prompts and respond with text or audio. The company plans to launch an early access trial program shortly. In an Instagram reel, Zuckerberg demonstrated the glasses suggesting clothes and translating text, illustrating how useful they can be in daily life.

It is important to note, however, that privacy advocates are raising concerns about the potential risks resulting from such advanced technology, since all images taken by the glasses are stored by Meta, ostensibly to train the artificial intelligence systems that operate the glasses. 

There are significant concerns about the extent and use of data collected by Meta, building on ongoing worries about the company's privacy policies. Meta says that while it collects "essential" data for maintaining the functionality of the device, such as battery life and connectivity, users can choose to provide additional data to help develop new features.

The company's privacy policy, however, remains ambiguous about the types of data it collects to identify policy violations and misuse. The first model of Meta's smart glasses included safety features such as a visible camera light and a recording switch, but despite these features, sales and engagement were lower than expected.

With these enhancements, Meta aims both to advance the field of AI and to rebuild public trust amid privacy concerns. The latest Ray-Ban spectacles will include a built-in AI assistant offering features such as real-time photo queries and language translation, despite the controversy surrounding the company's privacy practices.

Trust remains one of wearable technology's biggest challenges. The first version of Meta's smart glasses shipped with several safety features, such as a light that signals when the cameras are in use and an on/off switch for recording.

Even so, sales came in about 20% below target, and only 10% of the glasses were still in active use 18 months after launch, showing that Meta did not get the traction it hoped for even among buyers.

The new AI features are clearly intended to change those numbers. With privacy concerns still looming large, it remains to be seen whether the tech giant can convince users of its reliability when it comes to personal data.

Trading Tomorrow's Technology for Today's Privacy: The AI Conundrum in 2024

 


Artificial Intelligence (AI) is a technology that continually absorbs and redistributes humanity's collective intelligence through machine learning algorithms, and it is fast becoming all-pervasive. It is also becoming increasingly clear that, as the technology advances, so do questions about its approach to data management, or the lack thereof. As 2024 gets under way, certain developments will have long-lasting impacts.

Google's recent integration of Bard, its chat-based AI tool, into a host of other Google apps and services is a good example of how generative AI is moving more directly into consumer life through text, images, and voice.

Effectively a super-charged Google Assistant, Bard connects to everything from Gmail, Docs, and Drive to Google Maps, YouTube, Google Flights, and hotel search. Working in a conversational, natural-language mode, it can filter enormous amounts of online data and provide personalized responses to individual users.

Creating shopping lists, summarizing emails, booking trips: all the things a personal assistant would do, for those without one. As 2023 showed, not everything one sees or hears on the internet is real, whether it concerns politics, movies, or even wars.

Artificial intelligence technology continues to advance rapidly, and the advent of deepfakes has raised concern in India about their potential to influence electoral politics, especially during the Lok Sabha elections planned for next year.

The sharp rise in deepfakes has caused widespread concern in the country. In a deepfake, artificial intelligence is used to create videos or audio that depict people doing or saying things they never did or said, spreading misinformation and damaging reputations.

In the wake of the massive leap in public consciousness about the importance of generative AI that occurred in 2023, individuals and businesses will be putting artificial intelligence at the centre of even more decisions in the coming year. 

Artificial intelligence is no longer a new concept. In 2023, ChatGPT, MidJourney, Google Bard, corporate chatbots, and other artificial intelligence tools have taken the internet by storm. Their capabilities have been commended by many, while others have expressed concerns regarding plagiarism and the threat they pose to certain careers, including those related to content creation in the marketing industry. 

There is no denying that artificial intelligence has dramatically changed the privacy landscape. Whatever your feelings about AI, most people will agree that AI tools are trained on data collected from both their creators and their users.

It can be difficult to maintain transparency about how this data is handled, and equally difficult for users to understand how it is being used. Additionally, users may forget that their conversations with an AI are not as private as text conversations with other humans, and may inadvertently disclose sensitive data in the process.

Under the GDPR, users are already protected from fully automated decisions that determine the course of their lives; for example, an AI cannot deny a bank loan based solely on its analysis of someone's financial situation. Proposed legislation in many parts of the world should lead to more laws regulating artificial intelligence (AI) coming into force in 2024.

Additionally, AI developers will likely continue to refine their tools into (hopefully) more privacy-conscious products as the laws governing them become more complex. Zamir anticipates that Bard Extensions will become even more personalized and integrated with the online shopping experience, such as auto-filling checkout forms, tracking shipments, and automatically comparing prices.

All of that entails some risk, according to him: the possibility of unauthorized access to personal and financial information during automated form filling, of malicious interception of real-time tracking information, and even of manipulated data in price comparisons.

During 2024, there will be a major transformation in the tapestry of artificial intelligence, a transformation that will stir a debate on privacy and security. From Google's Bard to deepfake anxieties, let's embark on this technological odyssey with vigilant minds as users ride the wave of AI integration. Do not be blind to the implications of artificial intelligence. The future of AI is to be woven by a moral compass, one that guides innovation and ensures that AI responsibly enriches lives.

Meta Rolls Out Default End-to-End Encryption on Messenger Amid Child Security Concerns

 

Meta Platforms (META.O) announced on Wednesday that it has begun rolling out end-to-end encryption for personal chats and calls on both Messenger and Facebook. The heightened security feature, which ensures that only the sender and recipients can access messages and calls, is available immediately.

However, Meta acknowledges that it may take some time for default end-to-end encryption to be rolled out across all Messenger accounts. While users previously had the option to turn on end-to-end encryption for individual conversations, the latest update makes this privacy measure the default for everyone, a noteworthy enhancement in safeguarding user data.

Privacy Safety Issues 

In introducing encryption, Meta emphasized that the content of messages is now inaccessible to everyone, including the company itself, unless a user opts to report a message, as mentioned by Loredana Crisan, the head of Messenger, in a post unveiling this update. To make this decision, Meta collaborated with external experts, academics, advocates, and governmental entities. Their joint efforts aimed to pinpoint potential risks, ensuring that the enhancement of privacy goes hand-in-hand with maintaining a safe online environment, as highlighted in Crisan's announcement. 

Why Law Agencies Criticizing the Move? 

Meta Platforms' move to introduce default encryption on Messenger has drawn criticism from various quarters, with notable voices such as Home Secretary James Cleverly and James Babbage, director general for threats at the National Crime Agency, expressing concerns about its potential impact on detecting child sexual abuse on the platform. 

In a disappointed tone, Home Secretary James Cleverly highlighted the significance of Meta's decision as a setback, particularly in light of collaborative efforts to address online harms. Despite this disappointment, he stressed a continued commitment to working closely with Meta to ensure the safety of children in the online space. 

James Babbage, director general for threats at the National Crime Agency, echoed this sentiment, characterizing Meta's choice to implement end-to-end encryption on Facebook Messenger as highly disappointing. He emphasized the increased challenges their team now faces in fulfilling their role of protecting children from sexual abuse and exploitation due to this development. 

Let’s Understand E2EE 

End-to-end encryption (E2EE) in messaging ensures the confidentiality of messages for all parties involved, including the messaging service. Within the framework of E2EE, a message undergoes decryption exclusively for the sender and the designated recipient, symbolizing the two "ends" of the conversation and giving rise to the term "end-to-end." 

"When E2EE is default, we will also use a variety of tools, including artificial intelligence, subject to applicable law, to proactively detect accounts engaged in malicious patterns of behaviour instead of scanning private messages," the company wrote. 

While numerous messaging services claim to provide encrypted communications, not all genuinely offer end-to-end encryption. Typically, a message undergoes encryption as it travels from the sender to the service's server and subsequently from the server to the intended recipient. Nevertheless, in certain instances, the message may be briefly decrypted when it reaches the server before undergoing re-encryption. 

The nomenclature "end-to-end" encryption is apt because it renders it practically impossible for any intermediary to decrypt the message. Users can place confidence in the fact that the messaging service lacks the technical capability to read their messages. To draw a parallel, envisage sending a letter secured in a locked box, of which solely the sender and the recipient possess the key. This physical barrier for anyone else mirrors the digital functionality of E2EE.
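To make the locked-box analogy concrete, here is a minimal sketch using the open-source PyNaCl library. This is an illustration of the general E2EE idea, not the Signal protocol Meta actually uses: only the key pairs at the two ends can open the box, so a relaying server only ever sees ciphertext.

    # Minimal E2EE sketch with PyNaCl (pip install pynacl); illustrative, not Meta's implementation.
    from nacl.public import PrivateKey, Box

    sender_key = PrivateKey.generate()
    recipient_key = PrivateKey.generate()

    # The sender encrypts with their private key and the recipient's public key.
    sender_box = Box(sender_key, recipient_key.public_key)
    ciphertext = sender_box.encrypt(b"meet at noon")      # this is all a relaying server sees

    # The recipient decrypts with their private key and the sender's public key.
    recipient_box = Box(recipient_key, sender_key.public_key)
    print(recipient_box.decrypt(ciphertext))               # -> b'meet at noon'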

Meta Extends Ad-Free Facebook and Instagram Premium Access Across Europe



With the introduction of its ad-free subscription service, Meta, the parent company of Facebook and Instagram, is offering European users the chance to enjoy their favourite social platforms without being bombarded with advertisements. A recent ruling by the Court of Justice of the EU ordered Meta to obtain users' consent before personalizing ads for them, and with this move Meta is showing that it is complying with the European Union's evolving regulatory framework.

According to the announcement, users in these regions will be able to choose in November between continuing to use the platforms for free with ads or signing up for a paid, ad-free subscription. Subscribers' information will not be used to target adverts for the duration of the subscription.

Facebook and Instagram users in the European Union will soon be able to enjoy an ad-free experience, but at a cost. Starting in November of this year, they will be able to opt into the new premium service offered by Meta, the parent company that owns and operates both platforms.

Regarding pricing, users aged 18 and over will pay €9.99 per month (roughly $10.55) to access the sites without advertisements through a web browser, and €12.99 per month for access through the iOS and Android apps. Subscribers will not be shown ads on Facebook or Instagram, and their data and online activity will not be used to tailor future ads.

From March 1, 2024, each additional account added to a user's Account Center will incur an extra fee of €6 per month on the web and €8 per month on iOS and Android.

Historically, Meta has operated solely by offering free social networking services to its users and selling advertising to companies that want to reach them. The shift illustrates how data privacy laws and other government policies affecting technology companies, especially in Europe, are pushing those companies to redesign their products.

The new EU rules cover more than 450 million Europeans across 27 countries and also apply to Amazon, Apple, Google, TikTok, and other companies. According to Meta's estimates, about 258 million people use Facebook each month in Europe, and about 257 million use Instagram.

For iOS and Android, prices are adjusted to reflect the fees Apple and Google charge under their respective purchasing policies. Until March 1, 2024, a subscription covers all linked accounts in a user's Account Center; from that date, each additional account will cost €6 per month on the web and €8 per month on iOS and Android.
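As a quick worked example of the figures quoted above (illustrative arithmetic only, using the prices in this article): a web subscriber with two additional linked accounts would pay 9.99 + 2 x 6.00 = 21.99 euros per month after March 1, 2024, and a mobile subscriber with one extra account 12.99 + 8.00 = 20.99 euros.

    # Illustrative monthly cost after March 1, 2024, using the prices quoted in this article.
    BASE = {"web": 9.99, "mobile": 12.99}    # EUR per month, first account
    EXTRA = {"web": 6.00, "mobile": 8.00}    # EUR per month, each additional linked account

    def monthly_cost(platform: str, additional_accounts: int) -> float:
        return BASE[platform] + EXTRA[platform] * additional_accounts

    print(f"{monthly_cost('web', 2):.2f}")      # -> 21.99
    print(f"{monthly_cost('mobile', 1):.2f}")   # -> 20.99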

In July, the European Court of Justice, the highest court in the European Union, effectively barred Meta from combining data collected about users across its platforms, including Facebook, Instagram, and WhatsApp, as well as from outside websites and apps, in order to protect user privacy. EU regulators had already fined Meta in January for making acceptance of personalized ads a condition of using Facebook, a violation of privacy regulations. Meta said that offering a subscription service without adverts to its European subscribers could be a way to comply in full with the July judgement.

A subscription allows users to access the platforms without being shown advertising. There has been no paid, ad-free tier for services like Facebook and Instagram since the company's founding; Mark Zuckerberg has long maintained that the services can only be offered for free if advertisements accompany them.

However, Meta is now offering Instagram and Facebook users a way to subscribe to both services through one simple option. The move was made under pressure from the European Union, and the option is therefore only available to customers in the EU.

This means that Instagram users in India will continue to see ads in their feeds, whether they want them or not. If the subscription plans prove popular in the European Union and Meta sees value in them, similar plans might be introduced in India in the future.

It does seem quite a steep subscription price, even more so in Indian rupees (roughly Rs 880 to Rs 1,150), but given that it lets users enjoy Instagram and Facebook without ads, it is tempting. Meta also promises that paid subscribers' personal information will not be used for targeted marketing.

A short time ago, Mark Zuckerberg said in an interview that Facebook wants users to have free access to its service, with ads added so that both users and the company benefit. This is a point Facebook and its CEO have made again and again.

There will be no change to the ad-supported experience that Facebook and Instagram currently provide for users who choose to keep using the services for free. Such users will retain tools and settings to control their ad preferences, the ads shown to them, and the data used for ad targeting.

Advertisers will still be able to run personalised campaigns targeting users in Europe who opt for the free, ad-supported services. To preserve value for both users and businesses, Meta has committed to investing in new tools that offer enhanced control over ad experiences on its platforms.

Because the ad-free subscription is only available to users over 18, Meta is actively exploring options to provide teenagers with a responsible ad experience in line with the evolving regulatory landscape.

WhatsApp's New Twinning Feature: Manage Two Accounts on a Single Device

 


Meta has announced that smartphone users will soon be able to use two WhatsApp accounts on the same device.

According to Zuckerberg, switching between work and personal accounts is now much easier with this feature in place: no more logging out each time, carrying two phones, or worrying about messaging from the wrong account.

The feature has been in development for a few months in both the WhatsApp beta and WhatsApp Business, and it is now finally available. In a recent press release, Meta said the capability aims to make switching between accounts easier, for instance between a personal number and a professional one.

Many people prefer to maintain two WhatsApp accounts: one for work and one for personal communication. Until now, such users have had to install an app-cloning tool on Android or set up a WhatsApp Business account on iOS; this is the gap the new multi-account feature fills.

It gives users the option to switch from one WhatsApp account to another with just a few taps. To enable it, users need a second phone number (with a SIM card) or a phone with multi-SIM support. The app verifies the second number with a one-time password.

For now, the feature is only available on Android devices, and users are expected to receive the update in the coming weeks. Meta also recommends using only the official WhatsApp application rather than unofficial or fake versions that promise to make adding more accounts easier.

WhatsApp assures users that their messages are secure and private, whereas imitations may not offer the same level of protection. Meta intends to keep building features that make it easier to use multiple WhatsApp accounts across devices in the future.

WhatsApp's multi-device support, introduced in 2021, already lets users access their account on Android tablets, in browsers, and on computers. Meta has since extended it so that a single WhatsApp account can also be used on two different smartphones at the same time.

To set up a second account, users go to Settings > Add Account. They will need a second SIM, or a device with multi-SIM support via a physical SIM or an eSIM. Meta announced earlier this week that each account can have its own notification and privacy settings.

With the passkey support that WhatsApp launched earlier this week for Android, users no longer need to rely on SMS-based two-factor codes to log into the app. In short, Meta CEO Mark Zuckerberg has unveiled an upcoming feature that lets users run two WhatsApp accounts on one device, streamlining the management of personal and professional conversations.

Initially available on Android, the feature is scheduled to roll out globally in the coming weeks, and users will need an extra phone number for verification. Meta strongly advises against installing unofficial WhatsApp versions because of the security risks. The move fits Meta's broader effort to improve the user experience and extend multi-account capabilities across devices.

Lawmaker Warns: Meta Chatbots Could Influence Users With ‘Manipulative’ Advertising


Senator Ed Markey has urged Meta to postpone the launch of its new chatbots since they could lead to increased data collection and confuse young users by blurring the line between content and advertisements.

The warning letter was issued the same day Meta revealed its plans to incorporate AI-powered chatbots into its family of apps: WhatsApp, Messenger, and Instagram.

In the letter, Markey wrote to Meta CEO Mark Zuckerberg that, “These chatbots could create new privacy harms and exacerbate those already prevalent on your platforms, including invasive data collection, algorithmic discrimination, and manipulative advertisements[…]I strongly urge you to pause the release of any AI chatbots until Meta understands the effect that such products will have on young users.”

According to Markey, Meta's algorithms have already “caused serious harms” to users, such as “collecting and storing detailed personal information[…]facilitating housing discrimination against communities of color.”

He added that while chatbots can benefit people, they also carry risks. In particular, he highlighted the possibility that young users may not be able to tell the difference between ads and the chatbot's own content.

“Young users may not realize that a chatbot’s response is actually advertising for a product or service[…]Generative AI also has the potential to adapt and target advertising to an 'audience of one,' making ads even more difficult for young users to identify,” states Markey.

Markey also noted that chatbots might make social media platforms even more “addictive” to users than they already are.

“By creating the appearance of chatting with a real person, chatbots may significantly expand users’ – especially younger users’ – time on the platform, allowing the platform to collect more of their personal information and profit from advertising,” he wrote. “With chatbots threatening to supercharge these problematic practices, Big Tech companies, such as Meta, should abandon this 'move fast and break things' ethos and proceed with the utmost caution.”

The lawmaker is now asking Meta to respond to a series of questions about the new chatbots, including how they might affect users’ privacy and the advertising shown to them.

The questions also seek detail on the chatbots’ role in data collection and whether Meta will commit to not using any information gleaned from them to target advertisements at young users. Markey further asked whether adverts will be integrated into the chatbots and, if so, how Meta intends to prevent those ads from confusing children.

In response, a Meta spokesperson confirmed that the company has received the letter.

Meta further notes in a blog post that it is working with governments and other entities “to establish responsible guardrails,” and is training the chatbots with safety in mind. For instance, Meta writes, the tools “will suggest local suicide and eating disorder organizations in response to certain queries, while making it clear that it cannot provide medical advice.”

Privacy Class Action Targets OpenAI and Microsoft

A new consumer privacy class action lawsuit has targeted OpenAI and Microsoft. The legal action responds to alleged privacy violations in how the companies handled user data, and it could mark a turning point in the continuing debate over internet companies and consumer privacy rights.

The complaint, submitted on September 6, 2023, claims that OpenAI and Microsoft both failed to protect user information effectively, infringing on consumers' right to privacy. According to the plaintiffs, the corporations' practices for gathering, storing, and sharing data did not adhere to current privacy laws.

The plaintiffs accuse OpenAI and Microsoft of amassing vast quantities of personal data without explicit user consent, potentially exposing sensitive information to unauthorized third parties. The complaint also raises concerns about the transparency of the companies' data-handling policies.

This lawsuit follows a string of high-profile privacy-related incidents in the tech industry, emphasizing the growing importance of protecting user data. Critics argue that as technology continues to play an increasingly integral role in daily life, companies must take more proactive measures to ensure the privacy and security of their users.

The case against OpenAI and Microsoft echoes similar legal battles involving other tech giants, including Meta (formerly Facebook), further underscoring the need for comprehensive privacy reform. Sarah Silverman, a prominent figure in the entertainment industry, recently filed a lawsuit against OpenAI, highlighting the potentially far-reaching implications of this case.

The outcome of this lawsuit could potentially set a precedent for future legal action against companies that fall short of safeguarding consumer privacy. It may also prompt a broader conversation about the role of regulatory bodies in enforcing stricter privacy standards within the tech industry.

As the legal proceedings unfold, all eyes will be on the courts to see how this case against OpenAI and Microsoft will shape the future of consumer privacy rights in the United States and potentially serve as a catalyst for more robust data protection measures across the industry.

MasterClass: Online Learning Platform Accused of Violating Customer Privacy by Sharing Info with Meta


Online learning platform MasterClass has been accused of using trackers that transmit certain customer data to Meta for advertising purposes without users’ consent.

In connection with the case, the law firm Milberg Coleman Bryson Phillips Grossman, PLLC has been gathering affected MasterClass customers to take action against Yanka Industries, the owner of the online education platform.

According to Milberg, the firm has “reasons to believe” that MasterClass has been using tracking tools that “secretly transmit details about certain users and the videos they’ve watched to Facebook” for advertising purposes. It further noted that this data may link a MasterClass subscriber’s watch history to their Facebook account.

“Many website operators gather data about the people who visit their websites by using an invisible tracking tool called the Meta (formerly known as Facebook) pixel. The pixel, which can be embedded on any webpage, can be programmed to record every action a visitor takes, such as the buttons they click, the searches they perform and the content they view,” explains Milberg.

The law firm further notes that the data collected via the tracking tools can be used by the website operator (in this case, Yanka Industries) to better target advertising to potential consumers.

“In the case of MasterClass.com, attorneys are specifically looking into whether the website is tracking which videos its users have watched and sending that information to Meta along with each person’s Facebook ID. A Facebook ID is a unique identifier linked to an individual’s Facebook profile and could potentially be used to match up a specific person with the videos they’ve watched on Yanka Industries’ website,” the firm stated.
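To make the mechanism Milberg describes a little more concrete, here is a minimal sketch, in simplified TypeScript, of how a pixel-style tracker reports events from a page. The fbq stub, the PIXEL_ID value, and the ViewContent payload are illustrative stand-ins modelled on Meta's publicly documented pixel interface; they are not MasterClass's actual code.

```typescript
// Minimal sketch of a Meta-pixel-style tracker. The real fbq() global is
// provided by the script Meta serves; here it is stubbed out so the example
// runs on its own and simply logs what would be sent.

type EventPayload = Record<string, string>;

// Stand-in for the real fbq(): log the event instead of sending it to Meta.
function fbq(command: "init" | "track", name: string, payload?: EventPayload): void {
  console.log(command, name, payload ?? {});
}

const PIXEL_ID = "000000000000000"; // hypothetical pixel ID for illustration

fbq("init", PIXEL_ID);    // identify which advertiser account the data belongs to
fbq("track", "PageView"); // record that the visitor loaded the page

// A site could also fire a custom event whenever a visitor starts a video.
function reportVideoView(videoTitle: string): void {
  fbq("track", "ViewContent", { content_type: "video", content_name: videoTitle });
}

reportVideoView("Example lesson title");
```

In the real pixel, each such call is sent to Meta's servers along with the visitor's browser cookies, and it is that kind of linkage, between a video-view event and a specific Facebook ID, that the law firm says it is investigating.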

If the accusations prove true, MasterClass’ actions would violate the federal Video Privacy Protection Act (VPPA).

At present, Milberg is not filing a lawsuit against MasterClass, but is pursuing mass arbitration instead. This is a relatively new legal strategy, according to the law firm, that is comparable to a class action lawsuit and enables a group of people to demand restitution from a company for alleged wrongdoing.

According to Milberg, if MasterClass is found to have violated the VPPA, the company would be liable for paying each affected customer $2,500. So far, however, Milberg has not offered specific proof against MasterClass.

Meta Responds to User Complaints by Introducing Feeds for Threads

Meta, the parent company of social media giant Facebook, has recently revealed its plans to introduce feeds for Threads, a messaging app designed for close friends. This move comes in response to user complaints about the lack of a central content hub and the need for a more streamlined user experience. The company aims to enhance the app's functionality and provide a more engaging platform for users to connect and share content.

According to reports from BBC News, Meta's decision to introduce feeds for Threads follows numerous user complaints regarding the app's limited capabilities and disjointed user interface. Users have expressed their desire for a central hub where they can view and interact with content shared by their friends, similar to the experience offered by other social media platforms. Responding to this feedback, Meta plans to incorporate feeds into Threads to address these concerns and improve the overall user experience.

In an official statement, Meta spokesperson Jonathan Anderson stated, "We have taken note of the feedback we received from Threads users. We understand the importance of creating a cohesive and engaging environment for our users, and we are actively working on implementing feeds within the app. This will allow users to easily navigate and interact with the content shared by their friends, enhancing their overall experience on Threads."

The addition of feeds to Threads is expected to offer several benefits to users. It will provide a central content hub where users can view and engage with posts, photos, and videos shared by their friends. This new feature aims to foster a sense of community and encourage more active participation within the app. Moreover, the inclusion of feeds will enable users to stay up-to-date with the latest content from their close friends without having to navigate through multiple screens or individual conversations.

Meta's decision to address user feedback and enhance Threads aligns with the company's ongoing efforts to improve user satisfaction and retain a competitive edge in the social media landscape. By implementing feeds within the app, Meta aims to offer a more intuitive and enjoyable user experience, attracting and retaining users who value close-knit connections and personalized content sharing.

While Meta has not disclosed a specific timeline for the release of feeds on Threads, users can anticipate an update in the near future. The company remains committed to actively listening to user feedback and implementing changes that enhance the functionality and usability of its platforms.