
Users Warned to Check This Setting as Meta Faces Privacy Concerns

 


A new AI experiment from Meta Platforms Inc. is once again blurring the line between innovation and privacy in the rapidly evolving digital landscape. The tech giant, well known for changing the way billions of people interact online, has reportedly begun testing an artificial intelligence-powered feature that scans users' camera rolls to identify the pictures and videos most likely to be shared. 

By leveraging generative AI, the new Facebook feature aims to simplify content creation and boost engagement by surfacing relevant images, applying creative edits, and assembling themed visual recaps - effectively turning users' own galleries into curated storyboards. 

Digital Trends recently reported that Meta has rolled out the feature, currently on an opt-in basis, for users in the United States and Canada - its latest attempt to keep pace with rivals such as TikTok in a tightening battle for attention. The system analyses unshared media directly on users' devices, identifying what the company calls "hidden gems" that would otherwise remain undiscovered. 

While the feature is intended to encourage more frequent and visually captivating posts through convenience, it also reignites long-standing debates about data access, user consent, and the increasingly blurred line between personal privacy and algorithmic assistance in the era of social media. In a move that has sparked both curiosity and unease, Meta quietly rolled out new Facebook settings that allow the platform to analyse images stored in users' camera rolls - even images that have never been uploaded or shared online. 

Billed as "camera roll sharing suggestions", the feature generates personalised recommendations such as travel highlights, thematic albums, and collages from users' private photos. According to Meta, it operates only with the user's consent and is turned off by default, emphasising that users retain complete control over whether to participate. Emerging reports, however, tell a very different story. 

Many users claim the feature is already active in their Facebook app despite having no memory of enabling it - undercutting the claim that it is opt-in. This has made a growing number of people sceptical about data permissions and privacy management, heightening ongoing concerns. These silent activations point to a broader issue: users can easily overlook background settings that grant extensive access to their personal information. 

Privacy advocates are therefore urging users to re-examine their Facebook privacy settings and ensure the app's access to local photo libraries aligns with their expectations of digital privacy and their comfort levels. By tapping Allow on a pop-up labelled "cloud processing", Facebook users in effect agree to Meta's AI Terms of Service, which permit the platform to analyse their stored media - including facial characteristics - using artificial intelligence. 

Once the feature is activated, the user's camera roll is continuously uploaded to Meta's cloud infrastructure, allowing Facebook to uncover so-called "hidden gems" in their photos and suggest AI-driven collages, themed albums, or edits tailored to individual moments. These settings were first introduced to select users during testing phases last summer, but they are now gradually appearing across the platform, tucked deep within the app's configuration menus under options such as "personalised creative ideas" and "AI-powered suggestions". 

According to Meta, the tool is meant to improve the user experience by generating private, shareable content suggestions from media on the user's own device. The company insists the suggestions are visible only to the account holder and are not used for targeted advertising; they are based on parameters such as time, location, and the people or objects present. The quiet rollout, however, has unsettled users who say they never knowingly agreed to the service. 

Many people report finding the feature already activated despite having no memory of granting consent, raising renewed concerns about transparency and informed user choice. Privacy advocates say that although the tool may appear a harmless way to simplify creative posting, it reflects a larger and more complex issue: the gradual normalisation of deep access to personal data under the guise of convenience.

As Meta continues to expand its generative AI initiatives, its ability to mine unposted personal images for algorithmic insights lets the company pursue its technological ambitions, often without the clear awareness of its users. As the race to dominate the AI ecosystem intensifies, such features serve as a reminder of the delicate balance between innovation and individual privacy in the digital age. 

In response to these intensifying privacy concerns, many users are turning to Meta's "Off-Facebook Activity" controls to limit the personal information the company can collect and use beyond its own apps. Available on both Facebook and Instagram, this feature lets users view, manage, and delete the data that third-party services and websites share with Meta. 

In Facebook's Settings & Privacy menu, users can select Off-Facebook Activity under "Your Facebook Information" to see which platforms have transmitted their data to Meta, clear that history, and disable future tracking. Similar controls are available on Instagram under the Ads and Data & Privacy sections.

Disabling these options prevents Meta from storing and analysing activity that occurs outside its ecosystem - from e-commerce interactions to app usage patterns - reducing ad personalisation and limiting the flow of data between Meta and external platforms.

While the company maintains that this information helps improve user experiences and deliver relevant content, critics argue the practice infringes on privacy rights. The controversy has also spilled onto social media, where users continue to vent frustration with Meta's pervasive tracking. In one viral TikTok video with over half a million views, the creator described disabling the feature as a "small act of defiance", encouraging others to do the same to reclaim control of their digital footprint. 

Experts warn, however, that certain permissions required for the app to function remain active even after tracking is turned off, so complete data isolation remains elusive. Still, privacy advocates maintain that clearing Off-Facebook Activity and blocking future tracking are among the most effective steps users can take to significantly reduce Meta's access to their personal information. 

Amid growing concern over Meta's increasingly expansive use of personal data, companies like Proton are positioning themselves as secure alternatives that emphasise transparency and user control. The recent controversy over Meta's smart glasses - criticised for their potential to be turned into facial recognition and surveillance tools - has made calls for stronger safeguards against the abuse of private media more urgent. 

Unlike many of its peers, Proton advocates a fundamentally different approach: avoiding data collection altogether rather than attempting to manage data after it has been exposed. With Proton Drive, an encrypted cloud storage service, users can store their camera rolls and private folders without worrying about third parties accessing or harvesting their data. Every file, including its metadata, is encrypted end-to-end, so no one - not even Proton - can access or analyse users' content. 

This level of security prevents the extraction of sensitive information from photographs, such as geolocation data and patterns that can reveal personal routines and locations. Proton Drive's mobile apps for iOS and Android let users store and retrieve their files anywhere while keeping full control over their privacy. And in contrast to most social media and tech platforms, which monetise user data for advertising or model training, Proton's business model is entirely subscription-based, removing the temptation to exploit users' personal data. 

The company currently offers a five-gigabyte storage allowance - enough for roughly 1,000 high-resolution images - encouraging users to safeguard their digital memories on a platform that prioritises confidentiality over commercialisation. Privacy advocates see this model as a viable option in an era when technology increasingly clashes with the right to personal security. 

As the digital age advances, the line between personalisation and intrusion grows ever more blurred, encouraging users to take an active role in managing their own data. The ongoing issues surrounding Meta's use of artificial intelligence to analyse photos, its off-platform tracking, and its quiet data collection serve as a stark reminder that convenience often comes at the cost of privacy. 

Experts advise that reviewing app permissions, regularly clearing connected data histories, and disabling non-essential tracking features can all significantly reduce unnecessary data exposure. Storing sensitive information in an encrypted cloud service such as Proton Drive can also offer a safer environment while maintaining access to that information. 

The power to safeguard online privacy lies in awareness and action. By staying informed about new app settings, reading consent disclosures carefully, and being selective about the permissions they grant, individuals can regain control of their digital lives.

As artificial intelligence continues to redefine the limits of technology, securing personal information has become more than protection against identity theft; it is a form of digital self-defence that lets users embrace innovation while preserving their basic right to privacy.

China Hacks Seized Phones Using Advanced Forensics Tool

 


An investigation by mobile security firm Lookout has raised significant concerns about digital privacy and state surveillance: police departments across China are reportedly using a sophisticated system to extract data from confiscated smartphones. 

The system, called Massistant, was developed by Chinese cybersecurity and surveillance technology company Xiamen Meiya Pico. Lookout's analysis indicates that Massistant is geared toward extracting large amounts of sensitive data from seized smartphones, enabling authorities to perform comprehensive digital forensics on them. The software can retrieve a broad range of information, including private messages, call records, contact lists, media files, GPS locations, audio recordings, and even messages from secure messaging applications like Signal. 

The system demonstrates a notable leap in surveillance capabilities, accessing platforms once considered secure and potentially bypassing their encryption safeguards. The discovery points to increasing state control over personal data in China and underscores how intrusive digital tools are being used to support law enforcement operations in the country. 

As technologies like these become more sophisticated and widespread, the need for human rights protections, privacy safeguards, and oversight on the global stage will only grow. 

Massistant represents a significant advance in digital surveillance technology. It was developed by SDIC Intelligence Xiamen Information Co., Ltd., previously known as Meiya Pico. Using the tool, authorities can gain direct access to a wide range of personal data stored on mobile devices, including SMS messages, call histories, contact lists, GPS location records, multimedia files, audio recordings, and messages from encrypted messaging apps such as Signal. 

According to Lookout, Massistant works in conjunction with desktop-based forensic analysis software, the two together forming a comprehensive system for obtaining digital evidence. Installing and operating the tool requires physical access to the device - typically during security checkpoints, border crossings, or on-the-spot police inspections. 

Once deployed, the system allows officials to conduct a detailed examination of a phone's contents, bypassing conventional privacy protections and encryption protocols. In the absence of transparent oversight, the emergence of such tools illustrates the growing sophistication of state surveillance capabilities and raises serious concerns about user privacy, data security, and the potential for abuse. 

Further investigation revealed that Massistant's deployment and functionality are closely tied to Chinese authorities' broader push to expand digital surveillance through both hardware and software. Lookout security researcher Kristina Balaam found that the tool's developer, Meiya Pico - now operating as SDIC Intelligence Xiamen Information Co., Ltd. - maintains active partnerships with both domestic and foreign law enforcement agencies. 

Beyond product development, these collaborations extend to specialised training programmes designed to make law enforcement personnel proficient in advanced technical surveillance techniques. Lookout's research, which analysed multiple Massistant samples collected between mid-2019 and early 2023, links the tool directly to Meiya Pico: its signing certificate references the company. 

Massistant requires direct access to a smartphone - usually obtained during border inspections or police encounters - to be installed. Once installed, it integrates with the desktop forensics platform, enabling investigators to systematically extract large amounts of sensitive user information, including text messages, contact details, location history, and even protected content from secure communication platforms. 

Like its predecessor, MFSocket, Massistant connects mobile devices to a desktop in order to extract their data. Upon activation, the application prompts for the permissions it needs to access private data on the device. Once this initial authorisation is granted, the application requires no further interaction from the device owner. 

If the user tries to close the application, a warning appears stating that the software is in "get data" mode and that exiting will cause an error; the message is available only in Simplified Chinese and English, indicating the application's dual-target audience. Massistant also introduces several enhancements over MFSocket, notably the ability to connect to a user's Android device via the Android Debug Bridge (ADB) over WiFi, letting investigators work wirelessly and access additional data without a direct cable connection. 

The application is also designed to automatically uninstall itself once the USB cable is disconnected, leaving little trace of the surveillance operation. These capabilities position Massistant as a powerful weapon in the arsenal of government-controlled digital forensics and surveillance tools, underlining growing concerns about privacy violations and the lack of transparency surrounding their deployment.

Security researcher Kristina Balaam notes that, despite its intrusive capabilities, Massistant does not operate in complete stealth, so users have a good chance of detecting and removing it from a compromised device. The tool appears on the phone as a visible application, which can alert users to its presence. 

Alternatively, technically proficient users can identify and remove the application with utilities such as the Android Debug Bridge (ADB), a command-line interface that enables direct communication between a smartphone and a computer. Balaam cautions, however, that by the time a user discovers Massistant on their device, the data exfiltration is likely already complete - authorities may already have accessed and extracted the important personal information on it. 
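As a rough sketch of what that ADB-based inspection might look like - with USB debugging enabled and the standard `adb` tool installed - a user could list and remove third-party packages; note that "com.example.suspect" below is purely a placeholder, as Massistant's actual package name is not given here:

```shell
# List third-party (user-installed) packages and look for anything unfamiliar.
adb shell pm list packages -3

# Inspect a suspicious package: installer, install time, granted permissions.
# "com.example.suspect" is a hypothetical name, not Massistant's real package.
adb shell dumpsys package com.example.suspect

# Remove the suspicious package from the device.
adb uninstall com.example.suspect
```

As Balaam's point implies, removing the app after the fact does not undo any data that has already been extracted.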

Massistant's predecessor, the MFSocket mobile forensics tool - also developed by Xiamen Meiya Pico - came under cybersecurity scrutiny in 2019, and Massistant is regarded as its successor. The evolution from MFSocket to Massistant demonstrates the company's continued investment in surveillance solutions tailored for forensic investigations. 

According to industry data, Xiamen Meiya Pico controls around 40 per cent of the Chinese digital forensics market, cementing its position as the leading supplier of data extraction technologies to law enforcement. Its activities have not gone unnoticed internationally: in 2021, the U.S. government imposed sanctions on Meiya Pico for allegedly supplying surveillance tools to Chinese authorities. 

These surveillance tools have reportedly been used in ways that cause serious human rights and privacy violations. Media outlets, including TechCrunch, have asked the company about its role in Massistant's development and distribution, but it has declined to respond. 

Balaam points out that Massistant is only a small part of a much larger and rapidly growing ecosystem of surveillance software developed by Chinese companies. Lookout is currently tracking more than fifteen distinct families of spyware and malware originating from China, many of them believed to be purpose-built for state surveillance and digital forensics. 

This trend shows that the region's surveillance industry is both large and mature, deepening global concerns about unchecked data collection and the misuse of intrusive technologies. With tools like Massistant becoming increasingly common, the global conversation around privacy, state surveillance, and digital autonomy has reached a critical inflexion point. 

As mobile forensic technology grows more powerful and more accessible to government entities, the line between lawful investigation and invasive overreach blurs alarmingly. This trend threatens individual privacy rights and, where transparency and accountability are lacking, undermines trust in the digital ecosystem. 

It also underscores the urgency for individuals to adopt stronger device security practices, stay informed about the risks of physical device access, and favour encrypted platforms resistant to unauthorised exploits. 

For policymakers and technology companies around the world, the report highlights the need to develop and enforce robust regulatory frameworks governing the ethical use of surveillance tools, both domestically and internationally. Left unregulated and unmonitored, these technologies may set a dangerous precedent, enabling abuses that extend far beyond their intended scope. 

The Massistant case serves as a powerful reminder that the protection of digital rights is a central component of modern governance and civic responsibility in an age defined by data.

AI Surveillance at Paris Olympics Raises Privacy Concerns

 

French authorities' plan to employ artificial intelligence to scan the thousands of athletes, coaches, and spectators descending on Paris for the Olympics amounts to creeping surveillance, rights groups said. 

In recent months, authorities have tested AI surveillance equipment at football stadiums, concerts, and train stations. When the games start in late July, these systems will scan crowds, look for abandoned packages, locate weapons, and more. 

According to French officials, police, fire and rescue agencies, as well as certain French transport security agents, will employ these technologies until March 31, 2025, although they won't be fully operational until the games. 

Campaigners worry that AI spying will become the new norm. "The Olympics are a huge opportunity to test this type of surveillance under the guise of security issues, and are paving the way to even more intrusive systems such as facial recognition," Katia Roux, advocacy lead at Amnesty International France, stated. 

The French government has enlisted four companies in the effort: Videtics, Orange Business, ChapsVision, and Wintics. These organisations' security solutions track eight critical metrics: traffic going against the flow, people in restricted zones, crowd movement, abandoned packages, the presence or usage of weapons, overcrowding, a body on the ground, and fire. 

The software has been tested during concerts by Depeche Mode and the Black Eyed Peas, as well as a football match between Paris Saint-Germain and Olympique Lyon. 

Olympics: An AI playground 

French politicians have attempted to appease critics by banning facial recognition. Authorities say it's a red line that should not be crossed. 

Matthias Houllier, Wintics' co-founder, stated that the experiment was "strictly limited" to the eight use cases specified in the law, and that features like crowd movement detection could not be repurposed for techniques such as gait detection, which identifies a person by their unique walk. Wintics' design made it "absolutely impossible" for both end users and advanced engineers to use it for facial recognition. 

Experts are concerned that the government's methods for evaluating test performance, as well as the particular way this technology operates, have not been made public. 

"There is nowhere near the necessary amount of transparency about these technologies. There is a very unfortunate narrative that we cannot permit transparency about such systems, particularly in a law enforcement or public security context, but this is nonsense," Leufer said. 

"The use of surveillance technologies like these, especially in law enforcement and public security contexts, holds perhaps the greatest potential for harm, and therefore requires the highest level of public accountability," he added.

Ahmedabad Creates History as India’s First City With AI-Linked Surveillance System

 

Ahmedabad, the largest city in the Indian state of Gujarat, has made history by becoming the first city in India to install a surveillance system linked to artificial intelligence (AI). To enhance public safety and security, the city has partnered with a tech company to install a state-of-the-art AI system that can analyse massive amounts of data. 

A cutting-edge AI command and control station, with a 9-by-3-metre screen monitoring a vast 460 sq km region covering Ahmedabad and its surroundings, is located in the city's Paldi area. The surveillance system provides a six-camera view of the entire city by combining live drone footage with camera feeds from buses and traffic signals.

With its sophisticated facial recognition technology, the AI-linked surveillance system can recognise and track people in real time. It is a priceless tool for Ahmedabad's law enforcement agencies since it can also identify and react to patterns of criminal activity. 

The local authority is confident that the new AI system will strengthen the effectiveness of its law enforcement operations and aid in the prevention and detection of crime. The system will also help with traffic management, crowd control, and disaster response.

"The implementation of this AI-linked surveillance system is a significant milestone for Ahmedabad and for India as a whole," a spokesperson for the city stated. "We are committed to leveraging the latest technology to ensure the safety and security of our citizens, and we believe that this system will play a crucial role in achieving that goal." 

The introduction of the AI-powered monitoring system has ignited a national debate about the potential advantages and drawbacks of such technology in public places. While some have praised the system's potential to increase safety and security, others have raised concerns about privacy and data protection. 

Nonetheless, Ahmedabad's pioneering initiative has set a precedent for other Indian cities to follow as they seek to use AI to improve public safety and security. Ahmedabad has clearly established itself as a leader in the adoption of AI technology for the benefit of its citizens with the effective implementation of this system.