
UK Government’s New AI System to Monitor Bank Accounts

 



The UK’s Department for Work and Pensions (DWP) is gearing up to deploy an advanced AI system aimed at detecting fraud and overpayments in social security benefits. The system will scrutinise millions of bank accounts, including those receiving state pensions and Universal Credit. This move comes as part of a broader effort to crack down on individuals either mistakenly or intentionally receiving excessive benefits.

Despite the government's intentions to curb fraudulent activities, the proposed measures have sparked significant backlash. More than 40 organisations, including Age UK and Disability Rights UK, have voiced their concerns, labelling the initiative as "a step too far." These groups argue that the planned mass surveillance of bank accounts poses serious threats to privacy, data protection, and equality.

Under the proposed Data Protection and Digital Information Bill, banks would be mandated to monitor accounts and flag any suspicious activities indicative of fraud. However, critics contend that such measures could set a troubling precedent for intrusive financial surveillance, affecting around 40% of the population who rely on state benefits. Furthermore, these powers extend to scrutinising accounts linked to benefit claims, such as those of partners, parents, and landlords.

In response to the mounting criticism, the DWP emphasised that the new system does not grant it direct access to individuals' bank accounts or allow monitoring of spending habits. Nevertheless, concerns persist about the broad scope of the surveillance, which would entail algorithmic scanning of bank and third-party accounts without any prior suspicion of fraudulent behaviour.

The joint letter from advocacy groups highlights the disproportionate nature of the proposed powers and their potential impact on privacy rights. They argue that the sweeping surveillance measures could infringe upon individual liberties and exacerbate existing inequalities within the welfare system.

As the debate rages on, stakeholders are calling for greater transparency and safeguards to prevent misuse of the AI-powered monitoring system. Advocates stress the need for a balanced approach that addresses fraud while upholding fundamental rights to privacy and data protection.

While the DWP asserts that the measures are necessary to combat fraud, critics argue that they represent a disproportionate intrusion into individuals' financial privacy. As the discourse takes shape, the situation underscores how important it is to strike a balance between combating fraud and safeguarding civil liberties in the digital sphere.


New Car Owners Beware: Study Finds Serious Data Protection Flaws

 


Modern gadgets have been collecting every bit of user data they can gather, then selling it off to the highest bidder, ever since tech companies realized that data could be turned into dollars. The user's car has long been part of this data-sharing network, but its contribution appears to be significantly greater than most of us would have expected.

It may even be the biggest seller of users' personal information. So-called connected cars, cars with internet access, are fast becoming a regular part of the driving experience, and their proliferation is raising concerns among consumers about their data privacy rights.

Counterpoint Technology Market Research projects that more than 95% of passenger cars sold by 2030 will be equipped with embedded connectivity. This enables car manufacturers to offer customers functions related to safety and security, predictive maintenance, and prognostics.

It also opens the door for companies to collect, share, or sell personal information about individuals, including driving habits and other details people may not wish to share. While many car manufacturers give consumers the option to opt out of excessive data sharing, Counterpoint senior analyst Parv Sharma explains that these options are often buried within menus, as is the case with many other consumer technologies where selling data can generate income.

A McKinsey report published in 2021 estimated that the various use cases for monetizing car data could produce an annual revenue stream of $250 billion to $400 billion for industry players by 2030. To be sure, there are valid reasons for collecting data from drivers and vehicles, such as emergency and security-related purposes, and individuals may not always be able to opt out of such essential services.

Some of that sharing is genuinely useful: predictive maintenance enables manufacturers to detect when a part across their fleet is failing earlier than expected and to issue a recall, according to James Hodgson, ABI Research's director of smart mobility and automotive research.

Privacy concerns are growing over car companies sharing driver information with insurers, particularly as car makers themselves move into the insurance business. For instance, driving habits and details of car usage might be reported to data collectors and passed along to insurance companies to inform rate decisions.

This is distinct from the newer model of usage-based insurance, offered by companies such as Progressive and Root, in which drivers can earn lower rates by letting insurers embed devices in their cars that track their behaviour. Regulatory authorities, meanwhile, are making broad efforts to understand car manufacturers' data-sharing practices and to ensure potential privacy violations are not committed.

At its board meeting in July 2023, the California Privacy Protection Agency announced that its enforcement division would conduct a review of the connected vehicle industry. An official spokeswoman confirmed the review is underway but declined to comment further.

Carmakers' data-sharing practices might also become the basis for federal action in the future. In Doubt-Keegan's view, publishing basic information about data practices can be insufficient to avoid FTC enforcement. Public awareness of the issue is growing: in December, Senator Edward J. Markey (D-Mass.), a member of the Senate Commerce, Science, and Transportation Committee, sent a letter to 14 car manufacturers urging them to implement and enforce stronger privacy protections in their automobiles.

Premiums Affected as Internet-Connected Cars Share Data with Insurers

 


All kinds of popular features, such as in-car apps, remote functions, and even Wi-Fi hot spots, are available on most new vehicles that offer internet services. These "connected" cars are a goldmine of data for automakers, and they can be a goldmine for insurance companies too. An article published in the New York Times this week discussed the extent to which tracked driver information can affect insurance rates.

The insurance industry has in recent years offered incentives to consumers who install dongles in their cars or download smartphone apps that monitor their driving, including how much they drive, how fast they turn corners, how hard they hit the brakes, and whether they speed.
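To make the arithmetic behind such programs concrete, here is a purely illustrative sketch of a telematics "driving score". The event types, weights, and thresholds are invented for illustration and do not reflect any insurer's actual model.

```python
# Illustrative telematics scoring: invented weights, not any insurer's model.
from dataclasses import dataclass

@dataclass
class Trip:
    miles: float
    hard_brakes: int       # decelerations beyond some threshold
    sharp_turns: int       # cornering above some lateral-force threshold
    speeding_seconds: int  # time spent above the posted limit

def risk_score(trips: list[Trip]) -> float:
    """Toy score (lower is better): harsh events per 100 miles,
    plus a smaller penalty for time spent speeding."""
    miles = sum(t.miles for t in trips) or 1.0
    harsh = sum(t.hard_brakes + t.sharp_turns for t in trips)
    speeding = sum(t.speeding_seconds for t in trips)
    return (harsh / miles) * 100 + (speeding / miles) * 10

print(round(risk_score([Trip(12.5, 2, 1, 30), Trip(40.0, 0, 1, 0)]), 1))  # 13.3
```

An insurer feeding months of such per-trip events into a pricing model is the mechanism the rest of this piece is concerned with.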

A patent application by Ford Motor notes that “drivers are traditionally reluctant to participate in such programs,” so car companies are instead collecting information directly from internet-connected vehicles for use by insurance companies. Tracking drivers' data to adjust car insurance is, in itself, not a new concept.

Users who prove they are good drivers can often reduce their premiums, normally by letting their insurance company track vehicle data such as trips taken, speeds, and distance driven. But there is a significant difference between tracking of that type and what is emerging about General Motors' Smart Driver.

Many direct insurer tracking programs help consumers save money on their bills, but Smart Driver is not a typical tracking program: most of its users are not knowingly entering into such an agreement in search of savings, and the way data is transmitted to insurers makes the consent far less clear than it might seem. GM's "connected" OnStar Smart Driver service is known to share driver data.

According to Car and Driver, it was not surprising that other automakers have similar data-sharing programs. The idea is defensible when automakers effectively notify consumers that their data will be tracked and shared with others. A usage-based insurance policy entails the insurance company monitoring the driver's behaviour to determine the best policy.

The problem is the growing number of internet-connected vehicles sharing their drivers' personal information without the drivers even being aware that they have consented. Kenn Dahl says he has always been a careful driver. He owns a software company near Seattle and drives a leased Chevrolet Bolt, and neither he nor anyone else in his family has a history of causing accidents.

So Mr Dahl, 65, was shocked when the cost of his auto insurance shot up by 21% in 2022, and quotes he sought from other insurers came back similarly high. The insurer told him that the LexisNexis report on his file was a contributory factor.

LexisNexis is a global data broker with a stake in the insurance and auto insurance industries, known for keeping tabs on traffic accidents and speeding tickets. At Mr. Dahl's request, LexisNexis sent him the 258-page "consumer disclosure report" it is required to provide to consumers under the Fair Credit Reporting Act.

Typically, someone installing or updating a smartphone app simply agrees to the terms of service without reading the fine print. Even though consumers are advised to read contracts carefully before agreeing to them, there is a powerful argument that corporations must be transparent about how and when personal information will be shared with others.

This is why, according to a report from Reuters, the California Privacy Protection Agency (CPPA) has tasked its Enforcement Division with investigating how, and to what extent, automobiles equipped with features such as location sharing, smartphone integration, web-based entertainment, and cameras collect and share consumer data with others.

The US Department of Commerce's apprehension about the prospective national security threats posed by Chinese electric vehicles (EVs) finds a parallel in the contemporary debate over how data about driving behaviour in "connected" automobiles is managed.

Individuals keen on understanding the handling of such data by their vehicles are advised to diligently examine the privacy policies associated with any car applications they utilize. Additionally, consumers may avail themselves of consumer disclosure reports provided by LexisNexis, as mandated by the Fair Credit Reporting Act overseen by the Federal Trade Commission.

User Privacy: Reddit Discloses FTC Probe into AI Data Licensing Ahead of IPO


In a surprising turn of events, Reddit, the popular social media platform, has revealed that it is under investigation by the Federal Trade Commission (FTC) regarding its practices related to AI data licensing. The disclosure comes just before Reddit's highly anticipated initial public offering (IPO), raising important questions about user privacy and the responsible use of data in the age of artificial intelligence.

The Investigation 

The FTC's inquiry focuses on Reddit's handling of user-generated content, particularly its sale, licensing, or sharing with third parties to train AI models. While the details of the investigation remain confidential, the fact that it is non-public suggests that the agency is taking the matter seriously. As Reddit prepares to go public, this scrutiny could have significant implications for the company's reputation and future growth.

User Privacy at Stake

At the heart of this issue lies the delicate balance between innovation and user privacy. Reddit, like many other platforms, collects vast amounts of data from its users—posts, comments, upvotes, and more. This data is a goldmine for AI developers seeking to improve algorithms, personalize recommendations, and enhance user experiences. However, the challenge lies in ensuring that this data is used ethically and transparently.

Transparency Matters

Reddit's disclosure sheds light on the need for greater transparency in data practices. Users entrust platforms with their personal information, assuming it will be used responsibly. When data is shared with third parties, especially for commercial purposes, users deserve to know. Transparency builds trust, and any opacity in data handling can erode that trust.

Informed Consent

Did Reddit users explicitly consent to their content being used for AI training? The answer is likely buried deep within the platform's terms of service, a document few users read thoroughly. Informed consent requires clear communication about data usage, including how it benefits users and what risks are involved. The FTC's investigation will likely scrutinize whether Reddit met these standards.

The AI Black Box

AI models are often considered "black boxes." Users contribute data, but they rarely understand how it is transformed into insights or recommendations. When Reddit licenses data to third parties, users lose control over how their content is used. The investigation should prompt a broader conversation about making AI processes more transparent and accountable.

Balancing Innovation and Responsibility

Reddit's situation is not unique. Companies across industries grapple with similar challenges. AI advancements promise incredible benefits, from personalized content to medical breakthroughs, but they also raise ethical dilemmas. As we move forward, striking the right balance between innovation and responsibility becomes paramount.

Industry Standards

The FTC's investigation could set a precedent for industry standards. Companies must adopt clear guidelines for data usage, especially when AI is involved. These guidelines should prioritize user consent, data anonymization, and accountability.

User Empowerment

Empowering users is crucial. Platforms should provide accessible tools for users to manage their data, control permissions, and understand how their content contributes to AI development. Transparency dashboards and granular consent options can empower users to make informed choices.

Responsible AI Partnerships

When licensing data, companies should choose partners committed to ethical AI practices. Collaboration should align with user expectations and respect privacy rights. Responsible partnerships benefit both users and the AI ecosystem.

Privacy Perils: Experts Warn of Pitfalls in Sharing Pregnancy Photos Online

 


Posting pregnancy pictures online can create a digital identity for your child that could later be exploited, according to data scientists. Experts say that as soon as a child appears online, they are exposed to identity theft and to the distribution of their images to third parties.

Parents, they say, should therefore consider what they share about their children online, because every post contributes to the child's development as a digital individual.

A new study published in Paediatrics and Parenting found that parents often assume sharing pictures on social media sites is safe, but this may not always be the case.

Dr Valeska Berg, from Edith Cowan University in Australia, says many parents do not realise that they are building a digital identity for their children when they share photos and other identifying information on social media sites such as Facebook. Posts about pregnancy or an anticipated birth often include personal information that identifies the child.

The study shows that this sharing frequently starts before the child is even born, creating a digital identity ahead of the birth.

Dr Berg emphasised that parents need to establish secure networks for virtual interactions, whether they are using Instagram, Facebook, or any other platform. Setting a profile to private is not enough to ensure that a child's photos are safe, she explained.

As a rule of thumb, Dr Berg advises shielding children's faces in photos to preserve their privacy and avoiding publishing specific information about them on the Internet. She also said that children should be involved in the process of establishing their digital identity as much as possible.

Research is needed to identify ways of accomplishing this, and to give voice to the experiences of young children, in this important, fast-moving field. “In conclusion, the findings highlight the necessity for parents to remain aware and vigilant about the implications of sharing pregnancy and childhood photos online,” the study notes, adding that “future studies should explore the perspectives of children as key stakeholders in the creation of their digital identity.” Thoughtful consideration and proactive measures must be taken to safeguard children's privacy, as their digital footprints are developed from an early age.

Dr Berg's research emphasises the importance of ongoing exploration and dialogue so that children can participate actively in shaping their own digital identities. It is a call to action for informed decisions and responsible digital practices in today's digital age.

Signal Protocol Links WhatsApp, Messenger in DMA-Compliant Fusion

 


With the EU's new rules for digital "gatekeepers" taking effect, Meta has set out how WhatsApp and Messenger will keep offering end-to-end encryption (E2EE) while complying with the requirements of the Digital Markets Act (DMA). A blog post by Meta on Wednesday detailed how it plans to enable interoperability between Facebook Messenger, WhatsApp, and third-party messaging platforms in the EU: users will be able to message one another provided the third-party platform uses Signal's underlying encryption protocol.

As enforcement of Europe's Digital Markets Act ramps up, big tech companies are getting ready to comply. In response to the new competition rules that took effect on March 6, Google, Meta, and other companies have begun laying out their compliance plans and what they will mean for end users.

The change was not entirely WhatsApp's own decision. European lawmakers have designated WhatsApp parent company Meta as one of six influential "gatekeeper" companies under their sweeping Digital Markets Act, giving it six months to allow others into its walled garden.

With the deadline for WhatsApp interoperability with other apps just weeks away, the company is describing its plans. In the regulation's first year, the requirements are designed to support one-to-one chats and file sharing such as images, videos, and voice messages, and they are set to expand in the coming years to include group chats and calls as well.

In December, Meta stopped allowing Instagram to communicate with Messenger, presumably as part of its DMA strategy. The EU's "gatekeeper" designations cover the parent companies of Facebook (Meta), Google (Alphabet), and TikTok (ByteDance), although the EU has made clear that Apple's iMessage app and Microsoft's Edge web browser fall outside the rules.

Meta stated that before it can work with third-party providers to implement the service, those providers must sign an interoperability agreement covering Messenger and WhatsApp. To ensure that other providers meet the same security standards as WhatsApp, the company requires them to use the Signal protocol.

However, Meta will accept other protocols if they can be shown to meet the same standards. Once another service sends an interoperability request, Meta has a three-month window in which to fulfil it, though it warns that the functionality may not be immediately available to the general public.

The approach Meta has taken to interoperability is designed to meet the DMA requirements while offering third-party providers a feasible way to maximise security and privacy for their customers. For privacy and security, Meta will use the Signal Protocol for end-to-end encrypted communication; it is currently widely considered the gold standard for E2EE.
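For a rough sense of what "using the Signal protocol" involves at the lowest level, here is a minimal sketch of an X25519 key agreement plus key derivation using Python's cryptography package. This is illustrative only: the real Signal Protocol layers X3DH and the Double Ratchet on top of primitives like these, and nothing here represents Meta's implementation.

```python
# Minimal sketch of the Diffie-Hellman key agreement underlying Signal-style
# E2EE; illustrative only, not the full Signal Protocol.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each party generates a key pair; only public keys cross the wire.
alice_private = X25519PrivateKey.generate()
bob_private = X25519PrivateKey.generate()

# Each side combines its own private key with the peer's public key.
alice_shared = alice_private.exchange(bob_private.public_key())
bob_shared = bob_private.exchange(alice_private.public_key())
assert alice_shared == bob_shared  # both arrive at the same secret

# Derive a symmetric message key from the shared secret.
message_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"illustrative-e2ee").derive(alice_shared)
print(message_key.hex())
```

The point of requiring the same protocol end to end is that no server in the middle, Meta's included, ever holds the message key.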

Advocating for the Persistence of Cash to Counteract Intrusive Banking Practices

 


The Bank of England released news this week that the value of notes in circulation has increased by nearly 16 percent since last year, as it announced the opening of a new exhibition on the future of money (who could resist a tour through the history of payment methods?).

Jennifer Adam, a curator at the Bank of England Museum, noted that even though many people now rely on digital payments, plenty still use cash regularly. She added that physically handing over cash in shops makes it much easier for people to keep track of their finances.

The spike in cash has also been attributed to “the turmoil caused by the pandemic and a rise in living costs”. And people are sick and tired of Big Brother, the state grabbing at our data with its tentacles.

Big Brother isn't the only problem. With an election looming, the government is working through its catalogue of scapegoats to avoid addressing the economic hardship families currently face. There is no better example of its whipping up of divisive, xenophobic, anti-immigrant sentiment than Rishi Sunak's ongoing struggle to implement his unlawful flagship Rwanda policy.

Last week, Sunak accepted (then backed out of) a £1,000 bet with TalkTV host Piers Morgan that he would get planes in the air before the next general election, a wager that exemplifies the government's growing indifference to the asylum seekers most affected by the policy.

Meanwhile, the Data Protection and Digital Information Bill (DPDI) has passed its second reading in the House of Lords, and amendments to the bill stand to give the state far greater reach into benefits recipients' savings accounts, overseas travel, and more. It follows several cruel pieces of legislation that debilitate the welfare system in a misguided attempt to help people find work and to 'crack down' on fraudulent welfare claimants.

This government seems determined to pit workers and benefits recipients against one another for votes, as evidenced by Sunak's promise to cut disability benefits in order to reduce taxes. Under the DPDI Bill, introduced by the Secretary of State for Work and Pensions, Mel Stride, the DWP will be able to spy on welfare recipients' bank accounts, ostensibly to improve the welfare system.

Accordingly, nearly 9 million people, plus anyone connected to a claimant, could come under surveillance, including previous and current partners, children, and even landlords. The government, however, is facing mounting pressure against the bill, including from the private sector.

A petition urging the government to stop scrutinising bank accounts and to preserve benefits claimants' dignity and privacy has gathered over 80,000 signatures so far. Politicians, too, have voiced concerns about privacy and surveillance.

In the House of Lords, the government has been accused of making an Orwellian "nightmare" come true with a bill that would allow officials to snoop on the bank accounts of benefit claimants. The Data Protection and Digital Information Bill would compel banks to provide the Department for Work and Pensions (DWP) with data to help it find fraud and errors among those claiming benefits.

The bill has now passed its second reading in the House of Lords. In his speech, Sir Prem Sikka told the House that George Orwell's iconic novel 1984, first published in 1949, proclaimed Big Brother to be the spectre of the future.

The Conservative government, he argued, has now given shape to this nightmare by rolling back many of the policies and programs of the state. Its actions have already undermined people's right to protest and to withdraw their labour, and now the sick, disabled, elderly, poor, and unfortunate are to be subjected to 24/7 snooping on their bank, building society, and other accounts without a court order.

The resurgence of cash sends a reassuring message to those of us in flight from the data grab: we are not alone. The Facebook generation stopped posting photos of themselves getting sloshed once they realised future bosses might question their professed love of nothing more than a quiet night in front of the TV. Likewise, the convenience and ease of buying everything on the go with a phone is becoming less attractive to Millennials as they realise that banks are watching their every move.

Privacy Under Siege: Analyzing the Surge in Claims Amidst Cybersecurity Evolution

 

As corporate directors and security teams grapple with the new cybersecurity regulations imposed by the Securities and Exchange Commission (SEC), a stark warning emerges regarding the potential impact of mishandling protected personally identifiable information (PII). David Anderson, Vice President of Cyber Liability at Woodruff Sawyer, underscores the looming threat that claims arising from privacy mishandling could rival the costs associated with ransomware attacks. 

Anderson notes that, while privacy claims may take years to navigate the legal process, the resulting losses can be just as catastrophic over the course of three to five years as a ransomware claim is over three to five days. This revelation comes amidst a shifting landscape where privacy issues, especially those related to protected PII, are gaining prominence in the cybersecurity arena. 

In a presentation outlining litigation trends for 2024, Dan Burke, Senior Vice President and National Cyber Practice Leader at Woodruff Sawyer, sheds light on the emergence of pixel-tracking claims as a focal point for plaintiffs. These claims target companies that track website activity through pixels without obtaining proper consent, adding a new layer of complexity to the privacy landscape.
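For readers unfamiliar with the mechanism at issue, here is a bare-bones sketch of how a tracking pixel works, written as a hypothetical Flask endpoint (the framework and route name are assumptions for illustration). The "image" is a pretext; the request log is the product.

```python
# Hypothetical tracking-pixel endpoint, for illustration only.
import io
from flask import Flask, request, send_file

app = Flask(__name__)

# A minimal valid 1x1 transparent GIF.
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
         b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00\x00"
         b"\x02\x02D\x01\x00;")

@app.route("/pixel.gif")
def pixel():
    # The side effect is the point: every page that embeds this image
    # reports the visitor's IP, browser, and the page they were viewing.
    app.logger.info("visit ip=%s ua=%s page=%s",
                    request.remote_addr,
                    request.headers.get("User-Agent"),
                    request.args.get("page"))
    return send_file(io.BytesIO(PIXEL), mimetype="image/gif")
```

A page embeds something like `<img src="https://tracker.example/pixel.gif?page=/pricing">` (a hypothetical URL), and every render quietly reports the visitor, which is precisely the behaviour the consent claims target.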

A survey conducted by Woodruff Sawyer reveals that 31% of cyber insurance underwriters consider privacy their top concern for 2024, following closely behind ransomware, which remains a dominant worry for 63% of respondents. This underscores the industry's recognition of the escalating importance of safeguarding privacy in the face of evolving cyber threats. James Tuplin, Senior Vice President and Head of International Cyber at Mosaic Insurance, predicts that underwriters will closely scrutinize privacy trends in 2024.

The prolonged nature of privacy litigation, often spanning five to seven years, means that this year will witness the culmination of cases filed before the implementation of significant privacy laws. Privacy management poses challenges for boards and security teams, exacerbated by a lack of comprehensive understanding regarding the types of data collected and its whereabouts within organizations. 

Sherri Davidoff, Founder and CEO at LMG Security, likens data hoarding to hazardous material, emphasizing the need for companies to prioritize data elimination, particularly PII, to mitigate regulatory and legal risks. Companies may face significant challenges despite compliance with various regulations and state laws. Michelle Schaap, who leads the privacy and data security practice at Chiesa Shahinian & Giantomasi (CSG Law), cautions that minor infractions, such as inaccuracies in privacy policies or incomplete opt-out procedures, can lead to regulatory violations and fines. 

Schaap recommends that companies leverage assistance from their cyber insurers, engaging in exercises such as security tabletops to address compliance gaps. A real-world example from 2022, where a company's misstatement about multifactor authentication led to a denied insurance claim, underscores the critical importance of accurate and transparent adherence to privacy laws. 

As privacy claims rise to the forefront of cybersecurity concerns, companies must adopt a proactive approach to privacy management, acknowledging its transformation from an IT matter to a critical business issue. Navigating the intricate web of privacy laws, compliance challenges, and potential litigation requires a comprehensive strategy to protect sensitive data and corporate reputations in this evolving cybersecurity landscape.

Serco Leisure Faces Legal Action for Unlawful Employee Face Scanning



Serco Leisure, a prominent leisure firm based in the UK, finds itself at the centre of a regulatory storm as the Information Commissioner's Office (ICO) intensifies its scrutiny. The ICO has raised serious concerns over the alleged illegal processing of biometric data, affecting more than 2,000 employees spread across 38 leisure facilities operated by the company. At the heart of the matter is the contentious implementation of facial scanning and fingerprint technology, ostensibly deployed to track staff attendance. This move has drawn sharp criticism from the ICO, which contends that the company's actions in this regard are not only ethically questionable but also fall short of principles of fairness and proportionality.

Despite Serco Leisure claiming it sought legal advice before installing the cameras and asserting that employees did not complain during the five years the systems were in use, the ICO found the firm had failed to provide a clear alternative to collecting biometric data. The company's staff, who also undergo fingerprint scanning, were not offered less intrusive methods, such as ID cards or fobs.

The ICO, led by UK Information Commissioner John Edwards, argued that Serco Leisure's actions created a power imbalance in the workplace, leaving employees feeling compelled to surrender their biometric data. Edwards emphasised that the company neglected to fully assess the risks associated with biometric technology, prioritising business interests over employee privacy.

According to the ICO, biometric data, being unique to an individual, poses greater risks in the event of inaccuracies or security breaches. Unlike passwords, faces and fingerprints cannot be reset, heightening concerns regarding data security.

Serco Leisure, while committing to comply with the enforcement notice, insisted that the facial scanning technology aimed to simplify clocking in and out for workers. The company claimed that it consulted with team members before the technology's implementation and received positive feedback.

Following this case, the ICO is releasing new guidance for organisations considering the use of employees' biometric data. This guidance aims to help such organisations comply with data protection laws. The controversial nature of biometric technology has sparked debate, with privacy advocates asserting that it infringes on individuals' rights, especially as artificial intelligence enhances the capabilities of these systems. Law enforcement and some businesses, on the other hand, argue that it is a precise and efficient method for ensuring safety and catching criminals.

Serco Leisure's use of facial scanning technology to monitor staff attendance has raised legal concerns, leading to an enforcement notice from the ICO. The incident underscores the need for organisations to carefully consider the privacy implications of using biometric data and to explore less intrusive alternatives that protect employee privacy while maintaining operational efficiency. The ICO's upcoming guidance will serve as a crucial resource for organisations navigating the complexities of using biometric data in the workplace.



American Express Faces Criticism Over Weak Password Policies

 



American Express found itself under scrutiny as users raised eyebrows over its seemingly weak password policies. The requirements, limiting passwords to 6 to 8 characters drawn from a narrow set of allowed characters, have sparked concerns about the vulnerability of user accounts. This has ignited a broader conversation about the importance of robust password practices and the need for companies to keep pace with advancing cybersecurity standards.

Upon investigation, it was discovered that a user who raised the issue received a response from American Express, defending their policy. The email claimed that the website employs 128-bit encryption, making passwords composed solely of letters and numbers more secure. The rationale behind avoiding special characters was explained as a measure to thwart hacking software, which supposedly recognizes them easily.

However, security experts argue that this explanation is flawed. The concept of password "entropy," representing the variety of possible values, is critical in assessing the strength of a password. American Express's limitations on character types result in low password entropy, potentially compromising user accounts. The assertion that hackers can easily identify non-alphabetic characters is debunked by cybersecurity experts who emphasise that allowing special characters and longer passwords enhances security.
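The entropy point is easy to quantify. Below is a back-of-the-envelope sketch, assuming each character is drawn uniformly at random (real user-chosen passwords fare worse, but the comparison stands):

```python
import math

def entropy_bits(charset_size: int, length: int) -> float:
    """Bits of entropy for a password drawn uniformly at random."""
    return length * math.log2(charset_size)

# 8 characters, letters and digits only (26 + 26 + 10 = 62 symbols)
print(f"{entropy_bits(62, 8):.1f} bits")    # ~47.6 bits
# 16 characters drawn from ~95 printable ASCII symbols
print(f"{entropy_bits(95, 16):.1f} bits")   # ~105.1 bits
```

Each added character multiplies the attacker's search space by the full alphabet size, which is why experts weight length and character variety far above any supposed benefit of a restricted character set.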

Moreover, the email defended the 8-character limit by claiming it reduces keyboard contact, purportedly preventing hacking software from deciphering passwords based on common key presses. However, critics argue that the opposite is true – encouraging longer and more complex passwords would provide greater protection against hacking attempts.

In an effort to address users' apprehensions, American Express sought to reassure its clientele by emphasising its security measures, highlighting advanced monitoring systems designed to promptly identify irregular or potentially fraudulent card activity. Despite this assurance, scepticism lingers among users, casting doubt on the prevailing password policy. For users, confidence in the security of their accounts evidently rests on more than the detection of suspicious activity, keeping alive the debate over whether the current password protocols are adequate.

The controversy has prompted renewed scrutiny of American Express's password policies. It remains to be seen whether the company will adapt its approach to align with modern cybersecurity standards. As users await potential changes, the debate serves as a reminder of the importance of robust password practices and of the need for companies to stay vigilant in the shifting world of online security.


What Is The Virtual Chief Information Security Officer?

 


In our fast-paced digital age, where everything is just a click away, ensuring the safety of our online space has become more important than ever. It's like having a virtual fortress that needs protection from unseen threats. Now, imagine having a friendly digital guardian, the Virtual Chief Information Security Officer (vCISO), to watch over your activities. This isn't about complex tech jargon; it's about making your online world safer, simpler, and smarter.

Understanding the vCISO

The vCISO operates from a remote stance yet assumes a pivotal role in securing your digital assets. Functioning as a vigilant custodian for your crucial data, they meticulously enforce compliance, maintain order, and mitigate potential risks. Essentially, the vCISO serves as a professional guardian, even from a distance, ensuring the integrity and security of your data.


Benefits of Opting for a vCISO

1. Save Costs: Hiring a full-time CISO can be expensive. A vCISO is more budget-friendly, letting you pay for the expertise you need without breaking the bank.

2. Flexibility: The vCISO adapts to your needs, providing support for short-term projects or ongoing guidance, just when you need it.

3. Top-Tier Talent Access: Imagine having a pro on speed dial. The vCISO gives you access to experienced knowledge without the hassle of hiring.

4. Strategic Planning: A vCISO crafts specific security plans that align with your business goals, going beyond mere checkboxes to authentically strengthen the defenses of your digital infrastructure.

5. Independent View: Stepping away from office politics, a vCISO brings a fresh, unbiased perspective focused solely on improving your security.

Meet Lahiru Livera: Your Virtual Cybersecurity Guide

Lahiru Livera serves as a trusted expert in ensuring online safety. He's skilled at spotting and tackling problems early on, setting up strong security measures, and acting quickly when issues arise. Moreover, he shares valuable knowledge with your team, enabling them to navigate the digital world effectively and become protectors against potential online threats.

Whether your team is big or small, consider getting a vCISO. Connect with Lahiru Livera, your online safety guide, and firmly bolster your team's digital existence to withstand any forthcoming challenges.

All in all, the vCISO presents a straightforward and cost-effective method to ensure online safety. Think of it as having a knowledgeable ally, readily available when needed, without straining your budget. Lahiru Livera stands prepared to assist you in identifying potential issues, establishing intelligent protocols, and transforming your team into adept defenders against online threats. 


Protecting User Privacy by Removing Personal Data from Data Broker Sites

 


As part of a new subscription service, Mozilla is offering users the ability to find and remove their personal and sensitive information from data brokers across the internet. The subscription, known as Mozilla Monitor Plus, lets users locate their exposed information and have it removed.

The service targets phone numbers, email addresses, home addresses, and other information that is usually sold to data broker platforms for profit. It builds on the free privacy-monitoring service formerly called Firefox Monitor, now renamed Mozilla Monitor, which is being revamped to strengthen privacy for users.

Previously, Mozilla Monitor was a free service that notified users when their email addresses appeared in data breaches. The new subscription tier is called Monitor Plus, and the roughly 10 million current Mozilla Monitor users will now be able to run scans to see whether their personal information has been exposed.

Whenever a breach is detected, Monitor Plus provides the tools to make a user's information private again. Removing information from data broker websites is otherwise a convoluted and confusing process, and it is not uncommon for people to be unsure who is using their personal information or how to get rid of it once they find it online.

Most sites have either an opt-out page or require contacting the broker directly to request removal. Mozilla Monitor simplifies this process by proactively searching across 190 data broker sites known for selling private and personal information.

If any data the user has provided to Mozilla, such as name, location, or birthdate, is discovered on those sites, Mozilla will initiate a removal request on the user's behalf. The removal process can take anywhere from a day to a month. The Monitor Plus subscription, which includes this feature, costs $13.99 per month, or $8.99 per month with an annual subscription.

Users who do not wish to subscribe have a free option: a one-time scan of data broker sites. These users, however, must manually work through the steps to remove their information from each website, which may encourage them to upgrade to Monitor Plus, since it automates an otherwise tedious process.

As for data breaches, both free and paid users will continue to receive alerts and have access to tools explaining how to fix high-risk breaches. By providing their email address and a few personal details, first and last name, city, state, and date of birth, users can initiate a free one-time scan.

The tool then scans for potential exposures, informs users about them, and explains how they can be fixed. Mozilla initiates data removal requests on behalf of users who want their data removed, and users can view the status of their requests and track their progress.

Furthermore, after personal information is removed, Mozilla performs a monthly scan of the 190+ data broker sites to ensure it stays that way. Users must submit their first and last name, current city and state, date of birth, and email address to initiate a scan; Mozilla's extensive privacy policy covers this information, and it is encrypted.

With this information in hand, Mozilla scans for your personal data and shows you where it has been exposed by data breaches, brokers, or websites that collect personal information. In 2023 alone, 233 million people were affected by data breaches, which is why a tool like this is vital in the current environment. The Mozilla Monitor Plus subscription includes monthly scans and automatic removal of exposed personal information.

More than 800 False "Temu" Domains Trick Customers Into Losing Their Credentials

Credential Theft

Cybersecurity experts caution against falling for Temu phishing scams since they use phony freebies to obtain passwords. In the last three months, more than 800 new "Temu" domains have been registered.

Temu is the latest company co-opted by con artists for their phishing schemes. With over 800 new "Temu" domains registered in the last three months, cybersecurity researcher Jeremy Fuchs of Checkpoint's Harmony Email has observed hackers taking advantage of Temu's giveaway offers to persuade users to divulge their passwords.

Just so you know, Temu is an international e-commerce site with 40% of its users residing in the United States. It provides customers with direct shipping of discounted goods. Launched in 2022, Temu is accessible in 48 nations, encompassing Australia, Southeast Asia, Europe, and the Middle East.

It ranks second in the Apple App Store and first in the Google Play Store for shopping apps as of February 7, 2024. The majority of app users are older folks, aged 59 and up.

The Scam

According to analysts, the example phishing email purports to come from Temu Rewards. On closer inspection, though, it was sent from an unconnected onmicrosoft.com email account. The email contains a blank image and a link to a page that harvests credentials; by telling recipients they have won a prize, the threat actors hope to draw them in.
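A simple heuristic against this particular lure is to compare the From: domain with the brand the mail claims to represent. The sketch below uses a hypothetical sample header; real mail filtering also needs SPF, DKIM, and DMARC checks, since From: headers can be forged outright.

```python
from email.utils import parseaddr

def sender_domain(from_header: str) -> str:
    """Extract the domain of the address in a From: header."""
    _, addr = parseaddr(from_header)
    return addr.rsplit("@", 1)[-1].lower() if "@" in addr else ""

def matches_brand(domain: str, brand_domain: str) -> bool:
    # Exact match or a genuine subdomain; plain endswith() would let
    # look-alikes such as "eviltemu.com" slip through.
    return domain == brand_domain or domain.endswith("." + brand_domain)

header = "Temu Rewards <promo@example.onmicrosoft.com>"  # hypothetical sample
if not matches_brand(sender_domain(header), "temu.com"):
    print("From: domain does not match the claimed brand; treat as suspect.")
```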

Phishing and Brand Names

Threat actors have previously used popular brands and current trends to their advantage to obtain sensitive data, including credentials, from unsuspecting consumers.

Cyjax researchers uncovered a sophisticated phishing campaign aimed at over 400 firms in a variety of industries. The con artists, who most likely have Chinese ties, used 42,000 domains and at least 24,000 survey and landing pages to advertise the scheme, spread malware, and earn money from advertisements.

Bolster AI cybersecurity experts uncovered a USPS delivery phishing campaign that employs sophisticated tactics to target victims in the United States, and Bolster's CheckPhish tool found more than 3,000 phishing domains imitating Walmart. The campaigns misled customers into believing they had a failed delivery or unpaid bills. Threat actors have since refined their attack strategies, moving from misleading messages to enticing victims to download apps that steal banking or financial data.

In January 2024, business owners on Meta Platforms were found to be the target of a phishing scam that attempted to obtain their email addresses and passwords in order to take control of their Facebook pages, profiles, and financial information. The hoax created a sense of urgency and authenticity by leveraging Meta Platforms' authority.

Cybersecurity and Temu

Temu has experienced several cybersecurity-related problems, including claims that it was gathering data from users and devices, including SMS messages and bank account details.

A class-action lawsuit was filed in the United States in November 2023, claiming that the corporation had obtained its customers' data illegally. A further revelation implicated Temu in the unapproved release of customer information, with data allegedly surfacing for sale on the dark web following transactions made by users of the app.


Meta's AI Ambitions Raised Privacy and Toxicity Concerns

Following Meta CEO Mark Zuckerberg's latest earnings report, concerns have been raised over the company's intention to utilize vast troves of user data from Facebook and Instagram to train its own AI systems, potentially creating a competing chatbot.

Zuckerberg's revelation that Meta possesses more user data than what was employed in training ChatGPT has sparked widespread apprehension regarding privacy and toxicity issues. The decision to harness personal data from Facebook and Instagram posts and comments for the development of a rival chatbot has drawn scrutiny from both privacy advocates and industry observers. 

This move, unveiled by Zuckerberg, has intensified anxieties surrounding the handling of sensitive user information within Meta's ecosystem. As reported by Bloomberg, the disclosure of Meta's strategic shift towards leveraging its extensive user data for AI development has set off a wave of concerns regarding the implications for user privacy and the potential amplification of toxic behaviour within online interactions. 

Additionally, Meta may offer the resulting model to the public free of charge, which has raised further concerns in the tech community. While the prospect of freely accessible AI technology may seem promising, critics argue that Zuckerberg's ambitious plans lack adequate consideration of the potential consequences and ethical implications.

Following the new development, Mark Zuckerberg reported to the public that he sees Facebook's continued user growth as an opportunity to leverage data from Facebook and Instagram to develop powerful, general-purpose artificial intelligence. With hundreds of billions of publicly shared images and tens of billions of public videos on these platforms, along with a significant volume of public text posts, Zuckerberg believes this data can provide unique insights and feedback loops to advance AI technology. 

Furthermore, as per Zuckerberg, Meta has access to an even larger dataset than Common Crawl, comprised of user-generated content from Facebook and Instagram, which could potentially enable the development of a more sophisticated chatbot. This advantage extends beyond sheer volume; the interactive nature of the data, particularly from comment threads, is invaluable for training conversational AI agents. This strategy mirrors OpenAI's approach of mining dialogue-rich platforms like Reddit to enhance the capabilities of its chatbot. 

What is Threatening? 

Meta's plan to train its AI on personal posts and conversations from Facebook comments raises significant privacy concerns. Additionally, the internet is rife with toxic content, including personal attacks, insults, racism, and sexism, which poses a challenge for any chatbot training system. Apple, known for its cautious approach, has faced delays in its Siri relaunch due to these issues. However, Meta's situation may be particularly problematic given the nature of its data sources. 

Mark Zuckerberg Apologizes to Families in Fiery US Senate Hearing


In a recent US Senate hearing, Mark Zuckerberg, the CEO of Meta (formerly Facebook), faced intense scrutiny over the impact of social media platforms on children. Families who claimed their children had been harmed by online content were present, and emotions ran high throughout the proceedings.

The Apology and Its Context

Zuckerberg's apology came after families shared heartbreaking stories of self-harm and suicide related to social media content. The hearing focused on protecting children online, and it provided a rare opportunity for US senators to question tech executives directly. Other CEOs, including those from TikTok, Snap, X (formerly Twitter), and Discord, were also in the hot seat.

The central theme was clear: How can we ensure the safety and well-being of young users in the digital age? The families' pain and frustration underscored the urgency of this question.

The Instagram Prompt and Child Sexual Abuse Material

One important topic during the hearing was an Instagram prompt related to child sexual abuse material. Zuckerberg acknowledged that the prompt was a mistake and expressed regret. The prompt mistakenly directed users to search for explicit content when they typed certain keywords. This incident raised concerns about the effectiveness of content moderation algorithms and the need for continuous improvement.

Zuckerberg defended the importance of free expression but also recognized the responsibility that comes with it. He emphasized the need to strike a balance between allowing diverse viewpoints and preventing harm. The challenge lies in identifying harmful content without stifling legitimate discourse.

Directing Users Toward Helpful Resources

During his testimony, Zuckerberg highlighted efforts to guide users toward helpful resources. When someone searches for self-harm-related content, Instagram now directs them to resources that promote mental health and well-being. While imperfect, this approach reflects a commitment to mitigating harm.

The Role of Parents and Educators

Zuckerberg encouraged parents to engage with their children about online safety and set boundaries. He acknowledged that technology companies cannot solve these issues alone; collaboration with schools and communities is essential.

Mark Zuckerberg's apology was a significant moment, but it cannot be the end. Protecting children online requires collective action from tech companies, policymakers, parents, and educators. We must continue to address the challenges posed by social media while fostering a healthy digital environment for the next generation.

As the hearing concluded, the families' pain remained palpable. Their stories serve as a stark reminder that behind every statistic and algorithm lies a real person—a child seeking connection, validation, and safety. 

Privacy Watchdog Fines Italy’s Trento City for Privacy Breaches in Use of AI


Italy’s privacy watchdog has fined the northern city of Trento for failing to comply with data protection rules in its use of artificial intelligence (AI) for street surveillance projects.

Trento is the first local administration in Italy to be sanctioned by the GPDP watchdog over its use of data from AI tools. The city has been fined 50,000 euros ($54,225) and urged to delete the data gathered in the two European Union-funded projects.

The GPDP, known as one of the EU's most proactive authorities in assessing AI platforms' compliance with the bloc's data protection regulations, previously imposed a temporary ban on ChatGPT, the well-known chatbot, in Italy. In 2021, the authority also found that a facial recognition system tested by the Italian Interior Ministry did not meet the terms of privacy laws.

Rapid advances in AI across many industries have raised concerns about personal data security and privacy rights.

Following a thorough investigation of the Trento projects, the GPDP found “multiple violations of privacy regulations,” it noted in a statement, while also recognising that the municipality had acted in good faith.

It added that the data gathered in the projects was not sufficiently anonymised and had been illicitly shared with third-party entities.

“The decision by the regulator highlights how the current legislation is totally insufficient to regulate the use of AI to analyse large amounts of data and improve city security,” the municipality said in a statement.

Moreover, Italy's government, led by Prime Minister Giorgia Meloni, has promised to make the AI revolution a highlight of its presidency of the Group of Seven (G7) major democracies.

EU legislators and governments reached a provisional agreement in December to regulate ChatGPT and other AI systems, bringing the technology one step closer to formal rules. One major source of contention is the application of AI to biometric surveillance.

23andMe Faces Privacy Breach

 


23andMe, a prominent genetic testing provider, is grappling with a substantial security breach spanning five months, from April 29 to September 27. The breach exposed the health reports and raw genotype data of affected customers, shedding light on vulnerabilities in safeguarding personal genetic information. It is worth looking closely at what this breach means for the privacy of your genetic data.

The breach occurred through a credential stuffing attack, in which attackers reused usernames and passwords stolen in breaches of other online platforms. The compromised information, including data on 1 million Ashkenazi Jews and 4.1 million individuals in the UK, was posted on hacking forums such as BreachForums and the unofficial 23andMe subreddit.
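
Credential stuffing succeeds because people reuse passwords across services, so one common defence is to screen passwords against known breach corpora. Here is a minimal sketch against the public Have I Been Pwned “Pwned Passwords” range API; the API is real, while the function name is our own. Its k-anonymity design means only the first five characters of the password's SHA-1 hash ever leave the machine:

```python
# Sketch: check whether a password appears in known breach corpora.
import hashlib
import requests

def pwned_count(password: str) -> int:
    """Return how many times a password appears in the HIBP breach corpus."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    # The response lists hash suffixes and counts, one per line: "SUFFIX:COUNT".
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    n = pwned_count("password123")  # example only; never log real passwords
    print(f"seen {n} times in known breaches" if n else "not found in known breaches")
```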

The stolen data included sensitive information such as health reports, wellness reports, carrier status reports, and self-reported health conditions. 23andMe also acknowledged that, for users of the DNA Relatives feature, the attackers may have scraped DNA Relatives and Family Tree profile information.

The exposed information encompasses ancestry reports, matching DNA segments, self-reported locations, ancestor birth locations, family names, profile pictures, birth years, and details from the "Introduce yourself" section.

To address the breach, 23andMe required all customers to reset their passwords on October 10. Since November 6, the company has also mandated two-factor authentication for all customers to enhance security and block future credential-stuffing attempts.
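
Authenticator-app 2FA of this kind is typically built on time-based one-time passwords (TOTP, RFC 6238). A minimal sketch using the third-party pyotp library follows; the account and issuer names are illustrative, and this is in no way 23andMe's actual implementation:

```python
# Sketch: TOTP-based two-factor authentication with pyotp.
import pyotp

# Enrolment: the server generates and stores a per-user secret.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The provisioning URI is what the QR code shown to the user encodes.
print(totp.provisioning_uri(name="user@example.com", issuer_name="ExampleDNAService"))

# Login: the user submits the 6-digit code from their authenticator app.
code = totp.now()  # simulated here by computing the current code locally
print("accepted" if totp.verify(code, valid_window=1) else "rejected")
```

The `valid_window=1` argument tolerates one 30-second time step of clock drift between the server and the user's device.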

In total, the breach affected 6.9 million of 23andMe's roughly 14 million customers, although only about 14,000 accounts were directly compromised: approximately 5.5 million individuals had their data scraped through the DNA Relatives feature, and another 1.4 million through the Family Tree feature.

This security incident led to the filing of multiple lawsuits against 23andMe. In response, the company updated its Terms of Use on November 30, making it more difficult for customers to join class-action lawsuits against it. The updated terms state that disputes should be resolved individually rather than through class actions or collective arbitration.

While 23andMe claims that these changes were made to streamline the arbitration process and enhance customer understanding, the incident underscores the importance of safeguarding personal genetic information.

Looking at the bigger picture, 23andMe suffered a significant breach that exposed sensitive customer data for months and prompted security measures such as password resets and mandatory two-factor authentication. Despite these efforts, the incident resulted in lawsuits and changes to the company's Terms of Use. It highlights the need for stronger security measures in the genomics and biotechnology industry, emphasising the importance of protecting users' personal information.


Data Breaches on the Rise: A Deep Dive into the AI-Driven Privacy Crisis

As technology advances at an unprecedented rate, artificial intelligence (AI) has become pervasive in many aspects of our lives. 

From generative AI that can produce content from a simple prompt to smart home devices that learn our habits and preferences, the technology has the potential to transform how we interact with machines. From a data privacy perspective, experts describe AI as a double-edged sword. 

On one hand, AI can be a powerful security tool for preventing attacks and unauthorized access; on the other, its heavy reliance on large silos of data leaves it highly susceptible to breaches and exploitation. AI privacy refers to the set of practices and concerns around how artificial intelligence systems collect, store, and ethically use personal information. 

It addresses the crucial need to protect individual data rights and maintain confidentiality as AI algorithms process and learn from vast amounts of personal information, data that has become an increasingly valuable commodity. 

Balancing technological innovation with individual privacy has become one of the defining challenges of the data era. Like any powerful tool, AI carries its own set of dangers, as bad actors can use the same capabilities to harm others. 

The recent AI Safety Summit in the UK raised several questions about artificial intelligence, including its impact on data privacy. Because the technology evolves in real time, it is difficult to predict where it is heading, and fear surrounding it continues to grow.

AI-based systems learn from and make predictions on personal data, raising concerns about how that data is collected, processed, and stored. Meanwhile, easy access to AI tools such as chatbots and image generators is making deepfake technology an increasingly common means of sidestepping data privacy protections. 

Notably, recent research from Trellix has found that AI, and the integration of large language models (LLMs) in particular, is rapidly transforming how bad actors use social engineering strategies to manipulate their victims.

The idea of intelligent machines is an old one, but artificial intelligence as we understand it today grew out of foundational work in the 1940s and was crystallised by the "Turing test" proposed in 1950. 

Artificial intelligence is growing rapidly thanks to several factors: advances in algorithmic design, the development of networked computing power, and the ability to store unprecedented amounts of data. These three factors work together to accelerate development in the field. 

Since the 1960s, many developments in robotics and artificial intelligence have been enabled by technological advances as well as a shift in how people think about intelligent machines. Many of us do not realize that AI technologies already have real-life applications in our daily lives. 

A curious characteristic of AI is that once a technique can be implemented effectively, it ceases to be called artificial intelligence and becomes mainstream computing instead. The automated voice that greets you when you call, or the movie recommendation based on your preferences, are everyday examples of mainstream AI technology. 

Systems such as speech recognition, natural language processing, and predictive analytics are already an integral part of our lives, yet we rarely remember that they are built on AI techniques.

The ease with which AI simplifies complex technical concepts has revolutionized the way people work today. Nonetheless, given the technology's dual nature, organizations should proceed with caution and implement such systems with great care. 

Recently, cybercriminals have been using AI tricks and techniques to pose as officials and to fabricate data that sows confusion; the recent attacks on Booking.com are one example. 

To shield themselves from potential AI-driven attacks, businesses are advised to take a proactive stance in a dynamic cybersecurity landscape, which means investing in robust defences against sophisticated threats. By combining the right technologies, skilled personnel, and effective tactics, Security Operations (SecOps) teams can position themselves to mitigate the cyber threats facing their organizations.

Enhanced Security Alert: Setting Up Stolen Device Protection on iOS 17.3

Apple has released iOS 17.3, the latest version of its iPhone operating system. The update includes several important new features, among them Stolen Device Protection, which gives users additional security if their phone is stolen. 

This is one of the most important features iPhone users can enable: once turned on, it protects them without any further effort. If a stolen iPhone has Stolen Device Protection enabled, the device restricts certain settings changes whenever it is away from familiar locations such as home or work, making life much harder for a thief. 

Even after unlocking the phone, a thief who wants to change those settings must first authenticate using Face ID or Touch ID. Without the owner's biometrics, modifying protected settings is therefore near impossible. 

When enabled, Stolen Device Protection adds extra security steps to a range of sensitive actions. Accessing items such as stored credit card information or saved account passwords requires biometric authentication (Face ID or Touch ID); a passcode alone is no longer enough. If a phone is lost, only its owner can retrieve these items, even if someone else knows the passcode.

The second layer is a security delay: for critical actions such as changing the Apple ID password, the device imposes a one-hour wait and then requires a second biometric authentication. This gives the owner far more time to mark the device as lost or erase it remotely before their data falls into the wrong hands. These extra measures apply to specific features and actions only when the iPhone is away from its recognized locations. 
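
To make the mechanics concrete, here is a toy model of such a security-delay gate. It is purely illustrative, not Apple's implementation; the class name, states, and one-hour constant simply mirror the behaviour described above:

```python
# Toy model of a "security delay" gate for critical actions (illustrative only).
import time

DELAY_SECONDS = 60 * 60  # one hour

class SecurityDelayGate:
    def __init__(self) -> None:
        # action -> monotonic time when the first biometric check passed
        self.pending: dict[str, float] = {}

    def request(self, action: str, biometric_ok: bool, in_familiar_location: bool) -> str:
        if in_familiar_location:
            return "allowed"   # the delay only applies away from home or work
        if not biometric_ok:
            return "denied"    # a passcode alone is never enough
        started = self.pending.get(action)
        now = time.monotonic()
        if started is None:
            self.pending[action] = now
            return "wait"      # first check passed; the one-hour delay begins
        if now - started < DELAY_SECONDS:
            return "wait"      # still inside the delay window
        del self.pending[action]
        return "allowed"       # delay elapsed; second biometric check passed

gate = SecurityDelayGate()
print(gate.request("change_apple_id_password", biometric_ok=True, in_familiar_location=False))  # "wait"
```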

This additional layer ensures that key changes to accounts or to the device itself remain out of reach even if a thief learns the passcode: after unlocking the stolen device, the thief would still need to pass Face ID or Touch ID to change protected settings. 

Replicating the owner's biometrics is very difficult, so a stolen passcode alone no longer suffices. Beyond limiting what information a thief can access, Stolen Device Protection also requires biometric authentication, such as Face ID or Touch ID, to view saved passwords or to make changes to an Apple savings account. 

Even with an unlocked iPhone in hand, thieves can no longer use it to drain the owner's money or open an Apple credit card in the owner's name. The changes appear to respond to reports of iPhone owners having their devices snatched by thieves who had first watched them enter their passcodes.

Once an unauthorized person gains access to an iPhone, they can steal money, open credit card accounts, and much more. Thieves can also lock victims out of their Apple accounts entirely, making it very difficult to track or disable the stolen phone through Apple's Find My feature. 

Victims are sometimes unable to access the photos and files saved in their iCloud accounts. The new feature makes it harder for criminals to use stolen iPhones to upend victims' lives and reputations. It may occasionally be inconvenient, but users should still turn it on. 

To enable Stolen Device Protection after installing iOS 17.3, open Settings and choose Face ID & Passcode. Scroll down to the Stolen Device Protection section and tap to turn the feature on.

Google to put Disclaimer on How its Chrome Incognito Mode Does ‘Nothing’


The description of Chrome’s Incognito mode is set to change to acknowledge that Google still tracks users of the browser. Users will be cautioned that websites can collect personal data about them.

In practice, this means the only people kept from knowing what a user browses in Incognito are the family members and friends who share the same device. 

Chrome Incognito Mode is Almost Useless

At heart, Google is not merely a software developer. It is a business driven by advertising, which requires it to collect information about its users and their preferences in order to sell them targeted ads. 

Unfortunately, users cannot escape this surveillance simply by switching to Incognito. Google has agreed to settle a class-action lawsuit, reportedly valued at $5 billion, that accused the company of misleading customers about the privacy protections Incognito offers. It is now changing the mode's description to make clear that it does not really protect the user’s privacy. 

A preview of the updated wording is already available in Chrome Canary. According to MSPowerUser, that version of Chrome displays the following disclaimer when the user goes Incognito:

"You’ve gone Incognito[…]Others who use this device won’t see your activity, so you can browse more privately. This won’t change how data is collected by websites you visit and the services they use, including Google."

(The final sentence, beginning "This won’t change", is the new addition to the disclaimer.)

Tips for More Private Browsing 

Chrome remains one of the most popular browsers, even among Mac users who could use Safari instead (privacy is just one reason Apple fans should consider the switch). Still, there are certain websites users would rather not see added to the Google profile that already holds the rest of their private information. For those, Safari Private Browsing is recommended, since Apple does not use Safari to track its users (or so it claims). 

Even better, use DuckDuckGo when you want to drop off the tracking grid. This privacy-focused search engine and browser won't monitor or save its users' searches; in fact, its entire purpose is to protect users' online privacy.