
Preserving Email Privacy: How to Block Hidden Read Receipts and Enhance Security


Disabling Read Receipts: Taking Control of Your Email Privacy

In today's fast-paced tech-dominated world, the pressure to respond to emails and messages immediately can be overwhelming. But what if you want to reclaim your time and manage it on your terms? One way to do that is by ensuring your emails are more private, and a key step in achieving this is to disable read receipts.

Tech expert Jon Morgan, CEO of Ventures Smarter, explains that blocking hidden read receipts can be a crucial step in preserving your email activity's privacy and preventing others from knowing whether you've read their messages or not. He provides a simple guide to help you achieve this:

The first step is to disable read receipts in your email client or service. While the process may vary depending on the email platform you use, you can usually find this option in the settings or preferences section. Look for a setting related to read receipts or message tracking, and disable it. By doing so, your email client won't send read receipts to the sender, allowing you to maintain your privacy and respond at your own pace.
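For context on what that setting does: most read receipts are requested through a hidden header, Disposition-Notification-To (RFC 8098), and a client that simply ignores that header never sends a receipt. Below is a minimal, client-agnostic sketch in Python showing how such a request can be spotted in a message; the addresses are placeholders:

    # Sketch: spot (and ignore) a read-receipt request in an email.
    # Receipts are requested via the Disposition-Notification-To header
    # (RFC 8098); a client that never answers it never reports back.
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "sender@example.com"          # placeholder address
    msg["To"] = "you@example.com"               # placeholder address
    msg["Subject"] = "Quick question"
    msg["Disposition-Notification-To"] = "sender@example.com"
    msg.set_content("Did you get my last note?")

    if msg.get("Disposition-Notification-To"):
        # A privacy-respecting client asks before replying with a receipt,
        # or simply drops the request on the floor.
        print("Read receipt requested by:", msg["Disposition-Notification-To"])
    else:
        print("No read receipt requested.")

Disabling read receipts in your client amounts to choosing the "drop it" branch automatically.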

Reviewing Privacy Settings: Enhancing Email Security and Anonymity

Disabling read receipts is just the beginning. To bolster your email privacy, it's important to review the privacy settings of your email account. Many email services offer various privacy options that can further protect your communication from prying eyes. Jon Morgan advises paying attention to features such as blocking external images or preventing remote content from loading automatically.

By enabling these settings, you can prevent senders from receiving notifications when you open their emails or download images. This step adds an extra layer of confidentiality to your communication and reduces the risk of unintentionally revealing your activity to the sender. Take the time to explore your email service's privacy options and customize them according to your needs.

Using Email Clients with Advanced Privacy Features: Safeguarding Your Communication

In your quest for enhanced email privacy, it's worth considering using an email client or application that prioritizes privacy and security. Certain email clients offer advanced features like encrypted messaging, blocking read receipts, and additional privacy controls. Making the switch to such a client can significantly enhance your email security and provide you with more control over your personal information.

Before choosing an email client, Jon Morgan recommends conducting thorough research to find one that aligns with your specific privacy requirements and preferences. Look for a client that not only offers robust privacy features but also aligns with your desired user experience. By selecting a privacy-focused email client, you can take another step towards safeguarding your communication.

Offline Reading and Other Privacy Measures: Ensuring Complete Email Confidentiality

If you truly want to ensure complete privacy in your email communication, Jon Morgan suggests reading your emails offline, without an internet connection. By disconnecting from the internet while reading your messages, you eliminate the risk of triggering read receipts or tracking requests that could be sent back to the sender. This step guarantees that your email activity remains entirely private and allows you to read and respond to messages on your terms.

Disabling remote content loading in your email client's settings adds an extra layer of protection. By default, many email clients automatically load remote content, such as images, when you open an email. However, this feature can be exploited to track whether you've read the message. To counter this, disable remote content loading in your email client. This ensures that the sender won't receive any notifications when you open their email or load external images, further preserving your privacy.
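To make the mechanics concrete: "remote content" is usually just an image tag whose src points at the sender's server, and fetching it is what reveals that you opened the message. Here is a rough, simplified sketch of what a "block remote content" setting does; real clients use a proper HTML parser rather than a regex:

    # Sketch: neutralize remote images (the classic tracking-pixel vector)
    # in an HTML email body before rendering it, so nothing is fetched.
    import re

    html_body = (
        '<p>Hello!</p>'
        '<img src="https://tracker.example.com/pixel.gif?id=123" width="1" height="1">'
    )

    # Point every remote src at a harmless placeholder.
    safe_body = re.sub(
        r'src\s*=\s*["\']https?://[^"\']*["\']',
        'src="about:blank"',
        html_body,
        flags=re.IGNORECASE,
    )

    print(safe_body)  # the tracking pixel is never requested from the network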

For an added level of security, Jon Morgan suggests considering the use of a virtual private network (VPN). A VPN encrypts your internet connection, making it difficult for anyone to track your online activities, including your email interactions. By utilizing a VPN, you can protect your privacy and prevent tracking attempts, thus safeguarding your email communication.

Implementing these privacy measures gives you back control over your time and allows you to manage your emails without feeling overwhelmed or rushed. By disabling read receipts, adjusting privacy settings, using email clients with advanced features, and considering offline reading and VPN usage, you can enjoy a heightened level of email security and privacy while navigating the digital landscape.

Safeguarding Your Work: What Not to Share with ChatGPT

ChatGPT, a popular AI language model developed by OpenAI, has gained widespread usage in various industries for its conversational capabilities. However, it is essential for users to be cautious about the information they share with AI models like ChatGPT, particularly when using it for work-related purposes. This article explores the potential risks and considerations for users when sharing sensitive or confidential information with ChatGPT in professional settings.
Potential Risks and Concerns:
  1. Data Privacy and Security: When sharing information with ChatGPT, there is a risk that sensitive data could be compromised or accessed by unauthorized individuals. While OpenAI takes measures to secure user data, it is important to be mindful of the potential vulnerabilities that exist.
  2. Confidentiality Breach: ChatGPT is an AI model trained on a vast amount of data, and there is a possibility that it may generate responses that unintentionally disclose sensitive or confidential information. This can pose a significant risk, especially when discussing proprietary information, trade secrets, or confidential client data.
  3. Compliance and Legal Considerations: Different industries and jurisdictions have specific regulations regarding data privacy and protection. Sharing certain types of information with ChatGPT may potentially violate these regulations, leading to legal and compliance issues.

Best Practices for Using ChatGPT in a Work Environment:

  1. Avoid Sharing Proprietary Information: Refrain from discussing or sharing trade secrets, confidential business strategies, or proprietary data with ChatGPT. It is important to maintain a clear boundary between sensitive company information and AI models.
  2. Protect Personally Identifiable Information (PII): Be cautious when sharing personal information, such as Social Security numbers, addresses, or financial details, as these can be targeted by malicious actors or result in privacy breaches (a minimal pre-send check is sketched after this list).
  3. Verify the Purpose and Security of Conversations: If using a third-party platform or integration to access ChatGPT, ensure that the platform has adequate security measures in place. Verify that the conversations and data shared are stored securely and are not accessible to unauthorized parties.
  4. Be Mindful of Compliance Requirements: Understand and adhere to industry-specific regulations and compliance standards, such as GDPR or HIPAA, when sharing any data through ChatGPT. Stay informed about any updates or guidelines regarding the use of AI models in your particular industry.
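As a rough illustration of point 2, a lightweight pre-send check can catch obvious PII before a prompt ever leaves your machine. The patterns below are illustrative only; production systems use dedicated PII-detection tooling:

    # Sketch: flag obvious PII in a prompt before sending it to an AI service.
    # The regexes are deliberately simple and are not exhaustive.
    import re

    PII_PATTERNS = {
        "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    }

    def pii_found(prompt: str) -> list[str]:
        """Return the PII categories detected in the prompt."""
        return [label for label, rx in PII_PATTERNS.items() if rx.search(prompt)]

    prompt = "Draft a letter for John. His SSN is 123-45-6789."
    hits = pii_found(prompt)
    if hits:
        print("Blocked; prompt appears to contain:", ", ".join(hits))
    else:
        print("No obvious PII found.")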
While ChatGPT and similar AI language models offer valuable assistance, it is crucial to exercise caution and prudence when using them in professional settings. Users must prioritize data privacy, security, and compliance by refraining from sharing sensitive or confidential information that could potentially compromise their organizations. By adopting best practices and maintaining awareness of the risks involved, users can harness the benefits of AI models like ChatGPT while safeguarding their valuable information.

Promoting Trust in Facial Recognition: Principles for Biometric Vendors

Facial recognition technology has gained significant attention in recent years, with its applications ranging from security systems to unlocking smartphones. However, concerns about privacy, security, and potential misuse have also emerged, leading to a call for stronger regulation and ethical practices in the biometrics industry. To promote trust in facial recognition technology, biometric vendors should embrace three key principles that prioritize privacy, transparency, and accountability.
  1. Privacy Protection: Respecting individuals' privacy is crucial when deploying facial recognition technology. Biometric vendors should adopt privacy-centric practices, such as data minimization, ensuring that only necessary and relevant personal information is collected and stored. Clear consent mechanisms must be in place, enabling individuals to provide informed consent before their facial data is processed. Additionally, biometric vendors should implement strong security measures to safeguard collected data from unauthorized access or breaches.
  2. Transparent Algorithms and Processes: Transparency is essential to foster trust in facial recognition technology. Biometric vendors should disclose information about the algorithms used, ensuring they are fair, unbiased, and capable of accurately identifying individuals across diverse demographic groups. Openness regarding the data sources and training datasets is vital, enabling independent audits and evaluations to assess algorithm accuracy and potential biases; one simple form such an audit can take is sketched after this list. Transparency also extends to the purpose and scope of data collection, giving individuals a clear understanding of how their facial data is used.
  3. Accountability and Ethical Considerations: Biometric vendors must demonstrate accountability for their facial recognition technology. This involves establishing clear policies and guidelines for data handling, including retention periods and the secure deletion of data when no longer necessary. The implementation of appropriate governance frameworks and regular assessments can help ensure compliance with regulatory requirements, such as the General Data Protection Regulation (GDPR) in the European Union. Additionally, vendors should conduct thorough impact assessments to identify and mitigate potential risks associated with facial recognition technology.
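To make the audit idea in point 2 concrete: a single aggregate accuracy number can hide large gaps between demographic groups, so auditors typically report accuracy per group. A minimal sketch, using made-up placeholder records:

    # Sketch: per-group accuracy audit for a face-matching model.
    # The (group, was_prediction_correct) records are made-up placeholders.
    from collections import defaultdict

    results = [
        ("group_a", True), ("group_a", True), ("group_a", False),
        ("group_b", True), ("group_b", False), ("group_b", False),
    ]

    tally = defaultdict(lambda: [0, 0])  # group -> [correct, total]
    for group, correct in results:
        tally[group][0] += int(correct)
        tally[group][1] += 1

    for group, (correct, total) in sorted(tally.items()):
        print(f"{group}: {correct}/{total} correct ({correct / total:.0%})")

A model that scores well overall but poorly for one group would fail this kind of audit even though its aggregate accuracy looks fine.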
As facial recognition technology spreads, biometric vendors must address these concerns and foster trust in their goods and services. By embracing privacy protection, transparency, and accountability, vendors can help ease public unease around the technology. Adhering to these principles can not only increase public trust but also make it easier to create regulatory frameworks that balance innovation with the protection of individual rights. Ultimately, the moral and ethical standards upheld by the biometrics sector will strongly influence how facial recognition technology develops.

Convincing Phishing Pages are Now Possible With Phishing-as-a-Service

In several phishing campaigns since mid-2022, a previously unknown phishing-as-a-service (PaaS) offering named "Greatness" has been used as a backend component for various spam campaigns. Greatness bundles features found in some of the most advanced PaaS offerings, including MFA bypass, IP filtering, and integration with Telegram bots.

Phishing attacks are, at their core, social engineering attacks, and depending on who conducts them they can target a wide range of people. Some are generic spam or scam emails, for instance messages angling for access to PayPal accounts.

Phishing can also be an attack targeted at a particular individual. Attackers often tailor these emails to speak directly to you and include information only an acquaintance would have, typically gleaned from data the attacker has already compromised. Even a very cautious recipient will find it difficult to avoid becoming a victim of an email of this kind. Based on research conducted by PhishMe Research, over 97% of all fraudulent emails sent to consumers contain ransomware.

With phishing kits like Greatness readily available, threat actors, rookies and professionals alike, can now design convincing login pages that mimic the sign-in flow of various online services while bypassing the two-factor authentication protections those services offer.

These authentic-looking fake pages act as a proxy for the attacker, harvesting the credentials and time-based one-time passwords (TOTPs) that victims enter.

Although phishing can also be conducted through text messages, social media, and phone calls, the term is most commonly used for attacks that arrive via email. Phishing emails can reach thousands of users directly and disguise themselves among the myriad of benign emails that busy users receive every day. Successful attacks can install malicious code on systems (such as ransomware), sabotage systems, and steal intellectual property.

The focus of Greatness is, for now, limited to Microsoft 365 phishing pages: its attachment and link builder lets affiliates create highly convincing decoy and login pages. These incorporate touches such as pre-filling the victim's email address and displaying the appropriate company logo and background image, derived from the target organization's real Microsoft 365 login page. This polish makes Greatness a particularly attractive option for phishing operations aimed at businesses.

A geographic analysis of targets across ongoing and past campaigns revealed that the majority of victims are companies based in the U.S., U.K., Australia, South Africa, and Canada, with manufacturing, health care, and technology the most frequently targeted sectors. The exact distribution of victims by sector and location varies slightly from campaign to campaign.

Affiliates who deploy and configure the phishing kit provided by Greatness can use its more advanced features without any technical knowledge. The service has two cooperating components: a phishing kit that serves the decoy pages, and an API that performs a "man-in-the-middle" attack against the legitimate login service.

In the latest UK government survey, the "Cyber Security Breaches Survey 2021," phishing remains the "most common attack vector" in attempted attacks on organizations' systems. Phishing persists because it keeps working: up to 32% of employees click on a phishing email link, while up to 8% are unaware of having received such an email at all.

The risk of a data breach or malware infection rises sharply when an individual clicks a link in a phishing email and then enters their login credentials for company resources. Even when the compromised employee has low access privileges, attackers can usually find ways to escalate them. Cybercriminals therefore put a great deal of effort into making their phishing lures as convincing as possible.

With the emergence of Greatness, Microsoft 365 users are at higher risk of being compromised, since its phishing pages appear more convincing and are more effective against businesses. According to data collected by Cisco Talos, approximately 90% of Greatness' affiliates target businesses. Across several campaigns, manufacturing received the most attention, followed by the healthcare and technology sectors.

The threat was first observed in mid-2022 and, according to VirusTotal, activity spiked in December 2022 and March 2023, when attachment samples increased considerably.

As part of the attack chain, malicious emails carry HTML attachments that run when opened. These typically contain obfuscated JavaScript that redirects the recipient to a landing page with their email address pre-filled, prompting them for their password and a two-factor authentication code.
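A defensive sketch of how such attachments can be triaged: the heuristics below look for the red flags just described (embedded scripts, auto-redirects, and common obfuscation helpers) and are illustrative rather than a substitute for a real mail-security gateway:

    # Sketch: triage an HTML email attachment for simple phishing red flags.
    import re

    def suspicious_html(html: str) -> list[str]:
        """Return the heuristics this attachment triggers."""
        flags = []
        if re.search(r"<script\b", html, re.IGNORECASE):
            flags.append("embedded JavaScript")
        if re.search(r'http-equiv\s*=\s*["\']refresh', html, re.IGNORECASE):
            flags.append("meta-refresh redirect")
        if re.search(r"\b(unescape|atob|fromCharCode)\s*\(", html):
            flags.append("common obfuscation primitives")
        return flags

    attachment = '<html><script>window.location=atob("aHR0cHM6Ly9iYWQuZXhhbXBsZQ==");</script></html>'
    print(suspicious_html(attachment) or "no obvious red flags")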

The credentials entered are forwarded to the affiliate's Telegram channel, where they can be used to gain unauthorized access to the victims' accounts.

When a victim opens the HTML attachment, the web browser executes a short piece of JavaScript that connects to the attacker's server to fetch the HTML of the phishing page, which is then displayed in the same browser window. While this loads, the code shows a blurred background image with a spinning wheel, pretending that a document is being loaded.

The PaaS then connects to Microsoft 365 and impersonates the victim to log into their account. If the service detects that MFA is in use, it prompts the victim to authenticate with their chosen MFA method (e.g., an SMS code, a voice-call code, or a push notification).

Once the service receives the MFA code, it continues to impersonate the victim behind the scenes and completes the login, enabling it to collect the victim's authenticated session cookies. Affiliates then receive these through their Telegram channel or directly from the web panel, depending on which method they chose.

Working in conjunction with the API, the phishing kit thus mounts a "man-in-the-middle" attack: it asks the victim for information, passes it to the legitimate login page in real time, and logs everything through the API.

If the victim uses MFA (multi-factor authentication), the PaaS affiliate can steal not only the account's username and password but also its authenticated session cookies. Because authenticated sessions typically expire after a while, the Telegram bot notifies the attacker about valid cookies as soon as possible, so they can move quickly if the target looks interesting.

ChatGPT and Data Privacy Concerns: What You Need to Know

As artificial intelligence (AI) continues to advance, concerns about data privacy and security have become increasingly relevant. One of the latest AI systems to raise privacy concerns is ChatGPT, a language model based on the GPT-3.5 architecture developed by OpenAI. ChatGPT is designed to understand natural language and generate human-like responses, making it a popular tool for chatbots, virtual assistants, and other applications. However, as ChatGPT becomes more widely used, concerns about data privacy and security have been raised.

One of the main concerns about ChatGPT is that it may not be compliant with data privacy laws such as GDPR. In Italy, ChatGPT was temporarily banned in 2023 over concerns about data privacy. While the ban was later lifted, the incident raised questions about the potential risks of using ChatGPT. Wired reported that the ban was imposed because ChatGPT was not transparent enough about how it operates and stores data, and because it may not be compliant with GDPR.

Another concern is that ChatGPT may be vulnerable to cyber attacks. As with any system that stores and processes data, there is a risk that it could be hacked, putting sensitive information at risk. In addition, as ChatGPT becomes more advanced, there is a risk that it could be used for malicious purposes, such as creating convincing phishing scams or deepfakes.

ChatGPT also raises ethical concerns, particularly when it comes to the potential for bias and discrimination. As Brandeis University points out, language models like ChatGPT are only as good as the data they are trained on, and if that data is biased, the model will be biased as well. This can lead to unintended consequences, such as reinforcing existing stereotypes or perpetuating discrimination.

Despite these concerns, ChatGPT remains a popular and powerful tool for many applications. The BBC has reported on its use in chatbots that help people with mental health issues, and it has also been used in the legal and financial sectors. However, it is important for users to be aware of the potential risks and take steps to mitigate them.

While ChatGPT has the potential to revolutionize the way we interact with technology, it is essential to be aware of the potential risks and take steps to address them. This includes ensuring compliance with data privacy laws, taking steps to protect against cyber attacks, and being vigilant about potential biases and discrimination. By doing so, we can harness the power of ChatGPT while minimizing its potential risks.

U.S. Lawmakers Move to Ban TikTok From Sending U.S. Data to China

A spokesperson for TikTok issued a statement today responding to news that the U.S. Congress is working to advance legislation that would create another avenue for the US president to ban the popular video-sharing application from the country.

Earlier today, the US House Foreign Affairs Committee voted to pass the Deterring America's Technological Adversaries (DATA) Act, which would roll back 35-year-old protections that US sanctions law extends to informational and creative content. As currently drafted, the bill would require the president to impose sweeping sanctions on Chinese companies that transfer Americans' personal data to organizations or individuals in China or "subject to the influence of China."

Angel Mae Glutz, owner of the Coven tattoo studio, works in both fine art and tattooing. She promotes most of her business on social media platforms, including TikTok, which has helped bring in clients from all over the world.

The ongoing battle on Capitol Hill between China-based TikTok and Congress may end up being a distraction for entrepreneurs like Glutz who rely on social media to market their businesses. Earlier this year, the White House banned TikTok's use on government devices and lawmakers are now considering legislation that would limit foreign adversaries' use of communication platforms and technology. 

Recently, many U.S. allies have expressed concerns about the video-sharing platform, most recently warning staff to delete the app from their phones after it caused an uproar among European Union institutions. The Netherlands is considering following the lead of Germany and Canada.

According to CEO Shou Chew on Tuesday, TikTok now has 150 million monthly active users in the United States, a huge increase over the figures the platform reported in previous years, even as new calls are made to ban it in the country.

How much danger TikTok poses to national security depends largely on the degree to which the Chinese government can exercise influence over the app or its parent company. Under Chinese national security law, companies within China's jurisdiction must assist with a wide range of security activities. This is a serious issue, since the public has little or no means of verifying whether that leverage has actually been used.

A violent border clash between India and China in 2020 prompted India to ban TikTok, abruptly cutting off over 200 million Indian users. TikTok has not returned to India since.

The United States, Canada, and the United Kingdom, among others, have recently restricted TikTok on official government devices, though they did not ban the app on personal devices. Last year, TikTok was embroiled in a massive data scandal when it was revealed that several employees had accessed users' data, including journalists', as part of TikTok's effort to track down and crack down on media leaks.

According to the company's statement, those employees were terminated. Meanwhile, the number of US proposals to ban TikTok from the country completely has risen sharply, and other lawmakers have proposed mandating that ByteDance sell the video-sharing platform instead.

Protecting Your Privacy on ChatGPT: How to Change Your Settings

OpenAI's ChatGPT is an advanced AI language model that has been trained on vast amounts of text data from the internet. However, recent concerns have arisen regarding data privacy and the use of personal data by AI models like ChatGPT. As more people become aware of the potential risks, there has been a growing demand for more control over data privacy. 

In response to these concerns, OpenAI has recently announced new ways to manage your data in ChatGPT. These changes aim to give users more control over how their data is used by the AI model. However, it is important to take action immediately to protect your data and privacy.

According to a recent article on BGR, users can take the following steps to prevent their data from training OpenAI:
  1. Go to the ChatGPT settings page.
  2. Scroll down to the 'Data' section.
  3. Click on 'Delete all my data.'
By deleting your data, you prevent OpenAI from using it to train ChatGPT. It is important to note that this action will not delete any messages you have sent or received, only the data used to train the AI model.

In addition to this, TechCrunch has also provided some useful advice to protect your data from ChatGPT. They recommend turning off the 'Training' feature, which allows ChatGPT to continue training on new data even after you have deleted your old data.

OpenAI has also introduced new features that allow users to choose how their data is used. For example, users can choose to opt out of certain types of data collection or only allow their data to be used for specific purposes.

It is crucial to be aware of the risks associated with AI language models and take necessary measures to protect your data privacy. By following the steps mentioned above, you can ensure that your data is not being used to train ChatGPT without your consent.

FTC Proposes Ban on Meta Profiting Off Children’s Data

The Federal Trade Commission (FTC) has accused Facebook of violating its 2019 privacy agreement by allowing advertisers to target children with ads based on their activity on other apps and websites. The FTC has proposed a ban on Meta from profiting off children's data and a blanket prohibition on any company monetizing the data of children aged under 13.

According to the FTC, Facebook’s Messenger Kids app, which is aimed at children under 13, was also used to gather data on children's activity that was used for advertising purposes. The Messenger Kids app is designed to allow children to communicate with friends and family in a safe and controlled environment, but the FTC alleges that Facebook failed to adequately protect children's data and privacy.

The proposed ban would prevent Meta from using children's data to target ads or sharing such data with third-party advertisers. The FTC also suggested that the company should provide parents with greater control over the data that is collected about their children.

Facebook has responded to the FTC's allegations, stating that it has taken significant steps to protect children's privacy, including requiring parental consent before children can use the Messenger Kids app. The company has also stated that it will continue to work with the FTC to resolve any concerns and will take any necessary steps to comply with the law.

The proposed ban on profiting off children's data is part of a wider crackdown by regulators on big tech companies and their data practices. The FTC has also proposed new rules that would require companies to obtain explicit consent from consumers before collecting or sharing their personal information.

In addition to the FTC's proposed ban, lawmakers in the US have also proposed new legislation that would strengthen privacy protections for children online. The bill, known as the Children's Online Privacy Protection Modernization Act, would update the Children's Online Privacy Protection Act (COPPA) to reflect changes in technology and the way children use the internet.

The proposed legislation would require companies to obtain parental consent before collecting any personal information from children under 16, and would also establish a new agency to oversee online privacy protections for children.

The proposed ban on profiting off children's data, along with the proposed legislation, highlights the growing concern among lawmakers and regulators over the use of personal data, particularly when it comes to vulnerable groups such as children. While companies may argue that they are taking steps to protect privacy, regulators are increasingly taking a tougher stance and pushing for more stringent rules to ensure that individuals' data is properly safeguarded.

ChatGPT Privacy Concerns are Addressed by PrivateGPT

Specificity and clarity are the two key ingredients of a successful ChatGPT prompt. Your prompt needs to be specific and clear to draw the most effective response from the model. Here are some tips for creating effective, memorable prompts:

An effective prompt conveys your message in a complete sentence that identifies what you want. To avoid vague and ambiguous responses, steer clear of fragments and incomplete sentences.

The more specifically you describe what you're looking for, the better your chances of getting a response that matches it. Avoid words like "something" or "anything" in your prompts; being specific about what you want is the most efficient way to get it.

Frame your request so that ChatGPT is cast as the expert in the field you are seeking advice on. Framed this way, ChatGPT will understand your request much better and provide helpful, relevant responses.

The ChatGPT model released by OpenAI appears to be a game-changer for the AI chatbot industry and for business in general.

PrivateGPT sits in the middle of the chat process, stripping personally identifiable information from user prompts before they are delivered to ChatGPT; this includes health information, credit card data, contact information, dates of birth, and Social Security numbers. To keep the experience seamless for users, PrivateGPT then works with ChatGPT to re-populate the PII within the answer, according to a statement released this week by Private AI, the creator of PrivateGPT.

It is worth remembering, however, that ChatGPT opened a new era for chatbots: it answered questions, generated software code, and fixed programming bugs, demonstrating the power of artificial intelligence technology.

Use cases and benefits will be numerous, but the technology also brings many challenges and risks related to privacy and data security, particularly under the GDPR in the EU.

Data privacy company Private AI describes PrivateGPT as a "privacy layer" that sits in front of large language models (LLMs) like OpenAI's ChatGPT. The updated version automatically redacts sensitive information and personally identifiable information (PII) that users give out while communicating with the AI.

Using its proprietary AI system, Private AI strips more than 50 types of PII from user prompts before they are submitted to ChatGPT. The removed PII is replaced with placeholder data, allowing users to query the LLM without revealing sensitive personal information to OpenAI.
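The underlying redact-then-restore pattern is straightforward to picture. The sketch below shows the general idea with email addresses only; it is an illustration of the pattern, not Private AI's actual implementation:

    # Sketch: the redact-then-restore pattern a privacy layer uses.
    # PII is swapped for placeholders before the prompt reaches the LLM,
    # then swapped back into the model's answer.
    import re

    EMAIL_RX = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")

    def redact(prompt: str) -> tuple[str, dict]:
        """Replace email addresses with tokens; return text plus mapping."""
        mapping = {}
        def swap(match):
            token = f"[EMAIL_{len(mapping)}]"
            mapping[token] = match.group(0)
            return token
        return EMAIL_RX.sub(swap, prompt), mapping

    def restore(text: str, mapping: dict) -> str:
        """Put the original values back into the model's answer."""
        for token, value in mapping.items():
            text = text.replace(token, value)
        return text

    clean, mapping = redact("Reply to jane.doe@example.com about the invoice.")
    print(clean)                     # the LLM only ever sees [EMAIL_0]
    answer = "Dear [EMAIL_0], thank you for the invoice."
    print(restore(answer, mapping))  # the user sees the real address again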

Your Details are Hidden on this Secret ID on Your Phone

The number of parties who want to exploit your private information is staggering, from social media platforms to email providers, and it is not only online stores you need to remember but personal services as well.

Many online businesses rely heavily on your information while paying little attention to customer privacy. Most advertisers and marketers may not know you by name, but a Mobile Advertising ID (MAID) is assigned to your device, and a history of your activities is gathered against it.

From this tiny bit of information, your location, your shopping history, and your recent online searches can be accessed. Until recently, there was very little you could do to keep your MAID out of marketing campaigns. As a result of Apple's decision, iOS users can now choose which apps are allowed to target them.

Criminals, however, stand to profit far more if they can match the ID to an individual. On its own, a MAID has limited power to defraud you: most companies or advertising agencies would not be able to find out who a MAID belongs to while it remains unattached to a name.

These collections span numerous data sets and, in principle, should contain no personally identifiable information (PII). Yet Vice's Motherboard wrote about one company that offers to track MAIDs together with the PII associated with each of them.

Everyday mobile phone use therefore carries considerable privacy risk, which is a major concern. According to the report, the company can link your MAID to the following information:
  • Full name
  • Physical address
  • Phone number
  • Email address
  • IP address
The revelation that data brokers can tie advertising IDs to mobile phone numbers should raise a red flag for everyone.

Automated Bots Pose Growing Threat To Businesses

The capability to detect, manage, and mitigate bot-based requests has become of utmost importance as cyber attacks become more automated. Edgio, a company created by the merger of Limelight Networks, Yahoo Edgecast, and Layer0, has unveiled its own bot management service in response to this expanding threat. The service focuses on leveraging machine learning and the company's web security capacity to enable granular policy controls, competing with services from web application firewall (WAF) providers and Internet infrastructure providers.

Bot management is not just about preventing automated attacks, but also about identifying and monitoring good bots such as search crawlers and performance monitoring services. According to Richard Yew, senior director of product management for security at Edgio, "You definitely need the security solution but you also want visibility to be able to monitor good bot traffic." The stakes are rising: in 2022, the number of application and API attacks more than doubled, growing by 137%, according to Internet infrastructure firm Akamai.

The impact of bots on businesses can be seen in areas such as inventory-hoarding attacks and ad fraud. As a result, bot management should involve all aspects of an organization, not just security. Sandy Carielli, principal analyst at Forrester Research, noted that "bot management is not just about security being the decision-makers. If you're dealing with a lot of inventory-hoarding attacks, your e-commerce team is going to want a say in it. If you're dealing with a lot of ad fraud, your marketing team will want to be in the room."

Bot management systems typically identify the source of Web or API requests and then apply policies that determine what to allow, what to deny, and which requests represent potentially interesting events or anomalies. Some 42% of all Internet traffic now comes from automated systems rather than humans, according to data from Imperva. To deal with this, Edgio inspects traffic at the edge of the network and only allows "clean" traffic through, helping stop attacks before they can impact other parts of the network. Content delivery networks (CDNs) such as Akamai, Cloudflare, and Fastly have adopted bot management features as well.
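At its core, the allow/deny decision described above is a policy lookup keyed on how the traffic presents itself. A toy sketch follows; real products rely on ML-driven fingerprinting and verified bot lists rather than user-agent strings and request rates alone:

    # Sketch: the allow / deny / flag decision at the heart of bot management.
    KNOWN_GOOD_BOTS = ("Googlebot", "bingbot")  # crawlers to allow but monitor

    def classify(user_agent: str, requests_per_minute: int) -> str:
        if any(bot in user_agent for bot in KNOWN_GOOD_BOTS):
            return "allow"   # good bots are monitored, not blocked
        if requests_per_minute > 120:
            return "deny"    # far beyond plausible human browsing speed
        if "python-requests" in user_agent.lower() or "curl" in user_agent.lower():
            return "flag"    # automation framework: challenge or log it
        return "allow"

    print(classify("Mozilla/5.0 (compatible; Googlebot/2.1)", 300))  # allow
    print(classify("python-requests/2.31", 40))                      # flag
    print(classify("Mozilla/5.0 (Windows NT 10.0)", 500))            # deny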

Bot management is clearly becoming a more crucial issue for enterprises as automated attacks increase in frequency. Organizations require all-encompassing solutions to address this issue, involving teams from marketing, security, and e-commerce. Employing such technologies enables organizations to safeguard their resources from dangerous bot attacks while keeping track of reputable good bots. 


Vehicles Stolen Using High-Tech Methods by Criminals

Over the past 20 years, the number of cars stolen in the United States has fallen by half. However, authorities are now seeing a growing number of break-ins carried out with high-tech techniques.


Local intelligence agencies have found that criminals are now using sophisticated technology to target high-end luxury cars equipped with keyless entry and remote-start features.

The group identified three main methods that criminals across the nation use to gain access to and steal vehicles with these features.

Two years ago, Michael Shin of Los Angeles captured video of a man opening his car while holding nothing but a backpack; as Shin explained, the man had no conventional break-in tools in his possession. An NICB official affirmed that the NICB tested 35 vehicles using this type of device, and the team was able to open, start, and drive off 18 of the test cars without any problem.

Morris said professional criminals are believed to have figured out how to build their own versions of the devices the NICB used for its break-in tests, which were supplied by a company that works closely with law enforcement on security testing.

With criminals discovering how to hack into vehicle security systems and defeat them, car owners must be vigilant to protect their vehicles. As Morris pointed out in his statement, this is a serious reminder of the risks associated with today's cars that function as essentially "computers on wheels." 


This type of break-in puzzled authorities for several years, but insurance investigators now believe that criminals are exploiting key fobs, the little authentication devices used with newer "keyless" models, to unlock and start cars remotely at the push of a button.

Tests conducted by the research and development team found that thieves are exploiting vehicles' computer-controlled systems in highly sophisticated cyber-attacks.

These attacks combine CAN attacks, fob relay attacks, and key cloning.

  • In a CAN attack, high-tech electronic equipment is used to gain entry to the vehicle's Controller Area Network (CAN) and access the computer system, starting the engine through remote access software.
  • In fob relaying, advanced receivers and transmitters remotely read the vehicle's security key, allowing an attacker to unlock and start the vehicle even while the fob is in the owner's possession.
  • In the third method, thieves force entry into the vehicle, use sophisticated techniques and equipment to disable its alarm system, and then clone and steal the vehicle's security key.

ChatGPT: A Threat to Privacy?

Despite being a powerful and innovative AI chatbot that has quickly drawn widespread attention, ChatGPT has some serious pitfalls hidden behind its impressive features.

Ask it any question and it can provide an answer that sounds like it was written by a human: it has been trained on massive amounts of data from across the net to build up the knowledge and writing skill required.

There is no denying that time is money, and chatbots such as ChatGPT and Bing Chat have become invaluable tools: they write code, analyze long emails, and even find patterns in large data sets with thousands of fields.

The chatbot has astonished users with its features and ranks among OpenAI's most brilliant inventions. To use ChatGPT, you create an account on the OpenAI website; the tool is presented as safe, reliable, and extremely easy to use.

However, many users have questions about how much access the chatbot has to their data. OpenAI saves ChatGPT conversations for future analysis, and according to the company's FAQ page, its employees can selectively review chats to ensure safety.

You should not assume that anything you say to ChatGPT will remain confidential or private. Indeed, OpenAI recently discovered a critical bug that caused a serious security issue.

OpenAI CEO Sam Altman stated that in a small percentage of cases, users could view the titles of other users' conversations. Altman said the bug, now fixed, resided in an open-source library, and that the company, which feels "terrible about this," will release a detailed report later.

The outage tracker Downdetector shows that the platform suffered a brief outage before the company disabled chat history; according to Downdetector's outage map, some users could not access the AI-powered chatbot at midnight on March 23.

ChatGPT is a large language model designed to synthesize natural-sounding human language. Using it works like a conversation with a person: it responds to what you say and can correct itself when it gets something wrong, just as a person would.

After a short period, ChatGPT automatically deletes the session logs it has saved.

When you create an account with ChatGPT, the service collects personal information such as your name, email address, telephone number, and payment information.

Whenever a user registers with ChatGPT, the data associated with that account is saved. The company encrypts this data to keep it safe and retains it only as long as business or legal requirements demand.

The ChatGPT privacy policy notes, though, that encryption methods are not always completely secure, and users should bear this in mind when sharing personal information on a website like this.

OpenAI's FAQ advises users not to "share any sensitive information in your conversations," because OpenAI cannot delete specific prompts from your conversation history. Additionally, ChatGPT is not connected to the Internet, and because it cannot look information up directly, its results may sometimes be incorrect.

It has been a remarkable journey since ChatGPT launched last year: the AI-powered chatbot has grown rapidly and is one of the fastest-growing platforms out there.

Reports claim that ChatGPT had 13.2 million users in January, gains the service attributes to impressive performance, a simple interface, and free access. Those who want improved performance can subscribe for a monthly fee.

If you clear your ChatGPT data and delete your conversations, OpenAI will delete all of your ChatGPT data and permanently remove it from its servers.

This process is likely to take between one and two weeks, though it can take longer. If you would rather not log in or visit the help section of the website, you can also request account deletion by emailing deletion@openai.com.

The Montana Legislature Banned TikTok

A bill introduced in Montana would prevent apps like TikTok from being listed for download on app stores such as Google Play and Apple's App Store. The bill has been forwarded to Republican Governor Greg Gianforte for signature.

TikTok, owned by Chinese investors, continues to be the target of fierce battles. On Friday, Montana lawmakers voted to ban the wildly popular short-form video app from the state.

Reuters writes that the bill would prevent applications like TikTok from being listed on app stores such as Google Play or Apple's App Store in Montana. The Montana House of Representatives approved the bill, SB419, in a 54-43 vote. If Gianforte signs it, the bill will come into effect in January, though the legislation is likely to face substantial legal challenges.

However, nothing in the bill makes it illegal for people who already use the app to keep using it. The bill's original version would have forced internet providers to block TikTok, but that language was removed and is not part of the amended bill.

With this legislation, a state government has for the first time moved to restrict TikTok over perceived security concerns. A national ban also seems to be on the cards, with some federal lawmakers calling for an end to the app.

The bill outlines daily penalties that could be imposed on TikTok, and on app stores, for violating the law; users who access TikTok as part of their routine would not be penalized for doing so.

Following allegations that TikTok's Chinese owner, ByteDance, puts US users' personal information at risk, the app has come under significant scrutiny from US legislators in recent months. Congressmen at the federal and state level have raised alarms about American data being shared with the Chinese government. Last month, a congressional committee grilled TikTok CEO Shou Zi Chew on the concerns widely aired on social media.

Numerous claims are being made against TikTok, including accusations of data theft, data mining, piracy, and excessive data collection, all of which TikTok has repeatedly denied. To win credibility with US legislators, TikTok has poured more than $1 billion into arranging for American users' data to be archived exclusively on Oracle's servers.

TikTok spokesperson Brooke Oberwetter said the bill's own champions have admitted they have no practical plan for operationalizing this attempt to censor American voices, and that the bill's constitutionality will be decided by the courts. She added that TikTok will keep fighting for users and creators in Montana whose livelihoods and First Amendment rights are threatened by this government overreach.

Currently, the bill is being sent to the governor to be signed into law, and Republican governor Greg Gianforte is highly likely to sign it: he previously banned TikTok from government devices in Montana. Other states have enacted similar executive orders banning the app on government-owned devices and networks.

The bill cites data safety concerns, surveillance by the Chinese government, and TikTok's role in drawing minors into "dangerous activities," including claims that minors were cooking chicken in NyQuil and climbing stacked milk crates. Critics of the app note that these activities were part of a set of challenges that had become popular.

Because of the links between TikTok's parent company, ByteDance, and Beijing, concern has been widely expressed that the Chinese government could access user data from TikTok.

In addition, critics worry that this kind of information could be used by Chinese intelligence agencies or propaganda campaigns for their benefit. There has been no public evidence that the Chinese government has accessed or used any data on TikTok's US users to influence them, though FBI Director Christopher Wray has cautioned that there might be few visible signs if it did happen.

To make TikTok safer and more sustainable, the US government has called on its Chinese owners to spin off TikTok. Through its Project Texas initiative, TikTok says it can address national security concerns by installing a "firewall" around US users' data.

Montana's legislation faces an uncertain future. TikTok is a member of NetChoice, an industry group that also counts other technology companies among its members. The group declared Friday that SB419 violates the US Constitution by trying to punish a party without a trial, a so-called "bill of attainder."

Other civil society organizations allege that SB419 violates Montanans' First Amendment rights to free expression and access to information. Earlier this week, the American Civil Liberties Union sent a letter to members of state legislatures arguing that government restrictions on freedom of speech must meet a high constitutional standard.

SB419, the letter argued, would strip Montanans of a platform where they speak out freely and exchange ideas daily; that is censorship.

According to the letter, if the bill becomes law it will set a dangerous precedent, handing government bodies excessive control over Montanans' access to the internet. Lynn Greenky, a First Amendment scholar and associate professor of Communication Studies at Syracuse University, adds that the legislation's references to "dangerous content" and "dangerous challenges" on TikTok raise an immediate "red flag" that will trigger a more searching review of the bill.

Bill sponsor Shelley Vance did not immediately respond to a request for comment, and a spokesperson for Gianforte likewise failed to respond immediately. If the law is enacted, the app ban will take effect as 2024 begins. Congressional concern about the app continues to rise over its Chinese ownership; as part of a warning issued last month, the Biden administration told TikTok's China-based parent company ByteDance to divest ownership of the service or face a federal ban.

Controversial Cybersecurity Practices of ICE

US Immigration and Customs Enforcement (ICE) have come under scrutiny for its questionable tactics in data collection that may have violated the privacy of individuals and organizations. Recently, ICE's use of custom summons to gather data from schools, clinics, and social media platforms has raised serious cybersecurity concerns.

According to a Wired report, ICE issued 1,509 custom summonses to a major search engine in 2020, seeking information on individuals and organizations involved in protests against ICE. While such summonses are legal, experts have criticized the lack of transparency and oversight in the process and the potential for data breaches and leaks.

ICE's data collection practices have also targeted schools and clinics, with reports suggesting that the agency has sought information on students' and patients' immigration status. These actions raise serious questions about the privacy rights of individuals and the ethics of using sensitive data for enforcement purposes.

The Intercept has also reported on ICE's use of social media surveillance, which raises concerns about the agency's ability to monitor individuals' online activities and potentially use that information against them. The lack of clear policies and oversight regarding ICE's data collection practices puts individuals and organizations at risk of having their data mishandled or misused.

As the use of data becomes more prevalent in law enforcement, it is essential to ensure that agencies like ICE are held accountable for their actions and that appropriate safeguards are put in place to protect the privacy and cybersecurity of individuals and organizations. One expert warned, "The more data you collect, the more potential for breaches, leaks, and mistakes."

ICE's use of custom summonses and other dubious data collection techniques puts privacy and cybersecurity seriously at risk. It is essential that these problems are addressed and that proper steps are taken to safeguard the rights of both organizations and individuals.

Clearview AI Scrapes 30 Billion Images Illicitly, Giving Them to Cops


Clearview's CEO has recently acknowledged that the company's notorious facial recognition database, used by law enforcement agencies across the nation, was apparently built in part from 30 billion photos the company illicitly scraped from Facebook and other social media users without their consent. Critics have dubbed the practice a "perpetual police line-up," one that includes even individuals who have done nothing wrong.

The company often boasts of its potential for identifying rioters involved in the January 6 attack on the Capitol, saving children from being abused or exploited, and assisting in the exoneration of those who have been falsely accused of crimes. Yet, critics cite two examples in Detroit and New Orleans where incorrect face recognition identifications led to unjustified arrests. 

Last month, company CEO Hoan Ton-That admitted in an interview with the BBC that Clearview utilized photos without users' knowledge to build its enormous database, which is promoted to law enforcement on its website as a tool "to bring justice to victims."

What Happens When Unauthorized Data is Scraped 

Privacy advocates and digital platforms have long criticized the technology for its intrusive aspects, with major social media giants like Facebook sending cease-and-desist letters to Clearview in 2020, accusing the company of violating their users’ privacy. 

"Clearview AI's actions invade people's privacy which is why we banned their founder from our services and sent them a legal demand to stop accessing any data, photos, or videos from our services," says a Meta spokesperson in an email Insider, following the revelation. 

The spokesperson added that Meta has since made "significant investments in technology and devotes substantial team resources to combating unauthorized scraping on Facebook products."

When unauthorized scraping is discovered, the company may take action “such as sending cease and desist letters, disabling accounts, filing lawsuits, or requesting assistance from hosting providers to protect user data,” the spokesperson said. 

Despite such platform policies, once a photo has been scraped by Clearview AI, a biometric face print is made and cross-referenced in the database, permanently linking the individual to their social media profile and other identifying information. People in the photos have little recourse for removing themselves from the database.

Searching Clearview’s database is one of many ways police agencies can use social media content to aid investigations, alongside requesting user data directly from the platforms. The use of Clearview AI and other facial recognition technologies by law enforcement is unmonitored in most states and is not subject to federal regulation, and some critics argue it should be banned outright.

iCloud Keychain Data and Passwords are at Risk From MacStealer Malware

Uptycs, a cybersecurity company, is warning that Mac computers are the latest targets of an updated information-stealing malware it discovered while searching for threats on the dark web.

Dubbed MacStealer, the malware harvests credentials from web browsers and cryptocurrency wallets, along with potentially sensitive files and the passwords stored in the victim's iCloud Keychain.

MacStealer is distributed as malware-as-a-service (MaaS): the developer sells pre-built payloads for $100, allowing customers to run their own campaigns and spread the malware to their victims.

The dark web, where the malware was found, remains a prime venue for cybercriminals to sell such tools, launch malware, and conduct illegal activity.

The Uptycs threat research team reported that the newly discovered malware can run on multiple versions of macOS, from Catalina (10.15) up to Apple's latest release at the time, Ventura (13.2).

The seller claims that the malware is still in beta testing and that no panels or builders are available; instead, buyers receive pre-built DMG payloads that can infect macOS Catalina, Big Sur, Monterey, and Ventura.

The threat actor uses the absence of a builder and panel to justify the low $100 price, while promising to release more advanced features as soon as possible.

MacStealer uses Telegram as a command-and-control (C2) platform to exfiltrate data. It primarily affects computers running macOS Catalina and later on CPUs built on the M1 or M2 architecture.

According to Uptycs researchers Shilpesh Trivedi and Pratik Jeware in their report on MacStealer, the tool steals documents, browser cookies, and login information from the victim's machine.

First advertised on online hacking forums at the beginning of the month for $100, the project is still far from finished; its authors are considering features that would capture data from Apple's Notes app and the Safari web browser.

How the Malware Works

The threat actors distribute MacStealer as an unsigned DMG file disguised as a legitimate application, which the victim is tricked into executing on macOS.

Once run, the file presents the victim with a fake but convincing password prompt, and entering credentials there gives the malware the access it needs to collect passwords from the compromised machine.
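Because the initial payload arrives as an unsigned DMG, one practical defense is to verify an app's signature before running anything from a freshly mounted disk image. The following is a minimal sketch in Python, using a hypothetical app path; it simply wraps Apple's standard codesign and spctl tools, both of which an unsigned payload like MacStealer's would fail.

```python
import subprocess

APP = "/Volumes/Example/Example.app"  # hypothetical path to an app inside a mounted DMG

def check(cmd, label):
    """Run a verification command and report whether it passed."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    status = "OK" if proc.returncode == 0 else "FAILED"
    detail = (proc.stdout + proc.stderr).strip()
    print(f"{label}: {status}\n{detail}\n")

# codesign checks the app's code signature; an unsigned app fails this.
check(["codesign", "--verify", "--deep", "--strict", "--verbose=2", APP],
      "code signature")

# spctl asks Gatekeeper whether it would allow the app to run at all.
check(["spctl", "--assess", "--type", "execute", "--verbose", APP],
      "Gatekeeper assessment")
```

If either check fails on something you just downloaded, the safest move is to eject the disk image without running anything inside it.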

Once it has collected the data described above, the malware compresses it into a ZIP file and sends it to a remote server, from which the threat actor can later retrieve it.

MacStealer also sends some basic information to a pre-configured Telegram channel, notifying the operator as soon as fresh stolen data is available so the ZIP file can be downloaded immediately.
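Since the notifications travel through Telegram's API, one rough, illustrative way to hunt for this kind of C2 activity is to look for unexpected processes holding connections to api.telegram.org. The sketch below is a starting point rather than a substitute for endpoint protection: it resolves Telegram's current API addresses and scans lsof output for them (run with sufficient privileges to see other users' processes).

```python
import socket
import subprocess

# Resolve the current IP addresses behind Telegram's Bot API endpoint.
telegram_ips = {info[4][0]
                for info in socket.getaddrinfo("api.telegram.org", 443)}

# lsof -i -nP lists open network connections with numeric addresses and ports.
connections = subprocess.run(["lsof", "-i", "-nP"],
                             capture_output=True, text=True).stdout

for line in connections.splitlines():
    if any(ip in line for ip in telegram_ips):
        print("process talking to the Telegram API:", line)
```

A hit is not proof of infection, since legitimate Telegram clients use the same endpoints, but an unfamiliar process on the list is worth investigating.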

What Can You Do to Protect Your Mac?

There are a few things you can do right now, beginning with opening the Settings app and checking whether your Mac is running the latest software update.

If an update is available, install it as soon as possible. Apple is constantly improving its security, so make sure all of your Apple devices stay up to date.
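For those who prefer the command line, the same check can be scripted with Apple's built-in softwareupdate tool; a minimal sketch:

```python
import subprocess

# List any software updates available for this Mac.
listing = subprocess.run(["softwareupdate", "--list"],
                         capture_output=True, text=True)
print(listing.stdout or listing.stderr)

# To download and install everything available (needs admin rights and
# may restart the machine), you would run:
#   sudo softwareupdate --install --all
```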

Using antivirus software adds another layer of protection, helping to block malware and potentially malicious links on the internet.

Stealer malware spreads through a variety of channels, including email attachments, bogus software downloads, and other social engineering techniques.

Keeping the computer's operating system and security software up to date is one of the best ways to mitigate such threats. Users should also avoid downloading files from unknown sources or clicking unfamiliar links.

"It becomes more important for data stored on Macs to be protected from attackers as Macs become more popular among leadership teams as well as development and design teams within organizations", SentinelOne researcher Phil Stokes said in a statement last week.

Clearview: Face Recognition Software Used by US Police

Clearview AI, a facial recognition company, has apparently conducted nearly a million searches on behalf of US police. Hoan Ton-That, Clearview's CEO, has revealed to the BBC that the firm has now amassed as many as 30 billion images from various platforms, including Facebook, taken without users' consent.

The company has repeatedly been fined millions of dollars in Europe and Australia for privacy violations. Critics, however, argue that police use of Clearview puts everyone into a "perpetual police line-up."

"Whenever they have a photo of a suspect, they will compare it to your face[…]It's far too invasive," says Matthew Guariglia from the Electronic Frontier Foundation. 

Police have not confirmed the figure of nearly a million searches. But in a rare disclosure to the BBC, Miami Police admitted to using the software for all types of crime.

How Does Clearview Work

Clearview’s system enables a law enforcement customer to upload a photo of a face and search for matches in a database of billions of stored images. It then provides links to where the corresponding images appear online, and it is regarded as one of the world's most potent facial recognition tools.
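Clearview's internal pipeline is proprietary, but the general technique behind systems like it, computing a numeric "face print" for each photo and ranking stored prints by their distance from the probe image, can be illustrated with the open-source face_recognition library. A minimal sketch with hypothetical file names:

```python
import face_recognition

# Build a tiny "database" of face prints (128-dimensional encodings)
# from previously collected photos. File names here are hypothetical.
database = {}
for path in ["person_a.jpg", "person_b.jpg"]:
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if encodings:                      # keep the first face found, if any
        database[path] = encodings[0]

# Encode the uploaded probe photo the same way.
probe_image = face_recognition.load_image_file("suspect.jpg")
probe = face_recognition.face_encodings(probe_image)[0]

# Rank stored prints by distance; smaller means more similar. The
# library's conventional match threshold is 0.6.
for path, encoding in database.items():
    distance = face_recognition.face_distance([encoding], probe)[0]
    verdict = "likely match" if distance < 0.6 else "no match"
    print(f"{path}: distance {distance:.3f} ({verdict})")
```

At Clearview's scale, the linear scan above would be replaced by an approximate nearest-neighbor index over billions of encodings, but the matching principle is the same.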

The firm is now banned from providing its services to most US companies after the American Civil Liberties Union (ACLU) accused Clearview AI of violating privacy laws. There is an exemption for police, however, with Mr. Ton-That saying that his software is used by hundreds of police forces across the US.

Yet US police do not routinely reveal whether they use the software, and several US cities, including Portland, San Francisco, and Seattle, have banned its use outright.

Police frequently portray the use of facial recognition technology to the public as being limited to serious or violent offenses. 

Moreover, in an interview about the effectiveness of Clearview, Miami Police admitted to having used the software for all types of crime, from murder to shoplifting. Assistant Chief of Police Armando Aguilar said his team runs the software around 450 times a year, and that it has helped solve murder cases.

Yet, critics claim that there are hardly any rules governing the use of facial recognition by police.

Pleading TikTok to "Think of the Children" Misses the Point

In nearly every congressional hearing on Big Tech, whether on privacy, monopoly, or, as in last week's TikTok hearing, national security, at least one lawmaker raises a concern along the lines of "But think of the kids!"

In the recent hearing, a number of officials, including New Jersey Democrat Frank Pallone, cited studies demonstrating that TikTok serves harmful material to children and teenagers. According to a study from the Center for Countering Digital Hate, the site can begin sending young users content about self-harm within 2.6 minutes of joining, and eating disorder content within eight minutes. The concern is compounded by TikTok's popularity among young users: according to a 2022 Pew Research survey, the app was used by 67 percent of the teens polled, second only to YouTube.

Callum Hood, research director at the Center for Countering Digital Hate, said in a press statement, “Without legally mandated security through design, transparency, and accountability, the algorithm will continue to put vulnerable users at risk.”

TikTok CEO Shou Zi Chew, though, noted that these are issues nearly all major social media platforms have faced in recent years. The concerns echo complaints made about Meta in the past, particularly in connection with Instagram.

Commentary on how harmful a platform might be to children often seems like an attention-grabbing tactic, invoking some of the most common worries American parents have. What kind of monster would not want to ensure that children are protected from exploitation and hazardous content? The attention paid to young users also presents one of the few open doors for bipartisan collaboration.

But only a day before Chew was scheduled to testify before Congress, gunshots forced students at Denver East High School to flee their classrooms. A pandemic-era program that provided free school meals to all children was phased out earlier this year in favor of an income-based system, which puts more obstacles in the way of the kids who need it most. Due in large part to entrenched economic inequality and a deteriorating social safety net, about one-third of children in the US live in poverty.

Children are affected by the lack of gun safety regulations and the underfunding of social and educational initiatives, but those concerns frequently end in legislative and policymaking impasses, and pleading with lawmakers to "think about the children" rarely has an impact. When it comes to Big Tech, the focus on "the kids" often oversimplifies and diverts attention from thornier issues: privacy, widespread data collection, the outsized power of certain companies over smaller competitors, and the transnational nature of extremist content and misinformation. Instead, we need to ask deeper questions. How long should companies be able to keep data? What should it be used for? Can private companies eager to cultivate the next generation of consumers ever be incentivized to set time limits or restrict content for young users? And how do our systems allow these harms in the first place?

There are measures that would bring concerns about children's well-being to light and protect them in practice, but they rarely find favor in Congress. Officials may fret about how TikTok in the US differs from its Chinese counterpart, Douyin, in the experience it offers young users, yet little has changed legislatively to address the online harms experienced by US children in the five years since the Tide Pod challenge, or even in the 18 months since Frances Haugen first testified before Congress, despite her frequent appearances at televised hearings.

On this front, Senators Edward J. Markey and Bill Cassidy have proposed a bipartisan bill, first introduced in 2021, that would prohibit internet companies from collecting data on users between the ages of 13 and 15 and would establish a youth marketing and privacy division at the Federal Trade Commission. The bill has yet to be voted on in the Senate.