Disney to Pay $10 Million Fine in FTC Settlement Over Child Data Collection on YouTube

Disney has agreed to pay a $10 million penalty to resolve allegations brought by the Federal Trade Commission (FTC) that it unlawfully collected personal data from young viewers on YouTube without securing parental consent. Under the Children’s Online Privacy Protection Act (COPPA), companies must obtain parental approval before gathering data from children under the age of 13.

The case, filed by the U.S. Department of Justice on behalf of the FTC, accused Disney Worldwide Services Inc. and Disney Entertainment Operations LLC of failing to comply with COPPA by not properly labeling Disney videos on YouTube as “Made for Kids.” This mislabeling allegedly allowed the company to collect children’s data for targeted advertising purposes. 

“This case highlights the FTC’s commitment to upholding COPPA, which ensures that parents, not corporations, control how their children’s personal information is used online,” said FTC Chair Andrew N. Ferguson in a statement. 

As part of the settlement, Disney will pay a $10 million civil penalty and implement stricter mechanisms to notify parents and obtain consent before collecting data from underage users. The company will also be required to establish a panel to review how its YouTube content is designated. According to the FTC, these measures are intended to reshape how Disney manages child-directed content on the platform and to encourage the adoption of age verification technologies. 

The complaint explained that Disney opted to designate its content at the channel level rather than individually marking each video as “Made for Kids” or “Not Made for Kids.” This approach allegedly enabled the collection of data from child-directed videos, which YouTube then used for targeted advertising. Disney reportedly received a share of the ad revenue and, in the process, exposed children to age-inappropriate features such as autoplay.  

The FTC noted that YouTube first introduced mandatory labeling requirements for creators, including Disney, in 2019 following an earlier settlement over COPPA violations. Despite these requirements, Disney allegedly continued mislabeling its content, undermining parental safeguards. 

“The order penalizes Disney’s abuse of parental trust and sets a framework for protecting children online through mandated video review and age assurance technology,” Ferguson added. 

The settlement arrives alongside an unrelated investigation launched earlier this year by the Federal Communications Commission (FCC) into alleged hiring practices at Disney and its subsidiary ABC. While separate, the two cases add to the regulatory pressure the entertainment giant is facing. 

The Disney case underscores growing scrutiny of how major media and technology companies handle children’s privacy online, particularly as regulators push for stronger safeguards in digital environments where young audiences are most active.

Zoom Refutes Claims of AI Training on Calls Without Consent

Zoom has revised its terms of service after a backlash over concerns that customer calls were being used to train its artificial intelligence (AI) models without consent.

In response, the company clarified in a blog post that audio, video, and chat content would not be used for AI training without proper consent. The move came after users noticed changes to Zoom's terms of service in March that raised concerns about potential AI training on their data.

The video conferencing platform said the revisions were intended to improve transparency and address those concerns.

In June, Zoom introduced AI-powered features, including the ability to summarize meetings without recording the entire session. These features were initially offered as a free trial.

However, experts raised concerns that the initial phrasing of the terms of service could grant Zoom access to more user data than necessary, including content from customer calls. 

Data protection specialist Robert Bateman expressed apprehension about the broad contractual provisions that granted considerable data usage freedom to the service provider.

Zoom later amended its terms to state explicitly that customer consent is required before audio, video, or chat content can be used to train its AI models. The change was made to ensure clarity and user awareness.

AI applications are software tools designed to perform intelligent tasks, often mimicking human behavior by learning from vast datasets. Concerns have arisen over the potential inclusion of personal, sensitive, or copyrighted material in the data used to train AI models.

Zoom, like other tech companies, has intensified its focus on AI products amid growing interest in the technology. The Open Rights Group, a digital privacy advocacy organization, criticized Zoom's approach of launching the AI features as a free trial and encouraging customers to opt in, warning that the opacity of its privacy policy made the practice more alarming.

A spokesperson for Zoom reiterated that customers retain the choice to enable generative AI features and decide whether to share content with Zoom for product improvement. 

The company's Chief Product Officer, Smita Hashim, emphasized that account owners and administrators can choose to activate the features, and that those who do will go through a transparent consent process before customer content is used for AI model training. Screenshots showed warning messages presented to users joining meetings where AI tools were active, offering the option to consent or leave the meeting.