
Influencers Alarmed as New AI Rules Enforce Three-Hour Takedowns


India’s new three-hour takedown rule for online content has triggered unease among influencers, agencies, and brands, who fear it could disrupt campaigns and shrink creative freedom.

The rule, introduced through amendments to the IT Intermediary Rules on February 11, slashes the takedown window from 36 hours to just three, with the stated goal of curbing unlawful and AI-generated deepfake content. Creators argue that while tackling deepfakes and harmful material is essential, such a compressed deadline leaves almost no room to contest wrongful flags or provide context, especially when automated moderation tools make mistakes. They warn that legitimate posts could be penalised simply because systems misread nuance, humour, or sensitive but educational topics.

Influencer Ekta Makhijani described the deadline as “incredibly tight,” noting that if a brand campaign video is misflagged, an entire launch window could be lost in hours rather than days. She highlighted how parenting content around breastfeeding or toddler behaviour has previously been misinterpreted by moderation tools, and said the shorter window magnifies the risk of such false positives. Apparel brand founder Akanksha Kommirelly added that small creators lack round-the-clock legal and compliance teams, making it unrealistic for them to respond to takedown notices at all times.

Experts also worry about a chilling effect on speech, especially satire, political commentary, and advocacy. With platforms facing tighter liability, agencies fear an “act first, verify later” culture in which companies remove anything remotely borderline to stay safe. Raj Mishra of Chtrbox warned that, in practice, the incentive becomes to take down flagged content immediately, which could hit investigative work or edgy creative pieces hardest. India’s linguistic diversity further complicates moderation, as systems trained mainly on English may misinterpret regional content.

Alongside takedowns, mandatory AI labelling is reshaping creator workflows and brand strategies. Kommirelly noted that prominent AI tags on visual campaigns may weaken brand recall, while Mishra cautioned that platforms could quietly de-prioritise AI-labelled content in algorithms, reducing reach regardless of audience acceptance. This dual pressure—strict timelines and AI disclosure—forces creators to rethink how they script, edit, and publish content.

Agencies like Kofluence and Chtrbox are responding by building compliance support systems for the creator economy. These include AI content guides, pre-upload checks, documentation protocols, legal support networks, and even insurance options to cover campaign disruptions. While most stakeholders accept that tougher rules are needed against deepfakes and abuse, they are urging the government to differentiate emergency takedowns for clearly illegal content from more contested speech so that speed does not entirely override fairness.