Securing Reality: The Role of Strict Laws and Digital Literacy in the Fight Against Deepfakes

India responds to artificial intelligence-driven deceptive content with a two-pronged strategy: government advisories and new legislation.

 


In response to growing concern in India over deepfakes, in which artificial intelligence is used to manipulate a person's appearance for deceptive purposes, the Ministry of Electronics and Information Technology has issued an advisory to social media intermediaries asking them to take active steps to identify and combat deepfake content and misinformation, as required under the IT Rules, 2021.

In a statement on Tuesday, Union Minister of State for Electronics and Information Technology Rajeev Chandrasekhar said the government may consider introducing a new law to deal with deepfakes and misinformation. Meanwhile, the IT ministry has scheduled two meetings with executives of social media firms for Thursday and Friday as part of its social media strategy.

The advisory stresses that intermediaries must exercise due diligence when such content is reported, acting swiftly to remove it and disable access to it within 36 hours of being notified. Platforms that fail to comply with the regulations risk losing the protection of safe harbour.

The new directive was prompted by a fake video featuring Telugu actor Rashmika Mandanna, and it underscores how the misuse of artificial intelligence can fuel online gender-based violence and make the internet a less safe place for women.

The central government has directed that all deepfake content reported by users be removed from social media platforms within 36 hours, failing which the platforms will lose their 'safe harbour immunity' and become subject to criminal and judicial proceedings under Indian law.

Teens are exposed to hundreds of edited and digitally altered images and videos on the internet every single day. Young people today are skilled at consuming manipulated media that is fun, lighthearted, or ironic, from blurry neon filters on Snapchat to short TikTok videos.

Encountering altered media in some form is now a routine part of being online. Much of it is "synthetic media": altered videos, usually built on real footage, in which real people appear to do or say things they have never actually done or said. Unlike shallow fakes, deepfakes are created almost entirely by artificial intelligence, which is why some predict that near-imperceptible deepfakes will appear in the near future.

The technology is advancing rapidly, and some experts believe many of these fakes will soon be nearly impossible to detect. Beyond looking authentic, a deepfake can move, talk, and sing like the original. Anyone may one day discover a deepfake of themselves online, much as a celebrity recently discovered an animated deepfake of her own likeness.

According to a recent news report, 98 per cent of the deepfake videos produced involve adult content and feature women, and India ranks sixth among the nations most susceptible to adult-content deepfake videos.

Deepfakes are multimedia content such as videos, audio recordings, or images manipulated using artificial intelligence algorithms, and they can make it difficult to distinguish real content from altered content. A copy of the advisory states that intermediaries have been asked to ensure that "Users are advised not to host such information/content/Deep Fakes," that any such content is removed within 36 hours of being reported, that rapid action is taken within the timeframes outlined in the IT Rules, 2021, and that access to the content/information is disabled.

A statement issued by the ministry stressed that intermediaries must act in accordance with the relevant provisions of the Information Technology Act and the IT Rules; those that do not will attract Rule 7 of the IT Rules, 2021, and may lose the protection offered under Section 79(1) of the Information Technology Act, 2000.

Under Section 79, an intermediary is not held liable for any third-party information, data, or communication link made available or hosted on its platform.

Rajeev Chandrasekhar, Union Minister of State for Electronics and Information Technology, has urged those affected by deepfakes, content generated by artificial intelligence (AI) that morphs real images or videos into something that appears realistic but is misleading, to report the matter to the police and seek the remedies provided under the Information Technology Act, which prescribes criminal penalties and jail time for violators.

The rise of deepfake technology necessitates a comprehensive policy framework to address its implications for society. In order to tackle this issue, it is crucial to form a dedicated task force comprising policymakers, technology experts, cybercrime specialists from law enforcement agencies, and other stakeholders.

The task force's primary goal will be to develop comprehensive guidelines, strategies, and actionable points to combat deepfake threats effectively. To ensure the success of these efforts, it is essential for lawmakers, law enforcers, and citizen bodies to come together and collaborate. By joining forces, they can raise awareness about the prevention of such crimes and provide immediate assistance to deepfake victims.

In order to achieve this, recommended actions include promptly reporting any deepfake crimes; running public awareness campaigns that educate individuals about the risks of deepfakes and emphasize the importance of verifying content; encouraging schools and educational institutions to include digital and social media literacy in their curricula; and providing psychological support and counseling for victims of deepfake attacks.

Furthermore, it is important to acknowledge that the easy accessibility of affordable technology and the widespread availability of explicit content have contributed to the menace of deepfakes. It is therefore crucial to establish an effective task force and launch a comprehensive public awareness campaign to mitigate the impact of deepfake technology and protect its victims. By actively addressing this issue, we can work towards harnessing the potential of this growing industry while safeguarding individuals from its harmful effects.