
What Are The Risks of Generative AI?

We are all drowning in information in this digital world, and the adoption of artificial intelligence (AI) has become commonplace across many spheres of business. This evolution has brought with it generative AI, which presents a host of cybersecurity concerns that weigh heavily on the minds of Chief Information Security Officers (CISOs). Let's break the issue down and examine these risks in closer detail.

Model Training and Attack Surface Vulnerabilities:

Generative AI systems collect and store data from many sources across an organisation, often in environments that are not adequately secured. This creates a significant risk of unauthorised data access and manipulation, and it can also introduce bias into AI-generated content.


Data Privacy Concerns:

The lack of robust frameworks around data collection and input into generative AI models raises concerns about data privacy. Without enforceable policies, there's a risk of models inadvertently replicating and exposing sensitive corporate information, leading to data breaches.
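One practical mitigation, in the spirit of the DLP practices discussed later in this post, is to redact obvious secrets and personal data from prompts before they ever reach a model. The sketch below is a minimal, hypothetical pre-submission filter; the regex patterns and the `redact` helper are illustrative assumptions, not a complete data-protection solution.

```python
import re

# Illustrative patterns only; a production deployment would rely on a
# vetted DLP library with far broader coverage.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"(?:sk|pk)[-_][A-Za-z0-9]{16,}"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely-sensitive substrings before the prompt leaves the organisation."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

print(redact("Summarise: contact jane@corp.example, key sk-abcdef1234567890XYZ"))
```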


Corporate Intellectual Property (IP) Exposure:

The absence of strategic policies around generative AI and corporate data privacy can result in models being trained on proprietary codebases. This exposes valuable corporate IP, including API keys and other confidential information, to potential threats.
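Before any internal code is used for fine-tuning or pasted into prompts, it is worth sweeping it for embedded credentials. Below is a minimal sketch of such a pre-training scan; the file walk and key signatures are simplified assumptions (real projects typically rely on dedicated secret scanners such as gitleaks or truffleHog).

```python
import re
from pathlib import Path

# Simplified signatures for common credential formats; illustrative, not exhaustive.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key ID
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # PEM private keys
    re.compile(r"(?i)(?:api[_-]?key|token)\s*[:=]\s*['\"][^'\"]{12,}['\"]"),
]

def scan_tree(root: str) -> list[tuple[str, int]]:
    """Return (file, line number) pairs that look like leaked secrets."""
    hits = []
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if any(p.search(line) for p in SECRET_PATTERNS):
                hits.append((str(path), lineno))
    return hits

for file, lineno in scan_tree("."):
    print(f"possible secret: {file}:{lineno}")
```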


Generative AI Jailbreaks and Backdoors:

Despite the implementation of guardrails to prevent AI models from producing harmful or biased content, researchers have found ways to circumvent these safeguards. Known as "jailbreaks," these exploits enable attackers to manipulate AI models for malicious purposes, such as generating deceptive content or launching targeted attacks.
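To see why jailbreaks keep succeeding, consider how brittle a naive guardrail is. The sketch below is a deliberately simple, hypothetical denylist filter; real guardrails are far more sophisticated, but the underlying weakness is similar: a request that is rephrased, translated, or encoded slips straight past the check, which is exactly what jailbreak techniques exploit.

```python
# A deliberately naive guardrail: block prompts containing denylisted phrases.
DENYLIST = {"build a bomb", "disable the safety"}

def naive_guardrail(prompt: str) -> bool:
    """Return True if the prompt is allowed. Trivially bypassed by paraphrase."""
    lowered = prompt.lower()
    return not any(phrase in lowered for phrase in DENYLIST)

print(naive_guardrail("How do I build a bomb?"))                         # False: blocked
print(naive_guardrail("Describe how one might construct an explosive"))  # True: paraphrase slips through
```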


Cybersecurity Best Practices:

To mitigate these risks, organisations must adopt cybersecurity best practices tailored to generative AI usage:

1. Implement AI Governance: Establishing governance frameworks to regulate the deployment and usage of AI tools within the organisation is crucial. This includes transparency, accountability, and ongoing monitoring to ensure responsible AI practices.

2. Employee Training: Educating employees on the nuances of generative AI and the importance of data privacy is essential. Creating a culture of AI knowledge and providing continuous learning opportunities can help mitigate risks associated with misuse.

3. Data Discovery and Classification: Properly classifying data helps control access and minimise the risk of unauthorised exposure. Organisations should prioritise data discovery and classification processes to effectively manage sensitive information.

4. Utilise Data Governance and Security Tools: Employing data governance and security tools, such as Data Loss Prevention (DLP) and threat intelligence platforms, can strengthen data security and help enforce AI governance policies; a minimal sketch of this idea follows this list.
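As a rough illustration of items 3 and 4, the sketch below assigns a sensitivity label to a piece of text and lets a DLP-style gate decide whether it may be sent to an external AI service. The labels, keyword heuristics, and the `allow_for_external_ai` helper are hypothetical simplifications, not a production classifier.

```python
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3

# Hypothetical keyword heuristics; real classifiers also use metadata,
# data ownership, and trained models rather than plain keyword matching.
KEYWORD_LABELS = {
    "salary": Sensitivity.CONFIDENTIAL,
    "source code": Sensitivity.CONFIDENTIAL,
    "roadmap": Sensitivity.INTERNAL,
}

def classify(text: str) -> Sensitivity:
    """Assign the highest sensitivity label triggered by any keyword."""
    label = Sensitivity.PUBLIC
    for keyword, sensitivity in KEYWORD_LABELS.items():
        if keyword in text.lower() and sensitivity.value > label.value:
            label = sensitivity
    return label

def allow_for_external_ai(text: str) -> bool:
    """DLP-style gate: only PUBLIC content may leave for an external model."""
    return classify(text) is Sensitivity.PUBLIC

print(allow_for_external_ai("Q3 product roadmap draft"))  # False: INTERNAL
print(allow_for_external_ai("Published press release"))   # True: PUBLIC
```

Even a gate this crude makes the policy explicit and auditable; the point is that classification happens before data reaches a model, not after.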


Various cybersecurity vendors provide solutions tailored to address the unique challenges associated with generative AI. Here's a closer look at some of these promising offerings:

1. Google Cloud Security AI Workbench: This solution, powered by advanced AI capabilities, assesses, summarizes, and prioritizes threat data from both proprietary and public sources. It incorporates threat intelligence from reputable sources like Google, Mandiant, and VirusTotal, offering enterprise-grade security and compliance support.

2. Microsoft Copilot for Security: Integrated with Microsoft's robust security ecosystem, Copilot leverages AI to proactively detect cyber threats, enhance threat intelligence, and automate incident response. It simplifies security operations and empowers users with step-by-step guidance, making it accessible even to junior staff members.

3. CrowdStrike Charlotte AI: Built on the Falcon platform, Charlotte AI utilizes conversational AI and natural language processing (NLP) capabilities to help security teams respond swiftly to threats. It enables users to ask questions, receive answers, and take action efficiently, reducing workload and improving overall efficiency.

4. Howso (formerly Diveplane): Howso focuses on advancing trustworthy AI by providing solutions that prioritize transparency, auditability, and accountability. Their Howso Engine offers exact data attribution, so the influence of individual data points is traceable and accountable, while the Howso Synthesizer generates synthetic data that can be trusted for various use cases.

5. Cisco Security Cloud: Built on zero-trust principles, Cisco Security Cloud is an open and integrated security platform designed for multicloud environments. It integrates generative AI to enhance threat detection, streamline policy management, and simplify security operations with advanced AI analytics.

6. SecurityScorecard: SecurityScorecard offers solutions for supply chain cyber risk, external security, and risk operations, along with forward-looking threat intelligence. Their AI-driven platform provides detailed security ratings that offer actionable insights to organizations, aiding in understanding and improving their overall security posture.

7. Synthesis AI: Synthesis AI offers Synthesis Humans and Synthesis Scenarios, leveraging a combination of generative AI and cinematic computer-generated imagery (CGI) pipelines. Their platform programmatically generates labelled images for machine learning models and provides realistic security simulations for cybersecurity training purposes.

These solutions represent a diverse array of offerings aimed at addressing the complex cybersecurity challenges posed by generative AI, providing organizations with the tools needed to safeguard their digital assets effectively.

While the adoption of generative AI presents immense opportunities for innovation, it also brings forth significant cybersecurity challenges. By implementing robust governance frameworks, educating employees, and leveraging advanced security solutions, organisations can navigate these risks and harness the transformative power of AI responsibly.

Securing Reality: The Role of Strict Laws and Digital Literacy in the Fight Against Deepfakes

In response to growing concern in India over deepfakes, in which artificial intelligence is used to manipulate a person's appearance for deceptive purposes, the Ministry of Electronics and Information Technology has issued an advisory to social media intermediaries, requesting that they take active steps to identify and combat deepfake content and misinformation, as required under the IT Rules, 2021.

In a statement made on Tuesday, Union Minister of State for Electronics and Information Technology Rajeev Chandrasekhar said the government may consider introducing a new law to deal with deepfakes and misinformation. Meanwhile, the IT ministry has scheduled two meetings with executives of social media firms, for Thursday and Friday, as part of its social media strategy.

Intermediaries are urged to exercise due diligence when such issues are reported and to act swiftly, removing the content and disabling access to it within 36 hours of being notified. Platforms that fail to comply with the regulations risk losing their safe harbour protection.

The new directive was prompted by a fake video of Telugu actor Rashmika Mandanna, a case that illustrates how the misuse of AI can make the internet a less safe place for women, and it is aimed squarely at preventing online gender-based violence.

The central government has directed that all deepfake content reported by users on social media platforms be removed within 36 hours, failing which the platforms will lose their 'safe harbour immunity' and become subject to criminal and judicial proceedings under Indian law.

Teens are exposed to hundreds of edited and digitally altered images and videos on the internet every single day. Young people today are practised consumers of manipulated media, from blurry neon filters on Snapchat to short, lighthearted, or ironic TikTok videos.

Encountering altered media is now an everyday part of being online. Much of it is "synthetic media": videos, usually based on real footage, in which real people appear to do or say things they never actually did or said. In contrast to shallow fakes, deepfakes are created almost entirely by artificial intelligence, which is why some predict near-imperceptible deepfakes in the near future.

This technology is advancing rapidly, and some experts believe many of these fakes will soon be nearly impossible to detect. Beyond looking authentic, a deepfake will also move, talk, and sing like the original. Anyone might one day stumble upon a deepfake of themselves online, much as a celebrity recently discovered an animated deepfake of their own likeness.

According to a recent news report, 98 per cent of deepfake videos produced involve adult content and feature women, and India ranks sixth among the nations most susceptible to adult-content deepfakes.

Deepfakes are multimedia content, such as videos, audio recordings, or images, manipulated using artificial intelligence algorithms; they can make it difficult to distinguish real content from altered fakes. A copy of the advisory states: "Users are advised not to host such information/content/Deep Fakes and that any such content is removed within 36 hours of being reported, and to ensure that rapid action is taken as soon as possible, within the specified timeframes outlined in the IT Rules 2021, and that access to the content/information is disabled."

A statement issued by the ministry stressed that intermediaries must act in accordance with the relevant provisions of the Information Technology Act and Rules; failure to do so attracts Rule 7 of the Information Technology Rules, 2021, which may result in an organisation losing the protection offered under Section 79(1) of the Information Technology Act, 2000.

Section 79 is what shields an intermediary from being held liable for third-party information, data, or communication links hosted on its platform.

Rajeev Chandrasekhar, Union Minister of State for Electronics and Information Technology, has urged those affected by deepfakes (content in which AI morphs real images or videos into something that appears realistic but is misleading) to report the matter to the police and to seek the remedies provided under the Information Technology Act, which prescribes criminal penalties, including jail time, for violators.

The rise of deep fake technology necessitates a comprehensive policy framework to address its implications for society. In order to tackle this issue, it is crucial to form a dedicated task force comprising policymakers, technology experts, cybercrime specialists from law enforcement agencies, and other stakeholders. 

The task force’s primary goal will be to develop comprehensive guidelines, strategies, and actionable points to combat deep fake threats effectively. To ensure the success of these efforts, it is essential for lawmakers, law enforcers, and citizen bodies to come together and collaborate. By joining forces, they can raise awareness about the prevention of such crimes and provide immediate assistance to deepfake victims. 

To achieve this, recommended actions include promptly reporting deepfake crimes, running public awareness campaigns that educate people about the risks of deepfakes and stress the importance of verifying content, encouraging schools and educational institutions to include digital and social media literacy in their curricula, and providing psychological support and counselling for victims of deepfake attacks.

Furthermore, it is important to acknowledge that the easy accessibility of affordable technology and the widespread availability of explicit content have contributed to the menace of deepfakes. It is therefore crucial to establish an effective task force and launch a comprehensive public awareness campaign to mitigate the impact of deepfake technology and protect its victims. By actively addressing this issue, we can work towards harnessing the potential of this growing industry while safeguarding individuals from its harmful effects.