We are all drowning in information in this digital world and the widespread adoption of artificial intelligence (AI) has become increasingly commonplace within various spheres of business. However, this technological evolution has brought about the emergence of generative AI, presenting a myriad of cybersecurity concerns that weigh heavily on the minds of Chief Information Security Officers (CISOs). Let's synthesise this issue and see the intricacies from a microscopic light.
The lack of robust frameworks around data collection and input into generative AI models raises concerns about data privacy. Without enforceable policies, there's a risk of models inadvertently replicating and exposing sensitive corporate information, leading to data breaches.
The absence of strategic policies around generative AI and corporate data privacy can result in models being trained on proprietary codebases. This exposes valuable corporate IP, including API keys and other confidential information, to potential threats.
Despite the implementation of guardrails to prevent AI models from producing harmful or biased content, researchers have found ways to circumvent these safeguards. Known as "jailbreaks," these exploits enable attackers to manipulate AI models for malicious purposes, such as generating deceptive content or launching targeted attacks.
To mitigate these risks, organisations must adopt cybersecurity best practices tailored to generative AI usage:
1. Implement AI Governance: Establishing governance frameworks to regulate the deployment and usage of AI tools within the organisation is crucial. This includes transparency, accountability, and ongoing monitoring to ensure responsible AI practices.
2. Employee Training: Educating employees on the nuances of generative AI and the importance of data privacy is essential. Creating a culture of AI knowledge and providing continuous learning opportunities can help mitigate risks associated with misuse.
3. Data Discovery and Classification: Properly classifying data helps control access and minimise the risk of unauthorised exposure. Organisations should prioritise data discovery and classification processes to effectively manage sensitive information.
4. Utilise Data Governance and Security Tools: Employing data governance and security tools, such as Data Loss Prevention (DLP) and threat intelligence platforms, can enhance data security and enforcement of AI governance policies.
Various cybersecurity vendors provide solutions tailored to address the unique challenges associated with generative AI. Here's a closer look at some of these promising offerings:
1. Google Cloud Security AI Workbench: This solution, powered by advanced AI capabilities, assesses, summarizes, and prioritizes threat data from both proprietary and public sources. It incorporates threat intelligence from reputable sources like Google, Mandiant, and VirusTotal, offering enterprise-grade security and compliance support.
2. Microsoft Copilot for Security: Integrated with Microsoft's robust security ecosystem, Copilot leverages AI to proactively detect cyber threats, enhance threat intelligence, and automate incident response. It simplifies security operations and empowers users with step-by-step guidance, making it accessible even to junior staff members.
3. CrowdStrike Charlotte AI: Built on the Falcon platform, Charlotte AI utilizes conversational AI and natural language processing (NLP) capabilities to help security teams respond swiftly to threats. It enables users to ask questions, receive answers, and take action efficiently, reducing workload and improving overall efficiency.
4. Howso (formerly Diveplane): Howso focuses on advancing trustworthy AI by providing AI solutions that prioritize transparency, auditability, and accountability. Their Howso Engine offers exact data attribution, ensuring traceability and accountability of influence, while the Howso Synthesizer generates synthetic data that can be trusted for various use cases.
5. Cisco Security Cloud: Built on zero-trust principles, Cisco Security Cloud is an open and integrated security platform designed for multicloud environments. It integrates generative AI to enhance threat detection, streamline policy management, and simplify security operations with advanced AI analytics.
6. SecurityScorecard: SecurityScorecard offers solutions for supply chain cyber risk, external security, and risk operations, along with forward-looking threat intelligence. Their AI-driven platform provides detailed security ratings that offer actionable insights to organizations, aiding in understanding and improving their overall security posture.
7. Synthesis AI: Synthesis AI offers Synthesis Humans and Synthesis Scenarios, leveraging a combination of generative AI and cinematic digital general intelligence (DGI) pipelines. Their platform programmatically generates labelled images for machine learning models and provides realistic security simulation for cybersecurity training purposes.
These solutions represent a diverse array of offerings aimed at addressing the complex cybersecurity challenges posed by generative AI, providing organizations with the tools needed to safeguard their digital assets effectively.
While the adoption of generative AI presents immense opportunities for innovation, it also brings forth significant cybersecurity challenges. By implementing robust governance frameworks, educating employees, and leveraging advanced security solutions, organisations can navigate these risks and harness the transformative power of AI responsibly.
As generative AI technology gains momentum, the focus on cybersecurity threats surrounding the chips and processing units driving these innovations intensifies. The crux of the issue lies in the limited number of manufacturers producing chips capable of handling the extensive data sets crucial for generative AI systems, rendering them vulnerable targets for malicious attacks.
According to recent records, Nvidia, a leading player in GPU technology, announced cybersecurity partnerships during its annual GPU technology conference. This move underscores the escalating concerns within the industry regarding the security of chips and hardware powering AI technologies.
Traditionally, cyberattacks garner attention for targeting software vulnerabilities or network flaws. However, the emergence of AI technologies presents a new dimension of threat. Graphics processing units (GPUs), integral to the functioning of AI systems, are susceptible to similar security risks as central processing units (CPUs).
Experts highlight four main categories of security threats facing GPUs:
1. Malware attacks, including "cryptojacking" schemes where hackers exploit processing power for cryptocurrency mining.
2. Side-channel attacks, exploiting data transmission and processing flaws to steal information.
3. Firmware vulnerabilities, granting unauthorised access to hardware controls.
4. Supply chain attacks, targeting GPUs to compromise end-user systems or steal data.
Moreover, the proliferation of generative AI amplifies the risk of data poisoning attacks, where hackers manipulate training data to compromise AI models.
Despite documented vulnerabilities, successful attacks on GPUs remain relatively rare. However, the stakes are high, especially considering the premium users pay for GPU access. Even a minor decrease in functionality could result in significant losses for cloud service providers and customers.
In response to these challenges, startups are innovating AI chip designs to enhance security and efficiency. For instance, d-Matrix's chip partitions data to limit access in the event of a breach, ensuring robust protection against potential intrusions.
As discussions surrounding AI security evolve, there's a growing recognition of the need to address hardware and chip vulnerabilities alongside software concerns. This shift reflects a proactive approach to safeguarding AI technologies against emerging threats.
The intersection of generative AI and GPU technology highlights the critical importance of cybersecurity in the digital age. By understanding and addressing the complexities of GPU security, stakeholders can mitigate risks and foster a safer environment for AI innovation and adoption.
In a recent eye-opening report from cybersecurity experts at Perception Point, a major spike in sneaky online attacks has been uncovered. These attacks, called Business Email Compromise (BEC), zoomed up by a whopping 1,760% in 2023. The bad actors behind these attacks are using fancy tech called generative AI (GenAI) to craft tricky emails that pretend to be from big-shot companies and bosses. These fake messages trick people into giving away important information or even money, putting both companies and people at serious risk.
The report highlights a dramatic escalation in BEC attacks, from a mere 1% of cyber threats in 2022 to a concerning 18.6% in 2023. Cybercriminals now employ sophisticated emails crafted through generative AI, impersonating reputable companies and executives. This deceptive tactic dupes unsuspecting victims into surrendering sensitive data or funds, posing a significant threat to organisational security and financial stability.
Exploiting the capabilities of AI technology, cybercriminals have embraced GenAI to orchestrate intricate and deceptive attacks. BEC attacks have become a hallmark of this technological advancement, presenting a formidable challenge to cybersecurity experts worldwide.
Beyond BEC attacks, the report sheds light on emerging threat vectors employed by cybercriminals to bypass traditional security measures. Malicious QR codes, known as “quishing,” have seen a considerable uptick, comprising 2.7% of all phishing attacks. Attackers exploit users’ trust in these seemingly innocuous symbols by leveraging QR codes to conceal malicious sites.
Additionally, the report reveals a concerning trend known as “two-step phishing,” witnessing a 175% surge in 2023. This tactic capitalises on legitimate services and websites to evade detection, exploiting the credibility of well-known domains. Cybercriminals circumvent conventional security protocols with alarming efficacy by directing users to a genuine site before redirecting them to a malicious counterpart.
The urgent need for enhanced security measures cannot be emphasised more as cyber threats evolve in sophistication and scale. Organisations must prioritise advanced security solutions to safeguard their digital assets. With one in every five emails deemed illegitimate and phishing attacks comprising over 70% of all threats, the imperative for robust email security measures has never been clearer.
Moreover, the widespread adoption of web-based productivity tools and Software-as-a-Service (SaaS) applications has expanded the attack surface, necessitating comprehensive browser security and data governance strategies. Addressing vulnerabilities within these digital ecosystems is paramount to mitigating the risk of data breaches and financial loss.
Perception Point’s Annual Report highlights the urgent need for proactive cybersecurity measures in the face of evolving cyber threats. As cybercriminals leverage technological advancements to perpetrate increasingly sophisticated attacks, organisations must remain vigilant and implement robust security protocols to safeguard against potential breaches. By embracing innovative solutions and adopting a proactive stance towards cybersecurity, businesses can bolster their defences and protect against the growing menace of BEC attacks and other malicious activities. Stay informed, stay secure.
In the latest study, researchers have made the first "generative AI worms" that can spread from one device to another, deploying malware or stealing data in the process.
Nassi, in collaboration with fellow academics Stav Cohen and Ron Bitton, developed the worm, which they named Morris II in homage to the 1988 internet debacle caused by the first Morris computer worm. The researchers demonstrate how the AI worm may attack a generative AI email helper to steal email data and send spam messages, circumventing several security measures in ChatGPT and Gemini in the process, in a research paper and website.
The study, conducted in test environments rather than on a publicly accessible email assistant, coincides with the growing multimodal nature of large language models (LLMs), which can produce images and videos in addition to text.
Prompts are language instructions that direct the tools to answer a question or produce an image. This is how most generative AI systems operate. These prompts, nevertheless, can also be used as a weapon against the system.
Prompt injection attacks can provide a chatbot with secret instructions, while jailbreaks can cause a system to ignore its security measures and spew offensive or harmful content. For instance, a hacker might conceal text on a website instructing an LLM to pose as a con artist and request your bank account information.
The researchers used a so-called "adversarial self-replicating prompt" to develop the generative AI worm. According to the researchers, this prompt causes the generative AI model to output a different prompt in response.
The researchers connected ChatGPT, Gemini, and open-source LLM, LLaVA, to develop an email system that could send and receive messages using generative AI to demonstrate how the worm may function. They then discovered two ways to make use of the system: one was to use a self-replicating prompt that was text-based, and the other was to embed the question within an image file.
A video showcasing the findings shows the email system repeatedly forwarding a message. Also, according to the experts, data extraction from emails is possible. According to Nassi, "It can be names, phone numbers, credit card numbers, SSNs, or anything else that is deemed confidential."
Nassi and the other researchers report that they expect to see generative AI worms in the wild within the next two to three years in a publication that summarizes their findings. According to the research paper, "many companies in the industry are massively developing GenAI ecosystems that integrate GenAI capabilities into their cars, smartphones, and operating systems."
Apart from providing a space for experimentation, other points increasingly show that open-source LLMs are going to gain the same attention closed-source LLMs are getting now.
The open-source nature allows organizations to understand,
modify, and tailor the models to their specific requirements. The collaborative
environment nurtured by open-source fosters innovation, enabling faster
development cycles. Additionally, the avoidance of vendor lock-in and adherence
to industry standards contribute to seamless integration. The security benefits
derived from community scrutiny and ethical considerations further bolster the
appeal of open-source LLMs, making them a strategic choice for enterprises
navigating the evolving landscape of artificial intelligence.
Google has been working on the development of the Gemini large language model (LLM) for the past eight months and just recently provided access to its early versions to a small group of companies. This LLM is believed to be giving head-to-head competition to other LLMs like Meta’s Llama 2 and OpenAI’s GPT-4.
The AI model is designed to operate on various formats, be it text, image or video, making the feature one of the most significant algorithms in Google’s history.
In a blog post, Google CEO Sundar Pichai wrote, “This new era of models represents one of the biggest science and engineering efforts we’ve undertaken as a company.”
The new LLM, also known as a multimodal model, is capable of various methods of input, like audio, video, and images. Traditionally, multimodal model creation involves training discrete parts for several modalities and then piecing them together.
“These models can sometimes be good at performing certain tasks, like describing images, but struggle with more conceptual and complex reasoning,” Pichai said. “We designed Gemini to be natively multimodal, pre-trained from the start on different modalities. Then we fine-tuned it with additional multimodal data to further refine its effectiveness.”
Google also unveiled the Cloud TPU v5p, its most potent ASIC chip, in tandem with the launch. This chip was created expressly to meet the enormous processing demands of artificial intelligence. According to the company, the new processor can train LLMs 2.8 times faster than Google's prior TPU v4.
For ChatGPT and Bard, two examples of generative AI chatbots, LLMs are the algorithmic platforms.
The Cloud TPU v5e, which touted 2.3 times the price performance over the previous generation TPU v4, was made generally available by Google earlier last year. The TPU v5p is significantly faster than the v4, but it costs three and a half times as much./ Google’s new Gemini LLM is now available in some of Google’s core products. For example, Google’s Bard chatbot is using a version of Gemini Pro for advanced reasoning, planning, and understanding.
Developers and enterprise customers can use the Gemini API in Vertex AI or Google AI Studio, the company's free web-based development tool, to access Gemini Pro as of December 13. Further improvements to Gemini Ultra, including thorough security and trust assessments, led Google to announce that it will be made available to a limited number of users in early 2024, ahead of developers and business clients.
Since it completed its first anniversary, here we will shed light on some of the changes that the AI tool has brought with itself:
One of the first notable changes that collectively made the world take a mental leap into the future of automation. Earlier (pre-2022), when being asked about automation in future, one would expect blue-collar roles to be its first victim, taking into account that these jobs demand lower skills and are repetitive in nature.
However, OpenAI completely changed this perspective by proving that white collar (especially the creative roles) were at a much higher risk in terms of automation.
Education is the next industry that has undergone permanent change. Writing essays, memorizing facts for exams, and answering multiple-choice questions correctly have all been part of the standard educational and testing regimen for generations.
But now, ChatGPT has brought in a revolution. It scores remarkably well on a variety of standardised tests, delivers coherent knowledge from a wide range of sources, and can compose essays better than most of its users. This has challenged the long-standing educational paradigm, bringing up a number of practical and philosophical issues in the process.
A more contemporary domain transformed with ChatGPT is Geopolitics. Governments all around the world now recognize AI as one of the key technologies of this century, thanks to OpenAI's offering, which has sparked global talks and the development of strategies in this area.
For instance, the competition between the US and China in the field of AI, the multilateral conference on AI safety held recently in the UK, and the passing of the EU’s AI Act.
ChatGPT has sparked some serious discussions on AI in international relations, which is only expected to intensify in the future.
It is anticipated that the strategic assets that nuclear engineers were during the Cold War will be viewed similarly with today’s top AI engineers.
One of the reasons ChatGPT deserves praise for is embedded AI in popular applications. The expectations for apps have been permanently changed, whether it is directly caused by something like Adobe incorporating generative AI into their creative cloud suite, or indirectly caused by things like Microsoft Windows or their Office Suite being upgraded through a “Copilot” that is driven by ChatGPT.
This tendency is now being adopted by more and more applications, suggesting that generative AI may soon outperform the internet in terms of ubiquity. One should not bet against it, since it has amassed hundreds of millions of users in less than a year.
Another significant change brought about by ChatGPT was the introduction of the idea of "Knowledge as a service." An underlying neural network that stores data powers ChatGPT and a plethora of other generative AI tools. After that, this information can be accessed whenever needed to generate ideas or new insights. This area has emerged overnight as a result of ChatGPT's capacity to deliver precise and carefully chosen information on demand. Now that a number of businesses are requiring these capabilities internally and that new updates enable the development of personalized ChatGPTs, the field of "KaaS" is only going to expand.
The points mentioned above, however, are just the tip of the iceberg. These are the mere subset of changes induced by ChatGPT and how the world has changed with its introduction to these tools.
One can conclude that generative AI is all set to change varied aspects of lives, and the world will ultimately not be the same as it is today. One can only imagine the degree of changes one will experience with all that AI has to offer.