
Microsoft and OpenAI Reveal Hackers Weaponizing ChatGPT

In a digital landscape fraught with evolving threats, the marriage of artificial intelligence (AI) and cybercrime has become a potent concern. Recent revelations from Microsoft and OpenAI underscore the alarming trend of malicious actors harnessing large language models (LLMs) to bolster their cyber operations. 

The collaboration between these tech giants has shed light on the exploitation of AI tools by state-sponsored hacking groups from Russia, North Korea, Iran, and China, signalling a new frontier in cyber warfare. According to Microsoft's latest research, groups like Strontium, also known as APT28 or Fancy Bear, notorious for their role in high-profile breaches including the hacking of Hillary Clinton’s 2016 presidential campaign, have turned to LLMs to gain insights into sensitive technologies. 

Their use ranges from deciphering satellite communication protocols to automating technical operations through scripting tasks like file manipulation and data selection. This sophisticated application of AI underscores the adaptability and ingenuity of cybercriminals in leveraging emerging technologies to further their malicious agendas. The Thallium group from North Korea and the Iranian hackers of the Curium group have followed suit, utilizing LLMs to bolster their capabilities in researching vulnerabilities, crafting phishing campaigns, and evading detection mechanisms. 

Similarly, Chinese state-affiliated threat actors have integrated LLMs into their arsenal for research, scripting, and refining existing hacking tools, posing a multifaceted challenge to cybersecurity efforts globally. While Microsoft and OpenAI have yet to detect significant attacks leveraging LLMs, the proactive measures undertaken by these companies to disrupt the operations of such hacking groups underscore the urgency of addressing this evolving threat landscape. Swift action to shut down associated accounts and assets, coupled with collaborative efforts to share intelligence with the defender community, is a crucial step in mitigating the risks posed by AI-enabled cyberattacks. 

The implications of AI in cybercrime extend beyond the current landscape, prompting concerns about future use cases such as voice impersonation for fraudulent activities. Microsoft highlights the potential for AI-powered fraud, citing voice synthesis as an example where even short voice samples can be utilized to create convincing impersonations. This underscores the need for preemptive measures to anticipate and counteract emerging threats before they escalate into widespread vulnerabilities. 

In response to the escalating threat posed by AI-enabled cyberattacks, Microsoft is spearheading efforts to harness AI for defensive purposes. The development of Security Copilot, an AI assistant tailored for cybersecurity professionals, aims to empower defenders in identifying breaches and navigating the complexities of cybersecurity data. Additionally, Microsoft's commitment to overhauling software security underscores a proactive approach to fortifying defences in the face of evolving threats. 

The battle against AI-powered cyberattacks remains an ongoing challenge as the digital landscape continues to evolve. The collaborative efforts between industry leaders, innovative approaches to AI-driven defence mechanisms, and a commitment to information sharing are pivotal in safeguarding digital infrastructure against emerging threats. By leveraging AI as both a weapon and a shield in the cybersecurity arsenal, organizations can effectively adapt to the dynamic nature of cyber warfare and ensure the resilience of their digital ecosystems.

Microsoft ‘Cherry-picked’ Examples to Make its AI Seem Functional, Leaked Audio Revealed


According to a report by Business Insider, Microsoft “cherry-picked” examples of generative AI’s output, since the system would frequently "hallucinate" incorrect responses. 

The intel came from a leaked audio file of an internal presentation on an early version of Microsoft’s Security Copilot, a ChatGPT-like artificial intelligence platform that Microsoft created to assist cybersecurity professionals.

Apparently, the audio consists of a Microsoft researcher addressing the results of "threat hunter" testing, in which the AI examined a Windows security log for any indications of potentially malicious behaviour.

"We had to cherry-pick a little bit to get an example that looked good because it would stray and because it's a stochastic model, it would give us different answers when we asked it the same questions," said Lloyd Greenwald, a Microsoft Security Partner giving the presentation, as quoted by BI.

"It wasn't that easy to get good answers," he added.

Security Copilot

Security Copilot, like any chatbot, lets users type a query into a chat window and receive a response, much as they would in a customer-service chat. Security Copilot is largely built on OpenAI's GPT-4 large language model (LLM), which also powers Microsoft's other generative AI forays, such as the Bing search assistant. Greenwald says these demonstrations were "initial explorations" of the possibilities of GPT-4, to which Microsoft had early access.
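For a sense of how thin the layer between such a chat window and the underlying model can be, here is a minimal sketch of a GPT-4-backed chat turn using OpenAI's Python client. The security-analyst system prompt and the exact model name are illustrative assumptions, not Microsoft's actual Security Copilot configuration.

```python
# Minimal sketch of a GPT-4-backed chat turn, using OpenAI's Python client.
# The system prompt below is an illustrative assumption, not Microsoft's
# actual Security Copilot configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_copilot(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You are an assistant for cybersecurity analysts."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_copilot("Summarize the suspicious logons in the last 24 hours."))
```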

Similar to Bing AI in its early days, whose responses were so ludicrous that it had to be "lobotomized," Security Copilot, the researchers claimed, often "hallucinated" wrong answers in its early versions, an issue that appeared to be inherent to the technology. "Hallucination is a big problem with LLMs and there's a lot we do at Microsoft to try to eliminate hallucinations and part of that is grounding it with real data," Greenwald said in the audio, "but this is just taking the model without grounding it with any data."
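Greenwald's remark about "grounding it with real data" maps onto what is now usually called retrieval-augmented generation: fetch relevant real records first, then hand them to the model alongside the question. A minimal sketch of the idea, in which search_security_logs is a hypothetical retrieval function standing in for a real log store or vector index:

```python
# Sketch of "grounding" a model's answer with real data, i.e. retrieval-
# augmented generation. search_security_logs is a hypothetical retriever
# standing in for a real log store or vector index.
from openai import OpenAI

client = OpenAI()

def search_security_logs(question: str, k: int = 5) -> list[str]:
    # A real implementation would query an indexed log store here.
    return ["<relevant log excerpt 1>", "<relevant log excerpt 2>"][:k]

def grounded_answer(question: str) -> str:
    evidence = "\n".join(search_security_logs(question))
    prompt = ("Answer using ONLY the log excerpts below. If they are "
              "insufficient, say so rather than guessing.\n\n"
              f"Logs:\n{evidence}\n\nQuestion: {question}")
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```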

GPT-4, the LLM on which Microsoft built Security Copilot, was not trained on cybersecurity-specific data. Rather, it was used directly out of the box, relying only on its massive generic dataset, which is standard practice.

Cherry on Top

Discussing other security-related queries, Greenwald revealed that "this is just what we demoed to the government."

However, it is unclear whether Microsoft used these “cherry-picked” examples in its pitches to the government and other potential customers – or whether its researchers were upfront about how the examples were selected.

A spokeswoman for Microsoft told BI that "the technology discussed at the meeting was exploratory work that predated Security Copilot and was tested on simulations created from public data sets for the model evaluations," stating that "no customer data was used."  

Security Copilot: Microsoft Employs GPT-4 to Improve Security Incident Response


Microsoft has been integrating Copilot AI assistants across its product line as part of its $10 billion investment in OpenAI. The latest is Microsoft Security Copilot, which aids security teams in investigating and responding to security incidents. 

According to Chang Kawaguchi, vice president and AI Security Architect at Microsoft, defenders are having a difficult time coping with a dynamic security environment. Microsoft Security Copilot is designed to make defenders' lives easier by using artificial intelligence to help them catch incidents that they might otherwise miss, improve the quality of threat detection, and speed up response. To locate breaches, connect threat signals, and conduct data analysis, Security Copilot makes use of both the GPT-4 generative AI model from OpenAI and the proprietary security-based model from Microsoft. 

The objective of Security Copilot is to make “Defenders’ lives better, make them more efficient, and make them more effective by bringing AI to this problem,” Kawaguchi says. 

How Does Security Copilot Work? 

Security Copilot ingests and interprets huge amounts of security data, such as the 65 trillion security signals Microsoft pulls in every day and all the data gathered by the Microsoft products a customer deploys, including Microsoft Sentinel, Defender, Entra, Priva, Purview, and Intune. Analysts can use it to investigate incidents and to research information on common vulnerabilities and exposures (CVEs). 
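Security Copilot's ingestion pipeline is proprietary, but the kind of signal it draws on can be pulled from a Sentinel-backed Log Analytics workspace with Azure's query SDK. A sketch, assuming the azure-monitor-query and azure-identity packages are installed and a real workspace ID replaces the placeholder:

```python
# Sketch: pulling recent security events from a Sentinel-backed Log Analytics
# workspace via a KQL query. The workspace ID is a placeholder; credentials
# come from the environment (CLI login, managed identity, etc.).
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(
    workspace_id="<workspace-id>",
    query="SecurityEvent | where TimeGenerated > ago(1d) | take 20",
    timespan=timedelta(days=1),
)
for table in response.tables:
    for row in table.rows:
        print(row)
```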

When analysts and incident response teams type "/ask about" into the text prompt, Security Copilot responds with information based on what it knows about the organization's data. 
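Microsoft has not published how that prompt is wired up. A hypothetical sketch of such a command layer might look like the following, where only the "/ask about" string comes from the article; fetch_org_context and the routing logic are assumptions:

```python
# Hypothetical sketch of a "/ask about" command layer. Only the command
# string comes from the article; fetch_org_context and the routing logic
# are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def fetch_org_context(topic: str) -> str:
    # Would look up tenant-specific data (assets, alerts, logs) on the topic.
    return f"<organization data related to {topic!r}>"

def handle_prompt(raw: str) -> str:
    if raw.startswith("/ask about"):
        topic = raw.removeprefix("/ask about").strip()
        prompt = (f"Context:\n{fetch_org_context(topic)}\n\n"
                  f"What do we know about {topic}?")
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content
    return "Unrecognized command."

print(handle_prompt("/ask about CVE-2021-34473"))
```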

According to Kawaguchi, this lets security teams connect the dots between the various elements of a security incident, such as a suspicious email, a malicious software file, or the numerous system components that have been compromised. 

The queries could be general, such as an explanation of a vulnerability, or specific to the organization’s environment, such as looking in the logs for signs that a particular Exchange flaw had been exploited. And because Security Copilot is built on GPT-4, it can respond to questions asked in natural language. 
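To make the Exchange example concrete: checking logs for signs that a known flaw was exploited often reduces to matching request logs against published indicators. Here is a rough sketch that scans IIS logs for a ProxyShell-style request pattern; the regex and log path are demonstration assumptions, not a complete indicator set.

```python
# Sketch: scanning Exchange IIS logs for a ProxyShell-style indicator.
# ProxyShell abuse of the Autodiscover endpoint showed up in IIS logs as
# requests combining autodiscover.json with the PowerShell endpoint. The
# regex and the log directory below are demonstration assumptions.
import re
from pathlib import Path

SUSPICIOUS = re.compile(r"/autodiscover/autodiscover\.json.*powershell",
                        re.IGNORECASE)

def scan_iis_logs(log_dir: str) -> list[str]:
    hits = []
    for log_file in Path(log_dir).glob("*.log"):
        for line in log_file.read_text(errors="ignore").splitlines():
            if SUSPICIOUS.search(line):
                hits.append(line)
    return hits

for hit in scan_iis_logs(r"C:\inetpub\logs\LogFiles\W3SVC1"):
    print(hit)
```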

The analyst can review brief summaries of what transpired and then follow Security Copilot's prompts to dig deeper into the investigation. All of these actions can be recorded and shared with other security team members, stakeholders, and senior executives via a "pinboard." Completed tasks are saved and remain accessible, alongside an automatically generated summary that is updated as new activities are finished. 
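The "pinboard" Kawaguchi describes amounts to an ordered, shareable record of investigation steps with a running summary. A minimal sketch of such a structure; all names here are illustrative assumptions, since Microsoft has not published the implementation:

```python
# Minimal sketch of a shareable investigation "pinboard": an ordered record
# of steps plus a running summary. All names are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PinnedStep:
    query: str
    result_summary: str
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class Pinboard:
    incident_id: str
    steps: list[PinnedStep] = field(default_factory=list)

    def pin(self, query: str, result_summary: str) -> None:
        self.steps.append(PinnedStep(query, result_summary))

    def summary(self) -> str:
        # A real tool would regenerate this with the LLM after each step.
        lines = [f"Investigation {self.incident_id}:"]
        lines += [f"- {s.query}: {s.result_summary}" for s in self.steps]
        return "\n".join(lines)
```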

“This is what makes this experience more of a notebook than a chat bot experience,” says Kawaguchi, adding that the tool can also create PowerPoint presentations based on the investigation conducted by the security team, which can then be used to share details of the incident. 

The company claims that Security Copilot is not designed to replace human analysts, but rather to give them the information they need to work quickly and efficiently throughout an investigation. Threat hunters can also use the tool to check each asset in the environment and determine whether the organization is exposed to known vulnerabilities and exploits.