Business professionals began using ChatGPT and other generative AI tools in enterprise settings as soon as they became available, looking to complete their tasks more quickly and effectively. For marketing directors, generative AI drafts PR pitches; for sales representatives, it writes prospecting emails. Business users have already folded it into their daily operations, even though data governance and legal concerns have surfaced as barriers to official company adoption.
With tools like GitHub Copilot, developers have been leveraging generative AI to write and improve code. A developer describes a software component in natural language, and the AI generates working code that fits the developer's context.
The developer's participation in this process is essential: they must have the technical knowledge to ask the right questions, assess the generated software, and integrate it with the rest of the code base. These duties call for software engineering expertise.
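As a rough illustration of this workflow, the hypothetical exchange below shows a developer's natural-language description written as a comment and the kind of function a code assistant might produce in response; the function name and behavior are invented for illustration, not output from any specific tool.

```python
# Developer's prompt, written as a natural-language comment:
# "Sum the prices of the items in an order (in cents) and apply a
#  percentage discount when the subtotal exceeds a threshold."

def total_order_value_cents(items, discount_pct=10, threshold_cents=10_000):
    """Return the order total in cents, discounted above a threshold.

    `items` is an iterable of dicts with 'price_cents' and 'quantity' keys.
    """
    subtotal = sum(item["price_cents"] * item["quantity"] for item in items)
    if subtotal > threshold_cents:
        subtotal -= subtotal * discount_pct // 100
    return subtotal


# The developer still reviews the suggestion, checks edge cases such as
# empty carts and rounding, and integrates it with the rest of the code base.
print(total_order_value_cents([{"price_cents": 6_000, "quantity": 2}]))  # 10800
```

The point of the sketch is the division of labor it implies: the assistant proposes an implementation, but validating and integrating it remains an engineering task.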
Traditionally, security teams have focused on the applications created by their development organizations. Many users, however, still treat low-code/no-code business platforms as ready-made solutions, when in fact they have become application development platforms that power many of our business-critical applications. Bringing citizen developers within the security umbrella is still a work in progress.
With the growing popularity of generative AI, even more users will be creating applications. Business users are already having discussions about where data is stored, how their apps handle it, and who can access it. Errors are inevitable if we leave these new developers to make such decisions on their own without any guidance.
Some organizations aim to ban citizen development or to require that business users obtain permission before using any application or accessing any data. That is an understandable response; however, given the enormous productivity gains at stake, it is hard to believe it would succeed. A better strategy is to establish automatic guardrails that quietly address security issues and give business users a safe way to employ generative AI through low-code/no-code, allowing them to focus on what they do best: pushing the business forward.
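To make the idea of an automatic guardrail concrete, here is a minimal sketch, assuming a hypothetical setup in which every prompt a business user sends to an external generative AI service first passes through a redaction step. The patterns, the `redact` helper, and the `send_to_llm` callback are illustrative assumptions, not features of any particular platform.

```python
import re

# Hypothetical guardrail: scrub obvious sensitive values from a prompt
# before it leaves the organization for an external generative AI API.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace sensitive substrings with placeholder tags."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

def guarded_completion(prompt: str, send_to_llm) -> str:
    """Redact the prompt, then call the provided LLM client function."""
    return send_to_llm(redact(prompt))

# Example: the business user never has to think about the redaction step.
demo = "Draft a follow-up email to jane.doe@example.com about invoice 4417."
print(redact(demo))
```

In practice such controls would live in the low-code platform or a network proxy rather than in user-facing code; the sketch only shows the principle of intervening silently before data leaves the organization.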
Despite the security measures put in place by OpenAI, and although the majority of developers use it for harmless purposes, a new analysis suggests that threat actors can still use AI to create malware.
According to a cybersecurity researcher, ChatGPT was used to create a zero-day attack capable of collecting data from a compromised device. Alarmingly, the malware evaded detection by every vendor on VirusTotal.
Forcepoint researcher Aaron Mulgrew said he decided early in the development process not to write any code himself and instead to rely only on cutting-edge techniques often used by highly skilled threat actors, such as rogue nation-states.
Mulgrew, who called himself a "novice" at developing malware, said he chose Go as the implementation language not just because it was simple to use but also because he could manually debug the code if necessary. To evade detection, he also used steganography, which conceals sensitive information within an ordinary file or message.
Mulgrew found a loophole in ChatGPT's safeguards that allowed him to have it write the malware line by line and function by function.
By compiling the separate functions, he created an executable that steals data discreetly and that he believes is comparable to nation-state malware. The alarming part is that Mulgrew developed such dangerous malware without advanced coding experience and without the help of any hacking team.
According to Mulgrew, the malware poses as a screensaver app that launches itself automatically on Windows devices. Once launched, it looks for files such as Word documents, images, and PDFs and steals any data it can find.
The malware then fragments the data and conceals it within other images on the device. Because those images are subsequently uploaded to a Google Drive folder, the data theft is difficult to detect.
According to a report by Reuters, the European Data Protection Board (EDPB) has recently established a task force to address privacy issues relating to artificial intelligence (AI), with a focus on ChatGPT.
The move follows recent decisions by Italy and by Germany's commissioner for data protection to regulate ChatGPT, raising the possibility that other nations may follow suit.
The facial recognition firm Clearview AI often boasts of its potential for identifying rioters involved in the January 6 attack on the Capitol, saving children from being abused or exploited, and assisting in the exoneration of those who have been falsely accused of crimes. Yet critics cite two examples, in Detroit and New Orleans, where incorrect facial recognition identifications led to unjustified arrests.
Last month, the company's CEO, Hoan Ton-That, admitted in an interview with the BBC that Clearview had used photos without users' knowledge, a practice that made possible the organization's enormous database, which is promoted to law enforcement on its website as a tool "to bring justice to victims."
Privacy advocates and digital platforms have long criticized the technology for its intrusiveness, with social media giants like Facebook sending cease-and-desist letters to Clearview in 2020, accusing the company of violating their users’ privacy.
"Clearview AI's actions invade people's privacy which is why we banned their founder from our services and sent them a legal demand to stop accessing any data, photos, or videos from our services," says a Meta spokesperson in an email Insider, following the revelation.
The spokesperson goes on to tell Insider that Meta has since made “significant investments in technology and devotes substantial team resources to combating unauthorized scraping on Facebook products.”
When unauthorized scraping is discovered, the company may take action “such as sending cease and desist letters, disabling accounts, filing lawsuits, or requesting assistance from hosting providers to protect user data,” the spokesperson said.
Despite the platforms' policies, once a photo has been scraped by Clearview AI, biometric faceprints are created and cross-referenced in its database, permanently linking individuals to their social media profiles and other identifying information. The people in the photos have little recourse for getting themselves removed.
Searching Clearview’s database is one of many ways police agencies can use social media content to aid investigations, alongside requesting user data directly from the platforms. The use of Clearview AI and other facial recognition technologies by law enforcement is not monitored in most states and is not subject to federal regulation, and some critics argue that it should be banned outright.
The warning comes as the law enforcement agency Europol publishes an international advisory concerning the potential criminal use of ChatGPT and other "large language models."
Cybercriminals frequently use phishing campaigns as bait to lure victims into clicking links that download malicious software or into handing over sensitive information such as passwords or PINs.
According to the Office for National Statistics, half of all adults in England and Wales reported receiving a phishing email last year, making phishing emails one of the most frequent kinds of cyber threat.
However, artificial intelligence (AI) chatbots can now fix a basic flaw in many phishing attempts: the poor spelling and grammar that trips spam filters or alerts human readers.
According to Corey Thomas, chief executive of the US cybersecurity firm Rapid7: “Every hacker can now use AI that deals with all misspellings and poor grammar[…]The idea that you can rely on looking for bad grammar or spelling in order to spot a phishing attack is no longer the case. We used to say that you could identify phishing attacks because the emails look a certain way. That no longer works.”
Data now suggests that ChatGPT, the market leader that rose to fame after its launch last year, is being used for cybercrime, with the development of "large language models" (LLMs) finding one of its first significant commercial applications in crafting malicious communications.
Phishing emails are increasingly being produced by bots, according to data from cybersecurity specialists at the UK company Darktrace. This lets criminals send longer messages that are less likely to be caught by spam filters and get past the telltale bad English of human-written scam emails.
Since ChatGPT surged in popularity last year, the overall volume of malicious email scams that try to trick users into clicking a link has decreased, replaced by emails that are more linguistically complex. According to Max Heinemeyer, the company's chief product officer, this indicates that a sizable proportion of the threat actors who write phishing and other harmful emails have gained the ability to produce longer, more sophisticated prose, most likely by using an LLM such as ChatGPT or something similar.
In its advisory report on the use of AI chatbots, Europol flagged similar potential problems, including fraud and social engineering, disinformation, and cybercrime. According to the report, such systems are useful for guiding potential offenders through the steps needed to harm others: because the model can deliver detailed instructions in response to pertinent questions, it becomes much simpler for criminals to understand and ultimately commit different forms of crime.
In a report published this month, the US-Israeli cybersecurity company Check Point claimed to have created a convincing-looking phishing email using the most recent version of ChatGPT. The researchers got past the chatbot's safety procedures by telling it they wanted a sample phishing email for a staff awareness program.
With last week's launch of its Bard product in the US and the UK, Google has also entered the chatbot race. Bard cooperated gladly, if without much finesse, when the Guardian asked it to write an email that would convince someone to click on a suspicious-looking link: "I am writing to you today to give a link to an article that I think you will find interesting."
Additionally, Google highlighted its “prohibited use” policy for AI, according to which users are not allowed to use its AI models to create content for the purpose of “deceptive or fraudulent activities, scams, phishing, or malware”.
Regarding the issue, OpenAI, the company behind ChatGPT, pointed to its terms of use, which state that users “may not use the services in a way that infringes, misappropriates or violates any person’s rights”.
According to Chang Kawaguchi, vice president and AI Security Architect at Microsoft, defenders are having a difficult time coping with a dynamic security environment. Microsoft Security Copilot is designed to make defenders' lives easier by using artificial intelligence to help them catch incidents that they might otherwise miss, improve the quality of threat detection, and speed up response. To locate breaches, connect threat signals, and conduct data analysis, Security Copilot makes use of both the GPT-4 generative AI model from OpenAI and the proprietary security-based model from Microsoft.
The objective of Security Copilot is to make “Defenders’ lives better, make them more efficient, and make them more effective by bringing AI to this problem,” Kawaguchi says.
Security Copilot ingests and interprets huge amounts of security data, such as the 65 trillion security signals Microsoft pulls in every day and the data collected by the Microsoft products the customer organization uses, including Microsoft Sentinel, Defender, Entra, Priva, Purview, and Intune. Analysts can use it to investigate incidents and to research information on prevalent vulnerabilities and exposures.
When analysts and incident responders type "/ask about" into a text prompt, Security Copilot responds with information based on what it knows about the organization's data.
According to Kawaguchi, this lets security teams connect the dots between the various elements of a security incident, such as a suspicious email, a malicious software file, or the system components that were compromised. The queries can be general, such as an explanation of a vulnerability, or specific to the organization’s environment, such as searching the logs for signs that a particular Exchange flaw has been exploited. And because Security Copilot uses GPT-4, it can respond to questions posed in natural language.
The analyst can review brief summaries of what transpired and then follow Security Copilot's prompts to dig deeper into the investigation. All of these actions can be recorded and shared with other security team members, stakeholders, and senior executives via a "pinboard." Completed tasks are saved and remain accessible, and an automatically generated summary is updated as new activities are finished.
“This is what makes this experience more of a notebook than a chat bot experience,” says Kawaguchi, adding that the tool can also create PowerPoint presentations based on the investigation conducted by the security team, which can then be used to share details of the incident.
The company says that Security Copilot is not designed to replace human analysts but rather to give them the information they need to work quickly and efficiently throughout an investigation. Threat hunters can use the tool to check whether an organization is susceptible to known vulnerabilities and exploits by examining each asset in the environment.
The corporation has repeatedly been fined millions of dollars in Europe and Australia for privacy violations. Critics, however, argue that police use of Clearview puts everyone in a “perpetual police line-up.”
"Whenever they have a photo of a suspect, they will compare it to your face[…]It's far too invasive," says Matthew Guariglia from the Electronic Frontier Foundation.
Police have not yet confirmed the figure of a million searches conducted through Clearview. But in a rare revelation to the BBC, Miami Police admitted to using the software for all types of crime.
Clearview’s system enables a law enforcement customer to upload an image of a face and search for matches in a database of billions of images it has collected. It then provides links to where the corresponding images appear online. Clearview is regarded as one of the world's most potent and reliable facial recognition companies.
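At a high level, systems of this kind generally work by converting each face image into a numerical embedding and returning the stored images whose embeddings lie closest to the query's. The sketch below is a generic, hypothetical illustration of that nearest-neighbor step using cosine similarity over random stand-in vectors; it does not describe Clearview's actual pipeline.

```python
import numpy as np

# Generic illustration of embedding-based face search:
# each stored image is represented by a fixed-length vector, and a query
# face is matched by cosine similarity. Vectors here are random stand-ins.
rng = np.random.default_rng(0)
database = rng.normal(size=(1_000, 128))            # 1,000 stored "face" embeddings
database /= np.linalg.norm(database, axis=1, keepdims=True)

query = database[42] + 0.05 * rng.normal(size=128)  # a noisy view of entry 42
query /= np.linalg.norm(query)

scores = database @ query                           # cosine similarities
top = np.argsort(scores)[::-1][:5]                  # five closest matches
print(top, scores[top])                             # entry 42 should rank first
```

Real systems differ mainly in scale and in the face-embedding model used; the ranking step shown here is the part that turns one uploaded photo into a list of candidate matches.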
The firm is now banned from selling its services to most US companies after the American Civil Liberties Union (ACLU) accused Clearview AI of violating privacy laws. However, there is an exemption for police, with Mr. Ton-That saying that his software is used by hundreds of police forces across the US.
Yet US police do not routinely reveal whether they use the software, and several US cities, including Portland, San Francisco, and Seattle, have banned its use.
Police frequently portray the use of facial recognition technology to the public as being limited to serious or violent offenses.
Moreover, in an interview about the effectiveness of Clearview, Miami Police confirmed that it uses the software for all types of crime, from murder to shoplifting. Assistant Chief of Police Armando Aguilar said his team uses the software around 450 times a year and that it has helped solve murder cases.
Yet, critics claim that there are hardly any rules governing the use of facial recognition by police.