In the latest study, researchers have made the first "generative AI worms" that can spread from one device to another, deploying malware or stealing data in the process.
Nassi, in collaboration with fellow academics Stav Cohen and Ron Bitton, developed the worm, which they named Morris II in homage to the 1988 internet debacle caused by the first Morris computer worm. The researchers demonstrate how the AI worm may attack a generative AI email helper to steal email data and send spam messages, circumventing several security measures in ChatGPT and Gemini in the process, in a research paper and website.
The study, conducted in test environments rather than on a publicly accessible email assistant, coincides with the growing multimodal nature of large language models (LLMs), which can produce images and videos in addition to text.
Prompts are language instructions that direct the tools to answer a question or produce an image. This is how most generative AI systems operate. These prompts, nevertheless, can also be used as a weapon against the system.
Prompt injection attacks can provide a chatbot with secret instructions, while jailbreaks can cause a system to ignore its security measures and spew offensive or harmful content. For instance, a hacker might conceal text on a website instructing an LLM to pose as a con artist and request your bank account information.
The researchers used a so-called "adversarial self-replicating prompt" to develop the generative AI worm. According to the researchers, this prompt causes the generative AI model to output a different prompt in response.
The researchers connected ChatGPT, Gemini, and open-source LLM, LLaVA, to develop an email system that could send and receive messages using generative AI to demonstrate how the worm may function. They then discovered two ways to make use of the system: one was to use a self-replicating prompt that was text-based, and the other was to embed the question within an image file.
A video showcasing the findings shows the email system repeatedly forwarding a message. Also, according to the experts, data extraction from emails is possible. According to Nassi, "It can be names, phone numbers, credit card numbers, SSNs, or anything else that is deemed confidential."
Nassi and the other researchers report that they expect to see generative AI worms in the wild within the next two to three years in a publication that summarizes their findings. According to the research paper, "many companies in the industry are massively developing GenAI ecosystems that integrate GenAI capabilities into their cars, smartphones, and operating systems."
The boom in AI technology has raised concerns over its potential to replace millions of jobs across the world. This week, the International Monetary Fund (IMF) reported that around 40% of all jobs will be impacted by the growing AI.
While Gates does not disagree with the stats, he believes, and history has it, that with every new technology comes fear and then new opportunities.
“As we had [with] agricultural productivity in 1900, people were like ‘Hey, what are people going to do?’ In fact, a lot of new things, a lot of new job categories were created and we’re way better off than when everybody was doing farm work,” Gates said. “This will be like that.”
AI, according to Gates, will make everyone's life easier. He specifically mentioned helping doctors with their paperwork, saying that it is "part of the job they don't like, we can make that very efficient," in a Tuesday interview with CNN's Fareed Zakaria.
He adds that since there is not a need for “much new hardware,” accessing AI will be over “the phone or the PC you already have connected over the internet connection you already have.”
Gates believes that improvements with OpenAI’s ChatGPT-4 were “dramatic since the AI bot can essentially “read and write,” this way it is “almost like having a white-collar worker to be a tutor, to give health advice, to help write code, to help with technical support calls.”
He notes that incorporating new technology into sectors like education and medicine will be “fantastic.”
Microsoft and OpenAI have a multibillion-dollar collaboration. Gates remains one of Microsoft's biggest shareholders.
In his interview with Zakaria at Davos for the World Economic Forum, Bill Gates noted that the objective of Gates Foundation is “to make sure that the delay between benefitting people in poor countries versus getting to rich countries will make that very short[…]After all, the shortages of doctors and teachers is way more acute in Africa then it is in the West.”
However, the IMF had a more pessimistic view in this regard. The group believes that AI has the potential to ‘deepen inequality’ with any politician’s interference.
Recently, OpenAI and WHOOP collaborated to launch a GPT-4-powered, individualized health and fitness coach. A multitude of questions about health and fitness can be answered by WHOOP Coach.
It can answer queries such as "What was my lowest resting heart rate ever?" or "What kind of weekly exercise routine would help me achieve my goal?" — all the while providing tailored advice based on each person's particular body and objectives.
In addition to WHOOP, Summer Health, a text-based pediatric care service available around the clock, has collaborated with OpenAI and is utilizing GPT-4 to support its physicians. Summer Health has developed and released a new tool that automatically creates visit notes from a doctor's thorough written observations using GPT-4.
The pediatrician then swiftly goes over these notes before sending them to the parents. Summer Health and OpenAI worked together to thoroughly refine the model, establish a clinical review procedure to guarantee accuracy and applicability in medical settings, and further enhance the model based on input from experts.
GPT Vision has been used in radiography as well. A document titled "Exploring the Boundaries of GPT-4 in Radiology," released by Microsoft recently, evaluates the effectiveness of GPT-4 in text-based applications for radiology reports.
The ability of GPT-4 to process and interpret medical pictures, such as MRIs and X-rays, is one of its main uses in radiology. According to the report, "GPT-4's radiological report summaries are equivalent, and in certain situations, even preferable than radiologists."a
Be My Eyes is improving its virtual assistant program by leveraging GPT-4's multimodal features, particularly the visual input function. Be My Eyes helps people who are blind or visually challenged with activities like item identification, text reading, and environment navigation.
Many people have tested ChatGPT as a therapist when it comes to mental health. Many people have found ChatGPT to be beneficial in that it offers human-like interaction and helpful counsel, making it a unique alternative for those who are unable or reluctant to seek professional treatment.
Both Google and Apple have been employing LLMs to make major improvements in the healthcare business, even before OpenAI.
Google unveiled MedLM, a collection of foundation models designed with a range of healthcare use cases in mind. There are now two models under MedLM, both based on Med-PaLM 2, giving healthcare organizations flexibility and meeting their various demands.
In addition, Eli Lilly and Novartis, two of the biggest pharmaceutical companies in the world, have formed strategic alliances with Isomorphic Labs, a drug discovery spin-out of Google's AI R&D division based in London, to use AI to find novel treatments for illnesses.
Apple, on the other hand, intends to include more health-detecting features in their next line of watches, concentrating on ailments like apnea and hypertension, among others.
OpenAI has addressed significant security flaws in its state-of-the-art language model, ChatGPT, which has become widely used, in recent improvements. Although the business concedes that there is a defect that could pose major hazards, it reassures users that the issue has been addressed.
Security researchers originally raised the issue when they discovered a possible weakness that would have allowed malevolent actors to use the model to obtain private data. OpenAI immediately recognized the problem and took action to fix it. Due to a bug that caused data to leak during ChatGPT interactions, concerns were raised regarding user privacy and the security of the data the model processed.
OpenAI's commitment to transparency is evident in its prompt response to the situation. The company, in collaboration with security experts, has implemented mitigations to prevent data exfiltration. While these measures are a crucial step forward, it's essential to remain vigilant, as the fix may need to be fixed, leaving room for potential risks.
The company acknowledges the imperfections in the implemented fix, emphasizing the complexity of ensuring complete security in a dynamic digital landscape. OpenAI's dedication to continuous improvement is evident, as it actively seeks feedback from users and the security community to refine and enhance the security protocols surrounding ChatGPT.
In the face of this security challenge, OpenAI's response underscores the evolving nature of AI technology and the need for robust safeguards. The company's commitment to addressing issues head-on is crucial in maintaining user trust and ensuring the responsible deployment of AI models.
The events surrounding the ChatGPT security flaw serve as a reminder of the importance of ongoing collaboration between AI developers, security experts, and the wider user community. As AI technology advances, so must the security measures that protect users and their data.
Although OpenAI has addressed the possible security flaws in ChatGPT, there is still work to be done to guarantee that AI models are completely secure. To provide a safe and reliable AI ecosystem, users and developers must both exercise caution and join forces in strengthening the defenses of these potent language models.
GPTs are advanced AI chatbots that can be customized by OpenAI’s ChatGPT users. They utilize the Large Language Model (LLM) at the heart of ChatGPT, GPT-4 Turbo, but are augmented with more, special components that impact their user interface, such as customized datasets, prompts, and processing instructions, enabling them to perform a variety of specialized tasks.
However, the parameters and sensitive data that a user might use to customize the GPT could be left vulnerable to a third party.
For instance, Decrypt used a simple prompt hacking technique—asking for the "initial prompt" of a custom, publicly shared GPT— to access the entire prompt and confidential data of a custom.
In their study, the researchers tested over 200 custom GPTs wherein the high risk of such attacks was revealed. These jailbreaks might also result in the extraction of initial prompts and unauthorized access to uploaded files.
The researchers further highlighted the risks of these assaults since they jeopardize both user privacy and the integrity of intellectual property.
“The study revealed that for file leakage, the act of asking for GPT’s instructions could lead to file disclosure,” the researchers found.
Moreover, the researchers revealed that attackers can cause two types of disclosures: “system prompt extraction” and “file leakage.” While the first tricks the model into sharing basic configuration and prompts, the second coerces the model into revealing its confidential training datasets.
The researchers further note that the existing defences, like defensive prompts, prove insufficient in front of the sophisticated adversarial prompts. The team said that this will require a more ‘robust and comprehensive approach’ to protect the new AI models.
“Attackers with sufficient determination and creativity are very likely to find and exploit vulnerabilities, suggesting that current defensive strategies may be insufficient,” the report further read. "To address these issues, additional safeguards, beyond the scope of simple defensive prompts, are required to bolster the security of custom GPTs against such exploitation techniques." The study prompted the broader AI community to opt for more robust security measures.
Although there is much potential for customization of GPTs, this study is an important reminder of the security risks involved. AI developments must not jeopardize user privacy and security. For now, it is advisable for users to keep the most important or sensitive GPTs to themselves, or at least not train them with their sensitive data.
Microsoft recently made headlines by temporarily blocking internal access to ChatGPT, a language model developed by OpenAI, citing data concerns. The move sparked curiosity and raised questions about the security and potential risks associated with this advanced language model.
According to reports, Microsoft took this precautionary step on Thursday, sending ripples through the tech community. The decision came as a response to what Microsoft referred to as data concerns associated with ChatGPT.
While the exact nature of these concerns remains undisclosed, it highlights the growing importance of scrutinizing the security aspects of AI models, especially those that handle sensitive information. With ChatGPT being a widely used language model for various applications, including customer service and content generation, any potential vulnerabilities in its data handling could have significant implications.
As reported by ZDNet, Microsoft still needs to provide detailed information on the duration of the block or the specific data issues that prompted this action. However, the company stated that it is actively working with OpenAI to address these concerns and ensure a secure environment for its users.
As per the developer’s status page, ChatGPT and its API have been experiencing "periodic outages" since November 8 at approximately noon PST.
According to the most recent update published on November 8 at 19.49 PST, OpenAI said, “We are dealing with periodic outages due to an abnormal traffic pattern reflective of a DDoS attack. We are continuing work to mitigate this.”
While the application seemed to have been operating normally, a user of the API reported seeing a "429 - Too Many Requests" error, which is consistent with OpenAI's diagnosis of DDoS as the cause of the issue.
Hacktivist group Anonymous Sudan took to Telegram, claiming responsibility of the attacks.
The group claimed to have targeted OpenAI specifically because of its support for Israel, in addition to its stated goal of going against "any American company." The nation has recently been under heavy fire for bombing civilians in Palestine.
The partnership between OpenAI and the Israeli occupation state, as well as the CEO's declaration that he is willing to increase investment in Israel and his multiple meetings with Israeli authorities, including Netanyahu, were mentioned in the statement.
Additionally, it asserted that “AI is now being used in the development of weapons and by intelligence agencies like Mossad” and that “Israel is using ChatGPT to oppress the Palestinians.”
"ChatGPT has a general biasness towards Israel and against Palestine," continued Anonymous Sudan.
In what it described as retaliation for a Quran-burning incident near Turkey's embassy in Stockholm, the group claimed responsibility for DDoS assaults against Swedish companies at the beginning of the year.
Jake Moore, cybersecurity advisor to ESET Global, DDoS mitigation providers must continually enhance their services.
“Each year threat actors become better equipped and use more IP addresses such as home IoT devices to flood systems, making them more difficult to protect,” says Jake.
“Unfortunately, OpenAI remains one of the most talked about technology companies, making it a typical target for hackers. All that can be done to future-proof its network is to continue to expect the unexpected.”
The effectiveness of phishing emails created by artificial intelligence (AI) is quickly catching up to that of emails created by humans, according to disturbing new research. With artificial intelligence advancing so quickly, there is concern that there may be a rise in cyber dangers. One example of this is OpenAI's ChatGPT.
IBM's X-Force recently conducted a comprehensive study, pitting ChatGPT against human experts in the realm of phishing attacks. The results were eye-opening, demonstrating that ChatGPT was able to craft deceptive emails that were nearly indistinguishable from those composed by humans. This marks a significant milestone in the evolution of cyber threats, as AI now poses a formidable challenge to conventional cybersecurity measures.
One of the critical findings of the study was the sheer volume of phishing emails that ChatGPT was able to generate in a short span of time. This capability greatly amplifies the potential reach and impact of such attacks, as cybercriminals can now deploy a massive wave of convincing emails with unprecedented efficiency.
Furthermore, the study highlighted the adaptability of AI-powered phishing. ChatGPT demonstrated the ability to adjust its tactics in response to recipient interactions, enabling it to refine its approach and increase its chances of success. This level of sophistication raises concerns about the evolving nature of cyber threats and the need for adaptive cybersecurity strategies.
While AI-generated phishing is on the rise, it's important to note that human social engineers still maintain an edge in certain nuanced scenarios. Human intuition, emotional intelligence, and contextual understanding remain formidable obstacles for AI to completely overcome. However, as AI continues to advance, it's crucial for cybersecurity professionals to stay vigilant and proactive in their efforts to detect and mitigate evolving threats.
Cybersecurity measures need to be reevaluated in light of the growing competition between AI-generated phishing emails and human-crafted attacks. Defenders must adjust to this new reality as the landscape changes. Staying ahead of cyber threats in this quickly evolving digital age will require combining the strengths of human experience with cutting-edge technologies.
A user may as well gain access to one such ‘evil’ version of OpenAI’s ChatGPT. While these AI versions may not necessarily by legal in some parts of the world, it could be pricey.
Gaining access to the evil chatbot versions could be tricky. To do so, a user must find the right web forum with the right users. To be sure, these users might have posted the marketed a private and powerful large language model (LLM). One can get in touch with these users in encrypted messaging services like Telegram, where they might ask for a few hundred crypto dollars for an LLM.
After gaining the access, users can now do anything, especially the ones that are prohibited in ChatGPT and Google’s Bard, like having conversation with the AI on how to make pipe bombs or cook meth, engaging in discussions about any illegal or morally questionable subject under the sun, or even using it to finance phishing schemes and other cybercrimes.
“We’ve got folks who are building LLMs that are designed to write more convincing phishing email scams or allowing them to code new types of malware because they’re trained off of the code from previously available malware[…]Both of these things make the attacks more potent, because they’re trained off of the knowledge of the attacks that came before them,” says Dominic Sellitto, a cybersecurity and digital privacy researcher at the University of Buffalo.
These models are becoming more prevalent, strong, and challenging to regulate. They also herald the opening of a new front in the war on cybercrime, one that cuts far beyond text generators like ChatGPT and into the domains of audio, video, and graphics.
“We’re blurring the boundaries in many ways between what is artificially generated and what isn’t[…]“The same goes for the written text, and the same goes for images and everything in between,” explained Sellitto.
Phishing emails, which demand that a user provide their financial information immediately to the Social Security Administration or their bank in order to resolve a fictitious crisis, cost American consumers close to $8.8 billion annually. The emails may contain seemingly innocuous links that actually download malware or viruses, allowing hackers to take advantage of any sensitive data directly from the victim's computer.
Fortunately, these phishing mails are quite easy to detect. In case they have not yet found their way to a user’s spam folder, one can easily identify them on the basis of their language, which may be informal and grammatically incorrect wordings that any legit financial firm would never use.
However, with ChatGPT, it is becoming difficult to spot any error in the phishing mails, bringing about a veritable AI generative boom.
“The technology hasn’t always been available on digital black markets[…]It primarily started when ChatGPT became mainstream. There were some basic text generation tools that might have used machine learning but nothing impressive,” Daniel Kelley, a former black hat computer hacker and cybersecurity consultant explains.
According to Kelley, these LLMs come in a variety of forms, including BlackHatGPT, WolfGPT, and EvilGPT. He claimed that many of these models, despite their nefarious names, are actually just instances of AI jailbreaks, a word used to describe the deft manipulation of already-existing LLMs such as ChatGPT to achieve desired results. Subsequently, these models are encapsulated within a customized user interface, creating the impression that ChatGPT is an entirely distinct chatbot.
However, this does not make AI models any less harmful. In fact, Kelley believes that one particular model is both one of the most evil and genuine ones: According to one description of WormGPT on a forum promoting the model, it is an LLM made especially for cybercrime that "lets you do all sorts of illegal stuff and easily sell it online in the future."
Both Kelley and Sellitto agrees that WormGPT could be used in business email compromise (BEC) attacks, a kind of phishing technique in which employees' information is stolen by pretending to be a higher-up or another authority figure. The language that the algorithm generates is incredibly clear, with precise grammar and sentence structure making it considerably more difficult to spot at first glance.
One must also take this into account that with easier access to the internet, really anyone can download these notorious AI models, making it easier to be disseminated. It is similar to a service that offers same-day mailing for buying firearms and ski masks, only that these firearms and ski masks are targeted at and built for criminals.
ChatGPT is a large language model (LLM) from OpenAI that can generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way. It is still under development, but it has already been used for a variety of purposes, including creative writing, code generation, and research.
However, ChatGPT also poses some security and privacy risks. These risks are highlighted in the following articles:
Overall, ChatGPT is a powerful tool with a number of potential benefits. However, it is important to be aware of the security and privacy risks associated with using it. Users should carefully consider the instructions they give to ChatGPT and only use trusted plugins. They should also be careful about what websites and web applications they authorize ChatGPT to access.
Here are some additional tips for using ChatGPT safely: