
Generative AI Has an Increasing Effect on the Workforce and Productivity

In a recent report published by KPMG, 97% of respondents said they expect generative AI to have a significant or extremely significant impact on their organizations within the next 12 to 18 months. The survey also found that generative AI has become the top emerging enterprise technology.

Among respondents, 80% believe the technology will disrupt their industry, and 93% believe it can deliver substantial value to their business operations. Generative AI is a branch of artificial intelligence built on machine learning systems.

These systems can produce many kinds of content, including text, images, and code, usually in response to a user prompt. Models of this kind are increasingly built into online tools and chatbots, where users type questions or instructions into an input field.

The model then generates a human-like response. Among the participants, 62% said their organizations are already using generative AI.

Another 23% said they were in the early stages of exploring its potential, and 14% were considering adopting it. That leaves only 1% who had either dismissed generative AI after evaluating it or have no plans to use it at all.

Notably, respondents outside IT leadership were more likely (73%) to report active use of generative AI than IT leaders were (59%), suggesting that experimentation extends well beyond the IT department. Companies with 5,000 or more employees were also more likely (69%) to have adopted the technology than smaller ones (57%).

A majority of U.S. executives (66%) said that bringing generative AI into their operations will require both recruiting new talent and upskilling current employees, and 71% expect the IT/Tech department in particular to hire and train staff to integrate the technology smoothly.

During implementation, executives expect certain skills to matter most: proficiency in AI, machine learning (ML), natural language processing (NLP), text-to-speech, and speech-to-text. The survey also asked whether moving too quickly or too cautiously on generative AI poses the greater risk. In financial services, opinion was evenly split. The retail industry, by contrast, leans toward risk-taking, with 60% of respondents saying that excessive caution is the greater danger.

Respondents in the technology sector lean toward prudence, with 58% believing that rapid progress carries the greater hazard. Companies with 5,000 or more employees are the most cautious of all, with 75% saying that moving too quickly is the primary concern.

Smaller businesses lean the other way, with roughly 62.8% seeing slowness as the bigger threat. Interestingly, a noticeable gap also emerged between non-IT leaders and IT leaders in how far along they are in shaping generative AI policies and guidelines.

Some 65% of non-IT leaders were actively working on such policies, compared with only 42% of IT leaders. A similar pattern held for identifying practical use cases: 59% of non-IT leaders had done so, versus 38% of their IT counterparts.

By industry, retail led in having already identified use cases at 49%, ahead of technology and manufacturing at 42% each, while financial services trailed at 32%.

ChatGPT Hallucinations Open Developers to Supply Chain Malware Attacks

Researchers have discovered a concerning weakness in ChatGPT that attackers could exploit to spread malicious code packages. The weakness stems from ChatGPT's tendency to produce inaccurate information, which can be leveraged to introduce malware and Trojans into trusted applications and code repositories such as npm, PyPI, and GitHub.

This represents a substantial threat to the software supply chain. In a recent blog post, researchers from Vulcan Cyber's Voyager18 research team described a technique they call "AI package hallucinations." It abuses ChatGPT's package recommendations: when the model suggests a package that does not actually exist, an attacker can publish a malicious package under that name, making it look like a legitimate recommendation.

Developers who follow the chatbot's advice may unknowingly download these packages and integrate them into software that is then widely distributed. The discovery highlights the risks that misuse of ChatGPT poses to software security.

What Is an AI Hallucination?

In artificial intelligence, a "hallucination" is a response that appears reasonable but is inaccurate, biased, or outright false. The phenomenon stems from how ChatGPT and similar large language models (LLMs), the foundation of generative AI platforms, are built.

These models answer questions using information drawn from the vast expanse of the Internet, including sources, links, blogs, and statistics. That training data is not always reliable or high quality, and its imperfections carry through to the model's responses, producing hallucinations that do not align with the facts.

In the blog post, Bar Lanyado, lead researcher at Voyager18, noted that because LLMs such as ChatGPT are trained on vast amounts of text, they can generate responses that appear plausible but are actually fictional.

He added that LLMs tend to extrapolate beyond their training data, producing responses that seem credible but lack accuracy.

How the Researchers Demonstrated an AI Package Hallucination

To validate the concept, the researchers ran an experiment using ChatGPT 3.5. They constructed a scenario in which an attacker asks the platform to solve a coding problem. In response, ChatGPT recommended a set of packages, some of which did not exist in any reputable package repository.

The demonstration illustrated how the platform can generate misleading and potentially malicious package recommendations. According to the researchers, these fabricated packages give attackers a new avenue for distributing malicious software, one that bypasses conventional techniques such as typosquatting or masquerading.

Because these fabricated packages arrive as genuine-looking recommendations from ChatGPT, attackers can exploit the trust developers place in the platform's suggestions. The result is a significant risk of malicious code infiltrating legitimate applications and code repositories, posing a major threat to the software supply chain.
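To make the risk concrete, here is a minimal sketch, not taken from the Vulcan Cyber post, of how a developer might check whether chatbot-suggested packages are even registered on PyPI before installing them. It relies only on PyPI's public JSON API, which returns 404 for names that have never been published; the suggested package names other than requests are hypothetical.

```python
import urllib.error
import urllib.request

def exists_on_pypi(package: str) -> bool:
    """Return True if `package` is registered on PyPI.

    PyPI's JSON API answers 404 for names that were never published,
    which is exactly the gap a hallucinated suggestion falls into.
    """
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # other HTTP errors (rate limits, outages) need a closer look

# Hypothetical chatbot suggestions; only `requests` is a real package.
for name in ["requests", "fastjson-parser-pro", "auto-api-clientlib"]:
    verdict = "exists" if exists_on_pypi(name) else "NOT on PyPI (possible hallucination)"
    print(f"{name}: {verdict}")
```

Note that existence alone proves little: a hallucinated name that does resolve may already be an attacker's squat on a previously suggested package, which is the very attack described above. Treat this check as a first filter and combine it with the vetting steps below.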

How To Detect Bad Code Libraries?

According to the researchers, detecting malicious packages can be difficult, especially when threat actors use obfuscation or build functional Trojan packages that do what they advertise. Developers can protect themselves by thoroughly vetting the libraries they download, making sure each one not only performs its intended function but is not a cleverly disguised Trojan posing as a legitimate package, as Lanyado highlighted. A sketch of what such vetting might look like follows.
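As an illustration, here is a minimal sketch that pulls a package's metadata from PyPI's public JSON API and flags common warning signs such as a very recent first upload or a near-empty release history. The specific heuristics and thresholds are assumptions chosen for the example, not criteria given in the Vulcan Cyber post.

```python
import json
import urllib.request
from datetime import datetime, timezone

def vet_package(package: str, min_age_days: int = 90, min_releases: int = 3) -> list[str]:
    """Fetch PyPI metadata for `package` and return a list of warnings.

    The thresholds are illustrative assumptions: a very young package
    with few releases is not necessarily malicious, but it deserves
    extra scrutiny before it goes into a build.
    """
    url = f"https://pypi.org/pypi/{package}/json"
    with urllib.request.urlopen(url, timeout=10) as resp:
        meta = json.load(resp)

    warnings = []
    releases = meta.get("releases", {})
    if len(releases) < min_releases:
        warnings.append(f"only {len(releases)} release(s) published")

    # Earliest upload time across all release files approximates the package's age.
    upload_times = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in releases.values()
        for f in files
    ]
    if upload_times:
        age_days = (datetime.now(timezone.utc) - min(upload_times)).days
        if age_days < min_age_days:
            warnings.append(f"first upload was only {age_days} day(s) ago")
    else:
        warnings.append("no files have ever been uploaded")

    info = meta.get("info", {})
    if not info.get("home_page") and not info.get("project_urls"):
        warnings.append("no homepage or project URLs listed")

    return warnings

print(vet_package("requests"))  # a long-established package should print []
```

No single check here is conclusive, but together they raise the cost of the attack: a freshly registered squat on a hallucinated name will typically trip several of these warnings at once.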

Risks of the AI Language Model

Since its release in November 2022, ChatGPT has gained popularity not only among users but also among threat actors who exploit it for cyberattacks. Security incidents in the first half of 2023 have included scams targeting user credentials, theft of Chrome cookies through malicious ChatGPT browser extensions, and phishing campaigns that use ChatGPT as bait for malicious websites.

While some experts argue the security risk may be exaggerated, the researchers emphasized that the rapid adoption of generative AI platforms like ChatGPT, and their integration into daily professional work, has introduced real security concerns.