The Impact of Artificial Intelligence on the Evolution of Cybercrime

The role of artificial intelligence (AI) in cybercrime has become increasingly prominent, with cybercriminals leveraging AI tools to mount successful attacks, and defenders in the cybersecurity field actively working to counter them. As cybersecurity experts predicted a year ago, AI played a pivotal role in shaping the cybercrime landscape in 2023, driving both an escalation of attacks and advances in defense mechanisms. Looking ahead to 2024, industry experts anticipate an even greater impact of AI on cybersecurity.

The Google Cloud Cybersecurity Forecast 2024 highlights the role of generative AI and large language models in fueling various cyberattacks. According to a KPMG poll, over 90% of Canadian CEOs believe that generative AI increases their vulnerability to breaches, while a UK government report identifies AI as a threat to the country's upcoming election.

Although AI-related threats are still in their early stages, the frequency and sophistication of AI-driven attacks are on the rise. Organizations are urged to prepare for the evolving landscape.

Cybercriminals employ four primary methods using readily available AI tools such as ChatGPT, DALL-E, and Midjourney: automated phishing attacks, impersonation attacks, social engineering attacks, and fake customer support chatbots.

AI has significantly enhanced spear-phishing attacks, eliminating previous indicators like poor grammar and spelling errors. With tools like ChatGPT, cybercriminals can craft emails with flawless language, mimicking legitimate sources to deceive users into providing sensitive information.

Impersonation attacks have also surged, with scammers using AI tools to impersonate real individuals and organizations, conducting identity theft and fraud. AI-powered chatbots are employed to send voice messages posing as trusted contacts to extract information or gain access to accounts.

Social engineering attacks are facilitated by AI-driven voice cloning and deepfake technology, creating misleading content to incite chaos. An example involves a deepfake video posted on social media during Chicago's mayoral election, falsely depicting a candidate making controversial statements.

While fake customer service chatbots are not yet widespread, they pose a potential threat in the near future. These chatbots could manipulate unsuspecting victims into divulging sensitive personal and account information.

In response, the cybersecurity industry is employing AI as a defensive tool to counter AI-driven scams. Three key strategies are developing adversarial AI, using anomaly detection to flag abnormal behavior, and enhancing detection and response with AI systems. By building "good AI" and training it to combat malicious AI, the industry aims to stay ahead of evolving cyber threats. Anomaly detection surfaces deviations from normal behavior, while AI-assisted detection and response speeds the identification and mitigation of genuine threats.
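The anomaly-detection strategy mentioned above can be sketched in a few lines. This is a minimal illustration only, not a method from the reports cited here: the login counts, the z-score approach, and the 2.5 threshold are all assumptions chosen for the example. Production systems typically use far richer models.

```python
# Minimal anomaly-detection sketch: flag events that deviate sharply
# from the observed baseline using a z-score. All data and the
# threshold below are illustrative assumptions.
from statistics import mean, stdev

def find_anomalies(values, threshold=2.5):
    """Return indices of values whose z-score exceeds the threshold."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # no variation, nothing stands out
    return [i for i, v in enumerate(values)
            if abs(v - mu) / sigma > threshold]

# Hypothetical daily login counts for one account; the spike on the
# final day could indicate automated credential abuse.
logins = [12, 10, 11, 13, 9, 12, 11, 10, 12, 250]
print(find_anomalies(logins))  # → [9]
```

Real-world defenses apply the same idea at scale, learning a baseline of normal user or network behavior and alerting when activity diverges from it.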

Overall, as AI tools continue to advance, both cybercriminals and cybersecurity experts are leveraging AI capabilities to shape the future of cybercrime. It is imperative for the industry to stay vigilant and adapt to emerging threats in order to effectively mitigate the risks associated with AI-driven attacks.

Generative AI Has an Increasing Effect on the Workforce and Productivity

In a recent KPMG report, 97% of respondents said they expect generative AI to have a significant or extremely significant impact on their organizations within the next 12 to 18 months. The survey also found that generative AI has become the top emerging enterprise technology.

A notable 80% of respondents believe the technology will cause major upheaval in their industry, and 93% are convinced it can deliver substantial value to their business operations.

Generative AI is a branch of machine learning whose systems can produce diverse forms of content, from text and images to code, usually in response to user input. These models are increasingly integrated into online tools and chatbots: users type questions or instructions into an input field, and the model crafts a human-like response.

Among the respondents, 62% said their organizations are already using generative AI.

Another 23% said they were in the early stages of exploring its potential, and 14% said they were considering adoption. That leaves just 1% who either dismissed generative AI after evaluating it or have no plans to use it at all.

Notably, respondents outside IT leadership were more likely (73%) to report active use of generative AI than IT leaders were (59%), suggesting experimentation well beyond the boundaries of the IT department. Enterprises with 5,000 or more employees were also more likely (69%) to have adopted the technology than smaller organizations (57%).

A majority of U.S. executives (66%) said that introducing generative AI into their operations will require a dual approach: recruiting fresh talent and upskilling current employees. Notably, 71% expect the IT/tech department to actively hire and train staff to ensure seamless integration of generative AI.

During implementation, executives expect certain skills to be paramount, chief among them AI, machine learning (ML), natural language processing (NLP), text-to-speech, and speech-to-text.

The survey also asked whether moving too quickly or too cautiously with generative AI poses the greater risk. In the financial services sector, opinions were evenly split. The retail industry, by contrast, leans toward risk-taking: 60% of its respondents said that being overly cautious is the bigger danger. Those in the technology sector lean toward prudence, with 58% believing that rapid progression carries the greater hazard. Enterprises with 5,000 or more employees are the most cautious of all, with 75% naming moving too swiftly as the primary concern, while smaller businesses lean the opposite way: 62.8% see moving too slowly as the more prominent threat.

Interestingly, a noticeable gap emerged between non-IT leaders and their IT counterparts in the progress made on shaping generative AI policies and guidelines.

A substantial 65% of non-IT leaders were actively engaged in this effort, compared with only 42% of IT leaders. A similar pattern held for identifying practical use cases: 59% of non-IT leaders versus 38% of their IT counterparts.

Across industries, retail led in having already identified use cases (49%), ahead of the technology and manufacturing sectors (both at 42%), with financial services trailing at 32%.