
President Biden Signs AI Executive Order for Monitoring AI Policies


On October 30, US President Joe Biden signed a new comprehensive executive order detailing intentions for business oversight and governmental monitoring of artificial intelligence. The order aims to address several widespread concerns, including privacy, bias, and misinformation enabled by the high-end AI technology that is becoming increasingly ingrained in the contemporary world.

The White House's Executive Order Fact Sheet makes it clear that US regulatory authorities aim both to govern and to benefit from the vast spectrum of emerging and rebranded "artificial intelligence" technologies, even though the proposed solutions are still primarily conceptual.

The administration's executive order aims to create new guidelines for the security and safety of AI use. Invoking the Defense Production Act, the order directs businesses to provide US regulators with safety test results and other crucial data whenever they are developing AI that could present a "serious risk" to US military, economic, or public security. However, it is still unclear who will be monitoring these risks and to what extent.

In addition, the National Institute of Standards and Technology will shortly establish safety requirements that must be fulfilled before any such AI programs are distributed to the public.

Regarding the order, Ben Buchanan, the White House Senior Advisor for AI, said, “I think in many respects AI policy is like running a decathlon, where we don’t get to pick and choose which events we do[…]We have to do safety and security, we have to do civil rights and equity, we have to do worker protections, consumer protections, the international dimension, government use of AI, [while] making sure we have a competitive ecosystem here.”

“Probably some of [the order’s] most significant actions are [setting] standards for AI safety, security, and trust. And then require that companies notify us of large-scale AI development, and that they share the tests of those systems in accordance with those standards[…]Before it goes out to the public, it needs to be safe, secure, and trustworthy,” Mr. Buchanan added.

A Long Road Ahead

In an announcement on Monday, President Biden urged Congress to enact bipartisan data privacy legislation to “protect all Americans, especially kids,” from AI risks.

While several US states, including Massachusetts, California, Virginia, and Colorado, have passed or advanced their own data privacy laws, the US still lacks comprehensive legal safeguards akin to the EU’s General Data Protection Regulation (GDPR).

The GDPR, which came into force in 2018, strictly limits how businesses can collect and use their customers' personal data. Companies found violating the law can face hefty fines.

However, according to Sarah Kreps, professor of government and director of the Tech Policy Institute at Cornell University, the White House's most recent requests for data privacy laws "are unlikely to be answered[…]Both sides concur that action is necessary, but they cannot agree on how it should be carried out."  

Blocking Access to AI Apps is a Short-term Solution to Mitigate Safety Risk


Another major revelation regarding ChatGPT recently came to light through research conducted by Netskope. According to their analysis, business organizations are experiencing about 183 incidents of sensitive data being posted to ChatGPT for every 10,000 corporate users each month. Among the categories of sensitive data being exposed, source code accounted for the largest share.

The security researchers further scrutinized data from a million enterprise users worldwide and highlighted the growing use of generative AI apps, which rose 22.5% over the past two months, in turn increasing the chance of sensitive data being exposed.

ChatGPT Reigns Over the Generative AI Market

Organizations with 10,000 or more users are utilizing an average of five AI apps on a regular basis, and ChatGPT has more than eight times as many daily active users as any other generative AI app. At the present growth pace, the number of people accessing AI apps is anticipated to double within the next seven months.
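
As a rough sanity check on that projection, the 22.5% two-month growth rate reported above compounds to a doubling in roughly seven months. The short Python sketch below is purely illustrative arithmetic built on those two reported figures; nothing else in it comes from the Netskope report.

    import math

    # Netskope reports ~22.5% growth in generative AI app usage over two months.
    growth_per_period = 0.225
    period_months = 2

    # Solve (1 + r)^n = 2 for n, the number of two-month periods needed to double.
    periods_to_double = math.log(2) / math.log(1 + growth_per_period)
    months_to_double = periods_to_double * period_months

    print(f"Doubling time: {months_to_double:.1f} months")  # ~6.8 months, i.e. about seven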

The AI app with the fastest growth in installations over the last two months was Google Bard, which is presently attracting new users at a rate of 7.1% per week, versus 1.6% for ChatGPT. At the current rate, Google Bard is not projected to overtake ChatGPT for more than a year, although the generative AI app market is expected to grow considerably in that time, with many more apps in development.

Besides intellectual property (excluding source code) and personally identifiable information, other sensitive data communicated via ChatGPT includes regulated data, such as financial and healthcare data, as well as passwords and keys, which are typically embedded in source code.

According to Ray Canzanese, Threat Research Director at Netskope Threat Labs, “It is inevitable that some users will upload proprietary source code or text containing sensitive data to AI tools that promise to help with programming or writing[…]Therefore, it is imperative for organizations to place controls around AI to prevent sensitive data leaks. Controls that empower users to reap the benefits of AI, streamlining operations and improving efficiency, while mitigating the risks are the ultimate goal. The most effective controls that we see are a combination of DLP and interactive user coaching.”

Safety Measures for Adopting AI Apps

As opportunistic attackers look to profit from the popularity of artificial intelligence, Netskope Threat Labs is presently monitoring ChatGPT proxies and more than 1,000 malicious URLs and domains, including phishing sites, malware distribution campaigns, and spam and fraud websites.

While blocking access to AI content and apps may seem like a good idea, it is only a short-term solution.

James Robinson, Deputy CISO at Netskope, said “As security leaders, we cannot simply decide to ban applications without impacting on user experience and productivity[…]Organizations should focus on evolving their workforce awareness and data policies to meet the needs of employees using AI products productively. There is a good path to safe enablement of generative AI with the right tools and the right mindset.”

To enable the safe adoption of AI apps, organizations must focus their strategy on identifying acceptable applications and implementing controls that let users use them to their full potential while protecting the business from risk. For protection against attacks, such a strategy should incorporate domain filtering, URL filtering, and content inspection.
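
To make that idea concrete, here is a minimal, hypothetical sketch of what domain filtering combined with a content-inspection hand-off might look like at an outbound proxy. The domain lists and function names are invented for illustration; in practice this logic lives in a secure web gateway or CASB product rather than hand-rolled code.

    from urllib.parse import urlparse

    # Hypothetical policy lists -- not taken from any vendor product.
    ALLOWED_AI_DOMAINS = {"chat.openai.com", "bard.google.com"}              # approved apps
    BLOCKED_DOMAINS = {"chatgpt-free-login.example", "gpt-proxy.example"}    # known bad look-alikes

    def evaluate_request(url: str, body: str) -> str:
        """Return 'allow', 'block', or 'inspect' for an outbound request."""
        host = urlparse(url).hostname or ""
        if host in BLOCKED_DOMAINS:
            return "block"      # known malicious or proxy domain: drop immediately
        if host in ALLOWED_AI_DOMAINS:
            # Approved AI app: hand any payload to content inspection / DLP.
            return "inspect" if body else "allow"
        return "block"          # default-deny AI apps that are not on the allow list

    print(evaluate_request("https://chat.openai.com/backend-api", "please review my code"))  # inspect
    print(evaluate_request("https://gpt-proxy.example/chat", ""))                            # block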

Here are some more safety measures for securing data while using AI tools:

  • Disable access to apps that lack a legitimate business purpose or that put the organization at disproportionate risk.
  • Educate employees and remind them of the company policy on the usage of AI apps.
  • Utilize modern data loss prevention (DLP) tools to identify posts containing potentially sensitive data; a minimal detection sketch follows this list.
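
To illustrate the kind of pattern matching a DLP control might apply before a prompt leaves the network, here is a minimal, hypothetical sketch. The patterns and names are invented examples for the data categories mentioned above (keys, passwords, regulated identifiers); production DLP engines use far richer detection than a few regular expressions.

    import re

    # Hypothetical patterns for a few of the sensitive-data categories discussed above.
    SENSITIVE_PATTERNS = {
        "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
        "password_assignment": re.compile(r"(?i)\bpassword\s*[:=]\s*\S+"),
        "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def scan_prompt(text: str) -> list[str]:
        """Return the names of sensitive-data patterns found in an outgoing prompt."""
        return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

    prompt = "Fix this config: password = hunter2, key AKIAABCDEFGHIJKLMNOP"
    findings = scan_prompt(prompt)
    if findings:
        # Block the upload and coach the user instead of silently sending the data.
        print("Blocked: prompt contains", ", ".join(findings))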

Canadian Cybersecurity Head Warns of Surging AI-Powered Hacking and Disinformation

 

Sami Khoury, the Head of the Canadian Centre for Cyber Security, has issued a warning about the alarming use of Artificial Intelligence (AI) by hackers and propagandists. 

According to Khoury, AI is now being utilized to create malicious software, sophisticated phishing emails, and spread disinformation online. This concerning development highlights how rogue actors are exploiting emerging technology to advance their cybercriminal activities.

Various cyber watchdog groups share these concerns. Reports have pointed out the potential risks associated with the rapid advancements in AI, particularly concerning Large Language Models (LLMs), like OpenAI's ChatGPT. LLMs can fabricate realistic-sounding dialogue and documents, making it possible for cybercriminals to impersonate organizations or individuals and pose new cyber threats.

Cybersecurity experts are deeply worried about AI's dark underbelly and its potential to facilitate insidious phishing attempts, propagate misinformation and disinformation, and engineer malevolent code for sophisticated cyber attacks. The use of AI for malicious purposes is already becoming a reality, as suspected AI-generated content starts emerging in real-world contexts.

A former hacker's revelation of an LLM that had been trained on malicious material and used to craft a highly persuasive email soliciting an urgent cash transfer underscored the evolving capabilities of AI models in cybercrime. While the use of AI for crafting malicious code is still relatively new, the fast-paced evolution of AI technology makes it difficult to monitor its full potential for misuse.

As the cyber community grapples with uncertainties surrounding AI's sinister applications, urgent concerns arise about the trajectory of AI-powered cyber-attacks and the profound threats they may pose to cybersecurity. Addressing these challenges becomes increasingly pressing as AI-powered cybercrime evolves alongside AI technology.

The rapid evolution of AI models raises fears of unknown threats on the horizon, and AI's ability to create convincing phishing emails and sophisticated misinformation presents significant challenges for cyber defense.

The cybersecurity landscape has become a battleground in an ongoing AI arms race, as cybercriminals continue to leverage AI for malicious activities. Researchers and cybersecurity professionals must stay ahead of these developments, creating effective countermeasures to safeguard against the potential consequences of AI-driven hacking and disinformation campaigns.