
Top Cyber Official Says AI Needs Better Security

Hackers and propagandists are using artificial intelligence (AI) to develop malicious software, draft convincing phishing emails, and spread false information across the web, according to Canada's top cybersecurity official, who spoke to Reuters on Thursday. The report suggests that cybercriminals, too, have embraced the technological revolution sweeping Silicon Valley.

To protect against malicious cyberattacks, Lindy Cameron, the Chief Executive of the UK's National Cyber Security Centre, believes it is vital to build security into artificial intelligence (AI) systems from the start. Although AI development is still in its infancy, the importance of robust security measures cannot be overstated. The concern is that companies eager to release AI products as quickly as possible may overlook security considerations, posing a serious risk to users.

A former intelligence chief has warned that successful attacks on artificial intelligence systems in areas such as transportation, utilities, and national security could have devastating consequences.

AI is expected to play an increasingly significant role in many aspects of daily life, from homes and cities to combat operations. The benefits of these changes, however, come with risks. Robert Hannigan, the former head of GCHQ, and Lorenzo Cavallaro, an expert in adversarial machine learning at University College London, are among those who have pointed to vulnerabilities in artificial intelligence systems that malicious actors may exploit.

Research has shown that AI systems can be breached by injecting malicious code or poisoned data into the system, or by misleading them with fake inputs. Either attack compromises the AI system's results, and these vulnerabilities make AI-generated outcomes difficult to identify and trust.
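To make the data-poisoning vector concrete, here is a minimal sketch in Python. Everything in it is a hypothetical stand-in: the toy corpus, the labels, and the logistic-regression spam filter are illustrative only and do not come from any study mentioned above. It shows how an attacker who can slip mislabeled examples into a model's training data can quietly degrade its results.

```python
# Minimal sketch of a data-poisoning attack on a text classifier.
# Assumes scikit-learn is installed; the corpus, labels, and model
# are illustrative stand-ins, not any system described in the article.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy corpus: 1 = phishing, 0 = benign (hypothetical labels).
texts = [
    "verify your account now or it will be suspended",
    "click here to claim your prize",
    "meeting moved to 3pm, see agenda attached",
    "quarterly report draft for your review",
] * 50
labels = np.array([1, 1, 0, 0] * 50)

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, random_state=0, stratify=labels)

vec = TfidfVectorizer()
Xtr = vec.fit_transform(X_train)
Xte = vec.transform(X_test)

clean = LogisticRegression().fit(Xtr, y_train)
print("clean accuracy:   ", clean.score(Xte, y_test))

# Poisoning step: an attacker who can inject training data flips the
# labels on half of the phishing samples, blurring the distinction
# the model is supposed to learn.
y_poisoned = y_train.copy()
phish_idx = np.where(y_poisoned == 1)[0]
flip = np.random.default_rng(0).choice(
    phish_idx, size=len(phish_idx) // 2, replace=False)
y_poisoned[flip] = 0

poisoned = LogisticRegression().fit(Xtr, y_poisoned)
print("poisoned accuracy:", poisoned.score(Xte, y_test))
```

Running this shows the poisoned model's accuracy dropping well below the clean baseline, even though nothing about the model's code changed: only its training data was tampered with.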

In a recent interview, Sami Khoury, the head of the Canadian Centre for Cyber Security, said his agency had seen artificial intelligence (AI) being used to write phishing emails, to craft emails in a more targeted manner, to develop malicious code, and to spread misinformation and disinformation.

Khoury did not offer detailed or specific evidence. Nevertheless, the suggestion that cybercriminals are already using artificial intelligence lends new urgency to the chorus of alarm over the adoption of this emerging technology by international criminal organizations.

Several cybersecurity watchdog groups have recently released reports warning that artificial intelligence (AI) poses a serious threat to society, singling out the rapidly improving language-processing programs known as large language models (LLMs), which are trained on vast amounts of text and can produce convincing dialogue, documents, and other forms of communication.
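As a toy illustration of the underlying idea, a language model learns statistical patterns from text and then samples new text from those patterns. The Python sketch below is a hypothetical, drastically simplified bigram model, orders of magnitude simpler than any real LLM, but the train-on-text, generate-text loop is the same in spirit.

```python
# Toy bigram "language model": learns which word tends to follow
# which, then samples new text. Real LLMs are vastly larger and use
# neural networks, but the principle of generating fluent text from
# learned patterns is the same.
import random
from collections import defaultdict

corpus = (
    "dear customer please verify your account today "
    "dear customer your package is waiting please click the link"
).split()

# Count word -> next-word transitions observed in the training text.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

# Generate by repeatedly sampling a plausible next word.
random.seed(0)
word = "dear"
output = [word]
for _ in range(10):
    followers = transitions.get(word)
    if not followers:
        break
    word = random.choice(followers)
    output.append(word)

print(" ".join(output))
```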

Beyond everyday disruptions, the misuse of artificial intelligence carries broader national security implications. Malicious actors could, for instance, deceive AI systems used to analyze military satellite imagery, causing them to misidentify real assets as decoys, or vice versa.
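That kind of deception at inference time is the subject of adversarial-example research. Below is a minimal, hypothetical sketch in Python using PyTorch: the untrained stand-in classifier and the random "image" are assumptions for illustration, not any real military system. It applies the Fast Gradient Sign Method (FGSM), a well-known technique from the adversarial machine learning literature, to show how a tiny crafted perturbation erodes a model's confidence in the true class.

```python
# Minimal FGSM sketch: a small, crafted perturbation degrades a
# classifier's confidence at inference time. The model and "image"
# are hypothetical stand-ins, for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Stand-in classifier: "real asset" (1) vs "decoy" (0) from a
# 32x32 single-channel image. Untrained; illustrative only.
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 2))
model.eval()

image = torch.rand(1, 1, 32, 32)   # hypothetical satellite tile
label = torch.tensor([1])          # ground truth: real asset

# FGSM (Goodfellow et al., 2015): nudge each pixel in the direction
# that most increases the model's loss on the true label.
image.requires_grad_(True)
loss = F.cross_entropy(model(image), label)
loss.backward()

epsilon = 0.1                      # per-pixel perturbation budget
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

with torch.no_grad():
    before = F.softmax(model(image), dim=1)[0, 1].item()
    after = F.softmax(model(adversarial), dim=1)[0, 1].item()

# Confidence in the true class typically drops sharply; against a
# trained model with a large enough budget, the prediction can flip.
print(f"confidence before attack: {before:.3f}")
print(f"confidence after attack:  {after:.3f}")
```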

Although such concerns were once theoretical, real-world attacks on AI systems are now being reported. The Center for Security and Emerging Technology at Georgetown University notes that these attacks could affect a wide range of sectors, including banking, telecommunications, and government, particularly organizations that rely on artificial intelligence systems to detect cyberattacks.

For AI systems to succeed, these security challenges must be addressed. Companies and developers should learn from the early days of internet security and ensure that security considerations are prioritized throughout the development of AI products. According to Cameron, it is the producers' responsibility to secure their artificial intelligence systems, giving consumers confidence that the technology is safe without having to worry about the risks associated with it.

To mitigate these risks, those who develop AI systems should focus on building robust, secure systems from the outset. Regulatory measures and standards also have an important role to play in securing AI technologies. Finally, government agencies, researchers, and industry experts should collaborate to resolve these challenges and improve the overall security of AI systems.

Given the possible consequences for national security and the threat of malicious attacks on AI systems, integrating robust security measures is integral to safeguarding against them. Developing secure artificial intelligence technologies requires collaboration among developers, regulators, and experienced cybersecurity experts. Only through such collective action can AI be harnessed safely and effectively for the benefit of society as a whole.

Khoury said that while the use of artificial intelligence to draft malicious code is still in its infancy - "there's still a long way to go because it takes a lot to write a good exploit" - the concern is that AI models are evolving so rapidly that it is difficult to assess their malicious potential before they are released into the wild.