The Dark Side of AI: How Cyberthreats Could Get Worse, Report Warns

 

A UK government report warns that by 2025, artificial intelligence could escalate the risk of cyberattacks and undermine public confidence in online content. It also suggests that terrorists could use the technology to plot chemical or biological strikes.

However, other experts doubt that the technology will advance as quickly as predicted. Prime Minister Rishi Sunak is expected this week to highlight both the opportunities and the threats the technology presents.

The government report analyses generative AI, the technology that now powers widely used chatbots and image-generating software.

Another concern raised in the report is the possibility that AI could enable faster, more potent, and more extensive cyberattacks by 2025. According to Joseph Jarnecki of the Royal United Services Institute, hackers could use AI to convincingly mimic official language, something that has previously been difficult for them, since bureaucratic terminology has a distinctive tone that attackers have struggled to reproduce.

The report's release comes ahead of Sunak's speech detailing the UK government's plans to ensure AI safety and to position the country as a global leader in the field. Although Sunak acknowledges that AI has the potential to boost economic growth and problem-solving, he also stresses the need to address the risks and anxieties that come with it.

"AI will bring new knowledge, new opportunities for economic growth, new advances in human capability, and the chance to solve problems we once thought were beyond us. But it also brings new dangers and new fears," Mr Sunak is expected to say. 

Furthermore, "frontier AI," or highly advanced AIs, will be discussed at a government summit the following week. The question of whether these technologies are dangerous for humans is still up for debate. Another recently released report from the Government Office for Science states that many experts believe this risk to be unlikely and that there are few possible ways for it to come to go through. 

To threaten human existence, an AI would need influence over vital systems, such as financial or military infrastructure. It would also require the development of new capabilities such as autonomy, self-improvement, and the ability to evade human oversight. The report does concede, though, that opinions differ on when such future capabilities might emerge.