However, he notes that while the potential for harm from AI is disputed, “we must not put heads in the sand” over AI risks.
Sunak notes that the technology is already creating new job opportunities and that its advancement would catalyze economic growth and productivity, though he acknowledges it will have an impact on the labor market.
“The responsible thing for me to do is to address those fears head on, giving you the peace of mind that we will keep you safe, while making sure you and your children have all the opportunities for a better future that AI can bring[…] Doing the right thing, not the easy thing, means being honest with people about the risks from these technologies,” Sunak stated. On Wednesday, the government released documents highlighting the risks of AI.
Existential risks from the technology cannot be ruled out, according to one research paper on the future risks of frontier AI, the term given to the most advanced AI systems that will be discussed at the summit.
“Given the significant uncertainty in predicting AI developments, there is insufficient evidence to rule out that highly capable Frontier AI systems, if misaligned or inadequately controlled, could pose an existential threat,” the paper states.
The paper also presents several concerning scenarios about the advancement of AI.
One warns of a potential public backlash as people’s jobs are taken by AI. “AI systems are deemed technically safe by many users … but they are nevertheless causing impacts like increased unemployment and poverty,” the paper says, which in turn creates a “fierce public debate about the future of education and work”.
In another scenario described in the document, dubbed the ‘Wild West,’ the illicit use of AI to commit fraud and scams leads to social instability, with large numbers of organized-crime victims, widespread theft of corporate trade secrets, and a flood of AI-generated content clogging the internet.
“This could lead to ‘personalised’ disinformation, where bespoke messages are targeted at individuals rather than larger groups and are therefore more persuasive,” said the discussion document, warning of a potential decline in public trust in factual information and in civic processes such as elections.
“Frontier AI can be misused to deliberately spread false information to create disruption, persuade people on political issues, or cause other forms of harm or damage,” it says. Mr. Sunak added that the risks outlined in the documents also include AI being used by terrorist groups “to spread fear and disruption on an even greater scale.”
He said that reducing the danger of AI causing human extinction should be a “global priority”.
However, he stated: "This is not a risk that people need to be losing sleep over right now and I don't want to be alarmist." He said that, on the whole, he was "optimistic" about AI's capacity to improve people's lives.
The disruption AI is already causing in the workplace is a threat that many will be far more familiar with.
Mr. Sunak emphasized how effectively AI tools perform administrative tasks that employees typically carry out manually, such as drafting contracts and assisting in decision-making.
He added that technology has always changed how people earn a living, and that education is the best way to prepare people for the shifting labor market. Automation, for example, has already altered the nature of work in factories and warehouses, but it has not eliminated human involvement.
The prime minister encouraged people to see artificial intelligence as a “co-pilot” in the day-to-day operations of the workplace, saying it was an oversimplification to suggest the technology will “take people’s jobs”.
Unchecked generative AI also carries risks tied to the vast amounts of information it can ingest. Companies risk disclosing their valuable assets when they feed private, sensitive data into open AI models. To reduce this danger, some businesses host AI models locally on their own systems and train them on their confidential data. For best results, however, such a strategy requires a well-organized data architecture.
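One common safeguard for companies that do send text to external models is to strip sensitive data out of prompts first. The sketch below is a minimal, illustrative example of that idea; the regex patterns and placeholder labels are assumptions for illustration, and a production system would rely on a dedicated PII-detection service rather than hand-written patterns.

```python
import re

# Illustrative patterns only; real deployments would use a proper
# PII-detection service instead of hand-written regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace sensitive substrings with placeholder tags before
    the text leaves the company's own systems."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Draft a reply to jane.doe@example.com, SSN 123-45-6789."
safe_prompt = redact(prompt)
# safe_prompt no longer contains the raw email address or SSN
```

The redacted prompt can then be sent to an external model, while the mapping back to the original values stays inside the company's systems.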
What makes generative AI and Large Language Models (LLMs) appealing is their ability to synthesize information and produce fresh ideas, but these capabilities also carry inherent risks. If not handled carefully, generative AI can unintentionally lead to issues such as:
AI systems must handle personal data with the utmost care, especially sensitive or special-category personal data. The growing integration of marketing and consumer data into LLMs raises concerns about unintentional data leaks that could lead to privacy violations.
Using consumer data in AI systems is sometimes illegal, with serious legal repercussions. As companies adopt AI, they must carefully navigate this treacherous terrain to ensure they uphold their contractual commitments.
Current and proposed AI regulations focus on transparent, clear disclosure of AI technology. For instance, a business must disclose whether a customer’s engagement with a chatbot on a support website is being handled by a person or by an AI. Adherence to such requirements is essential to maintaining trust and upholding ethical standards.
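In practice, such a disclosure requirement can be as simple as labeling the bot's first message in a session. The sketch below is a hypothetical illustration of that pattern; the function name and disclosure wording are assumptions, not taken from any specific regulation.

```python
# Minimal sketch of an AI-disclosure wrapper for a support chatbot.
# The wording and function names are illustrative assumptions.

AI_DISCLOSURE = "You are chatting with an automated AI assistant."

def chatbot_reply(generated_text: str, first_turn: bool) -> str:
    """Prepend the AI disclosure on the first turn of a session,
    so the customer knows no human is handling the conversation."""
    if first_turn:
        return f"{AI_DISCLOSURE}\n{generated_text}"
    return generated_text

# First message carries the disclosure; later turns do not repeat it.
opening = chatbot_reply("How can I help you today?", first_turn=True)
follow_up = chatbot_reply("Your order has shipped.", first_turn=False)
```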
Recent legal actions against prominent AI companies underscore the importance of responsible data handling. AI giants have become the main targets of lawsuits over their use of copyrighted data to build and train their models: recent class action suits filed in the Northern District of California, including one on behalf of authors and another on behalf of affected users, allege copyright infringement, consumer protection violations, and breaches of data protection laws. These filings highlight the need for strict data governance and transparency, and may signal future requirements to disclose the sources of AI training data.
Moreover, it is not just AI developers like OpenAI that face serious risks; businesses that rely heavily on AI models do too, since improper AI model training can taint entire products. After the Federal Trade Commission (FTC) accused Everalbum of misleading consumers about its use of facial recognition technology and data retention, the company was forced to delete the improperly gathered data and the AI models trained on it, and Everalbum shut down in 2020.
Despite the legal challenges, CEOs are under pressure to adopt generative AI if they wish to increase their businesses’ productivity. By using the frameworks and legislation already in place, businesses can establish best practices and prepare for new requirements. Existing data protection regulations already cover AI systems through provisions requiring transparency, notice, and the protection of individual privacy rights. Some of these best practices involve: