Leading Tech Talent Issues Open Letter Warning About AI's Danger to Human Existence

More than 1,100 tech industry luminaries, including Elon Musk, Steve Wozniak, and Andrew Yang, have called for a pause on advanced AI training to allow time to establish safety guidelines.

Elon Musk, Steve Wozniak, and Tristan Harris of the Center for Humane Technology are among the more than 1,100 signatories to an open letter, published online Tuesday evening, that requests that "all AI labs immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."

"Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter reads.

A "level of planning and management" is allegedly "not happening," according to the letter, and in its place, unnamed "AI labs" have been "locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one - not even their creators - can understand, predict, or reliably control."

Some of the AI specialists who signed the letter state that the pause they are requesting should be "public and verifiable, and include all essential participants." If the proposed pause cannot be swiftly implemented, the letter advises, governments should intervene and impose a moratorium.

Indeed, the letter is intriguing both because of those who have signed it - including some engineers from Meta and Google, Emad Mostaque, founder and CEO of Stability AI, and non-technical individuals such as a self-described electrician and an esthetician - and because of those who haven't. For instance, no one from OpenAI, the company behind the GPT-4 large language model, has signed it. Neither has the team from Anthropic, which split off from OpenAI to build a "safer" AI chatbot.

Sam Altman, the CEO of OpenAI, told the WSJ earlier this week that the company has not yet begun training GPT-5. Altman also noted that OpenAI has historically prioritised safety during development and spent more than six months testing GPT-4 for safety issues prior to release. "In a way, this is preaching to the choir," he told the Journal. "I believe that we have been discussing these topics loudly, intensely, and for the longest."

In a January conversation with this editor, Altman argued that "starting these [product releases] now [makes sense], where the stakes are still relatively low, rather than just putting out what the entire industry will have in a few years with no time for society to update" was the better course of action.

In a more recent interview with computer scientist and well-known podcaster Lex Fridman, Altman discussed his friendship with Musk, who cofounded OpenAI but left the organisation in 2018, citing conflicts of interest. According to a report from the outlet Semafor, Musk departed after Altman, who was named CEO of OpenAI in early 2019, and the other company founders rejected his offer to lead it.

Given that he has spoken out about AI safety for many years and has recently targeted OpenAI in particular, claiming the organisation is all talk and no action, Musk is arguably the least surprising signatory to this open letter. Fridman questioned Altman about Musk's frequent recent tweets criticising the company.

"Elon is definitely criticising us on Twitter right now on a few different fronts, and I feel empathy because I think he is — appropriately so — incredibly anxious about His safety," Altman added. Although I'm sure there are other factors at play as well, that is undoubtedly one of them."
