A group of leading AI experts and executives warned that the technology poses a “risk of extinction” in an alarming joint statement published on Tuesday.
OpenAI CEO Sam Altman, whose company created ChatGPT, and “Godfather of AI” Geoffrey Hinton were among more than 350 prominent figures who see AI as an existential threat, according to a one-sentence open letter organized by the nonprofit Center for AI Safety.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the experts wrote in the 22-word statement.
The brief statement is the latest in a series of warnings from leading experts about the potential of AI to wreak havoc on society – with possible dangers including the spread of misinformation, severe economic shocks from job losses, and even outright attacks on humanity.
Scrutiny has intensified since the runaway popularity of OpenAI’s ChatGPT.
![Sam Altman](https://nypost.com/wp-content/uploads/sites/2/2023/05/NYPICHPDPICT000011939427.jpg?w=1024)
The potential threats were evident as recently as last week, when a possibly AI-generated image of a fake Pentagon explosion sparked a sell-off that briefly wiped billions of dollars from the US stock market before being debunked.
The Center for AI Safety said the brief statement was intended to “start a discussion” on the subject, given the “broad spectrum of important and urgent risks posed by AI.”
In addition to Altman and Hinton, notable signatories included Google DeepMind head Demis Hassabis and another prominent AI lab leader, Anthropic CEO Dario Amodei.
Altman, Hassabis and Amodei were among a select group of experts who met with President Biden earlier this month to discuss potential risks and regulation of artificial intelligence.
Hinton and fellow signatory Yoshua Bengio won the 2018 Turing Award, the computing world’s highest honor, for their work on advances in neural networks that have been described as “major breakthroughs in artificial intelligence.”
![Sam Altman](https://nypost.com/wp-content/uploads/sites/2/2023/05/NYPICHPDPICT000011798311.jpg?w=1024)
“As we grapple with the immediate risks of AI, such as malicious use, disinformation and disenfranchisement, the AI industry and governments around the world must also seriously confront the risk that future AI could pose a threat to human existence,” said Dan Hendrycks, director of the Center for AI Safety.
“Mitigating the risk of extinction from AI will require global action,” Hendrycks added. “The world has successfully worked together to mitigate the risks of nuclear war. The same level of effort is needed to address the threats posed by future AI systems.”
Despite his leadership role at OpenAI, Altman has been vocal about his concerns over the unchecked development of advanced AI systems.
![Elon Musk](https://nypost.com/wp-content/uploads/sites/2/2023/05/NYPICHPDPICT000011940700.jpg?w=1024)
In testimony on Capitol Hill earlier this month, Altman argued in favor of government regulation of the technology, including guardrails on its development.
At the time, Altman admitted that his greatest fear was that AI could “cause significant harm to the world” without oversight.
Elsewhere, Hinton recently quit his part-time role as an AI researcher at Google so he could speak more freely about his concerns.
![Dr. Geoffrey Hinton](https://nypost.com/wp-content/uploads/sites/2/2023/05/NYPICHPDPICT000011284475.jpg?w=1024)
Hinton said he now partly regrets his life’s work, which could allow “bad actors” to do “bad things” that will be difficult to prevent.
The 22-word statement was notably shorter than a previous open letter that drew scrutiny in March.
Billionaire Elon Musk was among hundreds of experts who called for a six-month pause in advanced AI development so leaders could consider how to proceed safely.
Their lengthy open letter – signed by some of the same experts who backed the Center for AI Safety’s statement – suggested that the risks of AI include a possible “loss of control of our civilization.”
![artificial intelligence](https://nypost.com/wp-content/uploads/sites/2/2023/05/NYPICHPDPICT000011789813-1.jpg?w=1024)
Musk was even more blunt during an appearance at the Wall Street Journal’s conference in London last week, stating that he sees a “non-zero probability” that AI will “be the Terminator” – a reference to the worst-case scenario of James Cameron’s 1984 sci-fi film.
Former Google CEO Eric Schmidt echoed Musk’s concerns, arguing that AI is not far from becoming an “existential threat” to humanity that could leave “many, many, many, many people harmed or killed.”