Long before Elon Musk and Apple co-founder Steve Wozniak signed a letter warning that artificial intelligence poses “profound risks” to humanity, British theoretical physicist Stephen Hawking was sounding the alarm about the rapidly advancing technology.
“The development of full artificial intelligence could spell the end of the human race,” Hawking told the BBC in a 2014 interview.
Hawking, who lived with amyotrophic lateral sclerosis (ALS) for more than 55 years, died in 2018 at the age of 76. While he was critical of AI, he also relied on a very basic form of the technology to communicate, as his illness weakened his muscles and required him to use a wheelchair.

Hawking lost the ability to speak in 1985 and relied on various means of communication, including a speech-generating device run by Intel that allowed him to use facial movements to select words or letters, which were then synthesized into speech.

Hawking’s 2014 comment to the BBC that AI could “spell the end of the human race” came in response to a question about revamping the voice technology he relied on. He told the BBC that very basic forms of AI had already proved powerful, but creating systems that rival or surpass human intelligence could be catastrophic for the human race.
“It will take off by itself and redesign itself at an accelerating rate,” he said.
![Stephen Hawking hosts a press conference to announce Breakthrough Starshot, a new space exploration initiative, at One World Observatory on April 12, 2016 in New York City.](https://nypost.com/wp-content/uploads/sites/2/2023/05/hawking-2.jpg?w=1024)
“Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded,” Hawking added.
A few months after his death, Hawking’s final book hit the market. Titled “Brief Answers to the Big Questions,” it collected his answers to questions he was frequently asked. The book lays out Hawking’s arguments against the existence of God, predicts that humans will likely one day live in space, and details his concerns about genetic engineering and global warming.

Artificial intelligence also ranked high on his list of “big questions,” with Hawking arguing that computers will “probably overtake humans in intelligence” within 100 years.
![Elon Musk attends the 2022 Met Gala for "In America: An Anthology of Fashion" at the Metropolitan Museum of Art on May 2, 2022 in New York City.](https://nypost.com/wp-content/uploads/sites/2/2023/05/Musk.jpg?w=1024)
“We may face an intelligence explosion that ultimately results in machines whose intelligence exceeds ours by more than ours exceeds that of snails,” he wrote.
He argued that computers need to be trained to align with human goals, adding that failing to take the risks of AI seriously could be “our worst mistake ever.”
“It’s tempting to dismiss the concept of very smart machines as mere science fiction, but that could be a mistake – and potentially our worst mistake ever.”
Hawking’s remarks echo the concerns tech mogul Elon Musk and Apple co-founder Steve Wozniak raised in a letter published in March of this year. The two tech leaders, along with thousands of other experts, signed the letter calling for at least a six-month pause on building AI systems more powerful than OpenAI’s GPT-4 chatbot.
![Professor Stephen Hawking attends the official screening of](https://nypost.com/wp-content/uploads/sites/2/2023/05/hawking-1.jpg?w=1024)
“AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs,” reads the letter published by the nonprofit Future of Life Institute.
OpenAI’s ChatGPT amassed the fastest-growing user base on record, hitting 100 million monthly active users in January as people around the world rushed to use the chatbot, which simulates human-like conversation based on the prompts it receives. The lab released the latest version of the platform, GPT-4, in March.
Despite the calls to halt research at AI labs working on technology that would surpass GPT-4, the system’s release was a watershed moment that resonated throughout the tech industry and prompted various firms to race to build their own AI systems.

Google is working on overhauling its search engine and even creating a new one built on artificial intelligence; Microsoft launched the new Bing search engine, described as an “AI-powered copilot for the web”; and Musk said he would launch a rival AI system, which he described as “maximum truth-seeking.”
Hawking said a year before his death that the world must “learn how to prepare for and avoid the potential threats” of artificial intelligence, arguing that the systems “could be the worst event in the history of our civilization.” Nevertheless, he noted that the future is still unknown and that artificial intelligence could prove helpful to humanity if developed properly.
“Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don’t know. So we cannot know if we will be infinitely helped by AI, or ignored by it and sidelined, or conceivably destroyed by it,” Hawking said during a speech at the Web Summit technology conference in Portugal in 2017.