
Prince Harry, Richard Branson, Steve Bannon and the ‘Godfathers of AI’ call on AI labs to halt their pursuit of ‘superintelligence’ – warning that the technology could exceed human control
A new open letter, signed by a group of AI scientists, celebrities, policymakers, and religious leaders, calls for a moratorium on the development of “superintelligence,” hypothetical AI technology that could exceed the intelligence of all of humanity, until the technology is safe and can be reliably controlled.
Notable signatories of the letter include Geoffrey Hinton, the AI pioneer and Nobel Prize winner, and other prominent AI figures such as Yoshua Bengio and Stuart Russell, as well as business leaders such as Virgin co-founder Richard Branson and Apple co-founder Steve Wozniak. It has also been signed by celebrities, including actor Joseph Gordon-Levitt, who recently voiced concerns about Meta’s AI products; musician will.i.am; and Prince Harry and Meghan, the Duke and Duchess of Sussex. Political and national security figures as diverse as Trump ally and strategist Steve Bannon and Mike Mullen, chairman of the Joint Chiefs of Staff under Presidents George W. Bush and Barack Obama, also appear on the list of more than 1,000 signatories.
A new poll conducted in conjunction with the open letter, which was written and distributed by the nonprofit Future of Life Institute, shows that the public generally agrees with the call to halt the development of superintelligent AI.
The poll found that only 5% of US adults support the status quo of unregulated development of advanced artificial intelligence, while 64% agree that superintelligence should not be developed until it is safe and demonstrably controllable. It also found that 73% want strong regulation of advanced AI.
“95% of Americans don’t want the race to superintelligence, and experts want to ban it,” Max Tegmark, president of the Future of Life Institute, said in a statement.
Superintelligence is broadly defined as a form of artificial intelligence capable of outperforming all of humanity at most cognitive tasks. There is currently no consensus on when, or whether, superintelligence will be achieved, and the timelines suggested by experts are essentially guesses. Some of the more aggressive estimates put it as early as the late 2020s, while more conservative views push it much further out or doubt that current technology can achieve it at all.
Many leading AI laboratories, including Meta, Google DeepMind, and OpenAI, are actively pursuing this level of advanced AI. The letter calls on these leading AI labs to halt their pursuit of these capabilities until there is “broad scientific consensus that this will be done in a safe and controllable manner, and strong public acceptance.”
“Frontier AI systems could outperform most individuals at most cognitive tasks within just a few years,” Yoshua Bengio, the Turing Award-winning computer scientist who, along with Hinton, is considered one of the “godfathers” of artificial intelligence, said in a statement. “To safely advance toward superintelligence, we must scientifically determine how to design AI systems that are fundamentally incapable of harming people, whether through misalignment or malicious use. We also need to ensure that the public has a much stronger say in the decisions that will shape our collective future.”
The signatories argue that the pursuit of superintelligence raises serious risks of economic displacement and disempowerment, and poses a threat to national security as well as to civil liberties. The letter accuses technology companies of pursuing this potentially dangerous technology without guardrails, oversight, or broad public approval.
“To get the most out of what artificial intelligence can offer humanity, there is simply no need to reach for the unknown and extremely risky goal of superintelligence, which remains a very distant frontier. By definition, it would be a power we could neither understand nor control,” actor Stephen Fry said in the statement.