'Profound Risks': Why Elon Musk, Other Experts Want A Pause On 'Giant' AI Experiments
An open letter signed by over 1,300 people, including Elon Musk and writer Yuval Noah Harari, warned of challenges posed by AI systems, including job losses, misinformation and propaganda.
Twitter CEO Elon Musk, author Yuval Noah Harari and several other experts on Wednesday called for a pause on 'giant AI experiments', citing "profound risks to humanity and society". In an open letter, they called on all AI labs to "immediately pause" the training of AI systems more powerful than GPT-4 for at least six months.
This comes at a time when OpenAI's ChatGPT has taken the world by storm while also fanning fears of job losses.
"This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium," the letter, with over 1,300 signatories, said.
The letter spoke about challenges posed by AI systems, such as job losses, misinformation and propaganda. "Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?" it further read. OpenAI CEO Sam Altman expressed similar apprehensions in a recent interview with ABC News, saying that one of the worst outcomes of AI could be its use for large-scale disinformation. "Now that they're getting better at writing computer code, I am worried that these can be used for offensive cyber-attacks," he said.
Here is what Elon Musk and other tech and AI experts said in the letter:
Pause on AI experiments
"Contemporary AI systems are now becoming human-competitive at general tasks," the letter noted. The signatories maintained that "powerful AI systems" should be developed only once experts are confident that their effects will be positive and their risks manageable. "This confidence must be well justified and increase with the magnitude of a system's potential effects," it said.
Citing a recent statement from OpenAI, the letter said that independent review will be needed before training future systems. It added that governments should step in and institute a moratorium if such a pause cannot be enacted quickly.
It further clarified that the pause would not halt AI development altogether, but would be a step back from "the dangerous race to ever-larger unpredictable black-box models with emergent capabilities." "AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal," it said.
Safe AI systems
For safer AI systems, the letter said that AI developers need to work with policymakers to accelerate "development of robust AI governance systems." These systems, the letter said, should include new and capable regulatory authorities dedicated to AI. It suggested other features as well: oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real content from synthetic and to track model leaks; a robust auditing and certification ecosystem; and liability for AI-caused harm.