OpenAI CEO Sam Altman testifies before a Senate Judiciary Privacy, Technology & the Law Subcommittee hearing titled 'Oversight of A.I.: Rules for Artificial Intelligence' on Capitol Hill in Washington, U.S. Image Credit: Reuters

Hundreds of artificial intelligence scientists and tech executives signed a one-sentence letter that succinctly warns that AI poses an existential threat to humanity, the latest example of a growing chorus of alarms raised by the very people creating the technology.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," according to the statement released Tuesday by the nonprofit Center for AI Safety.

The open letter was signed by more than 350 researchers and executives, including Sam Altman, chief executive of ChatGPT maker OpenAI, as well as 38 members of Google's DeepMind artificial intelligence unit.


Altman and others have been at the forefront of the field, pushing new "generative" AI to the masses, such as image generators and chatbots that can have humanlike conversations, summarize text and write computer code. OpenAI's ChatGPT bot was the first to launch to the public in November, kicking off an arms race that led Microsoft and Google to launch their own chatbots earlier this year.

Since then, a growing faction within the AI community has been warning about the potential risks of a doomsday-type scenario in which the technology grows sentient and attempts to destroy humans in some way. They are pitted against a second group of researchers who say this is a distraction from problems like inherent bias in current AI, its potential to take jobs and its ability to lie.

Skeptics also point out that companies that sell AI tools can benefit from the widespread idea that the tools are more powerful than they actually are, and that by hyping up longer-term risks they can front-run potential regulation of shorter-term ones.

Dan Hendrycks, a computer scientist who leads the Center for AI Safety, said the single-sentence letter was designed to ensure the core message isn't lost.

"We need widespread acknowledgment of the stakes before we can have useful policy discussions," Hendrycks wrote in an email. "For risks of this magnitude, the takeaway isn't that this technology is overhyped, but that this issue is currently underemphasized relative to the actual level of threat."

In late March, a different public letter gathered more than 1,000 signatures from members of the academic, business and technology worlds who called for an outright pause on the development of new high-powered AI models until regulation could be put into place. Most of the field's biggest names didn't sign that one but have signed the new statement, including Altman and two of Google's most senior AI executives: Demis Hassabis and James Manyika. Microsoft Chief Technology Officer Kevin Scott and Microsoft Chief Scientific Officer Eric Horvitz both signed it as well.

Notably absent from the letter are the chief executives of Google, Sundar Pichai, and Microsoft, Satya Nadella, the field's two most powerful corporate leaders.

Industry leaders are also stepping up their engagement with Washington power brokers. Earlier this month, Altman visited with President Biden to discuss AI regulation. He later testified on Capitol Hill, warning lawmakers that artificial intelligence could cause significant harm to the world. He drew attention to specific "risky" applications, including using it to spread disinformation and to potentially aid in more targeted drone strikes.

Hendrycks added that "ambitious global coordination" might be required to deal with the problem, possibly drawing lessons from both nuclear nonproliferation and pandemic prevention. Though a number of ideas for AI governance have been proposed, no sweeping solutions have been adopted.

Altman, the OpenAI CEO, suggested in a recent blog post that there likely will be a need for an international organization that can inspect systems, test their compliance with safety standards, and place restrictions on their use similar to how the International Atomic Energy Agency governs nuclear technology.

Addressing the apparent hypocrisy of sounding the alarm over AI while rapidly working to advance it, Altman told Congress that it was better to get the tech out to many people now, while it is still early, so that society can understand and evaluate its risks, rather than waiting until it is already too powerful to control.

Others have implied that the comparison to nuclear technology may be alarmist. Former White House tech adviser Tim Wu said likening the threat posed by AI to nuclear fallout misses the mark and clouds the debate around reining in the tools by shifting the focus away from the harms it may already be causing.

"There are clear harms from AI, misuse of AI already that we're seeing, and I think we should do something about those, but I don't think they're . . . yet shown to be like nuclear technology," he told The Washington Post in an interview last week.