California: Sam Altman, CEO of artificial intelligence startup OpenAI, said “there are many ways it could go wrong,” speaking about the rapid rise of AI technology. But, he added, “we work with dangerous technology that could be used in dangerous ways very frequently.”
Altman has recently expressed concern over the potential for increasingly powerful AI technology to inflict harm. In an interview at the Bloomberg Technology Summit in San Francisco, he said global regulation could address big risks but shouldn’t be overdone.
OpenAI, which makes the wildly popular ChatGPT chatbot, has been valued at more than $27 billion and is a leader in the booming field of venture-backed AI companies. Addressing whether he would financially benefit from OpenAI’s success, Altman said “I have enough money,” and that his motivation was the potential benefits of the technology.
“This concept of having enough money is not something that is easy to get across to other people,” he said.
The CEO also said he wanted to make a contribution to human technological progress with artificial intelligence. “I think this will be like, the most important step yet that humanity has to get through with technology,” Altman added. “And I really care about that.”
OpenAI is at the forefront of generative AI technology, which is capable of generating text or images from just a few words of user prompting. The startup’s products, including ChatGPT and the image generator Dall-E, have dazzled audiences. They’ve also helped spark a multibillion-dollar frenzy among venture capital investors and entrepreneurs who are vying to help lay the foundation of a new era of technology.
To generate revenue, OpenAI is giving companies access to the application programming interfaces needed to create their own applications that make use of its AI models. The company is also selling access to a premium version of its chatbot, called ChatGPT Plus. OpenAI doesn’t release information about total sales.
The speed and power of the fast-growing AI industry have spurred governments and regulators to try to set guardrails around its development, an effort that Altman himself has endorsed.
Altman was among the artificial intelligence experts who met with President Joe Biden this week in San Francisco. The CEO has been traveling widely and speaking about AI, including in Washington, where he told US senators that, “if this technology goes wrong, it can go quite wrong.”
Major AI companies, including Microsoft and Alphabet’s Google, have committed to participating in an independent public evaluation of their systems. But the US is also seeking a broader regulatory push. The Commerce Department said earlier this year that it was considering rules that could require AI models to go through a certification process before being released.
Last month, Altman signed onto a brief statement that included support from more than 350 executives and researchers saying “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”
Despite dire warnings from technology leaders, some AI researchers contend that artificial intelligence isn’t advanced enough to justify fears that it will destroy humanity, and that focusing on doomsday scenarios is only a distraction from issues like algorithmic bias, racism and the risk of rampant disinformation.
OpenAI’s ChatGPT and Dall-E, both released last year, have inspired startups to incorporate AI into a vast array of fields, including financial services, consumer goods, healthcare and entertainment. Bloomberg Intelligence analyst Mandeep Singh estimates the generative AI market could grow 42 per cent a year to reach $1.3 trillion by 2032.