AI leaders set to make safeguard pledges following White House push
Leading U.S. artificial intelligence companies are set to publicly commit Friday to safeguards for the technology at the White House's request, according to people familiar with the plans.
Companies including Microsoft, Alphabet's Google and OpenAI are expected to pledge to develop and deploy AI responsibly, after the White House warned the firms that they must ensure the technology doesn't lead to harm.
But the fact that the commitments are voluntary illustrates the limits of what President Joe Biden's administration can do to steer the most advanced AI models away from potential misuse. Congress has spent months holding information sessions to better understand AI before drafting any legislation, and lawmakers may never reach consensus on binding regulation.
Friday's list of commitments from the White House is expected to be matched by pledges from top AI companies that participated in a May meeting with Vice President Kamala Harris. There, she and top White House officials told the companies that they bear responsibility for ensuring the safety of their technology.
"The regulatory process can be relatively slow, and here we cannot afford to wait a year or two," White House Chief of Staff Jeff Zients said in a podcast interview last month.
The companies' commitments will expire when Congress passes legislation addressing the issues, according to a draft of the White House document. The guidelines are focused on generative AI, such as OpenAI's popular ChatGPT, as well as the most powerful existing AI models and even more capable future models, according to the draft.
The document is subject to change before Friday, according to the people familiar with the matter. A White House spokesperson declined to comment.
Even the developers of AI technology, while enthusiastic about its potential, have warned that it presents unforeseen risks. The Biden administration has previously offered guidelines for its development, including the AI Risk Management Framework from the National Institute of Standards and Technology, which emerged from months of engagement with industry leaders and others.
In the document set to be issued Friday, the White House will suggest eight commitments focused on safety, security and social responsibility, according to the draft. They include:
- Allowing independent experts to try to push models into bad behavior - a process known as "red-teaming."
- Sharing trust and safety information with government and other companies.
- Using watermarking on audio and visual content to help identify content generated by AI.
- Investing in cybersecurity measures.
- Encouraging third parties to uncover security vulnerabilities.
- Reporting societal risks such as inappropriate uses and bias.
- Prioritizing research on AI's societal risks.
- Using the most cutting-edge AI systems, known as frontier models, to solve society's greatest problems.
Spokespeople for Microsoft, OpenAI and Google all declined to comment.
Governments around the world have called for global AI governance akin to the agreements in place to prevent nuclear war. The Group of Seven countries, for example, committed in Hiroshima, Japan, earlier this year to coordinating their approach to the technology, and the U.K. plans to hold an international AI summit before the end of the year.
All of these efforts, however, lag far behind the pace of AI developments spurred by intense competition between corporate rivals and by the fear that Chinese innovation could overtake Western advances.
That leaves Western leaders, for now, asking companies to police themselves.
Even in Europe, where the E.U.'s AI Act is far ahead of the incipient regulatory efforts of the U.S. Congress, leaders have recognized the need for voluntary commitments from companies before binding law is in place. In meetings with tech executives over the past three months, Thierry Breton, the European Union's internal market commissioner, has called on AI developers to agree to an "AI Pact" to set some nonbinding guardrails.