Hiroshima: Leaders of the Group of Seven countries agreed on the need for governance of generative AI in line with G7 values, expressing concern about the disruptive potential of the rapidly advancing technology.
In what they are calling the “Hiroshima Process,” the governments are set to hold cabinet-level discussions on the issue and present the results by the end of the year, the leaders said in a statement at the G7 summit on Friday.
To ensure that AI development is human-centric and trustworthy, Japanese Prime Minister Fumio Kishida sought cooperation toward a secure, cross-border flow of data, pledging financial contributions to such an effort. The push for greater regulation echoes calls from industry and government leaders worldwide after OpenAI's ChatGPT set off a race among companies to develop the technology more quickly.
The fear is that these advancements, which can produce authoritative, human-sounding text and generate images and videos, could become powerful tools for disinformation and political disruption if allowed to progress unchecked. Sam Altman, CEO of OpenAI, along with International Business Machines Corp.'s privacy chief, called on US senators this week to regulate AI more heavily.
Separately, the World Health Organization said in a statement this week that adopting AI too quickly ran the risk of medical errors, possibly eroding trust in the technology and delaying its adoption.
UK Prime Minister Rishi Sunak wants to craft policy to manage the risks and benefits of AI, and has invited Altman and others to the UK. The European Union is moving toward regulating AI tools, with rules that would require companies to make clear when users are interacting with AI and ban its real-time use to identify people in public. Altman has said he would welcome a new regulatory authority as a way for the US to maintain its leadership in the field.
Japan’s government tends to prefer overseeing AI with softer guidelines rather than strict regulatory laws like those of the EU.
“What’s important is that the government should ultimately crack down using hard law if there is a major problem,” said Hiroki Habuka, senior associate at the Wadhwani Center for AI and Advanced Technologies. “But if the law is too detailed, it won’t be able to keep up with changes in technology.”
Setting an international standard for regulating generative AI will be challenging at this point because, even among the G7 countries, views differ on which values society should uphold, he said.
It will be important to involve as many countries as possible, including lower-income nations, in discussions on regulating AI, said Kyoko Yoshinaga, senior fellow at the Institute for Technology Law & Policy at Georgetown University Law Center.