On May 16, a US Senate Judiciary subcommittee conducted a hearing on artificial intelligence oversight with three key witnesses: Sam Altman, CEO of OpenAI and the company behind ChatGPT; Christina Montgomery, Chief Privacy and Trust Officer for IBM; and NYU Professor Emeritus Gary Marcus.
This was a clear call for AI regulation by the US government, one that is also extremely relevant to the wider international community. The explosion of AI large language models (LLMs) such as ChatGPT may well supplant many white-collar jobs: Goldman Sachs estimates that a quarter of work tasks in the US and Europe could be automated, while many experts worry that advanced versions of AI could even pose existential risks to humanity.
While industry leaders rarely welcome regulation, leading figures including Sam Altman are urging government and industry to work together to prevent the misuse of this technology. Prof. Gary Marcus, who also founded Geometric Intelligence, argued that LLMs need to be regulated for greater transparency and to guard against subtle manipulation.
Christina Montgomery also called for transparency, so that we know what the algorithms are trained on, and explained that IBM advocates precision regulation of AI.
The main topics of concern were misinformation, the need to know what information is AI-generated, loss of jobs, invasion of privacy, manipulation of personal behavior and opinions, manipulation of political systems, and copyright regulation.
The overarching dilemma behind all these points was that if the US overregulates, other countries may surpass its capabilities, potentially creating national security concerns.
AI as a job creator
Existential threats of the unknown are difficult to regulate, but job loss and unemployment are not. The proliferation of AI is a moment of transformation for humanity: by shaping the development and application of AI, government policies can help mitigate job losses while fostering new employment opportunities that leverage the good that AI can do.
This could include ‘human-in-the-loop’ AI systems – where regulations encourage the use of people to work alongside AI in such a way that it complements rather than supplants human thinking.
By imposing guidelines on transparency, accountability and fairness, governments can also ensure that decisions about job displacement are not solely based on economics, and that human safety is prioritized over profits.
Lastly, governments can set up social safety nets for those impacted by AI-driven automation. This can include unemployment benefits, job transition programs, and retraining initiatives.
While these policies will save some jobs, there will be an urgent need to create new ones, and upskill existing employees. Regulation can promote AI research and development through grants, tax incentives, and public funding, to stimulate the creation of new jobs in AI-related fields such as data science, machine learning, and robotics.
Government policies can also foster an environment conducive to the growth of AI start-ups, while also investing in AI education and training to ensure that the workforce is equipped with the necessary skills to seize these new opportunities.
A climate change countermeasure?
What would international cooperation on these big questions look like? The US Senate hearing included a discussion on which international body might be appropriate to convene a conversation on global AI regulation.
The science and lightning-speed advancement of AI require urgent global cooperation to avoid dangerous outcomes. There is no time to create a new governance body.
To my surprise, the Intergovernmental Panel on Climate Change (IPCC) was mentioned as an organization that could pool scientific knowledge to help inform global policies. That is a creative idea that should be pursued immediately.
We can leverage existing global schemes to help transition to a world run in part by AI. While the risks posed by AI are considerable and perhaps even existential, climate change is already wreaking havoc on billions of people.
We should leverage systems in place to mitigate and adapt to climate change to help us address the complex challenges surrounding AI.
In the UAE, we are particularly well-placed for this conversation. The appointment in 2017 of the world’s first minister of state for AI was a wise move that recognized the threats and opportunities of new technologies to our lives. Expect to see other nations following this example.
Meanwhile, my advice to individuals remains the same: learn how to use these models and understand how they work, how they produce content, how they can help you, and how they can enhance what you do.
To avoid doing so is to guarantee that you will be manipulated by them.