California Governor Gavin Newsom recently vetoed a bill that would have imposed tough regulations on artificial intelligence (AI) developers, including mandatory safety testing and the inclusion of kill switches in applications. He argued that the measure would stifle innovation and drive AI businesses from the state.
“California is home to 32 of the world’s 50 leading AI companies,” the governor said in a statement. “The bill applies stringent standards to even the most basic functions — so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.”
Known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, or SB 1047, the bill targeted companies developing generative AI systems that produce text, images, and audio.
Companies building models that cost more than $100m to train would have been required to implement “kill switches” capable of shutting their systems down completely in an emergency, and to publish plans for testing and mitigating extreme risks.
Big names such as Google, Meta, and OpenAI had voiced concerns about the bill’s strict rules, saying they could hinder AI development. But far from ending the debate over AI safety, Newsom’s veto has thrown the competing arguments wide open. Even in vetoing the bill, Newsom acknowledged that safety protocols have to be adopted. The question is how.
Encouraging innovation
AI is an extremely powerful technology, and we have seen its potential for both innovation and manipulation. Deepfake videos have thrived in an age of AI-fuelled misinformation. Yet the companies building this powerful technology currently face few binding restrictions in the US.
Some experts argue that sweeping regulation of AI is simply not feasible, especially with much of the technology still at an experimental stage. But we are already seeing the potential impact of misuse. That is where regulation, balanced against the need to encourage innovation, becomes increasingly important.
There is no doubt that too much regulation can stifle innovation, making this a delicate balancing act. Transparency is key: as experts argue, keeping AI ethical means being open about processes and data management so that biases can be identified and kept out. The challenge lies in ensuring that AI is not used for harmful purposes.
When profit-driven companies are leading the charge, that concern only grows, which is why regulation becomes key. We may not yet know the full extent of AI’s potential harms, but companies need to be held accountable, and developers must be pressed to assess the risks their systems pose and to address them before they take root.
The California bill attempted just that, allowing innovation to continue while pushing developers to think harder about the potential risks involved. One justified concern was that the bill may have disproportionately affected smaller start-ups, which could not have kept pace with the proposed safety standards and may have been left unable to compete with larger tech firms. The bill was far from perfect, but it was a start, and with modifications it is something that can be built on.