AI Race: A bigger threat than nuclear weapons

World must act to ensure AI remains a force for good rather than a threat to our existence

The AI race is heating up, with innovators scrambling to shape the future of technology

The world is witnessing growing competition between the United States and China in artificial intelligence (AI). China’s recent launch of DeepSeek, a cutting-edge AI model, along with Alibaba’s release of a new version of Qwen, is a further sign of this race heating up.

AI has transformed our lives, increasing efficiency and changing the way we work, communicate, and even think. However, while AI has brought many benefits, its unfettered development could be even more dangerous than the nuclear arms race of the past.

AI is different from nuclear weapons because it is not a physical object. It is software that can be copied, shared, and used in countless ways, making it much harder to regulate. Unlike nuclear weapons, which require large facilities and specialised materials, AI can be developed by private companies and even individuals.

This means that AI, if misused, could create chaos on a scale we have never seen before. Because AI-powered tools, from deepfake propaganda to automated cyberattacks, can spread so quickly, dangers can emerge suddenly, with little warning or time to prepare.

The rivalry between the US and China in AI is part of their broader struggle for technological and economic power. The US has long led AI development through companies like OpenAI, Google, and Microsoft. Meanwhile, China has made rapid advancements in AI through significant government-backed investments and private sector innovation.

In both countries, AI is being integrated across various sectors, including governance, security, and economic planning. China has also leveraged AI for national development, enhancing public services, streamlining industry operations, and advancing scientific research. At the same time, concerns have been raised regarding AI’s role in surveillance and its implications for personal privacy.

While both the US and China are striving for leadership in AI, the challenge lies in ensuring that AI serves the greater good rather than fuelling geopolitical tensions. If this trend continues without international cooperation, the world could see an even deeper divide, where AI is used not only for technological progress but also as a tool for strategic influence.

The dangers of an unchecked AI race are profound and multifaceted. AI-powered misinformation can manipulate public opinion, destabilising democracies and undermining trust in institutions. Automated cyberattacks could target essential infrastructure, disrupting power grids, financial systems, and health care networks, leading to large-scale economic and humanitarian crises.

AI-driven surveillance, if left unchecked, could erode privacy rights and create a society where individuals are constantly monitored, stifling free speech and personal liberties.

The weaponisation of AI could make warfare more unpredictable and uncontrollable, with autonomous weapons systems making life-and-death decisions without human oversight. In such a scenario, conflicts could escalate rapidly, with AI misinterpreting signals and retaliating with lethal force, leading to unintended wars and mass destruction.

The economic implications of unregulated AI are equally alarming. As AI continues to automate industries, millions of jobs could be displaced without proper policies in place to support workers. This could widen economic inequality, fuelling social unrest and exacerbating divisions within societies.

Countries that lag in AI development could find themselves at an even greater disadvantage, deepening global disparities and potentially leading to conflicts over access to AI technology.

If AI becomes a monopolised tool wielded by a few powerful nations or corporations, the global economic order could be reshaped in ways that favour the privileged, leaving the majority of the world struggling to keep pace.

To prevent AI from spiralling out of control, the world needs strong international cooperation and a robust global governance structure. Just as countries created treaties and institutions to control nuclear weapons, we need a global agreement on AI development and use.

This should include clear rules on AI in warfare, transparency about AI projects, and international oversight to prevent AI from being used for harm. Governments and tech companies must work together to ensure AI is developed responsibly and for the benefit of all people.

Unlike nuclear treaties, which focus on disarmament, AI regulation must go further — addressing the ethics, applications, and limitations of AI before it reaches an irreversible point.

The AI race is not just about technology — it is about the future of humanity. If AI is used as a tool for power and control, it could bring instability, conflict, and destruction.

AI should not be a weapon of coercion or an instrument of authoritarianism, but without clear regulations, that is precisely the direction we are heading. If, however, we create rules to keep AI in check, it could help solve some of the world’s biggest problems, from health care to climate change.

Imagine AI being used to predict natural disasters, optimise global food supply chains, or discover cures for diseases. The potential for good is immense, but it can only be realised if AI remains in human hands and under ethical control.

The choice is ours, but time is running out. The world must act now to ensure AI remains a force for good rather than a threat to our existence. Leaders must recognise that AI is not just another technology — it is a transformative force of the 21st century, one that will shape the future of global power and human civilisation itself. The question is no longer whether AI will change the world, but whether we will be able to harness it for the betterment of humankind.

Ashok Swain (@ashoswai)

Ashok Swain is a professor of peace and conflict research at Uppsala University, Sweden.