As a software engineering undergraduate of the late 1990s, I developed a simple Artificial Intelligence (AI) application to process bank loan applications. Today AI is driving cars, diagnosing disease and choosing what we see on our screens.

According to Cambridge English, AI is defined as: “The study of how to produce machines that have some of the qualities that the human mind has, such as the ability to understand language, recognise pictures, solve problems, and learn.”

This is one of the world’s most understated definitions of all time. Here is — what I feel — a more accurate definition — “The study of how to produce machines that can have all of the qualities that the human mind has and more, including the ability to understand language, recognise pictures, solve problems, and learn ad infinitum without the limitations of speed, breadth or depth imposed by biology”

As computer processing power increased and costs rapidly declined, the ability to perform highly iterative calculations needed to develop useful AI grew exponentially. To date, most success has been with “Narrow AI”.

Narrow AI is that which is applied to individual or closely related problems.

The Medical Center at Vanderbilt University, for example, has developed AI that can predict with an 80-90 per cent accuracy whether someone will attempt suicide within the next two years. Other examples can be found in image recognition applied to CCTV footage and which shows to recommend to us on Netflix.

In each case, the AI performs well in a specific scenario but would fail when presented with a totally unrelated problem.

Over time we’ll develop Narrow AI to be more generalist, reaching the point where we have “Broad AI”. Broad AI is closer to how the human brain functions.

Broad AI can handle diverse types of information related to many different problems. The pinnacle of Broad AI also sees it able to improve upon itself, becoming faster and more accurate with time.

At this point AI becomes alarming. Assume a Broad AI is developed by a computer science PhD student that constantly pulls in all information available and uses it to learn, hence always be evolving.

Let’s then assume it’s given just two goals:

Goal 1. — Maximise its footprint. This means increasing the number of instances of the AI that exist, or the number of hosts on which it operates. This is the propagation characteristic.

Goal 2. — Prevent anything from interfering with the success of Goal 1 at any cost. This is the preservation characteristic.

When the above AI runs on a single mobile phone or a laptop it’s relatively benign. If the device is quarantined or destroyed, so too is the AI. Today however, almost all electronic devices are connected though an IP address to the internet, from buses and aeroplanes to nuclear power stations, satellites and hospital life support machines.

Connecting all these devices is the internet, the most robust network the world has ever seen, with many layers of redundancy.

In 2013 estimates were that Google, Amazon, Facebook and Microsoft combined held around 1.2 million terabytes of data. This data is ultimately connected to the internet alongside repositories such as Wikipedia. Though impossible for any one person to consume all this information, it’s certainly possible that a computer programme could.

The question to ask is what happens when sophisticated Broad AI with a self-preservation objective emerges? The internet provides it a fertile training manual of human history, knowledge of our fears and ambitions, our strategies and our secrets.

It also provides it a medium through which to spread. We’re increasingly seeing how computer-viruses and ransomware use the internet to spread and wreak havoc.

As AI gets increasingly sophisticated it’s highly likely to also be adaptive, responding to its surroundings and learning from its mistakes. This recursive, self-development feature is the point of no return.

Once reached, the AI has the potential to evolve its ability to both stay alive and spread at an exponential rate. Harvesting cloud infrastructure such as Amazon Web Services and Microsoft’s Azure, a virus-like AI has the potential to keep ahead of any human ability to disable it.

From that point on, its learning capacity will far outstrip that of humans in breadth, depth and speed.

At what point does the Broad AI perceive human beings as being a threat to Goal 1? When it does, Goal 2 permits AI to perceive humans as a threat worthy of suppressing.

3D printing, automation and robotics also play a role in this doomsday scenario. Imagine 20 years from now, an automated car factory goes rogue.

Designs are changed by the AI and vehicles are produced that suppress human beings (Terminator anyone?). They’re stronger, faster and can build their own replacement parts to repair damages.

Robotics doesn’t have these capabilities today but we’re accelerating towards achieving them at an astounding speed as evidenced by recent videos from Boston Dynamics.

The world of physics (electrons, photons, etc) operates at a much higher frequency than that of biology (reproduction, evolution). The evolution of Broad AI is dictated by available computing power … laws of physics rather than biology.

Anyone can write the next killer application, which implies that anyone could create a robust self-learning Broad AI before we have tools to stop it.

Even if we had the tools, theoretically the AI would evolve to become immune to them. Human beings are naturally overconfident. Why do people play the lottery? Why do most believe they’re above average in many categories when mathematically we know this to be impossible?

I find us exhibiting the same overconfidence and complacency in our abilities to control Broad AI. History has taught us that technology is always the winner, this time won’t be any different.

The author is a Kuwaiti businessman based in Dubai and may be followed on Twitter @alialsalim.