AI without oversight presents its fair share of high-stakes risks, and these need to be curtailed at the outset.

AI will enable a broad range of tasks to be automated, new services to be created and, ultimately, economies to become more efficient. Generative AI marks a new stage in this underlying trend, and we are only beginning to explore its many applications.

We must not lose sight of the fact that, despite their remarkable performance, AI systems are essentially machines, nothing more than algorithms built into processors that can assimilate large amounts of data. These machines are incapable of human intelligence in the fullest sense of the term (involving sensitivity, adaptation to context and empathy), of reflexivity and of consciousness, and they will probably remain so for a long time to come.

The ethical questions raised by the increasing importance of AI are nothing new; the advent of ChatGPT and other tools has simply made them more pressing. Aside from the subject of employment, these questions touch, on the one hand, on the discrimination created or amplified by AI and the training data it uses and, on the other, on the propagation of misinformation (whether deliberate or the result of ‘AI hallucinations’).

Correcting the flaws

These two topics have long been a concern for algorithm researchers, lawmakers and businesses in the field, who have already begun to implement technical and legal solutions to counteract the risks. Let us look first at the technical solutions.

Ethical principles are being incorporated into the very development of AI tools. At Thales, we have been committed for some time now to not building ‘black boxes’ when we design AI systems. We have established guidelines to ensure that these systems are transparent and explainable. We also endeavour to minimise bias (notably regarding gender and physical appearance) in the design of our algorithms, through the training data we use and the makeup of our teams.

Secondly, the legal solutions. The European institutions seem to have taken the lead here, having worked for over two years on a draft regulation aimed at limiting by law the most dangerous uses of AI.

It is also through education and genuine societal change that we will succeed in guarding against the risks inherent in misusing AI. Together, we must break away from the culture of immediacy that has flourished with the advent of digital technology, and which the massive spread of these new tools is likely to exacerbate.

Amplifying shortcomings

As we know, generative AI enables highly viral – but not necessarily trustworthy – content to be produced very easily. There is a risk that it will amplify the widely recognised shortcomings in how social media works, notably in its promotion of questionable and divisive content, and the way it provokes instant reaction and confrontation.

By accustoming us to ‘ready-to-use’ answers, obtained without having to search for, authenticate or cross-reference sources, these systems make us intellectually lazy. They risk aggravating the situation by weakening our critical thinking.

So whilst the existential dangers raised by some fear-mongers are probably exaggerated, we do need to sound a wake-up call. We must look for ways to put an end to this harmful propensity for immediacy that has been contaminating society and creating a breeding-ground for conspiracy theories for almost two decades.

If we address this challenge, we will finally be able to leverage the tremendous potential this technology has to advance science, medicine, productivity and education.