
Around the world, governments, technology companies and academia are seeking consensus around artificial intelligence (AI), trying to understand how best we can gain its benefits while making sure the technology is adequately governed and regulated, and used fairly and without harm.

A common and critical concern with AI systems is AI bias – when an AI algorithm produces unfair, prejudiced or discriminatory outcomes or predictions because of erroneous assumptions in the underlying machine learning process. But before we can overcome the issue, we must first understand where these biases come from.

Bias in data and AI

Bias in data and AI algorithms mostly arises when the underlying data used to train the AI model is skewed in some way. One form of AI bias occurs when the training data is inherently biased because of how it was collected. If the data selected for training is inaccurate, incomplete or otherwise fails to represent the population the system is meant to serve, the AI model can develop a bias as a result. A good example is facial recognition: when such systems are trained on a data set drawn mainly from one race or gender, they become better at recognising faces from that over-represented group and may perform poorly for individuals from other racial or gender groups.
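To make that point concrete, the short sketch below shows one simple way a team might check whether a training set's composition matches the population it is meant to serve. The group names, counts and tolerance are purely hypothetical; this is an illustration of the idea, not a prescribed method.

```python
from collections import Counter

def representation_report(group_labels, population_shares, tolerance=0.10):
    """Compare each group's share of the training data with its share of the
    target population, and flag groups that are under- or over-represented."""
    counts = Counter(group_labels)
    total = sum(counts.values())
    report = {}
    for group, pop_share in population_shares.items():
        data_share = counts.get(group, 0) / total
        gap = data_share - pop_share
        report[group] = {
            "data_share": round(data_share, 3),
            "population_share": pop_share,
            "flag": "under-represented" if gap < -tolerance
                    else "over-represented" if gap > tolerance
                    else "ok",
        }
    return report

# Hypothetical example: a face data set heavily skewed towards one group.
training_groups = ["group_a"] * 800 + ["group_b"] * 150 + ["group_c"] * 50
print(representation_report(training_groups,
                            {"group_a": 0.40, "group_b": 0.35, "group_c": 0.25}))
```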

Historical societal biases and inequalities are also often reflected in data – manifesting in hiring data, lending records or criminal justice data – leading to AI models that perpetuate existing disparities. Algorithms themselves can introduce bias, too: when they make predictions based on patterns in the training data, they can inadvertently amplify the biases already present. For example, an AI system trained to filter job applications based on historical hiring decisions may select only candidates who share certain characteristics while excluding others.

Further bias can arise in supervised learning, when human operators label the data used for training, and both data preprocessing bias and feedback loop bias can also pose problems. If data preprocessing steps – data cleaning, normalisation and feature selection – are not carried out carefully, they can eliminate relevant information or reinforce the operator’s existing biases. Feedback loop bias occurs when the operator or end user favours a certain outcome, signalling to the AI that a particular result is ‘correct’ and training it to keep selecting such results. The AI then continues to favour the ‘correct’ result and the operator continues to rate it as correct, even when it is not, creating an ongoing loop of biased results.
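As a simple illustration of how an innocuous-looking preprocessing step can skew data, the hypothetical sketch below drops rows with missing values from a synthetic data set in which one group happens to have far more missing records. The group names, sizes and missingness rates are all assumptions invented for the example.

```python
import pandas as pd
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical applicant data: one group has far more missing income records,
# perhaps because of how the data was collected for that group.
n = 1000
group = rng.choice(["group_a", "group_b"], size=n, p=[0.7, 0.3])
income = rng.normal(50_000, 10_000, size=n)
missing = (group == "group_b") & (rng.random(n) < 0.5)   # 50% missing for group_b
income[missing] = np.nan

df = pd.DataFrame({"group": group, "income": income})

before = df["group"].value_counts(normalize=True)
after = df.dropna()["group"].value_counts(normalize=True)  # naive "cleaning" step

print("Share of each group before cleaning:\n", before.round(3))
print("Share of each group after dropping missing rows:\n", after.round(3))
# group_b's share shrinks noticeably: a routine preprocessing step has
# quietly biased the training data against one group.
```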

A somewhat newer concern, arising with what is arguably the most widely accessed form of the technology, is when generative AI (GAI) systems struggle to comprehend questions, misinterpret queries and, in turn, generate incorrect answers – a phenomenon known as AI hallucination.

The causes of hallucination – where AI algorithms and deep learning neural networks produce output that does not correspond to any data they were trained on, or to any discernible pattern – are diverse, and include factors beyond programming, such as the input information, incorrect data classification, inadequate training, and the challenge of interpreting or contextualising questions across different languages. Hallucination is not limited to any specific type of data: it can occur across various synthetic data forms, including text, images, audio, video and computer code.

Fairness by design

To overcome bias and safeguard fairness in AI-enhanced big data analytics, a proactive approach is necessary, and fairness-by-design principles can help guide the development of AI systems that are less likely to introduce or perpetuate bias.

Ensuring that training data is diverse and representative of the population the AI system will serve reduces the risk of under-representation or over-representation of certain groups, while regular audits of AI systems help identify and rectify bias.
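One minimal form such an audit could take is sketched below: it computes per-group selection rates and accuracy for a set of model decisions and reports the largest gap in selection rates (a demographic parity difference). The data, group labels and rates are hypothetical, and a real audit would typically examine many more metrics.

```python
import numpy as np

def audit_by_group(y_true, y_pred, groups):
    """Simple fairness audit: per-group selection rate and accuracy,
    plus the largest gap in selection rates between groups."""
    results = {}
    for g in np.unique(groups):
        mask = groups == g
        results[g] = {
            "n": int(mask.sum()),
            "selection_rate": float(y_pred[mask].mean()),
            "accuracy": float((y_pred[mask] == y_true[mask]).mean()),
        }
    rates = [r["selection_rate"] for r in results.values()]
    results["demographic_parity_difference"] = float(max(rates) - min(rates))
    return results

# Hypothetical audit of a hiring model's decisions (1 = shortlisted).
rng = np.random.default_rng(1)
groups = rng.choice(["group_a", "group_b"], size=500, p=[0.6, 0.4])
y_true = rng.integers(0, 2, size=500)
y_pred = np.where(groups == "group_a",
                  rng.random(500) < 0.55,   # model shortlists group_a more often
                  rng.random(500) < 0.30).astype(int)
print(audit_by_group(y_true, y_pred, groups))
```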


Fortunately, there are ways to develop algorithms that actively work to mitigate bias. For instance, techniques such as adversarial debiasing aim to reduce discrimination in model predictions. We can also make AI systems more transparent and explainable, giving users and stakeholders insight into how decisions are made, which in turn helps identify and address bias.
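For readers curious what adversarial debiasing looks like in practice, the following is a deliberately simplified, self-contained sketch of the core idea: a predictor is trained to fit the label while an adversary tries to recover the protected attribute from the predictor’s output, and the predictor is pushed to defeat that adversary. The synthetic data, learning rates and trade-off parameter are all invented for illustration; a production system would use an established library and far more careful training.

```python
import numpy as np

rng = np.random.default_rng(42)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Hypothetical synthetic data: the label y is correlated with protected attribute a.
n, d = 2000, 5
a = rng.integers(0, 2, size=n)                    # protected attribute
X = rng.normal(size=(n, d)) + a[:, None] * 0.8    # features leak the attribute
y = (rng.random(n) < sigmoid(X[:, 0] + 1.5 * a - 0.5)).astype(float)

w = np.zeros(d)          # predictor weights (logistic regression)
b = np.zeros(2)          # adversary weights: predicts a from the predictor's output
lr, alpha = 0.1, 1.0     # alpha trades accuracy against fairness

for step in range(2000):
    p = sigmoid(X @ w)                       # predictor output
    q = sigmoid(b[0] + b[1] * p)             # adversary's guess of the protected attribute

    # Adversary update: learn to recover `a` from the predictor's output.
    b -= lr * np.array([(q - a).mean(), ((q - a) * p).mean()])

    # Predictor update: fit the label while *increasing* the adversary's loss,
    # i.e. strip information about `a` out of the predictions.
    grad_pred = X.T @ (p - y) / n
    grad_adv = X.T @ ((q - a) * b[1] * p * (1 - p)) / n
    w -= lr * (grad_pred - alpha * grad_adv)

pred = sigmoid(X @ w) > 0.5
print("accuracy:", (pred == y).mean().round(3))
print("selection-rate gap between groups:",
      abs(pred[a == 1].mean() - pred[a == 0].mean()).round(3))
```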

Promoting diversity in the teams developing AI systems – affording a wider range of perspectives – helps identify bias more effectively, and encouraging users to provide feedback on AI system outputs can be invaluable in rectifying issues.

The road ahead

Ensuring fairness in AI-enhanced big data analytics is an ongoing journey, and there are several promising directions, from regulatory frameworks and ethical AI education, to bias mitigation tools, and bias impact assessments.

A relatively simple first step in addressing bias could be collaboration and partnerships – among governments, tech companies and advocacy groups – in which shared best practices, resources and data can expedite progress.

Governments and regulatory bodies are increasingly recognising the importance of addressing AI bias, and steps have already been taken towards comprehensive regulatory frameworks. Earlier in November, the UK, US, China and EU members signed the Bletchley Declaration to regulate and curb the potential risks of AI technology, alongside a code of conduct for companies building some of the most advanced AI systems.

And that came on the back of the President of the United States signing an Executive Order on the safe, secure and trustworthy development and use of AI, given that irresponsible use has the potential to “exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security”.

World powers, then, do see how AI could be misused to a detrimental degree, and are actively taking steps to mitigate that risk – such as the eight guiding principles, albeit US-specific, set out by President Biden’s administration: 1) AI must be safe and secure; 2) it must promote responsible innovation, competition and collaboration; 3) its development and use require a commitment to supporting American workers; 4) policies must be consistent with a dedication to advancing equity and civil rights; 5) the interests of those who use, interact with or purchase AI and AI-enabled products must be protected; 6) privacy and civil liberties must be protected; 7) the government must manage the risks of, and increase its internal capacity to regulate, govern and support, the responsible use of AI; and 8) it should lead the way to global societal, economic and technological progress.

But promoting ethical AI education and training is essential, too, including educating AI practitioners, developers, and users about the implications of bias and how to address it. And continued research and development of bias mitigation tools and libraries will be crucial, as will the adoption of standardised bias impact assessments to help organisations measure the potential impact of their AI systems on different demographic groups.

Overcoming this technological bias is a complex, multifaceted challenge that requires collective effort. Crucially, however, it is not insurmountable: by addressing bias head-on and ensuring fairness in AI-enhanced big data analytics, a bias-free AI future is clearly within our grasp.

The writer is the CEO of Presight

This content comes from Reach by Gulf News, which is the branded content team of GN Media.