Artificial intelligence can make life and work a lot easier in the years to come, but there are chinks that require careful vetting. Image Credit: Shutterstock

Just like electricity changed the last century for the better, AI will transform this era. It can help society scale new heights – making us healthier, more prosperous and more sustainable.

But as we celebrate and anticipate AI’s enormous potential for economic and social good, there are questions and concerns. People worry about how AI makes decisions. What information does it use? Is it objective and fair, or is it biased? And how can we find out?

These need to be resolved. Because no matter how big or exciting its potential, if society decides not to trust AI, it cannot succeed. In IBM’s Global AI Adoption Index 2021, 86 per cent of businesses surveyed believed that consumers are more likely to choose AI services from a company that uses an ethical framework and offers transparency on its data and AI models.

I believe that establishing core principles is the starting point for building AI that is fairer, more responsible and more inclusive. We use our company’s three guiding principles for trust and transparency to shape how we develop and deploy AI.

• Firstly, AI systems must be transparent and explainable. When humans develop AI systems and gather the data used to train them, they can, consciously or unconsciously, inject their own biases into their work, with unfair recommendations as a result. These biases must be mitigated by having the right procedures and processes in place.

• Secondly, AI’s purpose is to augment human intelligence. AI is not about man vs machine but about man plus machine. AI should make all of us better at our jobs, and the benefits of the AI era should touch the many, not just the elite.

• Thirdly, insights from AI belong to their creator. IBM clients’ data and insights belong to them, not us.

Today, 91 per cent of businesses using AI say their ability to explain how it arrived at a decision is critical. Such transparency can help reduce the bias in AI systems that is a cause for concern. Bias can have serious consequences when it influences recommendations in sensitive areas - job recruitment, court decisions and more.

IBM worked with a bank that wanted to use AI in its loan decision process. The bank provided its loan data, which showed that, all other factors being equal, men were more likely to get loans than women.

Ditch the legacy

This pattern reflected historical societal biases, not true financial metrics. We could mitigate this bias - but if it goes undetected, an AI system will learn from that data and perpetuate the bias, meaning women would continue to receive fewer loan approvals.
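To make the detection step concrete, here is a minimal sketch - an illustration, not IBM’s actual tooling: given a historical loan table, it measures approval rates by gender and flags a disparity using the common “four-fifths” rule of thumb. The column names and sample records are hypothetical.

```python
# Minimal sketch: surfacing an approval-rate disparity in historical loan data.
# The column names ("gender", "approved") and the records are hypothetical.
import pandas as pd

# Toy stand-in for a historical loan dataset.
loans = pd.DataFrame({
    "gender":   ["M", "M", "M", "M", "F", "F", "F", "F"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Approval rate per group.
rates = loans.groupby("gender")["approved"].mean()
print(rates)  # F: 0.25, M: 0.75

# Disparate impact ratio: unprivileged group's rate / privileged group's rate.
# A common rule of thumb (the "four-fifths rule") flags ratios below 0.8.
ratio = rates["F"] / rates["M"]
if ratio < 0.8:
    print(f"Potential bias: disparate impact ratio = {ratio:.2f}")
```

In practice a check like this would run on the real training data before any model is built, and the same rate comparison can be repeated for any sensitive attribute.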

A lack of diversity within teams developing AI makes it difficult for developers to anticipate bias and its potential impact. Put together diverse teams and you will reduce blind spots and increase chances of detecting bias.

Education and training of developers are essential – not just on tools and methodologies but also on awareness of their own biases. Another way to mitigate bias is to make sure that AI decisions are transparent and explainable. For example, if a patient or a medical professional wants to know how an AI system came to a given conclusion regarding diagnosis or treatment, this should be explained in language and terms that are clear to whoever is asking.
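One way to ground that kind of plain-language explanation - sketched here as an illustration, not any specific IBM product - is to use an inherently interpretable model whose prediction decomposes into per-feature contributions. The feature names and data below are hypothetical.

```python
# Minimal sketch: an inherently explainable model whose individual decisions
# can be traced back to per-feature contributions. Names/data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "debt_ratio", "years_employed"]

# Toy training data (hypothetical, standardised values).
X = np.array([[ 1.2, -0.5,  0.8],
              [-0.7,  1.1, -0.9],
              [ 0.9, -1.0,  1.2],
              [-1.1,  0.8, -0.6]])
y = np.array([1, 0, 1, 0])  # 1 = approved, 0 = declined

model = LogisticRegression().fit(X, y)

# Explain one decision: for a linear model, coefficient x feature value is
# that feature's additive contribution to the score (log-odds).
applicant = np.array([0.4, 0.2, -0.3])
for name, c in zip(features, model.coef_[0] * applicant):
    print(f"{name}: {c:+.2f}")
print(f"baseline (intercept): {model.intercept_[0]:+.2f}")
```

Each signed contribution maps directly onto a sentence such as “income raised the approval score; the debt ratio lowered it” - the kind of explanation a non-specialist can understand and question.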

Of course, achieving AI fairness is not just a technical problem; it also requires the right governance structures, engagement from company leadership and a drive to do the right thing. IBM has established an internal AI ethics board that supports initiatives to operationalize our principles of trust and transparency.

Increasing the level of trust in AI systems isn’t just a moral imperative; it’s good business sense. If clients, employees and stakeholders don’t trust AI, our society cannot reap the benefits it can offer. It’s an opportunity we must not miss.