As Artificial Intelligence (AI) assumes a more central role in countless business and societal contexts, the need to ensure its responsible use has grown. Although AI has improved financial performance, employee experience, and product and service quality for millions, it has also inflicted harm.
Women have been offered lower credit card limits than men; digital housing and mortgage ads have demonstrated racial bias; users have manipulated chatbots into making offensive comments; and algorithms have produced inaccurate diagnoses and recommendations for cancer treatment.
To counter such failures, companies have acknowledged they must develop and operate systems that work as a force for good while achieving transformative business impact. Algorithmic fairness, identification of potential bias, and positive effects on safety, privacy, and society at large are all elements of what has become known as 'Responsible AI', and its pursuit is an imperative.
Let's get started
It is true that many have appreciated the need for Responsible AI for some time. However, few have developed concrete steps to put principles into action. Ethical leadership is essential, as is bridging the gap between principles and practice.
Fortunately, six basic steps can establish the broad-based support needed to drive internal change and deliver Responsible AI:
Empower Responsible AI leadership
An internal champion should be appointed to oversee the entire initiative, convene stakeholders, identify other champions inside the organization, and establish the principles and policies that guide the creation of AI systems. Leadership with ultimate decision-making responsibility is not enough, however, and no single person has all the answers to such complex issues.
Therefore, organizational ownership that incorporates a diverse set of perspectives must be in place to deliver meaningful impacts.
Develop principles and policies
Although principles alone do not achieve Responsible AI, they are critically important. As the basis for the subsequent broader programme, they should flow from the company’s overall purpose and values to provide clear links to corporate culture and commitment, with time invested to develop, socialize, and disseminate them.
Soliciting feedback from across the organization identifies employee concerns and high-risk areas, ensures principles are communicated, and provides employees with the context for upcoming initiatives.
Establish governance
Beyond an ethical framework and executive leadership, defined roles and procedures ensure organizations embed Responsible AI into the products and services they develop. Effective governance bridges the gap between the teams building AI systems and the leaders and governance committee providing oversight, so that principles and policies are applied in practice.
Conduct Responsible AI reviews
To have an impact, the approach must be integrated into the full value chain, which depends on assessing the risks and biases associated with use-case outcomes. A structured assessment tool that examines every step of the journey helps identify and flag risks early and mitigate them throughout the project life cycle.
Reviews should not be limited to algorithms, but be part of a comprehensive assessment of the end-to-end AI system.
Integrate tools and methods
AI system developers must have supportive tools and policies to be effective. It is easy for executive leaders to ask teams to review data for bias, but such reviews can be time-consuming and complex. Providing tools that simplify workflows ensures compliance and avoids resistance from teams that may already be overloaded and operating under tight deadlines.
Build a response plan
Preparation is critical to making Responsible AI operational. While every effort should be made to avoid a lapse, companies must be prepared for mistakes, with a response plan in place to mitigate adverse impacts if one occurs.
This plan should detail the steps to take to prevent further harm, correct technical issues, and communicate to customers and employees what happened and what is being done. It should also designate the individuals responsible for each step, to avoid confusion and ensure seamless execution.
While this approach may be perceived as demanding, leaders should note it does not require massive investment to initiate, and subsequent progress to deliver AI responsibly is achievable for any organization. When integrated with an organization’s distinctive purpose, it presents an opportunity to not only realize Responsible AI, but also exceed business objectives.
- Elias Baltassis is Data and Analytics Partner, Director and Boston Consulting Group Middle East's Gamma Lead.