According to a Forbes Insights survey conducted in July this year, 93 per cent of the more than 300 executives polled believed that artificial intelligence (AI) would play an important role in their responsibilities in the near future. From helping design complex matrices in neuroscience to developing algorithms that optimise search results for travel and wellness products based on user preferences, the wealth of information that AI allows businesses to dig into is phenomenal.
The Forbes Insights survey drew detailed responses from companies with annual revenues ranging from more than $10 billion (Dh36.78 billion) to as low as $250 million, with remarkable consensus across business sizes and sectors: 99 per cent of executives in technical positions said their organisations would boost AI spending in the coming year.
However, there could be pitfalls as well.
Here, industry leaders discuss the potential and drawbacks of a world led by data.
Yves Behar, CEO and founder, fuseproject
fuseproject is a San Francisco-based design and branding firm.
While AI touches so much of what we do today, the current thinking behind AI is too limited. To reach AI’s potential, we need to think beyond what it does for search, social media, security, and shopping — and beyond making things merely “smarter”.
Instead, we should imagine how AI can be both smart and compassionate, a combination that can solve the most important human problems. Good design can lead the way by infusing an approach centred around the user and the real needs that AI can address.
We should be thinking about AI in new contexts — the newborn of the overworked parent, the cancer patient who needs round-the-clock attention, and the child with learning and behavioural difficulties. AI holds great promise for them in combination with design that is empathic and follows a few principles.
First, good design should help without intruding. AI must free our attention rather than take it away, and it must enhance human connection and abilities rather than replace humans.
Second, good design can bring AI’s benefits to those who otherwise might be left out. That so much of AI is currently directed at the affluent contradicts the notion that good design can serve everyone, regardless of age, condition or economic background. We should take the “AI for all” approach, and it should follow a human need. Designers and researchers should work together in the early stages of design to identify those needs, develop AI that responds compassionately to human demands, and use design to ensure cost-effective, accessible AI-enabled products and services.
Third, AI should never create emotional dependence. We see this danger in social media, where AI traps users in echo chambers and emotional deserts. Thoughtful design can create AI experiences that evolve with the user to continuously serve their needs. The combination of good AI and good design can ultimately create products that help people live healthier, longer and happier lives.
Developers must recognise that the most meaningful AI will touch those with greater needs and lack of access. The result will mean well-designed products and experiences that tackle real needs, with the power to improve, not complicate, human lives.
Lila Ibrahim, chief operating officer, DeepMind
DeepMind Technologies is a British artificial intelligence company.
AI offers new hope for addressing challenges that seem intractable today — from poverty to climate change to disease. As a tool, AI could help us build a future characterised by better health, limitless scientific discovery, shared prosperity and the fulfilment of human potential. At the same time, there’s a growing awareness that innovation can have unintended consequences and valid concern that not enough is being done to anticipate and address them.
Yet, for AI optimists, this increasing attention to risks shouldn’t be cause for discouragement or exasperation. Rather, it’s an important catalyst for thinking about the kind of world we want to live in — a question technologists and broader society must answer together.
Throughout history, few, if any, societal transformations have been preceded by so much scrutiny and speculation about all the ways they could go wrong. A sense of urgency about the risks of AI, from unintended outcomes to unintentional bias, is appropriate. It is also helpful. Despite impressive breakthroughs, AI systems are still relatively nascent. Our ambition should be not only to realise their potential, but to do so safely.
Alongside vital public and political conversations, there’s plenty of technical work to be done too. Already, some of the world’s brightest technological minds are channelling their talent into developing AI in line with society’s highest ethical values.
For example, as more people and institutions use AI systems in everyday life, interpretability — whether an AI system can explain how it reaches a decision — is critical. It’s one of the major open challenges for the field and one that’s energising researchers across the world.
A recent research collaboration between DeepMind and London’s Moorfields Eye Hospital demonstrated a system that not only recommended the correct referral decision for more than 50 eye diseases with 94 per cent accuracy but also, crucially, presented a visual map that showed doctors how its conclusions were reached.
This is just an early example of progress. Much more must be done, including deeper collaborations between scientists, ethicists, sociologists and others. Optimism should never give way to complacency. It’s precisely because AI technology has been the subject of so many hopes and fears, that we have an unprecedented chance to shape it for the common good.
Nils Gilman, vice-president for programs at the Berggruen Institute
The Berggruen Institute is an independent, non-partisan think tank that develops ideas to shape political and social institutions.
We stand on the cusp of a revolution, the engineers tell us. New gene-editing techniques, especially in combination with AI technologies, promise unprecedented new capacities to manipulate biological nature — including human nature itself. The potential could hardly be greater: Whole categories of disease conquered, radically personalised medicine and drastically extended mental and physical prowess.
For millenniums, western philosophy took for granted the absolute distinction between the living and the nonliving, between nature and artifice, between non-sentient and sentient beings. We presumed that we — we humans — were the only thinking things in a world of mere things, subjects in a world of objects. We believed that human nature, whatever it may be, was fundamentally stable.
But now the AI engineers are designing machines that they say will think, sense, feel, cogitate and reflect, and even have a sense of self. Bioengineers are contending that bacteria, plants, animals and even humans can be radically remade and modified.
The questions posed by the experiments are the most profound possible. Will we use these technologies to better ourselves or to divide or even destroy humanity? These technologies should allow us to live longer and healthier lives, but will we deploy them in ways that also allow us to live more harmoniously with each other? Who should be included in conversations about how these technologies will be developed? Who will have decision rights over how these technologies are distributed and deployed? Just a few people? Just a few countries?
To address these questions, the Berggruen Institute is building transnational networks of philosophers, technologists, policymakers and artists who are thinking about how AI and gene-editing are transfiguring what it means to be human. We seek to develop tools for navigating the most fundamental questions: Not just about what sort of world we can build, but what sort of world we should build — and also avoid building.
Stephanie Dinkins, artist and associate professor of Art, Stony Brook University; fellow, Data & Society Research Institute
Stony Brook University is the state university of New York at Stony Brook.
Data & Society Research Institute is a research institute based in New York and focused on the social and cultural issues arising from data-centric technological development.
My journey into the world of AI began when I befriended Bina48 — an advanced social robot that is black and female, like me. The videotaped results of our meetings form an ongoing project called “Conversations with Bina48”. Our interactions raised many questions about the algorithmically negotiated world now being constructed. They also pushed my art practice into focused thought and advocacy around AI as it relates to black people — and other non-dominant cultures — in a world already governed by systems that often offer us both too little and overly focused attention.
Because AI is no single thing, it is difficult to speak to its overarching promise; but questions abound. What happens when an insular subset of society encodes governing systems intended for use by the majority of the planet? What happens when those writing the rules — in this case, the code — might not know, care about, or deliberately consider the needs, desires, or traditions of the people their work impacts? What happens if the code-making decisions are disproportionately informed by biased data, systemic injustice, and misdeeds committed to preserve wealth "for the good of the people"?
I worry that AI development — which is reliant on the privileges of whiteness, men and money — cannot produce an AI-mediated world of trust and compassion that serves the global majority in an equitable, inclusive, and accountable manner. People of colour, in particular, cannot afford to consume AI as mere algorithmic systems. Those creating AI must realise that systems that work for the betterment of people who are not at the table are good, and that systems that collaborate with and hire those missing from the table are even better.
Andrus Ansip, European Commission vice-president for the digital single market
The European Commission is an institution of the European Union, responsible for proposing legislation, implementing decisions, upholding the EU treaties and managing the day-to-day business of the EU.
In health care today, algorithms can beat all but the most qualified dermatologists in recognising skin cancer. A recent study found that dermatologists could identify 86.6 per cent of skin cancers, while a machine using AI detected 95 per cent.
In Denmark, when people call 112 — Europe’s emergency number — an AI-driven computer analyses the voice and background noise to check whether the caller has had a heart attack.
AI is one of the most promising technologies of our times.
It is an area where several European Union (EU) countries have decided to invest in research, formulate a national strategy or include AI in a wider digital agenda.
Today, many breakthroughs in AI come from European labs.
The European Commission has long recognised the importance and potential of AI and robotics, along with the need for much higher investment and an appropriate environment that adequately addresses the many ethical, legal and social issues involved.
This can only be achieved by starting from common European values — such as diversity, nondiscrimination and the right to privacy — to develop AI in a responsible way. After all, people will not use a technology that they do not trust.
We are consulting widely — also with non-EU countries — to design ethical guidelines for AI technologies.
The project to build a Digital Single Market based on common pan-European rules creates the right environment and conditions for the development and take-up of AI in Europe — particularly concerning data, the lifeblood of AI. It frees up data flows, improves overall access to data and encourages more use of open data.
It aims to improve connectivity, protect privacy and strengthen cybersecurity.
AI has the potential to benefit society as a whole: For people going about their everyday lives, and for businesses.