
Microsoft trusts in FATE to keep AI safe

The framework is intended to guide the creation of harmless AI systems



Robots interact with people at the Dubai World Trade Centre during the Gitex Technology Week.
Image Credit: Clint Egbert/Gulf News

Dubai: The future of Artificial Intelligence (AI) should be guided by destiny, according to a senior Microsoft executive speaking at Gitex in Dubai on Monday.

Or rather, FATE.

FATE is Microsoft’s controlling mantra when dealing with AI, and stands for fairness, accountability, transparency, and ethics.

Many prominent public figures, including Elon Musk and Stephen Hawking, have called for greater caution when developing artificial intelligence. Musk called the technology more dangerous “than nuclear warheads.”

In response to the concerns surrounding the technology, Microsoft developed its own strategy for ensuring that AI would never be weaponised or used inappropriately. “We looked at the importance of being strong in our values,” said Ali Dalloul, Microsoft’s general manager for AI, “so we have a framework around how AI needs to be governed.”

First of all, Dalloul said, the technology needs to be fair. He used the example of applying for a bank loan and having your creditworthiness judged by an AI algorithm. “Now, if there’s bias in that data, we have a real problem,” he said. “The biased humans who have programmed that [algorithm] are also going to infect it … So it has to be really fair.”

Much has been made of the potential lack of human responsibility when applying AI to critical tasks, such as life-saving operations or driving cars with passengers. “What happens if a medical device, powered by an AI algorithm, makes a life or death decision?” Dalloul asked. “What happens if a self-driving car has an accident? Who is responsible?”

Deciding who is accountable, be it the developer, the programmer, the data scientist, or the insurance company, is “critical,” the executive said, leading Microsoft to make it the second principle of its FATE system.



Transparency, the third tenet of the system, is intended to deal with one of the technology industry’s most pressing issues in recent months: How customers’ data is harvested and then sold or used.

“We have made a commitment that we are accessing your data in a very respectable manner,” Dalloul said.

AI requires massive amounts of data to learn and function optimally, placing a growing burden on customers to hand over large volumes of often highly personal information.

Lastly, Dalloul said that ethics must make up the backbone of any AI system.

“We believe privacy is a human right, we fundamentally respect every law in every country we operate,” he said, adding that Microsoft’s corporate governance was strong enough for AI to be tested and applied safely without transgressing local laws in any way.
