Silicon Valley continues to wrestle with the moral implications of its inventions — often blindsided by the public reaction to them. Google was recently criticised for its work on ‘Project Maven’, a Pentagon effort to develop artificial intelligence (AI) for military drones that can distinguish between different objects captured in drone surveillance footage.

The company could have foreseen that a potential end use of this technology would be fully autonomous weapons — so-called “killer robots” — which various scholars, AI pioneers and many of its own employees vocally oppose. Under pressure — including an admonition that the project runs afoul of its former corporate motto, “Don’t Be Evil” — Google said it wouldn’t renew the Project Maven contract when it expires next year.

To quell the controversy surrounding the issue, Google last week announced a set of ethical guidelines meant to steer its development of AI. Among its principles: The company won’t “design or deploy AI” for “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people”. That’s a reassuring pledge.

What’s harder is figuring out, going forward, where to draw the line — to determine what, exactly, “cause” and “directly facilitate” mean, and how those limitations apply to Google projects. To find the answers, Google, and the rest of the tech industry, should look to philosophers, who’ve grappled with these questions for millennia. Philosophers’ conclusions, derived over time, will help Silicon Valley identify possible loopholes in its thinking about ethics.

The realisation that we can’t perfectly codify ethical rules dates at least to Aristotle, but we’re familiar with it in our everyday moral experience, too. We know we ought not lie, but what if it’s done to protect someone’s feelings? We know killing is wrong, but what if it’s done in self-defence? Our language and concepts seem hopelessly Procrustean when applied to our multifarious moral experience. The same goes for the way we evaluate the uses of technology.

In the case of Project Maven, or weapons technology in general, how can we tell whether artificial intelligence facilitates injury or prevents it?

The Pentagon’s aim in contracting with Google was to develop AI to classify objects in drone video footage. In theory, at least, the technology could be used to reduce civilian casualties that result from drone strikes. But it’s not clear whether this falls afoul of Google’s guidelines. Imagine, for example, that artificial intelligence classifies an object, captured by a drone’s video, as human or non-human and then passes that information to an operator who makes the decision to launch a strike. Does the AI that separates human from non-human targets “facilitate injury”? Or is the resulting injury from a drone strike caused by the operator pulling the trigger?

New ethical guidelines

On one hand, the enhanced ability of the drone operator to visually identify humans and, potentially, refrain from targeting them, could mean the AI’s function is to prevent harm, and it would, therefore, fit within the company’s new ethical guidelines. On the other hand, the fact that the AI is a component of an overall weapons system that’s used to attack targets, including humans, could mean the technology is ultimately employed to facilitate harm, and therefore its development runs afoul of Google’s guidelines.

Sorting out causal chains like this is challenging for philosophers and can lead us to jump through esoteric metaphysical hoops. But the exercise is important, because it forces the language we use to be precise and, in cases like this, to determine whether someone, or something, is rightly described as the cause, direct or indirect, of harm. Google appears to understand this, and its focus on causation is appropriate, but its gloss on the topic is incomplete.

One problem its guidelines don’t adequately address is the existence of so-called “dual-use” technologies, which can be put to civilian or military purposes. A drone’s autopilot system, for example, can be used for a task as innocuous as recording a snowboarder travelling down a mountain, or it can allow a loitering munition to hover above a battlefield while its operator scrutinises the area below for targets. Which of these is the “primary purpose”?

A more rigorous set of ethical guidelines would make it clear how corporations would approach the development of ostensibly innocent technologies that could be co-opted for “evil” uses. While Google’s guidelines state, “We will work to limit potentially harmful or abusive applications”, it would be comforting to see a more robust explanation of how the company will evaluate the potential for harm or abuse, and how it distinguishes a technology’s primary use from its other uses, since the uses of an invention often become clear only much later.

In the context of internet surveillance, Google’s new guidelines place constraints on what data the company will collect, saying it will shun “technologies that gather or use information for surveillance violating internationally accepted norms”. But “accepted norms” isn’t a sufficient catch-all, because in some countries, spying on everyone, all the time, is the accepted practice.

Indeed, it presents a classic problem in philosophy: You can’t justify an action by pointing to what everyone else is doing. There has to be a way to determine the difference between what people do and what they ought to do — otherwise, no one ever does the wrong thing. Google’s guidance falls short because it relies on a relatively nebulous concept — “norms” — rather than an articulation of company values. For a statement of principles such as Google’s to mean something, companies have to know their own values, be committed to them, and then sort through these questions in tandem with the technological development process; that sorting can’t be accomplished by the development process alone.

Necessary condition

What Google has now is a start: It provides what philosophers would call a necessary condition. It has articulated that, at a minimum, the company should avoid developing technology that falls afoul of international norms. But this still leaves too much wiggle room, since widespread data collection is commonly practised internationally — and is, therefore, arguably an accepted norm — yet is also widely regarded as harmful by civil libertarians.

Scientists search for answers, but as the work of the University of South Carolina’s Justin Weinberg illustrates, philosophers’ contributions to a decision-making process can be hard to spot, because the value of philosophy often lies in discovering additional questions. Questions about the ethical use of technology may seem tangential to developing new and innovative technology. But the value of a technology lies in its application, and a company like Google is valued on how its technology is applied, so unpacking these philosophical questions — and meaningfully strengthening ethical guidelines — adds value to the technology itself. Identifying ambiguities in a company’s ethical reasoning, then, is good for both the society impacted by the technology and the corporate bottom line.

As a leading tech company, Google shapes technologies that affect billions of people, and its commitment to answering these philosophical questions sets the stage for the rest of Silicon Valley — it can initiate a race to the top in the ethical principles to which tech companies commit themselves.

To “get this right,” in Google’s words, doesn’t just mean developing a mission statement. It means crafting an ethics policy sensitive to ambiguities. It means considering different understandings of causation, harm and moral justification. And it means making sure the ethical core guiding a company meets the technical challenge of constructing artificial intelligence. All that means incorporating the input of philosophers.

— Washington Post

Ryan Jenkins is an assistant professor of philosophy and a senior fellow at California Polytechnic State University, and the director of ethics and policy for WhiteFox Defense Technologies, Inc.