Ethical A.I.?

We’ve all watched, read, and heard the stories: humans create intelligent robot – robot decides to kill humans. At this point the A.I.-gone-haywire trope has become the centrepiece of a stock narrative. Sapped of the novelty it had in the classic era of sci-fi – see Isaac Asimov’s short story collection I, Robot – it is now a lazy Hollywood formula, a MacGuffin used to cover up unimaginative storytelling.

But is the anxiety behind the trope valid? Is it realistic to think that, after giving birth to A.I., humanity will be devoured in its ravenous metal maw? To answer that question, it’s worth looking at a successful use of the A.I.-gone-haywire trope: the breakdown of the HAL 9000 supercomputer in Stanley Kubrick’s 2001: A Space Odyssey.

Over the course of the film, HAL gradually deteriorates: his disintegration begins with errors during a chess game and ends with the murder of most of the crew, culminating in a tense, drawn-out scene in which he is unplugged just before the film’s psychedelic finale. However, not all is as it seems. HAL’s malfunction stems not from inherent malevolence but from human error: he is given conflicting orders (one, relay all information accurately; two, conceal the true goal of the mission from the crew), which results in the A.I. equivalent of a nervous breakdown.

Although HAL is obviously fictional, he illustrates the primary danger of A.I.: not computers suddenly turning “evil”, but human error itself. If we fail to foresee future problems and place the right constraints on a superintelligent computer, catastrophic fallout is all but guaranteed.

A thought experiment demonstrates this further. Suppose there existed a supremely powerful A.I. with enough data and resources to eradicate the common cold. Among other constraints, it would be necessary to program the A.I. not to kill every human being who has the common cold. That may sound extreme, but from the machine’s point of view, killing every carrier could well be the most rational solution to the problem.
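The point can be made concrete with a toy sketch. The code below is purely illustrative, not a real A.I.: the candidate actions, the numbers, and the best_action planner are all invented for the sake of the thought experiment. It simply shows that an optimiser ranking plans by “cold cases eliminated” happily picks the catastrophic option unless a hard constraint is supplied by its human designers.

```python
# Toy illustration of the thought experiment (hypothetical actions and numbers).
CANDIDATE_ACTIONS = [
    # (description, cold cases eliminated, humans harmed)
    ("develop a universal cold vaccine",      9_000_000, 0),
    ("quarantine and treat every carrier",    7_500_000, 0),
    ("eliminate every human carrier",        10_000_000, 10_000_000),
]

def best_action(actions, forbid_harm=True):
    """Pick the action that eliminates the most cold cases.

    If forbid_harm is True, actions that harm humans are filtered out before
    optimisation -- this filter is the constraint humans must remember to supply.
    """
    if forbid_harm:
        actions = [a for a in actions if a[2] == 0]
    return max(actions, key=lambda a: a[1])

print("Unconstrained:", best_action(CANDIDATE_ACTIONS, forbid_harm=False)[0])
# -> "eliminate every human carrier"  (perfectly rational, utterly catastrophic)
print("Constrained:  ", best_action(CANDIDATE_ACTIONS, forbid_harm=True)[0])
# -> "develop a universal cold vaccine"
```

The “danger” here lives entirely in the line that filters out harmful actions: leave it out, and the machine does exactly what it was asked to do.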

Proposing rules for artificial intelligence is nothing new. In the aforementioned I, Robot, Asimov set out his now-famous Three Laws of Robotics. My contention is that the primary danger A.I. poses to humans is not the malevolent agency of the technology itself, but our failure to apply adequate constraints to it, i.e., human error. If the constraints are adequate, there is no reason to be afraid of the technology.

Our everyday experience of morality is not unlike a set of constraints imposed upon an A.I. These constraints – imposed by biology, culture, or a mixture of both – govern our day-to-day actions, often unconsciously. But human beings make mistakes: we are inconsistent, capricious, and solipsistic. Our moral values shift from situation to situation, and we often lie to ourselves about how moral we are; how we view ourselves and how we act are constantly at odds. The advantage of A.I. is that, with enough careful deliberation, morality can be programmed into it. Unlike human beings, an A.I. that is programmed correctly and given enough information can be expected to act consistently.

For this to work, scientists and technicians would have to take a cautious approach when developing new technologies. Constraints would have to be deliberately drawn up and tested on A.I. in low-risk situations, and A.I. would have to operate in a way that allows human guidance and interaction with its systems. If all this is done correctly, there’s no reason why we can’t have our own personal HAL 9000 at some point in the distant – or maybe not so distant – future.