Ethical A.I.?

We’ve all watched, read, and heard the stories: humans create intelligent robot – robot decides to kill humans. At this point the A.I.-gone-haywire trope has become the centrepiece of a stock narrative; sapped of the novelty it had in the classic era of sci-fi – see Isaac Asimov’s short story collection I, Robot – it is now a lazy Hollywood formula, a MacGuffin used to cover up unimaginative storytelling.

But is the anxiety behind the trope valid? Is it realistic to think that, after giving birth to A.I., humanity will be devoured in its ravenous metal maw? To answer that question, it’s useful to look at one successful use of the A.I.-gone-haywire trope: the breakdown of the HAL 9000 supercomputer in Stanley Kubrick’s 2001: A Space Odyssey.

Over the course of the film, HAL gradually declines, culminating in a tense, drawn-out scene – just before the film’s psychedelic finale – in which he is slowly unplugged. HAL’s disintegration starts with errors in a game of chess and ends with the murder of most of the crew. However, not all is as it seems. HAL’s malfunction is due not to inherent malevolence but to human error: he is given conflicting orders (one: relay all information accurately; two: don’t tell the crew about the end goal of the mission), and the contradiction results in the A.I. version of a nervous breakdown.

Although obviously fictional, HAL’s breakdown illustrates the primary danger of A.I. – not computers suddenly turning “evil”, but human error itself. If we fail to foresee future problems and place the correct constraints on a superintelligent computer, catastrophic fallout is all but guaranteed.

A thought experiment can demonstrate this further. Suppose there existed a supremely powerful A.I. with data and resources adequate to eradicate the common cold. Among other constraints, it would be necessary to program the A.I. not to kill every human being carrying the common cold. That may sound extreme, but killing every carrier could well be the most rational solution to the problem as stated.
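A minimal sketch of that specification problem, in Python – where every plan, number, and scoring function is hypothetical and invented purely for illustration – shows the shape of the failure: an optimiser told only to minimise remaining cold cases prefers the monstrous plan, while one explicitly written constraint rules it out.

```python
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    cases_remaining: int  # common-cold cases left once the plan has run
    humans_harmed: int    # people the plan harms along the way

# Hypothetical candidate plans with invented numbers.
plans = [
    Plan("develop a universal vaccine", cases_remaining=1_000, humans_harmed=0),
    Plan("eliminate every carrier", cases_remaining=0, humans_harmed=1_000_000_000),
    Plan("do nothing", cases_remaining=1_000_000_000, humans_harmed=0),
]

def naive_score(plan: Plan) -> float:
    # The objective exactly as stated: minimise remaining cases.
    return plan.cases_remaining

def constrained_score(plan: Plan) -> float:
    # Same objective, but any plan that harms humans is ruled out entirely.
    return float("inf") if plan.humans_harmed > 0 else plan.cases_remaining

print(min(plans, key=naive_score).name)        # -> eliminate every carrier
print(min(plans, key=constrained_score).name)  # -> develop a universal vaccine
```

The point is not the code but the failure mode it caricatures: the naive objective is satisfied perfectly by the catastrophic plan, so the constraint has to be written down explicitly – the machine will not infer it.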

Proposing rules for artificial intelligence is nothing new. In the aforementioned I, Robot, Asimov proposed the now-famous Three Laws of Robotics. My contention is that the primary danger A.I. poses to humans is not the malevolent agency of the technology itself, but our failure to apply adequate constraints to it, i.e., human error. If the constraints are adequate, there is no reason to be afraid of the technology.

Our everyday experience of morality is not unlike a set of constraints imposed upon an A.I. These constraints – imposed by biology, culture, or a mixture of both – govern our day-to-day actions, often unconsciously. But human beings make mistakes: we are inconsistent, capricious, and solipsistic. Our moral values shift from situation to situation, and we often lie to ourselves about how moral we are. How we view ourselves and how we act are constantly at odds. The advantage of A.I. is that, with enough careful deliberation, morality can be programmed into it. Unlike human beings, an A.I. that is programmed correctly and given enough information can be expected to act consistently.

For this to work, scientists and technicians would have to take a cautious approach to developing new technologies. Constraints would have to be deliberately drawn up and tested with A.I. in low-risk situations. A.I. would have to operate in a way that allows human guidance and interaction with its systems. If all this is done correctly, there’s no reason why we can’t have our own personal HAL 9000 at some point in the distant – or maybe not so distant – future.

The future is dead

Any night-time scroll through social media is enough to tell you that the future is dead, or at least dying. It seems that it’s the function of marketing campaigns to tirelessly rehabilitate the retro until there’s no space for the new. The sense of current temporal specificity – i.e. the sense of 2018 feeling like it’s 2018 – has been replaced with a swirling collage of 20th century styles. In Mark Fisher’s words, “to live in the 21st century is to have 20th century culture distributed by high-speed internet”. This over-tolerance of the past is destroying the creative essence of contemporary fashion, music, art, and pop culture in general.

This isn’t a case of mild retro-fetishism, but a wholesale formal restriction. The constant recreation of the past is not undertaken in the spirit of homage or criticism; it is an unconscious choice. Past forms are the only forms that seem viable to us. At some point in the last twenty years, it seems, the past colonised the present and killed the future.

It’s easy to dismiss this trend as a kind of mass nostalgia, something pop culture will recover from if “shaken up” a bit. This is to misunderstand the cause. The zombified past is kept animated by the dynamism and massive data capacities of contemporary technology. Culture is no longer the driving force in human society, as it was in the 20th century; technology is, and technology is now the place where we experience a kind of permanent obsolescence. Creative momentum has been transferred from culture to technology itself, and the result is a severe change in the collective phenomenology of time. Paradoxically, this makes contemporary technology one of the strongest forces of cultural conservatism ever created.

Two thought experiments demonstrate the strangeness of our collective predicament. The first, from Mark Fisher, asks us to imagine how people would react if the music of today were sent back twenty years. It’s hard to believe that anyone in 1998 – perched on the end of a decade that gave them Radiohead, Nirvana, and the Wu-Tang Clan – would be any more than mildly interested in the music of the future. For any other twenty-year stretch of the 20th century it’s another story. A listener raised on traditional ’40s pop, for example, would have been unable to process the unashamedly avant-garde proto-punk of the Velvet Underground.

The second thought experiment is less an experiment than a reflection on two twenty-year periods: 1958–1978 and 1998–2018. In terms of musical culture, the first period saw the end of the rock ‘n’ roll era, the beginning and end of Beatlemania, the psychedelic rock of Jimi Hendrix, the blues rock of Led Zeppelin, the start of heavy metal, the creation of funk and later disco, the start of hip-hop, the apex of soul music, the birth and death of punk, and the beginnings of post-punk, goth, new wave, and early electronic music. It’s true that new genres have cropped up in the last twenty years – dubstep, grime, and emo are probably the most culturally significant examples – but there clearly haven’t been as many developments as in ’58–’78. Even the intervening decades, ’78–’98, were noticeably more culturally fertile.

A side effect of this flattening of cultural time is that the concept of the “futuristic” is now just as stylistically significant as the idea of the “Baroque” or the “Victorian”. This democratising of styles means that the future as a potential point in time is replaced by the “futuristic” as a timeless, static aesthetic, as easily selected or discarded as any other font or theme.

The temporal crisis can be seen as epiphenomenal, unfolding in line with the logic of our current political and socioeconomic systems. In his essay “Postmodernism, or, the Cultural Logic of Late Capitalism”, Fredric Jameson links this cultural stagnation to the fact that “aesthetic production … has become integrated into commodity production generally: the frantic economic urgency of producing fresh waves of ever more novel-seeming goods… assigns an increasingly essential structural function and position to aesthetic innovation and experimentation”. In other words, it is relentless commodification that fuels our timeless dystopia – our sense of a future is being buried under the sheer volume of products. Coupled with technology that gives us instant, frictionless access, we are constantly pushed to reconfigure older styles, unable to see past the past. This is clearly visible in the contemporary music industry: as the structures of dissemination – streaming, downloads, social media – grow ever more complex and totalising, the novelty and innovation of the music itself diminish.

It is not a coincidence that this mass failure of cultural production, this inability to envisage an artistic future, coincides with the apparent death of all political alternatives outside of neoliberalism. The closing of the aesthetic imagination is a symptom of the closing of the political imagination.

O brave new world… genetic editing in the 21st century

The creation of the CRISPR/Cas9 biotechnology is the most significant development in the field of genetic engineering in recent memory. Unlike previous gene-editing tools, CRISPR/Cas9 allows scientists to directly target pieces of the genome and edit them with “molecular scissors”, removing or replacing strands of undesirable DNA with unprecedented accuracy. It can even change the DNA in human sex cells and early-stage embryos, causing permanent and irreversible changes to the germline.

The CRISPR/Cas9 process is a mechanism that already exists in biology, as part of the bacterial immune system. CRISPR acts as a kind of vaccination hard drive for the bacterium, storing short strands of DNA from viruses that have previously attacked the cell. This DNA is transcribed into RNA, which then binds to a Cas9 protein. Using the RNA as a search code, the protein cuts the DNA of attacking viruses at the specific points that match the guide RNA, disabling the selected virus enough for the cell to destroy it. It is this process that forms the basis of the gene-editing technology.
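As a loose analogy only, the search-and-cut logic can be sketched in a few lines of Python. The sequences below are invented, and real Cas9 targeting also depends on an adjacent PAM sequence and tolerates some mismatches – none of which is modelled here.

```python
def find_and_cut(dna: str, guide: str) -> tuple[str, str] | None:
    """Cut the strand at the first site that exactly matches the guide."""
    site = dna.find(guide)               # search the genome for the stored "mugshot"
    if site == -1:
        return None                      # no match: nothing is cut
    cut_point = site + len(guide) // 2   # cut inside the matched region
    return dna[:cut_point], dna[cut_point:]

# Invented sequences, purely for illustration.
viral_dna = "ATGGCGTACGTTAGCCGTAAGCTAGGCTA"
guide = "TACGTTAGC"                      # short strand saved from a previous attack

fragments = find_and_cut(viral_dna, guide)
if fragments:
    print("cut into:", fragments)        # a severed genome leaves the virus disabled
```

Gene editing repurposes the same trick: supply a synthetic guide that matches the stretch of DNA you want removed, and the cut falls exactly there.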

The initial medical applications of the technology seem obvious. Blood disorders like haemophilia and sickle-cell anaemia will more than likely be tackled first, because the faulty cells can be taken out of the body, modified, and replaced. Genes linked to big killers like cancer and heart disease will most likely be the next targets.

Beyond these important but fairly standard medical uses, the existence of a technology that can alter human DNA presents us with profound social and philosophical questions. For example, it is probable that, in time, CRISPR will be used for cosmetic purposes: tweaking genes that regulate muscle development, eye colour, and hair growth will likely become the norm. What bodies will regulate the technology? What private institutions, if any, will benefit from its usage? What will be the broader psychological effects of this pursuit of biological perfection? Considering how rapidly the science is advancing, these questions need to be answered sooner rather than later.

If we assume that new technology maps onto existing institutional and social structures, then we should expect massive discrepancies in access to CRISPR therapies. 82% of the wealth created in 2017 went to the top 1%. Half the world’s population – roughly 3.8 billion people – owns as much wealth as the richest 42 individuals. Looking at these statistics, it is hard to believe that the benefits of CRISPR will be distributed anywhere near equally.

We can also expect huge differences in how countries decide to regulate the technology. If the therapies are made internationally available, we should expect the launch of a genetic tourism industry: the wealthiest members of society travelling to the countries with the laxest regulation in order to boost genes that may – if the genes regulating intelligence, productivity, longevity, etc. are ever isolated and modified – increase their wealth to an even greater degree. We may also see an underground, unregulated gene-editing market grow beneath the legitimate institutions if the official channels of access to CRISPR’s benefits are collectively considered too restrictive. These potential consequences, although currently science fiction, may become science fact within decades.

Due to the simplicity of the mechanism – it involves just two key molecules – and the fact that it is, in theory, a “one and done” therapy, CRISPR has the potential to advance well ahead of any restrictive legislation. Its simplicity should not be underestimated. Jennifer Doudna – one of the creators of the technology and a professor of chemistry and of molecular and cell biology at UC Berkeley – described using previous gene-editing tools as “having to rewire your computer each time you want to run new software”. CRISPR/Cas9, by contrast, she described as “software for the genome”. It is worth noting that she has, in the past few years, called for a “worldwide moratorium” on germline editing.

It is likely that the final consequences of this technology will outstrip even the most fanciful science-fiction-inspired predictions, because human beings have a fairly poor record when it comes to predicting the outcomes of new innovations. Who knew, when the automobile was invented in the latter half of the nineteenth century, that it would become one of the primary contributors to the transformation of our atmosphere, causing irreversible changes to the ecological makeup of our planet? That example alone should demonstrate why it is vital that we proceed with caution.