5e Artificial Intelligence: A Cross-Genre Case Study

Research team: Roslynn and Raymond Haynes

The rapidly evolving field of Artificial Intelligence (AI) raises radical and controversial questions for sociology, philosophy, ethics, cognitive science, and neuroscience. First proposed as a scientific concept by Alan Turing in 1950, it already had a much longer ancestry in fiction, beginning with Villiers de l'Isle-Adam's symbolist novel L'Ève future (1886). After sporadic reappearances in science fiction and films through the 20th century, AI has now become the focus of a growing number of science novels, dramas, films, science documentaries, and TV series exploring the implications of its permeation of society at all levels, from industry, communications, and the military to the role of androids or hubots as servants, carers, and sexual partners. As androids have become increasingly sophisticated, the question of what constitutes "human" has become more complex and provocative than was suggested in Donna Haraway's seminal essay "A Cyborg Manifesto" (1985).

Despite the obvious utilitarian advantages of AI in fields such as communications and industry (self-driving cars making "rational" decisions are already a reality), there is vigorous debate over its potential dangers. Psychologists, sociologists, philosophers, scientists, and IT experts, including Bill Gates, have voiced their concerns, and Stephen Hawking has gone so far as to predict that it "could spell the end of the human race" (Jones 2014). There are increasing calls for a moratorium on AI research until society (including the legal system) is better prepared to deal with the potential social and ethical consequences. In such a climate, narratives conceptualizing an AI-permeated society and portraying the interrelations of humans and post-humans play multiple roles in visualizing a future society and suggesting potential risks.

The project will examine how AI narratives explore the following questions: What are the potential sociological and political implications of AI in industry, communications, the family, and the military (e.g. AI-directed drones for remote warfare)? What level of autonomy should be given to hubots? What moral responsibilities are owed to android servants? Are we being forced to revise the conceptual meaning of "human" in cognitive science and neuroscience? How far can such narratives provide a platform for the extension of empathy to embrace Haraway's criteria of "otherness, difference, and specificity" in relation to identity politics concerning post-humans?