Screen grab, Ghost in the Shell. Mamoru Oshii, 1995.

In April this year, thanks to Stephanie Dinkins and Francis Tseng, artists-in-residence at New Inc., I experimented with a workshop called ‘Imagining Ethics’. The workshop invites participants to imagine and work through everyday scenarios in a near future with autonomous vehicles and driving. These scenarios are usually related to moral conflicts, or interpersonal, social, and cultural challenges. This post is about situating this experiment within the methods and approaches of design fiction, a technique to speculate about near futures.

Julian Bleecker writes about design fiction as “totems” through which projections into the future may be assembled:

“Design fictions are assemblages of various sorts, part story, part material, part idea-articulating prop, part functional software. …are component parts for different kinds of near future worlds. They are like artifacts brought back from those worlds in order to be examined, studied over. They are puzzles of a sort….They are complete specimens, but foreign in the sense that they represent a corner of some speculative world where things are different from how we might imagine the “future” to be, or how we imagine some other corner of the future to be. These worlds are “worlds” not because they contain everything, but because they contain enough to encourage our imaginations, which, as it turns out, are much better at pulling out the questions, activities, logics, culture, interactions and practices of the imaginary worlds in which such a designed object might exist.”

I’m interested in what speculation about near futures means for discussions of ethics in the context of ubiquitous computing and artificial intelligence. Are we creating ethics for now, or for the future, and if for the future, then how do we envisage future scenarios that require ethical decision-making?

Ethics are frameworks for values, some of which are codified in law; some, related to AI-based technologies, currently challenge the law. Autonomous driving worries the law: the narrative around autonomous vehicles has tended to focus on the opportunities and limitations of software to make ‘ethical’ decisions. Simply put, how can driverless car software be held responsible for making decisions that may result in the loss of human life? I’ve argued that this approach places software and its creation at the centre of the construction of ethics, rather than the wider social, political, cultural, and economic conditions in which autonomous driving is, and will be, situated. Is it possible that the ethical implications of autonomous driving will involve more than just the philosophical conundrums defined by the Trolley Problem?
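To make concrete just how reductive this software-centric framing can be, here is a minimal, purely illustrative Python sketch of a trolley-problem-style decision rule. It is not drawn from any real system; the names (Option, choose_action) and the harm scores are invented for illustration, and the point is how much such a rule leaves out.

    # A deliberately reductive, hypothetical sketch -- not any real vehicle's code.
    # It shows what "software making an ethical decision" looks like when the
    # trolley problem is taken literally: pick whichever option scores lower on
    # a single invented "harm" number.

    from dataclasses import dataclass

    @dataclass
    class Option:
        action: str           # e.g. "stay_course" or "swerve" (invented labels)
        expected_harm: float  # a made-up harm estimate between 0.0 and 1.0

    def choose_action(options):
        # Everything contested is hidden inside expected_harm: who is counted,
        # how harm is measured, who set the weights, and who answers for the outcome.
        return min(options, key=lambda o: o.expected_harm).action

    print(choose_action([Option("stay_course", 0.8), Option("swerve", 0.3)]))
    # Prints "swerve" -- a tidy answer that says nothing about responsibility,
    # law, insurance, or the people and institutions around the vehicle.

A rule like this can always be written; what it cannot do is account for the conditions in which such a choice would ever arise, or for who is answerable afterwards.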

Drawing on Mike Ananny’s definition of technology ethics as “a mix of institutionalized codes, professional cultures, technological capabilities, social practices, and individual decision making”, I claim that ethics is contextually and situationally produced. Imagining Ethics takes the form of a workshop to identify and discuss these contexts and situations.

So, at New Inc, a group of people sat down for a little over two hours to work through scenarios projected to occur five years from the present moment. Here are the scenarios:

  • Develop a Quick Start Guide (QSG) for the new autonomous vehicle owner (following inspiration from the Near Future Laboratory’s QSG). What kinds of information would a new owner of an autonomous vehicle need?
  • Script an interaction between a thirteen-year-old and her parents in which the young person is trying to negotiate hiring an autonomous vehicle to go out to a movie with her friends, without any adult chaperones.
  • Two security guards are in charge of a garage full of driverless trucks; but, one day, a truck goes missing. Develop the synopsis of a movie of any genre (Rom-com, Sci fi, Road movie, Zombie film, etc) based on this starting point and ending with the guards finding the truck. (This one was a little different from the other two).

In terms of process, the group were given these starting points but not much else. The idea was that they would speculate ‘up’ into how these scenarios might unfold; there were very few rules about how to go about this speculation. In giving the group a broad remit, I attempted to evoke the aspirations, concerns, and questions they might have about autonomous driving. I aimed to emphasise the social-political-cultural and psychographic aspects of decision-making in a future everyday with autonomous vehicles and driving.

The point was not to be pedantic about how a near future technology might work, or to predict exactly what the future might be like. What was important were the conversations and negotiations involved in sketching out how existing and near future artifacts might interact with human conditions of social and political life.

Imagination can be valuable in thinking about applications of technology in society; I refer to the imagination in the sense that Sheila Jasanoff and Sang-Hyun Kim do, as a “crucial reservoir of power and action”. Their book, Dreamscapes of Modernity: Socio-technical Imaginaries and the Fabrication of Power, discusses socio-technical imaginaries: collectively sustained visions of how science and technology come to be embedded in society. STS (Science and Technology Studies) aims to bring “social thickness and complexity” into the appreciation of technological systems. However, the authors argue that STS lacks “conceptual frameworks that situate technologies within the integrated material, moral, and social landscapes that science fiction offers up in abundance.” Thus they propose socio-technical imaginaries as “collectively held, institutionally stabilized, and publicly performed visions of desirable futures, animated by shared understandings of social life and social order attainable through, and supportive of, advances in science and technology.”

While an imagination of a future does not necessarily contribute to an imaginary in a causal or linear sense, I bring the two words together to suggest that parallel processes are underway at different scales. An imagination may be private and local, embodying contextual aspirations and anxieties about a future scenario, whereas an imaginary may operate more like a Foucauldian apparatus, powerfully shaping the emergence of technologies through the interaction of multiple social agents.

Cinema is a space where future imaginations of cities have been generated. There is a visual and textural thread running through the future cities of Ghost in the Shell (Mamoru Oshii, 1995), adapted from Masamune Shirow’s manga classic, Total Recall (Paul Verhoeven, 1990/Len Wiseman, 2012), Elysium (Neill Blomkamp, 2013), Blade Runner (Ridley Scott, 1982), and A.I. (Steven Spielberg, 2001), among others. (Why filmmakers continue to replicate particular architectural or visual tropes in all these future city visions is another matter.) These films depict future cities as vertical, with the middle and upper classes escaping upwards and the poor (and sometimes, the Resistance) living in labyrinthine warrens or subterranean cities. Rain or a constant drizzle, slick roads, and pools of water appear as another common motif in many of these urban dystopias, possibly signalling flooding from climate change and resulting in a palette of blues and greys.

The visual representation of everyday life in these future cities is a particular project that brings us closer to the work of design fiction. How did Spielberg come up with the idea that the side of the cornflakes box in Minority Report would be a screen showing cartoons? Or that Tom Cruise’s John Anderton, head of the Precrime Division, would investigate criminal cases through a gestural interface? Philip K. Dick certainly didn’t write these things into the short story the film is based on. That interface has become iconic in discussions of how cinema shapes visions of the future. The sliding, swiping, twirling of dials, and pinching of the screen to zoom out didn’t exist in 2002, when the film was released; we didn’t even have social media or smartphones.

Much of this is intentional, writes David Kirby. Cinema has become a space for scientists, technologists, and filmmakers to collaborate on “diegetic prototypes”: ‘real’ objects that people in the film’s fictional world use convincingly. As Kirby notes, by engaging seriously with design to create a vision of the future, filmmakers and scientists create legitimacy for how the future will look at a granular, everyday level. And the granular and everyday are important for thinking about how we will actually encounter future technologies. As Julian Bleecker writes, “everyday aspects of what life will be like [in the near future] — after the gloss of the new purchase has worn off — tell a rich story”.

It is in this space of the mundane and the everyday, in a near future with driverless cars, that we as consumers, customers, scientists, scholars, activists, and lawyers may have to start engaging with new framings of ethics. Some of these scenarios may not be so different from what we encounter now, in the sense that social and economic inequalities will not cease to exist with the arrival of autonomous driving. How do you hold a fleet taxi service like Uber, operating autonomous vehicles, accountable for an accident? Who or what is responsible when an autonomous vehicle’s mapping system avoids “high crime neighbourhoods” and thereby doesn’t offer services to an entire community? How might a ‘victim’ in an accident – a road accident, a data breach, a data exposure – involving a driverless car claim insurance?

Spurred by opportunities in the fictive and the fictional, the Imagining Ethics workshop method is part of ongoing research and practice that seeks to understand how ethics may be reframed in a society with ubiquitous computing and artificial intelligence. What counts as a moral challenge in such a society, and how will decisions about such challenges be made? Is it possible to nurture people’s aspirations and imaginations into imaginaries of ethics in artificial intelligence, imaginaries that alleviate the social, political, cultural, and economic conditions of life in present and future societies with AI? It is time to find out.

Maya Ganesh is a Berlin-based researcher, writer, and information activist. She works at Tactical Tech and is a doctoral candidate at Leuphana University.

References

Ananny, M. 2016. Toward an Ethics of Algorithms: Convening, Observation, Probability, and Timeliness. Science, Technology, & Human Values 41(1): 93–117.

Bleecker, J. 2009. Design Fiction: A Short Essay on Design, Science, Fact, and Fiction. Near Future Laboratory Blog. Accessed online: http://blog.nearfuturelaboratory.com/2009/03/17/design-fiction-a-short-essay-on-design-science-fact-and-fiction/

Jasanoff, S. and Kim, S-H. (eds.) 2015. Dreamscapes of Modernity: Socio-technical Imaginaries and the Fabrication of Power. Chicago, IL: University of Chicago Press.

Kirby, D. 2010. The Future is Now: Diegetic Prototypes and the Role of Popular Films in Generating Real-world Technological Development. Social Studies of Science 40(1): 41–70.