Street view, Calcutta, 2010.


According to its author JG Ballard, Crash is ‘the first pornographic novel based on technology’. From bizarre to oddly pleasing, the book’s narrative drifts from one erotically charged description to another of mangled human bodies and machines fused together in the moment of a car crash. The “TV scientist” Vaughan and his motley crew of car-crash fetishists seek out crashes in the making, even causing them, just for the thrill of it. The protagonist of the tale, one James Ballard, becomes deeply entangled with Vaughan and his ragged band after his own experience of a crash. Vaughan’s ultimate sexual fantasy is to die in a head-on collision with the actress Elizabeth Taylor.

Like an STS scholar-version of Vaughan, I imagine what it might be like to arrive on the scene of a driverless car crash, and draw maps to understand what happened. Scenario planning is one kind of map-making to plan for ‘unthinkable futures’.

The ‘scenario’ is a phenomenon that became prominent during the Korean War, and through the following decades of the Cold War, as a way for the US army to plan its strategy in the event of nuclear disaster. Peter Galison describes scenarios as a “literature of future war”, “located somewhere between a story outline and ever more sophisticated role-playing war games”, “a staple of the new futurism”. Since then, scenario planning has been adopted by a range of organisations, and features in the modelling of risk and the identification of errors. Galison cites the Boston Group as having written a scenario – their very first one – in which feminist epistemologists, historians, and philosophers of science running amok might present a threat to the release of radioactive waste from the Cold War (“A Feminist World, 2091”).

The applications of the Trolley Problem to driverless car crashes are a sort of scenario-planning exercise. Now familiar to most readers of mainstream technology reporting, the Trolley Problem is presented as a series of hypothetical situations with different outcomes, derived from pitting consequentialism against deontological ethics. Trolley Problems are constructed as either/or scenarios in which a single choice must be made. MIT’s Moral Machine project materialises this thought experiment with an online template in which the user completes scenarios by instructing the driverless car which kinds of pedestrian to avoid in the case of brake failure: runaway criminals, pets, children, parents, athletes, old people, or fat people. Patrick Lin has a short animation describing the application of the Trolley Problem to the driverless car scenario.
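The either/or structure of these scenarios can be made concrete in a few lines of code. This is a deliberately crude sketch of how such a forced single choice might be encoded; the pedestrian categories and numeric weights are hypothetical illustrations of the logic, not anything taken from the Moral Machine project itself.

```python
# A minimal sketch of a Trolley-Problem-style forced choice. The car must
# swerve into one of two groups; the 'cost' weights below are invented
# for illustration, not drawn from any real system.

def choose_path(left_group, right_group, weights):
    """Return the path to swerve into: the one whose occupants
    carry the lower total 'cost' of harm. A tie is broken arbitrarily
    in favour of the left path -- the scenario admits no third option."""
    cost = lambda group: sum(weights[p] for p in group)
    return "left" if cost(left_group) <= cost(right_group) else "right"

# Hypothetical weights: a higher number means 'avoid harming at greater cost'.
weights = {"child": 10, "adult": 5, "pet": 2}

# The child (cost 10) outweighs the adult and pet together (cost 7),
# so the car swerves right.
print(choose_path(["child"], ["adult", "pet"], weights))  # prints "right"
```

The sketch makes the essay’s point visible: whatever moral weight the categories are supposed to carry, the computation reduces them to a single comparison with exactly two possible answers.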


Applications of the Trolley Problem, as well as of the Pascalian Wager (Bhargava, 2016), attempt to create what-if scenarios. These scenarios guide the technical development of what has become both a holy grail and a smokescreen in talking about autonomous driving (in the absence of an actual autonomous vehicle): how can we be sure that driverless car software will make the right assessment about the value of human lives in the case of an accident?

These scenarios and their outcomes are being referred to as the ‘ethics’ of autonomous driving. In the development of driverless cars we see an ambition to develop what James Moor refers to as an ‘explicit ethical agent’ – one that is able to calculate the best action in an ethical dilemma. I resist this construction of ethics because I claim that relationships with complex machines, and the outcomes of our interactions with them, are not either/or, and are perhaps far more entangled, especially in crashes. Applications of Trolley Problems assume that crashes take place in certain specific ways, that ethical outcomes can be computed in a logical fashion, and that this will amount to accountability for machine behaviour. There is an assumption that the driverless car is a kind of neoliberal subject itself, which can somehow account for its behaviour just as it might some day just drive itself (thanks to Martha Poon for this phrasing).

Crash scenarios are particular moments that are being planned for and against; what kinds of questions does the crash event allow us to ask about how we’re constructing relationships with and around machines with artificial intelligence technologies in them? I claim that when we say “ethics” in the context of hypothetical driverless car crashes, what we’re really asking is “who is accountable for this?”, or “who will pay for this?”, or “how can we regulate machine intelligence?”, or “how should humans apply artificially intelligent technologies in different areas?”. Some of these questions can be drawn out in terms of an accident involving a Tesla semi-autonomous vehicle.

In May 2016, a US Navy veteran was test-driving a Model S Tesla semi-autonomous vehicle. The driver, who was allegedly watching a Harry Potter movie with the car in ‘autopilot’ mode, drove into a large tractor-trailer whose white surface was mistaken by the computer vision software for the bright sky. The car did not stop, and went straight into the truck. The fault, it seemed, was the driver’s for trusting the autopilot mode, as Tesla’s statement after the event suggests. ‘Autopilot’ in a semi-autonomous car is perhaps misleading for anyone familiar with how the term works in aviation – so much so that the German government has told Tesla that it cannot use the word ‘autopilot’ in the German market.

In order to understand what might have happened in the Tesla case, it is necessary to look at applications of computer vision and machine learning in driverless cars. A driverless car is equipped with a variety of sensors and cameras that will record objects around it. These objects will be identified by specialised deep learning algorithms called neural nets. Neural nets are distinct in that they can be programmed to build internal models for identifying features of a dataset, and can learn how those features are related without being explicitly programmed to do so (NVIDIA 2016; Bojarski et al 2016).

Computer vision software makes an image of an object and breaks that image up into small parts – edges, lines, corners, colour gradients and so on. By looking at billions of images, neural nets can identify patterns in how combinations of edges, lines, corners and gradients come together to constitute different objects; this is its ‘model making’. The expectation is that such software can identify a ball, a cat, or a child, and that this identification will enable the car’s software to make a decision about how to react based on that identification.
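The first step described above – breaking an image into edges by looking at where brightness changes sharply between neighbouring pixels – can be sketched in miniature. This toy example is not drawn from any actual driverless-car stack; the pixel values and threshold are invented to show the principle.

```python
# Toy edge detector: flag pixels where brightness jumps sharply between
# horizontal neighbours. Real vision pipelines use learned convolutional
# filters; this hand-rolled version just shows the underlying idea.

def horizontal_edges(image, threshold=50):
    """Return a grid marking 1 wherever adjacent pixels in a row
    differ by more than `threshold` in brightness."""
    edges = []
    for row in image:
        edges.append([1 if abs(row[i + 1] - row[i]) > threshold else 0
                      for i in range(len(row) - 1)])
    return edges

# A 3x4 grayscale patch: dark region on the left, bright on the right --
# loosely like the boundary between a trailer's side and the sky.
patch = [
    [10, 12, 200, 205],
    [11, 13, 198, 202],
    [ 9, 14, 201, 204],
]

print(horizontal_edges(patch))
# prints [[0, 1, 0], [0, 1, 0], [0, 1, 0]]: each row flags the one
# position where the dark pixels meet the bright ones.
```

A neural net learns thousands of filters like this one, plus the patterns in how their responses combine, rather than having each rule written by hand.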

Yet this is a technology still in development, and there is the possibility for much confusion. Things that are yellow, or things that have faces and two ears on top of the head – things that share features such as shape, or where edges, gradients, and lines come together in similar ways – can be misread until the software sees enough examples to distinguish how things that are yellow, or things with two ears on top of the head, differ from one another. The more visually complex something is – without solid edges, curves, or single colours – or the faster, smaller, or more flexible an object is, the more difficult it is to read. Computer vision in cars is thus said to have a ‘bicycle problem’, because bicycles are difficult to identify, do not always have a specific, structured shape, and can move at different speeds.
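The kind of confusion described here – objects misread because they share coarse features – can be illustrated with a deliberately naive classifier. The feature names and labels below are made up for illustration; a real system works on learned numeric features, not hand-written tags, but the failure mode is analogous.

```python
# A naive nearest-match classifier: pick the known label that shares the
# most features with the input. With only coarse features, distinct
# objects collide; a distinguishing feature resolves the confusion.

def classify(features, known):
    """Return the label whose feature set overlaps most with `features`.
    On a tie, max() silently returns whichever label comes first."""
    return max(known, key=lambda label: len(features & known[label]))

known = {
    "cat": {"two_ears_on_top", "four_legs", "whiskers"},
    "dog": {"two_ears_on_top", "four_legs", "snout"},
}

# Coarse features alone tie between cat and dog, so the classifier
# falls back on ordering rather than evidence.
print(classify({"two_ears_on_top", "four_legs"}, known))           # prints "cat"

# A distinguishing feature breaks the ambiguity.
print(classify({"two_ears_on_top", "four_legs", "snout"}, known))  # prints "dog"
```

The ‘bicycle problem’ is this collision at scale: when an object’s features overlap heavily with other categories, or vary too much to pin down, the classifier’s answer is confident-looking but arbitrary.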

In the case of the Tesla crash, the software misread the large expanse of the side of the trailer truck as the sky. It is possible that the machine learning model was not trained well enough to make the distinction. The Tesla crash suggests that there was both an error in the computer vision and machine learning software and a lapse on the part of the test driver, who misunderstood what autopilot meant. How, then, are these separate conditions to be understood as part of the dynamic that resulted in the crash? How might an ethics be imagined for this sort of crash, which comes from an unfortunate entanglement of machine error and human error?

Looking away from driverless car crashes, and to aviation crashes instead, a specific narrative around the idea of accountability for crashes emerges. Galison, again, in his astounding chapter, An Accident of History, meticulously unpacks aviation crashes and how they are investigated, documented and recorded. In doing so he makes the point that it is near impossible to escape the entanglements between human and machine actors involved in an accident. And that in attempting to identify how and why a crash occurred, we find a “recurrent strain between a drive to ascribe final causation to human factors and an equally powerful, countervailing drive to assign agency to technological factors.” Galison finds that in accidents, human action and material agency are entwined to the point that causal chains both seem to terminate at particular, critical actions, and radiate out towards human interactions and organisational cultures. Yet what is embedded in accident reporting is the desire for a “single point of culpability”, as Alexander Brown puts it, which never seems to come.

Brown’s own accounts of accidents and engineering at NASA, and Diane Vaughan’s landmark ethnography of the Challenger Space Shuttle disaster, suggest the same: organisational culture and bureaucratic processes, faulty design, a wide swathe of human errors, and combinations of all of these are implicated in how crashes of complex vehicles occur.

Anyone who has ever been in a car crash probably agrees that correctly identifying exactly what happened is a challenging task, at the least. How can the multiple, parallel conditions present in driving and car crashes be conceptualised? Rather than an ‘ethics of driverless cars’ as a set of programmable rules for appropriate action, could it be imagined as a process by which an assemblage of people, social groups, cultural codes, institutions, regulatory standards, infrastructures, technical code, and engineering are framed in terms of their interaction? Mike Ananny suggests that “technology ethics emerges from a mix of institutionalized codes, professional cultures, technological capabilities, social practices, and individual decision making.” He shows that ethics is not a “test to be passed or a culture to be interrogated but a complex social and cultural achievement” (emphasis in original).

What the Trolley Problem scenario and the applications of machine learning in driving suggest is that we’re seeing a shift in how ethics is being constructed: from accounting for crashes after the fact to pre-empting them (though the automotive industry has been using computer-simulated crash modelling for over twenty years); from an ethics of values or reasoning to an ethics based on datasets of correct responses; and, crucially, to ethics as the outcome of software engineering. Specifically in the context of driverless cars, there is a shift from ethics as a framework of “values for living well and dying well”, as Grégoire Chamayou puts it, to a framework for “killing well”, or ‘necroethics’.

Perhaps the unthinkable scenario to confront is that ethics is not a machine-learned response, nor an end-point, but a series of socio-technical, technical, human, and post-human relationships, ontologies, and exchanges. These challenging and intriguing scenarios are yet to be mapped.

Maya is a PhD candidate at Leuphana University and is Director of Applied Research at Tactical Tech. She can be reached via Twitter.

(Some ideas in this paper have been developed for a paper submitted for publication in a peer-reviewed journal, APRJA)

On December 16, 2012, a violent incident took place on the streets of New Delhi, India: a 23-year-old student on her way back from the movies was gang-raped and disemboweled, and died from her injuries two weeks later. The incident sparked nationwide and global outrage; protests across the country; televised and social media discussion about women’s lack of safety in public spaces (and a very marginal discussion about women’s lack of safety in private spaces); the absences in the law on sexual violence and its enforcement; and what could be done to make Indian women secure.

#DelhiGangRape spanned both the online and offline with ease. Nirbhaya (‘the one without fear’), as the young student came to be known, lay in hospital holding on to life while the country raged and reacted. Candle-light vigils, night-time marches, solidarity sit-ins, were held for Nirbhaya across the country. In Delhi they were water-cannoned; more abuse happened during the protests. It felt like a sombre awakening, and we tweeted everything that we felt and experienced. That incident changed something, and we’re trying to piece together what did, and how and why.


Incidents of public sexual assault on women in the southern Indian city of Bangalore over New Year’s Eve have now come to light. One was the assault of a lone woman captured on CCTV; the other was a mass assault on many women in a central thoroughfare of the city among crowds bringing in the new year.

Protest marches are being organised around the country for January 21st. Bangalore had its “I Will Go Out!” march on January 12, an assertion of women’s rights to safety and confidence and fun in public spaces. Outrage has also been visible on Twitter and Facebook, but so have humour and levity. The hashtag #notallmen began to trend in India in response to a feminist group that surfaced #yesallwomen to show the extent of sexual violence Indian women face. Interestingly, #notallmen has received a resounding smack from across the Indian internet.

Early in 2017, a woman walked down a road alone in Bangalore and was accosted by two men on a motorbike. One jumped off the bike and started to grope her while she struggled to get away. Roughly seven kilometres away, hundreds of people were out on the central thoroughfare, MG Road, bringing in the new year. Images from that night document pandemonium, and there were reports of women being harassed and molested by many, many men. A similar sort of thing happened exactly a year before in Cologne, Germany. Police are now saying the mass assault incident never took place, although women have been reporting cases of assault that took place in bars, clubs and on the streets of the city that night.

The attacks were met with familiar and tired gestures. Male politicians did what they always do when sexual assault occurs in public: blamed women for being out at night; blamed the influence of Western culture. (The evil influence of “Western culture” is a popular trope routinely deployed by self-appointed custodians of Indian culture to shut down any challenge to their nationalist notions.) “In these modern times, the more skin women show, the more they are considered fashionable. If my sister or daughter stays out beyond sunset celebrating December 31 with a man who isn’t their husband or brother, that’s not right. If there’s gasoline, there will be fire. If there’s spilt sugar, ants will gravitate towards it for sure,” said Abu Azmi, a politician inclined toward metaphor and sexism. A little correction later, Azmi’s words became the subject of Facebook likes.


After the New Year’s Eve attacks, the advocacy organisation @FeminisminIndia started collecting stories on Twitter with the #yesallwomen hashtag to demonstrate how common violence against women is in India. Very soon after, the #notallmen hashtag started to trend. What’s interesting about #notallmen is that it carries no particular cultural specificity – the defensiveness of men who cannot acknowledge structural sexism may have some cultural inflections, but the phenomenon itself appears to be fairly robust across local internets.

From the Huffington Post and Quartz India to the mainstream press, the criticism and trolling of #notallmen has been noticed. A long thread started by the Buzzfeed India editor resulted in a collectively authored #notallmen parody of Bohemian Rhapsody.


“Be careful” is one of those things Indian women hear all the time; we’re responsible for managing our family’s honour, so we have to be careful with it. Buzzfeed India released a video imagining what it would look like if Indian parents told their sons to ‘be careful’ lest they molest a woman.

Stand-up comedian Karthik Kumar takes on #allIndianmen, poking fun at what’s wrong with Indian men – from not knowing what the clitoris is to not understanding what consent means. The video shows men in the bar looking a little sheepish and women laughing the loudest.




The senior journalist Sachin Kalbag had a rant about what is wrong with #notallmen.


Protest and activism, online and offline, can be oddly inspirational and emotional, and brittle and ephemeral, at the same time. It’s unlikely that all the creators of memes are a new feminist online army; sometimes it feels like we can talk about what’s wrong with Indian masculinity only by making it the subject of chiding humour. We’re not laughing at you. For now though, it’s trending to troll #notallmen and have a conversation that hasn’t been had enough.