View of an open pit gold mine.

 

“A mine is a complex space of flows,” says Dr. Mostafa Benzaazoua.

I’m not expecting a professor of geological engineering to use a phrase from the media studies canon. I write in my notebook: “maybe media studies before mining science?!” Or perhaps the phrase has entered everyday scholarly parlance. Over the next few hours, Dr. Benzaazoua gives us a detail-rich lecture on how gold is mined from the earth, and on the spaces of flows the mine and its products inhabit. The next day we leave before dawn to visit Canada’s largest open pit gold mine.

This post is a report on a visit to a large-scale extraction facility, and on its relationship to studies of infrastructure and technology. The visit contributes to my own (ongoing) research on machine learning, accountability, and ethics, in which I argue that narratives of ethics and accountability are in fact about the evolution of measurements and standards for regulating and assessing human and non-human systems working together. The visit was organised as part of a Summer School called Planetary Futures, conceived and led by Drs. Orit Halpern, Pierre-Louis Patoine, Marie-Pier Boucher and Perig Pitrou, and hosted by the Milieux Institute for Art, Technology and Culture at Concordia University.

Over the two weeks following the visit to the mine we journey – literally and figuratively – to the following places: a Mohawk reservation; waterways that enabled the development of the US and Canada as settler-colonial states; the Buckminster Fuller-designed Biosphere from Expo 67; Moshe Safdie’s Habitat 67, an architectural vision for future housing in crowded cities; a future ‘village’ on the Moon to be built by various space agencies; the SF of Ursula K. Le Guin, J.G. Ballard, and Peter Watts; and the work of Sarah Sharma on critical temporalities and notions of ‘exit’, among others.

There is a logic to making these stops; each one relays histories and practices of extraction, colonialism, and imaginations of futures through speculation and design on to the next. As the course description asks:

“…how we might imagine, and design, a future earth without escaping or denying the ruins of the one we inhabit?  How shall we design and encounter the ineffable without denying history, colonialism, or normalizing violence?  What forms of knowledge and experiment might produce non-normative ecologies of care between life forms? How shall we inhabit the catastrophe? … how we shall inhabit the world in the face of the current ecological crisis and to rethink concepts and practices of environment, ecology, difference, and technology to envision, and create, a more just, sustainable, and diverse planet.”

When you visit an open pit gold mine, it takes time for your eyes to adjust to the grayscale landscape. More lunar than Luxor, you don’t see anything even remotely golden at a gold mine, except perhaps the cheesy gold hard hats (we) visitors wear. We watch the open pit of the mine from a viewing gallery many metres away and above it; it is very, very quiet here. You expect to hear something, but we’re too far away to hear the machines drilling the earth and bringing up rocks, which are loaded into large trucks. Each truck has eight wheels; each wheel costs $42,000 and is about ten feet high. The trucks lumber about like friendly, giant worker-animals. Driving them requires significant skill; we are told that women make better drivers. The trucks take the rocks away to the factory, where they are analysed for gold.

Someone says something later about the mine being cyborg: the organic Earth, with its transformative automated elements – the drilling machines, the trucks – and the ‘intra-action’ of the two being the mine itself.

Inside the gold mine’s factory facility. Each wheel costs CA$42,000 and is ten feet high.

 

A big truck that conveys rocks dug up from the mine pit to the factory for processing.

 

There is something hypnotic happening here. Standing in the light drizzle, the only colour comes from the yellow of the school bus that brought us here, and from our safety jackets. We can’t take our eyes away from the pit. Many people are recording video and we are all taking pictures. It is as if we are waiting to see something important or extraordinary, as if something special might emerge because we’re looking at it.

I thought about gold for longer than I ever have in my life in those 30 minutes watching machines work the Earth: the symbolic and socio-cultural value of gold (particularly for Indians); the relationship between gold and finance; the Gold Rush and the Wild West; value; how ‘gold’ enters the vocabulary from ‘gold digger’ and ‘bling’ to ‘gold watch’, to anything prefixed by the word ‘golden’.

Time passes, and nothing happens. It is not actually hypnotic, I realise; it is meditative in the sense that it is oddly absorbing and empty at the same time. This is just another day in a gold mining factory, however. The longer you watch, the more apparent it becomes why the study of infrastructure is infused with a certain poetics.

Sulfide-rich ores like pyrite sometimes contain gold, and are found here along with chalcopyrite, a significant source of copper. Pyrite is also the technical name for Fool’s Gold. The ores dug up from the ground are analysed with spectrometric techniques, the gold identified and eased out through chemical processes, and the waste rock dumped or re-used elsewhere. ‘Tailings’ are the waste rock that contains no gold and must be recycled for other uses at the factory. In many instances, tailings must be carefully managed, or re-used, to contain the negative environmental outcomes of the mining process, particularly acid mine drainage: in other words, environmental contamination (Benzaazoua et al., 2017).

Confronted with the expanse of waste that is the tailings field, it is sobering to realise how little gold comes out at the other end. All the gold that has ever been mined from the earth would fill only two Olympic-sized swimming pools, says the engineer showing us around. For so little, you have to do so much. We are thinking about the unthinkable, and the incomprehensibility of scales when it comes to the planet.

Pyrite tailings field with school bus to transport PhD students and Summer School faculty.

We are close to what Thacker refers to as the ‘world-in-itself’ in his book on speculation, horror and philosophy, In the Dust of This Planet (hat tip to Daniel Rourke for this reference). The world we humans interpret and give meaning to, the world we relate to or feel alienated from, is the world-for-us. But the world that already exists, that is somehow inaccessible, the one we turn into the world-for-us through inquiry and study, is the world-in-itself. Both the world-for-us and the world-in-itself coexist, paradoxically. Unfortunately the world-in-itself is the one we get to know through natural disasters; it is the world that “resists, or ignores our attempts to mold it”.

The ‘world-in-itself’ is something we as humans model predictively, and prophesy, usually through disaster, but will never experience. This is the world-without-us, and we are fatalistically drawn to it. Thacker says “the world-without-us lies somewhere in between, in a nebulous zone that is at once impersonal and horrific.”

He offers another valuable abbreviation. The world-for-us is simply what we refer to as ‘the World’; the world-in-itself is ‘the Earth’; and the world-without-us is ‘the Planet’. Being at the site of intensive extraction is to be somewhere between the world-in-itself and the world-without-us. The pyrite tailings field is one of the largest, bleakest landscapes I have ever experienced. The damp, windswept, grey day adds atmosphere. Perhaps this is what the surface of the moon might be like, or the world-without-us.

End of the line. The last stop in the process of gold being extracted from the earth.

Yet we are back in the world-for-us before we know it. At the end of the tour we find ourselves somewhere at once pleasant and ominous – like a scene out of a Tarkovsky film, says someone in the group. Crystal clear water rushes out from a canal and disappears into a thickly wooded forest. It is quiet save for the sound of water. There is a whiff of pine in the air. It could not be more bucolic.

Gold extraction requires chemical processes that leave numerous contaminants, which must be washed away; this washing increases the acidity of the water. Carbon dioxide is pumped into the water to restore its pH balance. “There are moose in that forest,” says the lead engineer at the mine. “We return to nature now,” he says with a warm smile. Here, extraction seems to fit into some sort of pre-ordained, cyclical, natural order of things.

One thread in the Summer School related to the application of ethnographic methods to the study of infrastructure. Through the visit to the mine and the towns around it, I kept a diary of us: of how we arrived in this place and started studying it, like anthropologists; and of how we relentlessly document, communicate, and share. I posted a lot of photographs to Instagram throughout the two weeks, and particularly about the visit to the mine. (Later, I put them together as a patchwork speculative story here.) We were (are) as much a part of the world-in-itself, transforming it into the world-for-us.

 

Doctoral researcher documenting infrastructural processes.

 

“Infrastructure is things, as well as the relationship between things,” says Brian Larkin. There is a sort of well-meaning hubris intrinsic to infrastructure-mapping exercises: eventually, you are not going to capture every part of the system. There are some things about the mine that we cannot know until Mostafa tells us. For example, the price of gold on the market affects the price of academics’ houses in the mining towns of Quebec. The town he lives in will shut down when the gold runs out. There is talk of trying to visit one of the nearby towns that is ‘dead’, following the closure of a mine after the gold dried up. Someone else feels uncomfortable with us engaging in disaster-tourism for academic extraction. Searching online, I find that one of the mining towns in the region made a bid to be a Canadian Capital of Culture; they were rejected. I wonder if this is some sort of insurance for the future that will certainly come. One of the faculty tells us how miners can be paid well, and so take out expensive mortgages and buy fancy SUVs. If the gold dries up, they’re out of a job overnight. Someone else tells us that the under-12 suicide rate here is significant. The space of flows indeed.

 

Slide from presentation by Dr. Benzaazoua describing when various resources mined from the Earth will run out.

A phrase that pops up with annoying regularity these days is “data is the new oil”. The study of big data metaphors is already rich. Metaphors of big data as food and nutrition, and the ‘data detox’ paradigm, are gaining traction (Sutton, 2017). Much is written and said about rights (or the lack thereof) to individual privacy in relation to big data; this is having significant effects, many of them invisible, on individuals, as well as on broader notions of democracy and freedom of speech and expression. Limiting the consumption of data is seen as one way to manage the manipulation of this personal data. Thus Tactical Technology Collective has a ‘Data Detox Kit’ to help people limit their online data traces (Disclaimer: I worked for Tactical Tech till a few weeks ago, and continue to consult with them part-time).

Cornelius Puschmann and Jean Burgess (2014) discuss two themes they find in the business and technology press: ‘big data is a force of nature to be controlled’ and ‘big data is nourishment / fuel to be consumed’. They find that the metaphors used thoroughly disguise the agency of data creation by evoking natural source domains. Data is described as a commodity to be exploited. They write: “Through the use of a highly specific set of terms, the role of data as a valued commodity is effectively inscribed (e.g., “the new oil”; Rotella, 2012), most often by suggesting physicality, immutability, context independence, and intrinsic worth.”

The comparison of data to oil neatly elides the histories of violence that mark the human relationship to oil in particular, and extraction more broadly. The comparison seems to justify how data is to be extracted from people and internet objects, and traded in a similar manner. At the same time, the comparison indicates a shift in the valuation of value itself. Just as the phrase ‘x is the new black’ suggests the dominance of a new metric of fashion, ‘data is the new oil’ implies that we start valuing things as, or against, data. And perhaps the coiners of such phrases know that the standard measures of value, oil and gold, are running out.

References

Brian Larkin. ‘The Politics and Poetics of Infrastructure’. Annual Review of Anthropology 42 (2013): 327–43. doi:10.1146/annurev-anthro-092412-155522.

Eugene Thacker. In the Dust of This Planet: Horror of Philosophy, Vol. 1. 2011. Zero Books.

M. Benzaazoua, H. Bouzahzah, Y. Taha, L. Kormos, D. Kabombo, F. Lessard, B. Bussière, I. Demers, M. Kongolo. ‘Integrated environmental management of pyrrhotite tailings at Raglan Mine: Part 1, challenges of desulphurization process and reactivity prediction’. Journal of Cleaner Production 162 (2017): 86–95. http://dx.doi.org/10.1016/j.jclepro.2017.05.161

Maya Ganesh is a reader, writer, technology researcher, and feminist killjoy who recently left full-time NGO work after two decades, most recently at Tactical Tech in Berlin, to spend more time on her PhD about machine learning and ethics. She tweets as @mayameme

The Planetary Futures Summer School is assembling the results of our two weeks together as group and individual projects that will be published in the coming months.

All photographs used in this post are by Maya Ganesh taken in Quebec, Canada on August 4, 2017 and are licensed under Creative Commons Attribution Share-Alike 4.0 International

Protestors in Hamburg, Germany. July 7, 2017. Photo: Maya Ganesh

On July 6, 7, and 8 the police established a thick cordon around the Hamburg Messe and Congress where the G20 was taking place. It separated delegates from the thousands of protestors who had converged on the city.

From the distinctive, red Handmaid cloaks worn by Polish feminists, to Greenpeace in boats off the harbour, to radical and Left groups in Europe, the G20 brought together diverse communities of protest from around the world. Protestors were there to tell leaders of the world’s most powerful economies that they were doing a terrible job of running the planet.

Not all of the protests were peaceful. The violence by some protestors and by the police against them has formed a substantial part of the reportage about the G20. In this post, I share some experiences and insights from the protests and their mediation.

The anti-Trump protests, the anti-Caste discrimination movements and activism on Indian university campuses against the Modi government, the ‘Science March’, the Women’s Marches, protests against Michel Temer in Brazil, #FeesMustFall and its sister protests in South Africa, and the post-Brexit marches to name a few, have captured local and national attention.

These protests have generated discussion about the shifting dynamics of political participation, popular resistance, and media, from the documentation of clever signs, the transition of protest memes from the online to the offline and back again, to the inspirational images of Saffiyah Khan and Ieshia Evans confronting violence.

 

Protestors and police in Landungsbrücke, Hamburg. July 7, 2017. Photo: Maya Ganesh

The anti-G20 protests in Hamburg have resulted in damage to property, and in violence by the police against protestors. According to a legal team representing activists, 15 people have been arrested and 28 are in preventive custody. For a few hours I got to observe how the police were responding to protestors. I did not witness any specific acts of violence by protestors. It was also not always easy to tell a ‘protestor’ from anyone else who might have just been an aggressive element on the streets. The city felt chaotic that day.

Late in the afternoon, many of us were in a park on Helgoländer Allee. We could see water cannons, unmarked black vans, police cars lined up ahead, and protestors on the bridge above us. Formations of police in riot gear, their faces invisible behind visored helmets, jogged past. Protestors on the ridge above us were yelling. Riot police were running at us through the park, and chasing protestors down from the ridge. Suddenly, we were running, though not sure exactly where, because police were coming at us from two directions. Many of us in the park – curious locals, tourists, media workers – were not part of protest groups, and yet felt intimidated by these swarms of police. “If you’re wearing black, they’re not asking questions, they’re assuming you’re a protestor,” said someone behind me. ‘Schwarzer Block’, or Black Bloc, tactics were in evidence.

An hour later, we walked down Helgoländer Allee towards Landungsbrücken station. Someone in our group wanted to try Hamburg’s famous Fischbrötchen – cured fish, remoulade, and pickles in bread – so we thought the touristy harbour promenade might be a relatively quiet place to catch a break. It turned out that nowhere was off limits: the police went anywhere there were protestors, some of whom also wanted Fischbrötchen. The police sprayed protestors with water cannons, and started pushing and shoving protestors and non-protestors alike.

Again, we found ourselves surrounded by riot police yelling raus, raus (out, out). We didn’t feel particularly heroic and wanted to leave, which we eventually did, but only after being pushed down the promenade. We broke away as soon as we could. Protestors and police continued down towards the St. Pauli Fish Market. Above us, customers at the Hard Rock Cafe were filming everything, Pilsners in hand, and chicken wings for after.

FCMC

Before getting to witness police tactics of intimidation, my day in Hamburg started by exiting the Landungsbrücken subway station and walking up to Millerntor stadium, home of the local football team St. Pauli (also associated with a brand popular with Punk and ‘alternative’ consumers), where FC⚡MC had their headquarters.

FCMC describes itself as a “material-semiotic device for bloggers and Twitterers, editorial collectives and staff, video activists, free radios, precarious media workers and established journalists to re-invent critical journalism in times of affective populism.” With technical support from the Chaos Computer Club, and something of an heir to the IndyMedia tradition, FCMC is a space for, and to support, media workers reporting on the G20.

Anyone with accreditation could get a space to write, edit, and post reports online. It was not difficult to join FCMC: all you had to do was accredit yourself via their website. The accreditation letter and wrist tag were something of a relief later in the day when we encountered the police.

While the MC in FCMC is clearly ‘media centre’, the ‘FC’ expands into charming, ephemeral “suggestions from the outside” every time you refresh the webpage: “FC might mean Free Critical, For Context, Free Communication, Future Commons, Fruitful Collaborations, Friends of Criticism, Finalize Capitalism, Flowing Capacities, Freedom Care, Flowing Communism, or even Fight Creationism. But remember: these are just suggestions from the outside. The FCMC itself, in its autonomy and autology, might choose different ones.”

FCMC planned to organise press conferences following developments at the G20, based on the reportage and views of alternative and independent media and activists. As the protests grew, however, and came to dominate coverage of the event, I couldn’t help feeling that this put FCMC in a difficult situation. FCMC was also a space for media workers aligned with protest groups, particularly Left-leaning ones. But FC⚡MC could not directly promote protest actions. Reporters talking about police violence and the conditions around the protests were, however, given a platform by FCMC.

Drawing on support and resources from its networks, rather than owning the infrastructures of media production itself, FCMC attempts to introduce diversity into the media ecosystem by itself being a pop-up. It is unclear if they will emerge elsewhere, and that is perhaps the point. This raises the difficult question of what kinds of control – editorial or otherwise – are lost and gained in adopting this flexible, in-a-box approach.

The ‘Inappropriate’ Selfie

 

The inappropriate selfie is fresh social media meat. A recent selfie from the Hamburg protests, with a sarcastic caption by @JimmyRushmore and re-tweeted more than 35,000 times in two days, foregrounds the struggle around selfies and mediated political participation.

(There is a lot going on in this particular image, and in the online discussion about it. My intention in this diary is not to attempt a full or in-depth analysis, but to share some impressions that might inspire future discussions.)

The tweet was about selfie culture, tech object fetishisation, and the irony of a selfie with the latest iPhone (one tweeter was quick to point out it was not an iPhone 7 but a 6) in a protest against Capitalism.

Rushmore’s tweet may have been one of those throwaway comments that makes for great Twitter: a critique of pop culture, ruefulness at the mediation of everyday life through social media, and the struggle to live a consumption-conscious life in the absence of infrastructures to support this consciousness. But it has become a lot more than that now.

Like funeral selfies, or Holocaust Memorial-selfies, the discussion around this selfie is about the question of legitimacy: Are your politics legitimate if you’re taking a photograph of yourself in a riot against Capitalism? What are you, some kind of riot hipster? (“Riot hipster” is a Reddit word).

In a 2014 AoIR conference paper, Martin Gibbs and his collaborators discuss controversy around funeral selfies as an example of ‘boundary work’, that is, rhetorical work negotiating the space between the legitimacy and illegitimacy of online behaviour. Similarly, “riot porn” is a term that has come up in some offline conversations to describe the media’s emphasis on the riots and protests, as well as how the G20 protests were a spectacle that many of us participated in.

A recent conversation on Twitter (that includes Cyborgology folks) takes on porn as a suffix: there is a question of legitimacy here, again, and about the policing of how something is to be enjoyed.

Screen grab from Twitter. July 10, 2017.

Paul Frosh discusses selfies in terms of how they tip the balance of ‘indexicality’ that all photographic images negotiate: “It deploys both the index as trace and as deixis to foreground the relationship between the image and its producer because its producer and referent are identical. It says not only ‘see this, here, now,’ but also ‘see me showing you me.’” So in this case you have ‘see me showing you me’ participating in this particular political event that is dangerous; see what I risk for my politics. This selfie is unadulterated hyper-masculinity caught up in establishing its own calibrations of legitimacy.

Almost as soon as Rushmore’s tweet went up, so did digital sleuthing: Is this a real image or a doctored one? In a long Twitter thread following the selfie, Rushmore finds himself fending off these discussions, but image literacy is now an inescapable part of the mediation of images of events.

Back

Hamburg was shut down on Friday, and many of us had to walk some miles around the heavily secured Messe area to get to the only train station working that day. Close to twenty cars were burned and property was damaged. Hamburg is not without its rich history of countercultural and protest politics. These sorts of events are not new here but are infrequent now (unless perhaps you’re a newly arrived brown immigrant). A German friend who has been involved in activism for many years watches videos from Hamburg and says “good, it is good, let the protests come back to Hamburg. We need to deal with what has happened.”

Maya Ganesh is a tech researcher, writer, and feminist infoactivist living in Berlin


Screen grab, Ghost in the Shell. Mamoru Oshii, 1995.

In April this year, thanks to Stephanie Dinkins and Francis Tseng, artists-in-residence at New Inc., I experimented with a workshop called ‘Imagining Ethics’. The workshop invites participants to imagine and work through everyday scenarios in a near future with autonomous vehicles and driving. These scenarios are usually related to moral conflicts, or to interpersonal, social, and cultural challenges. This post situates the experiment within the methods and approaches of design fiction, a technique for speculating about near futures.

Julian Bleecker writes about design fiction as “totems” through which projections into the future may be assembled:

“Design fictions are assemblages of various sorts, part story, part material, part idea-articulating prop, part functional software. …are component parts for different kinds of near future worlds. They are like artifacts brought back from those worlds in order to be examined, studied over. They are puzzles of a sort….They are complete specimens, but foreign in the sense that they represent a corner of some speculative world where things are different from how we might imagine the “future” to be, or how we imagine some other corner of the future to be. These worlds are “worlds” not because they contain everything, but because they contain enough to encourage our imaginations, which, as it turns out, are much better at pulling out the questions, activities, logics, culture, interactions and practices of the imaginary worlds in which such a designed object might exist.”

I’m interested in what speculation about near futures means for discussions of ethics in the context of ubiquitous computing and artificial intelligence. Are we creating ethics for now, or for the future? And if for the future, then how do we envisage future scenarios that require ethical decision-making?

Ethics are frameworks for values, some of which are codified in the law; some, related to AI-based technologies, currently challenge the law. Autonomous driving worries the law: the narrative around autonomous vehicles has tended to focus on the opportunities and limitations of software to make ‘ethical’ decisions. Simply put, how can driverless car software be held responsible for making decisions that may result in the loss of human life? I’ve argued that this approach places software and its creation as central to the construction of ethics, rather than the wider social, political, cultural, and economic conditions in which autonomous driving is, and will be, situated. Is it possible that the ethical implications of autonomous driving will involve more than just the philosophical conundrums defined by the Trolley Problem?

Drawing on Mike Ananny’s definition of technology ethics as “a mix of institutionalized codes, professional cultures, technological capabilities, social practices, and individual decision making”, I claim that ethics is contextually and situationally produced. Imagining Ethics takes the form of a workshop to identify and discuss these contexts and situations.

So, at New Inc, a group of people sat down for a little over two hours to work through scenarios projected to occur five years from the present moment. Here are the scenarios:

  • Develop a Quick Start Guide (QSG) for the new autonomous vehicle owner (following inspiration from the Near Future Laboratory’s QSG). What kinds of information would a new owner of an autonomous vehicle need?
  • Script an interaction between a thirteen year-old and her parents in which the young person is trying to negotiate hiring an autonomous vehicle out to a movie with her friends and without any adult chaperones.
  • Two security guards are in charge of a garage full of driverless trucks; but, one day, a truck goes missing. Develop the synopsis of a movie of any genre (rom-com, sci-fi, road movie, zombie film, etc.) based on this starting point and ending with the guards finding the truck. (This one was a little different from the other two.)

In terms of process, the group was given these starting points but not much else. The idea was that they speculate ‘up’ into how these scenarios might unfold; there were very few rules about how to go about this speculation. In giving the group a broad remit, I attempted to evoke the aspirations, concerns, and questions they might have about autonomous driving. I aimed to emphasise the social-political-cultural and psychographic aspects of decision-making in a future everyday with autonomous vehicles and driving.

The point was not to be pedantic about how a near future technology might work, or accurately predict what the future might be exactly like. What was important were the conversations and negotiations in sketching out how existing and near future artifacts might interact with human conditions of social and political life.

Imagination can be valuable in thinking about applications of technology in society; and I refer to the imagination in the sense that Sheila Jasanoff and Sang-Hyun Kim do, as a “crucial reservoir of power and action”. Their book, Dreamscapes of Modernity: Sociotechnical Imaginaries and the Fabrication of Power, discusses sociotechnical imaginaries, collectively sustained visions of how science and technology come to be embedded in society. STS (Science and Technology Studies) aims to bring “social thickness and complexity” into the appreciation of technological systems. However, the authors argue that STS lacks “conceptual frameworks that situate technologies within the integrated material, moral, and social landscapes that science fiction offers up in abundance.” Thus they propose sociotechnical imaginaries as “collectively held, institutionally stabilized, and publicly performed visions of desirable futures, animated by shared understandings of social life and social order attainable through, and supportive of, advances in science and technology.”

While an imagination of a future does not necessarily contribute to an imaginary in a causal or linear sense, I bring the two words together to suggest that there are parallel processes underway, at different scales. While an imagination may be private and local, embodying contextual aspirations and anxieties of a future scenario, an imaginary may operate more like a Foucaultian apparatus through the interaction of multiple, social agents, powerfully shaping the emergence of technologies.

Cinema is a space where future imaginations of cities have been generated. There is a visual and textural thread running through the future cities of the Masamune Shirow manga classic Ghost in the Shell (Mamoru Oshii, 1995), Total Recall (Paul Verhoeven, 1990/Len Wiseman, 2012), Elysium (Neill Blomkamp, 2013), Blade Runner (Ridley Scott, 1982), and A.I. (Steven Spielberg, 2001), among others. (Why filmmakers continue to replicate particular architectural or visual tropes in all these future city visions is another matter.) These films depict future cities as vertical spaces to which the middle and upper classes escape, with the poor (and sometimes, the Resistance) living in labyrinthine warrens or subterranean cities. Rain, or a constant drizzle, slick roads, and pools of water appear as another common motif in many of these urban dystopias, possibly signalling flooding from climate change, and resulting in a palette of blues and greys.

The visual representation of everyday life in these future cities is a particular project that brings us closer to the work of design fiction. How did Spielberg come up with the idea that the side of the cornflakes box in Minority Report would be a screen showing cartoons? Or that Tom Cruise’s John Anderton, head of the Precrime Division, would investigate criminal cases through a gestural interface? Philip K. Dick certainly didn’t write these things in the short story the film is based on. That interface has become iconic in discussions of how cinema shapes future visions. The sliding, swiping, twirling of dials, and pinching of the screen to zoom out, didn’t exist in 2002 when the film was released. We didn’t even have social media or smartphones.

Much of this is intentional, writes David Kirby. Cinema has become a space for scientists, technologists and filmmakers to collaborate on “diegetic prototypes”: ‘real’ objects that people in the film’s fictional world actually use, convincingly. As Kirby notes, by engaging seriously with design to create a vision of the future, filmmakers and scientists create legitimacy for how the future will look at a granular and everyday level. And the granular and everyday are important in thinking about how we will actually encounter future technologies. As Julian Bleecker writes, “everyday aspects of what life will be like [in the near future] — after the gloss of the new purchase has worn off — tell a rich story”.

It is in this space of the mundane and the everyday, in a near future with driverless cars, that we as consumers, customers, scientists, scholars, activists, and lawyers may have to start engaging with new framings of ethics. Some of these scenarios may not be so different from what we encounter now, in the sense that social and economic inequalities will not cease to exist with the arrival of autonomous driving. How do you hold a fleet taxi service like Uber, operating autonomous vehicles, accountable for an accident? Who or what is responsible when an autonomous vehicle’s mapping system avoids “high crime neighbourhoods” and thereby doesn’t offer services to an entire community? How might a ‘victim’ in an accident – a road accident, a data breach, a data exposure – involving a driverless car claim insurance?

Spurred by opportunities in the fictive and the fictional, the Imagining Ethics workshop method is part of ongoing research and practice that seeks to understand how ethics may be reframed in a society with ubiquitous computing and artificial intelligence. What counts as a moral challenge in this society, and how will decisions about such challenges be made? Is it possible to nurture people’s aspirations and imaginations into imaginaries of ethics in artificial intelligence, imaginaries that alleviate the social, political, cultural and economic conditions of life in present and future societies with AI? It is time to find out.

Maya Ganesh is a Berlin-based researcher, writer, and information activist. She works at Tactical Tech and is a doctoral candidate at Leuphana University.

References

Ananny, M. 2016. Toward an Ethics of Algorithms: Convening, Observation, Probability, and Timeliness. Science, Technology, & Human Values, 41(1), 93-117.

Bleecker, J. 2009. Design Fiction: A Short Essay on Design, Science, Fact, and Fiction. Near Future Laboratory Blog. Accessed online: http://blog.nearfuturelaboratory.com/2009/03/17/design-fiction-a-short-essay-on-design-science-fact-and-fiction/

Jasanoff, S. and Kim, S. 2015. Dreamscapes of Modernity: Sociotechnical Imaginaries and the Fabrication of Power. Chicago, IL: University of Chicago Press.

Kirby, D. 2010. The Future is Now: Diegetic Prototypes and the Role of Popular Films in Generating Real-world Technological Development. Social Studies of Science, 40(1), 41-70.


Street view, Calcutta, 2010.

According to its author, J.G. Ballard, Crash is ‘the first pornographic novel based on technology’. From bizarre to oddly pleasing, the book’s narrative drifts from one erotically charged description to another of mangled human bodies and machines fused together in the moment of a car crash. The “TV scientist”, Vaughan, and his motley crew of car crash fetishists seek out crashes in-the-making, even causing them, just for the thrill of it. The protagonist of the tale, one James Ballard, gets deeply entangled with Vaughan and his ragged band after his own experience of a crash. Vaughan’s ultimate sexual fantasy is to die in a head-on collision with the actress Elizabeth Taylor.

Like an STS scholar-version of Vaughan, I imagine what it may be like to arrive on the scene of a driverless car crash and draw maps to understand what happened. Scenario planning is one kind of map-making to plan for ‘unthinkable futures’.

The ‘scenario’ is a phenomenon that became prominent during the Korean War, and through the following decades of the Cold War, as a way for the US army to plan its strategy in the event of nuclear disaster. Peter Galison describes scenarios as a “literature of future war”, “located somewhere between a story outline and ever more sophisticated role-playing war games”, and “a staple of the new futurism”. Since then, scenario planning has been adopted by a range of organisations, and features in the modelling of risk and the identification of errors. Galison cites the Boston Group as having written a scenario – their very first one – in which feminist epistemologists, historians, and philosophers of science running amok might pose a threat of releasing radioactive waste left over from the Cold War (“A Feminist World, 2091”).

The applications of the Trolley Problem to driverless car crashes are a sort of scenario planning exercise. Now familiar to most readers of mainstream technology reporting, the Trolley Problem is presented as a series of hypothetical situations with different outcomes, derived from pitting consequentialism against deontological ethics. Trolley Problems are constructed as either/or scenarios in which a single choice must be made. MIT’s Moral Machine project materialises this thought experiment as an online template in which the user completes scenarios by instructing the driverless car which kinds of pedestrian to avoid in the case of brake failure – runaway criminals, pets, children, parents, athletes, old people, or fat people. Patrick Lin has a short animation describing the application of the Trolley Problem in the driverless car scenario.
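
The either/or construction is easy to see when sketched as code. What follows is a deliberately crude caricature of the forced-choice logic behind Trolley-style scenarios; the category ranking is invented for illustration and is not the Moral Machine’s actual model:

```python
# A deliberately crude caricature of Trolley-style forced choice.
# The ranking below is invented for illustration only; it is not
# taken from the Moral Machine or any real system.
PRIORITY = ["child", "parent", "athlete", "elderly", "pet", "criminal"]

def spare(group_a, group_b):
    """Return 'A' or 'B': which group of pedestrians the car spares
    on brake failure. Every situation, however entangled, must
    collapse to exactly one of two outcomes."""
    def score(group):
        # Lower index = higher priority to protect
        return min(PRIORITY.index(member) for member in group)
    return "A" if score(group_a) <= score(group_b) else "B"
```

Framed this way, the whole of “ethics” is exhausted by the contents of a priority list and a comparison – precisely the reduction this post goes on to resist.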


The applications of the Trolley Problem, as well as of the Pascalian Wager (Bhargava, 2016), attempt to create what-if scenarios. These scenarios guide the technical development of what has become both a holy grail and a smokescreen in talking about autonomous driving (in the absence of an actual autonomous vehicle): how can we be sure that driverless car software will make the right assessment about the value of human lives in the case of an accident?

These scenarios and their outcomes are being referred to as the ‘ethics’ of autonomous driving. In the development of driverless cars we see an ambition to develop what James Moor refers to as an ‘explicit ethical agent’: one that is able to calculate the best action in an ethical dilemma. I resist this construction of ethics because I claim that relationships with complex machines, and the outcomes of our interactions with them, are not either/or, and are perhaps far more entangled, especially in crashes. There is an assumption in applications of Trolley Problems that crashes take place in certain specific ways, that ethical outcomes can be computed in a logical fashion, and that this will amount to accountability for machine behaviour. There is an assumption that the driverless car is a kind of neoliberal subject itself, one that can somehow account for its behaviour just as it might some day just drive itself (thanks to Martha Poon for this phrasing).

Crash scenarios are particular moments that are being planned for and against. What kinds of questions does the crash event allow us to ask about how we’re constructing relationships with machines that have artificial intelligence technologies in them? I claim that when we say “ethics” in the context of hypothetical driverless car crashes, what we’re really asking is “who is accountable for this?”, or “who will pay for this?”, or “how can we regulate machine intelligence?”, or “how should humans apply artificially intelligent technologies in different areas?”. Some of these questions can be drawn out in terms of an accident involving a Tesla semi-autonomous vehicle.

In May 2016, a US Navy veteran was test-driving a Model S Tesla semi-autonomous vehicle. The test driver, who was allegedly watching a Harry Potter movie at the time with the car in ‘autopilot’ mode, drove into a large tractor-trailer whose white surface was mistaken by the computer vision software for the bright sky. The car did not stop, and went straight into the truck. The fault, it seemed, was the driver’s for trusting the autopilot mode, as the Tesla statement after the event suggests. ‘Autopilot’ in a semi-autonomous car is perhaps misleading for those used to what the word means in airplanes, so much so that the German government has told Tesla that it cannot use the word ‘autopilot’ in the German market.

In order to understand what might have happened in the Tesla case, it is necessary to look at applications of computer vision and machine learning in driverless cars. A driverless car is equipped with a variety of sensors and cameras that will record objects around it. These objects will be identified by specialised deep learning algorithms called neural nets. Neural nets are distinct in that they can be programmed to build internal models for identifying features of a dataset, and can learn how those features are related without being explicitly programmed to do so (NVIDIA 2016; Bojarski et al 2016).

Computer vision software makes an image of an object and breaks that image up into small parts – edges, lines, corners, colour gradients and so on. By looking at billions of images, neural nets can identify patterns in how combinations of edges, lines, corners and gradients come together to constitute different objects; this is its ‘model making’. The expectation is that such software can identify a ball, a cat, or a child, and that this identification will enable the car’s software to make a decision about how to react based on that identification.
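
As a toy illustration of that first step (a hand-written sketch, not the software in any actual vehicle), a single 3x3 filter convolved over raw pixel values responds to vertical edges. A convolutional neural net learns thousands of such filters, and compositions of them, from data rather than having them written by hand:

```python
# Toy sketch: one hand-written 3x3 filter that responds to vertical
# edges. A convolutional neural net learns such filters from data.
def detect_vertical_edges(image):
    """Convolve a grayscale image (a list of rows of numbers) with a
    Sobel-style horizontal-gradient kernel; strong responses mark
    places where dark pixels meet bright ones."""
    kx = [[-1, 0, 1],
          [-2, 0, 2],
          [-1, 0, 1]]
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - 2):
        row = []
        for j in range(w - 2):
            row.append(sum(image[i + di][j + dj] * kx[di][dj]
                           for di in range(3) for dj in range(3)))
        out.append(row)
    return out

# A 5x5 "image": dark left half, bright right half -> one vertical edge.
img = [[0, 0, 0, 1, 1] for _ in range(5)]
edges = detect_vertical_edges(img)  # fires only around the step
```

The filter stays silent on flat regions and fires where dark meets bright; a neural net’s ‘model making’ amounts to discovering which stacks of such filters reliably separate a ball from a cat from a child.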

Yet this is a technology still in development, and there is the possibility for much confusion. Things that are yellow, or things that have faces and two ears on top of the head – things which share features such as shape, or the way edges, gradients, and lines come together – can be misread until the software sees enough examples to distinguish them. The more visually complex something is, without solid edges, curves or single colours, or the faster, smaller, or more flexible an object is, the more difficult it is to read. So computer vision in cars is shown to have a ‘bicycle problem’: bicycles are difficult to identify, do not always have a specific, structured shape, and can move at different speeds.

In the case of the Tesla crash, the software misread the large expanse of the side of the trailer truck as the sky. It is possible that the machine learning was not trained well enough to make the distinction. The Tesla crash suggests that there was both an error in the computer vision and machine learning software, and a lapse on the part of the test driver, who misunderstood what autopilot meant. How, then, are these separate conditions to be understood as part of the dynamic that resulted in the crash? How might an ethics be imagined for this sort of crash, one that comes from an unfortunate entanglement of machine error and human error?

Looking away from driverless car crashes, and to aviation crashes instead, a specific narrative around the idea of accountability for crashes emerges. Galison, again, in his astounding chapter ‘An Accident of History’, meticulously unpacks aviation crashes and how they are investigated, documented and recorded. In doing so he makes the point that it is near impossible to escape the entanglements between human and machine actors involved in an accident, and that in attempting to identify how and why a crash occurred, we find a “recurrent strain between a drive to ascribe final causation to human factors and an equally powerful, countervailing drive to assign agency to technological factors.” Galison finds that in accidents, human action and material agency are entwined to the point that causal chains both seem to terminate at particular, critical actions and radiate out towards human interactions and organisational cultures. Yet what is embedded in accident reporting is the desire for a “single point of culpability”, as Alexander Brown puts it, which never seems to come.

Brown’s own accounts of accidents and engineering at NASA, and Diane Vaughan’s landmark ethnography of the Challenger space shuttle disaster, suggest the same: organisational culture and bureaucratic processes, faulty design, a wide swathe of human errors, and combinations of all of these are implicated in how crashes of complex vehicles occur.

Anyone who has ever been in a car crash probably agrees that correctly identifying exactly what happened is a challenging task, to say the least. How can the multiple, parallel conditions present in driving and car crashes be conceptualised? Rather than an ‘ethics of driverless cars’ as a set of programmable rules for appropriate action, could ethics be imagined as a process by which an assemblage of people, social groups, cultural codes, institutions, regulatory standards, infrastructures, technical code, and engineering are framed in terms of their interaction? Mike Ananny suggests that “technology ethics emerges from a mix of institutionalized codes, professional cultures, technological capabilities, social practices, and individual decision making.” He shows that ethics is not a “test to be passed or a culture to be interrogated but a complex social and cultural achievement” (emphasis in original).

What the Trolley Problem scenario and the applications of machine learning in driving suggest is that we’re seeing a shift in how ethics is being constructed: from accounting for crashes after the fact, to pre-empting them (though the automotive industry has been using computer-simulated crash modelling for over twenty years); from ethics that is about values, or reasoning, to ethics based on datasets of correct responses; and, crucially, to ethics as the outcome of software engineering. Specifically in the context of driverless cars, there is a shift from ethics as a framework of “values for living well and dying well”, as Grégoire Chamayou puts it, to a framework for “killing well”, or ‘necroethics’.

Perhaps the unthinkable scenario to confront is that ethics is not a machine-learned response, nor an end-point, but a series of socio-technical, technical, human, and post-human relationships, ontologies, and exchanges. These challenging and intriguing scenarios are yet to be mapped.

Maya is a PhD candidate at Leuphana University and is Director of Applied Research at Tactical Tech. She can be reached via Twitter.

(Some ideas in this paper have been developed for a paper submitted for publication in a peer-reviewed journal, APRJA)

On December 16, 2012, a violent incident took place on the streets of New Delhi, India: a 23-year-old student on her way back from the movies was gang-raped and disembowelled, and died from her injuries two weeks later. The incident sparked nationwide and global outrage: protests across the country; televised and social media discussion about women’s lack of safety in public spaces (and a very marginal discussion about women’s lack of safety in private spaces); the gaps in the law on sexual violence and its enforcement; and what could be done to make Indian women secure.

#DelhiGangRape spanned both the online and offline with ease. Nirbhaya (‘the one without fear’), as the young student came to be known, lay in hospital holding on to life while the country raged and reacted. Candle-light vigils, night-time marches, and solidarity sit-ins were held for Nirbhaya across the country. In Delhi they were water-cannoned; more abuse happened during the protests. It felt like a sombre awakening, and we tweeted everything that we felt and experienced. That incident changed something, and we’re still trying to piece together what, how, and why.


Incidents of public sexual assault of women in the southern-Indian city of Bangalore over New Year’s Eve have now come to light. One was the assault of a lone woman captured on CCTV, and the other was the story of a mass assault on many women in a central thoroughfare in the city among people bringing in the new year.

Protest marches are being organised around the country for January 21st. Bangalore had its “I Will Go Out!” march on January 12, an assertion of women’s rights to safety, confidence and fun in public spaces. Outrage has also been visible on Twitter and Facebook, but so have humour and levity. The hashtag #notallmen began to trend in India in response to a feminist group that surfaced #yesallwomen to show the extent of sexual violence Indian women face. Interestingly, #notallmen has received a resounding smack from across the Indian internet.

Early in 2017, a woman walked down a road alone in Bangalore and was accosted by two men on a motorbike. One jumped off the bike and started to grope her while she struggled to get away. Roughly seven kilometres away, hundreds of people were out on the central thoroughfare, MG Road, bringing in the new year. Images from that night document pandemonium, and there were reports of women being harassed and molested by many, many men. A similar sort of thing happened exactly a year before in Cologne, Germany. Police are now saying the mass assault incident never took place, although women have been reporting cases of assault that took place in bars, clubs and on the streets of the city that night.

The attacks were met with familiar and tired gestures. Male politicians did what they always do when sexual assault occurs in public: blamed women for being out at night; blamed the influence of Western culture. (The evil influence of “Western culture” is a popular trope routinely deployed by self-appointed custodians of Indian culture to shut down any challenge to their nationalist notions.) “In these modern times, the more skin women show, the more they are considered fashionable. If my sister or daughter stays out beyond sunset celebrating December 31 with a man who isn’t their husband or brother, that’s not right. If there’s gasoline, there will be fire. If there’s spilt sugar, ants will gravitate towards it for sure,” said Abu Azmi, a politician inclined toward metaphor and sexism. A little correction later, Azmi’s words became the subject of Facebook likes.


After the New Year’s Eve attacks, the advocacy organisation @FeminisminIndia started collecting stories on Twitter under the #yesallwomen hashtag to demonstrate how common violence against women is in India. Very soon after, the #notallmen hashtag started to trend. What’s interesting about #notallmen is that it carries no particular cultural specificity – the defensiveness of men who cannot acknowledge structural sexism may have some cultural inflections, but the phenomenon itself appears to be fairly robust across local internets.

From the Huffington Post and Quartz India to the mainstream press, the criticism and trolling of #notallmen have been noticed. A long thread started by the BuzzFeed India editor resulted in a collectively authored #notallmen parody based on Bohemian Rhapsody.


“Be careful” is one of those things Indian women hear all the time; we’re responsible for managing our family’s honour, so we have to be careful with it. Buzzfeed India released a video imagining what it would look like if Indian parents told their sons to ‘be careful’ lest they molest a woman.

Stand-up comedian Karthik Kumar takes on #allIndianmen, poking fun at what’s wrong with Indian men, from not knowing what the clitoris is to not understanding what consent means. The video shows men in the bar looking a little sheepish, and women laughing the loudest.



The senior journalist Sachin Kalbag had a rant about what is wrong with #notallmen.


Protest and activism online and offline can be oddly inspirational and emotional, and brittle and ephemeral at the same time. It’s unlikely that all the creators of memes are a new feminist army online; sometimes it feels like we can talk about what’s wrong with Indian masculinity only by making it the subject of chiding humour. We’re not laughing at you. For now, though, it’s trending to troll #notallmen and have a conversation that hasn’t been had enough.