Poster spotted in the Geoengineering and Geosciences department
at the Université du Québec en Abitibi-Témiscamingue. However, the author believes the future is not just about robots. (Image: Maya Ganesh, 2017)

It seems like there is a flowering of interest in speculating about the future. Of course SF writers, the RAND Corporation, and Trekkies have been doing this for much longer. (An interesting side note: SF writers are now enjoying new income streams by working with multinational corporations to imagine the future.)

It is possible that as consumer technologies began to appear as if from ‘the future’, as presented to us in dystopian movies such as Blade Runner and Minority Report, speculating about the future increasingly became a topic of interest. The phrase ‘surveillance capitalism’ has gained visibility thanks to devices such as the Echo. And maybe things started to appear ‘Orwellian’ after the Snowden revelations. I would like to think that Intergovernmental Panel on Climate Change reports generate concern about the future; but the continued rising temperature of the planet suggests that this is not the case. Possibly, for people in the US with the election of Donald Trump, and for Brazilians with that of Jair Bolsonaro, The Future has become a thing to be worried about (‘now more than ever’).

I spent last weekend at a workshop called Designing Tomorrow organised by the great folks behind the Utopia Film Festival in Tel Aviv, and re:publica in Berlin. The workshop was about testing various methodologies to actually speculate about the future, and it drew heavily from Peter Frase’s Four Futures. It got me thinking about the different narratives for thinking about the future. Here is a quick overview of some of these that I’ve encountered through recent arts and culture projects, and in the tech news (these do not necessarily line up as perfectly nested Russian dolls, however).

  1. The planet is a mess! Things are going extinct! Life as we know it is over! We can do nothing to stop it, except to come up with a beautifully designed ending. 
  2. So let us use reason, rationality and technology to transcend the mortal coil and start over on Mars.
  3. But why are we talking about extinction as something about the future when it is already happening? Like, for example, a million indigenous people in India were just evicted from the forests that they have always lived in. This has “resurrected an old debate between forest rights groups and the conservation lobby.” The future is actually the past.
  4. Framed in terms of both the future and a catastrophic present, theorist Orit Halpern’s Planetary Futures Summer School project asked: “how we might imagine, and design, a future earth without escaping or denying the ruins of the one we inhabit? How shall we design and encounter the ineffable without denying history, colonialism, or normalizing violence? What forms of knowledge and experiment might produce non-normative ecologies of care between life forms? How shall we inhabit the catastrophe?” (Projects here)
  5. Mushon Zer-Aviv says we need to consider Canceling the Apocalypse in favour of a different perception of the future(s). We need “to think beyond our dark visions, beyond Silicon Valley’s techno determinism, beyond dystopian dreams, beyond the resistance framework that leaves us always playing defence… we will reignite our political imagination… we will explore the futures as an open set of political possibilities rather than a predetermined algorithmic prediction.”
  6. One of the ways Mushon and his collaborator Shalev Moran have done this is through their Speculative Tourism project, a set of audio guides taking you through a Jerusalem 50 years into the future as imagined by a group of authors and artists.
  7. What you imagine is what you get, argue Mushon and Shalev. In terms of a planetary future, what if we imagined something “digitally inclined but unafraid of dirt. Think post-apocalyptic hacker aesthetics, but with a sunnier disposition.” Enter: Solarpunk.
  8. Another set of people are doing the work of creating new political imaginaries. They are thinking about the future in terms of the world we want, and have, now, rather than about the persistent theme of exit. Who gets to exit, as Sarah Sharma might ask? Octavia’s Brood is an anthology of ‘visionary fiction’ because it “pulls from real life experience, inequalities and movement building to create innovative ways of understanding the world around us, paint visions of new worlds that could be, and teach us new ways of interacting with one another.”
  9. This is a theme echoed by other fellow travelers who ask why certain narratives have been privileged over others. Situated at the intersection of Art, Science and Technology, they ask what it would look like if “feminist, queer, de-colonial, disabled and historically marginalized artists…offer visions of the future outside and beyond the dominant discourse…” Might we find new possibilities of “Re-figuring The Future”?
  10. Narratives of speculation about the future are situated. What do Asian speculations about the future look like?
  11. And critically, Ingrid Burrington asks how speculating about the future has been used and abused. How might the ‘inevitabilities of certain doom’ contained in the notion of a Future Perfect be challenged? How might “life find a way”?

What narratives about speculative futures are you noticing where you are?

*Thanks to participants at the Designing Tomorrows workshop who contributed some of these links at the workshop and in the group’s Slack channel. 

Maya Ganesh is a technology researcher and writer living in Berlin. She is working on a PhD about how automated decision-making is playing out against the backdrop of future imaginations of machine autonomy. She may be found on Twitter @mayameme

 

 

AnthroFix, a speculation on online dating in a posthuman future. Picture: Maya Ganesh

In the past few weeks, the news that Chinese scientist He Jiankui performed germline editing, in vitro, on the twin embryos of an HIV-positive couple has raced around newsfeeds. He edited the CCR5 gene in the twins in an attempt to create resistance to HIV. He used a gene-editing tool called CRISPR (which is short for Clustered Regularly Interspaced Short Palindromic Repeats). CRISPR is regularly used in genetic engineering in controlled lab contexts, but how it fares in the wild is unknown. Bill Gates has been enthusiastic about the possibilities of future applications of CRISPR to address various ‘Third World’ health and development problems.

That He used CRISPR on humans, and how he did it, have fueled discussions about bio-ethics and genetic engineering. One of the best reviews of the ethical issues surrounding the CRISPR-ing of the embryos comes from Ed Yong writing in the Atlantic. For those consumed with questions of ethics in emerging new technologies, the history and development of ethics in genetics and nuclear energy management are great first ports of call.

This week I went to a speculative design workshop about CRISPR-Cas9 (CRISPR-associated protein 9) hosted by Emilia Tikka, a Berlin-based artist and designer whose practice deals with the philosophical and cultural implications of biotechnologies, at STATE Studio.

I’m fascinated by the recent popularity of Speculative Design / Design Fictions / Futures-Thinking as methods, and as spaces for the design of advocacy materials in the art, academic, civil society, and technologist communities.

Some thoughtful examples that are reflections on technology, society and culture include: Julia Kloiber’s Ding Magazine produced by the Mozilla Foundation, Sasha Costanza-Chock and Joana Varon’s Transfeminist Tech Oracle, and Ruha Benjamin’s The Future is Ferguson and Lou Cornum’s The Irradiated International as part of Future Perfect. Coming up this weekend (Dec 15-16) is an Asia-focused SF writing workshop hosted by the Digital Asia Hub in Singapore. (Note: I have personal friendships and/or professional relationships with all of these people.)

[For a critique of the spread of design thinking and speculating about the future through design, Silvio Lorusso has a great Medium piece here. Hat tip to Johannes Bruder for this reference]

Rather than conceiving of an entirely new world, which is what a Speculative Futures or Design Fiction process might actually be about, and which is genuinely hard to do, we often go into workshops about AI (Artificial Intelligence) and ML (Machine Learning) to talk about what is happening now. AI and ML technologies are already being applied without adequate testing and calibrating for their social impact. The results are disturbing. I believe many of us are drawn to these workshops and speculative methods as emotional, supportive, collaborative spaces to talk about how we want to fix society now, and how AI/ML technologies are ferocious amplifiers of social dynamics that are already violent and skewed.

Thinking about speculative futures involving gene editing is not a ‘now’ kind of problem, He Jiankui’s experiment notwithstanding. Engaging with bio-futures often involves de-centering the human being rather than thinking about how to stop fascists using AI technologies next week. Bio-futures scenarios also do not necessarily involve maintaining humans and human societies as we know them now.

Tikka let us choose the groups we wanted to be part of: the future of food, post-humanism, enabling longevity, the creation of hybrid species. Each group was given a big ‘what if’ question like, ‘what if genetic engineering did away with needing to cook food?’, and then had to focus on building a story around an everyday scenario, like ‘what does shopping at the supermarket look like?’, or, ‘what is a family dinner like?’ We had lots of great arts and crafts materials to work with.

I was in the group motivated by posthumanism, and we were given the question: what if we could create new species? It took us a long time to work through this. We talked about possibly creating bizarre chimeras (but to what end? just because we can?), or increasing the numbers of a dying species. But what would it mean to resurrect one species and not an entire ecosystem? If we allowed some kinds of bees to survive but not a kind of flower, then how would that affect the system? Why did we want to create species that would enable us to continue to live as we are? If we want to bring back an extinct species, how will it affect the ecosystem it enters? Tikka had referenced the case of the Harvard scientists trying to resurrect the Woolly Mammoth with CRISPR.

At every stage, the conversation circled around what it would mean for human society to see the emergence of new species. But one member of our group kept bringing us back to the question of why we wanted to centre the human and human society in the first place. Why are we invested in humans continuing as we are? Having just seen Boots Riley’s darkly funny Afrofuturist take on posthumanism, Sorry To Bother You, I was all about embracing a future with strange creatures like Equisapiens (though not necessarily in the way the Armie Hammer character in the movie proposed).

Our group finally decided on a future scenario we wanted to work with. We were motivated by the need to arrest the decline of what exists. In the future, humans have acknowledged our role in the extinction of species on the planet (Julian Oliver and Crystelle Vu’s Extinction Gong came up as an inspiration here). So the human body becomes a repository of genetic material being lost. Given that the planet is already fairly densely mapped and monitored by sensors, we are alerted to the dwindling of a species so that its genetic material may be harvested. This material is delivered to humans to ingest as pills from birth. It is now compulsory for humans to become living archives of genetic material being lost on the planet. It is a little like how we pay our taxes now. You have little control over exactly what kinds of material you have to carry. There is an entire, global scientific bureaucratic apparatus managing the process.

We wanted to assert that while almost everything about human society would change by the time this bio-future arrived, Capitalism and bureaucracies would still be functioning. And yes, there would also be resistors and people who did not want to ‘pay their taxes’ and believed in the superiority of ‘pure’ humans. Gene editing black markets would thrive like mosquitoes. Corruption in government would still exist.

We do not know how carrying other kinds of genetic material will affect the human body over time, or through its hormonal changes. But it is not actually the human body as it is now, anyway. You might be carrying the genetic material of salmon or trout, and for a period have nothing but beautiful, shimmery, pink skin eruptions. Or, someone else might develop only three toes and otherwise look entirely as humans do now; or develop fine hairs on their eyelids. Or nothing at all might happen. But you are a living, walking archive of something being lost from the planet, and you have the possibility to grow something new. Some genetic material might not survive at all, other material may go rogue and cause destruction; or, serendipity might strike somewhere and lead to the creation of a beautiful and harmonious new ecosystem. The Camille Stories in Donna Haraway’s latest book, Staying With the Trouble, were a connection Tikka saw in our thinking.

AnthroFix Dating App model created in 30 minutes at Emilia Tikka’s workshop. Picture: Maya Ganesh.

Our specific scenario in this posthuman future was: what does dating, hooking up and reproduction look like when we know we are carrying different kinds of genetic material within us? We came up with the idea of a dating app called AnthroFix; personal profiles display each person’s genetic code like personal data dashboards, in addition to what they are looking for: experimentation, fun, reproduction, etc. Do people look for ‘sensible’ or ‘responsible’ matches, or do they just want to experiment to see what kinds of crazy new species might develop from their mating?

One member of the group persisted: but what if we change? What if we don’t want salmon skin or babies with hooves? Would any of this be ethical? But I believe the point of the discussion about posthumanism is to acknowledge that ‘we’ have to consider what it means to not be ‘we’ anymore. What kinds of ways are there to re-imagine humanity beyond definitions of human rights, and consider the human in relation to other organic and inorganic non-humans around us? Tikka made the point that we cannot project the ethics of now into a future society when we think about what possibilities exist with new future technologies.

I tend to agree that with radical and strange new technologies, time travel means thinking and feeling in reduced oxygen, as it were. One has to get a little light-headed and embrace things that are horrific, bizarre and possibly cruel. (And all these horrific, bizarre and cruel things have already happened in the past anyway.) The role of artists and creative practitioners in this is critical. As someone in the workshop noted, Jules Verne imagined driverless cars quite easily and long before car companies did.

He Jiankui may not have done the right thing in experimenting with CRISPR as he did; and we also need to be wary of the Bill Gates-es of the world. Some of the most important work in STS (Science and Technology Studies) is about labs, government bureaucracies and everyday organisational, ‘boring’ infrastructures that create and contain ‘scientific’ knowledge. These alert us to the tensions in innovation, the need to keep an eye on social context, and how power accrues to those who make knowledge in these places. Whose Speculative Design Futures workshop happening where will shape the future of technologies like CRISPR?

 

 

 

 

When migrants meet we talk about our visas. You can be a Belgian in New Delhi or a Colombian in New York, but until you get your anchor passport, or a Green/Blue card, something that stabilises your identity in a place that you’ve chosen to live and work in, and that isn’t the poor, boring, violent or corrupt yet always-wonderful place you’re ‘originally from’, you will continue to talk about your right to remain.

We do this because it affects what we can and cannot do to build our lives as workers, survivors, parents and upstanding members of our new societies. Public and social bureaucracies that regulate our right to remain in a place affect our confidence in visible and invisible ways.

Talking about our visas is an extended response to the question: Wie geht’s? How’s it going? Yesterday, my answer would have been: Schlimm! Terrible! Because I discovered that my prized freelancer visa is actually not as stable a sign of inclusion as I thought it was. I was reminded that my right to remain and work in Germany is a paper sticker in a passport, and indicative of a particular system of systems related to immigration, not verification of my identity, nor a validation of the hoop-jumping that translates into trust in me.

In this post I put together some reflections from a personal incident as a way to document how notions of identity and trust are being transformed in the context of automated decision-making and financial technologies (fintech).

A few months ago I wrote on this site about fintech and Aadhaar in India, and the power and violence evoked by identities constructed in terms of caste, religion, language, region, and gender. I tried to think through what happens when offline identities take on data versions shaped from biometrics and big data traces. I argued that certain kinds of shifts occur through the mass collection of data and its analysis, in particular a hyper-visibility of “databodies” floating in “codespace”, as Shannon Mattern puts it, that become signifiers of trust in subjects whose material realities become invisible.

Trust is key. In the rapid growth of fintech applications in India, trust is a series of big datapoints assembled, processed, and taken to mean something significant at scale. In the language of identity verification platforms like OnGrid and IndiaStack, trust is a ‘layer’ of data exchanges enabled by an individual’s consent to data collection about themselves. This trust enables fintech applications, and any other official or regulatory verification of identity without the individual ever having to be present.

Yesterday I tried to open a bank account with a new ‘mobile bank’ that is popular in Western Europe and is being advertised in Berlin at the moment. The mobile bank promises to ease the process of opening an account and offers all their information in English, an increasingly important factor in a diverse city like Berlin. I was not allowed to open an account because I have an Indian passport. However, they didn’t tell me this till I was well into the process of registration (and after the bank had collected my personal data).

After giving the system my address, phone number and nationality, and selecting the kind of account I wanted with the bank, I was asked to download the mobile app where the “fun stuff” would happen. But when I logged in to the mobile app, the system seemed to hang, because I was told that my identity documents could not be verified. I couldn’t move forward or back, and couldn’t log out either. I asked a Canadian friend who also wanted an account with the bank to try the document verification, and he found that it all worked perfectly.

According to the bank, they have two ways of verifying identity documents like a national ID card (for Europeans) or a passport (for most others): there is a video verification process (for certain nationalities), or a visit to the local Deutsche Post Office (for some others). When I referred to the list of nationalities that could have identity documents video-verified, I realised that India was not on the list at all.

Arriving at the frozen screen for the third time I called the customer service helpline. The agent assured me that I was not the problem, only my passport was. It was a matter of ‘when’, not ‘if’, Indians could get bank accounts. He acknowledged that there was a high demand for accounts from Indians. He could not tell me why we were being rejected.

I discovered why my identity documents could not be verified when I called customer service again pretending to not know anything about the video verification. This time I got a more chatty customer service agent who told me about IDNow, the video verification service the bank uses. It turns out the problem is with Indian passports: our passports cannot be read by machines. IDNow cannot read a passport that does not have holograms, biometrics, or some other kind of stable identity data embedded in it. This is also why I cannot use self-check-in kiosks in many airports around the world; however, I am afforded the luxury of human interaction and verification-by-human.

The chatty agent told me that my Egyptian, Canadian and Nigerian friends can use their passports to apply for a bank account because their passports have either biometrics or some other kind of identity data embedded in them. My Brazilian friend cannot verify his passport with IDNow, but the Brazilian passport has a hologram, which is why he can have it verified at a local Post Office.
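To make this routing concrete, here is a minimal, hypothetical sketch in Python of the kind of decision logic the two customer service calls described: a document is offered video verification, a Post Office visit, or nothing at all, depending on what machine-readable features it has and which country issued it. The function, field names and country lists are my own illustration of the principle, not IDNow’s or the bank’s actual rules.

```python
# Hypothetical sketch of document-verification routing, not any vendor's real logic.
from dataclasses import dataclass


@dataclass
class Passport:
    country: str
    has_biometric_chip: bool = False
    has_hologram: bool = False


# Invented policy lists standing in for whatever the bank and its regulators maintain.
VIDEO_VERIFIABLE_COUNTRIES = {"Canada", "Egypt", "Nigeria"}
POST_OFFICE_COUNTRIES = {"Brazil"}


def route_verification(passport: Passport) -> str:
    """Return which verification path, if any, an applicant's document is offered."""
    machine_readable = passport.has_biometric_chip or passport.has_hologram
    if not machine_readable:
        # The situation described in the post: nothing stable for the system to read,
        # so the automated flow simply dead-ends.
        return "cannot be verified"
    if passport.country in VIDEO_VERIFIABLE_COUNTRIES and passport.has_biometric_chip:
        return "video verification"
    if passport.country in POST_OFFICE_COUNTRIES and passport.has_hologram:
        return "post office verification"
    return "cannot be verified"


if __name__ == "__main__":
    print(route_verification(Passport("India")))                            # cannot be verified
    print(route_verification(Passport("Canada", has_biometric_chip=True)))  # video verification
    print(route_verification(Passport("Brazil", has_hologram=True)))        # post office verification
```

The point of the sketch is only that the applicant’s body never enters the decision; the document’s features and the issuing country do.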

 

Video: How IDNow video verification works

Possibly, this isn’t just about what a video verification system like IDNow can read. There is an infrastructural dimension to identity verification that includes legacy systems like laws, risk scoring, bilateral agreements, and recommendations put out by the FATF (Financial Action Task Force), an intergovernmental regulatory body that sets standards for, among other things, the “combating of money laundering and the financing of terrorism and proliferation of weapons of mass destruction”.

It turns out that the FATF is very keen to work closely with the fintech and ‘regtech’ industries; ‘regtech’ is short for ‘regulatory technology’, which enables transparency and compliance in the finance industry in particular. Protocols such as KYC, or Know Your Customer, are managed by the regtech industry.

TheKenWeb journalist KJ Shashi tells me passports from Singapore and South Korea are some of the most powerful in the world, allowing nationals to travel widely without visas. These two passports are considered stable, and are maintained as such by various governments and by risk assessments from agencies like the FATF. By contrast, Afghanistan has a high risk score. So, Shashi says, its nationals have to show a lot of documentation to be verified, and in some cases even a return ticket to the country.

It is possible that India is on the radar for money laundering or financial flows associated with terrorism. Is this why our passports don’t have biometrics and holograms embedded in them, so we don’t flow smoothly but are stopped and peered at more closely? Admittedly, it is hard to piece together the logics operating behind the validity of one passport over another in the video-verification process, and where this sits alongside other financial and regulatory assessments or risk scores of a particular country.

Why does the same Nigerian, Egyptian or Brazilian passport holder need visas to travel across borders, but the passport itself makes identity verification possible? Clearly, a passport has its own subjectivity! Because it isn’t the material body of the Nigerian or the Egyptian person that is verified through the banking app; the app merely verifies that the passport is not a forgery and that the passport holder is indeed the person applying for a bank account. IDNow seems to verify infrastructures of identity management systems themselves.

Identity verification by machine systems is quickly becoming the norm. Aside from identifying potential criminality, it is also central to financial inclusion whether that is in the context of refugees in camps, or bringing the marginal and ‘unbanked’ into the mainstream. Therefore a program of the World Bank frames poverty alleviation and ‘development’ in terms of identity: the Identification for Development program, or ID4D. Its mission is “To enable all people to exercise their rights and access services, [ID4D] helps countries realize the transformational potential of inclusive, robust, and responsible digital identification systems.”

Its advisory council includes Nandan Nilekani, the architect of the Aadhaar project, the former president of Estonia, and the CEO of Ant Financial, an affiliate of Alibaba, among others. No doubt this advisory council has a wealth of experience to share with the Bank in building secure identity verification systems.

It didn’t occur to me to ask the bank’s customer service agent if an Aadhaar card would be acceptable for video-verification. It probably would be. And this is exactly the argument used for Aadhaar: for identities to be constructed in terms of a stable ‘databody’ so as to be machine-readable, thereby inspiring trust as an infrastructural-computational outcome, whether that is a credit score or a mobile banking application. For me, it’s back to a bricks-and-mortar bank for a new account. And that’s also going to be a story for when I meet other Indians in Berlin.

Maya’s bio may be gleaned from this post. More information is available in the data currently held by a new mobile banking service.

As a follow up to my previous post about the Center for Humane Technology, I want to examine more of their mission [for us] to de-addict from the attention economy. In this post I write about ‘time well spent’ through the lens of Sarah Sharma’s work on critical temporalities; and share an anecdote from my (ongoing) fieldwork at a recent AI, Technology and Ethics conference.

Time Well Spent is an approach that de-emphasizes a life rewarded by likes, shares and follower counts. ‘Time well spent’ is about “resisting the hijacking of our minds by technology”. It is about not allowing the cunning of social media UX to lead us to believe that the News Feed or TimeLine are actually real life. We are supposed to be participants, not prey; users not abusers; masterful and not entangled. The sharing, pouting, snarking, loving, hating and sexting we do online, and at scale, is damaging personal well-being and the pillars of society, and must be diverted to something else, the Center claims.

As I have argued before in relation to the addiction metaphor the Center uses, ‘time well spent’ implies the need for individual disconnection from the attention economy. It is about recovering time that is monetized as attention. This is a notion of time that belongs to the self, un-tethered, as if it were not enmeshed in relationships with others and in existing social structures.

“What Foucault is to Power, Sarah Sharma is to Time”: time, like power, is everywhere; it is differential, it flows, changes shape, and organizes society and relations between bodies. Sharma’s In the Meantime: Temporality and Cultural Politics explores how relationships to time organize and perpetuate inequalities in society.

Her ethnographic work develops a ‘power-chronography’ approach, a level-up riff on Doreen Massey’s ‘power-geometry’. ‘Power-chronographies’ register the bio-political dimensions of how Time and Place are calibrated for us by the outer structures of our lives, which work inward to eventually shape who we (think) are. Sharma spends time with taxi drivers, slow food makers, frequent flyers and office workers and finds that time, for many people, is not theirs to manipulate, hoard, speed up or slow down. She writes:

“Individual experiences of time depend upon where people are positioned within a larger economy of temporal worth. The temporal subject’s day includes technologies of the self that are cultivated through synchronizing to the time of others and also having others synchronize to them. In this way the meaning of one’s time is in large part structured and controlled by both the institutional arrangements inhabited and the time of others—other temporalities.”

Harris’ approach constructs social media time as ‘me-time’, an individualized emphasis on being social. However, the digital ecosystem generally, and some social media platforms, host both public and intimate economies of care and work that make getting off nearly impossible. Migrants maintain family relationships across distance; entrepreneurs set up and manage businesses; millions are employed by digital apps and platforms; activists amplify their causes; marginalized people find community. Not spending time on these platforms is not a choice for many people.

Recent initiatives, such as Platform Co-operativism and the scholars and activists at the upcoming event Log-Out: Worker Resistance Within and Against the Platform Economy, bring critical labor politics perspectives to the monetization of our time-as-work and work-time on these platforms. These offer ways to think critically about the Center’s concerns with time-as-individualized, through labor organizing and rights, and the history of scholarship on Taylorism, management science and the capitalist structuring of work-as-time, broadly.

The questionable gift of social media apps and platforms is that they would make us all more efficient – personally and professionally – because our deep connections across a limited number of platforms would make things faster, and “friction-less”.  It is an oft-repeated observation that when email was first available it was something of a reprieve from having to respond immediately, as you might to a phone call. Technology made a promise that you could manipulate, expand and make time; but this delay has become perverted, and UX  became a sort of handmaid to this effort. So the notification lurks there saying “oh you don’t have to look at this now, you just need to know something has arrived for you and you can look at it whenever you have the time.” In Harris’ list of things to do to minimize social media addiction, turning off notifications is one.

When Harris talks about spending our time well, and about well-being, I claim he wants us to re-calibrate time in a continuation of a familiar neoliberal agenda. You are responsible for how you spend your time. Find a number that values what you do with your time [which is a number that is supposed to reflect what you do, not who you are]. Slow down; Sharma writes about slow food, wellness, yoga in the office, listening to your body. But also: maximize your slowed-down time.

Yet, we’re told things are always speeding up thanks to technology: high-frequency trading algorithms, machine learning and automation all promise to compress time. Sharma says that it isn’t so much about speeding up, or the value placed on slowing down, as it is about re-calibration:

“What most populations encounter is not the fast pace of life but the structural demand that they must recalibrate in order to fit into the temporal expectations demanded by various institutions, social relationships, and labor arrangements. To recalibrate is to learn how to deal with time, be on top of one’s time, to learn when to be fast and when to be slow.” (p133)

The other angle to temporal politics is its differential value depending on who is doing the valuing; and this came home to me through an interaction at an AI and Ethics academic conference in the United States a few weeks ago.  At the end of one panel session on ‘ethical issues and models’ that included two papers on natural language processing (NLP), chatbots and digital assistants, I asked a question that I wanted a genuine technical response to:

“Given that we know that social media speech is a convenient, never-ending source of training data, but a cesspit at the same time, has anyone looked elsewhere for training data for an AI assistant? It may sound naive, but why can’t we have even some small-scale experiments comparing what would happen if an entirely different training data set – say from the corpus of children’s literature, television programming, or young adult fiction, which exist in every language around the world and come with some thought and controls – were used to train machine learning algorithms for speech?”

No one on the panel responded except to say they hadn’t really considered it. Though someone did come up to me later and introduced themselves as working at a national standards-setting organization. One of the many things the organization does is to standardize and verify training data sets and how they are to be used. They were both intrigued and concerned by my question: intrigued because they saw the problems in using social media data, but concerned because children’s literature would not work as a training data set. At all. Nor teen fiction. I tried to emphasize the principle rather than the detail – are there other training datasets that might shift the standards of speech – ours and machine speech – and what would it take to employ them?

Eventually, they said, it comes down to this: Social media data is already there, it is faster and more reliable. It would take a lot more time and effort to develop new standards through different kinds of speech. Why is that a problem, I asked? Isn’t there time to innovate in new directions while carrying on with the same rubbish data? I mean, Twitter isn’t going anywhere.

They responded, and I paraphrase here: “If we have to start adopting new kinds of training data then it is just going to put a lot of pressure on the ecosystem and the supply chain of a lot of services based on NLP. These programmers and data scientists need to be on it, pushing out products more quickly if they want to maintain their edge. Social media is free, relatively, it’s just there.  It would take a lot of effort and time to think about changing the model. ”

Time – theirs – is an economically valuable resource, and they don’t want to spend it. Our time as users is also valuable, but as something to be extracted, monetized and manipulated. Finding ways to get users to spend more time on a website and fall deeper into its seductions is exactly what Harris did at Google. If there is sincerity in making time ‘well spent’, I’d argue it has to become a consideration through the process of development rather than be something managed at the output end of a highly asymmetrical power-economy.

I believe this to be a question of ethics too, and of the structural dimensions of how technology industries build technology. I’ve argued that ‘ethics in AI/ML’ is being treated as an outcome, a rule-based, computational, decision-making process to be completed by machine learning. I believe many papers presented at this conference were in this mold. Instead, what if ethics were considered to be a series of ongoing, complex negotiations of values between humans and non-humans? And as integral to the design process? A few others at the conference were on the same page. I view the Center’s emphasis on de-addiction at the user-end as part of the ethics-tacked-on-at-the-end approach, and the ethics-as-output approach.

 

It is much harder to think about what it means to design and build technologies ethically, than to make a technology artifact that makes decisions that are deemed ethical because they were programmed according to rules based on specific approaches to ethics. We need more of the former and less of the latter. Perhaps if time were spent well through the process of building digital technologies, society and democracy might be better served. The Center’s efforts, and those of others – like Listen Up from RagTag – are very welcome in this direction.

 

Maya Indira Ganesh is working on a PhD about the testing and standardization of machine intelligence. She enjoys working with scientists and engineers on questions of ethics and accountability in autonomous systems. She can be reached on Twitter @mayameme.

There has been a steady stream of articles about and by “reformed techies” who are coming to terms with the Silicon Valley ‘Frankenstein‘ they’ve spawned. Regret is transformed into something more missionary with the recently launched Center for Humane Technology.

In this post I want to focus on how the Center has constructed what they perceive as a problem with the digital ecosystem: the attention economy and our addiction to it. I question how they’ve constructed the problem in terms of individual addictive behavior, and design, rather than structural failures and gaps; and the challenges in disconnection from the attention economy. However, I end my questioning with an invitation to them to engage with organisations and networks who are already working on addressing problems arising out of the attention economy.

Video: Sean Parker and Chamath Palihapitiya, early Facebook investors and developers, are worried about the platform’s effects on society.

The Center for Humane Technology identifies social media – the drivers of the attention economy – and the dark arts of persuasion, or UX, as culprits in the weakening of democracy, children’s well-being, mental health and social relations. Led by Tristan Harris, aka “the conscience of silicon valley”, the Center wants to disrupt how we use tech, and get us off all the platforms and tools most of them worked to get us on in the first place. They define the problem as follows:

“Snapchat turns conversations into streaks, redefining how our children measure friendship. Instagram glorifies the picture-perfect life, eroding our self worth. Facebook segregates us into echo chambers, fragmenting our communities. YouTube autoplays the next video within seconds, even if it eats into our sleep. These are not neutral products. They are part of a system designed to addict us.”

“Pushing lies directly to specific zip codes, races, or religions. Finding people who are already prone to conspiracies or racism, and automatically reaching similar users with “Lookalike” targeting. Delivering messages timed to prey on us when we are most emotionally vulnerable (e.g., Facebook found depressed teens buy more makeup). Creating millions of fake accounts and bots impersonating real people with real-sounding names and photos, fooling millions with the false impression of consensus.”

The Center for Humane Technology has set itself a mighty challenge. How are people going to change digital practices in the face of UX that is weaponized with dark patterns that intend to keep us addicted? How are they going to take down the business model built on surveillance capitalism, which they refer to as the attention economy? If social media is addictive, what sort of twelve step program are they going to come up with? How do you sustain being clean? They might want to check out a program for how to detox from data.

What the Center identifies as the ‘monetization of attention’ is, actually, the extraction of personal data. (Curiously, they do not use the phrase ‘big data’, or ‘your personal data’, anywhere in their website text.) This attention (or personal data) is extracted from our digital and analog behavior and then used to profile and target us: to sell us lies and misinformation, or to worsen our depression by showing us advertising for make-up. And we are targeted even when we aren’t paying attention at all, like when we are walking down a street with mobile phones in our handbags. Information about us is being extracted to identify and profile us almost all the time because it is profitable.

How will the harmful effects of attention be arrested without a challenge to the monetization itself, and the values that sustain it? Your attention is valuable only because it is associated with an identity that exists in multiple geographies – financial, cartographic, intimate, socio-cultural, linguistic, religious, gendered, racialised – at the same time. These identities, and the attention that animates them, pop up across different devices, platforms, services and networks, making them identifiable and knowable, and thus easy to sell things to. Your identity is like cables and wires, and attention is the electricity that runs along the outside of, rather than in or through, these wires. For people who have already made fortunes by peddling the cables, wires, poles, and electricity, it sounds disingenuous to not confront the economic value underlying all of this.

The Center for Humane Technology constructs the problem in terms of addiction and therefore as one of individual attention. And while they acknowledge the importance of lobbying Congress and hardware companies (Apple and Microsoft will set us free, as if they don’t lock us into digital ecosystems and vie for our attention?), they emphasize a focus on individual action, be it that of tech workers or users. By invoking ‘addiction’ as a metaphor, they see the problem as being about individual attention, and eventually, individual salvation. Naming the co-founder of the Center, Harris, as the ‘conscience’ of Silicon Valley evokes a similar emphasis on individual rather than community, political, or structural dimensions to the attention economy and its dismantling, or restructuring. The use of the addiction metaphor has been criticized for at least twenty years, most notably by Sherry Turkle, mostly because it is neither apt, nor is there enough evidence of how it works as an addiction. ‘Diet’ metaphors and relationships-with-food metaphors may work better, perhaps, to characterize our relationships with technology.

However, it is also about design, they say: design has been weaponized to create addiction. By invoking both addiction and design there is a lack of a structural critique in addressing how complex social problems, such as children’s well-being or democracy, come to be. According to the Center, if you resist UX by turning your attention away, you can start to make a change by hitting the tech business where it hurts. And if tech businesses cease to get our attention, then democracy, social relations, mental health and children’s well-being might be salvaged. Frankly this accrues more power to UX and Design itself; and creates a sort of hallowed epistemology flowing from Design.

The assumption is that these social conditions and relationships somehow did not exist before social media, or have changed in the past ten years because of UX and its seductions. I believe this is both not-true, and also true. We do engage in politics and democracy through our devices and social media, and we do see the weakening of existing values and notions of governance; but there isn’t necessarily a direct causal relationship between them. This is not uniformly the case, nor evenly distributed around the world. There are muddied tracks around the bodies of these relationships.

Democracy as a design problem is not new. There has been considerable work over the past decade to enable citizens to use civic technology applications for transparency and accountability, holding governments to account and promoting democratic values and practices. It might help the Center to look at some lessons from around the world where democracy has been considered to be failing and technology was applied as a solution. To cherry-pick one relevant lesson (because this is a vast area of expertise and research that I cannot do justice to in this post): building a tool or a platform to foster democratic values or behavior does not necessarily scale. The lesson is that it doesn’t flow in the direction tech —> democracy.

Applying this to the case of the Center, but in inverse, you cannot approach technology and social change from a deterministic perspective. Technology will amplify and accentuate some things: there will be more ‘voices’, but most likely the voices of those who are already powerful in society will be heard the loudest. Networks of influence offline matter to how messages are amplified online; swarms of hate-filled hashtags, memes, and bots traverse the fluid connections between on and offline.

Fixing Facebook and Twitter is absolutely essential, but it is not the same as addressing the weakening of public institutions, xenophobia, poverty, the swing towards populism, the 2008 financial recession, or combinations of these. They need to happen in conjunction with each other. Democracy is actually about relationships among people, movements, and longstanding practices of activism and organising in communities.

Trying to change digital behaviour is difficult and complicated because of how our political and personal expression, relationships of care, work and intimacy, and maintenance of these relationships, are all bound up in a narrow set of platforms and devices. Disconnecting from the attention economy is more like a series of trade-offs and negotiations with yourself; like a constant algebra of maintenance, of digital choice-making, managing information flows across different activities and relationships; and some baseline digital hygiene.

It is hard to feel like you have arrived at a place of disconnection because of how perniciously deep these platforms and devices can go and how far they spread. There is something aspirational and athletic about trying to disconnect from the attention economy; it really is almost like a religious practice. I know this because I’ve consciously practiced this disconnection for some years because of where I worked and what I did there. (I practice less now because my work has also changed, but I am still conservative about what kinds of attention I give different platforms and services.)

Through this work I’ve been part of communities of technologists, security and privacy experts, activists, lawyers, policy advocates, human rights defenders, and artists, who construct their relationship with information and technologies in critical terms. These communities, highly creative and adept in their use of technology, understand the politics of information systems as continuous with the politics of governance, geopolitics, economics, history, gender, the law, and so on.

In these communities it is entirely normal to never know some of your friends on social media, to not  assume they are on social media in the first place, or to refer to people by their online handles rather than their actual given names.  Having a community of practice really is key to disconnection from the attention economy; and to supporting any other kind of personal de-addiction as well.

Many of us who practice disconnection from the big data attention economy use open source tools that are sometimes ugly because they don’t try to grab your attention (there is little investment in UX) but deliver a service instead; and we compartmentalize digital practices across different devices, identities, services and platforms. We may use social media, but selectively, and we don’t necessarily connect all of them with our actual identities and personal details. Many people I know actively try to get their immediate families to also disconnect from social media as a way to communicate; only a few succeed. The first thing to be hit is your personal relationships, as Palihapitiya notes in the video above.

It is entirely possible to live a Google-free life, as some of my ex-colleagues and friends do, but you make peace with the trade-offs and adjust your life accordingly. It’s like people who don’t drink Coca-Cola, or are vegetarian but not on the weekends, or would rather cycle than take transatlantic flights. An interesting point about Coca-Cola: in Berlin we have Afri-Cola and Fritz Cola (caffeinated and not; with and without sugar) as tasty and refreshing alternatives to Coca-Cola, which is also available in its many flavors. In some places there are structurally-afforded opportunities to be more flexible and make a wider range of choices. This is what we need from extractive technology industries – more control and more choices.

 

I Quit (2017). An installation by Thierry Fournier of video testimonials of why people quit their social media accounts and what happened next.

Despite the absence of a real structural critique of the attention problem, I believe the Center may be successful because they are well-placed in terms of money and influence. If the Center for Humane Technology actually worked to disarm UX, made it possible for us to move our personal networks to platforms of our choosing, baked ethics into how technology is made, and enabled regulation of the data trade and protections for users, then they might actually be disruptive. Let’s hope they succeed. In the meantime, the Center may find useful resources and ground-up expertise among those who have already been building movements for users to take control of their digital lives, such as:

Article 19; Bits of Freedom; Coding Rights; Committee to Protect Journalists; Cryptoparty; Data Detox Kit; Data Justice Lab; Derechos Digitales; Digital Rights Foundation; Electronic Frontier Foundation; Freedom of the Press Foundation; The Glass Room; Gobo.Social; Internet Freedom Festival; Mozilla Internet Health Project; Privacy International; Responsible Data Project; Security in a Box; Share Lab; Simply Secure; Surveillance Self Defence Kit; Tactical Technology Collective; Take Back The Tech.

Maya Indira Ganesh has been a feminist information-activist for the past decade, and most of that time was spent at Tactical Technology Collective. She lives in Berlin and is working on a PhD about the testing and standardization of machine intelligence. She does not drink Coca-Cola. She can be reached on Twitter @mayameme

Image by Mansi Thapliyal/Reuters, grabbed from a Quartz story on January 25, 2018

I dream of a Digital India where access to information knows no barriers – Narendra Modi, Prime Minister of India

The value of a man was reduced to his immediate identity and nearest possibility. To a vote. To a number. To a thing. Never was a man treated as a mind. As a glorious thing made up of stardust. – From the suicide note of Rohith Vemula, 1989–2016.

A speculative dystopia in which a person’s name, biometrics or social media profile determine their lot is not so speculative after all. China’s social credit scoring system assesses creditworthiness on the basis of social graphs. Cash disbursements to Syrian refugees are made through the verification of iris scans to eliminate identity fraud. A recent data audit of the World Food Program has revealed significant lapses in how personal data is being managed; this becomes concerning in Myanmar (one of the places where the WFP works) where religious identity is at the heart of the ongoing genocide.

In this essay I write about how two technology applications in India – ‘fintech’ and Aadhaar – are being implemented to verify and ‘fix’ identity against the backdrop of contestations of identity, and religious fascism and caste-based violence in the country. I don’t intend to compare the two technologies directly; however, they exist within closely connected technical infrastructure ecosystems. I’m interested in how both socio-technical systems operate with respect to identity.

Recently in Aadhaar

Aadhaar is the unique 12-digit number associated with biometric data that the Indian government’s national ID project (UIDAI) assigns to citizens. It is not mandatory to register for an Aadhaar number, however. Financial technologies, or fintech, are (primarily) mobile-phone-based apps for financing, loans, credit, retail payments, money transfers, asset management and other financial services. Fintech applications circumvent the high cost of banking services, the unavailability of ‘bricks-and-mortar’ banks, and the complex procedures required to open a bank account.

Fintech and Aadhaar are both supposed to be, loosely, technologies to manage complex and ‘wicked’ problems – poverty, corruption, and social exclusion. Both shape practices of identity verification and management through trails of financial transactions and biometric data. Aadhaar is expected to make bureaucratic processes smoother, like opening bank accounts, applying for a passport, receiving a pension, or buying a SIM card, through a speedy identity verification process. Missing identity documents, and fake ones, are both common in India.

On January 4, 2018, the Punjab-based newspaper The Tribune broke a story that, for Rs 500 (US$8), personal data associated with Aadhaar numbers could be accessed via a “racket” run on a WhatsApp group. It gets worse: “What is more, The Tribune team paid another Rs 300, for which the agent provided “software” that could facilitate the printing of the Aadhaar card after entering the Aadhaar number of any individual.”

Nine years after Aadhaar first entered Indian citizens’ awareness, we are contending with it as much more than a biometrics database and national ID project. It is also a public-private partnership and a government project, critical public infrastructure, a complex socio-technical system, a contested legal subject, and now, a flagrant security risk.

However, Indian civil society has been thinking about Aadhaar in all these terms; we have been researching, analysing and thinking about what it means to develop a biometric database for 1.25 billion people, how this data will be stored, how it will be integrated into other public infrastructure and systems, and what all of these will mean for the country’s most marginalised and disadvantaged citizens.

In critiquing the project, researchers, technologists, journalists, lawyers and activists have been trying to illustrate that this is more than just a matter of iris scanners. One  hilarious story stands out: in 2015, a man (who, it turns out, worked in an Aadhaar enrollment centre) applied for an Aadhaar card for his dog. Reverse engineering this, a biometrics-based unique identity project is not just about the biometrics, but about all the different social, technical, cultural and legal systems that biometrics-capturing technology is embedded in.

More sobering are accounts of ‘Aadhaar deaths’. An old woman in a rural area was denied her food subsidies because her Aadhaar number wasn’t found on the system; she eventually starved to death. A 13-year-old girl died of starvation because her family’s Aadhaar card had not been properly ‘seeded’ to the Aadhaar database and thus was not on the grid. Three older men, brothers, also died because they could not produce Aadhaar cards to claim their food subsidies. Identity stands out in the stories of these starvation deaths: the victims were old, widowed and young, and Dalit. Usha Ramanathan, a lawyer and expert on Aadhaar, has been calling out these risks for some time now; writing in the Indian Express, she notes that “illegality and shrinking spaces for liberty…have become the defining character of the project,” and she details a number of violations that affect the most disadvantaged in society.

These concerns have often been ignored by the state, but it cannot ignore The Tribune investigation. After initially denying the story, the UIDAI, the government agency that manages the Aadhaar project, filed a case against the Tribune journalist for ‘misreporting’, which is deeply problematic at many levels, not least for press freedom. However, faced with public pressure, the agency has begun to adopt practical suggestions to improve the security of Aadhaar transactions.

 

Grabbed from the internet via Google search on July 27, 2017

 

Fintech and verification

Fintech initiatives and biometrics claim to ‘reduce friction’ and increase trust in people (but really in big data infrastructures). OnGrid and IndiaStack are two such verification platforms being used in fintech applications. Like E-KYC, or ‘electronic-Know Your Customer’, a modality popular with banks to verify customer identities, OnGrid and IndiaStack verify an individual’s identity documents and offer ‘verification as a service’ to clients like fintech providers, and to facilitate the use of Aadhaar. Quick identity verification makes financial and other commercial transactions ‘frictionless’.

The OnGrid homepage has images of people who provide manual and domestic services – bell boys, domestic helpers, rickshaw drivers, personal drivers. The class dimension is impossible to ignore: people in these roles are presumed to be less trustworthy, or to have ‘uncertain’ identities, and OnGrid is a way to verify them, the images seem to say.

Lenddo is a credit rating scheme that uses a machine learning algorithm that needs only three minutes to process an individual applicant’s smartphone browsing history, GPS history, and social media to generate “insights” predictive of the applicant’s creditworthiness. However, as an employee of the company clarifies in an interview, a single data point, like a single Facebook post, does not determine creditworthiness:

"We gather at least 17,000 data points for every application…we look at behavioural patterns, not a single incident. We look at if you shop online and how much, not at all the things you buy. We need to look at a lot of data points to create a pattern and this is based on an algorithmic model we have developed…You also need to have a pattern of three years to determine something, not a random one-off thing." (Interview with 'KC' conducted via VOIP, Oct 18, 2017)
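To make the 'pattern, not a single incident' idea concrete, here is a minimal sketch of how thousands of raw data points might be aggregated into a handful of behavioural features and scored with a simple logistic model. The feature names, weights and bias are entirely invented for illustration; this is not Lenddo's actual model.

```python
# Illustrative only: behavioural features aggregated from many raw data points,
# scored with logistic regression. Features and weights are invented; this is
# not Lenddo's actual scoring model.
import math

def repayment_probability(features, weights, bias=-1.0):
    """Combine aggregated behavioural features into a probability of repayment."""
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1 / (1 + math.exp(-z))

# Each feature summarises thousands of raw data points collected over ~3 years.
applicant = {
    "months_of_history": 1.0,              # 36 of 36 months, normalised
    "online_shopping_regularity": 0.7,     # how regularly, not what is bought
    "gps_routine_stability": 0.8,
    "social_graph_churn": 0.2,
}
weights = {
    "months_of_history": 1.5,
    "online_shopping_regularity": 0.8,
    "gps_routine_stability": 1.0,
    "social_graph_churn": -1.2,
}
print(round(repayment_probability(applicant, weights), 2))  # ~0.84
```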

 


 

The interviewee goes on to tell me that India is one of the most profitable markets because of the lax legal and regulatory environment. (Her tone is hushed, almost reverential, when talking about the Indian market.) They would love to get into Europe but data protection guidelines are too strong, she says.

In the early days of banking and financial institutions in the industrial age, creditworthiness was based on an individual's social graph and personal networks. This evolved into creditworthiness based on actual financial indicators, such as income, assets and job security. Yet with fintech applications like Lenddo, as in the Chinese case, we see a return to an earlier metric of creditworthiness, albeit without an individual's full knowledge of how it works.

In July 2017, the Economic Times reported that two new fintech startups working in India, EarlySalary and ZestMoney, were using customers' online activity to track and verify them. The CEO of EarlySalary narrated a story in which the company rejected an online loan application made by a young woman who was perfectly eligible for it. They were able to ascertain that she was actually taking the loan out for her live-in boyfriend, who was unemployed. He had applied for a loan himself and had been rejected. Here is how they figured it out:

"The startup's machine learning algorithm used GPS and social media data — both of which the duo had given permissions for while downloading the app — to make the connection that they were in a relationship. The final nail in the coffin: the lady in question was transferring money every month to the boyfriend, which showed up in the bank statements they had submitted for the loans."

The story goes on to quote the CEO of ePayLater who says that their app’s machine learning algorithm uses anywhere from 800-5000 data points to assess a customer’s willingness and ability to repay a loan: from keyboard typing speeds (“if there is a lot of variation in your usual behavior, the machine will raise an alert”) to Facebook (“Accessing her social media, we learnt they were dating”) to LinkedIn (“To understand if one is working or not we usually check his LinkedIn profile”).
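The 'variation in your usual behaviour' check that the ePayLater CEO describes can be imagined as a very simple anomaly test. The sketch below is purely illustrative, not the company's actual model: it compares a session's typing rhythm against a stored baseline and raises an alert when the deviation is large.

```python
# Illustrative sketch only: flag a session whose typing rhythm deviates
# strongly from the user's historical baseline (a crude z-score check).
# The feature and threshold are hypothetical, not any lender's real model.
from statistics import mean

def typing_alert(intervals_ms, baseline_mean_ms, baseline_std_ms, threshold=3.0):
    """Return True if the current inter-keystroke rhythm looks anomalous."""
    current = mean(intervals_ms)
    z_score = abs(current - baseline_mean_ms) / baseline_std_ms
    return z_score > threshold

# Milliseconds between keystrokes in the current session vs. a stored baseline.
session = [180, 210, 650, 700, 190, 820, 240]
print(typing_alert(session, baseline_mean_ms=200, baseline_std_ms=40))  # True
```

The point of the sketch is how little it takes: a few summary statistics stand in for 'willingness and ability to repay', and the applicant has no visibility into the features or the threshold.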

Image from the OnGrid Home page grabbed on July 24, 2017

Identity and Violence

But what does it mean to verify or fix identity against a backdrop of ferocious religious violence, 'beef lynchings', and 'love jihad', on top of endemic caste discrimination and violence against Muslims? The porosity of the Aadhaar database and the Aadhaar starvation deaths are more than just technical lapses. These are serious breakdowns of complex socio-technical systems, and are not likely to inspire confidence in people who are marginalised. In India it is not uncommon for people to size each other up on meeting by asking 'where are you from?', which is shorthand for many things, including 'where are you on the social hierarchy in relation to me?'

In the early days of the Indian government’s 2016 ‘demonetisation’ drive to manage corruption and introduce negative interest rates, a friend tells me that his elderly Muslim parents received messages on WhatsApp groups saying that this was the Indian government’s way of harassing Muslims by taking away their money (Muslims are some of the poorest people in India) and reducing them to penury. The WhatsApp group messages urged older people to quickly take their money out of banks. Such heartbreaking stories of misinformation and disinformation are part of what it means to apply predictive algorithms and biometrics in already-stratified, violent and hierarchical societies.

The challenge to India's brutal caste system is not a new phenomenon, but it has picked up steam recently, thanks possibly to increased media attention to caste violence and student activism, among other factors. Caste, in the sense of jaat or jaati, is an enduring, fundamental variable in India's byzantine system of social stratification. Caste is particularly violent because it is fixed and yet an entirely social construct; a person born into a particular jaati can do nothing to escape it. It takes on the manner of something biologically determined and passed on from parent to child, with no option for conversion, mixing or 'lightening'. Endogamy, or marriage within jaati, is the expected norm. Inter-caste marriages happen but are met with everything from social snubs to criminal violence; the children resulting from such a union take on their father's caste. There is no quadroon version of jaati. There is 'passing', however: it is not uncommon for Dalit Indians to change their names in order to pass as members of a more favourable caste to access housing, jobs and social inclusion, and to avoid violence.

What it means to manage and hide identity, in terms of surnames, addresses, and social graphs known to machines, is chilling. At the present moment this does not seem particularly far-fetched, considering the state's encouragement of its religious fundamentalist, fascist base.

Epilogue / Prologue

On August 24, 2017, the Supreme Court of India passed a historic judgment upholding the constitutional right to privacy, drawing clear lines between personal identity, privacy, dignity and the health of a democracy. The judgment links privacy to personal identity, including sexual and reproductive identities and choices, and writes extensively about 'informational privacy':

“Knowledge about a person gives a power over that person. The personal data collected is capable of effecting representations, influencing decision making processes and shaping behaviour. It can be used as a tool to exercise control over us like the ‘big brother’ State exercised. This can have a stultifying effect on the expression of dissent and difference of opinion, which no democracy can afford.”

At the time of writing, the Indian government is on the fourth day of a hearing in the Supreme Court defending Aadhaar in light of the privacy ruling. In light of the Tribune investigation, and the absence of clear data protection guidelines in the country, 2018 is going to be an interesting year for Aadhaar – and hopefully a better one for Indian citizens – and for the future of the biometrics project in India.

Caste, biometrics and predictive algorithms are forms of power masquerading as knowledge about people, and social media-derived social graphs serve as proxies for trust in them. There is no tidy stack of big data infrastructures that can be plugged in to eliminate violence, poverty and corruption in India (or anywhere); but fintech and Aadhaar are the latest in a long history of schemes and programs attempting to do just that. There are untidy, imperfect interconnections between digital technology, privacy and the contestations and manipulations of identity currently underway in India. In addressing these places of imperfection, we must deconstruct the ways in which big and biometric data become a tool to perpetuate long-standing and deep-rooted forms of discrimination.

In the summer of 2017 I wrote an essay about financial technologies and Aadhaar, but for various reasons the essay was not published at the time. The present essay draws from the original and reflects changes in the landscape over the past five months since the Supreme Court ruling. The original essay was supported through work at Tactical Technology Collective. Some of these ideas were first developed for a panel at Transmediale in Berlin in January 2017.

Maya lives in Germany and on Twitter as @mayameme

 

View of an open pit gold mine.

 

“A mine is a complex space of flows” says Dr. Mostafa Benzaazoua.

I'm not expecting a professor of geological engineering to use a phrase from the media studies canon. I write in my notebook: "maybe media studies before mining science?!!!" Or perhaps that phrase has now entered into everyday scholarly parlance. Over the course of the next few hours, Dr. Benzaazoua gives us a detail-rich lecture on how gold is mined from the earth, and the spaces of flows the mine and its products inhabit. The next day we leave before dawn to visit Canada's largest open pit gold mine.

This post is a report on a visit to a large scale extraction facility, and its relationship to studies of infrastructure and technology. The visit contributes to my own (ongoing) research on machine learning, accountability and ethics, in which I argue that narratives of ethics and accountability are in fact about the evolution of measurements and standards for regulating and assessing human and non-human systems working together. This visit was organised as part of a Summer School called Planetary Futures conceived and led by Drs. Orit Halpern, Pierre-Louis Patoine, Marie-Pier Boucher and Perig Pitrou, and hosted by the Milieux Institute for Art, Technology and Culture at Concordia University.

Over the two weeks following the visit to the mine we journey – literally and figuratively – to the following places: a Mohawk reservation; waterways that enabled the development of the US and Canada as settler-colonial states; the Buckminster Fuller-designed Biosphere from Expo 67; Moshe Safdie's Habitat 67, an architectural vision for future housing in crowded cities; a future 'village' on the Moon to be built by various space agencies; the SF of Ursula K. Le Guin, J.G. Ballard, and Peter Watts; and the work of Sarah Sharma on critical temporalities and notions of 'exit', among others.

There is a logic in making these stops; each one relays histories and practices of extraction, colonialism, and imaginations of futures through speculation and design to the next stop. As the course description asks:

“…how we might imagine, and design, a future earth without escaping or denying the ruins of the one we inhabit?  How shall we design and encounter the ineffable without denying history, colonialism, or normalizing violence?  What forms of knowledge and experiment might produce non-normative ecologies of care between life forms? How shall we inhabit the catastrophe? … how we shall inhabit the world in the face of the current ecological crisis and to rethink concepts and practices of environment, ecology, difference, and technology to envision, and create, a more just, sustainable, and diverse planet.”

When you visit an open pit gold mine, it takes time for your eyes to adjust to the grayscale landscape. More lunar than Luxor, you don’t see anything even remotely golden at a gold mine, except perhaps the cheesy gold hard hats (we) visitors wear. We are watching the open pit of the mine from a viewing gallery many metres away and above it; it is very, very quiet here. You expect to hear something, but we’re too far away to hear the machines drill the earth and bring up rocks, which are loaded into large trucks. Each truck has eight wheels, each wheel costs $42,000 and is about ten feet high. The trucks lumber about like friendly, giant worker-animals. To drive them requires significant skill; we are told that women make better drivers. The trucks take the rocks away to the factory where they are analysed for gold.

Someone says something later about the mine being cyborg: the organic Earth, with its transformative automated elements – the drilling machines and trucks – and the 'intra-action' of the two being the mine itself.

Inside the gold mine’s factory facility. Each wheel costs CA$42,000 and is ten feet high.

 

A big truck that conveys rocks dug up from the mine pit to the factory for processing.

 

There is something hypnotic happening here. Standing in the light drizzle, the only colour comes from the yellow of the school bus that brought us here, and our safety jackets. We can’t take our eyes away from the pit. Many people are recording video and we are all taking pictures. It is as if we are waiting to see something important or extra-ordinary, as if something special might emerge because we’re looking at it.

I thought about gold for longer than I ever have in my life in those 30 minutes watching machines work the Earth: the symbolic and socio-cultural value of gold (particularly for Indians); the relationship between gold and finance; the Gold Rush and the Wild West; value; how ‘gold’ enters the vocabulary from ‘gold digger’ and ‘bling’ to ‘gold watch’, to anything prefixed by the word ‘golden’.

Time passes, and nothing happens. It is not actually hypnotic, I realise; it is meditative in the sense that it is oddly absorbing and empty at the same time. This is just another day in a gold mining factory, however. The longer you watch, the more apparent it becomes why the study of infrastructure is infused with a certain poetics.

Sulfide-rich ores like pyrite sometimes contain gold, and are found here along with chalcopyrite, which is a significant source of copper. Pyrite is also the technical name for Fool's Gold. The ores dug up from the ground must be analysed by spectrometric techniques, the gold identified and eased out through chemical processes, and the waste rock dumped and re-used elsewhere. 'Tailings' are the waste rock that contains no gold and must be recycled for other uses in the factory. In many instances, tailings must be carefully managed, or re-used, to contain the negative environmental outcomes of the mining process, particularly acid mine drainage: in other words, environmental contamination (Benzaazoua et al 2017).

Confronted with the expanse of the waste that is the tailings field, it is sobering to realise how little gold comes out at the other end. All the gold that has ever been mined from the earth would only fill two Olympic-sized swimming pools, says the engineer showing us around. For so little, you have to do so much. We are thinking about the unthinkable, and the incomprehensibility of scales when it comes to the planet.

Pyrite tailings field with school bus to transport PhD students and Summer School faculty.

We are close to what Thacker refers to as the 'world-in-itself' in his book on speculation, horror and philosophy, In the Dust of This Planet (hat tip to Daniel Rourke for this reference). The world we humans interpret and give meaning to, the world we relate to or feel alienated from, is the world-for-us. But the world that already exists, that is somehow inaccessible, the one we turn into the world-for-us through inquiry and study, is the world-in-itself. Both the world-for-us and the world-in-itself coexist, paradoxically. Unfortunately the world-in-itself is the one we get to know through natural disasters; it is the world that "resists, or ignores our attempts to mold it".

The 'world-in-itself' is something we as humans model predictively, and prophesy, usually through disaster. The world we will never experience, yet are fatalistically drawn to, is the world-without-us. Thacker says "the world-without-us lies somewhere in between, in a nebulous zone that is at once impersonal and horrific."

He offers another valuable abbreviation. The world-for-us is simply what we refer to as 'the World'; the world-in-itself is 'the Earth'; and the world-without-us is 'the Planet'. Being at the site of intensive extraction is to be somewhere between the world-in-itself and the world-without-us. The pyrite tailings field is one of the largest, bleakest landscapes I have ever experienced. The damp, windswept, grey day adds atmosphere. Perhaps this is what the surface of the moon might be like, or the world-without-us.

End of the line. The last stop in the process of gold being extracted from the earth.

Yet, we are back in the world-for-us before we know it. At the end of the tour we find ourselves somewhere both pleasant and ominous; like a scene out of a Tarkovsky film, says someone in the group. Crystal clear water rushes out from a canal and disappears into a thickly wooded forest. It is quiet save for the sound of water. There is a whiff of pine in the air. It could not be more bucolic.

Gold extraction requires chemical processes that leave numerous contaminants, which must be washed away; this washing increases the acid level in the water. Carbon dioxide is pumped into the water to restore its pH balance. "There are moose in that forest," says the lead engineer at the mine. "We return to nature now," he says with a warm smile. Here, extraction seems to fit into some sort of pre-ordained, cyclical, natural order of things.

One thread in the Summer School related to the application of ethnographic methods to the study of infrastructure. Through the visit to the mine and the towns around it, I kept a diary of us, of how we arrived in this place and started studying it, like anthropologists; and of how we relentlessly document, communicate, and share. I posted a lot of photographs to Instagram right through the two weeks, and particularly about the visit to the mine. (Later, I put them together as a patchwork speculative story here.) We were (are) as much a part of the world-in-itself, transforming it into the world-for-us.

 

Doctoral researcher documenting infrastructural processes.

 

"Infrastructure is things, as well as the relationship between things," says Brian Larkin. There is a sort of well-meaning hubris intrinsic to infrastructure-mapping exercises: eventually, you are not going to capture every part of the system. There are some things about the mine that we cannot know till Mostafa tells us. For example, the price of gold on the market affects the price of academics' houses in the mining towns of Quebec. The town he lives in will shut down when the gold runs out. There is talk of trying to visit one of the towns nearby that is 'dead', following the closure of a mine after the gold dried up. Someone else feels uncomfortable with us engaging in disaster-tourism for academic extraction. Searching online, I find that one of the mining towns in the region made a bid to be a Canadian Capital of Culture; they were rejected. I wonder if this is some sort of insurance for the future that will certainly come. One of the faculty tells us about how miners can be paid well, and so take out expensive mortgages and buy fancy SUVs. If the gold dries up, they're out of a job overnight. Someone else tells us that the under-12 suicide rate here is significant. The space of flows indeed.

 

Slide from presentation by Dr. Benzaazoua describing when various resources mined from the Earth will run out.

A phrase that pops up with annoying regularity these days is "data is the new oil". The study of big data metaphors is already rich. Metaphors of food and nutrition, as in the 'data detox' paradigm, are also gaining traction as ways of thinking about big data and controlling it (Sutton, 2017). There is much written and being said about rights (or the lack thereof) to individual privacy and big data; this is having significant effects, many of them invisible, on individuals, as well as on broader notions of democracy and freedom of speech and expression. Limiting the consumption of data is seen as one way to manage the manipulation of this personal data. Thus Tactical Technology Collective has a 'Data Detox Kit' to help people limit their online data traces (Disclaimer: I worked for Tactical Tech till a few weeks ago, and continue to consult with them part-time).

Cornelius Puschmann and Jean Burgess (2014) discuss two themes they find in the business and technology press: 'big data is a force of nature to be controlled' and 'big data is nourishment/fuel to be consumed'. They find that the metaphors used thoroughly disguise the agency of data creation by evoking natural source domains. Data is described as a commodity to be exploited. They write: "Through the use of a highly specific set of terms, the role of data as a valued commodity is effectively inscribed (e.g., "the new oil"; Rotella, 2012), most often by suggesting physicality, immutability, context independence, and intrinsic worth."

The comparison of data to oil neatly elides the histories of violence that mark the human relationship to oil in particular, and extraction more broadly. The comparison seems to justify how data is to be extracted from people and internet objects, and traded in a similar manner. At the same time, the comparison indicates a shift in the valuation of value itself. Just as the phrase ‘x is the new black’ suggests the dominance of a new metric of fashion, ‘data is the new oil’ implies that we start valuing things as, or against, data. And perhaps the coiners of such phrases know that the standard measures of value, oil and gold, are running out.

References

Brian Larkin. 'The Politics and Poetics of Infrastructure'. Annual Review of Anthropology 42 (2013): 327–43. DOI: 10.1146/annurev-anthro-092412-155522.

Eugene Thacker. In the Dust of This Planet: Horror of Philosophy Vol. 1. 2011. Zero Books.

M. Benzaazoua, H. Bouzahzah, Y. Taha, L. Kormos, D. Kabombo, F. Lessard, B. Bussière, I. Demers, M. Kongolo. 'Integrated environmental management of pyrrhotite tailings at Raglan Mine: Part 1 challenges of desulphurization process and reactivity prediction'. Journal of Cleaner Production 162 (2017): 86–95. http://dx.doi.org/10.1016/j.jclepro.2017.05.161

Maya Ganesh is a reader, writer, technology researcher, and feminist killjoy who recently left full-time NGO work after two decades, most recently at Tactical Tech in Berlin, to spend more time on her PhD about machine learning and ethics. She tweets as @mayameme

The Planetary Futures Summer School is assembling the results of our two weeks together as group and individual projects that will be published in the coming months.

All photographs used in this post are by Maya Ganesh taken in Quebec, Canada on August 4, 2017 and are licensed under Creative Commons Attribution Share-Alike 4.0 International

Protestors in Hamburg, Germany. July 7, 2017. Photo: Maya Ganesh

On July 6, 7, and 8 the police established a thick cordon around the Hamburg Messe and Congress where the G20 was taking place. It separated delegates from the thousands of protestors who had converged on the city.

From the distinctive, red Handmaid cloaks worn by Polish feminists, to Greenpeace in boats off the harbour, to radical and Left groups in Europe, the G20 brought together diverse communities of protest from around the world. Protestors were there to tell leaders of the world’s most powerful economies that they were doing a terrible job of running the planet.

Not all of the protests were peaceful. The violence by some protestors and by the police against them has formed a substantial part of the reportage about the G20. In this post, I share some experiences and insights from the protests and their mediation.

The anti-Trump protests, the anti-Caste discrimination movements and activism on Indian university campuses against the Modi government, the ‘Science March’, the Women’s Marches, protests against Michel Temer in Brazil, #FeesMustFall and its sister protests in South Africa, and the post-Brexit marches to name a few, have captured local and national attention.

These protests have generated discussion about the shifting dynamics of political participation, popular resistance, and media, from the documentation of clever signs, the transition of protest memes from the online to the offline and back again, to the inspirational images of Saffiyah Khan and Ieshia Evans confronting violence.

 

Protestors and police in Landungsbrücke, Hamburg. July 7, 2017. Photo: Maya Ganesh

The anti G20 protests in Hamburg have resulted in damage to property, and violence by the police against protestors. According to a legal team representing activists, 15 people have been arrested and 28 are in preventive custody. For a few hours I got to observe how the police were responding to protestors. I did not witness any specific acts of violence by protestors. It was also not always easy to tell a ‘protestor’ from anyone else who might have just been an aggressive element on the streets. The city felt chaotic that day.

Late in the afternoon, many of us were in a park on Helgoländer Allee. We could see water cannons, unmarked black vans, police cars lined up ahead, and protestors on the bridge above us. Formations of police in riot gear, their faces invisible behind visored helmets, jogged past. Protestors on the ridge above us were yelling. Riot police were running at us through the park, and chasing protestors down from the ridge. Suddenly, we were running, but not sure exactly where, because police were coming at us from two directions. Many of us in the park – curious locals, tourists, media workers – were not part of protest groups, and yet felt intimidated by these swarms of police. "If you're wearing black, they're not asking questions, they're assuming you're a protestor," said someone behind me. 'Schwarzer Block', or Black Bloc, tactics were in evidence.

An hour later, we walked down Helgoländer Allee towards Landungsbrücken station. Someone in our group wanted to try Hamburg's famous Fischbrötchen – cured fish, remoulade, and pickles in bread – so we thought the touristy harbour promenade area might be a relatively quiet place to catch a break. It turned out that the promenade wasn't off limits to the police: they were going anywhere there were protestors, some of whom also wanted Fischbrötchen. The police sprayed protestors with water cannons, and started pushing and shoving protestors and non-protestors alike.

Again, we found ourselves surrounded by riot police yelling raus, raus (leave, go). We didn't feel particularly heroic and wanted to leave, which we eventually did, but only after being pushed down the promenade. We broke away as soon as we could. Protestors and police continued down towards the St. Pauli Fish Market. Above us, customers at the Hard Rock Cafe were filming everything, Pilsners in hand, and chicken wings for after.

FCMC

Before I got to witness police tactics of intimidation, my day in Hamburg started with exiting the Landungsbrücken subway station and walking up to Millerntor stadium, home of the local football team St. Pauli (also associated with a brand popular with Punk and 'alternative' consumers), where FC⚡MC had their headquarters.

FCMC describes itself as a "material-semiotic device for bloggers and Twitterers, editorial collectives and staff, video activists, free radios, precarious media workers and established journalists to re-invent critical journalism in times of affective populism." With technical support from the Chaos Computer Club, and something of an heir to the IndyMedia tradition, FCMC is a space for, and in support of, media workers reporting on the G20.

Anyone with accreditation could get a space to write, edit, and post reports online. It was not difficult to join FCMC. All you had to do was accredit yourself via their website. The accreditation letter and wrist tag were something of a relief later in the day when we encountered the police.

While the MC in FCMC is clearly 'media centre', the 'FC' expands into charming, ephemeral "suggestions from the outside" every time you refresh the webpage: "FC might mean Free Critical, For Context, Free Communication, Future Commons, Fruitful Collaborations, Friends of Criticism, Finalize Capitalism, Flowing Capacities, Freedom Care, Flowing Communism, or even Fight Creationism. But remember: these are just suggestions from the outside. The FCMC itself, in its autonomy and autology, might choose different ones."

FCMC planned to organise press conferences following developments at the G20, based on the reportage and views of alternative and independent media and activists. As the protests grew, however, and came to dominate coverage of the event, I couldn't help feeling that this possibly put FCMC in a difficult situation. FCMC was also a space for media workers aligned with protest groups, particularly Left-leaning groups. But FC⚡MC could not directly promote protest actions. Reporters talking about police violence and the conditions around the protests were, however, given a platform by FCMC.

Drawing on support and resources from its networks, rather than owning the infrastructures of media production itself, FCMC attempts to introduce diversity into the media ecosystem by itself being a pop-up. It is unclear if it will emerge elsewhere, and that is perhaps the point. This raises the difficult question of what kinds of control – editorial or otherwise – are lost and gained in adopting this flexible, in-a-box approach.

The ‘Inappropriate’ Selfie

 

The inappropriate selfie is fresh social media meat. A recent selfie from the Hamburg protests, with a sarcastic caption by @JimmyRushmore and re-tweeted more than 35,000 times in two days, foregrounds the struggle around selfies and mediated political participation.

(There is a lot going on in this particular image, and in the online discussion about it. My intention in this diary is not to attempt a full or in-depth analysis, but to share some impressions that might inspire future discussions.)

The tweet was about selfie culture, tech object fetishisation, and the irony of a selfie with the latest iPhone (one tweeter was quick to point out it was not an iPhone 7 but a 6) in a protest against Capitalism.

Rushmore’s tweet may have been one of those throwaway comments that makes for great Twitter: a critique of pop culture, ruefulness at the mediation of everyday life through social media, and the struggle to live a consumption-conscious life in the absence of infrastructures to support this consciousness. But it has become a lot more than that now.

Like funeral selfies, or Holocaust Memorial-selfies, the discussion around this selfie is about the question of legitimacy: Are your politics legitimate if you’re taking a photograph of yourself in a riot against Capitalism? What are you, some kind of riot hipster? (“Riot hipster” is a Reddit word).

In a 2014 AoIR conference paper, Martin Gibbs and his collaborators discuss the controversy around funeral selfies as an example of 'boundary work', that is, rhetorical work negotiating the space between the legitimacy and illegitimacy of online behaviour. Similarly, "riot porn" is a word that has come up in some offline conversations to describe the media's emphasis on the riots and protests, as well as how the G20 protests were a spectacle that many of us participated in.

A recent conversation on Twitter (that includes Cyborgology folks) takes on porn as a suffix: there is a question of legitimacy here, again, and about the policing of how something is to be enjoyed.

Screen grab from Twitter. July 10, 2017.

Paul Frosh discusses selfies in terms of how they tip the balance of 'indexicality' that all photographic images are about: "It deploys both the index as trace and as deixis to foreground the relationship between the image and its producer because its producer and referent are identical. It says not only 'see this, here, now,' but also 'see me showing you me.'" So in this case, you have 'see me showing you me' participating in this particular political event that is dangerous; see what I risk for my politics. This selfie is unadulterated hyper-masculinity caught up in establishing its own calibrations of legitimacy.

Almost as soon as Rushmore’s tweet went up, so did digital sleuthing: Is this a real image or a doctored one? In a long Twitter thread following the selfie, Rushmore finds himself fending off these discussions, but image literacy is now an inescapable part of the mediation of images of events.

Back

Hamburg was shut down on Friday, and many of us had to walk some miles around the heavily secured Messe area to get to the only train station working that day. Close to twenty cars were burned and property was damaged. Hamburg is not without its rich history of countercultural and protest politics. These sorts of events are not new here but are infrequent now (unless perhaps you’re a newly arrived brown immigrant). A German friend who has been involved in activism for many years watches videos from Hamburg and says “good, it is good, let the protests come back to Hamburg. We need to deal with what has happened.”

Maya Ganesh is a tech researcher, writer, and feminist infoactivist living in Berlin


Screen grab, Ghost in the Shell. Mamoru Oshii, 1995.

In April this year, thanks to Stephanie Dinkins and Francis Tseng, artists-in-residence at New Inc., I experimented with a workshop called ‘Imagining Ethics’. The workshop invites participants to imagine and work through everyday scenarios in a near future with autonomous vehicles and driving. These scenarios are usually related to moral conflicts, or interpersonal, social, and cultural challenges. This post is about situating this experiment within the methods and approaches of  design fiction, a technique to speculate about near futures.

Julian Bleecker writes about design fiction as “totems” through which projections into the future may be assembled:

“Design fictions are assemblages of various sorts, part story, part material, part idea-articulating prop, part functional software. …are component parts for different kinds of near future worlds. They are like artifacts brought back from those worlds in order to be examined, studied over. They are puzzles of a sort….They are complete specimens, but foreign in the sense that they represent a corner of some speculative world where things are different from how we might imagine the “future” to be, or how we imagine some other corner of the future to be. These worlds are “worlds” not because they contain everything, but because they contain enough to encourage our imaginations, which, as it turns out, are much better at pulling out the questions, activities, logics, culture, interactions and practices of the imaginary worlds in which such a designed object might exist.”

I'm interested in what speculation about near futures means for discussions of ethics in the context of ubiquitous computing and artificial intelligence. Are we creating ethics for now, or for the future? And if for the future, then how do we envisage future scenarios that require ethical decision-making?

Ethics are frameworks for values, some of which are codified in the law; some, related to AI-based technologies, currently challenge the law. Autonomous driving worries the law: the narrative around autonomous vehicles has tended to focus on the opportunities and limitations of software to make 'ethical' decisions. Simply put, how can driverless car software be held responsible for making decisions that may result in the loss of human life? I've argued that this approach places software and its creation as central to the construction of ethics, rather than the wider social, political, cultural, and economic conditions in which autonomous driving is, or will be, situated. Is it possible that the ethical implications of autonomous driving will involve more than just the philosophical conundrums defined by the Trolley Problem?

Drawing on Mike Ananny's definition of technology ethics as "a mix of institutionalized codes, professional cultures, technological capabilities, social practices, and individual decision making", I claim that ethics is contextually and situationally produced. Imagining Ethics takes the form of a workshop to identify and discuss these contexts and situations.

So, at New Inc, a group of people sat down for a little over two hours to work through scenarios projected to occur five years from the present moment. Here are the scenarios:

  • Develop a Quick Start Guide (QSG) for the new autonomous vehicle owner (following inspiration from the Near Future Laboratory’s QSG). What kinds of information would a new owner of an autonomous vehicle need?
  • Script an interaction between a thirteen year-old and her parents in which the young person is trying to negotiate hiring an autonomous vehicle out to a movie with her friends and without any adult chaperones.
  • Two security guards are in charge of a garage full of driverless trucks; but, one day, a truck goes missing. Develop the synopsis of a movie of any genre (Rom-com, Sci fi, Road movie, Zombie film, etc) based on this starting point and ending with the guards finding the truck. (This one was a little different from the other two).

In terms of process, the group was given these starting points but not much else. The idea was that they speculate 'up' into how these scenarios might unfold; there were very few rules about how to go about this speculation. In giving the group a broad remit, I attempted to evoke the aspirations, concerns, and questions they might have about autonomous driving. I aimed to emphasise the social-political-cultural and psychographic aspects of decision-making in a future everyday with autonomous vehicles and driving.

The point was not to be pedantic about how a near future technology might work, or accurately predict what the future might be exactly like. What was important were the conversations and negotiations in sketching out how existing and near future artifacts might interact with human conditions of social and political life.

Imagination can be valuable in thinking about applications of technology in society; and I refer to the imagination in the sense that Sheila Jasanoff and Sang-Hyun Kim do, as a "crucial reservoir of power and action". Their book, Dreamscapes of Modernity: Sociotechnical Imaginaries and the Fabrication of Power, discusses socio-technical imaginaries, collectively sustained visions of how science and technology come to be embedded in society. STS (Science and Technology Studies) aims to bring "social thickness and complexity" into the appreciation of technological systems. However, the authors argue that STS lacks "conceptual frameworks that situate technologies within the integrated material, moral, and social landscapes that science fiction offers up in abundance." Thus they propose socio-technical imaginaries as "collectively held, institutionally stabilized, and publicly performed visions of desirable futures, animated by shared understandings of social life and social order attainable through, and supportive of, advances in science and technology."

While an imagination of a future does not necessarily contribute to an imaginary in a causal or linear sense, I bring the two words together to suggest that there are parallel processes underway, at different scales. While an imagination may be private and local, embodying contextual aspirations and anxieties of a future scenario, an imaginary may operate more like a Foucaultian apparatus through the interaction of multiple, social agents, powerfully shaping the emergence of technologies.

Cinema is a space where future imaginations of cities have been generated. There is a visual and textural thread running through the future cities of Ghost in the Shell (Mamoru Oshii, 1995), adapted from Masamune Shirow's manga classic, Total Recall (Paul Verhoeven, 1990/Len Wiseman, 2012), Elysium (Neill Blomkamp, 2013), Bladerunner (Ridley Scott, 1982), and A.I. (Steven Spielberg, 2001), among others. (Why filmmakers continue to replicate particular architectural or visual tropes in all these future city visions is another matter.) These films depict future cities as vertical spaces to which the middle and upper classes escape, with the poor (and sometimes, the Resistance) living in labyrinthine warrens or subterranean cities. Rain, or a constant drizzle, slick roads, and pools of water appear as another common motif in many of these urban dystopias, possibly signalling flooding from climate change, and resulting in a palette of blues and greys.

The visual representation of everyday life in these future cities is a particular project that brings us closer to the work of design fiction. How did Spielberg come up with the idea that the side of the cornflakes box in Minority Report would be a screen showing cartoons? Or that Tom Cruise's John Anderton, head of the Precrime Division, would investigate criminal cases through a gestural interface? Philip K. Dick certainly didn't write these things in the short story the film is based on. That interface has become iconic in discussions of the cinematic shaping of future visions. The sliding, swiping, twirling of dials, and pinching of the screen to zoom out didn't exist in 2002 when the film was released. We didn't even have social media or smartphones.

Much of this is intentional, writes David Kirby. Cinema has become a space for scientists, technologists and filmmakers to collaborate on "diegetic prototypes", 'real' objects that people in the film's fictional world actually use convincingly. As Kirby notes, by engaging seriously with design to create a vision of the future, filmmakers and scientists create legitimacy for how the future will look at a granular and everyday level. And the granular and everyday are important in terms of thinking about how we will actually encounter future technologies. As Julian Bleecker writes, "everyday aspects of what life will be like [in the near future] — after the gloss of the new purchase has worn off — tell a rich story".

It is in this space of the mundane and the everyday in the near future with driverless cars that we, as consumers, customers, scientists, scholars, activists, and lawyers, may have to start engaging with new framings of ethics. Some of these scenarios may not be so different from what we encounter now, in the sense that social and economic inequalities will not cease to exist with the arrival of autonomous driving. How do you hold a fleet taxi service like Uber, with autonomous vehicles, accountable for an accident? Who or what is responsible when an autonomous vehicle's mapping system avoids "high crime neighbourhoods", and thereby doesn't offer services to an entire community? How might a 'victim' in an accident – a road accident, a data breach, a data exposure – involving a driverless car claim insurance?

Spurred by opportunities in the fictive and the fictional, the Imagining Ethics workshop method is part of ongoing research and practice that seeks to understand how ethics may be reframed in a society with ubiquitous computing and artificial intelligence. What counts as a moral challenge in this society, and how will decisions about such challenges be made? Is it possible to nurture people's aspirations and imaginations into imaginaries of ethics in artificial intelligence, imaginaries that alleviate the social, political, cultural and economic conditions of life in present and future societies with AI? It is time to find out.

Maya Ganesh is a Berlin based researcher, writer, and information activist. She works at Tactical Tech and is a doctoral candidate at Leuphana University.

References

Ananny, M. 2016. Toward an Ethics of Algorithms: Convening, Observation, Probability, and Timeliness. Science, Technology, & Human Values 2016, Vol. 41(1) 93-117.

Bleecker, J. 2009. Design Fiction: A Short Essay on Design, Science, Fact, and Fiction. Near Future Laboratory Blog. Accessed online: http://blog.nearfuturelaboratory.com/2009/03/17/design-fiction-a-short-essay-on-design-science-fact-and-fiction/

Jasanoff, S. and Kim, S. 2015. Dreamscapes of Modernity: Sociotechnical Imaginaries and the Fabrication of Power. Chicago, IL: University of Chicago Press.

Kirby, D. 2010. The Future is Now: Diegetic Prototypes and the Role of Popular Films in Generating Real-world Technological Development. Social Studies of Science 40/1 (February 2010) 41–70.

 

 

Street view, Calcutta, 2010.

 

According to its author J.G. Ballard, Crash is 'the first pornographic novel based on technology'. From bizarre to oddly pleasing, the book's narrative drifts from one erotically charged description to another of mangled human bodies and machines fused together in the moment of a car crash. The "TV scientist", Vaughan, and his motley crew of car crash fetishists seek out crashes in-the-making, even causing them, just for the thrill of it. The protagonist of the tale, one James Ballard, gets deeply entangled with Vaughan and his ragged band after his own experience of a crash. Vaughan's ultimate sexual fantasy is to die in a head-on collision with the actress Elizabeth Taylor.

Like an STS scholar version of Vaughan, I imagine what it may be like to arrive on the scene of a driverless car crash, and draw maps to understand what happened. Scenario planning is one kind of map-making to plan for 'unthinkable futures'.

The 'scenario' is a phenomenon that became prominent during the Korean War, and through the following decades of the Cold War, as a way for the US army to plan its strategy in the event of nuclear disaster. Peter Galison describes scenarios as a "literature of future war", "located somewhere between a story outline and ever more sophisticated role-playing war games", "a staple of the new futurism". Since then, scenario-planning has been adopted by a range of organisations, and features in the modelling of risk and the identification of errors. Galison cites the Boston Group as having written a scenario – their very first one – in which feminist epistemologists, historians, and philosophers of science running amok might present a threat to the release of radioactive waste from the Cold War ("A Feminist World, 2091").

The applications of the Trolley Problem to driverless car crashes are a sort of scenario planning exercise. Now familiar to most readers of mainstream technology reporting, the Trolley Problem is presented as a series of hypothetical situations with different outcomes, derived from a pitting of consequentialism against deontological ethics. Trolley Problems are constructed as either/or scenarios where a single choice must be made. MIT's Moral Machine project materialises this thought experiment with an online template in which the user completes scenarios by instructing the driverless car about which kinds of pedestrians to avoid in the case of brake failure: runaway criminals, pets, children, parents, athletes, old people, or fat people. Patrick Lin has a short animation describing the application of the Trolley Problem in the driverless car scenario.
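To see how reductive the either/or framing is, it helps to write it down. The sketch below encodes a Trolley-style choice as a comparison of summed 'harm costs'; the categories and weights are invented here purely to make the computation visible, and have nothing to do with the Moral Machine's actual scoring.

```python
# A deliberately crude sketch of the either/or framing: each outcome gets a
# summed 'cost' of harm and the lower cost wins. Categories and weights are
# invented for illustration; this is not the Moral Machine's scoring.
HARM_COST = {"pedestrian": 1.0, "passenger": 1.0, "pet": 0.3}  # invented weights

def choose(stay_on_course, swerve):
    """Return the action whose outcome has the lower total 'cost'."""
    cost = lambda outcome: sum(HARM_COST[kind] * n for kind, n in outcome.items())
    return "swerve" if cost(stay_on_course) > cost(swerve) else "stay on course"

# Brake failure: staying on course harms two pedestrians, swerving harms one passenger.
print(choose({"pedestrian": 2}, {"passenger": 1}))  # -> swerve
```

Everything that makes the situation ethically difficult, including who sets the weights and who is counted at all, sits outside the computation.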

 

The Trolley Problem, like the Pascalian Wager (Bhargava, 2016), is applied in an attempt to create what-if scenarios. These scenarios guide the technical development of what has become both holy grail and smokescreen in talking about autonomous driving (in the absence of an actual autonomous vehicle): how can we be sure that driverless car software will make the right assessment about the value of human lives in the case of an accident?

These scenarios and their outcomes are being referred to as the 'ethics' of autonomous driving. In the development of driverless cars we see an ambition to build what James Moor refers to as an 'explicit ethical agent' – one that is able to calculate the best action in an ethical dilemma. I resist this construction of ethics because I claim that relationships with complex machines, and the outcomes of our interactions with them, are not either/or, and are perhaps far more entangled, especially in crashes. There is an assumption in applications of Trolley Problems that crashes take place in certain specific ways, that ethical outcomes can be computed in a logical fashion, and that this will amount to accountability for machine behaviour. There is an assumption that the driverless car is a kind of neoliberal subject itself, which can somehow account for its behaviour just as it might some day just drive itself (thanks to Martha Poon for this phrasing).

Crash scenarios are particular moments that are being planned for and against; what kinds of questions does the crash event allow us to ask about how we're constructing relationships with, and ideas about, machines with artificial intelligence technologies in them? I claim that when we say "ethics" in the context of hypothetical driverless car crashes, what we're really asking is "who is accountable for this?", or "who will pay for this?", or "how can we regulate machine intelligence?", or "how should humans apply artificially intelligent technologies in different areas?". Some of these questions can be drawn out in terms of an accident involving a Tesla semi-autonomous vehicle.

In May 2016, a US Navy veteran was test-driving a Model S Tesla semi-autonomous vehicle. The test driver, who was allegedly watching a Harry Potter movie at the time with the car in 'autopilot' mode, drove into a large tractor trailer whose white surface was mistaken by the computer vision software for the bright sky. The car thus did not stop, and went straight into the truck. The fault, it seemed, was the driver's for trusting the autopilot mode, as the Tesla statement after the event suggests. 'Autopilot' in the semi-autonomous car is perhaps misleading for those who go up in airplanes, so much so that the German government has told Tesla that it cannot use the word 'autopilot' in the German market.

In order to understand what might have happened in the Tesla case, it is necessary to look at applications of computer vision and machine learning in driverless cars. A driverless car is equipped with a variety of sensors and cameras that will record objects around it. These objects will be identified by specialised deep learning algorithms called neural nets. Neural nets are distinct in that they can be programmed to build internal models for identifying features of a dataset, and can learn how those features are related without being explicitly programmed to do so (NVIDIA 2016; Bojarski et al 2016).

Computer vision software makes an image of an object and breaks that image up into small parts – edges, lines, corners, colour gradients and so on. By looking at billions of images, neural nets can identify patterns in how combinations of edges, lines, corners and gradients come together to constitute different objects; this is its ‘model making’. The expectation is that such software can identify a ball, a cat, or a child, and that this identification will enable the car’s software to make a decision about how to react based on that identification.

Yet, this is a technology still in development, and there is the possibility for much confusion. So, things that are yellow, or things that have faces and two ears on top of the head, which share features such as shape, or where edges, gradients, and lines come together, can be misread until the software sees enough examples that distinguish how things that are yellow, or things with two ears on the top of the head, are different. The more complex something is visually, without solid edges, curves or single colours, or the faster, smaller, or more flexible an object is, the more difficult it is to read. So, computer vision in cars is shown to have a 'bicycle problem' because bicycles are difficult to identify, do not always have a specific, structured shape, and can move at different speeds.
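For readers unfamiliar with how such a classifier is put together, here is a minimal sketch of a convolutional neural network that maps a camera frame to one of a few object labels. It assumes PyTorch is installed; the class list is invented and the network is untrained. It is a toy illustration of the 'model making' described above, not any vendor's actual perception stack.

```python
# Toy sketch of image classification with a small convolutional neural network.
# Assumes PyTorch; labels are hypothetical and the network is untrained.
import torch
import torch.nn as nn

CLASSES = ["sky", "truck", "bicycle", "pedestrian"]  # hypothetical labels

class TinyClassifier(nn.Module):
    def __init__(self, num_classes=len(CLASSES)):
        super().__init__()
        # Convolutional layers learn edge-, corner- and gradient-like features.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # A linear layer combines those features into scores for each class.
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):  # x: a batch of 64x64 RGB images
        return self.classifier(self.features(x).flatten(1))

model = TinyClassifier()
frame = torch.rand(1, 3, 64, 64)                    # stand-in for a camera frame
label = CLASSES[model(frame).argmax(dim=1).item()]  # most probable label (meaningless until trained)
print(label)
```

A real driving stack trains networks like this (and far larger ones) on millions of labelled frames; the Tesla case turns on what happens when the learned features fail to separate a white trailer from a bright sky.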

In the case of the Tesla crash, the software misread the large expanse of the side of the trailer truck as the sky. It is possible that the machine learning was not well-trained enough to make the distinction. The Tesla crash suggests that there was both an error in the computer vision and machine learning software, and a lapse on the part of the test driver, who misunderstood what autopilot meant. How, then, are these separate conditions to be understood as part of the dynamic that resulted in the crash? How might an ethics be imagined for this sort of crash, which comes from an unfortunate entanglement of machine error and human error?

Looking away from driverless car crashes, and to aviation crashes instead, a specific narrative around the idea of accountability for crashes emerges. Galison, again, in his astounding chapter, An Accident of History, meticulously unpacks aviation crashes and how they are investigated, documented and recorded. In doing so he makes the point that it is near impossible to escape the entanglements between human and machine actors involved in an accident. And that in attempting to identify how and why a crash occurred, we find a "recurrent strain between a drive to ascribe final causation to human factors and an equally powerful, countervailing drive to assign agency to technological factors." Galison finds that in accidents, human action and material agency are entwined to the point that causal chains both seem to terminate at particular, critical actions, and radiate out towards human interactions and organisational cultures. Yet, what is embedded in accident reporting is the desire for a "single point of culpability", as Alexander Brown puts it, which never seems to come.

Brown's own accounts of accidents and engineering at NASA, and Diane Vaughan's landmark ethnography about the reasons for the Challenger Space Shuttle crash, suggest the same: everything from organisational culture and bureaucratic processes, to faulty design, to a wide swathe of human errors, to combinations of these, is implicated in how crashes of complex vehicles occur.

Anyone who has ever been in a car crash probably agrees that correctly identifying exactly what happened is a challenging task, at the least. How can the multiple, parallel conditions present in driving and car crashes be conceptualised? Rather than an 'ethics of driverless cars' as a set of programmable rules for appropriate action, could it be imagined as a process by which an assemblage of people, social groups, cultural codes, institutions, regulatory standards, infrastructures, technical code, and engineering are framed in terms of their interaction? Mike Ananny suggests that "technology ethics emerges from a mix of institutionalized codes, professional cultures, technological capabilities, social practices, and individual decision making." He shows that ethics is not a "test to be passed or a culture to be interrogated but a complex social and cultural achievement" (emphasis in original).

What the Trolley Problem scenario and the applications of machine learning in driving suggest is that we're seeing a shift in how ethics is being constructed: from accounting for crashes after the fact, to pre-empting them (though the automotive industry has been using computer-simulated crash modelling for over twenty years); from ethics that is about values, or reasoning, to ethics based on datasets of correct responses; and, crucially, to ethics as the outcome of software engineering. Specifically in the context of driverless cars, there is a shift from ethics as a framework of "values for living well and dying well", as Gregoire Chamayou puts it, to a framework for "killing well", or 'necroethics'.

Perhaps the unthinkable scenario to confront is that ethics is not a machine-learned response, nor an end-point, but a series of socio-technical, technical, human, and post-human relationships, ontologies, and exchanges. These challenging and intriguing scenarios are yet to be mapped.

Maya is a PhD candidate at Leuphana University and is Director of Applied Research at Tactical Tech. She can be reached via Twitter.

(Some ideas in this post have been developed for a paper submitted for publication in a peer-reviewed journal, APRJA.)