There has been a steady stream of articles about and by “reformed techies” who are coming to terms with the Silicon Valley ‘Frankenstein’ they’ve spawned. Regret is transformed into something more missionary with the recently launched Center for Humane Technology.

In this post I want to focus on how the Center has constructed what they perceive as a problem with the digital ecosystem: the attention economy and our addiction to it. I question how they have framed the problem in terms of individual addictive behavior and design, rather than structural failures and gaps, and I consider the challenges of disconnecting from the attention economy. I end my questioning, however, with an invitation to them to engage with organisations and networks that are already working on addressing problems arising out of the attention economy.

Sean Parker and Chamath Palihapitiya, early Facebook executives and investors, are worried about the platform’s effects on society.

The Center for Humane Technology identifies social media – the drivers of the attention economy – and the dark arts of persuasion, or UX, as culprits in the weakening of democracy, children’s well-being, mental health and social relations. Led by Tristan Harris, aka “the conscience of Silicon Valley”, the Center wants to disrupt how we use tech, and get us off all the platforms and tools most of them worked to get us on in the first place. They define the problem as follows:

“Snapchat turns conversations into streaks, redefining how our children measure friendship. Instagram glorifies the picture-perfect life, eroding our self worth. Facebook segregates us into echo chambers, fragmenting our communities. YouTube autoplays the next video within seconds, even if it eats into our sleep. These are not neutral products. They are part of a system designed to addict us.”

“Pushing lies directly to specific zip codes, races, or religions. Finding people who are already prone to conspiracies or racism, and automatically reaching similar users with “Lookalike” targeting. Delivering messages timed to prey on us when we are most emotionally vulnerable (e.g., Facebook found depressed teens buy more makeup). Creating millions of fake accounts and bots impersonating real people with real-sounding names and photos, fooling millions with the false impression of consensus.”

The Center for Humane Technology has set itself a mighty challenge. How are people going to change digital practices in the face of UX that is weaponized with dark patterns designed to keep us addicted? How are they going to take down the business model built on surveillance capitalism, which they refer to as the attention economy? If social media is addictive, what sort of twelve-step program are they going to come up with? How do you stay clean? They might want to check out a program for how to detox from data.

What the Center identifies as the ‘monetization of attention’ is, actually, the extraction of personal data. (Curiously, they do not use the phrase ‘big data’ or ‘your personal data’ anywhere in their website text.) This attention (or personal data) is extracted from our digital and analog behavior and is then used to profile and target us: to sell us lies and misinformation, or to worsen our depression by showing us advertising for make-up. And we are targeted even when we aren’t paying attention at all, like when we are walking down a street with mobile phones in our handbags. Information about us is being extracted to identify and profile us almost all the time because it is profitable.

How will the harmful effects of attention be arrested without a challenge to the monetization itself, and the values that sustain it? Your attention is valuable only because it is associated with an identity that exists in multiple geographies – financial, cartographic, intimate, socio-cultural, linguistic, religious, gendered, racialised – at the same time. These identities, and the attention that animates them, pop up across different devices, platforms, services and networks, making them identifiable and knowable, and thus easy to sell things to. Your identity is like cables and wires, and attention is the electricity that runs along the outside of, rather than in or through, these wires. For people who have already made fortunes by peddling the cables, wires, poles, and electricity, it sounds disingenuous not to confront the economic value underlying all of this.

The Center for Humane Technology constructs the problem in terms of addiction and therefore as one of individual attention. And while they acknowledge the importance of lobbying Congress and hardware companies (Apple and Microsoft will set us free, as if they don’t lock us into digital ecosystems and vie for our attention?), they emphasize a focus on individual action, be it that of tech workers or users. By invoking ‘addiction’ as a metaphor, they see the problem as being about individual attention and, eventually, individual salvation. Naming the co-founder of the Center, Harris, as the ‘conscience’ of Silicon Valley evokes a similar emphasis on individual rather than community, political, or structural dimensions to the attention economy and its dismantling, or restructuring. The addiction metaphor has been criticized for at least twenty years, most notably by Sherry Turkle, mostly because it is neither apt, nor is there enough evidence of how it works as an addiction. ‘Diet’ metaphors and relationships-with-food metaphors may work better, perhaps, to characterize our relationships with technology.

However, it is also about design, they say: design has been weaponized to create addiction. By invoking both addiction and design, there is a lack of structural critique in addressing how complex social problems, such as children’s well-being or democracy, come to be. According to the Center, if you resist UX by turning your attention away, you can start to make a change by hitting the tech business where it hurts. And if tech businesses cease to get our attention, then democracy, social relations, mental health and children’s well-being might be salvaged. Frankly, this accrues more power to UX and Design itself, and creates a sort of hallowed epistemology flowing from Design.

The assumption is that these social conditions and relationships somehow did not exist before social media, or have changed in the past ten years because of UX and its seductions. I believe this is both untrue and true. We do engage in politics and democracy through our devices and social media, and we do see the weakening of existing values and notions of governance; but there isn’t necessarily a direct causal relationship between them. This is not uniformly the case, nor evenly distributed around the world. There are muddied tracks around the bodies of these relationships.

Democracy as a design problem is not new. There has been considerable work over the past decade to enable citizens to use civic technology applications for transparency and accountability – to hold governments to account and promote democratic values and practices. It might help the Center to look at some lessons from around the world where democracy has been considered to be failing and technology was applied as a solution. To cherry-pick one relevant lesson (because this is a vast area of expertise and research that I cannot do justice to in this post): building a tool or a platform to foster democratic values or behavior does not necessarily scale. The lesson is that it doesn’t flow in the direction tech → democracy.

Applying this to the case of the Center, but in inverse: you cannot approach technology and social change from a deterministic perspective. Technology will amplify and accentuate some things: there will be more ‘voices’, but most likely the voices of those who are already powerful in society will be heard the loudest. Networks of influence offline matter to how messages are amplified online; swarms of hate-filled hashtags, memes, and bots traverse the fluid connections between on- and offline.

Fixing Facebook and Twitter is absolutely essential, but it is not the same as addressing the weakening of public institutions, xenophobia, poverty, the swing towards populism, the 2008 financial recession, or combinations of these. These efforts need to happen in conjunction with each other. Democracy is actually about relationships among people, movements, and longstanding practices of activism and organising in communities.

Trying to change digital behaviour is difficult and complicated because of how our political and personal expression, relationships of care, work and intimacy, and the maintenance of these relationships, are all bound up in a narrow set of platforms and devices. Disconnecting from the attention economy is more like a series of trade-offs and negotiations with yourself: a constant algebra of maintenance and digital choice-making, managing information flows across different activities and relationships, and some baseline digital hygiene.

It is hard to feel like you have arrived at a place of disconnection because of how perniciously deep these platforms and devices can go and how far they spread. There is something aspirational and athletic about trying to disconnect from the attention economy; it really is a bit like a religious practice. I know this because I’ve consciously practiced this disconnection for some years because of where I worked and what I did there. (I practice less now because my work has changed, but I am still conservative about what kinds of attention I give different platforms and services.)

Through this work I’ve been part of communities of technologists, security and privacy experts, activists, lawyers, policy advocates, human rights defenders, and artists, who construct their relationship with information and technologies in critical terms. These communities, highly creative and adept in our use of technology, understand the politics of information systems as continuous with the politics of governance, geopolitics, economics, history, gender, the law, and so on.

In these communities it is entirely normal to never know some of your friends on social media, to not assume they are on social media in the first place, or to refer to people by their online handles rather than their actual given names. Having a community of practice really is key to disconnection from the attention economy; and to supporting any other kind of personal de-addiction as well.

Many of us who practice disconnection from the big data attention economy use open source tools that are sometimes ugly because they don’t try to grab your attention (there is little investment in UX) but deliver a service instead; and we compartmentalize digital practices across different devices, identities, services and platforms. We may use social media, but selectively, and we don’t necessarily connect all of them with our actual identities and personal details. Many people I know actively try to get their immediate families to also disconnect from social media as a way of communicating; only a few succeed. The first thing to be hit is your personal relationships, as Palihapitiya notes in the video above.

It is entirely possible to live a Google-free life, as some of my ex-colleagues and friends do, but you make peace with the trade-offs and adjust your life accordingly. It’s like people who don’t drink Coca-Cola, or are vegetarian but not on the weekends, or would rather cycle than take transatlantic flights. An interesting point about Coca-Cola: in Berlin we have Afri-Cola and Fritz Cola (caffeinated and not; with and without sugar) as tasty and refreshing alternatives to Coca-Cola, which is also available in its many flavors. In some places there are structurally-afforded opportunities to be more flexible and make a wider range of choices. This is what we need from extractive technology industries – more control and more choices.

 

I Quit (2017). An installation by Thierry Fournier of video testimonials about why people quit their social media accounts and what happened next.

Despite the absence of a real structural critique of the attention problem, I believe the Center may be successful because they are well-placed in terms of money and influence. If the Center for Humane Technology actually worked to disarm UX, made it possible for us to move our personal networks to platforms of our choosing, baked ethics into how technology is made, and enabled regulation of the data trade and protections for users, then they might actually be disruptive. Let’s hope they succeed. In the meantime, the Center may find useful resources and ground-up expertise among those who have already been building movements for users to take control of their digital lives, such as:

Article 19; Bits of Freedom; Coding Rights; Committee to Protect Journalists; Cryptoparty; Data Detox Kit; Data Justice Lab; Derechos Digitales; Digital Rights Foundation; Electronic Frontier Foundation; Freedom of the Press Foundation; The Glass Room; Gobo.Social; Internet Freedom Festival; Mozilla Internet Health Project; Privacy International; Responsible Data Project; Security in a Box; Share Lab; Simply Secure; Surveillance Self Defence Kit; Tactical Technology Collective; Take Back The Tech.

Maya Indira Ganesh has been a feminist information-activist for the past decade, and most of that time was spent at Tactical Technology Collective. She lives in Berlin and is working on a PhD about the testing and standardization of machine intelligence. She does not drink Coca-Cola. She can be reached on Twitter at @mayameme.