Augmented reality makes odd bedfellows of pleasure and discomfort. Overlaying physical objects with digital data can be fun and creative. It can generate layered histories of place, guide tourists through a city, and gamify ordinary landscapes. It can also raise weighty philosophical questions about the nature of reality.

The world is an orchestrated accomplishment, but as a general rule, humans treat it like a fact. When the threads of social construction begin to unravel, there is a rush of effort to weave them back together. This pattern of reality maintenance, potential breakdown, and repair is central to the operation of self and society, and it comes into clear view through public responses to digital augmentation.

A basic sociological tenet is that interaction and social organization are only possible through shared definitions of reality. For meaningful interaction to commence, interactants must first agree on the question of “what’s going on here?”. It is thus understandable that technological alteration, especially when applied in fractured and nonuniform ways, would evoke concern about disruptions to the smooth fabric of social life. It is here, in this disruptive potential, that lie apprehensions about the social effects of AR. more...

A dirty old chair with the words "My mistakes have a certain logic" stenciled onto the back.

You may have seen the media image that was circulating ahead of the 2018 State of the Union address, depicting a ticket to the event that was billed under a typographical error as the “State of the Uniom.” This is funny on some level, yet as we mock the Trump Administration’s foibles, we also might reflect on our own complicity. As we eagerly surround ourselves with trackers, sensors, and manifold devices with internet-enabled connections, our thoughts, actions, and, yes, even our mistakes are fast becoming data points in an increasingly Byzantine web of digital information.

To wit, I recently noticed a ridiculous typo in an essay I wrote about the challenges of pervasive digital monitoring, lamenting the fact that “our personal lives our increasingly being laid bare.” Perhaps this is forgivable since the word “our” appeared earlier in the sentence, but nonetheless this is a piece I had re-read many times before posting it. Tellingly, in writing about a panoptic world of self-surveillance and compelled revelations, my own contribution to our culture of accrued errors was duly noted. How do such things occur in the age of spellcheck and autocorrect – or more to the point, how can they not occur? I have a notion. more...

Fire boat response crews battle the blazing remnants of the offshore oil rig Deepwater Horizon, Wednesday, April 21, 2010. The Coast Guard planned to search by sea and air overnight for 11 workers missing since a thunderous explosion rocked the drilling platform, which continued to burn late Wednesday. (AP Photo/U.S. Coast Guard)

It has been really thrilling to hear so much positive feedback on my essay about authoritarianism in engineering. In that essay, which you can read over at The Baffler, I argue that engineering education and authoritarian tendencies track one another closely, and that we see this trend play out in engineers’ interpretations of dystopian science fiction. Instead of heeding very clear warnings about good intentions gone awry, companies like Axon (née TASER) use movies and books like Minority Report as product roadmaps. I conclude by saying:

In times like these it is important to remember that border walls, nuclear missiles, and surveillance systems do not work, and would not even exist, without the cooperation of engineers. We must begin teaching young engineers that their field is defined by care and humble assistance, not blind obedience to authority.

I’ve gotten some pushback, both gentle and otherwise, about two specific points in my essay, which I’d like to discuss here. I’m going to paraphrase and synthesize several people’s arguments, but if anyone wants to jump into the comments with something specific, they’re more than welcome to do so.

more...

As a follow-up to my previous post about the Center for Humane Technology, I want to examine more of their mission [for us] to de-addict from the attention economy. In this post I write about ‘time well spent’ through the lens of Sarah Sharma’s work on critical temporalities, and share an anecdote from my (ongoing) fieldwork at a recent AI, Technology and Ethics conference.

Time Well Spent is an approach that de-emphasizes a life rewarded by likes, shares, and follower counts. ‘Time well spent’ is about “resisting the hijacking of our minds by technology”. It is about not allowing the cunning of social media UX to lead us to believe that the News Feed or Timeline is actually real life. We are supposed to be participants, not prey; users, not abusers; masterful, not entangled. The sharing, pouting, snarking, loving, hating, and sexting we do online, and at scale, is damaging personal well-being and the pillars of society, and must be diverted to something else, the Center claims.

As I have argued before in relation to the addiction metaphor the Center uses, ‘time well spent’ implies the need for individual disconnection from the attention economy. It is about recovering time that is monetized as attention. This is a notion of time that belongs to the self, untethered, as if it were not enmeshed in relationships with others and in existing social structures.

more...

There has been a steady stream of articles about and by “reformed techies” who are coming to terms with the Silicon Valley ‘Frankenstein’ they’ve spawned. Regret is transformed into something more missionary with the recently launched Center for Humane Technology.

In this post I want to focus on how the Center has constructed what they perceive as a problem with the digital ecosystem: the attention economy and our addiction to it. I question how they’ve constructed the problem in terms of individual addictive behavior and design, rather than structural failures and gaps, and I consider the challenges of disconnecting from the attention economy. I end my questioning, however, with an invitation to them to engage with organisations and networks who are already working on addressing problems arising out of the attention economy.

Sean Parker and Chamath Palihapitiya, early Facebook executives, are worried about the platform’s effects on society.

The Center for Humane Technology identifies social media – the drivers of the attention economy – and the dark arts of persuasion, or UX, as culprits in the weakening of democracy, children’s well-being, mental health, and social relations. Led by Tristan Harris, aka “the conscience of Silicon Valley”, the Center wants to disrupt how we use tech, and get us off all the platforms and tools most of them worked to get us on in the first place. They define the problem as follows:

“Snapchat turns conversations into streaks, redefining how our children measure friendship. Instagram glorifies the picture-perfect life, eroding our self worth. Facebook segregates us into echo chambers, fragmenting our communities. YouTube autoplays the next video within seconds, even if it eats into our sleep. These are not neutral products. They are part of a system designed to addict us.”

“Pushing lies directly to specific zip codes, races, or religions. Finding people who are already prone to conspiracies or racism, and automatically reaching similar users with “Lookalike” targeting. Delivering messages timed to prey on us when we are most emotionally vulnerable (e.g., Facebook found depressed teens buy more makeup). Creating millions of fake accounts and bots impersonating real people with real-sounding names and photos, fooling millions with the false impression of consensus.”

more...

These days, Influencers are starting off younger and older.

As the earliest cohorts of Influencers progress through their life course and begin families, it is no surprise that many of them are starting to write about their young children, grooming them into micro-microcelebrities. Many Influencer mothers have also been known to debut dedicated digital estates for their incoming babies straight from the womb via sonograms. Influencer culture, which has predominantly been youth- and self-centric, is growing to accommodate different social units, such as young couples and families. In fact, entire families are now beginning to debut as family Influencers, documenting and publicizing the domestic happenings of their daily lives for a watchful audience, although not without controversy. But now it seems grandparents are joining in too.

Recently, I have taken an interest in a new genre of Influencers emerging around the elderly. Many of these elderly Influencers are well into their 60s and 70s, and display their hobbies or quirks of daily living on various social media. Some create, publish, and curate their own content, while others are filmed by family members who manage the public images of these elderly Influencers. I am just beginning to conceptualize a new research project on these silver-haired Influencers in East Asia, and in this post I will briefly share some of the elderly Influencers I enjoy and emergent themes from news coverage of them. more...

“Social media has exacerbated and monetized fake news but the source of the problem is advertising-subsidized journalism,” David wrote last year after numerous news outlets were found to have been unwittingly reporting the disinformation of Jenna Abrams, a Russian troll account, as fact. “Breitbart and InfoWars republished Abrams’ tweets, but so did The Washington Post and The Times of India. The only thing these news organizations have in common is their advertiser-centered business model.” David concludes that short of “confront[ing] the working conditions of reporters” who are strapped for time and publishable content, the situation isn’t likely to improve. As this instance of fake news proliferation shows, journalism’s reliance on this business model represents a bug for a better-informed society, one that not coincidentally represents a feature from the perspective of the advertisers it serves.

Conceiving of less destructive business models can be a way into critical analysis of platforms. An aspect of that analysis is to situate the platform within the environment that produced it. For this post, I want to explore how industry analysts’ observations fit into criticism of socio-technical systems. The act of assessing platforms primarily in terms of market viability or future prospects, as analysts do, is nauseating to me. It’s one thing to parse out the intimations of a CEO’s utopian statements, but it’s another to take into account the persuasive commentary of experienced, self-assured analysts. If analysts represent the perspective of a clairvoyant capitalist, inhabiting their point of view even momentarily feels like ceding too much ground or “mindshare” to Silicon Valley and its quantitative, technocratic ideology. Indeed, an expedient way to mollify criticism would be to turn it into a form of consultancy.

more...

Image by Mansi Thapliyal/Reuters, grabbed from a Quartz story on January 25, 2018

I dream of a Digital India where access to information knows no barriers – Narendra Modi, Prime Minister of India

The value of a man was reduced to his immediate identity and nearest possibility. To a vote. To a number. To a thing. Never was a man treated as a mind. As a glorious thing made up of stardust. – From the suicide note of Rohith Vemula, 1989–2016.

A speculative dystopia in which a person’s name, biometrics or social media profile determine their lot is not so speculative after all. China’s social credit scoring system assesses creditworthiness on the basis of social graphs. Cash disbursements to Syrian refugees are made through the verification of iris scans to eliminate identity fraud. A recent data audit of the World Food Program has revealed significant lapses in how personal data is being managed; this becomes concerning in Myanmar (one of the places where the WFP works) where religious identity is at the heart of the ongoing genocide.

In this essay I write about how two technology applications in India – ‘fintech’ and Aadhaar – are being implemented to verify and ‘fix’ identity against the backdrop of contested identities, religious fascism, and caste-based violence in the country. I don’t intend to compare the two technologies directly; however, they exist within closely connected technical infrastructure ecosystems. I’m interested in how both socio-technical systems operate with respect to identity.

more...

cw: suicide

This isn’t the essay I originally set out to write. That essay is sitting open next to this one, unfinished. But in being unable to finish that piece, I was inspired to write this one.

In January 2013, web developer and activist Aaron Swartz hanged himself in his New York apartment. At the time, Swartz was facing serious jail time for using a guest account on MIT servers to download millions of academic papers from the online journal repository JSTOR. Swartz, who was also integral to the development of the RSS web feed format and the news aggregation site Reddit, sought to make publicly available the academic content that JSTOR held behind its subscription paywalls.

more...

An artist’s rendering of a possible future Amazon HQ2 in Chicago. Image from the Chicago Tribune.

The Intercept’s Zaid Jilani asked a really good question earlier today: Why Don’t the 20 Cities on Amazon’s HQ2 Shortlist Collectively Bargain Instead of Collectively Beg? Amazon is looking for a place to put its second headquarters, and cities have fallen over each other to provide some startlingly desperate concessions to lure the tech giant. Some of the concessions, like Chicago’s offer to essentially engage in wage theft by taking all the income tax collected from employees and handing it back to Amazon, make it unclear what these cities actually gain by hosting the company. The reason that city mayors will never collectively bargain on behalf of their citizens is twofold: 1) America lacks an inter-city governance mechanism that prevents cities from being blackballed by corporate capital, and 2) most big city mayors are corrupt as hell and don’t care about you.

In 1987 urban sociologists John Logan and Harvey Molotch put forward the “Growth Machine” theory to explain why cities do not collectively bargain and instead compete with one another in a race to the bottom to see which city can concede the most taxes for the least gain. The theory is rather straightforward: a city may have one or two inherent competitive advantages that no other city has, but beyond that it can only offer tax breaks. Maybe you’ve got a deep-water port that big container ships can use, or you’re situated at the only pass in a mountain range. Other than that, location is completely fungible. All that’s left is tax policy and land grants. more...