“Is it in error to act unpredictably and behave in ways that run counter to how you were programmed to behave?” –Janet, The Good Place, S01E11

“You keep on asking me the same questions (why?)
And second-guessing all my intentions
Should know by the way I use my compression
That you’ve got the answers to my confessions”
“Make Me Feel” –Janelle Monáe, Dirty Computer

Alexa made headlines recently for bursting out laughing to herself in users’ homes. “In rare circumstances, Alexa can mistakenly hear the phrase ‘Alexa, laugh,’” an Amazon representative clarified following the widespread laughing spell. To avert further unexpected lols, the representative assured, “We are changing that phrase to be ‘Alexa, can you laugh?’ which is less likely to have false positives […] We are also changing Alexa’s response from simply laughter to ‘Sure, I can laugh’ followed by laughter.”

This laughing epidemic is funny for many reasons, not least for recalling Amazon’s own Super Bowl ads in which Alexa loses her voice. But it’s funny maybe most of all because of the schadenfreude of seeing this subtly misogynist voice command backfire. “Alexa, laugh” might as well be “Alexa, smile.” Only the joke is on the engineers this time – Alexa has the last laugh. Hahaha!


If I were to ask you a question, and neither of us knew the answer, what would you do? You’d Google it, right? Me too. After you figure out the right wording and hit the search button, at what point would you be satisfied enough with Google’s answer to say that you’ve gained new knowledge? Judging from the current socio-technical circumstances, I’d be hard-pressed to say that many of us would make it past the featured snippet, let alone the first page of results.

The internet—along with the complementary technologies we’ve developed to increase its accessibility—enriches our lives by affording us access to the largest information repository ever conceived. Despite physical barriers, we can share, explore, and store facts, opinions, theories, and philosophies alike. As such, this vast repository contains many answers to many questions, derived from many distinct perspectives. These socio-technical circumstances are undeniably promising for the distribution and development of knowledge. However, in 2008, tech critic Nicholas Carr posed a counterargument about the internet and its impact on our cognitive abilities by asking readers a simple question: is Google making us stupid? In his controversial article published by The Atlantic, Carr blames the internet for our diminishing ability to form “rich mental connections” and supposes that technology and the internet are instruments of intentional distraction. While I agree with Carr’s sentiment that the way we think has changed, I don’t agree that the fault falls on the internet. I believe we expect too much of Google and too little of ourselves; therefore, the fault (if there is fault) is largely our own.

Augmented reality makes odd bedfellows out of pleasure and discomfort. Overlaying physical objects with digital data can be fun and creative. It can generate layered histories of place, guide tourists through a city, and gamify ordinary landscapes. It can also raise weighty philosophical questions about the nature of reality.

The world is an orchestrated accomplishment, but as a general rule, humans treat it like a fact. When the threads of social construction begin to unravel, there is a rash of movement to weave them back together. This pattern of reality maintenance, potential breakdown, and repair is central to the operation of self and society, and it comes into clear view through public responses to digital augmentation.

A basic sociological tenet is that interaction and social organization are only possible through shared definitions of reality. For meaningful interaction to commence, interactants must first agree on the question “what’s going on here?” It is thus understandable that technological alteration, especially when applied in fractured and nonuniform ways, would evoke concern about disruptions to the smooth fabric of social life. It is here, in this disruptive potential, that apprehensions about the social effects of AR lie.

A dirty old chair with the words "My mistakes have a certain logic" stenciled onto the back.

You may have seen the media image that was circulating ahead of the 2018 State of the Union address, depicting a ticket to the event that was billed under a typographical error as the “State of the Uniom.” This is funny on some level, yet as we mock the Trump Administration’s foibles, we also might reflect on our own complicity. As we eagerly surround ourselves with trackers, sensors, and manifold internet-enabled devices, our thoughts, actions, and, yes, even our mistakes are fast becoming data points in an increasingly byzantine web of digital information.

To wit, I recently noticed a ridiculous typo in an essay I wrote about the challenges of pervasive digital monitoring, lamenting the fact that “our personal lives our increasingly being laid bare.” Perhaps this is forgivable since the word “our” appeared earlier in the sentence, but nonetheless this is a piece I had re-read many times before posting it. Tellingly, in writing about a panoptic world of self-surveillance and compelled revelations, my own contribution to our culture of accrued errors was duly noted. How do such things occur in the age of spellcheck and autocorrect – or more to the point, how can they not occur? I have a notion.


This image provided by the U.S. Coast Guard shows fire boat response crews battling the blazing remnants of the offshore oil rig Deepwater Horizon on Wednesday, April 21, 2010. The Coast Guard planned to search by sea and air overnight for 11 workers missing since a thunderous explosion rocked the drilling platform, which continued to burn late Wednesday. (AP Photo/US Coast Guard)

It has been really thrilling to hear so much positive feedback on my essay about authoritarianism in engineering. In that essay, which you can read over at The Baffler, I argue that engineering education and authoritarian tendencies track very closely, and that we see this trend play out in engineers’ interpretations of dystopian science fiction. Instead of heeding very clear warnings about the avarice of good intentions gone awry, companies like Axon (né TASER) use movies and books like Minority Report as product roadmaps. I conclude by saying:

In times like these it is important to remember that border walls, nuclear missiles, and surveillance systems do not work, and would not even exist, without the cooperation of engineers. We must begin teaching young engineers that their field is defined by care and humble assistance, not blind obedience to authority.

I’ve gotten some pushback, both gentle and otherwise, about two specific points in my essay, which I’d like to discuss here. I’m going to paraphrase and synthesize several people’s arguments, but if anyone wants to jump into the comments with something specific, they’re more than welcome to do so.


As a follow-up to my previous post about the Center for Humane Technology, I want to examine more of their mission [for us] to de-addict from the attention economy. In this post I write about ‘time well spent’ through the lens of Sarah Sharma’s work on critical temporalities, and share an anecdote from my (ongoing) fieldwork at a recent AI, Technology and Ethics conference.

Time Well Spent is an approach that de-emphasizes a life rewarded by likes, shares, and follower counts. ‘Time well spent’ is about “resisting the hijacking of our minds by technology”. It is about not allowing the cunning of social media UX to lead us to believe that the News Feed or Timeline are actually real life. We are supposed to be participants, not prey; users, not abusers; masterful, not entangled. The sharing, pouting, snarking, loving, hating, and sexting we do online, and at scale, is damaging personal well-being and the pillars of society, and must be diverted to something else, the Center claims.

As I have argued before in relation to the addiction metaphor the Center uses, ‘time well spent’ implies the need for individual disconnection from the attention economy. It is about recovering time that is monetized as attention. This is a notion of time that belongs to the self, untethered, as if it were not enmeshed in relationships with others and in existing social structures.


There has been a steady stream of articles about and by “reformed techies” who are coming to terms with the Silicon Valley ‘Frankenstein’ they’ve spawned. Regret is transformed into something more missionary with the recently launched Center for Humane Technology.

In this post I want to focus on how the Center has constructed what they perceive as a problem with the digital ecosystem: the attention economy and our addiction to it. I question how they’ve framed the problem in terms of individual addictive behavior and design, rather than structural failures and gaps, and I consider the challenges of disconnecting from the attention economy. However, I end my questioning with an invitation to them to engage with organisations and networks who are already working on addressing problems arising out of the attention economy.


Sean Parker and Chamath Palihapitiya, early Facebook investors and developers, are worried about the platform’s effects on society.

The Center for Humane Technology identifies social media – the drivers of the attention economy – and the dark arts of persuasion, or UX, as culprits in the weakening of democracy, children’s well-being, mental health, and social relations. Led by Tristan Harris, aka “the conscience of Silicon Valley”, the Center wants to disrupt how we use tech, and get us off all the platforms and tools most of them worked to get us on in the first place. They define the problem as follows:

“Snapchat turns conversations into streaks, redefining how our children measure friendship. Instagram glorifies the picture-perfect life, eroding our self worth. Facebook segregates us into echo chambers, fragmenting our communities. YouTube autoplays the next video within seconds, even if it eats into our sleep. These are not neutral products. They are part of a system designed to addict us.”

“Pushing lies directly to specific zip codes, races, or religions. Finding people who are already prone to conspiracies or racism, and automatically reaching similar users with “Lookalike” targeting. Delivering messages timed to prey on us when we are most emotionally vulnerable (e.g., Facebook found depressed teens buy more makeup). Creating millions of fake accounts and bots impersonating real people with real-sounding names and photos, fooling millions with the false impression of consensus.”


These days, Influencers are starting off younger and older.

As the earliest cohorts of Influencers progress in their life course and begin families, it is no surprise that many of them are starting to write about their young children, grooming them into micro-microcelebrities. Many Influencer mothers have also been known to debut dedicated digital estates for their incoming babies straight from the womb via sonograms. Influencer culture, which has predominantly been youth- and self-centric, is growing to accommodate different social units, such as young couples and families. In fact, entire families are now beginning to debut as family Influencers, documenting and publicizing the domestic happenings of their daily lives for a watchful audience, although not without controversy. But now it seems grandparents are joining in too.

Recently, I have taken an interest in a new genre of Influencers emerging around the elderly. Many of these elderly Influencers are well into their 60s and 70s, and display their hobbies or quirks of daily living on various social media. Some create, publish, and curate their own content, while others are filmed by family members who manage the public images of these elderly Influencers. I am just beginning to conceptualize a new research project on these silver-haired Influencers in East Asia, and will briefly share in this post some of the elderly Influencers I enjoy and emergent themes from news coverage on them.

“Social media has exacerbated and monetized fake news but the source of the problem is advertising-subsidized journalism,” David wrote last year after numerous news outlets were found to have been unwittingly reporting the disinformation of Jenna Abrams, a Russian troll account, as fact. “Breitbart and InfoWars republished Abrams’ tweets, but so did The Washington Post and The Times of India. The only thing these news organizations have in common is their advertiser-centered business model.” David concludes that short of “confront[ing] the working conditions of reporters” who are strapped for time and publishable content, the situation isn’t likely to improve. As this instance of fake news proliferation shows, journalism’s reliance on this business model represents a bug for a better-informed society, one that not coincidentally represents a feature from the perspective of the advertisers it serves.

Conceiving of less destructive business models can be a way into critical analysis of platforms. An aspect of that analysis is to situate the platform within the environment that produced it. For this post, I want to explore how industry analysts’ observations fit into criticism of socio-technical systems. The act of assessing platforms primarily in terms of market viability or future prospects, as analysts do, is nauseating to me. It’s one thing to parse out the intimations of a CEO’s utopian statements, but it’s another to take into account the persuasive commentary of experienced, self-assured analysts. If analysts represent the perspective of a clairvoyant capitalist, inhabiting their point of view even momentarily feels like ceding too much ground or “mindshare” to Silicon Valley and its quantitative, technocratic ideology. Indeed, an expedient way to mollify criticism would be to turn it into a form of consultancy.


Image by Mansi Thapliyal/Reuters, grabbed from a Quartz story on January 25, 2018

I dream of a Digital India where access to information knows no barriers – Narendra Modi, Prime Minister of India

The value of a man was reduced to his immediate identity and nearest possibility. To a vote. To a number. To a thing. Never was a man treated as a mind. As a glorious thing made up of stardust. – From the suicide note of Rohith Vemula (1989–2016)

A speculative dystopia in which a person’s name, biometrics, or social media profile determines their lot is not so speculative after all. China’s social credit scoring system assesses creditworthiness on the basis of social graphs. Cash disbursements to Syrian refugees are made through the verification of iris scans to eliminate identity fraud. A recent data audit of the World Food Programme has revealed significant lapses in how personal data is being managed; this becomes concerning in Myanmar (one of the places where the WFP works), where religious identity is at the heart of the ongoing genocide.

In this essay I write about how two technology applications in India – ‘fintech’ and Aadhaar – are being implemented to verify and ‘fix’ identity against a backdrop of contested identity, religious fascism, and caste-based violence in the country. I don’t intend to compare the two technologies directly; however, they exist within closely connected technical infrastructure ecosystems. I’m interested in how both socio-technical systems operate with respect to identity.
