Colin Koopman, an associate professor of philosophy and director of new media and culture at the University of Oregon, wrote an opinion piece in the New York Times last month that situated the recent Cambridge Analytica debacle within a larger history of data ethics. Such work is crucial because, as Koopman argues, we are increasingly living with the consequences of unaccountable algorithmic decision-making in our politics: that "such threats to democracy are now possible," he writes, "is due in part to the fact that our society lacks an information ethics adequate to its deepening dependence on data." It shouldn't be a surprise that we are facing massive, unprecedented privacy problems when we let digital technologies far outpace discussions around ethics or care for data. more...
The last couple of weeks have been rough for sex workers on the internet. Adult content creators are reporting that their porn videos are disappearing from Google Drive; Microsoft has announced that it will prohibit profanity and nudity on Skype; Patreon has changed its terms of service to exclude pornography; Facebook is censoring events related to sex – even sex ed – by refusing to allow paid promotion (I recently gave a Dirty Talk workshop for a Pittsburgh-based sex-positive sex education collective, and their ads were rejected); Twitter is shadowbanning sex workers at alarming rates; and several platforms related to erotic services have shut down entirely: Craigslist personal ads, several sub-Reddits, The Erotic Review, MyRedBook, CityVibe, Providingsupport, to name a few.
Much of this is a reaction to the passage of FOSTA (Fight Online Sex Trafficking Act) in the House, and SESTA (Stop Enabling Sex Traffickers Act) in the Senate. These bills are a response to the government's inability to prosecute trafficking cases against the online classifieds site Backpage (a competitor to Craigslist known for being more hospitable to sex workers). These bills would amend Section 230 of The Communications Decency Act of 1996, holding websites liable for content posted by third parties and making it easy for plaintiffs and state attorneys general to sue websites that "knowingly assist, facilitate, or support sex trafficking" (a phrase that the bill does not clearly define and often seems to conflate with prostitution more generally). In other words, once these bills are signed into law, Craigslist, for example, could be sued because of something that a user posts, if an attorney general from any of the 50 states decides to interpret it as vaguely related to sex trafficking. And many proponents of FOSTA/SESTA seem to view all sex work as equivalent to sex trafficking. more...
“Is it in error to act unpredictably and behave in ways that run counter to how you were programmed to behave?” –Janet, The Good Place, S01E11
“You keep on asking me the same questions (why?)
And second-guessing all my intentions
Should know by the way I use my compression
That you’ve got the answers to my confessions”
“Make Me Feel” –Janelle Monáe, Dirty Computer
Alexa made headlines recently for bursting out laughing to herself in users' homes. "In rare circumstances, Alexa can mistakenly hear the phrase 'Alexa, laugh,'" an Amazon representative clarified following the widespread laughing spell. To avert further unexpected lols, the representative assured, "We are changing that phrase to be 'Alexa, can you laugh?' which is less likely to have false positives […] We are also changing Alexa's response from simply laughter to 'Sure, I can laugh' followed by laughter."
This laughing epidemic is funny for many reasons, not least for recalling Amazon’s own Super Bowl ads of Alexa losing her voice. But it’s funny maybe most of all because of the schadenfreude of seeing this subtly misogynist voice command backfire. “Alexa, laugh” might as well be “Alexa, smile.” Only the joke is on the engineers this time – Alexa has the last laugh. Hahaha!
If I were to ask you a question, and neither of us knew the answer, what would you do? You’d Google it, right? Me too. After you figure out the right wording and hit the search button, at what point would you be satisfied enough with Google’s answer to say that you’ve gained new knowledge? Judging from the current socio-technical circumstances, I’d be hard-pressed to say that many of us would make it past the featured snippet, let alone the first page of results.
The internet—along with the complementary technologies we've developed to increase its accessibility—enriches our lives by affording us access to the largest information repository ever conceived. Despite physical barriers, we can share, explore, and store facts, opinions, theories, and philosophies alike. As such, this vast repository contains many answers to many questions derived from many distinct perspectives. These socio-technical circumstances are undeniably promising for the distribution and development of knowledge. However, in 2008, tech critic Nicholas Carr posed a counterargument about the internet and its impact on our cognitive abilities by asking readers a simple question: is Google making us stupid? In his controversial article published by The Atlantic, Carr blames the internet for our diminishing ability to form "rich mental connections," and supposes that technology and the internet are instruments of intentional distraction. While I agree with Carr's sentiment that the way we think has changed, I don't agree that the fault falls on the internet. I believe we expect too much of Google and less of ourselves; therefore, the fault (if there is fault) is largely our own. more...
Augmented reality makes odd bedfellows out of pleasure and discomfort. Overlaying physical objects with digital data can be fun and creative. It can generate layered histories of place, guide tourists through a city, and gamify ordinary landscapes. It can also raise weighty philosophical questions about the nature of reality.
The world is an orchestrated accomplishment, but as a general rule, humans treat it like a fact. When the threads of social construction begin to unravel, there is a rash of movement to weave them back together. This pattern of reality maintenance, potential breakdown and repair is central to the operation of self and society and it comes into clear view through public responses to digital augmentation.
A basic sociological tenet is that interaction and social organization are only possible through shared definitions of reality. For meaningful interaction to commence, interactants must first agree on the question of "what's going on here?" It is thus understandable that technological alteration, especially when applied in fractured and nonuniform ways, would evoke concern about disruptions to the smooth fabric of social life. It is here, in this disruptive potential, that apprehensions about the social effects of AR lie. more...
You may have seen the media image that was circulating ahead of the 2018 State of the Union address, depicting a ticket to the event that was billed under a typographical error as the “State of the Uniom.” This is funny on some level, yet as we mock the Trump Administration’s foibles, we also might reflect on our own complicity. As we eagerly surround ourselves with trackers, sensors, and manifold devices with internet-enabled connections, our thoughts, actions, and, yes, even our mistakes are fast becoming data points in an increasingly Byzantine web of digital information.
To wit, I recently noticed a ridiculous typo in an essay I wrote about the challenges of pervasive digital monitoring, lamenting the fact that "our personal lives our increasingly being laid bare." Perhaps this is forgivable since the word "our" appeared earlier in the sentence, but nonetheless this is a piece I had re-read many times before posting it. Tellingly, in writing about a panoptic world of self-surveillance and compelled revelations, my own contributions to our culture of accrued errors were duly noted. How do such things occur in the age of spellcheck and autocorrect – or more to the point, how can they not occur? I have a notion. more...
It has been really thrilling to hear so much positive feedback about my essay on authoritarianism in engineering. In that essay, which you can read over at The Baffler, I argue that engineering education and authoritarian tendencies track closely together, and that we see this play out in engineers' interpretations of dystopian science fiction. Instead of heeding very clear warnings about good intentions gone awry, companies like Axon (né TASER) use movies and books like Minority Report as product roadmaps. I conclude by saying:
In times like these it is important to remember that border walls, nuclear missiles, and surveillance systems do not work, and would not even exist, without the cooperation of engineers. We must begin teaching young engineers that their field is defined by care and humble assistance, not blind obedience to authority.
I've gotten some pushback, both gentle and otherwise, about two specific points in my essay which I'd like to discuss here. I'm going to paraphrase and synthesize several people's arguments, but if anyone wants to jump into the comments with something specific they're more than welcome to do so.
As a follow up to my previous post about the Center for Humane Technology, I want to examine more of their mission [for us] to de-addict from the attention economy. In this post I write about ‘time well spent’ through the lens of Sarah Sharma’s work on critical temporalities; and share an anecdote from my (ongoing) fieldwork at a recent AI, Technology and Ethics conference.
Time Well Spent is an approach that de-emphasizes a life rewarded by likes, shares and follower counts. 'Time well spent' is about "resisting the hijacking of our minds by technology". It is about not allowing the cunning of social media UX to lead us to believe that the News Feed or Timeline are actually real life. We are supposed to be participants, not prey; users, not abusers; masterful, not entangled. The sharing, pouting, snarking, loving, hating and sexting we do online, and at scale, is damaging personal well-being and the pillars of society, and must be diverted to something else, the Center claims.
As I have argued before in relation to the addiction metaphor the Center uses, ‘time well spent’ implies the need for individual disconnection from the attention economy. It is about recovering time that is monetized as attention. This is a notion of time that belongs to the self, un-tethered, as if it were not enmeshed in relationships with others and in existing social structures.
There has been a steady stream of articles about and by "reformed techies" who are coming to terms with the Silicon Valley 'Frankenstein' they've spawned. Regret is transformed into something more missionary with the recently launched Center for Humane Technology.
In this post I want to focus on how the Center has constructed what they perceive as a problem with the digital ecosystem: the attention economy and our addiction to it. I question how they've constructed the problem in terms of individual addictive behavior and design, rather than structural failures and gaps, and I question the challenges of disconnecting from the attention economy. However, I end my questioning with an invitation to them to engage with organisations and networks who are already working on addressing problems arising out of the attention economy.
Sean Parker and Chamath Palihapitiya, early Facebook investors and developers, are worried about the platform’s effects on society.
The Center for Humane Technology identifies social media – the drivers of the attention economy – and the dark arts of persuasion, or UX, as culprits in the weakening of democracy, children's well-being, mental health and social relations. Led by Tristan Harris, aka "the conscience of Silicon Valley", the Center wants to disrupt how we use tech, and get us off all the platforms and tools most of them worked to get us on in the first place. They define the problem as follows:
“Snapchat turns conversations into streaks, redefining how our children measure friendship. Instagram glorifies the picture-perfect life, eroding our self worth. Facebook segregates us into echo chambers, fragmenting our communities. YouTube autoplays the next video within seconds, even if it eats into our sleep. These are not neutral products. They are part of a system designed to addict us.”
"Pushing lies directly to specific zip codes, races, or religions. Finding people who are already prone to conspiracies or racism, and automatically reaching similar users with 'Lookalike' targeting. Delivering messages timed to prey on us when we are most emotionally vulnerable (e.g., Facebook found depressed teens buy more makeup). Creating millions of fake accounts and bots impersonating real people with real-sounding names and photos, fooling millions with the false impression of consensus."
These days, Influencers are starting off younger and older.
As the earliest cohorts of Influencers progress in their life course and begin families, it is no surprise that many of them are starting to write about their young children, grooming them into micro-microcelebrities. Many Influencer mothers have also been known to debut dedicated digital estates for their incoming babies straight from the womb via sonograms. Influencer culture that has predominantly been youth- and self-centric is growing to accommodate different social units, such as young couples and families. In fact, entire families are now beginning to debut as family Influencers, documenting and publicizing the domestic happenings of their daily lives for a watchful audience, although not without controversy. But now it seems grandparents are joining in too.
Recently, I have taken an interest in a new genre of Influencers emerging around the elderly. Many of these elderly Influencers are well into their 60s and 70s, and display their hobbies or quirks of daily living on various social media. Some create, publish, and curate their own content, while others are filmed by family members who manage the public images of these elderly Influencers. I am just beginning to conceptualize a new research project on these silver-haired Influencers in East Asia, and will briefly share in this post some of the elderly Influencers I enjoy and emergent themes from news coverage on them. more...