
I recently updated my Mac’s operating system. The new OS, named Sierra, has a few new features that I was excited to try, but the biggest one was the ability to use Siri to search my files and launch applications. Sierra was bringing me one step closer to the human-computer interaction fantasy that was set up for me at an early age when I watched Picard, La Forge, and Data solve a complicated problem with the ship’s computer. In those scenes they’d ask fairly complicated questions, ask follow-up questions with pronouns and prepositions that referenced the first question, and finish their 24th-century Googling session with some plain-language query like “anything else?” Judging by the demo I had seen on the Apple website, it seemed like I could have just that conversation. I clicked the waveform icon, saw the window pop up indicating that my very own ship’s computer was listening and… nothing.

The problem wasn’t with Siri, it was with me. I had frozen. It was as if a spinning rainbow beach ball was stuck in my mouth. I was unable to complete a simple sentence. I closed the window and tried again:

Show me files that I created on… Damnit

Sorry I did not get that.

Show me files from… That I made on Friday.

Here are some of the files you created on Friday.

In all honesty, I should have seen this coming. I frequently use Siri to set reminders or to put things in my calendar, but I always use my digital assistant in secret: the moment between getting in the car and starting the engine, alone at my desk, or (sorry) while I am using the bathroom. It works almost every time, but when something goes wrong, it is my commands, not Siri’s execution, that are left wanting. I pause because I forget the name of the place I need directions to, or I stumble when it comes to saying exactly what reminder I want to set. There are several Siri-dictated reminders sitting in my phone right now that don’t want me to forget to “bring it back with you before you go” or “to write email in the morning.” I clam up when I know my devices are listening.

It gets worse when other humans are listening to my awkward commands. The thought of talking to an algorithm in the presence of fellow humans is about as enticing to me as reciting a poem I wrote in high school or explaining a joke that just fell flat. Here I was thinking it was the technology that had to catch up to my cyborg dreams, but now it seems that the flesh is the half not willing.


Sherry Turkle has been very successful lately. She is still touring the country giving high-profile talks, and her best-selling books are assigned in college classrooms all across the country. The quotes on her books’ dust jackets are from respected authors and thinkers. She is a senior faculty member at an elite east coast university. She is, by all accounts, a popular thinker with an ostensibly left-of-center perspective, one who still pushes audiences to consider the ramifications of their actions. Turkle, through her critical analysis of social media and portable digital devices, wants people to think twice about the unintended consequences of their actions: how individual choices often aggregate into undesirable interpersonal dynamics. This is important work worthy of public debate but, precisely because it is so important, it is worth asking who benefits from Turkle’s particular brand of mindfulness.

Critiques of Turkle are too few, but the ones that exist are spot on. Focusing on individuals’ technology use, according to Nathan Jurgenson, not only turns the subjects of Turkle’s analysis into broken subhumans, it also gives the reader the opportunity to feel superior simply by fretting over when and how a device comes out of their pocket. Her work also misses, according to Zeynep Tufekci and Alexandra Samuel, all the ways social media reclaims some form of sociality in a world dominated by televisions, the suburbs, long work hours, and life circumstances that geographically separate us. Taken together, we might understand the shortcomings of Turkle’s work as primarily one of digital dualism, i.e. that she considers non-mediated, in-person interaction as inherently more real or authentic compared to anything done through digital networks. What has been left unsaid, and what I want to focus on here, is how Turkle contradicts herself and, in so doing, reveals a bias toward authority and socially conservative political institutions. Turkle selectively deploys her analysis in such a way that traditional sources of authority are left unchallenged.

Zombie cyborg

The New York Times editors, as Claude Fischer wrote yesterday, “have their meme and they will ride it hard.” That meme is Sherry Turkle, the MIT psychologist who has built a cottage industry (a faraway, disconnected cottage on the shores of Cape Cod, no doubt) around pathologizing the bad feelings people get when everyone around them is on their phone. Fischer does a really superb job of laying out what is wrong with this latest round of Turkle fanfare, so you should go read his piece on his blog, but I want to draw out and add to one point that he makes about the “death of conversation” being an evergreen topic for decades.

I have an article coming out in First Monday in about a month, but there is a section that I want to quote from here because I think it is especially relevant to this issue of conversation, attention, and their vulnerability to new technologies. The article argues that online/offline states should be seen as social relationships among groups and not the binary states of an individual. To that point, I show how cultural, political, and economic reactions to railroad lines mirror the experiences we have with the Internet today. What follows is a small section about what sorts of social and cultural effects were attributed to railroads:

pplkpr

We’ve all been there. Sweaty palms, racing heart, left eye that winks at involuntary intervals. You’re emotionally fraught and having a physiological response. It could be an upcoming exam, a big presentation, or that one friend who can’t stop telling you about their fantastic job/spouse/kids/new shoes while wondering out loud how you manage living in such messy quarters.

Our bodies are key sources of information and guidance. Bodied reactions, coupled with culturally situated reflexive analyses, help us make sense of day-to-day events and make behavioral decisions. Feel like you’re going to vomit every time that colleague stops by your office? Maybe they’re toxic. Maybe you’re in love. The bodily response prompts you to do something, and how you interpret that response tells you what that something is.


Late Monday night, it was discovered that one of the EPA’s Twitter accounts had announced its status as a C-list celebrity in the popular iPhone game Kim Kardashian: Hollywood. The tweet was one of those automatically generated ones meant to announce progress in a game or the unlocking of an achievement. It’s easy to imagine the scenario: an overworked or deeply bored social media manager didn’t realize they were signed into their work account instead of their personal one and let the tweet go. Or maybe a family member borrowed their work phone. Who knows? What we do know is that the tweet immediately garnered thousands of retweets, and countless more screenshots were shared on other platforms. Why is this even remotely funny? What sorts of publicly held beliefs does it reveal?

Image from Robert Cooke

On Monday I posed two related questions: “Are wearables like Glass relegated to the same fate as Bluetooth earpieces and the Discman, or can they be saved? Is the entire category irredeemable or have we yet to see the winning execution?” I concluded that most of the problems have to do with the particular executions we’ve seen to date, but it’s also quite possible that the very idea of the wearable is predicated on the digital dualist notion that interacting with a smartphone is inherently disruptive to a productive/happy/authentic lifestyle. Lots of devices are pitched as “getting out of the way,” providing only a little bit of information that is context-specific and quickly (not to mention discreetly) displayed to the user. I contended that the motivation to make devices “invisible” can bring about some unintended consequences; mainly, that early adopters experience the exact opposite reaction. Everyone pays attention to your face computer and nothing is getting out of the way at all.


Last week, The Verge’s Adrianne Jeffries (@adrjeffries) asked a really provocative titular question: “If you back a Kickstarter Project that sells for $2 billion, do you deserve to get rich?” After interviewing venture capitalists and the like, she concludes that the answer isn’t even “no”; it’s “that’s ridiculous.” After speaking to Spark Capital’s Mo Koyfman, Jeffries writes, “Oculus raised money on Kickstarter because it wanted to see if people wanted and would buy the product, and whether developers wanted it and would build games for it. The wildly successful campaign validated that premise, and made it much easier for Oculus to raise money from venture capitalists.”

Kickstarter’s biggest innovation is its ability to cut two time-consuming tasks (market research and raising startup funds) down to a 90-day fundraising window. Companies that choose to use Kickstarter usually aren’t ready to offer equity because that comes after the two steps that Kickstarter is so useful in accelerating. Or, perhaps more honestly, companies opt to use Kickstarter precisely because they want to avoid selling off shares of their company as much as possible. Jeffries gives us a good financial and legal (juridical, if we want to be Foucauldian about it) answer, but that seems like a wholly unfulfilling argument for someone who spent $25 on an Oculus-branded t-shirt. Let’s forget for a moment about what’s legal and normal (those things are rarely moral or fair) and start to compare what happens on Kickstarter to similar, and much older, social arrangements. To start, let’s go way back to the early 1990s.

A Market in Cambodia. Via Wikimedia Commons

In the first chapters of every Economics 101 textbook there’s a misleading hypothetical about the origins of money. David Graeber, in his book Debt: The First 5,000 Years, calls it “the founding myth of our system of economic relations.” The myth is so pervasive that even people who have never taken an Economics 101 class know it and believe in it. We tend to assume that before money there was an awkward barter system where you had to keep all your chickens and yams with you when you went to market to buy a calf. If the person selling the calf didn’t want chickens or yams, no transaction would take place. Money seems to fill a very important need: it lets us compare and exchange a wide variety of goods by establishing a common metric of value. The problem with this construction (simple barter being replaced with cash economies) is that it never happened. That’s what makes Bondsy, an app that lets you effortlessly barter with a private set of friends, so interesting: it takes a modern myth and turns it into everyday reality.

This is the complete version of a three-part essay that I posted in May, June, and July of this year:
Part I: Distributed Agency and the Myth of Autonomy
Part II: Disclosure (Damned If You Do, Damned If You Don’t)
Part III: Documentary Consciousness


Privacy is not dead, but it does need to change.

Part I: Distributed Agency and the Myth of Autonomy

Last spring at TtW2012, a panel titled “Logging off and Disconnection” considered how and why some people choose to restrict (or even terminate) their participation in digital social life—and in doing so raised the question, is it truly possible to log off? Taken together, the four talks by Jenny Davis (@Jup83), Jessica Roberts (@jessyrob), Laura Portwood-Stacer (@lportwoodstacer), and Jessica Vitak (@jvitak) suggested that, while most people express some degree of ambivalence about social media and other digital social technologies, the majority of digital social technology users find the burdens and anxieties of participating in digital social life to be vastly preferable to the burdens and anxieties that accompany not participating. The implied answer is therefore NO: though whether to use social media and digital social technologies remains a choice (in theory), the choice not to use these technologies is no longer a practicable option for a number of people.

In this essay, I first extend the “logging off” argument by considering that it may be technically impossible for anyone, even social media rejecters and abstainers, to disconnect completely from social media and other digital social technologies (to which I will refer throughout simply as ‘digital social technologies’). Consequently, decisions about our presence and participation in digital social life are made not only by us, but also by an expanding network of others. I then examine two prevailing privacy discourses—one championed by journalists and bloggers, the other championed by digital technology companies—to show that, although our connections to digital social technology are out of our hands, we still conceptualize privacy as a matter of individual choice and control. Clinging to the myth of individual autonomy, however, leads us to think about privacy in ways that mask both structural inequality and larger issues of power. Finally, I argue that the reality of inescapable connection and the impossible demands of prevailing privacy discourses have together resulted in what I term documentary consciousness, or the abstracted and internalized reproduction of others’ documentary vision. Documentary consciousness demands impossible disciplinary projects, and as such brings with it a gnawing disquietude; it is not uniformly distributed, but rests most heavily on those for whom (in the words of Foucault) “visibility is a trap.” I close by calling for new ways of thinking about both privacy and autonomy that more accurately reflect the ways power and identity intersect in augmented societies.


We're always connected, whether we're connecting or not.

Last month at TtW2012, a panel titled “Logging off and Disconnection” considered how and why some people choose to restrict (or even terminate) their participation in digital social life—and in doing so raised the question, is it truly possible to log off? Taken together, the four talks by Jenny Davis (@Jup83), Jessica Roberts, Laura Portwood-Stacer (@lportwoodstacer), and Jessica Vitak (@jvitak) suggested that, while most people express some degree of ambivalence about social media and other digital social technologies, the majority of digital social technology users find the burdens and anxieties of participating in digital social life to be vastly preferable to the burdens and anxieties that accompany not participating. The implied answer is therefore NO: though whether to use social media and digital social technologies remains a choice (in theory), the choice not to use these technologies is no longer a practicable option for a number of people.

In the three-part essay to follow, I first extend this argument by considering that it may be technically impossible for anyone, even social media rejecters and abstainers, to disconnect completely from social media and other digital social technologies (to which I will refer throughout simply as ‘digital social technologies’). Even if we choose not to connect directly to digital social technologies, we remain connected to them through our ‘conventional’ or ‘analogue’ social networks. Consequently, decisions about our presence and participation in digital social life are made not only by us, but also by an expanding network of others. In the second section, I examine two prevailing discourses of privacy, and explore the ways in which each fails to account for the contingencies of life in augmented realities. Though these discourses are in some ways diametrically opposed, each serves to reinforce not only radical individualist framings of privacy, but also existing inequalities and norms of visibility. In the final section, I argue that current notions of both “privacy” and “choice” need to be reconceptualized in ways that adequately take into account the increasing digital augmentation of everyday life. We need to see privacy both as a collective condition and as a collective responsibility, something that must be honored and respected as much as guarded and protected.