“You are talking to me like I don’t understand what you are saying. I understand what you are saying, I don’t accept what you are saying,” shouted the bespectacled woman who would soon have tears running down her indignant face. “I’m not from this country. I don’t have a phone. I have kids with me. What am I supposed to do!?” The customer service representative at the airline desk spoke slowly and explained again, as if to a spoiled child, that all of the hotels were full and customers were now responsible for finding and booking their own, but not to worry, customers would be reimbursed after going online and submitting the necessary information with a paid receipt. The woman stared blankly at him, and stepped aside to wait for a supervisor. Now she would cry.
Ugh. I hate the new Facebook. I liked it better without the massive psychological experiments.
Facebook experimented on us in a way that we really didn’t like. It’s important to frame it that way because, as Jenny Davis pointed out earlier this week, they experiment on us all the time and in much more invasive ways. The ever-changing affordances of Facebook are a relatively large intervention in the lives of millions of people, and yet the outrage over these undemocratic changes never really goes beyond a complaint about the new font or the increased visibility of your favorite movies (mine have been and always will be True Stories and Die Hard). To date no organization, as Zeynep Tufekci observed, has had the “stealth methods to quietly model our personality, our vulnerabilities, identify our networks, and effectively nudge and shape our ideas, desires and dreams.” When we do get mad at Facebook, it always seems to be a matter of unintended consequences or unavoidable external forces: There was justified outrage over changes in privacy settings that initiated unwanted context collapse, and we didn’t like the hard truth that Facebook had been releasing its data to governments. Until this week, it was never quite so clear just how much unchecked power Facebook has over its 1.01 billion monthly active users. What would governing such a massive sociotechnical system even look like? (more…)
Last week, The Verge’s Adrianne Jeffries (@adrjeffries) asked a really provocative titular question: “If you back a Kickstarter Project that sells for $2 billion, do you deserve to get rich?” After interviewing venture capitalists and the like she concludes that the answer isn’t even “no” it’s “that’s ridiculous.” After speaking to Spark Capital’s Mo Koyfman Jeffries writes, “Oculus raised money on Kickstarter because it wanted to see if people wanted and would buy the product, and whether developers wanted it and would build games for it. The wildly successful campaign validated that premise, and made it much easier for Oculus to raise money from venture capitalists.”
Kickstarter’s biggest innovation is its ability to cut two time-consuming tasks –market research and startup funding– down to a 90-day fundraising window. Companies that choose to use Kickstarter usually aren’t ready to offer equity because that comes after the two steps that Kickstarter is so useful in accelerating. Or, perhaps more honestly, companies opt to use Kickstarter precisely because they want to avoid selling off shares of their company as much as possible. Jeffries gives us a good financial and legal (juridical, if we want to be Foucauldian about it) answer, but that seems like a wholly unfulfilling argument for someone who spent $25 on an Oculus-branded t-shirt. Let’s forget for a moment about what’s legal and normal –those things are rarely moral or fair– and start to compare what happens on Kickstarter to similar (and much older) social arrangements. To start, let’s go way back to the early 1990s. (more…)
Last spring at TtW2012, a panel titled “Logging off and Disconnection” considered how and why some people choose to restrict (or even terminate) their participation in digital social life—and in doing so raised the question, is it truly possible to log off? Taken together, the four talks by Jenny Davis (@Jup83), Jessica Roberts (@jessyrob), Laura Portwood-Stacer (@lportwoodstacer), and Jessica Vitak (@jvitak) suggested that, while most people express some degree of ambivalence about social media and other digital social technologies, the majority of digital social technology users find the burdens and anxieties of participating in digital social life to be vastly preferable to the burdens and anxieties that accompany not participating. The implied answer is therefore NO: though whether to use social media and digital social technologies remains a choice (in theory), the choice not to use these technologies is no longer a practicable option for a number of people.
In this essay, I first extend the “logging off” argument by considering that it may be technically impossible for anyone, even social media rejecters and abstainers, to disconnect completely from social media and other digital social technologies (to which I will refer throughout simply as ‘digital social technologies’). Consequently, decisions about our presence and participation in digital social life are made not only by us, but also by an expanding network of others. I then examine two prevailing privacy discourses—one championed by journalists and bloggers, the other championed by digital technology companies—to show that, although our connections to digital social technology are out of our hands, we still conceptualize privacy as a matter of individual choice and control. Clinging to the myth of individual autonomy, however, leads us to think about privacy in ways that mask both structural inequality and larger issues of power. Finally, I argue that the reality of inescapable connection and the impossible demands of prevailing privacy discourses have together resulted in what I term documentary consciousness, or the abstracted and internalized reproduction of others’ documentary vision. Documentary consciousness demands impossible disciplinary projects, and as such brings with it a gnawing disquietude; it is not uniformly distributed, but rests most heavily on those for whom (in the words of Foucault) “visibility is a trap.” I close by calling for new ways of thinking about both privacy and autonomy that more accurately reflect the ways power and identity intersect in augmented societies. (more…)
This image is on the Internet. Whose fault is that? (Is it anyone's fault, per se?)
Last month in Part I (Distributed Agency and the Myth of Autonomy), I used the TtW2012 “Logging Off and Disconnection” panel as a starting point to consider whether it is possible to abstain completely from digital social technologies, and came to the conclusion that the answer is “no.” Rejecting digital social technologies can mean significant losses in social capital; depending on the expectations of the people closest to us, rejecting digital social technologies can mean seeming to reject our loved ones (or “liked ones”) as well. Even if we choose to take those risks, digital social technologies are non-optional systems; we can choose not to use them, but we cannot choose to live in a world where we are not affected by other people’s choices to use digital social technologies.
I used Facebook as an example to show that we are always connected to digital social technologies, whether we are connecting through them or not. Facebook (and other companies) collect what I call second-hand data, or data about people other than those from whom the data is collected. This means that whether we leave digital traces is not a decision we can make autonomously, as our friends, acquaintances, and contacts also make these decisions for us. We cannot escape being connected to digital social technologies any more than we can escape society itself.
This week, I examine two prevailing privacy discourses—one championed by journalists and bloggers, the other championed by digital technology companies—to show that, although our connections to digital social technology are out of our hands, we still conceptualize privacy as a matter of individual choice and control, as something individuals can ‘own’. Clinging to the myth of individual autonomy, however, leads us to think about privacy in ways that mask both structural inequality and larger issues of power. (more…)
"For Trayvon Martin" mural by "Israel" in Third Ward Houston. Photo taken by Jenni Mueller.
On February 26, 2012, Trayvon Martin, a Black, unarmed, 17-year-old high school student, was shot and killed by George Zimmerman, a White Hispanic man acting in the self-appointed position of neighborhood watch captain (click here for more details).
The case has become a symbolic battleground for two important issues: gun laws and racism. Although these two issues are inextricably entwined, for the sake of simplicity I will focus here only on the issue of race.
As Jessie Daniels importantly points out on Racism Review, battles over racism have shifted into the realm of social media, where digital and physical race relations persist in an augmented relationship. We see this in both the progressive anti-racist discourses, and the racial smear campaigns surrounding the Martin/Zimmerman case.
Although it is important to expose the overtly racist tactics utilized by some of Zimmerman’s defenders—of which there are plenty—I want to talk here about a more subtle, and so perhaps more problematic form of racial discourse. Specifically, I will talk about how a prominent strategy of protest—coming out of the liberal left—may inadvertently perpetuate, rather than challenge, racial hierarchies in their most dehumanizing form. (more…)
Last Friday, Rachel Maddow reported (video clip above, full transcript here) that hundreds of citizens had suddenly started posting questions on the Facebook pages of Virginia State Senator Ryan McDougle and Kansas Governor Sam Brownback. Their pages were full of questions on women’s health issues, usually accompanied by some kind of statement about why the poster was going to the Facebook page for this information. Here’s an example from Brownback’s page:
The seemingly-coordinated effort draws attention to the recent flurry of forced ultrasound bills that are being passed in state legislatures. Media outlets have started calling it “sarcasm bombing” although the source of that term is difficult to find. ABC News simply says: “One website labelled the messages ‘Sarcasm Bombing’ for the tounge-in-cheek [sic] way the users ask the politicians for help.” A few hours of intensive googling only brings up more headlines parroting the words “sarcasm bomb” but no actual origin story. These events (which have now spread to Governor Rick Perry of Texas as well) raise several important questions but I am only going to focus on one: Can we call Facebook a “Feminist Technology”? (more…)
My post today comes from a class on ableism and disabled bodies that I taught earlier this past semester in my Social Problems course. It grew out of my desire to introduce my students to Donna Haraway’s concept of the cyborg, because I saw some useful connections between the two.
My angle was to begin with the idea of able-bodied society’s instinctive, gut-level sense of discomfort and fear regarding disabled bodies, which is outlined in disability studies scholar Fiona Kumari Campbell’s book Contours of Ableism. Briefly, Campbell distinguishes between disableism, which is the set of discriminatory ideas and practices that construct the world in such a way that it favors the able-bodied and marginalizes the disabled, and ableism, which is the set of constructed meanings that set disabled bodies themselves apart as objects of distaste and discomfort. In this sense, disabled bodies are imbued with a kind of queerness – they are Other in the most physical sense, outside and beyond accepted norms, unknown and unknowable, uncontrollable, disturbing in how difficult they are to pin down. Campbell identifies this quality of unknowability and uncontainability as especially, viscerally horrifying.
This brief essay attempts to link two conceptualizations of the important relationship of the on and offline. I will connect (1) my argument that we should abandon the digital dualist assumption that the on and offline are separate in favor of the view that they enmesh into an augmented reality and (2) the problematic view that the Internet transcends social structures to produce something “objective” (or “flat” to use Thomas Friedman’s term).
Instead, recognizing that code has always been embedded in social structures allows persistent inequalities enacted in the name of computational objectivity to be identified (e.g., the hidden hierarchies of Wikipedia, the hidden profit-motive behind open-source, the hidden gendered standpoint of computer code, and so on). I will argue that the fallacy of web objectivity is driven fundamentally by digital dualism, providing further evidence that this dualism is at once conceptually false, and, most importantly, morally problematic. Simply, this specific form of digital dualism perpetuates structural inequalities by masking their very existence. (more…)
I shot this video at the University of Maryland’s 2011 Summer Social Webshop and am posting it here with Eszter’s permission. This presentation is particularly interesting because she describes the arc of her research throughout her career thus far, noting specific topical and methodological shifts.
We live in a cyborg society. Technology has infiltrated the most fundamental aspects of our lives: social organization, the body, even our self-concepts. This blog chronicles our new, augmented reality.