Recently I’ve started to wonder if certain friends of mine aren’t receiving kickbacks from the social music streaming service Spotify. I’ve started calling these friends the “Spotivangelists,” and have jokingly insisted that recruitment bonuses are the only reason they could possibly have for being so intent on getting me to sign up. What I’ll try to examine in this post, however, is not why my friends are so intent on getting me to join Spotify, but why I’ve been so intent on resisting. By rights, I should be some kind of Spotifanatic; music is a huge part of my life, and I discover almost all of my new music through friends. So why am I still holding out?
For the uninitiated (which included me until about a month ago), Wikipedia describes Spotify as “a Swedish music streaming service offering digitally restricted streaming of selected music from a range of major and independent record labels.” Spotify offers a free service that includes “radio-style” advertisements (and, for users outside the US, a usage cap of 10 hours per month), as well as a paid “premium” service that does not include advertisements. Perhaps the biggest difference between Spotify and other music streaming services, however, is its “social” component: Spotify is deeply integrated with Facebook (in fact, Facebook accounts are mandatory for Spotify users outside of Germany), which enables Spotify users both to send music files to their Facebook friends via an ‘inbox’ and to broadcast their listening habits on Facebook through so-called “frictionless sharing.”
Alternatively, the Spotify website describes Spotify simply as “all the music, all the time”—and indeed, the service’s vast catalogue was the first selling point a Spotivangelist friend emphasized to me. I insisted that some of my favorite bands were too obscure to be included in Spotify’s library, but as we played Band Go Fish, a majority of the bands I named turned out to be in there (even a long-dead Boston band that “isn’t easily pigeonholed”). Spotify users can also upload their own “local files” into Spotify, so that they “never again need to switch between media players.”
To me, this seemed less like a perk and more like a sneaky way to chain me both to Spotify and to my computer. “But I want my music even when I don’t want the Internet,” I protested, “and sometimes I drive where there isn’t cell phone reception!” My friend informed me that Spotify’s $10/month ‘Premium’ service not only eliminates advertisements, but also includes offline access to any saved playlists. Temporarily out of points to argue, I started to think about joining Spotify (albeit grudgingly).
The Spotified dog was left to lie until a few weeks later, when my iPod caught a terminal case of hard drive failure. Nothing stirs cloud-fanatics to action quite like hardware-induced data loss, and the Spotivangelist choir came out in full-force SSATB crescendo to sing Spotify’s praises. One friend explained that he uses Spotify primarily for music discovery, and still buys or downloads his own copies of any newly discovered music he likes. Another stated that he uses Spotify anti-socially, and has delinked Spotify and Facebook as much as possible; this friend agreed with me wholeheartedly when I said I had no desire to ‘perform’ my listening for others. Still another told me that the failed hard drive was “a sign that you just need to get a Spotify account and give up on ownership,” and added, “I almost never push products, but Spotify is just that good.”
These conversations were striking examples of the ‘viral marketing’ success to which any “social” application aspires: ordinary Spotify users—people with no financial stake in the company, and who receive no direct reward for recruiting new users—were putting a surprising amount of effort into trying to get me on this service. (Though one might argue that users are incentivized to recruit friends because their own experience of using Spotify is better when more of their friends use it, I doubt any of my friends are that anxious either to know what I’m listening to at every moment or to share music with me in ways that don’t involve co-listening or mailing flash drives.) While all that effort has not (at the time of writing) led me to crack and join Spotify, it has led me to spend a fair amount of time thinking about the reasons underlying my reluctance to do so. My tentative conclusion, at the moment, is that it mostly boils down to issues of control.
In his book Edited Clean Version: Technology and the Culture of Control, Raiford Guins examines ways in which users of newer media technologies are offered “empowerment” through “control,” which in turn is made available through arrays of choices. Though Guins’s project examines censorial practices in newer media, many of his ideas are applicable here as well; he explains that control technologies are “designed to advance an ethos of neoliberal governance,” and draws on both Deleuze’s work on control and Foucault’s work on governmentality to show that “choice is imagined as an active, autonomous action… an enabling action for regulated and disciplined freedom: the paradoxical logic of choice in the era of control.” Choice functions as “a preferred surrogate strategy in neoliberal societies for the presumed limitations and restrictions of regulation,” and exercising our ‘freedom of choice’ encourages us to see “regulatory practices of self-management as licensed freedom, not as dominating.”
In other words, when we are offered a choice and a vast array of options, we are encouraged to feel both free and empowered (not just a choice, but so many choices available for us to make); we are discouraged from paying attention to what has structured the choice itself. This fits well with a political ideology that asks us to believe that we are all autonomous individuals, with no one but ourselves to blame (or credit) for our failures (or successes). To borrow Deleuze’s highway metaphor, we’re encouraged to see the lone car on the empty road as a symbol of freedom and self-direction; we’re not encouraged to think about how the range of possible routes is pre-determined, because the car can only go where the state has decided to build roads. We can choose from what has been made available to us, and we’re encouraged to see that choice as “freedom” without thinking about what isn’t available to us and why.
Spotify promises “Millions of tracks, any time you like… Just help yourself to whatever you want, whenever you want it”—a seemingly infinite array of musical choices, the ultimate in musical freedom. The website further states, “our dream is to have all the music in the world available instantly to everyone, wherever they are.” All the music, all the places, all the people; what a vast utopian soundscape! But of course, it’s not that simple: ‘all the music’ is really just the combination of Spotify’s (admittedly large) catalogue plus whatever you’ve chosen to upload as “local files,” and the catalogue piece of that equation disappears as soon as you cease to be a Spotify user.
Ultimately, this is the piece that really gets me. Sure, I could use Spotify the way one of my friends does, and judiciously track down my own copies of any music I encounter on Spotify and subsequently come to love. But I know myself; I’m neither sufficiently organized nor sufficiently disciplined to make regular time for Spotify File Duplication. I would become complacent; I would assemble a collection of music to which I’d become significantly attached, an array of playlists that would come to capture memories I’d be distraught at the idea of losing. And at that point, a part of me would thereafter be at Spotify’s mercy, held hostage by the synergy between my emotions and sound.
What if one day, after months or years of using the service, I decide I want to leave Spotify? Maybe I’d be sick of paying for their service; more likely, perhaps they’d do something policy-wise that made me angry, and I’d decide to stop giving them my money and my participation as a result. In that moment, I would suddenly be trapped between principle and passion; I would be forced to “choose” between betraying either one or the other. My ‘choice’ would be to keep driving down the corrupt Spoti-Highway, or to total the car by trying to drive off the road. Even in the hypothetical, this feels like an awful choice to make; far from freedom, it feels like being trapped.
My relationship to music may be incredibly social, but it is also profoundly private and personal. I suspect this is true for a lot of people who care about music, and I think Spotify understands that; I think the saved, offline-available playlists that can accompany a Spotify user anywhere she goes—yet which also disappear the moment she cancels her subscription—are part of the service’s subscriber retention strategy. There’s nothing wrong with that, per se; implicitly holding a few playlists hostage is not the same thing as holding (say) someone’s firstborn child, or their right kidney, or their cat, and technically speaking there’s nothing to stop a disgruntled user from using her Spotify playlist as an acquisition to-do list before she quits the service.
But there’s a time price to rebuilding a music library, and frequently an economic price as well (as I am presently all too aware, thanks to that hard drive failure). Though I realize my relationship with music is already at the mercy of hardware manufacturers, software developers, and others (to say nothing of vinyl pressers, turntable cartridge makers, electric companies and the power grid and all of that), I’m wary of getting into a position from which I might have to calculate what either my love of music or my sense of right and wrong is worth in currencies of time, money, or frustration. In the end, I’m still clinging to “ownership” because for me, having files on my hard drive does a better job of preserving illusions of freedom and control.
In the last part of my recent essay “A New Privacy,” I described documentary consciousness as the perpetual (and frequently anxiety-provoking) awareness that, at each moment, we are potentially the documented objects of others. In this post, I use a friend’s recent ‘Facebook debacle’ as a starting point to elaborate on what documentary consciousness is, how it works, and whether it can be diminished or assuaged by the fact that “nobody… wants to see your status update from 2007.” I draw on Brian Massumi’s distinction between the possible and the potential to help explain why documentary consciousness entails “the ever-present sense of a looming future failure,” whether anyone reads that old status update or not.
My friend Dani (not her real name) recently moved 2000 miles away to be closer to her partner. The weekend before the movers came, Dani had a going-away party at her house that featured a cookout, friends’ bands playing live, and another friend DJing after the bands’ sets. The event started at 2pm and, somewhat amazingly, managed to make it to midnight before a uniformed gentleman arrived to announce that the hour for outdoor merriment had passed. Dani chose to look on the bright side of things, and saw getting shut down by local law enforcement as just one more sign that the night had been a success. As she posted to Facebook as the officer walked away, “Hey! At least the cops shut down my party! Yeah!!”
Much later that night, Dani suddenly remembered that certain members of her extended family (with whom she has contentious relationships) were “friends” with her on Facebook; these relatives would be scandalized if they saw her status update. Dani jumped out of bed, went to her computer, and deleted the digital residue of her earlier exuberance. In reloading her Facebook profile, however, Dani saw just how many of her friends had also excitedly posted either about how the party ended or about her consequent jubilation. She started to panic at the sheer volume of documentation: well-meaning friends had not only posted on her wall, but also tagged her in a flurry of their own status updates and posted photographs. Due to a difference in time zones, it would be impossible for Dani to remove all the tags before her relatives began to arrive at work and sit down at their computers.
Then Dani had an idea: she’d simply block her relatives on Facebook instead. That way her relatives wouldn’t be able to view any of the information on her wall; she could go back to bed, and decide which posts to “un-tag” herself from the next day.
The problem, as Dani explained a few nights later, was that she hadn’t realized “blocking” someone on Facebook also entails “unfriending” that person—and that “unblocking” that person does not entail quietly “refriending” them. A small group of us spent an hour or more debating the question Dani now faced: should she refriend her extended relatives or not? If she left them unfriended, they would eventually discover the unfriendings (perhaps through Facebook’s “friend suggestions”); this would make them upset, and put Dani’s closer family members in awkward positions. If Dani did refriend her extended relatives, Facebook’s “friend request” notifications themselves would implicitly serve as notifications of the unfriendings; not only would the relatives be upset, but Dani would also have to manage the family fall-out now. She asked us to come up with a plausible story—maybe about her account getting hacked—to feed the relatives about why they had been unfriended, but no one story passed muster with all of us. Dani admitted that she didn’t want to be “Facebook friends” with those relatives anyway, but felt an obligation to her closer family members to maintain digital connections with the others.
Dani’s story illustrates a number of concepts and ideas that I’ve been thinking about recently, most of which will be familiar to regular Cyborgology readers. First and foremost is the fact that “online” and “offline” are not separate worlds: online and offline relationships impact and shape each other, and frequently blur to the point of being simply “relationships” that include both online and offline interaction. Digital dualists (those who believe online and offline are separate worlds or realities) would do well to observe the very real effects that Dani’s offline activities had on her “online relationships,” as well as the effects her online activities were poised to have on her “offline relationships.” They might note as well that five people spent an hour discussing—offline and in real, live, Turklesque “face-to-face conversation”—what one of them should do to manage her online connections. Dani’s story also highlights the way family expectations can shape our use of digital social technologies (see Jessica Roberts’s [@jessyrob] presentation as part of TtW2012’s “Logging off and Disconnection” panel), and the “laborious practices of protection, maintenance, and care” that social media users put into presenting themselves online. It highlights, too, how Facebook can transform one’s friends into a form of digital paparazzi, especially for the majority of Facebook users who haven’t memorized all the ins and outs of Facebook’s constantly shifting privacy settings.
What I want to focus on, however, is the moment of panic Dani felt when she confronted the plethora of posts her friends had made about the party. Though her anxiety was sparked by being not a potentially documented object, but a clearly documented one (perhaps we would call her experience “documentary awareness”), at the heart of her disquietude was the same “looming sense of future failure” that characterizes documentary consciousness. Though each variable had a smaller range of values, she was attempting to calculate the same unwieldy and incalculable matrix of who might see which documentary artifacts, at which times, in which contexts, with what kinds of reactions, and with what kinds of consequences.
One might argue—as have Nathan Jurgenson (@nathanjurgenson) and PJ Rey (@pjrey), among others—that for most of us, the likelihood of anyone reading any particular status update we post is very small, particularly if that status update is in the past; even in the present, “the web… still sometimes forgets.” This is because the so-called ‘data deluge’ is not limited to corporations and researchers; it is also alive and well in ordinary people’s experiences with social media. How many Facebook users have all of their Facebook ‘friends’ in their newsfeeds, and how many Facebook users actually read every newsfeed item that appears? For an ordinary Facebook user, who’s going to comb through each and every status update to find the incriminating one, whether it was posted three days ago or three years ago?
Like jokes about “at least it was on G+ where no one will see it,” these arguments rest on the assumption that though it may be possible for someone to view incriminating content, the probability that anyone will do so is really very low for most social media users. Fallout from something as mundane as a single status update is therefore only a very small possibility, and not one worth getting worked up about. We may like to think that we are all special, unique snowflakes, and that “everything we do is being recorded for all time,” but no: snowflakes melt, and so do their status updates, usually without incident (or accolade).
What fuels the anxiety of documentary consciousness, however, is not possibility, but potentiality. In his book Parables for the Virtual, Brian Massumi draws on philosopher Henri Bergson to explain the difference between “the possible” and “the potential” through the image of an archer’s arrow shot into the air. While the arrow is in flight, there exists an infinite array of positions that it could occupy in any next moment; the continuity of movement that we see in the arrow’s flight path can only be measured and confirmed once the arrow comes to rest (whether in a target, in the ground, or off in the bushes somewhere). While the arrow is in flight, all points in space are its potential endpoints; when the arrow comes to rest, and its “real trajectory” is known, the range of its possible endpoints can be plotted retrospectively by tracing backward from where the arrow’s flight ended. Possibility is therefore “back-formed from potential’s unfolding.”
If we consider any given digital artifact on Dani’s Facebook profile to be the arrow, and its ultimate impact (or lack thereof) the place where the arrow lands, we start to see why potentiality is the key to documentary consciousness. When has a tag, a photograph, or a status update “come to rest”? Even if it registers a bullseye in the target of an angry relative, there are still more relatives; a status update’s flight path can switch seamlessly from digital to analogue (and back again) through word of mouth, through family gossip, through online and offline confrontations. Even if a photograph is deleted, it may have been saved or copied by someone else; though one version’s flight path may end at uneventful deletion, countless others may still be in flight—and we can know neither their possible numbers, nor their possible impacts, until they land.
Massumi states that once possibility is formed, it feeds back in to prescript a normative range of outcomes. In Dani’s case, this range probably includes a large number of outcomes in which nothing in particular comes of her status update, and only a few outcomes in which she experiences fallout (after all, the risk—though real—is small). But the problem is that the flight path of a digital artifact never fully stops unfolding, so the range of possibility is never fully formed. Potential, Massumi writes, “is the immanence of a thing to its still indeterminate variation, under way.” Digital artifacts cannot escape the unprescripted realm of potentiality, and the potential “only feeds forward, unfolding toward the registering of an event.” It is these unknowable, unregistered events that ‘loom’ when we experience documentary consciousness.
In short, the magnitude of possibility is not the issue here. It is not possibility, but possibility’s indeterminacy, that drives our anxieties about the unknown and unknowable futures of our digital artifacts.
In Part I of this essay, I considered the fact that we are always connected to digital social technologies, whether we are connecting through them or not. Because many companies collect what I call second-hand data (data about people other than those from whom the data is collected), whether we leave digital traces is not a decision we can make autonomously. The end result is that we cannot escape being connected to digital social technologies any more than we can escape society itself.
In Part II, I examined two prevailing privacy discourses to show that, although our connections to digital social technology are out of our hands, we still conceptualize privacy as a matter of individual choice and control. Clinging to the myth of individual autonomy, however, leads us to think about privacy in ways that mask both structural inequalities and larger issues of power.
In this third and final installment, I consider one of the many impacts that follow from being inescapably connected in a society that still masks issues of power and inequality through conceptualizations of ‘privacy’ as an individual choice. I argue that the reality of inescapable connection and the impossible demands of prevailing privacy discourses have together resulted in what I term documentary consciousness, or the abstracted and internalized reproduction of others’ documentary vision. Documentary consciousness demands impossible disciplinary projects, and as such brings with it a gnawing disquietude; it is not uniformly distributed, but rests most heavily on those for whom (in the words of Foucault) “visibility is a trap.” I close by calling for new ways of thinking about both privacy and autonomy that more accurately reflect the ways power and identity intersect in augmented societies.
The Impact of Documentary Consciousness
Just before the turn of the 19th century, Jeremy Bentham designed a prison he called the Panopticon. The idea behind the prison’s circular structure was simple: a guard in a central tower could see into any of the prisoners’ cells at any given time, but no prisoner could ever see into the tower. The prisoners would therefore be subordinated by this asymmetrical gaze: because they would always know that they could be watched, but would never know if they were being watched, the prisoners would be forced to act at all times as if they were being watched, whether they were being watched or not. In contemporary parlance, Bentham’s Panopticon basically sought to crowd-source the labor of monitoring prisoners to the prisoners themselves.
Though Bentham’s Panopticon itself was never built, Michel Foucault used Bentham’s design to build his own concept of panopticism. For Foucault, the Panopticon represents not power wielded by particular individuals through brute force, but power abstracted to the subtle and ideal form of a power relation. The Panopticon itself is a mechanism not just of power, but of disciplinary power; this kind of power works in prisons, in other types of institutions, and in modern societies generally because citizens and prisoners alike, aware at all times that they could be under surveillance, internalize the panoptic gaze and reproduce the watcher/watched relation within themselves by closely monitoring and controlling their own conduct. Discipline (and other technologies of power) therefore produces docile bodies, as individuals self-regulate by acting on their own minds, bodies, and conduct through technologies of the self.
Foucault famously held that “visibility is a trap”; were he alive today, it is unlikely he would be on board with radical transparency. Accordingly, it has become well-worn analytic territory to critique social media and digital social technologies by likening them to Foucault’s panoptic surveillance. As early as 1990, Mark Poster argued that databases of digitalized information constituted what he termed the “Superpanopticon.” Since then, others have pointed out that “[even] minutiae can create the Superpanopticon”—and it would be difficult to argue that social media websites don’t help to circulate a lot of minutiae.
Facebook itself has been likened to a digital panopticon since at least 2007, though there are issues both with some of these critiques and with responses to them. The “Facebook=Panopticon” theme emerges anew each time the site redesigns its privacy settings—for instance, following the launch of so-called “frictionless sharing”—or, in more recent Facebook-speak, every time the site redesigns its “visibility settings.” Others express skepticism, however, as to whether the social media site is really what I jokingly like to call “the Panoptibook”; they claim that Facebook’s goal is supposedly “not to discipline us,” but to coerce us into reckless sharing (though I would argue that the latter is merely an instance of the former).
Others point to the fact that using Facebook is still “largely voluntary”; therefore, it cannot be the Panopticon. ‘Voluntary,’ however, is predicated on the notion of free choice, and as I argued in Part I, our choices here are constrained; at best, each of us can only choose whether or not to interact with Facebook (or other digital social technologies) directly. Infinitely expanding definitions of ‘disclosure’ notwithstanding, whether we leave digital traces on Facebook depends not just on the choices we make as individuals, but on the choices made by people to whom we are connected in any way. For most of us, this means that leaving traces on Facebook is largely inevitable, whether “voluntary” or not.
What may not be as readily apparent is that whether or not we interact with the site directly, Facebook and other social media sites also leave traces on us. Nathan Jurgenson (@nathanjurgenson) describes “documentary vision” as an augmented version of the photographer’s ‘camera eye,’ one through which the infinite opportunity for self-documentation afforded by social media leads us not only to view the world in terms of its documentary potential, but also to experience our own present “as always a future past.” In this way, “the logic of Facebook” affects us most profoundly not when we are using Facebook, but when we are doing nearly anything else. Away from our screens, the experiences we choose and the ways we experience them are inexorably colored not only by the ways we imagine they could be read by others in artifact form, but by the range of idealized documentary artifacts we imagine we could create from them. We see and shape our moments based on the stories we might tell about them, on the future histories they could become.
I argue here, however, that Facebook’s phenomenological impact is not limited to opportunities for self-documentation. More and more, we are attuned not only to possibilities of documenting, but also to possibilities of being documented. As I explored in Part I of this essay, living in an augmented world means that we are always connected to digital social technologies (whether we are connecting to them or not). As I elaborated in last month’s Part II, the Shame On You paradigm reminds us that virtually any moment can be a future past disclosure; we also know that social media and digital social technologies are structured by the Look At Me paradigm, which insists that “any data that can be shared, WILL be shared.” Consequently, if augmented reality has seen the emergence of a new kind of ‘camera eye’, it has seen as well the emergence of a new kind of camera shyness. Much as Bentham designed the Panopticon to crowd-source the disciplinary work of prison guards, Facebook’s design ends up crowd-sourcing the disciplinary functions of the paparazzi.
Accordingly, I extend Jurgenson’s concept of documentary vision—through which we are simultaneously documenting subjects and documented objects, perpetually aware of each moment’s documentary potential—into what I term documentary consciousness, or the perpetual awareness that, at each moment, we are potentially the documented objects of others. I want to be clear that it is not Facebook itself that is “the Panoptibook”; knowing something about nearly everyone is not nearly the same thing as seeing everything about anyone. Moreover, what Facebook ‘knows’ comes not only from what it ‘sees’ of our actions online, but also from the online and offline actions of our family members, friends, and acquaintances. Our loved ones—and our “liked” ones, and untold scores of strangers—are at least as much the guard in the Panopticon tower as is Facebook itself, if not more so. As a result, we are now subjected to a second-order asymmetrical gaze: we can never know with certainty whether we are within the field of someone else’s documentary vision, and we can never know when, where, by whom, or to what end any documentary artifacts created of us will be viewed.
As Erving Goffman elaborated more than 50 years ago, we all take on different roles in different social contexts. For Goffman, this represents not “a lack of integrity,” but the work each of us does, and is expected to do, in order to make social interaction function. In fact, it is those individuals who refuse to play appropriate roles, or whose behavior deviates from what others expect based on the situation at hand, who lose face and tarnish their credibility with others. The context collapse designed into most social media therefore complicates profoundly even the purposeful, asynchronous self-presentation that takes place on such websites, which has come to require “laborious practices of protection, maintenance, and care.”
When we internalize the abstracted and compounded documentary gaze, we are left with a Sisyphean disciplinary task: we become obligated to consider not just the situation at hand, and not just the audiences we choose for the documentary artifacts we create, but also every future audience for every documentary artifact created by anyone else. It is no longer enough to play the appropriate social role in the present moment; it is no longer enough to craft and curate images of selves we dream of becoming, selves who will have lived idealized versions of our near and distant pasts. Applying technologies of self starts to bring less pleasure and more resignation, as documentary consciousness stirs a subtle but persistent disquietude; documentary consciousness haunts us with the knowledge that we cannot be all things, at all times, to all of the others that we (or our artifacts) will ever encounter. Documentary consciousness entails the ever-present sense of a looming future failure.
As I discussed in Part II, the impacts of these inevitable failures are not evenly distributed. Those who have the most to lose are not people who are “evil” or who are “doing something they shouldn’t be doing,” but people who live ordinary lives within marginalized groups. Those who have the most to gain, on the other hand, are people who are already among the most privileged, and corporations that already wield a great deal of power. The greatest burdens of documentary consciousness itself are therefore likely to be carried by people who are already carrying other burdens of social inequality.
Recent attention to a website called WeKnowWhatYoureDoing.com showcased much of this yet again. The speaker behind the “we” is a white, 18-year-old British man named Callum Haywood (who’s economically privileged enough to own an array of computer and networking hardware), who built a site that aggregates potentially incriminating Facebook status updates and showcases them with the names and photos of the people who posted them. Because all the data used is “publicly accessible via the Graph API,” Haywood states in a disclaimer on the site that he “cannot be held responsible for any persons [sic] actions” as a result of using what he terms “this experiment”; he has further stated that his site (which is subtitled, “…and we think you should stop”) is intended to be “a learning tool” for those users who have failed to “properly [understand] their privacy options” on Facebook.
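How low is the technical bar for this kind of “experiment”? Low enough that a few lines of code suffice. The sketch below is mine, not Haywood’s; it assumes the Graph API’s public post search as it works at the time of writing (no login, no API key, no relationship to the people being searched), and the search phrase is merely illustrative of the kind the site uses:

```python
# A minimal sketch of the sort of aggregation WeKnowWhatYoureDoing.com
# performs. Assumes the Graph API's public post-search endpoint as it
# works at the time of writing; the search phrase is illustrative only.
import json
import urllib.parse
import urllib.request

def public_posts(phrase):
    """Fetch public status updates matching a phrase, names attached."""
    query = urllib.parse.urlencode({"q": phrase, "type": "post"})
    url = "https://graph.facebook.com/search?" + query
    with urllib.request.urlopen(url) as response:  # no credentials required
        results = json.load(response)
    # Each matching post arrives already tied to its poster's name and ID.
    return [(post["from"]["name"], post.get("message", ""))
            for post in results.get("data", [])
            if "from" in post]

for name, message in public_posts("hate my boss"):
    print(name, "::", message)
```

The point of the sketch is not the code, but what the code omits: there is no password prompt, no consent step, and no notification to the people whose updates (and names, and photos) are being collected. The only “public” to whom this data is accessible is the public that knows the endpoint exists.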
Coverage of the site’s rapid rise to popularity (or at least high visibility) was similar to coverage surrounding Girls Around Me: a lot of Shame On You, and the occasional critique that stopped at “creepy.” Tech-savvy white men thought the site was great; a young white woman starting college at Yale this fall explained that her digital cohort—“the youngest millennials, the real Facebook Generation”—has learned from the mistakes of “those who are ten years older than us.” As a result, her generation thinks Facebook’s privacy settings are easy, obvious, and normal; if your mileage has varied, “you have no one to blame but yourself.” In examining the screenshots from WeKnowWhatYoureDoing.com that these articles feature, I have yet to find one featured Facebook user who writes like a Yale-bound preparatory school graduate; unlike the articles’ authors, the majority of featured Facebook users in these screenshots are people of color. It is hard to see WeKnowWhatYoureDoing.com as doing anything other than offering self-satisfied amusement to privileged people, at the acknowledged potential expense of less privileged people’s employment.
Even overlooking the facts that Facebook’s privacy policy may be “more confusing and harder to understand than the small print coming from credit card companies,” and that the data in Facebook’s Graph API is really only “publicly accessible” in reference to a ‘public’ comprised entirely of software developers, the story Haywood wants to tell with WeKnowWhatYoureDoing.com is fundamentally flawed. It is a story in which “people violate their own privacies on a regular basis,” in a world where digital surveillance, companies like Facebook, and smug self-important software developers fade into the background of the setting’s ‘natural’ world. Haywood states, “[p]eople have lost their jobs in the past due to some of the posts they put on Facebook, so maybe this demonstrates why”; what he seems to be missing is that his site demonstrates not only “why,” but how people come to lose their jobs.
In pretending that “information wants to be free” and holding individuals responsible for violations of their own privacy, we neglect to consider the responsibility of other individuals who write code for companies like Facebook, or who use the data available through the Graph API, or who circulate Facebook data more widely, or who help Facebook generate and collect data (yes, even by tagging their friends in photographs). If we cannot control our own privacy, it is because we can so easily impact the privacy of everyone we know—and even of people we don’t know.
We urgently need to rethink ‘privacy’ in ways that expand beyond the level of individual conditions, obligations, or responsibilities, yet also take into account differing intersections of visibility, context, and identity in an unequal but intricately connected society. And we need as well to turn much of our thinking about privacy and individual autonomy on its head. We cannot do justice to questions of who is visible, and to whom, to what end, and to what effect, so long as we continue to believe that privacy and visibility are simply neutral consequences of individual choices, or that such choices are individual moral responsibilities.
We must reconceptualize privacy as a collective condition, one that entails more than simply ‘lack of visibility’; privacy must also be a collective responsibility, one that individuals and institutions alike honor in ways that go beyond ‘opting out’ of the near-ubiquitous state and corporate surveillance we seem to take for granted. It is time to stop critiquing the visible minutiae of individual lives and choices, and to start asking critical questions about who is looking, and why, and what happens next.
Whitney Erin Boesel (@phenatypical) is a graduate student in Sociology at the University of California, Santa Cruz.
Last month in Part I (Distributed Agency and the Myth of Autonomy), I used the TtW2012 “Logging Off and Disconnection” panel as a starting point to consider whether it is possible to abstain completely from digital social technologies, and came to the conclusion that the answer is “no.” Rejecting digital social technologies can mean significant losses in social capital; depending on the expectations of the people closest to us, rejecting digital social technologies can mean seeming to reject our loved ones (or “liked ones”) as well. Even if we choose to take those risks, digital social technologies are non-optional systems; we can choose not to use them, but we cannot choose to live in a world where we are not affected by other people’s choices to use digital social technologies.
I used Facebook as an example to show that we are always connected to digital social technologies, whether we are connecting through them or not. Facebook (and other companies) collect what I call second-hand data, or data about people other than those from whom the data is collected. This means that whether we leave digital traces is not a decision we can make autonomously, as our friends, acquaintances, and contacts also make these decisions for us. We cannot escape being connected to digital social technologies any more than we can escape society itself.
This week, I examine two prevailing privacy discourses—one championed by journalists and bloggers, the other championed by digital technology companies—to show that, although our connections to digital social technology are out of our hands, we still conceptualize privacy as a matter of individual choice and control, as something individuals can ‘own’. Clinging to the myth of individual autonomy, however, leads us to think about privacy in ways that mask both structural inequality and larger issues of power.
Disclosure: Damned If You Do, and Damned If You Don’t
Inescapable connection notwithstanding, we still largely conceptualize disclosure as an individual choice, and privacy as a personal responsibility. This is particularly unsurprising in the United States, where an obsession with self-determination is foundational not only to the radical individualism that increasingly characterizes American culture, but also to much of our national mythology (to let go of the ‘autonomous individual’ would be to relinquish the “bootstrap” narrative, the mirage of meritocracy, and the shaky belief that bad things don’t happen to good people, among other things).
Though the intersection of digital interaction and personal information is hardly localized to the United States, major digital social technology companies such as Facebook and Google are headquartered in the U.S.; perhaps relatedly, the two primary discourses of privacy within that intersection share a good deal of underlying ideology with U.S. national mythology. The first of these discourses centers on a paradigm that I’ll call Shame On You, and spotlights issues of privacy and agency; the second centers on a paradigm that I’ll call Look At Me, and spotlights issues of privacy and identity.
“You shouldn’t have put it on the Internet, stupid!” Within the Shame On You paradigm, the control of personal information and the protection of personal privacy are not just individual responsibilities, but also moral obligations. Choosing to disclose is at best a risk and a liability; at worst, it is the moment we bring upon ourselves any unwanted social, emotional, or economic impacts that will stem (at any point, and in any way) from either an intended or an unintended audience’s access to something we have made digitally available. Disclosure is framed as an individual choice, though we need not choose intentionally or even knowingly; it can be the choice to disclose information, the choice to make incorrect assumptions about to whom information is (or will be) accessible, or the choice to remain ignorant of what, when, by whom, how, and to what end that information can be made accessible.
A privacy violation is therefore ultimately a failure of vigilance, a failure of prescience; it redefines as disclosure the instant in which we should have known better, regardless of what it is we should have known. Accordingly, the greatest shame in compromised privacy is not what is exposed, but the fact of exposure itself. We judge people less for showing women their genitals, and more for being reckless enough to get caught doing so on Twitter.
Shame On You was showcased most recently in the commentary surrounding the controversial iPhone app “Girls Around Me,” which used a combination of public Google Maps data, public Foursquare check-ins, and ‘publicly available’[i] Facebook information to create a display of nearby women. The creators of Girls Around Me claimed their so-called “creepy” app was being targeted as a scapegoat, and insisted that the app could just as well be used to locate men instead of women. Nonetheless, the creators’ use of the diminutive term “girls” rather than the more accurate term “women” exemplifies the sexism and the objectification of women on which the app was designed to capitalize. (If the app’s graphic design somehow failed to make this clear, see also one developer’s comments about using Girls Around Me to “[avoid] ugly women on a night out”).
The telling use of “girls” seemed to pass uncommented upon, however, and most accounts of the controversy (with few exceptions) omitted gender and power dynamics from the discussion—as well as “society, norms, politics, values and everything else confusing about the analogue world.” The result was a powerful but unexamined synergy between Shame On You and the politics of sex and visibility, one that cast as transgressors women who had dared not only to go out in public, but to publicly declare where they had gone.
Another blogger admonishes that, “the only way to really stay off the grid is to never sign up for these services in the first place. Failing that, you really should take your online privacy seriously. After all, Facebook isn’t going to help you, as the more you share, the more valuable you are to its real customers, the advertisers. You really need to take responsibility for yourself.” Still another argues that conventional standards of morality and behavior no longer apply to digitally mediated actions, because “publishing anything publicly online means you have, in fact, opted in”—to any and every conceivable (or inconceivable) use of whatever one might have made available. The pervasive tone of moral superiority, both in these articles and in others like them, proclaims loudly and clearly: Shame On You—for being foolish, ignorant, careless, naïve, or (worst of all) deliberately choosing to put yourself on digital display.
Accounts such as these serve not only to deflect attention away from problems like sexism and objectification, but to normalize and naturalize a veritable litany of questionable phenomena: pervasive surveillance, predatory data collection, targeted ads, deliberately obtuse privacy policies, onerous opt-out procedures, neoliberal self-interest, the expanding power of social media companies, the repackaging of users as products, and the simultaneous monetization and commoditization of information, to name just a few. These accounts perpetuate the myth that we are all autonomous individuals, isolated and distinct, endowed with indomitable agency and afforded infinite arrays of choices.
With other variables obfuscated or reduced to the background noise of normalcy, the only things left to blame for unanticipated or unwanted outcomes–or for our disquietude at observing such outcomes–are those individuals who choose to expose themselves in the first place. Of course corporations try to coerce us into “putting out” information, and of course they will take anything they can get if we are not careful; this is just their nature. It is up to us to be good users, to keep telling them no, to remain vigilant and distrustful (even if we like them), and never to let them go all the way. We are to keep the aspirin between our knees, and our data to ourselves.
Shame On You extends beyond disclosure to corporations, and—for all its implicit digital dualism—beyond digitally mediated disclosure as well. Case in point: during the course of writing this essay, I received a mass-emailed “Community Alert Bulletin” in which the Chief of Police at my university campus warned of “suspicious activity.” On several occasions, it seems, two men have been seen “roaming the library;” one of them typically “acts as a look out,” while the other approaches a woman who is sitting or studying by herself. What does one man do while the other keeps watch? He “engages” the woman, and “asks for personal information.” The “exact intent or motives of the subjects” is unknown, but the ‘Safety Reminders’ at the end of the message instruct, “Never provide personal information to anyone you do not know or trust.”
If ill-advised disclosure were always this simple—suspicious people asking us outright to reveal information about ourselves—perhaps the moral mandate of Shame On You would seem slightly less ridiculous. As it stands, holding individuals primarily responsible for violations of their own privacy expands the operative definition of “disclosure” to absurd extremes. If we post potentially discrediting photos of ourselves, we are guilty of disclosure through posting; if friends (or former lovers) post discrediting photos of us, we are guilty of disclosure through allowing ourselves to be photographed; if we did not know that we were being photographed, we are guilty of disclosure through our failure to assume that we would be photographed and to alter our actions accordingly. If we do not want Facebook to have our names or our phone numbers, we should terminate our friendships with Facebook users, forcibly erase our contact information from those users’ phones, and thereafter give false information to any suspected Facebook users we might encounter. This is untenable, to say the least. The inescapable connection of life in an augmented world means that exclusive control of our personal information, as well as full protection of our personal privacy, is quite simply out of our personal hands.
* * * * *
The second paradigm, Look At Me, at first seems to represent a competing discourse. Its most vocal proponents are executives at social media and other digital media companies, along with assorted technologists and other Silicon Valley ‘digerati’. This paradigm looks at you askance not for putting information on the Internet, but for forgetting that “information wants to be free”—because within this paradigm, disclosure is the new moral imperative. Disclosure is no longer an action that disrupts the guarded default state, but the default state itself; it is not something one chooses or does, but something one is, something one has always-already done. Privacy, on the other hand, is a homespun relic of a bygone era, as droll as notions of a flat earth; it is particularly impractical in the 21st century. After all, who really owns a friendship? “Who owns your face if you go out in public?”
Called both “openness” and “radical transparency,” disclosure-by-default is touted as a social and political panacea; it will promote kindness and tolerance toward others, fuel progress and innovation, create accountability, and bring everyone closer to a better world. Alternatively, clinging to privacy will merely harbor evil people and condemn us to “generic relationships.” The enlightened “don’t believe that privacy is a real issue,” and anyone who maintains otherwise is suspect; as even WikiLeaks activist Jacob Appelbaum (@ioerror) has lamented, privacy is cast “as something that only criminals would want.” Our greatest failings are no longer what we do or what we choose, but who we are; our greatest shame is not in exposure, but in having or being something to hide.
“Transparency,” too, has been popularized into a buzzword (try web-searching “greater transparency,” with quotes). In an age of what Jurgenson and Rey (@nathanjurgenson and @pjrey) have called liquid politics, people demand transparency from institutions in the hope that exposure will encourage the powerful to be honest; in return, institutions offer cryptic policy documents, unreadable reports, and “rabbit hole[s] of links” as transparency simulacra. Yet we continue to push for transparency from corporations and governments alike, which suggests that, to some degree, we do believe in transparency as a means to progressive ends. Perhaps it is not such a stretch, then, for social media companies (and others) to ask that we accept radical transparency for ourselves as well?
The 2011 launch of social network-cum-“identity service” Google+ served to test this theory, but the protracted “#nymwars” controversy that followed seemed not to be the result that Google had anticipated. Although G+ had been positioned as ‘the anti-Facebook’ in advance of its highly anticipated beta launch, within four weeks of going live Google decided not only to enforce a strict ‘real names only’ policy, but to do so through a “massive deletion spree” that quickly grew into a public relations debacle. The mantra in Silicon Valley is, “Google doesn’t get social,” and this time Google managed to (as one blogger put it) “out-zuck the Zuck.” Though sparked by G+, #nymwars did not remain confined to G+ in its scope; nor were older battle lines around privacy and anonymity redrawn for this particular occasion.
On one side of the conflagration, rival data giants Google and Facebook both pushed their versions of ‘personal radical transparency’; they were more-or-less supported by a loose assortment of individuals who either had “nothing to hide,” or who preached the fallacy that, ‘if you have nothing to hide, you have nothing to fear.’ The ‘nothing to hide’ argument in particular has been a perennial favorite for Google; well before the advent of G+ and #nymwars, CEO Eric Schmidt rebuked, “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.” Unsurprisingly, Schmidt’s dismissive response to G+ users with privacy concerns was a smug, “G+ is completely optional. No one is forcing you to use it.” If G+ is supposed to be an ‘identity service,’ it seems only some identities are worthy of being served.
Issues of power are conspicuously absent from Look At Me, and Google makes use of this fact to gloss the profound difference between individuals pressuring a company to be transparent and a company pressuring (or forcing) individuals to be transparent. Google is transparent, for example, in its ad targeting, but allowing users either to opt-out of targeted advertising or to edit the information used in targeting them does not change the fact that Google users are being tracked, and that information about them is being collected. This kind of ‘transparency’ offers users not empowerment, but what Davis (@Jup83) calls “selective visibility”: users can reduce their awareness of being tracked by Google, but can do little (short of complete Google abstention) to stop the tracking itself.
Such ‘transparency’ therefore has little effect on Google in practice; with plenty more fish in the sea (and hundreds of millions of userfish already in its nets), Google has little incentive to care whether any particular user does or does not continue to use Google products. Individual users, on the other hand, can find quitting Google to be a right pain, and this imbalance in the Google/user relationship effectively gives carte blanche to Google policymakers. This is especially problematic with respect to transparency, which has a far greater impact on individual Google users than it does on Google itself.
Google may have ‘corporate personhood’, but persons have identities; in the present moment, many identities still serve to mark people for discrimination, oppression, and persecution, whether they are “evil” or not. Look At Me claims personal radical transparency will solve these problems, yet social media spaces are neither digital utopias nor separate worlds; all regular “*-isms” still apply, and real names don’t stop bullies or trolls in digital spaces any more than they do in conventional spaces. At the height of irony, even ‘real-named’ G+ users who supported Google’s policy still trolled other users who supported pseudonyms.
boyd offers a different assessment of ‘real names’ policies, one that stands in stark contrast to the doctrine of radical transparency. She argues that, far from being empowering, ‘real name’ policies are “an authoritarian assertion of power over vulnerable people.” Geek Feminism hosts an exhaustive list of the many groups and individuals who can be vulnerable in this way, and Audrey Watters (@audreywatters) explains that many people in positions of power do not hesitate to translate their disapproval of even ordinary activities into adverse hiring and firing decisions. Though tech CEOs call for radical transparency, a senior faculty member (for example) condemns would-be junior faculty whose annoying “quirks” turn up online instead of remaining “stifle[d]” and “hidden” during the interview process. As employers begin not only to Google-search job applicants, but also to demand Facebook passwords, belonging to what Erving Goffman calls a “discreditable group” carries not just social but economic consequences.
Rey points out that there are big differences between cyber-libertarianism and cyber-anarchism; if “information” does, in fact, “want to be free,” it does not always “want” to be free for the same reasons. ‘Openness’ does not neutralize preexisting inequality (as the Open Source movement itself demonstrates), whereas forced transparency can be a form of outing with dubious efficacy for encouraging either tolerance or accountability. As boyd, Watters, and Geek Feminism demonstrate, those who call most loudly for radical transparency are neither those who will supposedly receive the greatest benefit from it, nor those who will pay its greatest price. Though some (economically secure, middle-aged, heterosexual, able-bodied) white men acknowledged their privilege and tried to educate others like them about why pseudonyms are important, the loudest calls for ‘real names’ were ultimately the “most privileged and powerful” calling for the increased exposure of marginalized others. Maybe the full realization of an egalitarian utopia would lead more people to choose ‘openness’ or ‘transparency’, or maybe it wouldn’t. But strong-arming more people into ‘openness’ or ‘transparency’ certainly will not lead to an egalitarian utopia; it will only exacerbate existing oppressions.
The #nymwars arguments about ‘personal radical transparency’ reveal that Shame On You and Look At Me are at least as complementary as they are competing. Both preach in the same superior, moralizing tone, even as one lectures about reckless disclosure and the other levels accusations of suspicious concealment. Both would agree to place all blame squarely on the shoulders of individual users, if only they could agree on what is blameworthy. Both serve to naturalize and to normalize the same set of problematic practices, from pervasive surveillance to the commoditization of information.
Most importantly, both turn a blind eye to issues of inequality, identity, and power, and in so doing gloss distinctions that are of critical importance. If these two discourses are as much in cahoots as they are in conflict, what is the composite picture of how we think about privacy, choice, and disclosure? What is the impact of most social media users embracing one paradigm, and most social media designers embracing the other?
As I will show next week, the conception of privacy as an individual matter—whether as a right or as something to be renounced—is the linchpin of the troubling consequences that follow.
Whitney Erin Boesel (@phenatypical) is a graduate student in Sociology at the University of California, Santa Cruz.
Image Credits:
Baby bathtub image from http://www.mommyshorts.com/2011/11/caption-contest.html
Peeping bathroom image from http://gweedopig.com/index.php/2009/11/03/peeping-tom-hides-video-camera-in-christian-store-bathroom/
Dear Victim image from http://news.nationalpost.com/2011/11/24/let-me-run-through-your-dumb-mistakes-teen-burglars-apology-letter-to-victims/
Pushpin image from http://www.pinewswire.net/article/to-warrant-or-not-to-warrant-aclu-police-clash-over-cellphone-location-data/
Don’t rape sign image from http://higherunlearning.files.wordpress.com/2012/05/dsc7600r.jpg?w=580&h=510
Anti-woman image from http://www.feroniaproject.org/fired-for-using-contraception-maybe-in-arizona/
Naked wedding photo from http://www.mccullagh.org/photo/1ds-10/burning-man-cathedral-wedding
Cop image from http://www.quickmeme.com/meme/6ajf/
G+ vs. Facebook comic: by Randall Munroe, from xkcd: http://xkcd.com/918/
Transparency word cluster from http://digiphile.wordpress.com/2010/03/28/transparency-camp-2010-government-transparency-open-data-and-coffee/
Transparent sea creature image from http://halfelf.org/2012/risk-vs-transparency/
Transparent grenade image from http://www.creativeapplications.net/objects/the-transparency-grenade-by-julian-oliver-design-fiction-for-leaking-data/
Transparency and toast image from http://time2morph.files.wordpress.com/2012/01/transparent-toaster.jpg?w=300&h=234
[i] Note that “publicly available” is tricky here: Girls Around Me users were ported over to Facebook to view the women’s profiles, and so could also have accessed non-public information if they happened to view profiles of women with whom they had friends or networks in common.
[ii] Alternatively, my difficulty finding posts written in support of Google’s policy may simply reflect Google’s ‘personalization’ of my search results, both then and now.
Last month at TtW2012, a panel titled “Logging off and Disconnection” considered how and why some people choose to restrict (or even terminate) their participation in digital social life—and in doing so raised the question, is it truly possible to log off? Taken together, the four talks by Jenny Davis (@Jup83), Jessica Roberts, Laura Portwood-Stacer (@lportwoodstacer), and Jessica Vitak (@jvitak) suggested that, while most people express some degree of ambivalence about social media and other digital social technologies, the majority of digital social technology users find the burdens and anxieties of participating in digital social life to be vastly preferable to the burdens and anxieties that accompany not participating. The implied answer is therefore NO: though whether to use social media and digital social technologies remains a choice (in theory), the choice not to use these technologies is no longer a practicable option for a number of people.
In the three-part essay to follow, I first extend this argument by considering that it may be technically impossible for anyone, even social media rejecters and abstainers, to disconnect completely from social media and other digital social technologies (to which I will refer throughout simply as ‘digital social technologies’). Even if we choose not to connect directly to digital social technologies, we remain connected to them through our ‘conventional’ or ‘analogue’ social networks. Consequently, decisions about our presence and participation in digital social life are made not only by us, but also by an expanding network of others. In the second section, I examine two prevailing discourses of privacy, and explore the ways in which each fails to account for the contingencies of life in augmented realities. Though these discourses are in some ways diametrically opposed, each serves to reinforce not only radical individualist framings of privacy, but also existing inequalities and norms of visibility. In the final section, I argue that current notions of both “privacy” and “choice” need to be reconceptualized in ways that adequately take into account the increasing digital augmentation of everyday life. We need to see privacy both as a collective condition and as a collective responsibility, something that must be honored and respected as much as guarded and protected.
Part I: Distributed Agency and the Myth of Autonomy
For the skeptical reader in particular, I want to begin by highlighting that the effects of participation and non-participation in digital sociality alike are neither uniform nor predetermined, and that both are likely to vary considerably across different social positions and contexts. An illustrative (if somewhat extreme) example is the elderly: Alexandra Samuels caricatures the absurdity of fretting over seniors who refuse online engagement, and my own socially active but offline-only grandmother makes a great case study in successful living without digital social technology. At 84 years old, my grandmother is healthy and can get around independently; she lives in a seniors-only community of a few thousand adults (nearly all of whom are offline-only as well), and a number of her neighbors have become her good friends. She has several children, grandchildren, and great-grandchildren who live less than an hour away, and who call and visit regularly. As a financially stable retiree, she can say with confidence that there will be no job-hunting in her future; her surviving siblings still send letters, and her adult children print out digital family photos to show her. For these reasons and others, it would be hard to make the case that either she or any one of her similarly situated friends suffers from digital deprivation.
In contrast, the “Logging off and Disconnection” panel highlights how the picture of offline-only living shifts when some of the other factors I list above change. Whereas my grandmother has a number of friends with whom she spends time (and who, like her, do not use digital social technologies), Davis describes the isolation that digital abstainers experience when many of the friends with whom they spend time do use digital social technologies. Much to their dismay, non-participating friends of social media enthusiasts in particular can find themselves excluded from both offline and online interaction within their own social groups. Similarly, Roberts finds that even 24 hours of “logging off” can be impossible for students if their close friends, family members, or significant others expect them to be constantly (digitally) available. In these contexts, it becomes difficult to refuse digital engagement without seeming also to refuse obligations of care. Nor is what I will call abstention-related atrophy limited to relationships with friends and family members; professional relationships and even career trajectories can suffer as well. Vitak points out that, for job-seekers, the much-maligned proliferation of ‘weak ties’ that social media has been accused of fostering is a greater asset for gaining employment than is a smaller assortment of ‘strong ties.’ Modern life has become sufficiently saturated with social media to support the use of what Portwood-Stacer calls its “conspicuous non-consumption” as a status marker: in the United States, where 96.7 percent of households have at least one television, “I’m not on Facebook” is the new “I don’t even own a TV.” That even a few people read the purposeful rejection of social media as a signifier of privilege is an implicit demonstration of how high the cost of abstaining from social media has become.
Conversations about logging off or disconnecting have continued in the weeks since TtW2012. Most recently, PJ Rey (@pjrey) makes the case that social media is a non-optional system; because societies and technologies are always informing and affecting each other, “we can’t escape social media any more than we can escape society itself.” This means that the extent to which we can opt out is limited; we can choose not to use Facebook, for example, but we can no longer choose to live in a world in which no one else uses Facebook (whether for planning parties or organizing protests). Like Davis, Rey argues that “conscientious objectors of the digital age” therefore risk losing social capital in a number of ways. I would like to suggest, however, that even those who are “secure enough” to quit social media and other digital social technologies cannot separate from them fully, nor can so-called “Facebook virgins” remain pure abstainers. Rejecters and abstainers continue to live within the same socio-technical system as adopters and everyone else, and therefore continue to affect and to be affected by digital social technology indirectly; they also continue to leave digital traces through the actions of other people. As I elaborate below, not connecting and not being connected are two very different things; we are always connected to digital social technologies, whether we are connecting to them or not. A number of digital social technology companies capitalize on this fact, and in so doing amplify the extent to which digital agency is increasingly distributed rather than individualized.
In this section, I use Facebook as a familiar example to illustrate the near-impossibility of erasing digital traces of oneself more generally. Many of the surveillance practices that follow are not unique to Facebook, but the difficulty of achieving a full disengagement from Facebook can serve as an indicator of how much more difficult a full disengagement from all digital social technology would be. First, consider some of the issues that face people who actually have Facebook accounts (at minimum, a username and password). Facebook has tracked its users’ web behavior even when they are logged out of Facebook; the “fixed” version of the site’s cookies still tracks potentially identifying information after users log out, and these same cookies are deployed whenever anyone (even a non-user) views a Facebook page. Last year, a 24-year-old law student named Max Schrems discovered that Facebook retains a wide array of profile data that users themselves have deleted; Schrems subsequently launched 22 unique complaints, started an initiative called Europe vs. Facebook, and earned Facebook’s Ireland offices an audit.
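For readers who want the cookie mechanics spelled out, here is a minimal sketch, in Python, of how a persistent identifier cookie can tie logged-out (and even account-less) browsing to a single browser over time. To be clear: every name in it (`browser_id`, `log_pageview`, and so on) is a hypothetical of my own, offered only to make the logic legible; none of this is Facebook’s actual code.

```python
# Illustrative sketch only: how a persistent cookie links page views
# across login states. All names here are hypothetical.

pageview_log: dict[str, list[str]] = {}   # browser_id -> pages viewed
account_links: dict[str, str] = {}        # browser_id -> account, once seen together

def log_pageview(cookies: dict[str, str], url: str) -> None:
    """Record a page view against a persistent browser ID.

    This works whether or not a session cookie is present --
    that is, whether or not the visitor is logged in (or has
    an account at all).
    """
    browser_id = cookies.get("browser_id")  # set long ago; survives logout
    if browser_id is None:
        return  # a real tracker would set the cookie here instead
    session_user = cookies.get("session")
    if session_user is not None:
        # One logged-in visit is enough to tie the ID to an account.
        account_links[browser_id] = session_user
    pageview_log.setdefault(browser_id, []).append(url)

# A logged-in visit links browser "abc123" to an account...
log_pageview({"browser_id": "abc123", "session": "user-42"}, "/home")
# ...and later, logged-out visits accrue to the same, now-identified ID.
log_pageview({"browser_id": "abc123"}, "/some-page-with-a-like-button")
```

The point of the sketch is simply that, once an identifier persists across login states, ‘logging out’ changes what the user sees, not what the server can record.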
In one particular complaint, Schrems alleges that Facebook not only retains data it should have deleted, but also builds “shadow profiles” of both users and non-users. These shadow profiles contain information that the profiled individuals themselves did not choose to share with Facebook. For a Facebook user, a shadow profile could include information about any pages she has viewed that have “Like” buttons on them, whether she has ever “Liked” anything or not. User and non-user shadow profiles alike contain what I call second-hand data, or information obtained about individuals through other individuals’ interactions with an app or website. Facebook harvests second-hand data about users’ friends, acquaintances, and associates when users synchronize their phones with Facebook, import their contact lists from other email or messaging accounts, or simply search Facebook for individual names or email addresses. In each case, Facebook acquires and curates information that pertains to individuals other than those from whom the information is obtained.
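The logic of second-hand data is easy to miss in the abstract, so here is a toy sketch of contact-import harvesting. Again, the data structures and names (`import_contacts`, a profile store keyed by email) are assumptions of my own for illustration, not a description of Facebook’s actual systems.

```python
# Toy sketch of second-hand data collection via contact import.
# Hypothetical names; not any real company's code.

profiles: dict[str, dict] = {}  # keyed by email address

def import_contacts(uploading_user: str, contacts: list[dict]) -> None:
    """Merge one user's uploaded address book into the profile store.

    Contacts who never signed up still get a record -- a 'shadow
    profile' built entirely from someone else's disclosure.
    """
    for contact in contacts:
        record = profiles.setdefault(
            contact["email"], {"registered": False, "sources": []}
        )
        # Second-hand data: supplied about this person by a third party.
        record.setdefault("name", contact.get("name"))
        record.setdefault("phone", contact.get("phone"))
        record["sources"].append(uploading_user)

# One user's phone sync discloses information about people who chose nothing:
import_contacts("user-42", [
    {"email": "abstainer@example.com", "name": "A. Abstainer", "phone": "555-0100"},
])
print(profiles["abstainer@example.com"]["sources"])  # -> ['user-42']
```

Note who the agent is in this sketch: the person described in the record never interacts with the system at all.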
Second-hand data collection on and through Facebook is not limited to the creation of shadow profiles, however. As a recent article elaborated, Facebook’s current photo tagging system enables and encourages users to disclose a wealth of information not only about themselves, but also about the people they tag in posted photos. (Though not mentioned in the piece, the “tag suggestions” provided by Facebook’s facial recognition software have made photo tagging nearly effortless for users who post photos, while removing tags now involves a cumbersome five-click process for each tag that a pictured user wants removed.) Recall, too, that other companies collect second-hand data through Facebook each time a Facebook user authorizes a third-party app; by default, the app can ‘see’ everything the user who authorized it can see on each of that user’s friends’ profiles (the same holds true for games and for websites that allow users to log in with their Facebook accounts). Users who dig through Facebook’s privacy settings can prevent apps from accessing some of their information, but only by repeating the tedious, time-consuming process required to block a specific app for each and every app that any one of their Facebook ‘friends’ might have authorized (the irritation-price of doing so clearly aims to guide users away from this sort of behavior). Certain pieces of information, however—a user’s name, profile picture, gender, network memberships, username, user id, and ‘friends’ list—remain accessible to Facebook apps no matter what; Facebook states that this makes one’s friends’ experiences on the site (if not one’s own) “better and more social.” Users do have the ‘nuclear option’ of turning off all apps, though this means they cannot use any apps themselves; their information also still remains available for collection through their friends’ other Facebook-related activities.
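The permission asymmetry described above can also be sketched. In the toy model below, an app authorized by one user can read a friend’s profile through that authorization, and a short list of fields stays readable even when the friend has blocked the app; the field names mirror the list above, but the access rules themselves are my simplified assumptions, not Facebook’s actual code.

```python
# Toy model of third-party app access through a friend's authorization.
# Simplified, assumed rules for illustration only.

ALWAYS_AVAILABLE = {"name", "profile_picture", "gender", "networks",
                    "username", "user_id", "friends"}

def fields_app_can_read(profile: dict, friend_authorized: bool,
                        blocked_by_owner: bool) -> dict:
    """Return the slice of a profile an app sees via a friend's consent."""
    if friend_authorized and not blocked_by_owner:
        # By default: everything the authorizing friend can see.
        return dict(profile)
    # Even a block (or turning apps off) leaves the baseline fields exposed.
    return {k: v for k, v in profile.items() if k in ALWAYS_AVAILABLE}

alice = {"name": "Alice", "user_id": "1001", "friends": ["Bob"],
         "relationship_status": "it's complicated"}

# Bob authorizes a quiz app; Alice blocked it (or opted out of apps entirely):
print(fields_app_can_read(alice, friend_authorized=True, blocked_by_owner=True))
# -> {'name': 'Alice', 'user_id': '1001', 'friends': ['Bob']}
```

Even in this stripped-down model, the person whose data flows out is not the person who clicked ‘authorize.’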
Facebook representatives have denied any wrongdoing, denied the existence of shadow profiles per se, and maintained that there is nothing non-standard about the company’s data collection practices. Nonetheless, even the possibility of shadow profiles raises a complicated question about where to draw the line between information that individuals ‘choose willingly’ to share (and so, the reasoning goes, must assume will end up on the Internet), and “information that accumulates simply by existing.” The difficulty of making this determination reflects not only the tensions between prevailing privacy discourses, but also the growing ruptures between the ways in which we conceptualize privacy and the increasingly augmented world in which we live.
As a headline in The Atlantic put it recently, “On Facebook, Your Privacy is Your Friends’ Privacy”—but what does that mean? How should we weigh our friends’ desires against our own, and which desires count? How are we to anticipate the choices our friends might make, and on whom does the responsibility fall to choose correctly? The problem is that we tend to think of privacy as a matter of individual control and concern, even though privacy—however we define it—is now (and has always been) both enhanced and eroded by networks of others. In a society that places so much emphasis on radical individualism, we are ill-prepared to grapple with the rippling and often unintended consequences that our actions can have for others; we are similarly disinclined to look beyond the level of individual actions in asking why such consequences play out in the ways that they do.
‘Simply existing’ does generate more information than it did two generations ago, in part because so many different corporations and institutions are attempting to capitalize on the potential for targeted data collection afforded by a growing number of digital technologies. At the same time, surveillance of individual behavior for commercial purposes is nothing new, and Facebook is hardly the only company building data profiles to which the profiled individuals themselves have incomplete access (if any access at all). What is comparatively new about Facebook-style surveillance in social media is the degree to which disclosure of our personal information has become a product not only of choices we make (knowingly or unknowingly), but also of choices made by our family members, friends, acquaintances, and professional contacts. Put less eloquently: if Facebook were an STI, it would be one you could contract whenever any of your old classmates have unprotected sex. Even one’s own abstinence is no longer effective protection against acquiring a so-called ‘data double’ or “data self,” yet we still think about privacy and disclosure as matters of individual choice and responsibility. If your desire is to disconnect completely, the onus is on you to keep any and all information about yourself—even your name—from anyone who uses Facebook, or who might use anything like Facebook in the future.
If we dispense with digital dualism—the idea that the ‘virtual,’ ‘digital,’ or ‘online’ world is somehow separate and distinct from the ‘real,’ ‘physical,’ or ‘face to face’ world—it becomes apparent that not connecting to digital social technologies and not being connected to digital social technologies are two different things. Whether as a show of conspicuous non-consumption, an act of atonement and catharsis (as portrayed in Kelsey Brannan’s [@KelsBran] film Over & Out), or for other reasons entirely, we can choose to accept the social risks of deleting our social media profiles, dispensing with our gadgetry, and no longer connecting to others through digital means. Yet whether we feel liberated, isolated, or smugly self-satisfied in doing so, we have not exited the ‘virtual world’; we remain situated within the same augmented reality, connected to each other and to the only world available through flows that are both physical and digital. I email photographs, my mother prints them out, and my grandmother hangs them in frames on her wall; a social media refuser meets her own searchable reflection in traces of book reviews, grant awards, department listings, and RateMyProfessors.com; a nearby friend sees you check-in early at your office, and drops by to surprise you with much needed coffee. A news story is broken and researched via Twitter, circulated in a newspaper, amplified by a TV documentary, and referenced in a book that someone writes about on a blog. Whether the interface at which we connect is screen, skin, or something else, the digital and physical spaces in which we live are always already enmeshed. Ceasing to connect at one particular type of interface does not change this.
In stating that connection is inescapable, I do not mean to suggest that all patterns of connection are equitable or equivalent in form, function, or impact. Connection does not operate independently of variables such as race, class, gender, ability, or sexual orientation; digital augmentation is not a panacea for oppression, and it has not magically eliminated social and structural inequality, nor will it do so to birth a technoutopian future. My intent here in focusing on broader themes is not to diminish the importance of these differences, but to highlight three key points about digital social technology in an augmented world:
1.) First, our individual choices to use or reject particular digital social technologies are structured not only by cultural, economic, and technological factors, but also by our social, emotional, and professional ties to other people;
2.) Second, regardless of how much or how little we choose to use digital social technology, there are more digital traces of us than we are able to access or to remove;
3.) Third, even if we choose not to participate in digital social life ourselves, the participation of people we know still leaves digital traces of us. We are always connected to digital social technologies, whether we are connecting through them or not.
Next week, I’ll continue the conversation by examining the ways that this inescapable connection serves to complicate two prevailing discourses of privacy, both of which assume autonomous individuals as subjects and, in so doing, mask larger issues of power and inequality.
Whitney Erin Boesel (@phenatypical) is a graduate student in Sociology at the University of California, Santa Cruz.