I have been thinking through ideas on this blog for my dissertation project about how we document ourselves on social media. I recently posted some thoughts on rethinking privacy and publicity, and, earlier, a long essay on the rise of faux-vintage Hipstamatic and Instagram photos. There, I discussed the “camera eye” as a metaphor for how we are being trained to view our present as always its potential documentation in the form of a tweet, photo, status update, etc. (what I call “documentary vision”). The photographer knows well that after taking many pictures one’s eye becomes like the viewfinder: always viewing the world through the logic of the camera mechanism via framing, lighting, depth of field, focus, movement and so on. Even without the camera in hand, the world becomes transformed into a potential photograph. And with social media we have become like the photographer: our brains are always looking for moments where the ephemeral blur of lived experience might best be translated into its documented form.

I would like to expand on this point by going back a little further in the history of documentation technologies to the 17th-century Claude glass (pictured above) to provide insight into how we position ourselves in relation to the world around us in the age of social media.

Do I have to make the case that self-documentation has expanded with social media? As I type this in a local bar, I know that I can “check-in” on Foursquare, tweet a funny one-liner just overheard, post a quick status update on Facebook letting everyone know that I am working from a bar, snap an interesting photo (or video) of my drink and this glowing screen, and, while I’m at it, write a short review of the place on Yelp. And I can do all of this on my phone in a matter of minutes.

While self-documentation is nothing new, the level and ubiquity of documentation possibilities afforded by social media, as well as the normalcy with which we engage in them, surely is. And, most importantly, sites like Facebook, for the first time, guarantee an audience. Thus, social media provides both the opportunity and the motivation to self-document as never before.

What does any of this have to do with the Claude glass, a little-known 17th century mirror-device?

A Claude glass.

The Claude glass is (usually) a small, (usually) convex mirror that is (usually) tinted grey (the darkened color earned it the nickname “black mirror”). The device assisted 17th- and 18th-century “picturesque” landscape painters, especially those attempting to emulate the popular paintings of Claude Lorrain (after whom the device is named). Standing with their back to the landscape and facing the Claude glass, viewers are presented with a mirror-image of the scenery behind them. The mirror’s convex shape compresses more scenery into a single focal point in the reflection (which was considered aesthetically pleasing in the “picturesque” genre), and its smoky grey tint shifts the tones of the reflection toward those more easily reproduced with the limited color palette painters employ.

Not a Hipstamatic image; this is a Claude Lorrain painting.

The Claude glass also came to be used by more than painters. Wealthy English vacationers took countryside trips in search of “picturesque” landscapes reminiscent of the famous paintings. These tourists would often carry a Claude glass in their pocket so that they could turn around and view the world, slightly convexed and re-colored, as if it were a painting. They would often use the device to make a sketch as well, wanting to return home with some documentation of the beauty witnessed on vacation. By demonstrating competence in what counted as picturesque, the wealthy displayed a “superior” cultured taste that set them apart from the lower and middle classes as well as the newly rich. This trend is well documented in Arnaud Maillet’s book The Claude Glass: Use and Meaning of the Black Mirror in Western Art (2004).

The term “picturesque” as the name for this trend is important to clarify. It refers to that which is worthy of emulation and documentation; that which resembles or is suitable for a picture (which, in the 17th and 18th centuries, meant a painting). Importantly, that which is “picturesque” is typically associated with beauty, even though many scenes worthy of pictorial documentation are not necessarily beautiful (a horrific war-scene, for instance). Thus, the Claude glass was sort of like the Hipstamatic or Instagram of its day: it presented lived reality as more beautiful and already in its documented form (be it a painting or a faux-vintage photograph). Indeed, there are similarities in style between Claude Lorrain’s paintings, the images seen in the Claude glass, and the effects that the faux-vintage photo filters employ.

Digitally Picturesque
The Claude glass presents us with an image of the tourist turned directly away from the very thing they have traveled to see. Instead, the favored vantage point is reality already rendered as an idealized document. Both the Claude glass and the Hipstamatic photo present this type of so-called nostalgia for the present. I think this is a useful metaphor for how we self-document on social media. Let’s split this parallel between the Claude glass and Facebook into three separate points:

First, the Claude glass metaphor jibes with the notion of the “camera eye” that I have used previously. They are both examples of what I call “documentary vision,” that is, the habit of viewing reality in the present as always a potential document (often to be posted across social media). We are like the 18th-century tourist in that we search for the “picturesque” in our world to demonstrate that we are living a life worthy of documentation, be it literally in a picture, or in a tweet, status update, etc.

Second, the Claude glass model presents the Facebook user as turned away from the lived experience they are diligently documenting, as opposed to the “camera eye” metaphor of facing forward towards reality. One worry surrounding social media is that our fixation on documenting our lives might hinder our ability to live in the moment. Do those fixated on shooting photos and video of a concert miss out on the live performance to some degree? Do those who travel with a camera constantly in hand sacrifice experiencing the new locale in the here-and-now? Does our increasing fixation on self-documentation position us, like the Claude glass user, precisely away from the lived experience we are documenting?

Third, the Claude glass is affiliated with the “picturesque” movement that equated beauty with being worthy of pictorial representation. The Claude glass model would describe our use of social media as the attempt to present our lives as more beautiful and interesting than they really are (a reality we have turned our backs on). When our reality is too banal we might reshape the image of ourselves just a bit to make our Friday night seem more exciting, our insights more witty, our homes better furnished, the food we cook more delicious, our film selections more exotic, our pets more adorable, and so on (which creates what Jenna Wortham calls “the fear of missing out”).

To conclude, these last two points demonstrate how the Claude glass model of self-documentation on social media differs from “the camera eye.” Both the camera and the Claude glass share the effect of developing a view of the world as one of constant potential documentation. The difference is that the Claude glass metaphor presents the social media user as turned away from lived experience in order to present it as more picturesque than it really is. Thus, the questions I am still working on, and those I would like to pose to you, become: Does this capture the reality of the Facebook user? Are we missing out on reality as we attempt to document it? Are we portraying ourselves and our lives as better than they actually are?

Header image source: http://carterseddon.com/claude1.html

Source for these last two images: http://www.re-picture.info/

 

PJ Rey and I have been following the 2012 presidential campaign on this blog with social media in mind. We watch as President Obama and the Republican contenders try to look social-media-y to garner dollars and votes. However, their social media use has thus far been more astroturfing than grassroots: more social media photo ops meant to appear tech-savvy than uses of the web to fundamentally make politics something that grows from the bottom up. Presidential politics remain far more like Britannica than Wikipedia.

But this might all change, at least according to Thomas Friedman yesterday in the New York Times. He describes Americans Elect, a non-profit attempting to build an entire presidential campaign from the ground up. This might be our first glimpse of an open and social presidential web-based campaign. From their website,

Americans Elect is the first-ever open nominating process. We’re using the Internet to give every single voter — Democrat, Republican or independent — the power to nominate a presidential ticket in 2012. The people will choose the issues. The people will choose the candidates. And in a secure, online convention next June, the people will make history by putting their choice on the ballot in every state.

If FDR was the radio president and JFK the television president, Obama is not the social media president. Yes, Obama participated in a Twitter town hall and a Facebook summit in an attempt to seem hip to social media. However, as I wrote before, both of these events, as well as much of Obama’s social media presence, come from the top down, in stark contrast to the social media ethic of grassroots communication from the bottom up. Obama has largely used the Internet as if it were a television: a one-way broadcast medium. In fact, this leads me to the idea that perhaps executive power and social media are antithetical in the first place.

Much the same could be said for the Republican candidates, though they might be a little ahead of the Democrats in using Twitter. Obama has responded by beginning to type his own tweets.

All of this stands in contrast to the Americans Elect mission to bypass the top-down structures of the existing political parties. The Internet allows for the possibility of a Wikipedia president, one who reflects the priorities and concerns of a crowd that also determines the rules of how the campaign operates and spends its money. As Friedman concludes,

Write it down: Americans Elect. What Amazon.com did to books, what the blogosphere did to newspapers, what the iPod did to music, what drugstore.com did to pharmacies, Americans Elect plans to do to the two-party duopoly that has dominated American political life — remove the barriers to real competition, flatten the incumbents and let the people in. Watch out.

I share Friedman’s attitude that the old political machine might face serious competition from the web. It may not be in 2012 and it may not be by Americans Elect, but it very well may occur. We can now better visualize how the web may fundamentally change politics as much as it has changed other institutions from publishing to music to pornography.

 

Check out The Big Ideas podcast over at The Guardian UK today for a quick discussion of Marshall McLuhan’s work. The CBC has a very good article. Also, check out some Cyborgology posts on McLuhan, including his 1969 Playboy interview, Shepard Fairey’s redesign of The Medium is the Message and, perhaps most interesting, a website cataloging McLuhan’s video appearances.

How relevant is McLuhan today? In which disciplines? How about outside of academia?


The orange represents the intensity of Flickr images taken and geotagged to a particular area. The blue is Twitter use. Looking at New York City above, we see that people tweet from different places than they photograph. For example, tourists photograph some areas while people tweet more from work and home.
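To make that encoding concrete, here is a minimal sketch of how such an overlay can be drawn: two sets of geotagged points, one for photos and one for tweets, plotted in orange and blue at low opacity so that denser areas glow brighter. The coordinates below are randomly generated placeholders rather than actual Flickr or Twitter data, and the use of matplotlib is my assumption, not how these particular maps were produced.

```python
# A rough illustrative sketch (not the original map-maker's code): overlay two
# geotagged point layers so that density shows up as color intensity.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Placeholder clusters standing in for "tourist" photo spots and the more
# dispersed home/work areas people tweet from (purely invented data).
photo_points = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(5000, 2))
tweet_points = rng.normal(loc=[0.4, -0.3], scale=0.6, size=(5000, 2))

fig, ax = plt.subplots(figsize=(6, 6), facecolor="black")
ax.set_facecolor("black")

# Low alpha means overlapping points accumulate, so dense areas appear brighter.
ax.scatter(photo_points[:, 0], photo_points[:, 1], s=1, c="orange", alpha=0.05)
ax.scatter(tweet_points[:, 0], tweet_points[:, 1], s=1, c="deepskyblue", alpha=0.05)

ax.set_aspect("equal")
ax.axis("off")
fig.savefig("photo_tweet_overlay.png", dpi=200, bbox_inches="tight")
```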

More images after the jump. Via

Tokyo


This Toyota commercial is narrated by a young woman who gets her parents on Facebook because they supposedly are not social enough. While she scoffs at how relatively few “friends” her parents have, the parents are shown to be out living by mountain-biking some decidedly offline trails. The daughter remains confidently transfixed and anchored to the digital world of her laptop screen.

I spend lots of time on this blog pointing out what I call “digital dualism,” the fallacy of viewing the physical and digital as separate worlds (think The Matrix). Instead, the position that I and others on this blog favor is what we call “augmented reality,” the realization that our world is one where atoms and bits come together. Read more about this idea if you want.

Enter Toyota. They are playing off the pesky social media misconception that people use Facebook instead of doing things offline. Research consistently disproves this zero-sum, one-or-the-other fallacy by demonstrating that those who use social media have more offline connections. They are going out and doing more. It makes sense to anyone who uses the site: what you do and who you talk to online has everything to do with what you do and who you talk to offline. That’s augmented reality.

But that does not stop news journalists, filmmakers, and advertisers from furthering this fallacy. See Zeynep Tufekci absolutely dismantle New York Times executive editor Bill Keller on this issue over Twitter (and check out some of the links to the research she posts). I’ve critiqued the film The Social Network for playing on this fallacy (Sorkin doesn’t use Facebook, and it showed in the film’s misunderstanding of the site). And now we have Toyota propagating the image of the Facebook user as one who lethargically trades offline interaction for false online connectedness. Digital dualism continues to persist.

[Unsurprisingly, Toyota’s critique of social media is disingenuous: they’ve recently created their own social networking service.]

Via The Machine Starts.

Chris Baraniuk wrote an interesting piece at the blog The Machine Starts a few hours ago and I wanted to offer a comment. I agree with much of the analysis about so-called “Facebook Narcissism,” but what I find particularly interesting is how one fundamental assumption, the existence of a true self, drastically alters the conclusions we might draw.

Baraniuk discusses how social media sites like Facebook are designed to promote more sharing by creating a generally positive vibe. Indeed, Facebook has stated explicitly that it does not have a “dislike” button because it wants the site to be a fun place to hang out. In addition to this positively-biased valence, Facebook makes social interaction calculable, which also serves to create an atmosphere that values and encourages more sharing. For the site, more sharing means more profits. For the user, more sharing about our lives creates an inward gaze that could be described as narcissism.

Lasch’s famous study The Culture of Narcissism argued that an increase in the size and complexity of culture makes us, individually, feel small and insignificant. Our reaction is to turn inward and announce as loudly as we can that we exist: I am here and I am important. This existential crisis also plays out on Facebook. For many, especially those whose peers have a significant social media presence, to not be online is to be invisible. Thus, we document our lives, ideas, behaviors, friendships, and so on to demonstrate that we exist.

Baraniuk describes the particular character of this narcissistic impulse, especially when taken to the extreme, as trading one’s “true” self for a disingenuously posed brand.

And this is my fundamental disagreement with Baraniuk’s analysis: the assumption, made without being discussed explicitly, is that opposed to the narcissistic self there is a true self (one that is being lost). While my goal in this comment is not to convince anyone that there is or is not a true self, I wish to point out that one’s assumption on this matter is centrally important to what conclusions will be drawn. One’s stance must be made central instead of hidden.

We might look to thinkers such as Judith Butler, Michel Foucault, or Erving Goffman who, among others, all describe self-presentation as a performance in various ways. The branding of the self existed long before the Internet and continues to exist offline as well. Performance is something we all do, not a pathology (as Baraniuk’s post hints). Facebook only makes visible what these thinkers have long argued we all do all the time. Thus, the narcissists Baraniuk describes become reconceptualized as those who are not clever or savvy enough in hiding their own performativity. What has been pathologized as a disorder is the failure to convincingly pass off one’s fiction as fact. And this pathologization implicitly assumes that there is some fact, some “true” self, an authentic being (a notion whose history includes the Christian concept of the soul).

Agreeing on the analysis but holding a different fundamental view of the “authentic” self provides the two of us with precisely opposite conclusions: Baraniuk argues that this trend of narcissism as it plays out on Facebook “obscures” the true self, whereas I think it does exactly the opposite. Narcissism as it plays out on social media forces users to encounter, confess, and become hyper-fixated on themselves, always with the intention of passing off their performative fictions as fact. The Foucauldian “so what?” that follows is that this hyper-fixation on the self, indeed the very invention of the self, serves to keep people self-policing and self-regulating. To assume the self is natural precludes the sort of identity play that is possible and possibly transgressive.

This essay, like the one I posted last month on faux-vintage photography, is me hashing out ideas as part of my larger dissertation project on self-documentation and social media. Part I is found here.

A barrage of media stories is professing the “Death of Anonymity,” the “End of Forgetting” and an “Era of Omniscience.” They are screaming a sensationalism that is part of a larger project to drum up fear about how “public” we are when using social media. While there are indeed risks involved with using social media, these articles engage in a risky hyperbole that I will try to counter-balance here.

Part I of this essay rethought claims of hyper-publicity by theoretically reorienting the concept of publicity itself. Using theorists like Bataille and Baudrillard, I argue that being public is not the end of privacy but instead has everything to do with it. Social media is more like a fan dance: a game of reveal and conceal. Today, I will further take to task our collective tendency to overstate publicity in the age of social media. Sensationalizing the risks of “living in public” perpetuates the stigma around an imperfect social media presence, intensifying the very risk we hope to avoid. But first, let’s look at examples of this sensationalism.

I. Media Sensationalism
Pointing out the dangers of living publicly online is an important task, but sensationalizing this risk is all too common. Indeed, the media has a long history of sensationalizing all sorts of risks, creating fear to drum up ratings, sales, clicks, and page-views. From sexting to cyberbullying to the loss of “deep” learning, political activism, and “real” social connections, I’ve written many times about how the media has found social media to be a particularly fertile space for exploiting fear for profit.

Alternatively, a more accurate description of publicity on social media, and the risks associated with it, would include (1) thoroughly describing the risk, (2) providing some notion of the approximate probability of that risk occurring, and (3) mentioning the potential positives of living in public.

To be very clear: the risks surrounding social media use are real. Information about us online is expanding, whether or not we posted it ourselves. Surveillance is more pervasive, decentralized, networked, and real-time. This may lead to identity theft, losing one’s job or partner, or being publicly shamed for embarrassing mistakes. Indeed, there may be what danah boyd (2010, .pdf) calls “invisible audiences” online that we are not aware of and that simply are not as pervasive or problematic offline. And, as I have written elsewhere, the privacy risks are not evenly distributed, but instead are faced more by those most vulnerable (for example, racy pictures are forgiven more quickly for men than for women).

What a non-sensationalist description of these risks would also provide is some hint at the positives of living in public online: Facebook use is associated with having more close connections, the honest/public nature of Facebook might have played a very important role in the recent uprisings in the Middle East and in North Africa, and White (2003) describes the way that displaying oneself online can be an act of taking control over the gaze, suggesting that wanting to be seen can bring great pleasure.

We could postulate more positives, but my purpose here is to describe the prevailing trend to hyperbolize the negatives without mentioning their likelihood of occurrence or the existence of positives. Examples of this kind of sensationalism are very easy to find:

Last month, the New York Times ran an article by Brian Stelter proclaiming that “the Web unmasks everyone” and is where “anonymity dies.” The article goes on to argue that, thanks to Facebook, we are witnessing an end of privacy because the Internet never forgets.

Another New York Times article, published a year ago by Jeffrey Rosen and called “The Web Means the End of Forgetting,” questions

how best to live our lives in a world where the Internet records everything and forgets nothing — where every online photo, status update, Twitter post and blog entry by and about us can be stored forever.

He states that the

fact that the Internet never seems to forget is threatening, at an almost existential level, our ability to control our identities.

The consequence is that,

for a great many people, the permanent memory bank of the Web increasingly means there are no second chances — no opportunities to escape a scarlet letter in your digital past. Now the worst thing you’ve done is often the first thing everyone knows about you.

A more recent New York Times piece ran with the title “Is Anonymity Dead?” And in 2008 they ran an article titled “The End of Online Anonymity.”

And there is Zygmunt Bauman’s recent article in the Guardian titled “is this the end of anonymity?” Bauman states that the web is ushering in an “end of invisibility and autonomy”:

Everything private is now done, potentially, in public – and is potentially available to public consumption; and remains available for the duration, till the end of time

Another article in the Guardian quotes a psychotherapist as stating,

because of digital technology, society’s ability to forget has become suspended, replaced by perfect memory.

And the blogosphere likes to run with this exaggeration, too. For example, this post states that we’ve entered into an “era of omniscience” and quotes the Bible: “there is nothing covered that shall not be revealed; and hid that shall not be known.”

I could go on, but we have enough here to start calling out this sensationalism: Anonymity is declining, but it is not dead. The web forgets less, but it still sometimes forgets. We are living in an era of more knowledge, not an era of omniscience. The web unmasks some people sometimes, but it does not unmask everyone. The worst thing you’ve done might be the first thing people know about you, but probably not. There is less invisibility, but invisibility is not dead. Society’s memory has become better, but it is not perfect. Much is increasingly revealed, but much remains concealed (as I argued in Part I).

These hyperbolic statements center on the idea that digital content is immortal and searchable by millions of others. While (mostly) true, it should also be noted that the vast majority of digital content is seen by virtually no one. Maybe we just like the thought that everything we do is being recorded for all time, but the reality is that very few people are looking at your latest tweet or photo. Sorry.

Lots of fear is drummed up by the possibility of having your life ruined by a poorly worded status update. But have you ever tried searching for your own or others’ old Facebook status updates? Good luck. Try searching for your old tweets. One point I have made previously is that the very immortality of digital content is precisely what causes its relative obscurity. Let’s be more realistic when describing the risks associated with using social media by also discussing probability and reward. Why? Because sensationalizing risk is itself risky.

II. Why Sensationalism Is Problematic
The major issue with media sensationalism, beyond simply being incorrect as shown above, is that sensationalism actually intensifies and perpetuates the risk itself.

The stigma surrounding having imperfections online is eroding, though not for everyone equally. As we, especially younger folks, increasingly live our lives online, flaws and all, the norm will be to have a little “digital dirt” on our hands and a few “Facebook skeletons” in our closets. Imperfections will continue to be as forgivable as they always have been.

But the trend toward being more accepting of humans simply being human online is being impeded by news stories that sensationalize risk. By making too big a deal out of a “racy” photo or a stupid status update, the media reifies how big a deal it is. If they instead discussed the risks in the context of likelihood and potential benefits, perhaps even coupled with the idea that the stigma itself may be eroding, then we might make more progress towards forgiving imperfections and thus lessen the risk at hand in the first place. The obsession solidifies stigma instead of letting it erode.

Part of the reason we might want the risk to erode is that, as I mentioned above, it has disproportionately negative effects on vulnerable and marginalized populations. For example, we owe it to future women running for political office to make clear the risks involved with living online, but to do so in a way that does not obsess over, say, “racy” photos. The more we obsess about the risks associated with digital imperfections, the more sensationalist compulsion there will be on the part of the media to dig into old Facebook photos and obsess over any image where a skirt is deemed too short. That will keep good people, especially women, from running for office.

Simply, in order for any risk to be taken seriously, it needs to be described accurately. To overstate risk does more to obscure than to elucidate it, which ultimately harms those we are trying to help.

boyd, d. (2010). “Social Network Sites as Networked Publics: Affordances, Dynamics, and Implications.” In Networked Self: Identity, Community, and Culture on Social Network Sites (ed. Zizi Papacharissi), pp. 39–58. [.pdf]

White, M. (2003). “Too Close to See: Men, Women, and Webcams.” New Media & Society, vol. 5, pp. 7–28.

Augmented Reality Cinema [thanks @farman].