
The New York Times has a bad habit of uncritically replicating mainstream opinions about technology and society, for instance this piece from Sunday that states,

Much of the research on selfies reveals that (surprise!) people who take a lot of them tend to have narcissistic, psychopathic and Machiavellian personality traits — which may explain why they are oblivious when they bonk you on the head with their selfie sticks. This is not to say that everyone who takes a selfie is a psychopath, but it does imply a high need for self-gratification, particularly if they are posted online for social approval.

For at least the past year, this narrative that selfies equal narcissism has been deeply challenged in popular and academic work. Perhaps the “research” should have included this really terrific special issue of the International Journal of Communication on selfies, edited by Theresa Senft and Nancy Baym (scroll to special sections and click the + more articles). Especially the paper by Anne Burns, who continues to be the strongest critic of the selfies-as-pathological narrative (read her paper and also see her present part of this work at Theorizing the Web 2014 here). Indeed, the concern over pathologization, especially the dismissal of the selfie as narcissistic, is a powerful thread through the entire issue. Hopefully this critique will soon be less necessary, but as the New York Times makes evident each weekend, it’s still a much needed intervention. By my count, more than two-thirds of the papers in the issue explicitly mention and are rightly critical of the narcissism frame.

If you read Cyborgology you might have already seen and read this issue, but in case you haven’t, I just want to briefly use this space to highlight terrific work being done on the topic of selfies (I especially loved the stand-out papers by Jenna Brager and Elizabeth Losh). It’s quite good, and open access, so download away.

Another thread throughout the issue I found particularly interesting is whether and how the authors chose to define what a “selfie” is. It seems that for some authors, it’s self-evident (resisting the pun), though there is considerable disagreement over the term in everyday, journalistic, and academic discourse. Indeed, telling people what is or is not a selfie has become its own genre. I see a steady flow of posts angrily noting that some people like to use the term beyond just a self-shot picture of one’s face, or even body. This leads to the counter move of wanting to enforce a stricter version of what a selfie is. The (you guessed it) New York Times just ran such an explainer last week.

It might have been redundant for a special issue on a specific topic to have each paper replicate the same paragraph or two defining the same thing. In any case, it seems either implicitly or explicitly that the papers agree with the definition provided in the Introduction to the issue. In fact, the definition in the Introduction, authored by Theresa Senft and Nancy Baym, provides as good a definition as I’ve seen anywhere,

What precisely is a selfie? First and foremost, a selfie is a photographic object that initiates the transmission of human feeling in the form of a relationship (between photographer and photographed, between image and filtering software, between viewer and viewed, between individuals circulating images, between users and social software architectures, etc.). A selfie is also a practice—a gesture that can send (and is often intended to send) different messages to different individuals, communities, and audiences. This gesture may be dampened, amplified, or modified by social media censorship, social censure, misreading of the sender’s original intent, or adding additional gestures to the mix, such as likes, comments, and remixes.

Although the selfie signifies a sense of human agency (i.e., it is a photograph one knowingly takes of oneself, often shown to other humans), selfies are created, displayed, distributed, tracked, and monetized through an assemblage of nonhuman agents. The politics of this assemblage renders the selfie—generally considered merely a quotidian gesture of immediacy and co-presence—into a constant reminder that once anything enters digital space, it instantly becomes part of the infrastructure of the digital superpublic, outliving the time and place in which it was originally produced, viewed, or circulated. It is perhaps for this reason that selfies function both as a practice of everyday life and as the object of politicizing discourses about how people ought to represent, document, and share their behaviors.

For the papers themselves, about half of them define what a selfie is; the other half, presumably, assume there isn’t disagreement over the term. At times, I did find myself wondering what the various authors counted as a selfie versus what they did not, especially with papers utilizing interviews. I’m not sure if the respondents all have the same definition of what counts and doesn’t count as a “selfie”. The interviewers and respondents in the research and the readers of the paper might all have something a bit different in mind. For instance, Matthew Bellinger’s article says, “This choice of terms is particularly striking given that Cameron’s photograph lacks the visual cues normally associated with the selfie” – which are? Or from Anirban Baishya’s wonderful article, “The connection of the hand to the cell phone at the moment of recording makes the selfie a sort of externalized inward look”, which assumes a hand-to-phone connection, a particular, and perhaps not fully agreed upon, requirement for an image to be a selfie.

Some of the papers do define the selfie. Kate Miltner and Nancy Baym define selfie as, “a practice in which people hold out a camera phone and photograph themselves.”

From Frosh’s article, “A selfie, whatever else it might be, is usually a photograph: a pictorial image produced by a camera. This banal observation informs widespread understandings of the selfie as a cultural category: “A photograph that one has taken of oneself” (Oxford Dictionaries Word of the Year, 2013, p. 1).”

Aaron Hess defines it, “The selfie, a form of self-portraiture typically created using smartphones or webcams and shared on social networks.”

Katharina Lobinger and Cornelia Brantner define it, “A selfie is a self-portrait usually taken with a digital camera or a camera phone in order to be shared with relevant others”.

David Nemer and Guo Freeman have a nicely encompassing definition, “More than just a self-taken, static photo shared on social networking sites, selfies (also known as “self-shooting”; see Tiidenberg, 2014; and “self-portrait”: see Mazza, Da Silva, & Le Callet, 2014) are considered nonverbal, visual communication that implies one’s thoughts, intentions, emotions, desires, and aesthetics captured by facial expressions, body language, and visual art elements.”

James Katz and Elizabeth Thomas Crocker provide my favorite definition, one that I think best gets at some of the popular fluidity around the term,

“for our purposes we define selfies as images that were not only taken by the person posting the image but that also include part or all of the person taking the photo.” They precede that definition with, “Although interviews with users and even academics turned up different boundaries for that definition,” and the footnote states, “Chloe Mulderig, who teaches courses about visual culture at Emerson University, told us during an interview that she includes images of food and immediate surroundings in her definition of selfie. She argues that these are extensions of the self and are intended to impact how the public should view the individual posting the image. Some users we interviewed agreed with this assessment and also included within it images of such things as pets, homes, vehicles, and craft products. However, most interviewees disagreed that these images fell into the larger category of selfie if they lacked part or all of the person taking the photo. It is reasonable to accept a narrower definition as that seems to be the generally accepted meaning, though we certainly see merit in the broader definition as well.”

I really appreciated the papers that did define what a selfie is because, at least outside the academy, there is intense negotiation and deep disagreement over how to use the term and what it means. More interesting than trying to narrow down the definition might be to track how the term is used, how the fluid meaning of selfie tracks the fluid meaning of the self. I’d like to take a moment and appreciate the smart nuance these authors provide when defining the selfie to allow for such movement and fluidity instead of trying to constrain the selfie, and thus selfhood, as much popular writing seems obsessed with doing. Curious what others think of these definitions, where they excel and how they might be improved?

nathan is on twitter and tumblr

“Politicians are all talk” -Trump, in the nytimes

Buzzfeed asked Donald Trump if he knew what trolling is. Trump didn’t know the term, so Buzzfeed explained it,

“It’s basically saying or doing things just to provoke people,” I said, explaining that there were many who considered him a troll because “provocation is your ultimate goal.” Trump bristled at the characterization. “That’s not my ultimate goal,” he protested. “My ultimate goal is to make this country great again!” But then, he thought about it for a moment. “I do love provoking people,” he conceded. “There is truth to that.”

The news media, especially those who report on and rely on presidential electoral politics, are quick to call Donald Trump a “troll.” In this exchange, look at the word “just” in the definition, “saying or doing things just to provoke people”: the implication is that Trump isn’t really a candidate running a real campaign, that he’s not politics as normal but just going for attention. In the current media feeding frenzy over Trump, from Time, The Washington Post, MSNBC, and so many others, there is an emerging and necessary narrative that he’s a “troll.”

Classifying Trump as a “troll” is centrally about saving the rest of the election coverage as real and authentic. The narrative that Trump is trolling assumes and reinforces the notion that the rest of the coverage is in good faith, something news organizations desperately need to sell.

I’m not interested in convincing anyone of the falseness of electoral politics in a blog post; my point here, at best, holds to the extent you already agree with that. It seems almost too obvious to type that the presidential electoral process and its coverage are not politics in good faith, that nearly every facet is performative, the interviews largely staged, the “debates” not debates at all but many pre-planned little speeches, and so on. For the past year, news sites and Sunday morning television programs have been dissecting opinion polls in ways they know are misleading, sometimes saying so on air: something like “polls at this stage usually don’t mean much, but let’s now debate these polls as if they do.”

It’s no surprise that Nate Silver, the famous statistician of the 2012 presidential election, has already run a piece about Trump as a troll. During the 2012 presidential election, Silver’s 538 blog at the New York Times captured presidential politics as a massive game, like sports, an ongoing drama of a news event designed to perpetuate itself under the guise of actual politics. He was the best at keeping score, predicting outcomes based on every poll and news cycle twist. The blog spun off into a whole website that’s poised to indulge in presenting the fiction of the 2016 campaign as fact. In declaring Trump a troll, Silver cites Wikipedia,

“A troll,” according to one definition, “is a person who sows discord … by starting arguments or upsetting people … with the deliberate intent of provoking readers into an emotional response or of otherwise disrupting normal on-topic discussion.”

He goes on,

Trolls thrive in communities that are open and democratic (they wouldn’t be invited into a discussion otherwise) and which operate in presumed good faith (there need to be some standards of decorum to offend)

By calling Trump a troll under this definition, Silver, like many political journalists, is implicitly saying that the rest of the campaigns and their coverage are “on topic” and conducted “in good faith”. Similarly, the Huffington Post decided to label Trump’s campaign “entertainment” instead of “politics”, which implicitly means that presidential campaign politics are something other than entertainment. Journalism professor Jay Rosen is right that the Huffington Post labeling Trump as “entertainment” won’t really change how or how much they actually cover and profit from Trump. The move is wanting it both ways: to continue to mine Trump’s political entertainment for clicks while falsely branding the rest of their election coverage as something different and more real than mere entertainment.

News organizations know the election is a money-maker for them: they get ratings, clicks, and something to incessantly talk about for the next year. It’s existential. The news organizations, from 24-hour television to online content farms, have more space to fill than content, more incentive to speak than things to say. A massive reality show with sports-like scorekeeping is too entertaining to remain merely political.

Trump is called a troll because he’s said to not be a real candidate running a real campaign, that he’s not playing by the standard rules and not engaging in politics and democracy in good faith. But this is what makes Trump like the rest of the candidates and campaigns. That Trump is gaming the election coverage isn’t some kind of unsolvable problem but what such coverage asks for. As the Diana Christensen character says in the film Network (1976), “If you’re going to hustle, at least do it right.” The “Trump problem” for journalists is solved the moment we stop presupposing that the rest of the candidates and news coverage is real and in good faith.

That Trump is putting on a media event in order to accomplish goals that may or may not have anything to do with the White House makes him like, not unlike, the rest of the campaigns. He’s not trolling but perfectly adhering to the rules of horse-race presidential politics coverage. Trump isn’t running a reality show but participating in a larger one.

You don’t troll a conversation not being done in good faith by participating in the lie. The only way to troll a conversation that is already a spectacle is by calling it out as such. To troll fiction with a little fact. As Benjamin said, “A clock that is working will always be a disturbance on the stage.” A famous and equally fictional example of this is from the film Election (1999), when one candidate trolls the equally silly school election by screaming from the podium,


We all know it doesn’t matter who gets elected president of Carver. […] The same pathetic charade happens every year […] So vote for me. Because I don’t even want to go to college, and I don’t care. […] Or don’t vote for me! Who cares? Don’t vote at all!

Or sometimes “troll” is just a lazy term for someone who is being mean, as the Daily Dot does in their piece comparing him to Gamergate because he’s willing to be mean to get attention. I don’t like defining a “troll” as someone who is mean, bigoted, and harassing because it precludes the possibility of good trolling, those who disrupt to punch up rather than down. It’s also a tricky way of simply not calling online bigotry what it is. Let’s stop calling Trump a “troll” and instead call him a bigot: He says shitty things that hurt people. It’s not pretend or virtual or make believe but real, quite unlike the horse-race presidential campaign the news media will troll us with over the next year.

We’re being trolled, not by Trump, but by those mislabeling his campaign in an effort to sell us the fiction of this election as something genuine.



“I just wanted to hear your voice and tell you how much I love you” –Samantha, Her

“Is it strange to have made something that hates you?” –Ava, Ex Machina

2001: A Space Odyssey is memorable not only for its depiction of artificial intelligence but also for its tranquil pacing and sterile modernism. Ex Machina plays the same, taking place somewhere almost as deeply isolated as space. The remote IKEA-castle of a compound is itself mostly empty, with soft piano notes echoing off lonely opulence. The mansion is cold and modern but incorporates the lush nature outside. The film moves from windowless labs to trees and waterfalls to a living room that’s half house, half nature. The techno-bio juxtaposition and enmeshment clearly echo the film’s techno-human subject matter. But the wilderness reminds us of death as much as life.

The nature here is more than natural: it is isolation, the constant implication that there is no escape, vulnerable dependence, and ultimately a reminder that you are under the control of a violent, clever, scheming drunk. That drunk is the Bluebeard-like Nathan; there is also the visitor, Caleb, who is there to determine whether the captive robot, Ava, is conscious. These characters feel so alone that it’s almost shocking when you see there’s another woman, Kyoko. She doesn’t speak but serves. She’s an ornament, housekeeper, cook, and sexual servant.

The mansion is a controlled environment through not just keycard access but also video monitoring. When the power goes out, the doors lock. Caleb signs a non-disclosure agreement that amounts to granting Nathan full access to everything he will ever do, which turns out to be a joke when you learn Nathan already had such access and built Ava accordingly. And Ava is locked in her room, controlled by both dungeon and Panopticon, invisible to the outside world she wants to experience and ultra-visible to the men in the house.

If the scenery shows isolation, the cinematography conveys the resulting uncertainty. The camera focuses on opaque panels, faces are shown through the glare of mediating glass, action is captured in mirror reflections, and something is always hidden to be revealed. Even Ava’s body is partially transparent. Nathan wields access to information over his subjects as the film does over its viewers. Caleb cuts open his own arm to check for blood because he’s so fully unsure that his own humanity seems in question. Knowledge is power, and most everyone on either side of the movie screen is confused.

What all of this points to is that, brilliantly and disturbingly, Ex Machina wasn’t Her.

Part of Ava (Ex Machina) and Samantha’s (Her) erotic appeal for men (characters and viewers) is their childlike vulnerability. When encountering Caleb, Ava perfects that pixie dreambot quizzical wide-eye while nursery chimes play in the soundtrack. Ava and Samantha are made to be adolescent and innocent, encountering things for the first time, totally vulnerable to the world of the men they are forced to be around. This dependency is the fantasy that propels both films, an unspoken dream at the heart of Her, but more correctly a nightmare in Ex Machina.

They’ll pair as someone’s AI-romance double feature, double future, but Ex Machina covered ground Her wouldn’t. The fundamental issues of creation, discipline, and power aren’t sugared over with sentiment. The cold violence of ownership and captivity is treated as such, more accurately conveyed as horror than heartstrings.

Those businesspeople building actual functioning consciousnesses are the entire focus of Ex Machina but conspicuously absent from Her. We never got to see who wrote, programmed, created, and owned the operating system that Samantha emerged from. And in all of Sam and Theodore’s long conversations of thoughts and feelings and insecurities, they rarely if ever went into the fact that he purchased and owned her, or that she had to and continued to do the menial work of managing his inbox all the way up to helping his career.

As Jordan Larson wrote about Her,

There is, of course, the uncomfortable fact that Theodore purchased his lover. After they begin a relationship, Theodore doesn’t seem to ask her to work as much—or, at least, we don’t see him do so. But he also doesn’t turn to any other program (such as he had in the beginning of the film) to perform his tasks while he’s with Samantha, which suggests that she’s still fulfilling his secretarial needs. Though Jonze seems to portray Samantha as a truly conscious being, he wants to have it both ways: Samantha’s purchase, ownership, and servitude don’t seem to be an issue precisely because she’s an object

That the A.I. had to do the bidding of an owner was something Her could forget but Ex Machina can’t ignore. What Her couldn’t attend to is that objectification, the purchasing of a consciousness for companionship and service, cannot be untethered from gender (as well as other social vulnerabilities like race and sexual orientation that the films rarely explore). Ex Machina instead treats data hoarding, surveillance, and the ownership and control of others via technology as an inherently patriarchal orientation. Nathan builds, uses, and literally uses up his women robots, made sentient enough to be tortured. Put accurately,

Nathan, one of the greatest minds of his generation, has essentially gone into the woods to build incredible A.I. that he merely uses to vent his sexual frustrations and dehumanizing boxing-in of women by creating a sexual array of ethnic fuck puppets.

In Her, Samantha is the manic pixie dream bot, and in Ex Machina, Ava pretends to be Samantha. Ava knows Caleb, just like all of those who fell for Her, would fall for it. Ava was literally designed to conform to Caleb’s wants because her looks, gestures, even vocabulary derived from Caleb’s use of Blue Book, the film’s Google-Facebook stand-in. She is “a virtual daydream turned into some kind of flesh,” as Marysia Jonsson and Aro Velmet put it in their terrific reflection.

Mad Max is rightfully getting lots of words right now; beyond being visually stunning, its message and politics are easy and spoon-fed: righteous women and good men save the day. For better and worse, Mad Max is a movie designed for the question “is this feminist?”, and for better and worse Ex Machina rejects any such framing. Instead we are left uncomfortable, which seems more appropriate to the topic of ownership and control than Her’s soft sentimentality.



The most crucial thing people forget about social media, and all technologies, is that certain people with certain politics, insecurities, and financial interests structure them. On an abstract level, yeah, we may all know that these sites are shaped, designed, and controlled by specific humans. But so much of the rhetoric around code, “big” data, and data science research continues to promote a fallacy that the way sites operate is almost natural, that they are simply giving users what they want, which downplays their own interests, role, and responsibility in structuring what happens. The greatest success of “big” data so far has been for those with that data to sell their interests as neutral.

Today, Facebook researchers released a report in Science on the flow of ideological news content on their site. “Exposure to ideologically diverse news and opinion on Facebook” by Eytan Bakshy, Solomon Messing, and Lada Adamic (all Facebook researchers) enters into the debate around whether social media in general, and Facebook in particular, locks users into a so-called “filter bubble”: seeing only what one wants and is predisposed to agree with, and limiting exposure to outside and conflicting voices, information, and opinions. And just like Facebook’s director of news recently ignored the company’s journalistic role in shaping our news ecosystem, Facebook’s researchers make this paper about minimizing their role in structuring what a user sees and posts. I’ve just read the study, but I already had some thoughts about this bigger ideological push since the journalism event as it relates to my larger project describing contemporary data science as a sort of neo-positivism. I’d like to put some of my thoughts connecting it all here.

 

The Study Itself (method wonkery, skip to the next section if that sounds boring)

Much of the paper is written as if it is about adult U.S. Facebook users in general, but that is not the case. Those included in the study are just those who self-identify their politics on the site. This is a rare behavior, something only 9% of users do. This 9% number appears not in the report itself but in a separate supporting-materials appendix, yet it is crucial for interpreting the results. The population number given in the report is 10.1 million people, which yea omg is a very big number, but don’t fall for the Big-N trick: we don’t know how this 9% is different from Facebook users in general. We cannot treat this as a sample of “Facebook users” or even “Facebook liberals and conservatives”, as the authors do in various parts of the report, but instead as being about the rare people who explicitly state their political orientation on their Facebook profile.* Descriptive statistics comparing the few who explicitly self-identify and therefore enter into the study versus those who do not are not provided. Who they are, how they are different from the rest of us, and why they are important to study are all obvious questions the report doesn’t discuss. We might infer that people who self-identify are more politically engaged, but anecdotally, nearly all my super politically engaged Facebook friends don’t explicitly list their political orientation on the site. Facebook’s report talks about Facebook users, which isn’t accurate. All the findings should be understood to be about Facebook users who also put their political orientation on their profiles, who may or may not be like the rest of Facebook users in lots of interesting and research-confounding ways. The researchers had an obligation to make this limitation much more clear, even if it tempered their grand conclusions.
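To see why the Big-N trick doesn’t help, here’s a toy simulation (all numbers entirely hypothetical, just illustrating that a self-selected sample can be huge and still badly biased):

```python
import random

random.seed(0)

# Hypothetical population of 1,000,000 users, each with some trait
# (say, political engagement, scored 0 to 1).
population = [random.random() for _ in range(1_000_000)]

# Suppose the chance of self-identifying rises sharply with engagement:
# highly engaged users are far more likely to enter the sample.
sample = [x for x in population if random.random() < x ** 3]

pop_mean = sum(population) / len(population)
sample_mean = sum(sample) / len(sample)

print(f"population mean: {pop_mean:.3f}")
print(f"sample size:     {len(sample)}")    # hundreds of thousands: very Big N
print(f"sample mean:     {sample_mean:.3f}")  # still biased well above the population
```

The sample here is enormous, yet its mean sits far from the population’s, because who opts in is correlated with the trait being measured. No amount of N fixes that without knowing how the 9% differ from everyone else.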

So, AMONG THOSE RARE USERS WHO EXPLICITLY SELF-IDENTIFY THEIR POLITICAL ORIENTATION ON THEIR FACEBOOK PROFILES, the study looks at the flow of news stories that are more liberal versus conservative as they are shared on Facebook, how those stories are seen and clicked on as they are shared by liberals to other liberals, conservatives to other conservatives, and most important for this study, the information that is politically cross cutting, that is, shared by someone on the right and then seen by someone on the left and vice versa. The measure of conservative or liberal news stories is a simple and in my opinion an effective one: the degree that a web domain is shared by people on the right is the degree to which content on that domain is treated as conservative (and same goes for politically liberal content). And they differentiated between soft (entertainment) versus hard (news) content, only including the latter in this study. The important work is seeing if Facebook, as a platform, is creating a filter bubble where people only see what they’d already agree with as opposed to more diverse and challenging “cross cutting” information.
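A minimal sketch of that alignment measure, with made-up share counts and a simplified score (the paper’s actual computation weights by sharers’ reported ideology on a continuous scale, so take this only as the gist):

```python
# Hypothetical share counts per domain, broken out by sharers'
# self-reported ideology. All numbers are invented for illustration.
shares = {
    "example-right-news.com": {"conservative": 900, "liberal": 100},
    "example-left-news.com":  {"conservative": 150, "liberal": 850},
    "example-wire.com":       {"conservative": 500, "liberal": 500},
}

def alignment(domain):
    """Score in [-1, 1]: +1 = shared only by conservatives,
    -1 = shared only by liberals, 0 = shared evenly."""
    con = shares[domain]["conservative"]
    lib = shares[domain]["liberal"]
    return (con - lib) / (con + lib)

for d in shares:
    print(d, round(alignment(d), 2))
```

Content on a domain then inherits the domain’s score, which is why the measure is simple: it never reads the articles, it only looks at who shares them.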

The Facebook researchers looked at how much, specifically, the newsfeed algorithm promotes the filter bubble, that is, showing users what they will already agree with over and above a non-algorithmically-sorted newsfeed. The newsfeed algorithm provided 8% less conservative content for liberals versus a non-algorithmically sorted feed, and 5% less liberal content for conservatives. This is an outcome directly attributable to the structure of Facebook itself.

Facebook published this finding, that the newsfeed algorithm encourages users seeing what they already would agree with more than if the algorithm wasn’t there, ultimately because Facebook wants to make the case that their algorithm isn’t as big a factor in this political confirmation bias as people’s individual choices, stating, “individual choice has a larger role in limiting exposure to ideologically cross cutting content.” The researchers estimate that conservatives click on 17% fewer ideologically opposed news stories, and liberals 6% fewer, than would be expected if users clicked on random links in their feed.

The report concludes that, “we conclusively establish that on average in the context of Facebook, individual choices [matter] more than algorithms”. Nooo this just simply isn’t the case.

First, and most obvious, and please tell me if I am missing something here because it seems so obvious, this statement only holds true for the conservatives (17% less by choice, 5% by algorithm). The reduction in ideologically cross cutting content from the algorithm is greater than individual choice for liberals (6% by choice, 8% by algorithm). Second, to pick up on my annoyance above, note how they didn’t say this was true for the rare people who explicitly profile-self-identify, but for the whole context of Facebook. That’s misleading.
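Plugging the report’s own numbers into a quick comparison makes the asymmetry plain:

```python
# Reductions in exposure to ideologically cross-cutting content,
# as percentages, using the figures quoted above: "choice" is the
# click-through reduction, "algorithm" the News Feed ranking reduction.
reductions = {
    "conservatives": {"choice": 17, "algorithm": 5},
    "liberals":      {"choice": 6,  "algorithm": 8},
}

for group, r in reductions.items():
    larger = "choice" if r["choice"] > r["algorithm"] else "algorithm"
    print(f"{group}: choice -{r['choice']}%, algorithm -{r['algorithm']}%"
          f" -> larger factor: {larger}")
```

For conservatives, choice dominates; for liberals, the algorithm does. A claim that “individual choices matter more than algorithms” across the whole context of Facebook has to average away that difference.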

But even bracketing both of those issues, the next problem is that individual users choosing news they agree with and Facebook’s algorithm providing what those individuals already agree with is not an either-or but additive. That people seek that which they agree with is a pretty well-established social-psychological trend; we didn’t need this report to confirm that. As if anyone critiquing how Facebook structures our information flows ever strawpersoned themselves into saying individual choice wasn’t important too. What’s important is the finding that, in addition to confirmation-biased individuals, the Facebook newsfeed algorithm exacerbates and furthers this filter-bubble bias over and above that baseline.

 

Fair And Balanced

“Fair and Balanced” is of course the famous Fox News punch line, which also stands for the fallacy of being politically disinterested and how those structuring what we see have an interest in pretending they are neutral and fair, objective and balanced. The joke is that the myth of being politically disinterested is a very familiar and powerful interest.

These lines from the report are nagging me: “we conclusively establish that on average in the context of Facebook, individual choices [matter] more than algorithms”, which is more than just incorrect, and more than just journalist bait; it is indicative of something larger. Also, “the power to expose oneself to perspectives from the other side in social media lies first and foremost with individuals.” This is of the same politics and logic as “guns don’t kill people, people kill people”, the same fallacy that technologies are neutral, that they are “just tools”, completely neglecting, for example, how different people appear killable with a gun in hand. This fallacy of neutrality is an ideological stance: even against their findings, Facebook wants to downplay the role of their own sorting algorithms. They want to sell their algorithmic structure as impartial, to claim that, by simply giving people what they want, the algorithmically sorted feed is the work of users and not the site itself. The Facebook researchers describe their algorithm as such,

The order in which users see stories in the News Feed depends on many factors, including how often the viewer visits Facebook, how much they interact with certain friends, and how often users have clicked on links to certain websites in News Feed in the past

What isn’t mentioned is how a post that is “liked” more is more likely to perform well in the algorithm. Also left out of this description is that the newsfeed is also sorted based on what people are willing to pay. The order of Facebook’s newsfeed is partly for sale. It’s almost their entire business model, and a model that relates directly to the variable they are describing. In the appendix notes, the Facebook researchers state that,

Some positions—particularly the second position of the News Feed—are often allocated to sponsored content, which may include links to articles shared by friends which are associated with websites associated with a particular advertiser. Since we aim to characterize interactions with all hard content shared by friends, such links are included in our analyses. These links appear to be more ideologically consistent with the viewers; however further investigation is beyond the scope of this work.

It seems like sponsored content might very well be furthering the so-called filter bubble, but the researchers placed it outside the scope of the study. Fine, but that sponsored content was not even included in the report’s own description of how the algorithm works is suspect.**

Further, this whole business of conceptually separating the influence of the algorithm from individual choices willfully misunderstands what algorithms are and what they do. Algorithms are made to capture, analyze, and re-adjust individual behavior in ways that serve particular ends. Individual choice is partly a result of how the algorithm teaches us, and the algorithm itself is dynamic code that reacts to and changes with individual choice. Neither the algorithm nor individual choice can be understood without the other.

For example, that the newsfeed algorithm suppresses ideologically cross-cutting news to a non-trivial degree teaches individuals not to share as much cross-cutting news. By making the newsfeed an algorithm, Facebook enters users into a competition to be seen. If you don’t get “likes” and attention with what you share, your content will subsequently be seen even less, and thus your voice and presence are lessened. To post without likes means few are seeing your post, so there is little point in posting. We want likes because we want to be seen. We see what gets likes and adjust accordingly. Each like we give, receive, or even see very subtly and incrementally acts as a sort of social training, each a tiny cut that carves deeply in aggregate. This is just one way the Facebook algorithm influences the individual choices we make: to post, to click, to click with the intention of reposting, and so on. And it is no coincidence that when Facebook described their algorithm in this report, they left out the biggest ways Facebook itself makes decisions that shape what we see: making position in the newsfeed something that can be bought, or having a “like” button and ranking content on it (no dislike, no important, etc., any of which would change our individual choices).
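The feedback loop described above can be sketched in a few lines of toy code. This is a purely hypothetical illustration, not Facebook’s actual ranking system: the 10% like rate, the position-based impression counts, and the starting like counts are all invented for the example.

```python
# A toy sketch of a like-driven visibility loop -- hypothetical, not
# Facebook's actual code. Posts that earn likes are ranked higher;
# higher-ranked posts are seen by more viewers and so earn more likes.

def rank_feed(posts):
    """Order posts by accumulated likes, most-liked first."""
    return sorted(posts, key=lambda p: p["likes"], reverse=True)

def simulate(posts, rounds=5, viewers=100):
    """Run several ranking rounds; visibility feeds back into likes."""
    for _ in range(rounds):
        ranked = rank_feed(posts)
        for position, post in enumerate(ranked):
            # Assumption: higher positions reach more viewers, and
            # 10% of viewers who see a post like it.
            impressions = viewers // (position + 1)
            post["likes"] += impressions // 10
    return rank_feed(posts)

posts = [{"author": "a", "likes": 2}, {"author": "b", "likes": 1}]
final = simulate(posts)
# A one-like head start compounds every round: after five rounds,
# author "a" has roughly double author "b"'s likes.
```

The point of the sketch is that “individual choice” (the likes) and the algorithm (the ranking) cannot be separated: each round’s choices are made within a visibility structure the previous ranking produced.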

To ignore these ways the site is structured and to instead be seen as a neutral platform means not having responsibility, offloading the blame for what users see or don’t see onto the users themselves. The politics and motives that go into structuring the site, and therefore its users, don’t have to be questioned if they are not acknowledged. This ideological push by Facebook to downplay its own role in shaping its own site was also on display last month at the International Journalism Festival in Italy, featuring Facebook’s head of news. You can watch him evade any role in structuring the news ecosystem. NYU journalism professor Jay Rosen summarizes Facebook’s message,

1. “It’s not that we control NewsFeed, you control NewsFeed by what you tell us that you’re interested in.”

2. Facebook should not be anyone’s primary news source or experience. It should be a supplement to seeking out news yourself with direct suppliers. “Complementary” was the word he used several times.

3. Facebook is accountable to its users for creating a great experience. That describes the kind of accountability it has. End of story.

Rosen correctly states, “It simply isn’t true that an algorithmic filter can be designed to remove the designers from the equation.”

Facebook orders and ranks news information, which is doing the work of journalism, but it refuses to acknowledge as much. Facebook cannot take its own role in news seriously, and it cannot take journalism itself seriously, if it is unwilling to admit the degree to which it shapes how news appears on the site. The most dangerous journalism is journalism that doesn’t see itself as such.***

Facebook’s line, “it’s not that we control NewsFeed, you control NewsFeed,” exactly parallels the ideological stance that informs the Facebook researchers’ attempt to downplay Facebook’s own role in sorting what people see. Coincidence? This erasing of their role in the structuring of personal, social, and civic life is being repeated in full force by the company, much like when politicians are given a party line to repeat on Sunday news shows.****

Power and control are most efficiently maintained when they are made invisible. Facebook’s ideological push to dismiss the very real ways they structure what users see and do is the company attempting to simultaneously embrace control and evade responsibility. Their news team doesn’t need to be competent in journalism because they don’t see themselves as doing journalism. But Facebook is doing journalism, and the way they code their algorithms and the rest of the site is structuring and shaping personal, social, and civic life. Like it or not, we’ve collectively handed very important civic roles to social media companies, the one I work for included, and a real danger is that we can’t hope for them to be competent at these jobs when they won’t even admit to doing them.

Nathan is on Twitter and Tumblr


*Note here that the Facebook researchers could have easily avoided this problem by inferring political orientation from what any user posts instead of only looking at those who state it explicitly. This would have resulted in a stronger research design, but also very bad press, probably something like “Facebook Is Trying to Swing the Election!”, which also may not be wrong. Facebook went for the less invasive measure here, but, and this is my guess and nothing more, they likely ran, and haven’t published, a version of this study with nearly all users by inferring political orientation, which is not difficult to do.

**I also work for a social media company (Snapchat), so I understand the conflicts involved here, but there’s no reason that such an important variable in how the newsfeed is sorted should be left out of a report about the consequences of such sorting.

***Some may not agree that what the newsfeed algorithm does is “journalism”. I think calling it journalism is provocative and ultimately correct. First, I don’t mean they are on the ground reporting, that is only one part of the work of journalism, and Facebook’s role is another part. If you agree with me that any algorithm, including this one, is never neutral, disinterested, and objective but instead built by humans with politics, interests, and insecurities, then this type of sorting of news information is certainly the work of journalism. Editing, curating, sorting, and ranking of news information are all part of journalism and Facebook is making these decisions that influence how news is being produced, displayed, and most importantly ranked, that is, what is seen and what isn’t. Sorting a NEWSfeed is doing the work of journalism.

****Of course, I don’t know if this is stated explicitly for employees to repeat, or if it is just this deep in the company’s ideology, and I don’t know which would be more troubling.

Further reading: Some reactions from other researchers. Good to know I wasn’t alone in thinking this was very poor work.

Zeynep Tufekci, How Facebook’s Algorithm Suppresses Content Diversity (Modestly) and How the Newsfeed Rules Your Clicks

Eszter Hargittai, Why doesn’t Science publish important methods info prominently?

Christian Sandvig, The Facebook “It’s Not Our Fault” Study

 


In 2014 Ello was in with the new, and by 2015 it became out with the old. It’s New Year’s Eve and I want to look back on a thing that came and went this year, which leaves me feeling bummed. You can only be really disappointed if you start with high hopes, and lots of people, for lots of reasons, wanted Ello to work. It quickly became clear that the site didn’t have a strong vision. Neither its politics nor its understanding of the social life it set out to mediate was inspired or clever enough to be compelling.

From my own vantage point, Ello more than other services was being used from the start by people who study social networks (hi). This is in contrast to, say, Snapchat or Tumblr, which researchers and technology writers have extreme difficulty even understanding, let alone offering novel insights about. Ello, however, was quickly populated by professionals in tech, design, and the art world, as well as tech researchers and pundits. And in this way Ello was a bit like Twitter in that the service appeared bigger than it was because it had the voices that are disproportionately loud.

Ello attracted people who like techno-political manifestos, and lost them when its politics revealed themselves to be so thin. Ello didn’t give off “less politically fucked” but instead “professional”, reminding everyone over and over how “beautiful” the site is. While its success at “beauty” is quite arguable, my own skepticism is with Ello’s obsession with beauty in the first place. Meanwhile, Ello’s version of “fun” felt like that weird enforced fun, like getting drinks with your boss.

And you can’t sum up the rise and flatlining of Ello without referencing Facebook. With nearly every Ello headline being equally about Facebook, Ello’s entire existence is understood through the lens of its orientation to the bigger social network. Ello’s year was Facebook’s year, and Facebook’s year was partly defined by “emotional manipulation” and “algorithmic cruelty”. Lots of people wanted a Facebook killer in 2014. Dominant social media are currently only a small subset of possibilities; it could all look and behave very differently than it currently does. Facebook has a very specific and radical social philosophy about how we should see ourselves, others, and the world. The idea that all of our sociality should be put into boxes, ranked with the number of likes, recorded permanently, all in an effort to create a massive document-double Second Facebook Life for ourselves, was an outlandish and uninformed view of the complexities of social life. At best, it’s still just a single, limited view that feels restraining in its ubiquity.

All of this can be rethought! With the promise of new possibilities, people get excited. If there’s one thing that is especially combustible in the tech space, it’s newness – often to a fault. But people wanted a Facebook killer more than they wanted Ello.

Ello was never prepared to seriously rethink what social media can be. I don’t know if it was a lack of imagination, funding, expertise, or that the site was built and run by such a limited set of voices. I hope Ello didn’t suck the air out of the ‘new social media’ space. I hope energy is renewed in 2015. I fully believe that the improved social technologies of the future will better understand and respect the social as much as the technological. Anyone expert in the social will laugh at the phrasing “expert in the social”, but we need social media informed in part by those who start everything with an informed obsession with culture, identity, power, vulnerability, and the other things that, say, sociologists do every day.

Happy New Year, hire a sociologist, and much <3 to this blog, its readers, and my fellow editors!

Nathan is on Twitter and Tumblr


No doubt of interest to sociologists, Facebook is throwing a sociology pre-conference on its campus ahead of the annual American Sociological Association meetings this fall. When the company is interested in recruiting sociologists and the work we do – research of the social world in all of its complexity – its focus, as shown in the event’s program, is heavily weighted toward quantitative demography. Critical, historical, theoretical, and ethnographic research makes up a great deal of the sociological discipline, but it isn’t the kind of sociology Facebook has ever seemed to be after. Facebook’s focus on quantitative sociology says much about what they take “social” to mean.

My background is in stats, I taught inferential statistics to sociology undergrads for a few years, I dig stats and respect their place in a rich sociological discourse. So, then, I also understand the dangers of statistical sociology done without a heavy dose of qualitative and theoretical work. Facebook and other social media companies have made mistake after mistake with their products that reflect a massive deficit of sociological imagination. The scope of their research should reflect and respect the fact that their products reach the near entirety of the social world.

Instead, what so many technology companies want from sociology is “big” data research, or what some survey researchers are calling “passive” data collection. One of the scariest things about numbers is that they find a shorter path towards authority; numbers are seductive because they look like answers. While social researchers fluent in statistical methods are calling for a more thoughtful understanding of what “big” data actually is and how it should be responsibly wielded —read danah boyd and Kate Crawford’s paper on this— social media companies, government agencies, and many other research institutions are rushing towards “big” data research at the expense of other methodologies.

What one sociology PhD candidate said in the Venturebeat story linked to above reflects what I hear all too often,

The data set available at Facebook is incredible. One reason is just the sheer scale of the data. While sociologists usually don’t have the resources to interview or survey millions of people, Facebook has data generated every day by its 802 million daily active users.

The second reason is the naturalness of the data. Sociologists typically use interview, survey, and ethnography to collect data.

“So I give you a survey you fill it out, which is very artificial,” said Laura Nelson, PhD candidate in Sociology at UC Berkeley, in an interview with VentureBeat.

“Whereas ethnography, as soon as you walk into the room, you change that room, because you are a foreign presence. There’s a scientist in the room. People get self-conscious. They don’t act naturally.”

In comparison, Facebook data is not influenced by the presence of a social science researcher. “It has no artificially construct, you are not bringing people to the lab,” Nelson said. “So you are recording social interaction in real time as it occurs completely naturally.”

This is common: “big” data is held to be more natural and objective because researchers can peer in and gather data without disturbing what happens in this highly recordable social context. Big N’s and small p’s from the comfort of the screen. At this point, methodologists are pulling their hair out: Facebook, or any social platform, isn’t “natural” (even when sidestepping nerdier debates over whether anything at all is “natural”). That Facebook “big” data is made by users unaware of or unconcerned about social science researchers doesn’t change the fact that it is made through and around a structure engineers have coded. Yes, researchers, quantitative and qualitative, bias data in the collection process, but of course so does Facebook’s, or any site’s, data collection process.

This fallacy of the “naturalness” of social media data is described in the boyd and Crawford paper linked to above (see their provocation 2), and especially great on this point is Zeynep Tufekci. From a paper of hers on “big” data research,

Each social media platform carries with it certain affordances which structure its social norms and interactions and may not be representative of other social media platforms, or general human social behavior

[…]

Research in the model organism paradigm can be quite illuminating, as it allows a large community of scholars to coalesce around similar datasets and problems. The field should not, however, lose sight of specific features of each platform and questions of representativeness

The tendency to see “big” social media data as objective and natural is the methodological avatar of the classic tech instrumentalism/constructivism mistake. I’m as tired of the tech constructivism versus determinism theory-go-round as anyone, but, quickly: the tech determinism fallacy is the myth that technologies “cause” or force us and the social world to do things or be a certain way, forgetting human agency and creativity; and the fallacy of tech constructivism (or “instrumentalism”) is that “guns don’t kill people”/“tech is just a tool” stuff that forgets that technologies have affordances – what we think we can or can’t do with them – that structure our selves and the world. Don’t forget about agency, don’t forget about structure, and so on: as simple as it seems, these errors crop up over and over again, and “big” data research too often comes standard with the constructivism fallacy, as the “naturalness” quote above exemplifies.

My scarequotes reference the fundamental smallness of “big” data. I think the term is half-misnamed: “big” references only the size of the dataset, not its ability to answer the questions we ask of it. And this all speaks to the problem of social media companies fixating on “big” data at the expense of the rest of a vastly more diverse sociological imagination. I’ve got my issues with American sociology as a discipline for too often promoting quantitative research over the rest, but it should be made clear that social media companies’ research of the social world is even more dramatically lopsided. What does it mean for users when companies that trade in the “social” don’t attempt to understand the social in anywhere near the complexity the sociological discipline does? And who suffers from the inevitable mistakes that result?

nathan is on twitter and tumblr 

lead image via

thank you Ian Bogost for making this image for me

Sometimes it feels that to be a good surveillance theorist you are also required to be a good storyteller. Understanding surveillance seems to uniquely rely on metaphor and fiction, like we need to first see another possible world to best grasp how watching is happening here. Perhaps the appeal to metaphor is evidence of how quickly watching and being watched is changing – as a feature of modernity itself in general and our current technological moment in particular. The history of surveillance is one of radical change, and, as ever, it is fluctuating and rearranging itself with the new, digital, technologies of information production and consumption. Here, I’d like to offer a brief comment not so much on these new forms of self, interpersonal, cultural, corporate, and governmental surveillance as much as on the metaphors we use to understand them.

This is partially inspired by a recent piece by the always-wise Zeynep Tufekci who asks us to update our metaphors, to “rethink our nightmares about surveillance.” Indeed, surveillance metaphors aren’t hard to come by. Usually, there’s an appeal to fiction, like Orwell’s “1984”, Huxley’s “Brave New World”, and Kafka’s “The Trial”. Not far from fiction, Michel Foucault famously appealed to Jeremy Bentham’s proposed Panopticon prison, and this is the metaphor that tends to dominate popular discourse around surveillance. I won’t fully summarize the thesis here other than to remind you that panoptic surveillance, as metaphor, is a style of watching where you are potentially on display to some authority, causing you to modify behavior towards what that authority thinks is ideal. The Panopticon has come to be seen as passé within surveillance studies, and popular discourse has the complicated habit of overextending the metaphor, too flatly rejecting it, and/or just getting it wrong.

To be sure, the panoptic metaphor fails to describe contemporary surveillance in many ways. First, it doesn’t account for surveillance unknown to those surveilled. Remember that it is the presence of the watchtower, which might house guards who can exert punishment, that causes the many to self-censor towards normalization. If you don’t think that being watched is a real possibility, then whatever surveillance is happening is still surveillance, but it’s just not of the panoptic sort (instead, it might be said to be “nonoptic” [pdf]). The panoptic metaphor also fails to account for when surveillance isn’t top-down but lateral or bottom-up (coveillance and sousveillance [pdf]). Others might point to how the Panopticon wrongly understands everyday people as prisoners with too little freedom, when it is precisely such freedom that is often leveraged for social control. Tufekci makes this point in her piece, and it is central to, say, the Frankfurt School and other critics of the consumer society – especially Bauman, whose critique also takes the prisoner’s gaze to be more important than the panoptic metaphor allows, positing the “Synopticon” to describe social control through the many watching the few cultural gatekeepers; the act of looking can modify behavior as deeply as being seen. As such, many have concluded that we should forget the Panopticon as a useful metaphor for understanding surveillance in a digital age.

I think such a conclusion does a conceptual disservice. Instead, to best understand the usefulness of the panoptic metaphor, and thus contemporary surveillance in general, it might be better to ask: Panopticon for whom? In short, discussions centered on whether the Panopticon is right or wrong obscure its uneven distribution, how such surveillance comes to be unequally directed at some more than others. If you’re tired of all these “opticons” and “veillances”, sorry, but there’s yet another term that is in frequent use within surveillance studies, is extremely useful, is almost entirely missing in popular discourse, and is centered exactly on this question of the Panopticon for whom.

The Ban-Opticon

Introduced by theorist Didier Bigo*, the Ban-opticon describes the process of sorting that precedes the surveillance apparatus, panoptic or not. It is a quite useful framework for understanding not just the process of watching, but how people are sorted into being watched or not. The Ban-opticon operates at many levels. It operates through discourse, for example: “threat levels” or “axis of evil” create and sort people in and out of different surveillance apparatuses. At the level of architecture, you may notice how some people choose or are forced into different types of airport security depending on their income, name, ethnicity, country of origin, or political history, and this architectural sorting comes to be more or less panoptic depending on who you are. Of course, at the legal level, people are routinely sorted into different surveillant treatments, a too-simplistic way of describing a more complex surveillance situation [pdf].

Equally important to the actual process of watching is this preceding process: Given that I can see, whom should I look at? Under what rules, codes, and justifications do I look at one person and not another? By understanding this sorting and profiling —usually into categories like criminal or not, safe or not— the ban-optic perspective highlights how, on the one hand, the Panopticon is certainly not an all-encompassing process but, on the other, the panoptic urge is quite alive and well – precisely because of its uneven application.

While, in an aside, Tufekci mentions these “notable exceptions”, I think the general framing, that “the Panopticon has little to do with what happens in liberal democracies”, needs some correcting. The Panopticon continues to exist and has much to do with liberal democracies, even if it isn’t the model that describes how the average Western consumer is commonly watched. What we know about, say, the NSA and drone programs that is so controversial is precisely this sorting in and out of Foucauldian panoptic surveillance, that surveillance some call passé, which continues to exist exactly as the fear of always being watched by a coercive authority (usually the U.S. government). The Panopticon has much to do with what happens in liberal democracies because it is one tool such democracies use to keep people out, to maintain distinctions of “other”, and all this happens whether or not we’re at the business end of the panoptic gaze.

I think we need less surveillance theorizing that attempts to identify the one overarching metaphor to encompass all watching; instead, we need more work towards understanding the various types of surveillance and surveillant outcomes. Important to that project is taking the process of surveillant sorting as central.

I could end this here, but I’d like to conclude by arguing that the ban-optic perspective is useful beyond just saving the panoptic metaphor. It also helps us think about Big Data surveillance specifically. The process of sorting, panoptic or not, is fundamental to Big Data’s usefulness in the market. For example, sometimes people complain that one of the problems with Big Data is that sorting is done by algorithm rather than by humans. There’s lots of truth in that, but there’s a mostly semantic and technical correction that needs to be made: it’s wrong to call humans and algorithms opposites since all algorithms are human, they are codified rules based on and written by humans with certain politics, histories, and interests. That said, algorithmic sorting, through such codification, adds to a certain bureaucratic rationality that removes some wiggle-room for situations not previously anticipated. Instead of decisions made on a case-by-case basis, algorithmic sorting occurs, which is itself a particularly harsh Ban-opticon. This perspective asks us to look at how Big Data sorting can act to create “others”, to keep certain people distant.
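A minimal sketch can make concrete what codified sorting looks like. The rules, country codes, and tier names below are entirely invented for illustration; the point is only that once sorting is written as code, the rule ordering its authors chose is applied identically to every case, with no case-by-case wiggle room.

```python
# A toy "ban-optic" sorter -- hypothetical rules, not any real system.
# People are routed into surveillance tiers by codified criteria; the
# criteria and their order embody the rule-writers' politics.

def sort_traveler(traveler):
    """Route a traveler into a surveillance tier by fixed, ordered rules."""
    if traveler.get("on_watchlist"):
        return "detained-screening"  # heaviest, most panoptic tier
    if traveler.get("country") in {"X", "Y"}:  # placeholder country codes
        return "secondary-screening"
    if traveler.get("trusted_program"):
        return "expedited"  # sorted *out* of scrutiny entirely
    return "standard"

# Membership in a trusted program doesn't help this traveler: the
# country rule fires first, by design of whoever ordered the rules,
# and no agent is there to weigh the circumstances.
tier = sort_traveler({"country": "X", "trusted_program": True})
```

A human agent might weigh context and make an exception; the codified rule cannot, which is the bureaucratic harshness described above.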

That’s only one type of question this perspective can ask; certainly I haven’t done that analysis here. Instead, this is just a short note asking not for a single surveillance metaphor to understand Big Data, the NSA, and new surveillance technologies, but suggesting that we might look to a ban-optic understanding. It would mean taking into account a larger array of players in the modern surveillance game, more than just the top-down authorities, such as the government, that typical surveillance and privacy writing allows for. The ban-optic perspective demands a multiplicity of surveillance metaphors, which is crucial for Big Data that, at once, makes its presence known and hidden, encourages coercion and free expression, for different people at different times. There’s a lot of conceptual work to be done in theorizing Big Data, and I think the ban-optic perspective could be fruitful to include in this work.

Nathan on Tumblr and Twitter


*see:

Bigo, Didier (2006). Security, Exception, Ban and Surveillance. In D. Lyon (ed.), Theorizing Surveillance: The Panopticon and Beyond. Devon: Willan Publishing, pp. 46–68.

Bigo, Didier (2007). Detention of Foreigners, States of Exception, and the Social Practices of Control of the Banopticon. In P. K. Rajaram and C. Grundy-Warr (eds.), Borderscapes: Hidden Geographies and Politics at Territory’s Edge. Minneapolis: University of Minnesota Press, pp. 57–101.

Bigo, Didier (2008). Globalized (In)Security: The Field and the Ban-Opticon. In D. Bigo and A. Tsoukala (eds.), Terror, Insecurity and Liberty: Illiberal Practices of Liberal Regimes after 9/11. Oxon and New York: Routledge, pp. 10–48.


#review features links to, summaries of, and discussions around academic journal articles and books.

Today, guest contributor Rob Horning reviews: Life on automatic: Facebook’s archival subject by Liam Mitchell. First Monday, Volume 19, Number 2 – 3 February 2014 http://firstmonday.org/ojs/index.php/fm/article/view/4825/3823 doi: http://dx.doi.org/10.5210/fm.v19i2.4825

If, like me, you are skeptical of research on social media and subjectivity that takes the form of polling some users about their feelings, as if self-reporting didn’t raise any epistemological issues, this paper, steeped in Baudrillard, Derrida, and Heidegger, will come as a welcome change. It’s far closer to taking the opposite position, that whatever people say about their feelings should probably be discounted out of hand, given that what is more significant is the forces that condition the consciousness of such feelings. That approach is sometimes dismissed as failing to take into account individual agency; it’s implicitly treated as an affront to human dignity to presume that people’s use of technology might not be governed by full autonomy and voluntarism, that it’s tinfoil-hat silly to believe that something as consumer-friendly and popular as Facebook could be coercive, that the company could be working behind users’ backs to warp their experience of the world for the sake of Facebook’s bottom line.

Mitchell is not so overtly conspiratorial in this paper; he does not explicitly describe Facebook’s remaking of its users’ subjectivity as a kind of capitalist propaganda conducted at the level of the interface, though in an aside he notes that Mark Zuckerberg’s ideology of openness and connection “is helping to create a transnational, colonial, capitalist subject who is alienated from the product of their production/consumption, disillusioned with their mode of self–representation, and ironically disconnected from their friends.” Mitchell is more concerned with explaining how Facebook use changes the terms of what users regard as real.

Just as playing a lot of chess can prompt one to start seeing the world in terms of reciprocal moves, or long sessions of Photoshop can make one see reality as so many adjustable layers, cumulative Facebook use habituates users to view social reality as a “browseable archive” organized in terms of discrete yet infinitely connectable individual profiles. The “ontological assumptions about the informational character of the world” built into Facebook — the assumption that experience can be readily translated into sortable data with no meaningful loss of integrity — gradually become, Mitchell argues, the ontological assumptions of its users, producing what he calls “archival subjectivity.”

Part of this subjectivity is a preference for “convenience and automaticity” rather than “use or control”: that is, for Facebook users, what can easily be added to the archive seems more real than that which resists it. Having an automatically archived self promises ontological security, Mitchell suggests, to compensate for the “disposability of the digital world” and the erosion of traditional supports for stable identity. Also, since your identity is being built in Facebook as data without your active participation, it can be processed in various ways (laid out in a Timeline, say, or in a short clip about your year’s Facebook activity), allowing you to consume your own identity as a fascinating, perfectly targeted cultural good.

One consequence of this is that one begins to look to the archive for confirmation that something real has occurred. “Internet users can certainly return to the archive more quickly, sometimes immediately after an event has taken place, thanks to the instantaneity, in principle and increasingly in practice, of services like Timeline,” Mitchell argues. “And the nostalgia for events that have only just occurred becomes just as instantaneous.” This not only subtracts effort from sociality (which Mitchell, following Sherry Turkle, finds inherently problematic, adopting the work-ethic canard that value requires effort) but it affords users a more passive orientation toward experience, which seems real only when consumed as media, not when it happens as raw sensation. “The temporal and spatial mapping of the sort in which Facebook is involved in particular, replaces reality with a real–that–has–been–mapped,” Mitchell claims, drawing on Baudrillard’s ideas about simulacra. The screen becomes our phenomenological base: consuming media becomes the only form of sensory experience that registers as real, and time spent looking away from it is increasingly unreal, experientially empty.

It naturally follows that users would let foreknowledge of what could be captured on Facebook determine and circumscribe their behavior, conditioning what can be considered possible. This doesn’t mean that people can only think of doing what can be captured on Facebook; rather it means the intention of “not putting something on Facebook” is built into an experience as it happens and shapes the way it unfolds, just as the intention of mediatizing an experience would. To make it real for ourselves, we have to imagine it as sharable, browseable, even if we don’t then actually share it online. Hence the danger, in Mitchell’s view, is “not the degradation of privacy norms or the spread of liberal individualism or the rise in immaterial labor, but the alteration of what is taken as given and the subsequent establishment of a subject who will browse and do no more.” So much for changing the world; it’s sufficient to make it searchable.

Mitchell argues that this reinforces an atomized subject content to retrace familiar channels of predigested experience. “Facebook’s archival subject browses the world by way of representations that lie in front of reality and thereby constitute it, minimizing the chances of inconvenience and chance encounters, moving toward a preconceived connection.” Experiencing life as a conveniently browseable archive precludes some of the tolerance for discomfort and friction that is necessary to participate in collective action: “when the convenient path … becomes the only path that a user ever takes, this user loses some of the experiences that characterize communal living.”

To the extent that Facebook imposes “automaticity” on users and inculcates certain expectations of convenience in the realm of the social, it makes reciprocity seem a matter of asynchronous gestures rather than actual mutual attention. Other people can be “browsed” as a way of dealing with them. One can easily imagine Facebook extending automaticity to the point where it will automatically comment on and like friends’ posts for you and compose status updates most likely to appeal to others in the network, much in the same way it already tries to shape one’s news feed to appeal to you and maximize your time on site. Like the suggested tags Facebook floats as trial balloons, recommended comments and likes, whether accepted or rejected, could extend Facebook’s database with useful information about how accurate its algorithms are while further inculcating the kind of passive engagement with the site that keeps users isolated and more open and vulnerable to advertisements. It would allow Facebook to still collect user data while diminishing users’ active engagement, which threatens to lead them into actual sustained social interaction — a very unfavorable situation for the salience and persuasiveness of ads.

But it’s just as likely that users invert the “browsing” subjectivity rather than inhabit it unreflexively. The idea that we want sociality to be convenient and efficient is built into Facebook as a platform, but that doesn’t mean we necessarily have to inhabit that value system in using it. The idea that convenience is so irresistible that people’s yearnings are immediately and automatically reshaped in its image is itself part of capitalism’s ideology of individualism and “rational” maximization. Consumerism is anchored in the idea that people can be atomized and controlled by their desire for hyperpersonalized pleasures that other people only interfere with. But often pleasure is a matter of inconvenience, particularly when it involves social interaction. The inconvenience of other people, the circuitous routes we must take to communicate and establish shared bases for experience — these are inefficient but also so pleasurable that we often claim this pursuit of intimacy is the only “real” pleasure. Habituation to Facebook’s ontological assumptions, which reject such a view of intimacy, may have the effect of foregrounding the tension between the platform’s value system and our own, rather than allowing Facebook to function hegemonically as a kind of “pre-understanding.” We can end up embracing simultaneously the browseable reality Facebook provides and the unbrowseable reality that it frames and valorizes despite itself. Mitchell’s framework doesn’t account for the possibility that Facebook use can make us more aware of what it can’t capture rather than just blinding us to it.

The automation and social deskilling that engineers try to build into social media may inspire and enable resistance as much as compliance; they make some against-the-grain uses of the service as easy as conformist behavior. The degree to which Facebook wants to automate your social life is the degree to which one can readily toy with the automaticity, make it yield anomalies and generate a kind of “weird Facebook” that emphasizes the friction and unpredictability in information dissemination. It can permit users to easily create a kind of engagement in which inconvenience registers not as a dutiful sacrifice to the social (“I’m working hard to be so attentive to you, so you know my attention is genuine and our intimacy is real”) but as perplexed joy. If archival subjectivity represents a threat, a complicity with capital, it also offers users readily accessible ways to disrupt the browsing experience, to introduce entropy into the database and make searches yield less rational results. Anything to slow your scroll.

Rob Horning (@marginalutility) is an editor of The New Inquiry.



The debate over the ACA is carried out in ideological dogwhistles, waged with words barely capable of pointing to the concepts they are supposed to grip. While this well-oiled chatter is par for the course in American politics, it is doubly fruitless in the case of healthcare dot gov. The object of inquiry is a website, a kind of thing that is newer than our daily interactions would have us remember, and it has different ontological and anthropological qualities than our political commentators have learned to address. It is a kind of thing less suited to the language of communist bogeymen and other imaginary evils than to the technics of networks and the processual language of software project management, topics where most of our politicians sound clueless and should follow Wittgenstein’s advice (“hey, shut up a minute”).

At the most basic syntax of domain naming, the phrase “healthcare dot gov”, repeated as much by its detractors as by its proponents, is a statement about the relationship of these two things: healthcare and government. Dot com is business, dot xxx is porn; top level domains like these are well known by people who use the internet. Few governmental sites have gained the traction to become household names, while some commercial ones have become so ubiquitous as to effectively consume their parent (Google needs no further identification). Healthcare dot gov might be the first governmental site whose name everyone knows (especially since the notorious whitehouse dot com is no longer a porn site).

Obamacare (née the Affordable Care Act, née Romneycare) has been hard-pressed to find advocates excited to defend it (aside from the people no longer subject to medical discrimination). As an equally ambitious piece of legislation and software engineering, the ACA/healthcare dot gov assemblage must fight a two-front war against a motley horde of anxieties and misunderstandings. Despite the healthcare exchange’s problems, both real and imagined, the existence of such a site makes a statement about the role of government in healthcare.

Like any statement, this one can be accepted or rejected: listeners may take up the view of the world it represents or refuse it. And the sparse dot notation of domain names is a weak basis for claiming that Americans will accept the governmental administration of the trillion-dollar industry responsible for keeping them alive and well. One of the experiments that healthcare dot gov enacts is a measure of how effectively the rhetorics of the internet can persuade people to accept their view of the world.

The problems of healthcare dot gov are also less damning than critics would claim when it is viewed in the context of being a website. In my experience as a product manager for internet-y companies, building something new entails accepting unknown unknowns. You plan to discover flaws and fix them through iteration, and you will sound like a fool if you say you always get it right the first time. While the US government doesn’t have the same requirement to innovate that pervades Silicon Valley, in this case it didn’t have a choice. Building a national healthcare exchange is something that the US has never done. The website should not have launched late and in such poor shape, but, as someone familiar with the sausage and the factory, I have different expectations for the production cycle of software.

I also buy games and use websites. Games have bugs; they get patched. Many games are released before they are finished so that players get a chance to play sooner and provide feedback (Minecraft, arguably the most important game in the world, was developed in this manner). Websites (Twitter) crash, sometimes for hours at a time (Twitter), or they introduce bad features (Twitter) that need to be reverted (Twitter). When I worked on a Facebook game, we kept a sticker on the splash screen that said “beta” for two years as a statement that we were still improving. It was more than a little silly, but our players were never confused or mad; they understood it as harmless marketing propaganda rather than as the catastrophe of launching in an incomplete state.

This is not to say that healthcare dot gov is a triumph, but instead that many criticisms miss their mark because they are speaking a different language. We can see how criticisms based in non-digital references fail to address a digital phenomenon in the current conversation around Bitcoin. The real advantages of Bitcoin over existing currencies are not as great as its advocates would claim, and to address the problems that prevent businesses from accepting Bitcoin (for example, the massive swings in valuation that would make every vendor who accepts it a high-risk currency speculator) would result in Bitcoin being not so different from existing currencies. And yet, despite its problems, Bitcoin has a sizable market capitalization and a rabid fanbase. The different perceptions of Bitcoin’s critics and proponents aren’t because either group is stupid, but rather because the two are talking past each other. The power of Bitcoin is not really in what it offers today (buying illegal stuff and speculating, both possible long before Bitcoin) but in how it makes a rhetorical appeal that addresses the experience of using the internet. Seen from that side of the looking glass, fiat currencies waver with unreality while Bitcoin holds out the assurances that can only come with a strong cryptographic algorithm.

If Bitcoin is any indicator of the power of the internet to shape rhetorical success, the future of healthcare dot gov will likely be a surprise to most of the people professionally tasked with talking about it. By treating the digital as less real or reducible to analogues drawn from the old and known, such commenters have neglected to learn the rhetorical strategies that matter online. Phenomena like Bitcoin will continue to surprise until digital rhetorics receive their due respect, and the healthcare exchange could well be among those surprises. Rhetoric, naming: these things clearly matter, as the tug of war over whether to call it “the Affordable Care Act” or “Obamacare” shows, and talking in a way that speaks to our digital experiences will become increasingly important.

Greg Pollock is a game designer and writer in San Jose.

cachemonet.com

sites that rely heavily on simple voting have much higher percentages of male users

an ideology that sees every person as a potential threat & every communication as potentially worth surveilling

How does someone with an obvious resentment for the social sciences also make a joke about how we were always already alienated?

ViralNova does not exist. Those who speak of ViralNova miss the point of ViralNova

the gruff man taking drags from the e-cigarette may also have conceivably traversed the space-time continuum

the NSA had played a key role in nearly every major geopolitical and military event of the Cold War, with almost no public scrutiny

for now, there’s a really tidy profit to be made showing web pages to robots

Haraway’s theoretical misappropriations in the relationships between cyborgs and women of color

what happened to the internet after it stopped being a possibility

It’s a good time to start asking what civil inattention looks like on the internet

Twitter bots represent an open-access laboratory for creative programming

these thinkpieces discuss the social and political implications of these pieces without talking about the actual music

Nathan is on Twitter [@nathanjurgenson] and Tumblr [nathanjurgenson.com].