Facebook

Stories of data breaches and privacy violations dot the news landscape on a near-daily basis. This week, security vendor Carbon Black published its Australian Threat Report, based on 250 interviews with tech executives across multiple business sectors. 89% of those interviewed reported some form of data breach at their companies. That’s almost everyone. These breaches represent both a business problem and a social problem. Privacy violations threaten institutional and organizational trust, and they also expose individuals to surveillance and potential harm.

But “breaches” are not the only way that data exposure and privacy violations take shape. Often, widespread surveillance and exposure are integral to technological design. In such cases, exposure isn’t leveled at powerful organizations, but enacted by them. Legacy services like Facebook and Google trade in data. The services provide information and social connection, and users provide copious information about themselves. These services are not common goods, but businesses that operate through a data extraction economy.

I’ve been thinking a lot about the cost-benefit dynamics of data economies and, in particular, how to grapple with the fact that for most individuals, including myself, the data exchange feels relatively inconsequential or even mildly beneficial. Yet at a societal level, the breadth and depth of normative surveillance are devastating. Resolving this tension isn’t just an intellectual exercise, but a way of answering the persistent and nagging question: “why should I care if Facebook knows where I ate brunch?” This is often wrapped in a broader “nothing to hide” narrative, in which data exposure is a problem only for deviant actors.

more...

I recently started a podcast called The Peepshow Podcast with Jessie Sage, and we recorded an interview with Kashmir Hill that may be of interest to Cyborgology readers.

Hill (@kashhill) is an investigative reporter with Gizmodo Media Group. She recently wrote an article on how Facebook’s “People You May Know” feature outs sex workers. We discuss the ways Facebook/Instagram algorithms may put marginalized people (sex workers, queer youth, domestic abuse survivors, etc.) at risk, as well as possible ways of safeguarding users’ identities.

(You can find the uploaded contacts feature mentioned in this segment here.)

Last Sunday, French voters seemingly stemmed the tide of nationalist candidates winning major elections. I say seemingly because, as The Guardian reported: “Turnout was the lowest in more than 40 years. Almost one-third of voters chose neither Macron nor Le Pen, with 12 million abstaining and 4.2 million spoiling ballot papers.” The most disturbing statistic, though, is that nearly half of voters aged 18 to 24 voted for Le Pen. She may not have won this time, but the future in France looks pretty fascist. For now, though, France seems to have dodged a bullet of a familiar caliber.

Late last Friday night, the Macron campaign announced it had been hacked and that many internal documents had been leaked to the open internet through Pastebin, later spreading on /pol/ and Twitter. The comparisons to the American election were easy and numerous, but unlike the United States, France has a media blackout period. Elections are held on weekends, and news reporting is severely limited. Emily Schultheis in The Atlantic explains:

Here, the pre-election ban on active campaigning, which begins at midnight the Friday night before an election, and ends only when the polls close Sunday night, is practically sacred. The pause is seen as a time when French voters can sit back, gather their information and reflect on their choice before heading to the voting booth on Sunday. It’s also the law: According to French election rules, the blackout includes not just candidate events but anything that could theoretically sway the course of the election: media commentary, interviews, and candidate postings on social media are not just illegal, but taboo.

more...

Making the world a better place has always been central to Mark Zuckerberg’s message. From community building to a long record of insistent authenticity, the goal of fostering a “best self” through meaningful connection underlies various iterations and evolutions of the Facebook project. In this light, the company’s recent move to deploy artificial intelligence towards suicide prevention continues the thread of altruistic objectives.

Last week, Facebook announced an automated suicide prevention system to supplement its existing user-reporting model. Previously, users could alert Facebook when they were worried about a friend; the new system uses algorithms to identify worrisome content. When a person is flagged, Facebook contacts that person and connects them with mental health resources.

Far from artificial, the intelligence that Facebook algorithmically constructs is meticulously designed to pick up on cultural cues of sadness and concern (e.g., friends asking ‘are you okay?’). What Facebook has done is supplement personal intelligence with systematized intelligence, all based on a combination of personal biographies and cultural repositories. If it’s not immediately clear how you should feel about this new feature, that’s for good reason. Automated suicide prevention as an integral feature of the primordial social media platform brings up dense philosophical concerns at the nexus of mental health, privacy, and corporate responsibility. Although a blog post is hardly the place to solve such tightly packed issues, I do think we can unravel them through recent advances in affordances theory. But first, let’s lay out the tensions. more...
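To make that mechanism concrete, here is a minimal, purely hypothetical sketch of comment-based flagging, assuming a hand-picked list of concern phrases and a review threshold. Facebook’s actual classifiers are proprietary; every name, phrase, and value below is invented for illustration.

```python
# Purely hypothetical sketch: Facebook's real classifiers are proprietary.
# A post is flagged for review when enough friends' comments contain
# culturally legible cues of concern (e.g., "are you okay?").

CONCERN_PHRASES = [  # assumed cue list, not Facebook's
    "are you okay",
    "thinking of you",
    "please call me",
    "here if you need",
]

def concern_score(comments: list[str]) -> int:
    """Count comments containing at least one concern cue."""
    return sum(
        any(phrase in comment.lower() for phrase in CONCERN_PHRASES)
        for comment in comments
    )

def should_flag(comments: list[str], threshold: int = 2) -> bool:
    """Flag a post once enough friends express concern."""
    return concern_score(comments) >= threshold

if __name__ == "__main__":
    comments = ["Are you okay??", "miss you", "Please call me tonight"]
    print(should_flag(comments))  # True: two concern cues detected
```

Even this toy version surfaces the tension described above: the “intelligence” is nothing more than friends’ expressions of care, systematized into a trigger.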

Thiel - Girard

During the week of July 12, 2004, a group of scholars gathered at Stanford University, as one participant reported, “to discuss current affairs in a leisurely way with [Stanford emeritus professor] René Girard.” The proceedings were later published as the book Politics and Apocalypse. At first glance, the symposium resembled many others held at American universities in the early 2000s: the talks proceeded from the premise that “the events of Sept. 11, 2001 demand a reexamination of the foundations of modern politics.” The speakers enlisted various theoretical perspectives to facilitate that reexamination, with a focus on how the religious concept of apocalypse might illuminate the secular crisis of the post-9/11 world.

As one examines the list of participants, one name stands out: Peter Thiel, not, like the rest, a university professor, but (at the time) the President of Clarium Capital. In 2011, the New Yorker called Thiel “the world’s most successful technology investor”; he has also been described, admiringly, as a “philosopher-CEO.” More recently, Thiel has been at the center of a media firestorm for his role in bankrolling Hulk Hogan’s lawsuit against Gawker, which outed Thiel as gay in 2007 and whose journalists he has described as “terrorists.” He has also garnered some headlines for standing as a delegate for Donald Trump, whose strongman populism seems an odd fit for Thiel’s highbrow libertarianism; he recently reinforced his support for Trump with a speech at the Republican National Convention. Both episodes reflect Thiel’s longstanding conviction that Silicon Valley entrepreneurs should use their wealth to exercise power and reshape society. But to what ends? Thiel’s participation in the 2004 Stanford symposium offers some clues. more...

Facebook Reactions don’t grant expressive freedom; they tighten the platform’s affective control.

The range of human emotion is both vast and deep. We are tortured, elated, and ambivalent; we are bored and antsy and enthralled; we project and introspect and seek solace and seek solitude. Emotions are heavy, except when they’re light. So complex is human affect that artists and poets make careers attempting to capture the elusive sentiments that drive us, incapacitate us, bring us together, and tear us apart. Popular communication media are charged with the overwhelming task of facilitating the expression of human emotion, by humans who are so often unsure how they should—or even do—feel. For a long time, Facebook handled this with a “Like” button.

Last week, the Facebook team finally expanded the emotional repertoire available to users. “Reactions,” as Facebook calls them, include not only “Like,” but also “Love,” “Haha,” “Wow,” “Sad,” and “Angry.” The “Like” option is still signified by a version of the iconic blue thumbs-up, while the other Reactions are signified by yellow emoji faces.

Ostensibly, Facebook’s Reactions give users the opportunity to more adequately respond to others, given the desire to do so with only the effort of a single click. The available Reaction categories are derived from the most common one-word comments people left on their friends’ posts, combined with sentiments users commonly expressed through “stickers.” At a glance, this looks like greater expressive capacity for users, rooted in the sentimental expressions of users themselves. And this is exactly how Facebook bills the change—it captures the range of users’ emotions and gives those emotions back to users as expressive tools.
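As a rough illustration of the kind of frequency analysis described above, here is a hypothetical sketch that surfaces the most common one-word comments as candidate reaction categories. The sample data, function names, and cutoff are invented, not Facebook’s.

```python
# Hypothetical sketch of the frequency analysis described above:
# surfacing the most common one-word comments as candidate reaction
# categories. Sample data and names are invented.

from collections import Counter

def candidate_reactions(comments: list[str], top_n: int = 5) -> list[str]:
    """Return the most frequent one-word comments, most common first."""
    one_worders = [
        c.strip().lower() for c in comments if len(c.split()) == 1
    ]
    return [word for word, _ in Counter(one_worders).most_common(top_n)]

if __name__ == "__main__":
    sample = ["wow", "Congrats on the new job!", "sad",
              "wow", "haha", "sad", "wow"]
    print(candidate_reactions(sample))  # ['wow', 'sad', 'haha']
```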

However, the notion of greater expressive capacity through the Facebook platform is not only illusory, but masks the way that Reactions actually strengthen Facebook’s affective control. more...

Almost two years ago, Facebook waved the rainbow flag and metaphorically opened its doors to all of the folks who identify outside of the gender binary. Before Facebook announced this change in February of 2014, users were only able to select ‘male’ or ‘female.’ Suddenly, with this software modification, users could choose a ‘custom’ gender that offered 56 new options (including agender, gender non-conforming, genderqueer, non-binary, and transgender). Leaving aside the troubling, but predictable, transphobic reactions, many were quick to praise the company. These reactions could be summarized as: ‘Wow, Facebook, you are really in tune with the LGBTQ community and on the cutting edge of the trans rights movement. Bravo!’ Indeed, it is easy to acknowledge the progressive trajectory that this shift signifies, but we must also look beyond the optics to assess the specific programming decisions that led to this moment.

To be fair, many were also quick to point to the limitations of the custom gender solution. For example, why wasn’t a freeform text field used? Google+ also shifted to a custom solution 10 months after Facebook, but they did make use of a freeform text field, allowing users to enter any label they prefer. By February of 2015, Facebook followed suit (at least for those who select US-English).

There was also another set of responses with further critiques: more granular options for gender identification could entail increased vulnerability for groups who are already marginalized. Perfecting your company’s capacity to turn gender into data also heightens its capacity to document and surveil your users. Yet the beneficiaries of this data are not always visible. This is concerning, particularly when we recall that marginalization is closely associated with discriminatory treatment. Transgender women suffer disproportionate levels of hate violence from police, service providers, and members of the public, and for people who are both trans and women of color, that violence is increasingly murder.

Alongside these horrific realities, there is more to the story – hidden in a deeper layer of Facebook’s software. When Facebook’s software was programmed to accept 56 gender identities beyond the binary, it was also programmed to misgender users when it translated those identities into data to be stored in the database. In my recent article in New Media & Society, ‘The gender binary will not be deprogrammed: Ten years of coding gender on Facebook,’ I expose this finding in the midst of a much broader examination of a decade’s worth of programming decisions that have been geared towards creating a binary set of users. more...
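To illustrate the pattern at stake, here is a deliberately simplified, hypothetical sketch of an interface layer that accepts a custom gender label while the storage layer records only a binary code. The field names, numeric codes, and pronoun rule are assumptions for illustration, not Facebook’s actual schema.

```python
# Deliberately simplified illustration of the pattern described above:
# a non-binary identity accepted at the interface layer is collapsed
# into a binary code at the storage layer. Field names, codes, and the
# pronoun rule are assumptions, not Facebook's actual schema.

STORED_FEMALE, STORED_MALE = 1, 2  # hypothetical database codes

def store_gender(custom_label: str, pronoun: str) -> int:
    """Persist a 'custom' gender by re-encoding it via pronoun choice.

    custom_label is accepted and then discarded; that is the point.
    The database row only ever holds a binary value, so the
    misgendering happens silently at this translation step.
    """
    return STORED_FEMALE if pronoun == "she" else STORED_MALE

if __name__ == "__main__":
    # The interface accepted 'genderqueer'; the database stored 2.
    print(store_gender("genderqueer", pronoun="they"))
```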

At this point everyone is undoubtedly aware of the school shooting at Umpqua Community College in Oregon, though I am certain the method by which we came across the news varied. Some of us were notified of the event by friends with fingers closer to the pulse, and still more of us learned about it first-hand through a variety of content aggregation platforms, such as Reddit.

I would hazard that the majority of people learned about it first and foremost through social media, primarily Facebook or Twitter. I certainly did. A friend first told me about it through Facebook Messenger, and almost immediately after she did, I started to see articles trickling into my newsfeed in the background of my Messenger window. And the moment that happened, I shut my Facebook tab down, despite the fact that I compulsively, addictively, have it open almost all the time.

Facebook, taken as a whole, is a fantastic way for people to compare each other’s lives and share pictures of kittens and children, but when it comes to a tragedy, the platform is woefully inadequate at allowing its users to parse and process the gravity of events as they unfold. It is a thorough shock to a system that thrives on irreverent links and topical memes, and when faced with an item that requires genuine reflection and thought, it is often simpler – indeed, even more beneficial – for users to turn their heads until the kittens may resume. more...

Disclaimer: Nothing I say in this post is new or theoretically novel. The story to which I’ll refer already peaked over the weekend, and what I have to say about it–that trolling is sometimes productive–is a point well made by many others (like on this blog last month by Nathan Jurgenson). But seriously, can we all please just take a moment and bask in appreciation of trolling at its best?

For those who missed it, Target recently announced that they would do away with gender designations for kids’ toys and bedding. The retailer’s move toward gender neutrality, unsurprisingly, drew ire from bigoted jerks who apparently fear that mixing dolls with trucks will hasten the unraveling of American society (if David Banks can give himself one more “calls it as I sees it” moment, I can too).

Sensing “comedy gold,” Mike Melgaard went to Target’s Facebook page. He quickly created a fake Facebook account under the name “Ask ForHelp,” with a red bullseye as the profile picture. Using this account to pose as the voice of Target’s customer service, he then proceeded to respond with sarcastic mockery to customer complaints. And hit gold, Mike did!! For 16 hilarious hours, transphobic commenters provided a rich well of comedic fodder. Ultimately, Facebook stopped the fun by removing Melgaard’s Ask ForHelp account. Although Target never officially endorsed Melgaard, they made their support clear in this Facebook post on Thursday evening: more...