privacy

Drew Harwell (@DrewHarwell) wrote a balanced article in the Washington Post about the ways universities are using wifi, bluetooth, and mobile phones to enact systematic monitoring of student populations. The article offers multiple perspectives that variously support and critique the technologies at play and their institutional implementation. I’m here to lay out in clear terms why these systems should be categorically resisted.

The article focuses on the SpotterEDU app, which advertises itself as an “automated attendance monitoring and early alerting platform.” The idea is that students download the app and universities can then easily keep track of who’s coming to class and also identify students who may be in, or on the brink of, crisis (e.g., a student only leaves her room to eat and therefore may be experiencing mental health issues). As university faculty, I would find these data useful. But they are not worth the social costs. more...
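To make concrete what this kind of monitoring involves, here is a minimal sketch assuming a hypothetical feed of room-level presence pings (student, timestamp, location). SpotterEDU’s actual data model and alerting rules are not public, so every name, room label, and threshold below is illustrative only.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical room-level presence pings: (student, timestamp, location).
pings = [
    ("alice", "2019-12-02 09:02", "SCI-101"),      # shows up to class
    ("alice", "2019-12-02 12:30", "dining-hall"),
    ("bob",   "2019-12-02 12:35", "dining-hall"),
    ("bob",   "2019-12-02 18:10", "dining-hall"),  # bob only ever pings at meals
]

CLASS_ROOM = "SCI-101"
CLASS_START = datetime(2019, 12, 2, 9, 0)

def parse(ts):
    return datetime.strptime(ts, "%Y-%m-%d %H:%M")

# Attendance: anyone pinged in the classroom within 15 minutes of the start time.
attended = {
    student for student, ts, loc in pings
    if loc == CLASS_ROOM and abs((parse(ts) - CLASS_START).total_seconds()) <= 15 * 60
}

# Crude "early alert": flag a student whose only recorded location is the dining
# hall, i.e., someone who apparently leaves her room only to eat.
locations = defaultdict(set)
for student, _, loc in pings:
    locations[student].add(loc)
flagged = {student for student, locs in locations.items() if locs == {"dining-hall"}}

print("attended class:", attended)        # {'alice'}
print("flagged for outreach:", flagged)   # {'bob'}
```

Even this toy version shows how quickly attendance logging shades into behavioral profiling: the same location stream serves both purposes.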

 

Stories of data breaches and privacy violations dot the news landscape on a near-daily basis. This week, security vendor Carbon Black published their Australian Threat Report, based on 250 interviews with tech executives across multiple business sectors. Of those interviewed, 89% reported some form of data breach in their companies. That’s almost everyone. These breaches represent both a business problem and a social problem. Privacy violations threaten institutional and organizational trust, and they expose individuals to surveillance and potential harm.

But “breaches” are not the only way that data exposure and privacy violations take shape. Often, widespread surveillance and exposure are integral to technological design. In such cases, exposure isn’t leveled at powerful organizations, but enacted by them.  Legacy services like Facebook and Google trade in data. They provide information and social connection, and users provide copious information about themselves. These services are not common goods, but businesses that operate through a data extraction economy.

I’ve been thinking a lot about the cost-benefit dynamics of data economies and, in particular, how to grapple with the fact that for most individuals, including myself, the data exchange feels relatively inconsequential or even mildly beneficial. Yet at a societal level, the breadth and depth of normative surveillance are devastating. Resolving this tension isn’t just an intellectual exercise, but a way of answering the persistent and nagging question: “Why should I care if Facebook knows where I ate brunch?” This is often wrapped in a broader “nothing to hide” narrative, in which data exposure is a problem only for deviant actors.

more...

I recently started a podcast called The Peepshow Podcast with Jessie Sage, and we recorded an interview with Kashmir Hill that may be of interest to Cyborgology readers.

Hill (@kashhill) is an investigative reporter with Gizmodo Media Group. She recently wrote an article on how Facebook’s “People You May Know” feature outs sex workers. We discuss the ways Facebook/Instagram algorithms may put marginalized people (sex workers, queer youth, domestic abuse survivors, etc.) at risk, as well as possible ways of safeguarding users’ identities.

(You can find the uploaded contacts feature mentioned in this segment here.)
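For readers who want a concrete picture of the mechanism Hill describes, here is a toy sketch of how an uploaded address book can bridge identities a person deliberately keeps separate. This is purely illustrative, with made-up account names and numbers, and is not Facebook’s actual “People You May Know” implementation.

```python
# Toy illustration only: not Facebook's actual "People You May Know" pipeline.
# Two accounts belonging to the same person, each meant for a different audience.
accounts = {
    "work_persona_account": {"phone": "+1-555-0117"},
    "legal_name_account":   {"phone": "+1-555-0117"},  # same number, same person
}

# A third party uploads their address book, which contains that phone number.
uploaded_contacts = {
    "uploader123": ["+1-555-0117", "+1-555-0199"],
}

def pymk_candidates(uploader):
    """Join uploaded numbers against account records: every account reachable at
    an uploaded number becomes a friend suggestion for the uploader."""
    numbers = set(uploaded_contacts[uploader])
    return [name for name, info in accounts.items() if info["phone"] in numbers]

print(pymk_candidates("uploader123"))
# ['work_persona_account', 'legal_name_account']
# Both personas surface together, which is how a single contact upload can link
# someone's professional identity to their legal-name account.
```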


With advances in machine learning and a growing ubiquity of “smart” technologies, questions of technological agency rise to the fore of philosophical and practical importance. Technological agency raises deep ethical questions about autonomy, ownership, and what it means to be human(e), while engendering real concerns about safety, control, and new forms of inequality. Such questions, however, hinge on a more basic one: can technology be agentic?

To have agency, technologies need to want something. Agency entails values, desires, and goals. In turn, agency entails vulnerability, in the sense that the agentic subject—the one who wants some things and does not want others—can be deprived and/or violated should those wishes be ignored.

The presence vs. absence of technological agency, though an ontological and philosophical conundrum, can only be assessed through the empirical case. In particular, agency can be found or negated through an empirical instance in which a technological object seems, quite clearly, to express some desire. Such a case arises in the WCry (WannaCry) ransomware ravaging network systems as I write. more...

Making the world a better place has always been central to Mark Zuckerberg’s message. From community building to a long record of insistent authenticity, the goal of fostering a “best self” through meaningful connection underlies various iterations and evolutions of the Facebook project. In this light, the company’s recent move to deploy artificial intelligence towards suicide prevention continues the thread of altruistic objectives.

Last week, Facebook announced an automated suicide prevention system to supplement its existing user-reporting model. While previously, users could alert Facebook when they were worried about a friend, the new system uses algorithms to identify worrisome content. When a person is flagged, Facebook contacts that person and connects them with mental health resources.

Far from artificial, the intelligence that Facebook algorithmically constructs is meticulously designed to pick up on cultural cues of sadness and concern (e.g., friends asking ‘are you okay?’). What Facebook has done is supplement personal intelligence with systematized intelligence, all based on a combination of personal biographies and cultural repositories. If it’s not immediately clear how you should feel about this new feature, that’s for good reason. Automated suicide prevention as an integral feature of the primordial social media platform brings up dense philosophical concerns at the nexus of mental health, privacy, and corporate responsibility. Although a blog post is hardly the place to solve such tightly packed issues, I do think we can unravel them through recent advances in affordances theory. But first, let’s lay out the tensions. more...
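To make the idea of “systematized intelligence” concrete, here is a minimal sketch of cue-based flagging, assuming a simple keyword model and a made-up threshold. Facebook has not published its actual classifier, so the cues, scoring, and cutoff below are illustrative rather than a description of the real system.

```python
# A minimal sketch of cue-based flagging with a toy keyword model; Facebook's
# actual classifier, features, and thresholds are not public.
CONCERN_CUES = ["are you okay", "i'm worried about you", "please call me"]
DISTRESS_CUES = ["can't go on", "goodbye everyone", "no way out"]

def concern_score(post_text, comments):
    text = post_text.lower()
    score = sum(cue in text for cue in DISTRESS_CUES)
    # Friends' comments often carry the clearest cultural signal (e.g., "are you okay?").
    score += sum(cue in comment.lower() for comment in comments for cue in CONCERN_CUES)
    return score

def flag_for_review(post_text, comments, threshold=2):
    """Route a post toward human review and resource prompts once cues accumulate."""
    return concern_score(post_text, comments) >= threshold

print(flag_for_review("goodbye everyone, there's no way out",
                      ["are you okay?? please call me"]))  # True
```

The point of the sketch is that the “intelligence” here is cultural through and through: the system only works because it encodes how people already express worry for one another.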

podcast

Last week The New Inquiry published an essay I wrote about science journalism podcasts syndicated on NPR. Shows like Radiolab, The TED Radio Hour, Hidden Brain, Invisibilia, Note to Self, and Freakonomics Radio, I argued, were more about wrapping pre-conceived notions in a veneer of data than changing minds or delivering new insights into long-standing problems. Worse yet, social and political issues that might be met with collective action are turned into wishy-washy “well isn’t that interesting” anecdotes:

Topics that might have once been subject to political debate or rhetorical argument–work demands, exposure to toxins, surveillance, the limits of love, even Marxian alienation–become apolitical subjects for scientific testing. But the results only lead to greater and greater complexity, prompting introspective thought rather than action.

more...

The hack and leak of Colin Powell’s emails have brought with them a national conversation about journalistic ethics. At stake are the competing responsibilities for journalists to respect privacy on the one hand, and to inform the public of relevant goings-on on the other.

Powell’s emails, ostensibly hacked and leaked by Russian government forces, revealed incendiary comments about both Donald Trump and Hillary Clinton. Coming from a man known for maintaining a reserved and diplomatic approach, the indiscreet tone of Powell’s emails had the appeal of an unearthed and long-suspected truth.

The news media responded to the leaked emails by plastering their content on talk shows and websites, accompanied by expert commentary and in-depth political analyses. Line by line, readers, viewers, and listeners learned, with a sense of excitement and validation, what Colin Powell “really thinks.” more...

Nick Bilton’s neighbor flew a drone outside the window of Bilton’s home office. It skeeved him out for a minute, but he got over it. His wife was more skeeved out. She may or may not have gotten over it (but probably not). Bilton wrote about the incident for The New York Times, where he works as a columnist. Ultimately, Bilton’s story concludes that drone watching is no big deal, analogous to peeping-via-binoculars, and that the best response is to simply ignore drone-watchers until they fly their devices away. With all of this, I disagree.

Drone privacy is a fraught issue, one of the many in which slow legislative processes have been outpaced by technological developments. While there remains a paucity of personal-drone laws, case precedent trends toward punishing those who damage other people’s drones and protecting the drone owners who fly their devices into the airspace around private homes. Through legal precedent, then, privacy takes a backseat to property.

Bilton spends the majority of his article parsing this legal landscape, and tying the extant legal battles to his own experience of being watched. He begins with an account of looking out his window to see a buzzing drone hovering outside. He is both amused and disturbed, as the drone intrusion took place while he was already writing an article about drones. He reports feeling first violated and intruded upon, but this feeling quickly fades, morphing into quite the opposite. He says:   more...

[Image: blacked-out Twitter image from my post last week]

Netiquette. I seriously hate that word. BUT an issue of internet-based etiquette (blogger etiquette, specifically) recently came to my attention, and I’m interested in others’ practices and thoughts.

As a blogger, I often analyze content from Facebook and Twitter. In doing so, I usually post images of actual tweets, comments, and status updates. These are forms of data, and are useful in delineating the public tenor with regard to a particular issue, the arguments on opposing sides of a debate, and the ‘voice’ with which people articulate their relevant thoughts and sentiments.

As a common practice, I black out all identifying information when reposting this content. Last week, I posted some tweets with the names and images redacted. A reader commented on my post to ask why I did so, given that the tweets were public. We had a quick discussion, but, as I mentioned in that discussion, this issue deserves independent treatment. more...

Can a gift be a data breach? Lots of Apple product users think so, as evidenced by the strong reaction against the company for their unsolicited syncing of U2’s latest album, Songs of Innocence, to 500 million iCloud accounts. Although part of the negative reaction stems from differences of musical taste, what Apple shared with customers seems less important than the fact that they put content on user accounts at all.

With a proverbial expectant smile, Apple gifted the album’s 11 songs to unsuspecting users. A promotional move, this was timed with the launch of the iPhone 6 and the Apple Watch. And much like teenagers who find that their parents spent the day reorganizing their bedrooms, some customers found the move invasive rather than generous.

Sarah Wanenchack has done some great work on this blog with regard to device ownership—or more precisely, our increasing lack of ownership over the devices that we buy. That Apple can, without user permission, add content to our devices highlights this lack of ownership. Music is personal. Devices are personal. And they should be. We bought them with our own money. And yet, these devices remain accessible to the company from which they came; they remain alterable; they remain—despite a monetary transaction that generally implies buyer ownership—nonetheless shared. And this, for some people, is offensive. more...