surveillance

Drew Harwell (@DrewHarwell) wrote a balanced article in the Washington Post about the ways universities are using wifi, bluetooth, and mobile phones to enact systematic monitoring of student populations. The article offers multiple perspectives that variously support and critique the technologies at play and their institutional implementation. I’m here to lay out in clear terms why these systems should be categorically resisted.

The article focuses on the SpotterEDU app, which advertises itself as an “automated attendance monitoring and early alerting platform.” The idea is that students download the app, and universities can then easily keep track of who’s coming to class and also identify students who may be in, or on the brink of, crisis (e.g., a student only leaves her room to eat and therefore may be experiencing mental health issues). As university faculty, I would find these data useful. They are not worth the social costs. more...

As technology expands its footprint across nearly every domain of contemporary life, some spheres raise particularly acute issues that illuminate larger trends at hand. The criminal justice system is one such area, with automated systems being adopted widely and rapidly—and with activists and advocates beginning to push back with alternate politics that seek to ameliorate existing inequalities rather than instantiate and exacerbate them. The criminal justice system (and its well-known subsidiary, the prison-industrial complex) is a space often cited for its dehumanizing tendencies and outcomes; technologizing this realm may feed into these patterns, despite proponents pitching this as an “alternative to incarceration” that will promote more humane treatment through rehabilitation and employment opportunities.

As such, calls to modernize and reform criminal justice often manifest as a rapid move toward automated processes throughout many penal systems. Numerous jurisdictions are adopting digital tools at all levels, from policing to parole, in order to promote efficiency and (it is claimed) fairness. However, critics argue that mechanized systems—driven by Big Data, artificial intelligence, and human-coded algorithms—are ushering in an era of expansive policing, digital profiling, and punitive methods that can intensify structural inequalities. In this view, the biases embedded in algorithms can serve to deepen inequities, via automated systems built on platforms that are opaque and unregulated; likewise, emerging policing and surveillance technologies are often deployed disproportionately toward vulnerable segments of the population. In an era of digital saturation and rapidly shifting societal norms, these contrasting views of efficiency and inequality are playing out starkly throughout the realm of criminal justice. more...


Stories of data breaches and privacy violations dot the news landscape on a near-daily basis. This week, security vendor Carbon Black published their Australian Threat Report, based on 250 interviews with tech executives across multiple business sectors. Of those interviewed, 89% reported some form of data breach in their companies. That’s almost everyone. These breaches represent both a business problem and a social problem. Privacy violations threaten institutional and organizational trust and also expose individuals to surveillance and potential harm.

But “breaches” are not the only way that data exposure and privacy violations take shape. Often, widespread surveillance and exposure are integral to technological design. In such cases, exposure isn’t leveled at powerful organizations, but enacted by them.  Legacy services like Facebook and Google trade in data. They provide information and social connection, and users provide copious information about themselves. These services are not common goods, but businesses that operate through a data extraction economy.

I’ve been thinking a lot about the cost-benefit dynamics of data economies and, in particular, how to grapple with the fact that for most individuals, including myself, the data exchange feels relatively inconsequential or even mildly beneficial. Yet at a societal level, the breadth and depth of normative surveillance is devastating. Resolving this tension isn’t just an intellectual exercise, but a way of answering the persistent and nagging question: “why should I care if Facebook knows where I ate brunch?” This is often wrapped in a broader “nothing to hide” narrative, in which data exposure is a problem only for deviant actors.

more...

Image by Al Ibrahim

I want all of your mind
People turn the TV on, it looks just like a window…

Digital witnesses
What’s the point of even sleeping?

— St. Vincent, “Digital Witness” (2014)


Each day seemingly brings new revelations as to the extent of our Faustian bargain with the purveyors of the digital world in which we find ourselves. Our movements, moods, and monies are tracked with relentless precision, yielding the ability to not only predict future behaviors but to actively direct them. Permissions are sometimes given with pro forma consent, while other times they’re simply baked into the baseline of the shiniest and newest hardware and software alike. Back doors, data breaches, cookies and trackers, smart everything, always-on devices, and so much more — to compare Big Tech to Big Brother is trite by now, even as we might soon look back on the latter as a quaint form of social control.

While data breaches and privacy incursions are very serious and have tangible consequences, debates over user rights and platform regulation barely scratch the surface. Deeper questions about power, autonomy, and what it means to be human still loom, largely unexamined. And when these concerns are voiced at all, they can often seem retrogressive, as if they represent mere longings for a bygone (pre-internet) time when children played outside, politics was honorable, and everyone was a great conversationalist. Despite ostensible consternation when something goes egregiously wrong (like influencing an election, let’s say), the public and political conversation around mass data collection and its commercialization never goes far enough: why do so many seemingly reasonable and critical people accept a surveillance-for-profit economy (with all of us as the primary commodity) as tolerable at all? more...

Or, When Batting Your Eyelashes Doesn’t Work

I was recently asked to run a “Sex School” seminar on dirty talk at a sex club in Toronto. The invitation came by way of Twitter, in large part because of the profile I maintain partly to advertise my phone sex services. Toronto is half a day’s drive from where I live, so I drove up on the morning of the event. For this reason, I primped at home beforehand, which included full makeup appropriate for my role as sexpert at a club (read: heavy eyeliner and fake eyelashes – sexy for the club, a bit over the top for daylight).

What I didn’t anticipate was the way that my appearance (in conjunction with my role as a paid speaker at a sex club) would read to the Border Patrol. Indeed, while they didn’t say it explicitly, they flagged me as a sex worker, and detained my husband and me under that suspicion.

The line of questioning went something like this: Where are you speaking? What do you do to get such speaking gigs? Is there money involved? How much? Did you bring a contract? How will they pay you? I told the Canadian Border Patrol agent that this was all negotiated on Twitter, and she asked for my phone, making sure the social media apps were accessible. We were told to return to our seats, where we watched her comb through my phone from a distance.

I was surprised to receive this level of scrutiny, but perhaps I shouldn’t have been. Just a few months earlier, I was at an industry event in Miami where several sex cam models on the Canadian side of the border were denied entry into the United States after they had their social media profiles examined, which outed them as cam models. They had to forfeit their tickets and hotel accommodations, not to mention their presence at an event that was meant to help bolster their careers. So, I suppose it shouldn’t have been too much of a shock that this sort of scrutiny would also flow in the other direction.

more...

Making the world a better place has always been central to Mark Zuckerberg’s message. From community building to a long record of insistent authenticity, the goal of fostering a “best self” through meaningful connection underlies various iterations and evolutions of the Facebook project. In this light, the company’s recent move to deploy artificial intelligence towards suicide prevention continues the thread of altruistic objectives.

Last week, Facebook announced an automated suicide prevention system to supplement its existing user-reporting model. While previously, users could alert Facebook when they were worried about a friend, the new system uses algorithms to identify worrisome content. When a person is flagged, Facebook contacts that person and connects them with mental health resources.
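To make the mechanism concrete, here is a minimal, purely illustrative sketch of what cue-based flagging could look like. The cue phrases, the flag_post function, and the matching logic below are my own assumptions for illustration; Facebook’s actual system is proprietary and far more sophisticated than simple phrase matching.

```python
# Toy sketch of cue-based flagging (illustrative assumptions only;
# Facebook's real system is proprietary and far more sophisticated).

# Hypothetical cue phrases: expressions of distress in a post, and
# expressions of concern from friends in the comments.
DISTRESS_CUES = ["i can't go on", "no reason to live", "goodbye everyone"]
CONCERN_CUES = ["are you okay?", "please talk to me", "i'm worried about you"]

def flag_post(post_text: str, comments: list[str]) -> bool:
    """Flag a post for human review if the post or its comments match a cue."""
    text = post_text.lower()
    if any(cue in text for cue in DISTRESS_CUES):
        return True
    return any(
        cue in comment.lower() for comment in comments for cue in CONCERN_CUES
    )

# A worried friend's comment is enough to trigger review.
print(flag_post("feeling really low today", ["Are you okay? Call me."]))  # True
```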

Far from artificial, the intelligence that Facebook algorithmically constructs is meticulously designed to pick up on cultural cues of sadness and concern (e.g., friends asking ‘are you okay?’). What Facebook’s done is supplement personal intelligence with systematized intelligence, all based on a combination of personal biographies and cultural repositories. If it’s not immediately clear how you should feel about this new feature, that’s for good reason. Automated suicide prevention as an integral feature of the primordial social media platform brings up dense philosophical concerns at the nexus of mental health, privacy, and corporate responsibility. Although a blog post is hardly the place to solve such tightly packed issues, I do think we can unravel them through recent advances in affordances theory. But first, let’s lay out the tensions. more...

Le Corbusier’s La Ville Radieuse

“The motor has killed the great city. The motor must save the great city.”

— Le Corbusier, 1924


In the fast and shallow anxiety around driverless cars, not much attention is being paid to what driving in cities will itself become, and not just for drivers (of any kind of car) but also for pedestrians, governments, regulators, and the law. This post is about the ‘relative geographies’ being produced by driverless cars, drones, and big data technologies. Another way to think about this may be: what is the city when it is made for autonomous vehicles with artificial intelligence? more...

Panama Papers

Hacking is the new social justice activism, and the Panama Papers are the result of an epic hack. Consisting of 11.5 million files and 2.6 TB of data, the body of content given to German newspaper Süddeutsche Zeitung by an anonymous[1] source and then analyzed by the International Consortium of Investigative Journalists (ICIJ) is uniquely behemoth. At roughly 1,500 times the size, it puts WikiLeaks’ 1.7 GB to shame.

The documents were obtained from Mossack Fonseca. The company is among the largest offshore law firms, and its emails and other electronic documents tell a compelling (if not entirely surprising) story about untraceable monetary exchanges and the ways that state leaders manage to grow their wealth while maintaining a façade of economic neutrality. By forming shell companies, people can move money without attaching that money to themselves. This is not in itself illegal, but it certainly fosters illicit activity. more...

A 1916 American Mug Shot

Visual technologies play an increasingly key role in strategies for monitoring and surveillance in modern capitalist societies: in crime prevention and detection, and in the apprehension, recording, documentation, and classification of criminals and criminal activities. Still and moving ‘visual evidence’ is stored in state archives, used in courtrooms as evidence, and disseminated across almost every major media platform, from the printed press to the World Wide Web.

The relationship between visual technologies and the criminal justice system can be traced back to the emergence of photography and the invention of the camera as a tool for documenting ‘reality’ in the nineteenth century. The camera was widely believed, even more so than today, to objectively and truthfully record social reality. A photograph was perceived to be like a window on the world – a mechanically produced, impartial, and literal representation of the real world. more...

cameras and justice

Dear Cyborgology Readers, we want you to write for us! In our first-ever thematic CFP, we invite guest posts about Cameras and Justice. This theme is broad in scope, and we encourage you to put your own spin on it.

If you have an idea, pitch us. If you have a full post, send it our way. We will be taking submissions on this theme until mid June.

Posts generally run between 500 and 1,500 words. Authors should write in a clear and accessible style (think upper-level undergraduate or well-read non-academic). We welcome traditional text-based essays, image-based essays, and art pieces.

To get the brain juices flowing, here are a few pieces on Cameras and Justice from the Cyborgology team:

Cameras on Cops Isn’t the Same as Cops on Camera

ACLU Mobile Justice App: Channeling Citizen Voices

Sousveillance and Justice: A Panopticon in the Crowds

Surveillance from the Clouds to the Fog

Other riffs on this theme could include children’s privacy, tourism, unsolicited dick pics, structural oppression aided by the rhetoric of authenticity, and much, much more.

For submissions, questions, and proposals, email co-editors David Banks (david.adam.banks@gmail.com) and Jenny Davis (jdavis11474@gmail.com) using the subject line “Cameras and Justice.”

Remember that Cyborgology (for better or worse) is an all-volunteer effort, and we cannot pay for writing.
