Surveillance

Photo by blogtrepreneur.com/tech, Flickr CC

Surveillance technology dominates policing in many major cities, and software companies continue to develop tools that allow law enforcement to collect and analyze data on traffic violations, citizen complaints, and even license plate photographs. A recent CNN Tech article highlighted sociologist Sarah Brayne’s research on the Los Angeles Police Department’s use of one such data collection software, Palantir. Brayne’s findings suggest that while the use of big data in policing facilitates communication, it also raises major concerns about privacy and potential bias.

With the help of Palantir, LAPD officers use a point system to measure the risk posed by individuals with extensive criminal records, assigning points for a variety of infractions and police interactions. However, Brayne found that individuals from low-income communities of color are more likely to have their risk measured in the first place. She cautions that such systems can become cyclical, with more points leading to more police contact, and more police contact leading to more points.

Another potential problem is privacy. Palantir offers improved location-tracking capabilities and allows law enforcement to gather and connect more information about individuals than ever before, but this often includes information on individuals who have had no police contact at all. Certainly there are clear benefits: sharing data can help connect related crimes, and more information helps police work more efficiently and effectively. But challenges arise as the technology develops. Brayne warns,

“I’d caution against the thinking that if you have nothing to hide, you have nothing to fear. That logic rests on the assumption of the infallible state. It rests on the assumption that actors are entering information without error, prejudice or discretion.”

For more on the biases behind surveillance technologies, check out this TROT on computer code as free speech.

Does the “wrong” come in creating the secret or telling it? Photo by John Perivolaris via flickr.com.

Philosopher Peter Ludlow, a faculty member at Northwestern University, writes in a recent post for “The Stone” blog on NYTimes.com that, instead of undermining systems and generally acting immorally, people like Chelsea Manning and Edward Snowden took real risks to expose what Hannah Arendt famously called “the banality of systemic evil.” In a lengthy dissection, Ludlow looks at the leaks that so many have condemned and, noting that one of Aaron Swartz’s self-professed favorite books was the sociology text Moral Mazes, finds an emerging extra-institutional morality across the cases. Ludlow concludes:

…if there are psychological motivations for whistleblowing, leaking and hacktivism, there are likewise psychological motivations for closing ranks with the power structure within a system — in this case a system in which corporate media plays an important role. Similarly it is possible that the system itself is sick, even though the actors within the organization are behaving in accord with organizational etiquette and respecting the internal bonds of trust.

Just as Hannah Arendt saw that the combined action of loyal managers can give rise to unspeakable systemic evil, so too generation W has seen that complicity within the surveillance state can give rise to evil as well — not the horrific evil that Eichmann’s bureaucratic efficiency brought us, but still an Orwellian future that must be avoided at all costs.

For more on weighing the costs and benefits of surveillance, be sure to check out “A Social Welfare Critique of Contemporary Crime Control” and pretty much all of the Community Page Cyborgology here on TSP. For more on moral ambiguity, consider Moral Mazes by Robert Jackall (updated and released in paperback by Oxford University Press in 2009) and Teaching TSP’s piece on “Obedience to Authority” and the Milgram experiments.