As technology expands its footprint across nearly every domain of contemporary life, some spheres raise particularly acute issues that illuminate broader trends. The criminal justice system is one such area, with automated systems being adopted widely and rapidly—and with activists and advocates beginning to push back with an alternative politics that seeks to ameliorate existing inequalities rather than instantiate and exacerbate them. The criminal justice system (and its well-known subsidiary, the prison-industrial complex) is a space often cited for its dehumanizing tendencies and outcomes; technologizing this realm may feed into these patterns, despite proponents pitching it as an “alternative to incarceration” that will promote more humane treatment through rehabilitation and employment opportunities.
As such, calls to modernize and reform criminal justice often manifest as a rapid move toward automated processes throughout many penal systems. Numerous jurisdictions are adopting digital tools at all levels, from policing to parole, in order to promote efficiency and (it is claimed) fairness. However, critics argue that mechanized systems—driven by Big Data, artificial intelligence, and human-coded algorithms—are ushering in an era of expansive policing, digital profiling, and punitive methods that can intensify structural inequalities. In this view, the embedded biases in algorithms can serve to deepen inequities, via automated systems built on platforms that are opaque and unregulated; likewise, emerging policing and surveillance technologies are often deployed disproportionately toward vulnerable segments of the population. In an era of digital saturation and rapidly shifting societal norms, these contrasting views of efficiency and inequality are playing out in especially stark form throughout the realm of criminal justice.
Tracking this arc, critical discourses on technology and social control have brought to light how decision-making algorithms can be a mechanism to “reinforce oppressive social relationships and enact new modes of racial profiling,” as Safiya Umoja Noble argues in her 2018 book, Algorithms of Oppression. In this view, the use of machine learning and artificial intelligence as tools of justice can yield self-reinforcing patterns of racial and socioeconomic inequality. As Cathy O’Neil discerns in Weapons of Math Destruction (2016), emerging models such as “predictive policing” can exacerbate disparate impacts by perpetuating data-driven policies whereby, “because of the strong correlation between poverty and reported crime, the poor continue to get caught up in these digital dragnets.” And in Automating Inequality (2018), Virginia Eubanks further explains how marginalized communities “face the heaviest burdens of high-tech scrutiny,” even as “the widespread use of these systems impacts the quality of democracy for us all.” In talks deriving from his forthcoming book Halfway Home, Reuben Miller advances the concept of “mass supervision” as an extension of systems of mass incarceration; whereas the latter has drawn a great deal of critical analysis in recent years, the former is potentially more dangerous as an outgrowth of patterns of mass surveillance and the erosion of privacy in the digital age—leading to what Miller terms a “supervised society.”
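To see how such self-reinforcing patterns can arise, consider a toy simulation of the dynamic O’Neil describes: patrols are allocated according to past recorded incidents, but incidents are mostly recorded where patrols are present, so a small initial disparity compounds over time. The neighborhoods, rates, and patrol counts below are illustrative assumptions, not a depiction of any actual predictive-policing product.

```python
import random

random.seed(0)

# Two neighborhoods with identical underlying rates of offending,
# but a slight initial gap in *recorded* incidents.
TRUE_RATE = {"A": 0.10, "B": 0.10}
recorded = {"A": 12, "B": 10}
PATROLS_PER_DAY = 20

for day in range(365):
    total = sum(recorded.values())
    for hood, rate in TRUE_RATE.items():
        # Patrols follow the data: allocation is proportional to recorded history.
        patrols = round(PATROLS_PER_DAY * recorded[hood] / total)
        # An incident enters the data only if a patrol is present to record it.
        for _ in range(patrols):
            if random.random() < rate:
                recorded[hood] += 1

print(recorded)  # the initial gap widens even though the true rates are equal
```

Even with identical behavior in both neighborhoods, the more-patrolled one ends the year with more recorded crime, which in turn justifies still more patrols: the “digital dragnet” feeds itself.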
Techniques of digital monitoring impact the entire population, but the leading edge of regulatory and punitive technologies is applied most directly to communities that are already over-policed. Some scholars and critics have been describing these trends under the banner of “E-carceration,” calling out methods that utilize tracking and monitoring devices to extend practices of social control that fall doubly (though not exclusively) on vulnerable communities. As Michelle Alexander recently wrote in the New York Times, these modes of digital penality are built on a foundation of “corporate secrets” and a thinly veiled impetus toward “perpetual criminalization,” constituting what she terms “the newest Jim Crow.” Nonetheless, while marginalized sectors are most directly impacted, as one of Eubanks’s informants warned us all: “You’re next.”
Advocates of automated and algorithmic justice methods often tout the capacity of such systems to reduce or eliminate human biases, achieve greater efficiency and consistency of outcomes, and ameliorate existing inequities through the use of better data and faster results. This trend is evident across myriad jurisdictions in the U.S. in particular (but not solely), as courts nationwide “are making greater use of computer algorithms to help determine whether defendants should be released into the community while they await trial.” In 2017, for instance, New Jersey introduced a statewide “risk assessment” system using algorithms and large data sets to determine bail, in some cases potentially supplanting judicial discretion altogether.
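For readers unfamiliar with how such instruments operate, the following is a minimal sketch of a points-based pretrial risk score in the general spirit of these tools. The factors, thresholds, and weights are hypothetical illustrations assumed for this example; they are not the actual New Jersey instrument or any vendor’s model.

```python
from dataclasses import dataclass

@dataclass
class Defendant:
    age: int
    prior_convictions: int
    prior_failures_to_appear: int
    pending_charge: bool

# Hypothetical weights, assumed purely for illustration.
WEIGHTS = {
    "young": 2,
    "prior_conviction": 1,
    "failure_to_appear": 2,
    "pending_charge": 1,
}

def risk_score(d: Defendant) -> int:
    """Sum weighted factors into a single score presented at a bail hearing."""
    score = 0
    if d.age < 23:
        score += WEIGHTS["young"]
    score += min(d.prior_convictions, 3) * WEIGHTS["prior_conviction"]
    score += min(d.prior_failures_to_appear, 2) * WEIGHTS["failure_to_appear"]
    if d.pending_charge:
        score += WEIGHTS["pending_charge"]
    return score  # higher scores are read as higher "risk" of flight or re-arrest

print(risk_score(Defendant(age=21, prior_convictions=2,
                           prior_failures_to_appear=1, pending_charge=True)))
```

Note that every input here (prior convictions, failures to appear, pending charges) is itself a record produced by past policing and prosecution, which is precisely the concern taken up below.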
Many have been critical of these processes, noting that these automated decisions are only as good as the data points utilized—which are often tainted both by preexisting subjective biases and by prior accumulations of structural bias encoded in people’s records. The algorithms deployed for these purposes are primarily conceived as “proprietary techniques” that are largely opaque and obscured from public scrutiny; as a recent law review article asserts, we may be in the process of opening up “Pandora’s algorithmic black box.” In evaluating these emerging techniques, researchers at Harvard University have thus expressed a pair of related concerns: (1) the critical “need for explainable algorithmic decisions to satisfy both legal and ethical imperatives,” and (2) the fact that “AI systems may not be able to provide human-interpretable reasons for their decisions given their complexity and ability to account for thousands of factors.” This raises foundational questions of justice, ethics, and accountability, but in practice the discussion is in danger of being mooted by widespread implementation.
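What an “explainable” decision would look like in practice can be sketched quite simply: a transparent scoring rule can report the factors behind its output, whereas a model weighing thousands of learned features cannot readily do so. The factor names and point values below are, again, assumptions for illustration only.

```python
def score_with_reasons(record: dict) -> tuple[int, list[str]]:
    """Return a score alongside human-readable reasons for it."""
    score, reasons = 0, []
    if record.get("age", 99) < 23:
        score += 2
        reasons.append("defendant is under 23")
    if record.get("prior_failures_to_appear", 0) > 0:
        score += 2
        reasons.append("prior failure(s) to appear")
    if record.get("pending_charge", False):
        score += 1
        reasons.append("charge pending at time of arrest")
    return score, reasons

score, reasons = score_with_reasons(
    {"age": 21, "prior_failures_to_appear": 1, "pending_charge": True}
)
print(score, reasons)  # the reasons can be stated on the record at a hearing
```

The legal and ethical demand is that a defendant be able to hear and contest something like that list of reasons; the Harvard researchers’ worry is that the most complex systems cannot produce one.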
Adopting digital mechanisms for policing and crime control without greater scrutiny can yield a divided society in which the inner workings (and associated power relations) of these tools are almost completely opaque and thus shielded from critique, while the outer manifestations are concretely inscribed and societally pervasive. The CBC radio program Spark recently examined a range of these new policing technologies, from body cams and virtual ride-along applications to those such as ShotSpotter that draw upon data gleaned from a vast network of recording devices embedded in public spaces. Critically assessing the much-touted benefits of such tools as a “Thin Blue Lie,” Matt Stroud challenges the prevailing view that these technologies are inherently helpful innovations, arguing instead that they have made policing more reckless, discriminatory, and unaccountable.
This has prompted a recent spate of critical interventions and resistance efforts, including a network galvanized under the banner of “Challenging E-Carceration.” In this framing, it is argued that “E-Carceration may be the successor to mass incarceration as we exchange prison cells for being confined in our own homes and communities.” This potential “net-widening” of enforcement mechanisms is compounded by new technologies that gather information about our daily lives, such as license plate readers and facial recognition software. As Miller suggested in his invocation of “mass supervision” as the logical extension of such patterns and practices, these effects may be most immediately felt by those already overburdened by systems of crime control, but the impacts are harbingers of wider forms of social control.
Some advocates thus have begun calling for a form of “digital sanctuary.” An important intervention along these lines has been offered by the Sunlight Foundation, which advocates for “responsible municipal data management.” Their detailed proposal begins with the larger justice implications inherent in emerging technologies, calling upon cities to establish sound digital policies: “Municipal departments need to consider their formal data collection, retention, storage and sharing practices, [and] their informal data practices.” In particular, it is urged that cities should not collect sensitive information “unless it is absolutely necessary to do so,” and likewise should “publicly document all policies, practices and requests which result in the sharing of information.” In light of the escalating use of data-gathering systems, this framework calls for protections that would benefit vulnerable populations and all residents.
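What “responsible municipal data management” might look like inside a city’s own systems can be sketched briefly: refuse to ingest sensitive fields absent a documented necessity, and log every data-sharing request to a public audit trail. The field names, categories, and file path below are assumptions for illustration, not the Sunlight Foundation’s specification.

```python
import datetime
import json

# Categories a city might treat as sensitive (illustrative, not exhaustive).
SENSITIVE_FIELDS = {"immigration_status", "religion", "precise_home_location"}

def ingest(record: dict, necessity_justification: str | None = None) -> dict:
    """Drop sensitive fields unless their collection is documented as necessary."""
    sensitive = SENSITIVE_FIELDS & record.keys()
    if sensitive and not necessity_justification:
        record = {k: v for k, v in record.items() if k not in sensitive}
    return record

def log_sharing_request(requester: str, dataset: str, purpose: str) -> None:
    """Append every sharing request to a publicly posted audit log."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "requester": requester,
        "dataset": dataset,
        "purpose": purpose,
    }
    with open("public_sharing_log.jsonl", "a") as f:
        f.write(json.dumps(entry) + "\n")
```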
These notions parallel the emergence of a wider societal discussion on technology, providing a basis for assessing which current techniques present the greatest threats to, and/or opportunities for, the cultivation of justice. Despite these efforts, critical questions remain as to whether public debate will catch up to the pace of adoption, and how these tools will evolve if left unchecked. As Adam Greenfield plaintively inquired in his 2017 book Radical Technologies: “Can we make other politics with these technologies? Can we use them in ways that don’t simply reproduce all-too-familiar arrangements of power?” This is the overarching task at hand, even as opportunities for public oversight seemingly remain elusive.
Randall Amster, J.D., Ph.D., is a teaching professor and co-director of environmental studies at Georgetown University in Washington, DC, and is the author of books including Peace Ecology. Recent work focuses on the ways in which technology can make people long for a time when children played outside and everyone was a great conversationalist. He cannot be reached on Twitter @randallamster.