Editor’s Note: We are re-posting this piece that originally ran in June 2016. With the newest season of OITNB launching this Friday, the post’s original author (Apryl Williams) reports that she has found no evidence of increased racial diversity in the OITNB writer’s room. In light of this, the message of her essay bears repeating.
Orange is the New Black’s newest season demands to be binge-watched, with its notorious twist-at-every-episode style. When it came out on June 17th, I began my annual binge session and had completed it by Saturday, June 18th.
If you haven’t heard, the series delivered “The mother of all finales” at the end of this season. As I mourned the death of a major black character, I found myself simultaneously mourning the real deaths of Eric Garner, Sandra Bland, Freddie Gray — and the list unfortunately goes on. The stylized portrayal of a death in prison custody at the hands – or knee rather – of a white correctional officer was unmistakably close to Garner’s “I can’t breathe.” Though those words were never uttered, anyone who has kept up with news in the last year would find haunting familiarity in the fictional inmate’s all-too-real gasps for air.
As her small frame and spine were gradually crushed under the full weight of the white correctional officer while she tried, and failed, to breathe, the imagery was almost too painful to watch. But I had come this far; I had to continue. At the end of the season, instead of falling into my usual “showhole” syndrome, I was angry and emotionally distraught. This had a visceral, personal effect, and nobody warned me it was coming. As the other inmates grieved the death of their friend and urged those in charge to move her body, I wondered who was responsible for writing these scenes and this episode. Surely a person of color would have cautioned against such tactics without ample viewer preparation. It appears as though the perspective of black viewers was not taken into consideration, a likely result of the limited representation we have in media production. Then I realized that to a white audience, a warning would not have the same meaning or importance.
Technological advancements have had a profound influence on social science research. The rise of the internet, mobile hardware, and app economies generates a breadth, depth, and type of data previously unimaginable, while computational capabilities allow granular analyses that reveal patterns across massive data sets. From these new types of data and forms of analysis has emerged a crisis and renaissance of methodological thought.
Early excitement around big data celebrated a world that would be entirely changed and entirely knowable. Big data would “revolutionize” the way we “live, work, and think,” claimed Viktor Mayer-Schönberger and Kenneth Cukier in their 2013 monograph, which so aptly captured the cultural zeitgeist energized around this new way of knowing. At the same time, social scientists and humanities scholars expressed concern that big data would displace their rich array of methodological traditions, undermining diverse scholarly practices and forms of knowledge production. However, with the hype around big data beginning to settle, polemic visions of omnipotence on the one hand, and bleak austerity on the other, seem unlikely to come to fruition.
While big data itself enables researchers to ask new kinds of questions, I argue that big data’s most significant effect has been to bring social thinkers back to the methodological (and philosophical) drawing board.
“It’s not about the money, it’s about the principle.” I’ve heard this phrase so many times from friends, colleagues and internet influencers who refuse to pay an extra charge for a service or product not deemed worthwhile. In an episode titled ‘No Change’, a famous influencer complained about what he felt was a growing phenomenon: waiters not giving back change when customers pay the bill. He expressed annoyance at ‘being duped’ by a waiter and went on to share that it should be his decision to leave a tip. In the wake of the ubiquity of imposed minimum charges at cafés in Egypt, people started resorting to storytelling on social media platforms to expose certain companies and improve standards of service. Instead of waiting on hold to make a complaint, one woman provided a detailed account on Facebook of her conversation with a waiter at a café, in which she explained to him that the minimum charge is an illegal practice and that he could not really force her to pay it. She shared what she felt was a success story on a group titled ‘Don’t shop here – a list of untrustworthy shops in Egypt’, a public Facebook group where middle-class Egyptians share stories about bad consumer experiences. The group now serves as an eclectic archive for a wide range of stories recounting bad experiences (from raw chicken at a famous restaurant to slow internet to undelivered customer service promises).
As the school year ends, we at Cyborgology thought it fitting to publish our first-ever anonymous contribution. We all have varying opinions about the views stated below, but we did agree that these are ideas worth putting out there for discussion.
To Whom It May Concern:
If it is your job to keep track of and rank institutions of higher education and publish that data in venues like U.S. News & World Report or the Princeton Review, I have a simple request for you. Please start keeping track of institutions’ administrator-to-faculty ratios and, in your proprietary ranking formulas, penalize institutions whose ratios are high. The reasoning here is equally straightforward: putting more emphasis on administrative work than on actual teaching and research is detrimental to student outcomes.
I wish I could say there was lots of data to back this up but, sadly, researchers are reluctant to publish findings that are directly hostile to their bosses. Still, there are preliminary findings worth paying attention to. For starters, a 2014 report by the Institute for Policy Studies found that within public universities, high president salaries and high administrative spending overall correlated positively with high student debt, high reliance on part-time adjunct hiring, and sharp declines in permanent tenure-track faculty. You already keep track of graduating students’ debt and the percentage of adjunct professors in the faculty pool, so why not track what seems to be a predictive variable for both of those things?
With advances in machine learning and a growing ubiquity of “smart” technologies, questions of technological agency rise to the fore of philosophical and practical importance. Technological agency implies deep ethical questions about autonomy, ownership, and what it means to be human(e), while engendering real concerns about safety, control, and new forms of inequality. Such questions, however, hinge on a more basic one: can technology be agentic?
To have agency, technologies need to want something. Agency entails values, desires, and goals. In turn, agency entails vulnerability, in the sense that the agentic subject—the one who wants some things and does not want others—can be deprived and/or violated should those wishes be ignored.
The presence vs. absence of technological agency, though an ontologically philosophical conundrum, can only be assessed through the empirical case. In particular, agency can be found or negated through an empirical instance in which a technological object seems, quite clearly, to express some desire. Such a case arises in the WCry ransomware virus ravaging network systems as I write.
Last Sunday French voters seemingly stemmed the tide of nationalist candidates winning major elections. I say seemingly because, as The Guardian reported: “Turnout was the lowest in more than 40 years. Almost one-third of voters chose neither Macron nor Le Pen, with 12 million abstaining and 4.2 million spoiling ballot papers.” The most disturbing statistic, though, is that nearly half of voters 18 to 24 voted for Le Pen. She may not have won this time, but the future in France looks pretty fascist. For now, though, France seems to have dodged a bullet of a familiar caliber.
Late last Friday night the Macron campaign announced it had been hacked and many internal documents had been leaked to the open internet through Pastebin, later spreading on /pol/ and Twitter. The comparisons to the American election were easy and numerous, but unlike the United States, France has a media blackout period. Elections are held on weekends and news reporting is severely limited. Emily Schultheis in The Atlantic explains:
Here, the pre-election ban on active campaigning, which begins at midnight the Friday night before an election, and ends only when the polls close Sunday night, is practically sacred. The pause is seen as a time when French voters can sit back, gather their information and reflect on their choice before heading to the voting booth on Sunday. It’s also the law: According to French election rules, the blackout includes not just candidate events but anything that could theoretically sway the course of the election: media commentary, interviews, and candidate postings on social media are not just illegal, but taboo.
Welcome to part three of my multi-part series on the history of the Quantified Self as a genealogical ancestor of eugenics. In last week’s post, I elucidated Francis Galton’s influence on experimental psychology, arguing that it was, largely, a technological one. In an oft-cited paper from 2013, researcher Melanie Swan argues that “the idea of aggregated data from multiple…self-trackers[, who] share and work collaboratively with their data” will help make that data more valuable—be it to the individual tracking, the physician working with them, the corporation selling the device worn, or another stakeholder (86). No doubt, then, the value of the predictive power of correlation and regression to these trackers. Harvey Goldstein, in a paper tracing Galton’s contributions to psychometrics, notes that Galton was not the only late-nineteenth-century scientist to believe that genius was passed down hereditarily. He was, however, one of the few to take up the task of designing a study to show genealogical causality regarding character, thanks once again to his correlation coefficient and resultant laws of regression.
In turbulent times there is something emotionally powerful about reliability in and of itself. Facebook, for all its faults, is reliable. I can bet on Facebook being up and available more often than the Internet connection I rely on to access it. Hell, it works more reliably than my toilet. Changes to the site trigger cascades of stories and opinions about user experience, which really goes to show how infrequently Facebook makes major alterations to core functions. You don’t have to like Facebook as a company or as a product to acknowledge that it is stable and works as intended more often than most other things. This transcendent reliability—a steadfast infrastructure of emotive communication and identity construction—has become Facebook’s core service. You may not like what you see in your timeline, but the timeline will be there.
Watching an organization embed itself into the lives of nearly a third of the global population is a strange thing. To be a common thread across all of those lives is to be as unthreatening and uncontroversial as possible. Conversely, it was only a matter of time before Facebook played host to something deeply disturbing like a murder, or even world-changing like a reactionary election. This tension between striving to be an unassuming background service and inevitably playing host to calamity goes a long way towards explaining why Mark Zuckerberg is traveling across the U.S. and writing 6,000-word manifestos about community, despite the fact that most Facebook users aren’t Americans and Facebook is not a community. Shoring up good will in the most powerful nation on the planet is not only good business, it taps into a tradition of American progressivism so embedded in our daily lives that we can’t recognize it when we see it enacted. It is the water we swim in, and Mark Zuckerberg wants to tint it Facebook blue.
Over the past decade, theorizing about data and digital media has typically been kept to spaces like New Media Studies. The rise of Digital Humanities as a strictly empirical field cuts against this grain in a manner worth examining. Part I, The Hegemony of Data, discusses a longer history of information to evaluate the intuitive sense of objectivity that surrounds “Big Data”. Part II, The Heavens and Hells of the Web, examines the initial beliefs in digital messianism as a method of eliminating moral and social problems, how they turned apocalyptic, and what lessons Digital Humanities should take from it. Part III, Digital Epistemology, goes beyond critique and builds a sense in which anti-colonial, anti-capitalist, moral visions of the future may benefit from and actually advance discourses through our experiences with digital tools and society.
According to its author, JG Ballard, Crash is ‘the first pornographic novel based on technology’. By turns bizarre and oddly pleasing, the book’s narrative drifts from one erotically charged description to another of mangled human bodies and machines fused together in the moment of a car crash. The “TV scientist”, Vaughan, and his motley crew of car crash fetishists seek out crashes in-the-making, even causing them, just for the thrill of it. The protagonist of the tale, one James Ballard, gets deeply entangled with Vaughan and his ragged band after his own experience of a crash. Vaughan’s ultimate sexual fantasy is to die in a head-on collision with the actress Elizabeth Taylor.
Like an STS scholar-version of Vaughan, I imagine what it may be like to arrive on the scene of a driverless car crash, and draw maps to understand what happened. Scenario planning is one kind of map-making to plan for ‘unthinkable futures’.
The ‘scenario’ is a phenomenon that became prominent during the Korean War, and through the following decades of the Cold War, to allow the US army to plan its strategy in the event of nuclear disaster. Peter Galison describes scenarios as a “literature of future war” “located somewhere between a story outline and ever more sophisticated role-playing war games”, “a staple of the new futurism”. Since then, scenario planning has been adopted by a range of organisations, and features in the modelling of risk and the identification of errors. Galison cites the Boston Group as having written a scenario – their very first one – in which feminist epistemologists, historians, and philosophers of science running amok might present a threat to the release of radioactive waste from the Cold War (“A Feminist World, 2091”).
The applications of the Trolley Problem to driverless car crashes are a sort of scenario-planning exercise. Now familiar to most readers of mainstream technology reporting, the Trolley Problem is presented as a series of hypothetical situations with different outcomes, derived from pitting consequentialism against deontological ethics. Trolley Problems are constructed as either/or scenarios in which a single choice must be made. MIT’s Moral Machine project materialises this thought experiment with an online template in which the user instructs the driverless car about which kinds of pedestrians to avoid in the case of brake failure: runaway criminals, pets, children, parents, athletes, old people, or fat people. Patrick Lin has a short animation describing the application of the Trolley Problem in the driverless car scenario.
The applications of the Trolley Problem, as well as of the Pascalian Wager (Bhargava, 2016), are attempts to create what-if scenarios. These scenarios guide the technical development of what has become both a holy grail and a smokescreen in talking about autonomous driving (in the absence of an actual autonomous vehicle): how can we be sure that driverless car software will make the right assessment about the value of human lives in the case of an accident?
These scenarios and their outcomes are being referred to as the ‘ethics’ of autonomous driving. In the development of driverless cars we see an ambition for the development of what James Moor refers to as an ‘explicit ethical agent’ – one that is able to calculate the best action in an ethical dilemma. I resist this construction of ethics because I claim that relationships with complex machines, and the outcomes of our interactions with them, are not either/or, and are perhaps far more entangled, especially in crashes. There is an assumption in applications of Trolley Problems that crashes take place in certain specific ways, that ethical outcomes can be computed in a logical fashion, and that this will amount to an accountability for machine behaviour. There is an assumption that the driverless car is a kind of neoliberal subject itself, which can somehow account for its behaviour just as it might some day just drive itself (thanks to Martha Poon for this phrasing).
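To see why the either/or construction is so reductive, it helps to write it down. The following is a purely hypothetical sketch of the kind of ‘explicit ethical agent’ described above; the category names and harm weights are invented for illustration and correspond to no real vehicle’s software.

```python
# Hypothetical sketch of a Trolley-Problem-style "explicit ethical agent":
# ethics reduced to a lookup table plus a forced binary choice.
# All weights below are invented for illustration only.

HARM = {"child": 5, "parent": 4, "athlete": 3, "elderly": 2, "pet": 1}

def choose_lane(left: list[str], right: list[str]) -> str:
    """Steer into whichever lane carries the lower total 'harm' score.

    The forced single output is exactly what the scenario format demands:
    every entangled crash must collapse into one branch of an if/else.
    """
    def harm(lane: list[str]) -> float:
        # Unknown categories get an arbitrary middle weight.
        return sum(HARM.get(kind, 3) for kind in lane)

    return "left" if harm(left) < harm(right) else "right"

print(choose_lane(["pet"], ["child"]))  # prints "left"
```

Notice what the sketch cannot represent: brake failure combined with sensor error, ambiguous perception, or shared responsibility between driver, manufacturer, and road design. Whatever happens, the function returns exactly one lane, which is the brittleness the essay resists.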
Crash scenarios are particular moments that are being planned for and against; what kinds of questions does the crash event allow us to ask about how we’re constructing relationships with and around machines with artificial intelligence technologies in them? I claim that when we say “ethics” in the context of hypothetical driverless car crashes, what we’re really saying is “who is accountable for this?”, or “who will pay for this?”, or “how can we regulate machine intelligence?”, or “how should humans apply artificially intelligent technologies in different areas?”. Some of these questions can be drawn out in terms of an accident involving a Tesla semi-autonomous vehicle.
In May 2016, a US Navy veteran was test-driving a Model S Tesla semi-autonomous vehicle. The test driver, who was allegedly watching a Harry Potter movie at the time with the car in ‘autopilot’ mode, drove into a large tractor trailer whose white surface was mistaken by the computer vision software for the bright sky. Thus the car did not stop, and went straight into the truck. The fault, it seemed, was the driver’s for trusting the autopilot mode, as the Tesla statement after the event says. The term ‘autopilot’ in a semi-autonomous car is perhaps misleading for those who go up in airplanes, so much so that the German government has told Tesla that it cannot use the word ‘autopilot’ in the German market.
In order to understand what might have happened in the Tesla case, it is necessary to look at applications of computer vision and machine learning in driverless cars. A driverless car is equipped with a variety of sensors and cameras that will record objects around it. These objects will be identified by specialised deep learning algorithms called neural nets. Neural nets are distinct in that they can be programmed to build internal models for identifying features of a dataset, and can learn how those features are related without being explicitly programmed to do so (NVIDIA 2016; Bojarski et al 2016).
Computer vision software makes an image of an object and breaks that image up into small parts – edges, lines, corners, colour gradients and so on. By looking at billions of images, neural nets can identify patterns in how combinations of edges, lines, corners and gradients come together to constitute different objects; this is its ‘model making’. The expectation is that such software can identify a ball, a cat, or a child, and that this identification will enable the car’s software to make a decision about how to react based on that identification.
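The decomposition into edges and gradients described above can be illustrated with a single hand-coded filter. This is a minimal sketch for intuition only: an actual driverless-car stack learns thousands of such filters inside a neural net rather than using one fixed Sobel-style kernel, and the image here is a toy array, not camera data.

```python
# Illustrative only: how low-level vision turns an image into edge
# responses by sliding a small filter (kernel) across it.
import numpy as np

def convolve2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid-mode 2-D cross-correlation, as used in convolutional nets."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny grayscale "image": dark left half, bright right half,
# i.e. a single vertical edge.
img = np.zeros((5, 5))
img[:, 3:] = 1.0

# A Sobel-style filter that responds strongly to vertical edges.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

edges = convolve2d(img, sobel_x)
print(edges)  # large values mark where the dark/bright boundary lies
```

A neural net generalises this: instead of one fixed kernel, it learns layers of kernels whose combined responses come to stand in for ‘cat’, ‘ball’, or ‘child’, which is the model-making the paragraph above describes.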
Yet, this is a technology still in development and there is the possibility for much confusion. So, things that are yellow, or things that have faces and two ears on top of the head, which share features such as shape, or where edges, gradients, and lines come together, can be misread until the software sees enough examples that distinguish how things that are yellow, or things with two ears on the top of the head, are different. The more visually complex something is, without solid edges, curves or single colours, or if an object is fast, small, or flexible, the more difficult it is to read. So, computer vision in cars is shown to have a ‘bicycle problem’ because bicycles are difficult to identify, do not always have a specific, structured shape, and can move at different speeds.
In the case of the Tesla crash, the software misread the large expanse of the side of the trailer truck for the sky. It is possible that the machine learning was not well-trained enough to make the distinction. The Tesla crash suggests that there was both an error in the computer vision and machine learning software, as well as a lapse on the part of the test driver who misunderstood what autopilot meant. How, then, are these separate conditions to be understood as part of the dynamic that resulted in the crash? How might an ethics be imagined for this sort of crash that comes from an unfortunate entanglement of machine error and human error?
Looking away from driverless car crashes, and to aviation crashes instead, a specific narrative around the idea of accountability for crashes emerges. Galison, again, in his astounding chapter An Accident of History, meticulously unpacks aviation crashes and how they are investigated, documented and recorded. In doing so he makes the point that it is near impossible to escape the entanglements between human and machine actors involved in an accident, and that in attempting to identify how and why a crash occurred, we find a “recurrent strain between a drive to ascribe final causation to human factors and an equally powerful, countervailing drive to assign agency to technological factors.” Galison finds that in accidents, human action and material agency are entwined to the point that causal chains both seem to terminate at particular, critical actions, as well as radiate out towards human interactions and organisational cultures. Yet, what is embedded in accident reporting is the desire for a “single point of culpability”, as Alexander Brown puts it, which never seems to come.
Brown’s own accounts of accidents and engineering at NASA, and Diane Vaughan’s landmark ethnography about the reasons for the Challenger Space Shuttle crash, suggest the same: organisational culture and bureaucratic processes, faulty design, a wide swathe of human errors, and combinations of these are all implicated in how crashes of complex vehicles occur.
Anyone who has ever been in a car crash probably agrees that correctly identifying exactly what happened is, at the least, a challenging task. How can the multiple, parallel conditions present in driving and car crashes be conceptualised? Rather than an ‘ethics of driverless cars’ as a set of programmable rules for appropriate action, could it be imagined as a process by which an assemblage of people, social groups, cultural codes, institutions, regulatory standards, infrastructures, technical code, and engineering are framed in terms of their interaction? Mike Ananny suggests that “technology ethics emerges from a mix of institutionalized codes, professional cultures, technological capabilities, social practices, and individual decision making.” He shows that ethics is not a “test to be passed or a culture to be interrogated but a complex social and cultural achievement” (emphasis in original).
What the Trolley Problem scenario and the applications of machine learning in driving suggest is that we’re seeing a shift in how ethics is being constructed: from accounting for crashes after the fact, to pre-empting them (though the automotive industry has been using computer-simulated crash modeling for over twenty years); from ethics that is about values, or reasoning, to ethics as based on datasets of correct responses; and, crucially, ethics as the outcome of software engineering. Specifically in the context of driverless cars, there is a shift from ethics as a framework of “values for living well and dying well”, as Gregoire Chamayou puts it, to a framework for “killing well”, or ‘necroethics’.
Perhaps the unthinkable scenario to confront is that ethics is not a machine-learned response, nor an end-point, but a series of socio-technical, technical, human, and post-human relationships, ontologies, and exchanges. These challenging and intriguing scenarios are yet to be mapped.
Maya is a PhD candidate at Leuphana University and is Director of Applied Research at Tactical Tech. She can be reached via Twitter.
(Some ideas in this paper have been developed for a paper submitted for publication in a peer-reviewed journal, APRJA)
We live in a cyborg society. Technology has infiltrated the most fundamental aspects of our lives: social organization, the body, even our self-concepts. This blog chronicles our new, augmented reality.