
“It’s not about the money, it is about the principle.” I’ve heard this phrase so many times from friends, colleagues, and internet influencers who refuse to pay an extra charge for a service or product they don’t deem worthwhile. In an episode titled ‘No Change’, a famous influencer complained about what he felt was a growing phenomenon: waiters not giving back change when he pays the bill. He expressed annoyance at ‘being duped’ by a waiter and went on to say that it should be his decision to leave a tip. In the wake of the ubiquity of imposed minimum charges at cafés in Egypt, people have resorted to storytelling on social media platforms to expose certain companies and improve standards of service. Instead of waiting on hold to make a complaint, one woman provided a detailed account on Facebook of her conversation with a waiter at a café, in which she explained to him that a minimum charge is an illegal practice and that he couldn’t really force her to pay it. She shared what she felt was a success story on a group titled ‘Don’t shop here – a list of untrustworthy shops in Egypt’, a public Facebook group where middle-class Egyptians share stories about bad consumer experiences. The group now serves as an eclectic archive for a wide range of stories recounting bad experiences (from raw chicken at a famous restaurant to slow internet to undelivered customer service promises). more...

As the school year ends we at Cyborgology thought it fitting to publish our first-ever anonymous contribution. We all have varying opinions about the views stated below but we did agree that these are ideas worth putting out there for discussion.

Excerpt from an infographic included in the Institute for Policy Studies’ report on college president pay. Full graphic here.

To Whom It May Concern:

If it is your job to keep track of and rank institutions of higher education and publish that data in venues like U.S. News & World Report or the Princeton Review, I have a simple request for you. Please start keeping track of institutions’ administrator-to-faculty ratios and, in your proprietary ranking formulas, reduce the numerical rank of institutions with a low ratio. The reasoning here is equally straightforward: putting more emphasis on administrative work than on actual teaching and research is detrimental to student outcomes.

I wish I could say there was lots of data to back this up but, sadly, researchers are reluctant to publish findings that are directly hostile to their bosses. Still, there are preliminary findings worth paying attention to. For starters, a 2014 report by the Institute for Policy Studies found that, within public universities, high president salaries and high administrative spending overall correlated positively with high student debt, heavy reliance on part-time adjunct hiring, and sharp declines in permanent tenure-track faculty. You already keep track of graduating students’ debt and the percentage of adjunct professors in the faculty pool, so why not track what seems to be a predictive variable for both of those things? more...


With advances in machine learning and a growing ubiquity of “smart” technologies, questions of technological agency rise to the fore of philosophical and practical importance. Technological agency implies deep ethical questions about autonomy, ownership, and what it means to be human(e), while engendering real concerns about safety, control, and new forms of inequality. Such questions, however, hinge on a more basic one: can technology be agentic?

To have agency, technologies need to want something. Agency entails values, desires, and goals. In turn, agency entails vulnerability, in the sense that the agentic subject—the one who wants some things and does not want others—can be deprived and/or violated should those wishes be ignored.

The presence vs. absence of technological agency, though an ontologically philosophical conundrum, can only be assessed through the empirical case. In particular, agency can be found or negated through an empirical instance in which a technological object seems, quite clearly, to express some desire. Such a case arises in the WCry ransomware virus ravaging network systems as I write. more...

Last Sunday French voters seemingly stemmed the tide of nationalist candidates winning major elections. I say seemingly because, as The Guardian reported: “Turnout was the lowest in more than 40 years. Almost one-third of voters chose neither Macron nor Le Pen, with 12 million abstaining and 4.2 million spoiling ballot papers.” The most disturbing statistic, though, is that nearly half of voters aged 18 to 24 voted for Le Pen. She may not have won this time, but the future in France looks pretty fascist. For now, though, France seems to have dodged a bullet of a familiar caliber.

Late last Friday night the Macron campaign announced it had been hacked and that many internal documents had been leaked to the open internet through Pastebin, later spreading on /Pol/ and Twitter. The comparisons to the American election were easy and numerous, but unlike the United States, France has a media blackout period. Elections are held on weekends and news reporting is severely limited. Emily Schultheis in The Atlantic explains:

Here, the pre-election ban on active campaigning, which begins at midnight the Friday night before an election, and ends only when the polls close Sunday night, is practically sacred. The pause is seen as a time when French voters can sit back, gather their information and reflect on their choice before heading to the voting booth on Sunday. It’s also the law: According to French election rules, the blackout includes not just candidate events but anything that could theoretically sway the course of the election: media commentary, interviews, and candidate postings on social media are not just illegal, but taboo.

more...

Welcome to part three of my multi-part series on the history of the Quantified Self as a genealogical ancestor of eugenics. In last week’s post, I elucidated Francis Galton’s influence on experimental psychology, arguing that it was, largely, a technological one. In an oft-cited paper from 2013, researcher Melanie Swan argues that “the idea of aggregated data from multiple…self-trackers[, who] share and work collaboratively with their data” will help make that data more valuable—be it to the individual tracking, the physician working with them, the corporation selling the device worn, or another stakeholder (86). The predictive power of correlation and regression is, no doubt, valuable to these trackers. Harvey Goldstein, in a paper tracing Galton’s contributions to psychometrics, notes that Galton was not the only late-nineteenth-century scientist to believe that genius was inherited. He was, however, one of the few to take up the task of designing a study to show genealogical causality regarding character, thanks once again to his correlation coefficient and resultant laws of regression. more...

In turbulent times there is something emotionally powerful about reliability in and of itself. Facebook, for all its faults, is reliable. I can bet on Facebook being up and available more often than the Internet connection I rely on to access it. Hell, it works more reliably than my toilet. Changes to the site trigger cascades of stories and opinions about user experience, which really goes to show how infrequently Facebook makes major alterations to core functions. You don’t have to like Facebook as a company or as a product to acknowledge that it is stable and works as intended more often than most other things. This transcendent reliability—a steadfast infrastructure of emotive communication and identity construction—has become Facebook’s core service. You may not like what you see in your timeline, but the timeline will be there.

Watching an organization embed itself into the lives of nearly a third of the global population is a strange thing. To be a common thread across all of those lives is to be as unthreatening and uncontroversial as possible. At the same time, it was only a matter of time before Facebook played host to something deeply disturbing like a murder, or even world-changing like a reactionary election. This tension between striving to be an unassuming background service and inevitably playing host to calamity goes a long way towards explaining why Mark Zuckerberg is traveling across the U.S. and writing 6,000-word manifestos about community, despite the fact that most Facebook users aren’t Americans and Facebook is not a community. Shoring up good will in the most powerful nation on the planet is not only good business, it taps into a tradition of American progressivism so embedded in our daily lives that we can’t recognize it when we see it enacted. It is the water we swim in, and Mark Zuckerberg wants to tint it Facebook blue. more...

Michele Graffieti’s narrative panorama from “Mapping the Republic of Letters,” one of the more famous examples of Digital History.

Over the past decade, theorizing about data and digital media has typically been kept to spaces like New Media Studies. The rise of Digital Humanities as a strictly empirical field cuts against this grain in a manner worth examining. Part I: The Hegemony of Data discusses a longer history of information to evaluate the intuitive sense of objectivity that surrounds “Big Data”. Part II: The Heavens and Hells of the Web examines the initial beliefs in digital messianism as a method of eliminating moral and social problems, how they turned apocalyptic, and what lessons Digital Humanities should take from them. Part III: Digital Epistemology goes beyond critique and builds a sense in which anti-colonial, anti-capitalist, moral visions of a future may benefit and actually advance discourses through our experiences with digital tools and society.
more...

Street view, Calcutta, 2010.

 

According to its author JG Ballard, Crash is ‘the first pornographic novel based on technology’. From bizarre to oddly pleasing, the book’s narrative drifts from one erotically charged description to another of mangled human bodies and machines fused together in the moment of a car crash. The “TV scientist”, Vaughan, and his motley crew of car-crash fetishists seek out crashes in the making, even causing them, just for the thrill of it. The protagonist of the tale, one James Ballard, gets deeply entangled with Vaughan and his ragged band after his own experience of a crash. Vaughan’s ultimate sexual fantasy is to die in a head-on collision with the actress Elizabeth Taylor.

Like an STS scholar’s version of Vaughan, I imagine what it might be like to arrive on the scene of a driverless car crash, and to draw maps to understand what happened. Scenario planning is one kind of map-making to plan for ‘unthinkable futures’.

The ‘scenario’ is a phenomenon that became prominent during the Korean War, and through the following decades of the Cold War, to allow the US army to plan its strategy in the event of nuclear disaster. Peter Galison describes scenarios as a “literature of future war”, “located somewhere between a story outline and ever more sophisticated role-playing war games”, and “a staple of the new futurism”. Since then, scenario planning has been adopted by a range of organisations, and features in the modelling of risk and the identification of errors. Galison cites the Boston Group as having written a scenario – their very first one – in which feminist epistemologists, historians, and philosophers of science running amok might present a threat to the release of radioactive waste from the Cold War (“A Feminist World, 2091”).

Applications of the Trolley Problem to driverless car crashes are a sort of scenario-planning exercise. Now familiar to most readers of mainstream technology reporting, the Trolley Problem is presented as a series of hypothetical situations with different outcomes, derived from pitting consequentialism against deontological ethics. Trolley Problems are constructed as either/or scenarios in which a single choice must be made. MIT’s Moral Machine project materialises this thought experiment with an online template in which the user completes scenarios by instructing the driverless car which kinds of pedestrian to avoid in the case of brake failure: runaway criminals, pets, children, parents, athletes, old people, or fat people. Patrick Lin has a short animation describing the application of the Trolley Problem to the driverless car scenario.
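
To make concrete how such an exercise reduces a crash to a single either/or choice, here is a minimal sketch (my own illustration in Python; the field names, labels, and scoring rule are invented, not MIT’s actual format) of a Moral Machine-style dilemma in which the car’s only options are to stay on course or to swerve, and the “ethical” answer is simply whichever option the scenario’s designer has weighted as preferable:

```python
# Hypothetical sketch of a Moral Machine-style dilemma; the fields and the
# scoring rule are invented for illustration, not taken from the MIT project.
from dataclasses import dataclass

@dataclass
class Dilemma:
    brake_failure: bool
    stay_course_harms: list[str]   # who is hit if the car stays on course
    swerve_harms: list[str]        # who is hit if the car swerves

    def forced_choice(self, preference: dict[str, int]) -> str:
        """Return 'stay' or 'swerve' -- the scenario allows nothing in between."""
        stay_cost = sum(preference.get(p, 1) for p in self.stay_course_harms)
        swerve_cost = sum(preference.get(p, 1) for p in self.swerve_harms)
        return "swerve" if swerve_cost < stay_cost else "stay"

scenario = Dilemma(
    brake_failure=True,
    stay_course_harms=["child", "parent"],
    swerve_harms=["runaway criminal"],
)
# The "ethics" here is just whatever weighting the scenario designer supplies.
print(scenario.forced_choice(preference={"child": 5, "parent": 4, "runaway criminal": 1}))
```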

 

Applications of the Trolley Problem, as well as of the Pascalian Wager (Bhargava, 2016), are attempts to create what-if scenarios. These scenarios guide the technical development of what has become both a holy grail and a smokescreen in talk about autonomous driving (in the absence of an actual autonomous vehicle): how can we be sure that driverless car software will make the right assessment about the value of human lives in the case of an accident?

These scenarios and their outcomes are being referred to as the ‘ethics’ of autonomous driving. In the development of driverless cars we see an ambition to build what James Moor refers to as an ‘explicit ethical agent’ – one that is able to calculate the best action in an ethical dilemma. I resist this construction of ethics because I claim that relationships with complex machines, and the outcomes of our interactions with them, are not either/or, and are perhaps far more entangled, especially in crashes. There is an assumption in applications of Trolley Problems that crashes take place in certain specific ways, that ethical outcomes can be computed in a logical fashion, and that this will amount to accountability for machine behaviour. There is an assumption that the driverless car is a kind of neoliberal subject itself, one that can somehow account for its behaviour just as it might some day just drive itself (thanks to Martha Poon for this phrasing).

Crash scenarios are particular moments that are being planned for and against; what kinds of questions does the crash event allow us to ask about how we’re constructing relationships with, and about, machines that have artificial intelligence technologies in them? I claim that when we say “ethics” in the context of hypothetical driverless car crashes, what we’re really asking is “who is accountable for this?”, or “who will pay for this?”, or “how can we regulate machine intelligence?”, or “how should humans apply artificially intelligent technologies in different areas?”. Some of these questions can be drawn out in terms of an accident involving a Tesla semi-autonomous vehicle.

In May 2016, a US Navy veteran was test-driving a Tesla Model S semi-autonomous vehicle. The test driver, who was allegedly watching a Harry Potter movie at the time with the car in ‘autopilot’ mode, drove into a large tractor-trailer whose white surface was mistaken by the computer vision software for the bright sky. The car therefore did not stop, and went straight into the truck. The fault, it seemed, was the driver’s for trusting the autopilot mode, as Tesla’s statement after the event suggested. ‘Autopilot’ in a semi-autonomous car is perhaps a misleading term for anyone familiar with what it means in aviation – so much so that the German government has told Tesla that it cannot use the word ‘autopilot’ in the German market.

In order to understand what might have happened in the Tesla case, it is necessary to look at how computer vision and machine learning are applied in driverless cars. A driverless car is equipped with a variety of sensors and cameras that record objects around it. These objects are identified by specialised deep learning algorithms called neural nets. Neural nets are distinct in that they can be programmed to build internal models for identifying features of a dataset, and can learn how those features are related without being explicitly programmed to do so (NVIDIA 2016; Bojarski et al 2016).
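
As a rough illustration of what it means for a neural net to ‘build an internal model’, here is a minimal sketch of a tiny convolutional classifier in PyTorch; the label set, image size, and training data below are stand-ins invented for illustration, not anything from an actual vehicle’s software.

```python
# Minimal, illustrative sketch of an image classifier of the kind discussed above.
# It is NOT the software in any real vehicle; labels and sizes are made up.
import torch
import torch.nn as nn

class TinyObjectClassifier(nn.Module):
    """A small convolutional network: stacked filters respond to edges, corners
    and colour gradients, which later layers combine into object categories."""
    def __init__(self, num_classes: int = 4):  # e.g. truck, sky, bicycle, pedestrian
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

# One training step on a (hypothetical) labelled camera frame.
model = TinyObjectClassifier()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

frame = torch.randn(1, 3, 64, 64)        # stand-in for a 64x64 RGB camera image
label = torch.tensor([0])                # stand-in label, e.g. 0 == "truck"
optimiser.zero_grad()
loss = loss_fn(model(frame), label)      # how wrong was the current internal model?
loss.backward()                          # adjust the filters to reduce that error
optimiser.step()
```

Repeated over millions of labelled frames, steps like this are what the paragraph above calls learning features and their relations without explicit programming.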

Computer vision software takes an image of an object and breaks that image up into small parts – edges, lines, corners, colour gradients and so on. By looking at billions of images, neural nets can identify patterns in how combinations of edges, lines, corners and gradients come together to constitute different objects; this is their ‘model-making’. The expectation is that such software can identify a ball, a cat, or a child, and that this identification will enable the car’s software to decide how to react.
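
The ‘breaking up into edges’ step can be illustrated with a classic hand-written filter. The sketch below (my own illustration, using a Sobel operator rather than learned filters) computes an edge-strength map for a greyscale image; it is roughly the kind of low-level feature that a neural net’s early layers learn to extract from data on their own.

```python
# Illustrative sketch only: a hand-written Sobel filter pulls out the "edges"
# described above; learned convolutional filters play an analogous role but
# are fitted from examples rather than written by hand.
import numpy as np

def sobel_edges(gray: np.ndarray) -> np.ndarray:
    """Return an edge-strength map for a 2-D greyscale image."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # horizontal gradient
    ky = kx.T                                                          # vertical gradient
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = gray[i:i + 3, j:j + 3]
            out[i, j] = np.hypot((patch * kx).sum(), (patch * ky).sum())
    return out

image = np.random.rand(64, 64)   # stand-in for a greyscale camera frame
edges = sobel_edges(image)       # bright values mark strong edges
```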

Yet this is a technology still in development, and there is room for much confusion. Things that are yellow, or things that have faces and two ears on top of the head – things that share features such as shape, or the way edges, gradients, and lines come together – can be misread until the software has seen enough examples to distinguish how they differ. The more visually complex something is – lacking solid edges, curves or single colours – or the faster, smaller, or more flexible an object is, the more difficult it is to read. Computer vision in cars has thus been shown to have a ‘bicycle problem’, because bicycles are difficult to identify, do not always have a specific, structured shape, and can move at different speeds.

In the case of the Tesla crash, the software misread the large expanse of the side of the trailer truck as the sky. It is possible that the machine learning model was not trained well enough to make the distinction. The Tesla crash suggests that there was both an error in the computer vision and machine learning software and a lapse on the part of the test driver, who misunderstood what autopilot meant. How, then, are these separate conditions to be understood as part of the dynamic that resulted in the crash? How might an ethics be imagined for this sort of crash, one that comes from an unfortunate entanglement of machine error and human error?

Looking away from driverless car crashes, and to aviation crashes instead, a specific narrative around the idea of accountability for crashes emerges. Galison, again, in his astounding chapter ‘An Accident of History’, meticulously unpacks aviation crashes and how they are investigated, documented and recorded. In doing so he makes the point that it is near impossible to escape the entanglements between the human and machine actors involved in an accident, and that in attempting to identify how and why a crash occurred, we find a “recurrent strain between a drive to ascribe final causation to human factors and an equally powerful, countervailing drive to assign agency to technological factors.” Galison finds that in accidents, human action and material agency are entwined to the point that causal chains both seem to terminate at particular, critical actions and radiate out towards human interactions and organisational cultures. Yet what is embedded in accident reporting is the desire for a “single point of culpability”, as Alexander Brown puts it, which never seems to come.

Brown’s own accounts of accidents and engineering at NASA, and Diane Vaughan’s landmark ethnography of the reasons for the Challenger Space Shuttle disaster, suggest the same: organisational culture and bureaucratic processes, faulty design, a wide swathe of human errors, and combinations of all of these are implicated in how crashes of complex vehicles occur.

Anyone who has ever been in a car crash probably agrees that correctly identifying exactly what happened is, at the very least, a challenging task. How can the multiple, parallel conditions present in driving and car crashes be conceptualised? Rather than an ‘ethics of driverless cars’ as a set of programmable rules for appropriate action, could it be imagined as a process by which an assemblage of people, social groups, cultural codes, institutions, regulatory standards, infrastructures, technical code, and engineering are framed in terms of their interaction? Mike Ananny suggests that “technology ethics emerges from a mix of institutionalized codes, professional cultures, technological capabilities, social practices, and individual decision making.” He shows that ethics is not a “test to be passed or a culture to be interrogated but a complex social and cultural achievement” (emphasis in original).

What the Trolley Problem scenario and the applications of machine learning in driving suggest is that we’re seeing a shift in how ethics is being constructed: from accounting for crashes after the fact to pre-empting them (though the automotive industry has been using computer-simulated crash modeling for over twenty years); from ethics that is about values, or reasoning, to ethics based on datasets of correct responses; and, crucially, to ethics as the outcome of software engineering. Specifically in the context of driverless cars, there is a shift from ethics as a framework of “values for living well and dying well”, as Gregoire Chamayou puts it, to a framework for “killing well”, or ‘necroethics’.

Perhaps the unthinkable scenario to confront is that ethics is neither a machine-learned response nor an end-point, but a series of socio-technical, technical, human, and post-human relationships, ontologies, and exchanges. These challenging and intriguing scenarios are yet to be mapped.

Maya is a PhD candidate at Leuphana University and is Director of Applied Research at Tactical Tech. She can be reached via Twitter.

(Some of the ideas in this post have been developed for a paper submitted for publication in a peer-reviewed journal, APRJA.)


Source: Redditblog.com

Users and administrators alike constantly refer to Reddit as a community. Whether talking about specific subreddits or the site as a whole, the discourse of community is powerful. Unlike Facebook or Twitter, it isn’t just a branding concept. Many Reddit users also consider Reddit a community in a way other sites are not. Redditors appreciate that the site isn’t a social media network. They like that the model for Reddit is about content aggregation and forum discussion, they like the relative anonymity they have, and they like being able to curate their experience by subscribing to subreddits tailored to their interests.

I have previously argued that Facebook is not a community. I feel less confident making that argument for Reddit, primarily because so many users consider it a community. Regardless of my own definition of community—a social unit based on voluntary association, shared beliefs and values, and contribution without the expectation of direct compensation—and the extent to which it does or does not fit this definition, the fact is that there is an important affective component to community, and many users certainly feel that connection. more...

“We need to tell more diverse and realistic stories about AI,” Sara Watson writes, “if we want to understand how these technologies fit into our society today, and in the future.”

Watson’s point that popular narratives inform our understandings of and responses to AI feels timely and timeless. As the same handful of AI narratives circulate, repeating themselves like a befuddled Siri, their utopian and dystopian plots prejudice seemingly every discussion about AI. And like the Terminator itself, these paranoid, fatalistic stories now feel inevitable and unstoppable. As Watson warns, “If we continue to rely on these sci-fi extremes, we miss the realities of the current state of AI, and distract our attention from real and present concerns.” more...