Presider: Michael Connor (@michael_connor)
Hashmod: Annie Wang (@annieyilingwang)
This is one post in a series of Panel Previews for the upcoming Theorizing the Web conference (#TtW14) in NYC. The panel under review is titled Tales From the Script: Infrastructures and Design.
Each presentation in this panel considers very different case studies, but all are deeply interested in the standards, practices, and assumptions that undergird our augmented society. From Karen Levy’s study of a “meth-proof” alternative to Sudafed to Angela VandenBroek’s research into a nation’s Twitter account, the panel demonstrates that our relationships to consumer goods and services are often based in assumptions and technical standards worthy of interrogation. The panel suggests several opportunities to intercede in hidden or ignored infrastructures, including R. Stuart Geiger’s call for “Successor Systems” in the tradition of Donna Haraway’s “Successor Science” and the infrastructure-based activism presented by Sebastian Benthall. Taken together, this panel presents an intriguing look into some of the more institutionalized uses of networked technology.
Karen Levy The Myth of the End User
Who is the “end user” of a technological system? Designers are encouraged to think in terms of the end user’s needs, habits, capabilities, and goals when making decisions about the features of a product or interface of a website; “user experience” (or UX) is a growing subfield of tech education and research, and “user-centered” has become a watchword of conscientious design.
In this talk, I argue that the end user doesn’t really exist, and that conceptualizing the end user as we typically do obscures the dynamics of power that inhere in technological artifacts. The term “user” conflates four independent facets of engagement between people and technologies – ownership of an artifact, control over that artifact, its operation, and the choice to engage – which frequently do not co-reside in the same person. Moreover, the term obscures the institutional landscapes in which so-called “users” find themselves, and how technologies may harmonize or conflict with these entanglements.
To illustrate, I offer a case study of Nexafed, a new over-the-counter pharmaceutical. Nexafed is a “meth-proof” formulation of pseudoephedrine (Sudafed): it has the same active ingredient but cannot be tampered with to produce methamphetamine, as Sudafed can. Thinking in typical “end user” terms, the market for Nexafed seems nonexistent: if the user intends to tamper with it, it’s useless; if the user just wants to relieve cold symptoms and has no intention of cooking meth, it’s not clear why she would be motivated to buy Nexafed instead of Sudafed. But when we consider the end user in broader terms – as a constellation of power relations and inequalities that includes children, pharmacies, pharmacists, regulators, police, and thieves – the market for Nexafed comes into focus, and new motivations (like social shame, parental mistrust, robbery, and gossip) become salient drivers of the market.
I use this case to argue for a broader conception of the “end user” in technology studies: one that recognizes that complex social and institutional relationships are mediated through technological systems and design choices, and that ownership, control, operation, and choice aren’t necessarily integrated.
Sebastian Benthall (@sbenthall) Designing Digital Publics for Participatory Parity
This paper outlines a theoretical perspective from which to orient infrastructure-based activism for social equality. Drawing on Fraser, empirical literature, and original on-line ethnographic work, I conceptualize the Web as a nexus of multiple publics, and note that they fall short of the public ideal due to a lack of participatory parity. In particular, participation and influence in these publics is empirically ordered according to a “power law” distribution, a statistical distribution known for its extreme inequality. Although this concept of inequality on the Web is more immediate and measurable in social media than concepts that depend on demographic categories, it is less discussed in that context–perhaps because it directly challenges those who benefit most from participatory disparity.
This participatory disparity in on-line publics both reflects and reinforces social inequality (due to race, class, gender, etc.). It reflects inequalities of digital access because participation requires agency and resources. It reinforces that inequality because influence within the public is a form of power. It is also self-reinforcing through generic patterns of on-line social network formation like the preferential attachment of new members to “follow” already popular members. Counteracting those patterns is a means of combating social inequality more broadly.
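The preferential-attachment dynamic described above can be made concrete with a small simulation. The sketch below is illustrative only, not drawn from the paper: new members follow an existing member with probability proportional to that member’s current follower count, and even this minimal rule concentrates attention in a small elite. All names and parameters here are hypothetical.

```python
import random
from collections import Counter

def preferential_attachment(n_members, seed=42):
    """Simulate members joining a network one at a time. Each newcomer
    'follows' one existing member, chosen with probability proportional
    to that member's follower count plus one (so unknowns can be picked).
    This 'rich get richer' rule is the classic preferential-attachment
    mechanism behind power-law-like inequality."""
    rng = random.Random(seed)
    followers = Counter({0: 0, 1: 0})  # seed the network with two members
    for new_id in range(2, n_members):
        members = list(followers)
        weights = [followers[m] + 1 for m in members]
        chosen = rng.choices(members, weights=weights, k=1)[0]
        followers[chosen] += 1
        followers[new_id] = 0  # the newcomer starts with no followers
    return followers

counts = preferential_attachment(10_000)
ranked = sorted(counts.values(), reverse=True)
top_1pct = sum(ranked[: len(ranked) // 100])
print(f"share of all follows held by the top 1% of members: "
      f"{top_1pct / sum(ranked):.0%}")
```

Running this, the top 1% of members capture a share of follows far out of proportion to their numbers, which is the kind of participatory disparity the paper argues is both measurable and self-reinforcing.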
In the interest of setting an agenda for equality-oriented praxis, I note that Web publics are especially subject to regulation by technology, a la Lessig. This regulation can, and in some cases does, bring on-line publics closer to the public ideal. An example is Twitter’s automated spam blocking, a mechanism that takes social input to exclude participants who are expected to be acting in bad faith. But social media also self-regulates according to the commercial interests of its hosts and against the public ideal. For example, Facebook’s EdgeRank algorithm, which controls the contents of its news feed, prioritizes familiar sources of content that drive identity-divulging “engagement.” This facilitates Facebook’s ability to target ads at the expense of the circulation of ideas that would make it an effective site of public contestation and deliberation. Understanding the practical reality of these technologies opens the theoretical imagination to alternative mechanisms to regulate on-line publics with participatory parity in mind.
As a positive example of a technology that enables participatory parity, I highlight as inspiration the NYU-ITP “The Listserve”, a mailing list that sends one daily email authored by one of its subscribers picked at random. I present @TheTweetserve, a Twitter bot that extends this principle to Twitter as a digital public. This bot acts mathematically to correct participatory disparity by undermining social patterns of preferential attachment. I conclude with a general call for critical and social theory to provide constructive principles from which to derive ethical infrastructure designs.
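The equalizing principle behind The Listserve and @TheTweetserve can be sketched in a few lines. The implementation below is hypothetical (the actual bots’ code is not described here); it shows only the core idea: choosing each day’s author uniformly at random gives every subscriber the same expected airtime, regardless of follower count, directly countering preferential attachment.

```python
import random
from collections import Counter

def daily_broadcasts(subscribers, n_days, seed=0):
    """Pick one subscriber uniformly at random each day to author the
    broadcast. Unlike preferential attachment, a member's existing
    popularity plays no role, so expected airtime is equal for all."""
    rng = random.Random(seed)
    return Counter(rng.choice(subscribers) for _ in range(n_days))

# Hypothetical subscriber pool of 100 members over a decade of daily picks.
subscribers = [f"member_{i}" for i in range(100)]
airtime = daily_broadcasts(subscribers, n_days=3650)
print(f"most-picked: {max(airtime.values())} days, "
      f"least-picked: {min(airtime.values())} days")
```

Over enough days the counts cluster tightly around the mean (here, about 36 broadcasts each), rather than concentrating in a small elite, which is the “mathematical correction” of participatory disparity the abstract describes.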
Angela VandenBroek (@akvbroek) Tweeting Sweden: Complicating Anthropology through the Analysis of the World’s Most Democratic Twitter Account
After completing my masters in anthropology, I spent six years working as a web developer, weaving together ethnographic methods and insights into the design and development of websites. In the Fall of 2013, I returned to my academic roots and began coursework for a PhD in anthropology to explore, in greater depth, the relationship between humans and the Internet. However, my experiences in the field have often grated against two common theoretical trends in the anthropological literature.
First, I have found the academy to be infatuated with the user and dismissive of the Internet’s designers, developers, and creatives, except when those creators fit neatly into categories of traditional anthropological interest, such as the open source and free software movement (Kelty 2008, Coleman 2013, Karanovic 2008, 2010, 2012). By extracting the user bit of the digital and failing to contextualize user experience amid the greater web of connections in and among digital technology, digital anthropologists have failed to heed the important lesson put forth by Eric Wolf (1982) in Europe and the People Without History: “…the world of humankind constitutes a manifold, a totality of interconnected processes, and inquiries that disassemble this totality into bits and then fail to reassemble it falsify reality” (3).
Second, the concepts of online and offline have taken on a privileged position within digital anthropology. Through work with diverse users, I have found this distinction to be of little importance to most users and technology professionals, and I have found that the actual experience of the Internet involves many states of being and experience that have little connection to the simplistic binary of online and offline. The entanglement of the online-offline concepts — including virtual-actual, online-onground (Cool 2012), and other similarly reimagined and renamed online-offline distinctions — within anthropology seems rooted in three problematic areas in the development of digital anthropology: establishing academic validity (Miller and Horst 2012:18), making disciplinary or specialist boundaries (Boellstorff 2012:35, 45), and the establishment of methodological best practices (Boellstorff 2012:34, Cool 2012:24).
The Curators of Sweden project began in 2011 when two official governmental agencies, the Swedish Institute and VisitSweden, gave a Swedish citizen full and seemingly unfettered control of the official Twitter account of the Swedish government. Every week since then, a new Swedish citizen has been given access to write as @Sweden, to curate Swedishness for the Internet. Through the example of the Curators of Sweden project, I will explore the dangers of ignoring the relationship between designers, developers, and creatives and their users by examining the subtle dialog between creators, participants, and users across platforms, media, personal communication, and documentation that has shaped the project and its users’ experiences. I will also problematize the online and offline concepts as analytical tools by extending the analytic scope of this “online” project beyond its online-ness to more fruitful engagements with history, politics, business, and technology. I will contextualize the project within the history of Swedish Modernism and Swedish nation branding that shaped the creators’ choices in design, development, and platforms.
R. Stuart Geiger (@staeiou) Successor Systems: Enacting Ideological Critique Through the Development of Software
During the “science wars” of the early 90s, Haraway and Harding introduced the concept of a “successor science,” a call for new sciences that blend objectivity with situatedness. In “Situated Knowledges,” Haraway argued “Feminists have to insist on a better account of the world; it is not enough to show radical historical contingency and modes of construction for everything.” I extend the concept of a successor science to the realm of software, introducing the concept of “successor systems” – doing ideological critique by designing and deploying systems that build better accounts of the world. I discuss three activist projects, all based on enacting an ideological critique through a technological system, rather than pure discourse.
Hollaback, a system for reporting and representing street harassment, critiques social institutions through a new technologically-enabled mode of knowledge production. Street harassment is a longstanding and ubiquitous problem across the world, but dominant institutions (from the police to news media) generally encourage women to ignore harassment, rather than report it. Hollaback provides a safe space for victims of street harassment to assemble as a networked public and frame this issue in a way that is often marginalized by various social institutions. Hollaback is a critique of the widespread institutional ignorance of street harassment, providing an infrastructure for building ‘better’ accounts of the world: ones that make often-ignored experiences of street harassment visible at a variety of scales.
Turkopticon is a browser extension that modifies Amazon’s Mechanical Turk service. AMT disproportionately benefits employers, who are able to know and rate individual workers in ways that workers are not able to know and rate their employers, leading to exploitation. As a successor system, Turkopticon critiques this assumption built into AMT, using feedback from workers about employers to produce a new mode of knowledge production that is designed to protect Turk workers. The system is named for Bentham’s infamous panopticon prison, discussed by Foucault; Turkopticon seeks to reverse the direction of surveillance built into AMT by putting employers under a kind of collective ‘sousveillance’ — surveillance from below.
Snuggle is a tool supporting mentoring in Wikipedia, explicitly built to counter the often hostile reactions that veteran Wikipedians unleash on new contributors. Most of the highly-automated tools that have been developed to support Wikipedian editors situate their users as police who are on patrol for “vandalism.” Assisted by algorithms that rank by ‘suspiciousness,’ these editors see some of the worst content submitted to Wikipedia, then make fast-paced decisions about what is kept and removed. Snuggle was designed to reverse the assumptions built into this practice, situating Wikipedians as mentors and newcomers as potential collaborators to be supported. The tool lets Wikipedians holistically search for potentially desirable newcomers, affording activities of praise, constructive criticism, and directed intervention.
As both a systems designer and a web theorist, I have a personal interest in discussing these successor systems from both a theoretical and design-oriented perspective. I will conclude with overarching recommendations for both designers and theorists interested in successor systems.