Last week, I began an attempt at tracing a genealogical relationship between eugenics and the Quantified Self. I reviewed the history of eugenics and the ways in which statistics, anthropometrics, and psychometrics influenced the pseudoscience. This week, I’d like to begin to trace backwards from QS and towards eugenics. Let me begin, as I did last week, with something quite obvious: the Quantified Self has a great deal to do with one’s self. Stating this, however, helps place QS in a historical context that will prove fruitful in the overall task at hand.

In a study published in 2014, a group of researchers from the University of Washington and the Microsoft Corporation found that the term “self-experimentation” was prevalent among their QS-embracing subjects.

“Q-Selfers,” they write, “wanted to draw definitive conclusions from their QS practice—such as identifying correlation…or even causation” (Choe et al. 1149). Although not performed with “scientific rigor,” this experimentation was about finding meaningful, individualized information with which to take further action (Choe et al. 1149).

Looking back at the history of self-experimentation in the sciences—in particular, experimental and behavioral psychology—leads to a 1981 paper by Reed College professor and psychologist Allen Neuringer entitled “Self-Experimentation: A Call for Change.” In it, Neuringer argues for a closer emphasis on the self by behaviorists:

If experimental psychologists applied the scientific method to their own lives, they would learn more of importance to everyone, and assist more in the solution of problems, than if they continue to relegate science exclusively to the study of others. The area of inquiry would be relevant to the experimenter’s ongoing life, the subject would be the experimenter, and the dependent variable some aspect of the experimenter’s behavior, overt or covert. (79)

The psychologist goes on to suggest that poets and novelists could use the method to discover what causes love and that “all members of society” will “view their lives as important” thanks to their contributions to scientific progress (93).

Neuringer’s argument is heavily influenced by the work of B. F. Skinner, the father of radical behaviorism—a subset of psychology in which the behavior of a subject (be it human or otherwise) can be “explained through the conditioning…in response to the receipt of rewards or punishments for its actions” (Gillette 114). We can see, then, the influence of both behavioral and experimental psychology on the quantified self: not only do QS devices track, but many of the interfaces built into and around them embrace “gamification”. That is, beyond the watch face or pedometer display, the dashboards displaying results, the emails and alerts presented to subjects, the “competition” features, etc., all embrace what Deborah Lupton calls “the rendering of aspects of using…self-tracking as games…an important dimension of new approaches to self-tracking as part of motivation strategies” (23).

The field of experimental psychology, out of which behaviorism grew when John B. Watson wrote “Psychology as the Behaviorist Views It” in 1913, was not specifically an invention of Francis Galton. This is not to say that Galton did not partake in experimental psychology during his eugenic research. In fact, his protégé and biographer, Karl Pearson, cites “a leading psychologist” writing in 1911: “‘Galton deserves to be called the first Englishman to publish work that was strictly what is now called Experimental Psychology, but the development of the movement academically has, I believe, in no way been influenced by him’” (213). Pearson, who included this quote in the 1924 second volume of The Life, Letters and Labours of Francis Galton, goes on to argue that American and English psychological papers are far superior to their continental counterparts thanks directly to Galton’s work on correlation in statistical datasets, though, per Ian Hacking, Pearson later notes that correlation laws may have been identified “much earlier in the Gaussian [or Normal] tradition” (187).

Here we begin to see an awkward situation in our quest to draw a line from Galton and hard-line eugenics (we will differentiate between hard-line and “reform” eugenics further on) to the quantified self movement. Behaviorism sits diametrically opposed to eugenics for a number of reasons. Firstly, it does not distinguish between human and animal beings—certainly a position to which Galton and his like would object, holding as they did that humans are the superior species and that a hierarchy of greatness exists within that species as well. Secondly, behaviorism accepts that outside, environmental influences will change the psychology of a subject. In 1971, Skinner argued that “An experimental analysis shifts the determination of behavior from autonomous man to the environment—an environment responsible both for the evolution of the species and for the repertoire acquired by each member” (214). This stands in direct conflict with the eugenic ideal that physical and psychological makeup is determined by heredity. Indeed, the eugenicist Robert Yerkes, otherwise close with Watson, wholly rejected the behaviorist’s views (Hergenhahn 400). Tracing the quantified self’s behaviorist and self-experimental roots, then, leaves us without a very strong connection to the ideologies driving eugenics. Still, using Pearson as a hint, there may be a better path to follow.

So come back next week and we’ll see what else we can dig up in our quest to understand a true history of the Quantified Self.

Gabi Schaffzin is a PhD student at UC San Diego. He has a very good dog named Buckingham. 


References

Choe, Eun Kyoung, et al. “Understanding Quantified-Selfers’ Practices in Collecting and Exploring Personal Data.” Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems – CHI ’14, 2014, pp. 1143–1152, doi:10.1145/2556288.2557372.

Gillette, Aaron. Eugenics and the Nature-Nurture Debate in the Twentieth Century. New York, Palgrave Macmillan, 2011.

Hacking, Ian. The Taming of Chance. Cambridge, Cambridge University Press, 1990.

Hergenhahn, B. R. An Introduction to the History of Psychology. Belmont, CA, Wadsworth, 2009.

Lupton, Deborah. The Quantified Self: a Sociology of Self-Tracking. Cambridge, UK, Polity, 2016.

Neuringer, Allen. “Self-Experimentation: A Call for Change.” Behaviorism, vol. 9, no. 1, 1981, pp. 79–94, academic.reed.edu/psychology/docs/SelfExperimentation.pdf. Accessed 19 Mar. 2017.

Pearson, Karl. The Life, Letters and Labours of Francis Galton. Characterisation, Especially by Letters. Index. Cambridge, Cambridge University Press, 1930, galton.org/pearson/index.html. Accessed 17 Mar. 2017.

In the past few months, I’ve posted about two works of long-form scholarship on the Quantified Self: Deborah Lupton’s The Quantified Self and Gina Neff and Dawn Nafus’s Self-Tracking. Nafus recently edited a volume of essays on QS (Quantified: Biosensing Technologies in Everyday Life, MIT 2016), but I’d like to take a not-so-brief break from reviewing books to address an issue that has been on my mind recently. Most texts that I read about the Quantified Self (be they traditional scholarship or more informal) refer to a meeting in 2007 at the house of Kevin Kelly as the official start of the QS movement. And while, yes, the name “Quantified Self” was coined by Kelly and his colleague Gary Wolf (the former founded Wired, the latter was an editor for the magazine), the practice of self-tracking obviously goes back much further than 10 years. Still, most historical references to the practice point to Sanctorius of Padua, who, per an oft-cited study by consultant Melanie Swan, “studied energy expenditure in living systems by tracking his weight versus food intake and elimination for 30 years in the 16th century.” Neff and Nafus cite Benjamin Franklin’s practice of keeping a daily record of his time use. These anecdotal histories, however, don’t give us much in terms of understanding what a history of the Quantified Self is actually a history of.

Briefly, what I would like to prove over the course of a few posts is that at the heart of QS are statistics, anthropometrics, and psychometrics. I recognize that it’s not terribly controversial to suggest that these three technologies (I hesitate to call them “fields” here because of how widely they can be applied), all developed over the course of the nineteenth century, are critical to the way that QS works. Good thing, then, that there is a second half to my argument: as I touched upon briefly in my [shameless plug alert] Theorizing the Web talk last week, these three technologies were also critical to the proliferation of eugenics, that pseudoscientific attempt at strengthening the whole of the human race by breeding out or killing off those deemed deficient.

I don’t think it’s very hard to see an analogous relationship between QS and eugenics: both movements are predicated on anthropometrics and psychometrics, comparisons against norms, and the categorization and classification of human bodies as a result of the use of statistical technologies. But an analogy only gets us so far in seeking to build a history. I don’t think we can just jump from Francis Galton’s ramblings at the turn of one century to Kevin Kelly’s at the turn of the next. So what I’m going to attempt here is a sort of Foucauldian genealogy—from what was left of eugenics after its [rightful, though perhaps not as complete as one would hope] marginalization in the 1940s through to QS and the multi-billion dollar industry the movement has inspired.

I hope you’ll stick around for the full ride—it’s going to take a number of weeks. For now, let’s start with a brief introduction to that bastion of Western exceptionalism: the eugenics movement.

Francis Galton had already been interested in heredity and statistics before he read Charles Darwin’s On the Origin of Species upon its publication in 1859. The work, written by his half-cousin, acted as a major inspiration in Galton’s thinking on the way that genius was passed through generations—so much so that Galton spent the remainder of his life working on a theory of hereditary intelligence. His first publication on the topic, “Hereditary Talent and Character” (1865), traced the genealogy of nearly 1,700 men whom he deemed worthy of accolades—a small sample of “the chief men of genius whom the world is known to have produced” (Bulmer 159)—eventually concluding that “Everywhere is the enormous power of hereditary influence forced on our attention” (Galton 1865, 163). Four years later, the essay inspired a full volume, Hereditary Genius, in which Galton utilized Adolphe Quetelet’s statistical law detailing a predictive uniformity in deviation from a normally distributed set of data points—the law of errors.

Much like Darwin’s seminal work, Quetelet’s advancements in statistics played a critical part in the development of Galton’s theories on the hereditary nature of human greatness. Quetelet, a Belgian astronomer, was taken by his predecessors’ work to normalize the variation in error that occurred when the positions of celestial bodies were measured multiple times. Around the same time—that is, in the first half of the nineteenth century—French intellectuals and bureaucrats alike had taken a cue from the Marquis de Condorcet, who had proposed a way to treat moral—or, social—inquiries in a manner similar to the way the physical sciences were approached. Quetelet, combining the moral sciences with normal distributions, began to apply statistical laws of error in distribution to the results of anthropometric measurements across large groups of people: e.g., the chest size of soldiers, the height of school boys. The result, which effectively treated the variation between individual subjects’ measurements in the same manner as a variation in a set of measurements of a single astronomical object, was the homme type—the typical man (Hacking 111–12).

In 1889, Galton wrote, “I know of scarcely anything so apt to impress the imagination as the wonderful form of cosmic order expressed by the ‘Law of Frequency of Error’” (66). Six years earlier, in Inquiries Into Human Faculty, he had declared that he was interested in topics “more or less connected with that of the cultivation of race” (17, emphasis added)—that is, in eugenics rather than simply in its observation. Galton’s argument was rather simple, albeit vague: society should encourage the early marriage and reproduction of men of high standing. Per Michael Bulmer, “He suggested that a scheme of marks for family merit should be devised, so that ancestral qualities as well as personal qualities could be taken into account” (82). Once these scores were evaluated, the individuals with top marks would be encouraged to breed and rewarded for doing so; at one point, he recommended a £5,000 “wedding gift” for the top ten couples in Britain each year, accompanied by a ceremony in Westminster Abbey officiated by the Queen of England (Bulmer 82). This type of selective breeding would eventually be referred to as “positive eugenics”.

The statistical technologies developed by Quetelet and the like were utilized by Galton for more than just the evaluation of which individuals were worthy of reproduction; they also allowed for the prediction of how improvements would permeate through a population. Specifically, he argued that if a normally distributed population (measured on whichever metric, or combination of metrics, he had chosen) reproduced, it would result in another normally distributed population—that is, the bulk of the population would be average or mediocre (Hacking 183). He called this the law of regression and understood it to slow severely the improvement of a race towards the ideal. However, if one could guarantee that those individuals at the lower end of the bell curve—that is, the morally, physically, or psychologically “deficient”—were not reproducing, then an accelerated reproduction of the exceptional could take place (Bulmer 83). Thus was born “negative eugenics”.
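
(A quick aside for the statistically curious: what follows is my own toy sketch in Python, not anything from Galton, Bulmer, or Hacking. It illustrates the two ideas at work here: a heritable, normally distributed trait regresses toward the mean, and culling the lower tail of the curve shifts the next “generation” upward. The correlation value is an assumption chosen purely for illustration.)

```python
import numpy as np

# Toy model with made-up numbers: a standardized trait whose parent-offspring
# correlation r is less than 1. Offspring of exceptional parents slide back
# toward the population mean -- Galton's "law of regression".
rng = np.random.default_rng(0)
n, r = 100_000, 0.5

parents = rng.normal(0, 1, n)
offspring = r * parents + np.sqrt(1 - r**2) * rng.normal(0, 1, n)

exceptional = parents > 2
print(parents[exceptional].mean())    # roughly 2.4 standard deviations above average
print(offspring[exceptional].mean())  # roughly 1.2: regression toward mediocrity

# "Negative eugenics" as truncation selection: removing the lower tail
# of the curve raises the mean of the next generation.
kept = parents > -1
print(offspring[kept].mean())         # roughly 0.14 instead of 0
```

The point of the sketch is only to make Galton’s intuition legible, not to endorse it.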

I will revisit the proliferation of eugenics a bit later in this study, but it is important here to note that the historical trail of the active and public implementation of eugenics eventually goes cold somewhere between 1940 and 1945, depending on which country one examines. Most obviously, the rise of the Third Reich and its party platform built primarily on eugenicist policies had a direct effect on the decline of eugenics towards the midway point of the twentieth century. Previously enacted (and confidently defended) state policies regarding forced sterilization from Scandinavia to the United States were eventually struck down and remain embarrassing marks on national histories to this day (Hasian 140), though the last US law did not come off the books until the 1970s.

This is not to suggest that the scientific ethos behind the field—that one’s genetic makeup determines both physical and psychological traits—went completely out of fashion. Instead, I hope it has become obvious, even in this brief overview, that the aforementioned analogies between eugenics and QS are not difficult to draw. But how do we get from one to the other? And am I being crazy in doing so?

The second question is probably up for grabs for a little while. I’ll begin to answer the first one next week, however, when I sketch out a history of self-experimentation and behavioral psychology, moving backwards from the Quantified Self to eugenics. Come back again, won’t you?

Gabi Schaffzin is a PhD student at UC San Diego. Having just returned from the east coast, his jetlag has left him without anything witty to add. 


References

Bulmer, M. G. Francis Galton: Pioneer of Heredity and Biometry. Baltimore, Johns Hopkins University Press, 2003.

Galton, Francis. “Hereditary Talent and Character.” Macmillan’s Magazine, 1865, pp. 157–327, galton.org/essays/1860-1869/galton-1865-hereditary-talent.pdf. Accessed 17 Mar. 2017.

Galton, Francis. Natural Inheritance. New York, AMS Press, 1973 (Originally published 1889).

Hacking, Ian. The Taming of Chance. Cambridge, Cambridge University Press, 1990.

Hasian, Marouf Arif. The Rhetoric of Eugenics in Anglo-American Thought. Athens, University of Georgia Press, 1996.

Lupton, Deborah. The Quantified Self: a Sociology of Self-Tracking. Cambridge, UK, Polity, 2016.

Neff, Gina, and Dawn Nafus. Self-Tracking. Cambridge, MIT Press, 2016.

Back in January, I wrote about Deborah Lupton’s The Quantified Self, a recent publication from Polity by the University of Canberra professor in Communication. In that post I mentioned that I planned to read another book on the QS movement from MIT Press, Self-Tracking by Gina Neff, a Communication scholar out of the University of Washington, and Dawn Nafus, an anthropologist at Intel. And so I have.

Much like Lupton’s book, Self-Tracking is best utilized as an introduction to the structures and cultural context in which the quantified self operates. The work begins with a relatively broad introduction to what the quantified self is (the authors differentiate between lowercase quantified self as the general self-tracking industry and uppercase Quantified Self as the Meet-Up-ing, annual-conference-ing, ever-proselytizing community) and what practices the term encompasses. Just as in Lupton’s book, we are treated to insight from Cyborgology’s super-famous past contributor, Whitney Erin Boesel, and her “Taxonomy of types of people”. As I noted back in January, however, Lupton uses a great deal of ink giving example after example of QS devices and services; the authors of Self-Tracking sprinkle their examples throughout, which helps the book flow in a significantly more natural manner.

Neff and Nafus also narrow their focus to the health-related aspects of QS. For instance, the pair consider what sorts of problems a doctor might encounter when a patient brings in self-tracked data (spoiler: a whole bunch). In considering how this differs from Lupton’s account, I am tempted to suggest that her analysis touched on a much broader swath of the QS market—but that assumes there is a difference between QS devices and health-related tracking. That is, as I read Self-Tracking, I wondered whether there are any QS devices that are not health-related. What is the boundary between the body and health? How are normal bodies and healthy bodies any different? Could a QS device be marketed as something that will help you become something other than healthy?

Most of these questions are not explicitly asked by the authors of Self-Tracking. Lupton, on the other hand, does delve into more theoretical questions of what defines the self—at one point suggesting a QS-enabled prosthesis of selfhood, rendering “self-extension possible” (70) (Neff and Nafus refer to a “prosthesis of feeling” at one point, but this is a different issue). In some respects, reading Quantified Self and Self-Tracking together provides a reader with perhaps the right balance of depth—into the utilization of self-tracking in the service of and complementary to the healthcare industry—and breadth—across multiple theoretical categories of data and selfhood.

Still, one thing I don’t get from either work is the answer to the question, where did this all come from? That is, what is the history of the quantified self a history of? Both Lupton and Neff and Nafus offer anecdotal histories of Benjamin Franklin tracking his wellbeing on a small piece of paper in his pocket or the launch of the Quantified Self Meet Up in 2007. Neither, however, consider the social or cultural phenomenon that led to the proliferation of behavioral modification through self-tracking. This is something I hope to write about in future posts, but for now, I want to make it clear that I am not necessarily faulting these authors for the lack of this history.

Instead, it is important to consider that both books sit in very precarious positions academically. That is, these scholars took a great risk in spending so much time and effort to publish in the long form on a subject matter that is changing just about weekly. Already, Neff and Nafus’s assertions about FDA regulations feel outdated under the Trump administration (note that Trump has not taken any direct actions regarding the FDA quite yet, but it’s hard to imagine any Obama-era regulations or policies staying intact throughout Trump’s time as president, however brief or extensive that may be). This is perhaps why Self-Tracking is part of MIT Press’s “Essential Knowledge Series”, which, per the publisher’s website, “offers concise, accessible overviews of compelling topics…expert syntheses of subjects ranging from the cultural and historical to the scientific and technical.” Even the physical book itself feels temporary—more like a 5”x7” pocket guide than something that belongs on library shelves for the foreseeable future. I think both of these books would be excellent reading for students just learning to question the hegemonic properties of the technologies being heralded for whatever reason their marketers choose.

For now, the search continues for more QS scholarship.

Gabi Schaffzin is a PhD student at UC San Diego in the Visual Arts department. He spent probably too much time fretting over the typography choices of the book he reviewed in this post.

The English translation of Martin Luther and Philipp Melanchthon’s 1523 Deuttung der czwo grewlichen Figuren, Bapstesels czu Rom und Munchkalbs czu Freyerbeg ijnn Meysszen funden is a 19-page pamphlet describing two monsters: a pope-ass and a monk-calf. The former, a donkey-headed biped with one hand, two hooves, and a chicken’s foot, per Arnold Davidson, represents how “horrible that the Bishop of Rome should be the head of the Church.” The latter, a creature that brings to mind Admiral Ackbar (think, “it’s a trap!”), illustrates the “frivolous prattle” of Catholic Sacraments. Davidson explains, “Both of these monsters were interpreted within the context of a polemic against the Roman church. They were prodigies, signs of God’s wrath against the Church which prophesied its imminent ruin.” Fifty-six years after the pamphlet’s original publication in German, Of two wonderful popish monsters was distributed in English.

Nearly 450 years after that, in August of last year, five larger-than-life statues of a naked, blonde, bloated man were affixed to the pavement in highly trafficked areas of Cleveland, San Francisco, New York City, Los Angeles, and Seattle.

The statues, made in the likeness of now President Donald Trump, were created by a Las Vegas-based artist, Ginger, using over 300 pounds of clay and silicone and were commissioned by the anonymous graffiti group, Indecline. In an interview with the Washington Post, Ginger noted that he has “a long history of designing monsters for haunted houses and horror movies.” In fact, he explained, Indecline chose Ginger “‘because of my monster-making abilities.’”

What good are monsters? Is it productive to call our new president one? According to Georges Canguilhem, “the existence of monsters calls into question the capacity of life to teach us order…a living being with negative value…whose value is to be a counterpoint.” The opposite of life, per the philosopher, is not death; it is the monster. In this sense, portraying Dear Leader as a monster might indeed be productive: we are forced to consider him the antithesis of the “normal”, the opposite of what we actually want or need. Much like the pope-ass and the monk-calf, we understand what is the other, what is not to be sought after. We can tell our children: do not be like this, or you will end up with hooves as hands and varicose veins in your legs.

Ambroise Paré’s 1573 On Monsters and Marvels details 13 “causes of monsters,” including “the glory of God…God’s wrath…too great a quantity of seed…too little a quantity [of seed]” and so on. The heavily illustrated volume is, like Of two wonderful popish monsters, a warning (“women sullied by menstrual blood will conceive monsters”) but also a guidebook: here is what causes monsters…avoid these conditions and your offspring will be healthy. “Monsters are things that appear outside the course of Nature,” he writes, “(and are usually signs of some forthcoming misfortune).”

Approaching our president as monster might leave us with too many reasons to look outside of ourselves—outside the course of Nature. If we, instead, consider Donald Trump to be a human being, we might be more likely to reflect on the structural changes required to prevent his ascendancy to begin with. His story is not outside the normal. The disgusting and soulless decisions he has already made by this, the fifth day of his tenure, are capable of being perpetrated by someone inside the course of Nature. If we consider the critical distinction here—between monster and not (or, as Canguilhem might suggest, between monster and life)—then we must ask where one begins and the other ends. And if Trump is, in fact, a monster, is it because of his actions or because of his body?

To be sure, Indecline has proven itself to be a vile, self-promoting group of anarchists. So I can’t say I believe they spent much time considering the ethics of what amounts to petty body-shaming. Back in March, Britney Summit-Gil called out a previous Trump-focused body-shame:

The failure to see why it is toxic to critique Trump based on a presumption about his penis is a failure to see the root problems that allow for the perpetuation of genital shaming, and its often horrific consequences. If we can’t see why penis-shaming Trump is bad, how can we tackle systemic sex- and gender-based oppression?

Ensconced in the statues of Trump, The Monster, is a multitude of complex questions about body-shaming, “freak” culture, disability politics, and more—all of which warrant our attention. But in this moment where our country is falling under the leadership of fascism at its worst, these questions are violently distracting. When a man with the soul of a monster sits in the Oval Office, we must remember that he is not a figure of anyone’s imagination, he is not outside the course of Nature. He is a rapist, a criminal, a pathological liar. And now he’s President of the United States. If, as Davidson writes, “the history of monsters encodes a complicated and changing history of emotion, one that helps to reveal to us the structures and limits of the human community,” then no, this man is no monster. He must be seen as inside the limits of the human community, a lesson of what other humans are capable of. And it is from there that we must fight him: not as a fable or marvel, but as a man.

Gabi Schaffzin is a PhD student at UC San Diego. His physical prowess notwithstanding, he’d quite dutifully punch a Nazi in the face.


Until very recently, the majority of texts on the quantified self have been either short-form essays or uncritical manifestos penned by the same neoliberal technocrats whose biohacking dreams we have to thank for self-tracking’s proliferation over the past decade. Last year saw the publication of two books that take a more critical look at QS: Self-Tracking (MIT Press) by a pair of American researchers, Gina Neff and Dawn Nafus, and The Quantified Self (Polity) by Deborah Lupton, a professor in Communications at the University of Canberra in Australia. While I haven’t read Neff and Nafus’s work yet (but plan to do so in the coming months), I did just finish Lupton’s book and think it’s a great place to start for anyone beginning to research the quantified self and its associated movement.

I say that The Quantified Self is a good place to start because Lupton’s emphasis seems to have been on breadth, rather than depth. With 302 entries in her bibliography for only 147 pages of body text, the author provides what amounts to an extremely thorough lit review: she cites marketing material from Apple and FitBit alongside an extensive collection of tech-focused cultural critique (there’s even a cameo from Cyborgology’s own Whitney Erin Boesel!). I found the text to be, at times, monotonous—the entire first chapter is a list of projects and products that can be classified as quantified-self-related—but at others, reaffirming—“Self-tracking,” she writes on page 68, “represents the apotheosis of the neoliberal entrepreneurial citizen ideal.” Nice.

If, then, the perfect reader of The Quantified Self is an individual whose body of research on QS is still in its nascent stage, I believe Lupton risks doing a disservice on two accounts. Firstly, while she does spend a good number of pages describing “communal self-tracking,” (per Lupton, “the consensual sharing of a tracker’s personal data with other people” (130)) the author rarely acknowledges that this is the default modus operandi of the quantified self. That is, collecting a critical mass of individuals’ data in order to average, normalize, compete, rank, and so on, is not only one of the tenets of the QS movement, it is also one of its most dangerous features. In Ian Hacking’s Taming of Chance, the philosopher elucidates the normalizing power of statistics—the tendency to jettison both the deficient and exceptional from the bell-curve in order to focus on the survival of the masses (or, in this case, the largest customer base). The neoliberal QS project is nothing, then, without communal self-tracking.

Secondly, Lupton refers to “data” throughout the book without ever considering what this data is made up of. That is, while she highlights the various forms self-tracked data might take (photographs, step counts, personal textual records, etc.), we are never asked to consider what it actually is. A FitBit step, for instance, might be calibrated differently from an Apple Watch step or a Garmin step. The bits and bytes in which these kinetic movements are encoded and stored can only be translated by the proprietary software owned and protected by the corporate entities that design and produce various self-trackers. Ignoring this quality of QS data undermines those who argue in favor of patients and other “self-loggers” gaining access to their “raw data”—what good is a count of my steps if I have no idea how those steps were actually calculated?
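
To make that point concrete, here is a purely hypothetical sketch (mine, in Python; no vendor’s actual step-counting algorithm is public, and every number below is an assumption) of how the same raw accelerometer trace can yield different “step” counts depending on the threshold a manufacturer chooses:

```python
import numpy as np

def count_steps(trace, threshold):
    """Count rising edges where the signal crosses a chosen threshold."""
    above = trace > threshold
    return int(np.sum(above[1:] & ~above[:-1]))

rng = np.random.default_rng(1)
t = np.linspace(0, 60, 60 * 50)               # one minute sampled at 50 Hz
gait = np.abs(np.sin(2 * np.pi * 1.8 * t))    # an idealized walking rhythm
trace = gait + 0.3 * rng.normal(size=t.size)  # plus sensor noise

print(count_steps(trace, 0.6))  # one hypothetical device's cutoff
print(count_steps(trace, 0.9))  # a stricter cutoff "sees" fewer steps
```

Same body, same movement, different numbers; which is exactly why access to “raw data” means little without the proprietary translation layer.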

These qualms, I recognize, are perhaps a bit specific. And it’s important to acknowledge that this is a text about a rapidly emerging and shifting phenomenon. Personally, as I mentioned above, much of Lupton’s work was self-affirming: as an early-career academic, it was a bit new for me to see so many references to essays and articles already in my own bibliography. So I would definitely recommend The Quantified Self for those scholars interested in jumping into the subject-matter without a strong familiarity. Just be sure to take good notes and be ready to build your own reading list.

Gabi Schaffzin is a PhD student at UC San Diego in the Visual Arts department. He finished one full book over his winter break.


Over at The New Inquiry, an excellent piece by Trevor Paglen about machine-readable imagery was recently posted. In “Invisible Images (Your Pictures Are Looking at You)”, Paglen highlights the ways in which the algorithmically driven breakdown of photo content is a phenomenon that comes along with digital images. When an image is made of machine-generated pixels rather than chemically generated gradations, machines can read these pixels, regardless of a human’s ability to do so. With film, machines could not read pre-developed exposures. With bits and bytes, machines have access to image content as soon as it is stored. The scale and speed enabled by this phenomenon, argues Paglen, lead to major market- and police-based implications.

Overall, I really enjoyed the essay—Paglen does an excellent job of highlighting how systems that take advantage of machine-readable photographs work, as well as outlining the day-to-day implications of the widespread use of these systems. There is room, however, for some historical context surrounding both systematic photographic analysis and what that means for the unsuspecting public.

Specifically, I’d like to point to Allan Sekula’s landmark 1986 essay, “The Body and the Archive”, as a way to understand the socio-political history of a data-based understanding of photography. In it, Sekula argues that photographic archives are important centers of power. He uses Alphonse Bertillon and Francis Galton as perfect examples of such: the former is considered the reason why police forces fingerprint, the latter is the father of eugenics and—most relevant to Sekula—inventor of composite portraiture.

So when Paglen notes that “all computer vision systems produce mathematical abstractions from the images they’re analyzing, and the qualities of those abstractions are guided by the kind of metadata the algorithm is trying to read,” I can’t help but think about the projects by Bertillon and Galton. These two researchers believed that mathematical abstraction would provide a truth—one from the aggregation of a mass of individual metrics, the other from a composition of the same, but in photographic form.

Certainly, Paglen has read Sekula’s piece—the New Inquiry essay often references “visual culture of the past” or “classical visual culture” and “The Body and the Archive” played a major part in the development of visual culture studies. And it’s important to note that my goal in referencing the 1986 piece is not to dismiss Paglen’s concerns as “nothing new.” Rather, I think it’s important to consider the “not-new-ness” of the socio-political implications of these image-reading systems (see: 19th century scientists trying to determine the “average criminal face”) alongside the increased speed and “accuracy” of the technology. That is, this is something humans have been trying to do for hundreds of years, but now it is more widely integrated into our day-to-day.

At the end of his essay, Paglen offers a few calls to action:

To mediate against the optimizations and predations of a machinic landscape, one must create deliberate inefficiencies and spheres of life removed from market and political predations–“safe houses” in the invisible digital sphere. It is in inefficiency, experimentation, self-expression, and often law-breaking that freedom and political self-representation can be found.

I really like these suggestions, though I’d offer one more: re-creation. That is, what if we asked our students to recreate the type of abstracting experiments performed by the likes of Galton and Bertillon, but to use today’s technology? Better yet, what if we asked them to recreate today’s machine-reading systems using 19th century tools? This sort of historical-fictive practice doesn’t require students’ experiments to “work”, per se. Rather, it asks them to consider the steps taken and decisions made along the way. The whys and hows and wheres. In taking on this task, students might be able to more concretely grasp the subjectivity inherent in our present-day systems by calling out the individual decisions that need to be made during their development. We might illustrate possible motives behind projects like Google DeepDream or Facebook’s DeepFace.

Within our new algorithmic watchmen are embedded a plethora of stakeholders and the things they want or need. Paglen, unfortunately, doesn’t do a very good job reminding us of this (he paints a picture, so to speak, of machines reading machines, but forgets that said machines must be programmed by humans at some point). And I’d be curious to know what he had in mind when he refers to “safe houses” without “market or political predations” (as a colleague recently reminded me, even the Tor project can thank the US government for its existence).

To conclude, I’d like to highlight an important project by an artist named Zach Blas, Facial Weaponization Suite (2011–2014). The piece is meant as a protest against facial recognition software in consumer-level devices, corporate and governmental security systems, and research efforts. “One mask,” writes Blas, “the Fag Face Mask, generated from the biometric facial data of many queer men’s faces, is a response to scientific studies that link determining sexual orientation through rapid facial recognition techniques.” Blas uses composite 3D scans of faces to build masks that confuse facial recognition systems.

Facial Weaponization Suite by Zach Blas

This project is important here for two reasons: firstly, it’s an example of exactly the kind of thing Paglen says won’t work (“In the long run, developing visual strategies to defeat machine vision algorithms is a losing strategy,” he writes). But that’s only true if you see Facial Weaponization Suite as simply a means to confuse the software. On the other hand, if you recognize the performative nature of the work—individuals walking around in public wearing bright pink masks of amorphous blobs—you quickly understand that the piece can also confuse humans, i.e., bystanders, hopefully bringing an awareness of these machinic systems to the fore.

Wearing the masks in public, however, can be a violation of some state penal codes, which brings me to my second point. Understanding the technology here is not enough. Rather, the technology must be studied in a way that incorporates multiple disciplines: history, of course, but also law, biomedicine, communication hierarchies and infrastructure, and so on.

To be clear, I see Paglen’s essay as an excellent starting point. It begins to bring to our attention what makes our machine-readable world particularly dangerous without tripping any apocalyptic warning sirens. Let’s see if we can’t take it a step further, however, by taking a few steps back.

Gabi Schaffzin is a PhD student in Visual Arts, Art Practice at UC San Diego. He wears his sunglasses at night.


23andMe Co-Founder Anne Wojcicki
by Thomas Hawk on Flickr

Anne Wojcicki thinks it’s “incredibly meaningful” to honor scientists who are “purists” who “love what they do” and have “never looked for any kind of celebrity.” So she and a slew of other Silicon Valley technocrats gathered to recognize these altruistic innovators at the NASA Ames Research Center in Mountain View last week by giving them a spotlight on primetime network television and also $3 million each. At the event, called the Breakthrough Prize ceremony, the 23andMe CEO sat down with a reporter from Bloomberg to discuss the award, which, per her interviewer, should “empower scientists just like technologists are empowered in Silicon Valley.”

It is most likely wishful thinking to presume that the curriculum for a Yale bachelor of science in molecular biology—of which Wojcicki is a recipient—would include the likes of Ludwik Fleck or Bruno Latour. The former, a physician and biologist, was the author of Genesis and Development of a Scientific Fact, originally published in German in 1935, though not translated into English until 1979. In it, Fleck tracks the history of research around syphilis, eventually outlining the concept of a “thought-collective”, a way to consider the social act of cognition—that is, how an idea changes and is passed down through history, from and to different individuals and circles. Syphilis, argues Fleck, as it was first known at the end of the 15th century was not the same syphilis that was cured nearly 500 years later. Latour, whose breakthrough work, Laboratory Life: The Construction of Scientific Facts (co-written with Steve Woolgar), was published the same year as the English translation of Genesis and Development, is most famous for enacting a sociology of science based on ethnography. He and Woolgar spent time in a laboratory watching how science is made—from discussions regarding funding and publishing to actual work at lab benches.

Reading Fleck and Latour helps us realize that celebrating the individual is counter to how science works. Then again, to argue that the Breakthrough Prize should be more focused on the collective or that we should jettison the fantasy of a mad scientist isolated in a lab somewhere is to pretend that the Nobel Prize or MacArthur Genius Grant are not two of the highest honors bestowed in the field. But I have no interest in further critiquing this silly award show (which you can catch on Fox this Sunday night at 8/7c!). Instead, I think it’s worth paying close attention to what individuals like Wojcicki are saying and doing when it comes to how they see science in action—a science they want us to believe is hindered by attempts to critique it through social and political lenses. One that is revolutionary in its own right, performed for the sake of truth, regardless of ulterior, capitalist motives.

During the same Bloomberg interview, when asked for her thoughts on the impending Trump administration, Wojcicki offered that “I’m a wait and see [kind of person]. I want to be able to judge once things are happening.” This was December 4, 2016—26 days after Trump was elected and started building his cabinet. Nine hundred and fifty-six days after he tweeted that there are “many such cases” of vaccines causing “AUTISM”. One thousand two hundred and eighty-seven days since he argued that “Fracking poses ZERO health risks.” And 1,463 days after he declared that “The concept of global warming was created by and for the Chinese in order to make U.S. manufacturing non-competitive.”1 What, exactly, is Wojcicki waiting for?

According to the Silicon Valley executive, she’s waiting to find out who is going to determine the rules which govern her business: the heads of the Food and Drug Administration and Health and Human Services. She notes that she is glad to have found out who will run HHS, though she doesn’t offer her opinions on the nomination of Representative Tom Price (R-Ga.)—a man whose career has been marked by, per The Huffington Post, “a constant…hostility to government interference with the practice of medicine.” Instead, she declares that she is “excited about the idea of potentially more freedoms.” Freedoms, one assumes, to go back to doing what made her company famous to begin with: using a customer’s DNA to provide them with their probability of getting sick. 23andMe was ordered to stop doing exactly that when, in a November 2013 letter, the FDA declared that “Most of the intended uses for [23andMe results]…have not been classified and thus require premarket approval or de novo classification.” Simply put, the FDA didn’t think it was appropriate for a company to tell its customers things that a doctor should be saying. This is, of course, the same FDA which is set to be run by Jim O’Neill, noted venture capitalist, libertarian, and Peter Thiel colleague.

It’s worth pointing out here that, per the FEC database, Wojcicki has given about a quarter-million dollars’ worth of donations to Democratic Party candidates and committees over the past couple of years. This is a critical point, not because I think recognizing her support for the Clinton campaign and others is any sort of saving grace. Rather, we have to realize that the kind of rhetoric used here by Wojcicki and others—about empowering “scientists just like technologists” or believing that “with things like the Breakthrough Prize…it doesn’t matter what the government is saying as much”—is not partisan. In an interview a week earlier, she argued that an education system “decentralized down to the individual” will empower our next generation of scientists.

This is someone in charge of a private company collecting and storing over a million individuals’ DNA data. And while she notes that the company does not sell that data to large biotech and pharmaceutical companies, it charges quite a premium to engage in “research projects” with those companies, eventually sharing “anonymized” records with them. Combine this with the Breakthrough Prize and 23andMe becomes the gatekeeper and funder for research (not to mention the supplier—their recent “Genotyping Services for Research” offering lets universities and other labs purchase kits for study participants, effectively outsourcing their genotyping capabilities to Mountain View).

When Jonas Salk—whose institute was the subject of Latour’s Laboratory Life—championed the development and reproduction of a polio vaccine, he didn’t patent it. That’s not to say, however, that he and his fellow researchers weren’t properly funded (though it’s worth noting he never won the Nobel Prize). Instead, money came pouring in from donations collected by an organization called the National Foundation for Infantile Paralysis, founded by FDR and eventually renamed the March of Dimes due to the small donations it received from citizens. To suggest that today’s scientists use Salk as some sort of altruistic model is naive and not at all the goal of this blog post. But what are we left with when education and research and science are all “decentralized down to the individual”? This is a dangerously ahistorical and anti-communal approach to science. What sort of rights or powers do we give up when we acquiesce to a system of research based on market-values and, as one Forbes contributor suggests we do, buy into a system that “gives real scientists more celebrity treatment through awards shows, television, movies, advertisements and other means”? What happens when we treat science like a business, government like a menace, and the individual as the only way forward?

1. I won’t link directly to Trump’s tweets, but for sources on my quotes, please see this piece from Scientific American.

Gabi Schaffzin is not a scientist, though he once played the Wizard of Oz in a fifth grade production. 

After my last post on Neil deGrasse Tyson and the (seemingly fictional?) War on Science, I received feedback from a few people suggesting that there was more to be written. In fact, more has been written; for well over a century, the field of science studies has been developing and shifting, made up of scholars studying the history, philosophy, and sociology of “science” and many of its cousins (technology, asceticism, “innovation”, et al.). I am an artist who is starting to immerse myself in the field in order to strengthen my critique of technological mediation in culture. So while I don’t plan on using science studies frames in every one of my posts, I do expect there to be shades of its tenets throughout.

That said, where should one start to understand the history and development of science studies? There are, of course, the mainstays: best-sellers like Thomas Kuhn’s The Structure of Scientific Revolutions, which reintroduced the term “paradigm” to the sciences and broader culture in general. Lorraine Daston and Peter Galison’s Objectivity is superb, especially for those of us on the visual culture side of things. But these works mark the spot on which I’d like to end, not begin.

Pierre Duhem

The late 19th century physicist, historian, and philosopher Pierre Duhem is a fascinating lesson in the importance of learning history, especially for a field with as widespread an influence and as frequent changes as science. Alive during France’s Third Republic, Duhem saw the pushing of the Catholic Church out of government as detrimental, a willful ignorance of the Church’s significant contributions to society (especially, of course, to science) in the Middle Ages. This did not win him many friends in Paris and he was exiled to Bordeaux where, even without direct access to archives, he still penned the ten-volume Le système du monde: histoire des doctrines cosmologiques de Platon à Copernic (The System of the World: A History of Cosmological Doctrines from Plato to Copernicus). Maybe don’t try to read that. Instead give “The English School and Physical Theories” a looksee—it’s a fascinating case study in French nationalism and the subjectivity inherent to scientific approaches.

Boris Hessen

Here’s an amazing story: in 1931, a delegation from Soviet Russia gets on an airplane to London to attend the Second International Congress of the History of Science. On the plane are three notable figures: Boris Hessen, Nikolai Bukharin, and a guy named Ernst (né Arnosht) Kolman. Bukharin, who struggled for power with Stalin at one point (which gives you a hint of where he ends up), forgets his paper in Moscow and they turn the plane around. Kolman is there simply to watch over the delegation and make sure it espouses the proper party politics. Meanwhile, Hessen, a physicist who spent some time pre-revolution studying in Edinburgh, spends the entire flight writing out a paper entitled “The Social and Economic Roots of Newton’s Principia”, which is then typed up by a pool of secretaries (who came with Hessen et al.) and published for the conference.

In the paper, Hessen makes an argument that attempts to resolve the Scientific Socialist philosophies governing his homeland (℅ Marx) and their absolute progressivism with the relatively new Einsteinian theories of relativity. In a sense, Hessen argues that Einsteinian theories must be incorporated into the party philosophy so that scientific progress might be achieved—but he has to do so by discounting Newtonian physics through an analysis of the social contexts in which they were developed. The historian Loren Graham (who, somehow, runs into Kolman at another International Congress in Moscow in 1971) notes that this paper is “one of the most influential reports ever presented at a meeting of historians of science.” So, probably worth a read.

Robert Merton

Born Meyer Schkolnick, Robert Merton chose to go by a stage name when performing magic (the show kind, not the Harry Potter “real” kind) as a child around Philadelphia in the 1920s. That stage name probably helped when he tried to get into Harvard in 1931 and was accepted. I cited Merton in my previous post, but I think he’s an important figure to review here, albeit briefly.

Merton, who was in the audience at Hessen’s talk in London in 1931, toyed with the question of “can there be a sociology of science?” This also explains why a search for his name on The Society Pages turns up a number of results. Merton understood that religion has an important and complicated relationship with science (as I sought to demonstrate previously), but his understanding was different from what had come before—mainly, Hessen’s arguments about Newton, Descartes, and God. Instead, his turning to the Protestant ethic illustrates his commitment to Weberian theories of society.

This list is, of course, a very small tip of a very large iceberg. But this is not a science studies blog and I’m not a science studies scholar (yet), so I hope you’ll bear with me and check out those whom I’ve recommended above. I’m also hoping that, if you know of any critical figures in the history of science studies, you’ll contribute them in the comments, below. I’m especially eager to learn of individuals who are not old white dudes.

But I’m also eager to post this list because when I tell most people that I’m working in science studies, the first response is almost always, “science what?” The idea that there is a place outside of science from which to understand the field is foreign to many, since science is posed as, in itself, an answer to many quandaries about the natural world. The history of science, philosophy of science, and sociology of science are all critical places from which to understand so much of our culture. I hope I can bring more insights from the field throughout my contributions to this blog.


On the second day of Rosh Hashanah, the rabbi at my synagogue gave a sermon about four themes, all of which he felt needed addressing when there was a larger crowd than usual (though, it should be noted, the sanctuary was sparsely filled, especially compared to the SRO crowd the day before): racism, sexism, anti-semitism, and “the war on science.” As he recited off his list, the first three items made perfect sense to me; I was even proud to hear him cover current events like the Black Lives Matter movement and Donald Trump’s misogyny and how they are understood within Jewish tradition (hint: the first one’s good, the second one’s bad). That fourth item, though, piqued my curiosity a bit.

Since when did a war on science begin? Is it like the ill-fated War on Drugs? Or the ill-fated War on Terror? Or the ill-fated War on Poverty?

It turned out the rabbi was talking specifically about climate change deniers and their penchant for ignoring the overwhelming evidence pointing towards the anthropogenic damage we’re doing to our planet. I admit that it was a bit refreshing to hear a clergyman align his religious values with scientific discourse, encouraging his congregation to do the same. He fell short of explicitly blaming market-based motives or any specific lobbying efforts for why the country as a whole struggles to enact legislation. But that’s to be expected considering his delicate position as congregational leader of a highly varied group, culturally and economically.

“Science”, of course, is a complex apparatus, an assemblage of research, experiments, textbooks, journals, breakthroughs, studies, and—oh yeah—scientists. If there is, in fact, a war on science, then it seems quite obvious to me that its allied forces (i.e., science’s great defenders) would be led by General Neil deGrasse Tyson, astrophysicist, host of National Geographic’s Cosmos, and oft-retweeted Twitter personality.

You may remember some of Tyson’s best work:

This tweet declaring that there are objective truths and that, by aligning ourselves with them, the world will become a better place.

Or this one, awkwardly defending Trump supporters.

And then there’s this one.

This last tweet is particularly characteristic of a common Tyson theme: science provides truth and order, while religion is dangerous and arbitrary.

Never mind that “science” can be used for nefarious purposes—see the white supremacists using 23andMe DNA test results to prove they are, in fact, white. Forget that, by its nature, science resists progress by assuming anomalies to be innocuous—see Thomas Kuhn in The Structure of Scientific Revolutions:

By ensuring that the paradigm will not be too easily surrendered, resistance guarantees that scientists will not be lightly distracted and that the anomalies that lead to paradigm change will penetrate existing knowledge to the core (65).

Instead, realize that science would probably not be science without religion.

In a 1983 essay titled “Motive Forces of the New Science”, Robert K. Merton offers that the ascetic imperatives of the Protestant ethic laid the groundwork for the consecration of scientific inquiry. “Science embodies patterns of behavior which are congenial to Puritan tastes,” he writes, “Above all, it embraces two highly prized values: utilitarianism and empiricism” (119). Science and technology provided the tools and frameworks for increased power to merchants, a rising class in seventeenth century England. In a Puritan value system that preaches “methodic labor” and “constant diligence in one’s calling” (118), the orderly and dedicated actions of the scientist align beautifully. “And society, once dubious of the merits of those who devoted themselves to the ‘petty, insignificant details of boundless Nature,’ largely relinquished its doubts” (112).

Merton is sure to acknowledge that “Many ‘emancipated souls’ of the present day” are unfamiliar with, even made uncomfortable by, the aligned values once (still?) shared by science and religion. He points out that this is projecting twentieth (or twenty-first, as the case may be) century values onto seventeenth century society. “Though it always serves to inflate the ego of the iconoclast and sometimes to extol the social images of his own day, ‘debunking’ may often supplant truth with error” (116).

It seems illogical and unlikely that an accomplished academician like Tyson would be ignorant of this sort of basic sociology of science. But you can’t make it to the rank of General of the Scientific Defense Forces if you are not debunking, especially when your arguments are tweetable and GIFable.

So maybe the War on Science is actually most like the War on Christmas, Bill O’Reilly’s nonsensical, Anglo-centric effort to convince his followers that saying “Happy Holidays” instead of “Merry Christmas” is somehow anti-Christian. Don’t worry, though, Gen. NDT has an answer for that, as well:

Content Advisory: The following contains references (including an embedded video) to sexual assault and misogyny.

Angela Washko @ UCSD

At the end of the panel following Angela Washko’s artist talk at UC San Diego’s Qualcomm Institute, there was time for two questions. The first came from a man in the audience who jumped to the mic to frame the artist’s work within the inevitable deluge of AR, or augmented reality, technology (think holding up your phone and seeing a PokéStop where another passerby might just see the local Walgreens). The audience member, a computer scientist from UCSD, wanted to know what would happen once we “throw away this technology that we’re tethered to.”

Washko had begun the evening with a presentation about her work, starting with her performances in World of Warcraft, wherein she goes to some of the most popular areas in the game to perform certain actions or ask other players about issues like abortion and feminism. I found the piece both charming and troubling: at one point, Washko’s avatar orchestrates a conga-line-style dance party in a field where orcs and trolls frolic in harmony while acting like chickens (just trust me, go to 25:00 in the video below). During the WoW interviews, the situation was a bit less whimsical. In Washko’s words:

I realized that players’ geographic dispersion generates a population that is far more representative of American opinion than those of the art or academic circles that I frequent in New York and San Diego, making it a perfect Petri dish for conversations about women’s rights, feminism and gender expression with people who are uninhibited by IRL accountability.

She finished her talk with her most recent project, The Game: The Game, a choose-your-own-adventure-style game compiled using only footage and quotes from pickup artists’ how-to books and DVDs (the DVDs are priced prohibitively high, presumably to deter those seeking only to critique the PUAs; per Washko, “I got a grant, so I bought them”). By this point, the audience was already familiar with both her preferred subjects of interrogation and the extremely brave way she places herself at risk for the sake of her work. For her UCSD MFA thesis project, Washko convinced notorious pickup artist Roosh V to agree to a video interview. This was around the time during which #gamergate was garnering a great deal of media attention and Roosh V had dedicated a section of his site’s forum to the type of misogynist discourse that accompanied the hashtag on various other platforms. That Roosh V would agree, then, to be interviewed by a self-declared feminist and artist is a testament to Washko’s persistence; as part of the negotiations, she sent a photograph of herself to the PUA, allowing herself to be judged worthy of Roosh’s attention.

While Washko’s interview with Roosh is itself an important piece, the first clip from The Game: The Game that we saw was from a different pickup artist who goes by “RSD Julien”. In this clip, Julien begins by explaining that placing the word “now!” at the end of any declaration sets you up as an Alpha amongst Betas (“Afterparty. Now!”, “Cab. Now!”). Admittedly—ashamedly—I found myself chuckling. A serious-looking man yelling “now!”, declaring this would get him laid, the camera cutting between a head-on and side shot. It looked like a Real World audition tape. My lighthearted reaction quickly receded, however, as a significantly more troubling shot played on-screen: a hidden camera captures Julien forcing kisses onto an unwilling victim, shaming her into leaving her friends to get in a car with him, and finally carrying her away off camera. At this point, I was embarrassed for having ever chuckled.

During the panel after Washko’s presentation, Benjamin Bratton asked the artist who The Game: The Game is for, “who should be playing this?” Her answer was simple: men. Femme-presenting women, she noted, experience the goings-on of the narrative on a daily basis; they need not be reminded. Men, on the other hand, usually end up playing the game twice, “to see what actually happens if they try and go along with the pick-up artists.” Immediately, I began to consider Jean Rouch’s cinema of cruelty, an adaptation of avant-garde playwright Antonin Artaud’s Theatre of Cruelty, a mode of performance which assaults the audience, drawing great affect from its subconscious. In Sensuous Scholarship, Paul Stoller, writing about both Artaud and Rouch, notes that, “In a cinema of cruelty the filmmaker’s goal is not to recount per se, but to present an array of unsettling images that seek to transform the audience psychologically and politically” (120). Washko’s work did just that; the anxiety it produced in that room was palpable.

And so we return to the first question asked of the artist: “what happens when AR takes over…the game is everywhere…when we are autonomous and the possibilities are endless?”

Unsurprisingly, the question was fielded first by panelist Jurgen Schulze, another computer scientist and professor at UCSD, who presented a utopian vision of being able to paint the characters we wish to see on the individuals around us. We will make our worlds whatever we want them to be. Bratton followed up with a refreshingly realistic view (albeit in the same obfuscating jargon with which he writes), describing an AR-laden world “wherein whatever form of cognitive totalitarianism you happen to subscribe to becomes literally the perceptual platform by which you sort of work through and then the incommensurability of the gamifications of interaction becomes that much noisier.” This felt more like it. After all, Washko had already explained that 55% of female avatars in WoW are actually played by men who often say that they prefer to look at a woman’s backside running around rather than a man’s. Imagine, then, these men with their AR goggles, painting whatever fetish they wish all over the town, reproducing an already overbearing sense of ownership over the objects, places, and—of course—people around them, but this time, with a convenient and dualist explanation that “it’s just a game” or “it’s all virtual.”

Fortunately, the second audience question left us on a much more productive note: a graduate student asked Washko to discuss the various points of entry and venues outside of art or academic circles in which she has performed her WoW actions. Her response—that she considers the performance of her work in WoW itself to be “outside the art-world context”—was a quick one, but it was the question that left us with a critical reminder. Throughout her work, Washko has continuously used her body—be it a photograph, an avatar, or her actual presence in a space—to facilitate and gain access to critical discourse within and surrounding technologically mediated spaces. One need only look at her Twitter mentions to begin to understand what sort of sacrifice this represents. Panel moderator Ricardo Dominguez (who is also an activist, a professor in the Visual Arts department at UCSD, and one of the founders of the Electronic Disturbance Theater) noted at one point that “code and algorithms carry with them histories and other types of scripting that we’ve dealt with and that we have to deal with daily.” Instead of dreaming about what might be once we don’t have monitors or keyboards as intermediaries, Washko has worked for a decade on bringing these histories and scripts to the fore. She has taken them, written them into assaults on her audience’s senses, and drawn attention to a critical and continuous discourse, all at the risk of her own safety and wellbeing.

Gabi Schaffzin is a PhD student in Visual Arts at the University of California, San Diego. You can find more of his work at his website or on his Twitter timeline.