From the 1967 edition of The Measure of Man & Woman by Henry Dreyfuss

Last week I put on a spandex suit and posed in front of my phone so that an app could capture photos of my body (and no, this post is not, I promise, an attempt to encroach on Jessie and PJ’s territory). The suit, made by the Japanese clothing company ZOZO, is black with dozens of white circles on it. Each circle is covered in a unique pattern of dots, which ZOZO’s app uses to identify the circle’s position on the body and, consequently, map a set of measurements: arm length, waist size, inseam, etc. From there, the app makes recommendations based on what size clothing would fit you best. Per the company’s “About” page, they “create clothing patterns using real people in dozens of diverse shapes and sizes.” The founder, Yusaku Maezawa, explains further:

“ZOZO was created to be adaptable to each and every person. You don’t have to adapt to ZOZO. ZOZO adapts to you. People are unique, but they also want to be treated and accepted as equal. This concept is reflected in the ZOZO logo. The circle, square and triangles are all different colors and shapes, yet they have the same surface area. They are all unique but still equal.”

If you, like me, pay close attention to the quantified self movement, then you’ll find this rhetoric extremely familiar. 23andMe promises that its service will delve into the “One unique you”. FitBit urges you to “Find your fit”. These are products that, as Whitney and I have argued over the course of the last few years, are not truly individualizing in nature, but are much more complicated than that—often, aggregation is more critical than individualization. In this post, I’d like to echo that sentiment, but also ground what ZOZO is doing here in the history of another anthropometric tool, one developed for the purposes of so-called “human-centered design” and which has seen a recent resurgence in popularity.

Soon after World War II, Henry Dreyfuss Associates was hired by the US Army to design the cockpit for a new tank. In order to best simulate the cockpit environment and contextualize what the designers were actually working on, employees at the firm—which had become famous for creating industry-standard designs for everything from a Bell Labs telephone handset to a New York Central Railroad locomotive engine—drew a life-size cross-section of the cockpit, complete with pilot. The pilot was annotated with measurements, culled from sets of previously recorded data about the sizes and ratios of various male bodies. “Without being aware of it,” writes Dreyfuss in 1967, “we had been putting together a dimensional chart of the average adult American male.”

Eventually, HDA named the figure Joe and began building on the dataset. The firm’s Alvin Tilley drew the figure from different angles and added a female form, Josephine. Dreyfuss declares that, by 1959, they “were in sight of something we had dreamt of for years: a mini ‘encyclopedia’ of human factors data for the industrial designer, presented in graphic form” (1967). HDA expanded each diagram to include three figures: one based on 2.5th percentile data, one based on 50th percentile (median) data, and one at the 97.5th percentile. The firm’s founder is quick to acknowledge that the diagrams “are intended as points of departure for your own thinking. Unless they are used with imagination, they are all but worthless.”

Humanscale Selector 2b. Seat/Table Guide, from Kickstarter page

The final edition of HDA’s The Measure of Man and Woman: Human Factors in Design was published in 2002, though a sort of spin-off was published by Tilley and Niels Diffrient in the 70s and 80s called Humanscale. This project incorporated Dreyfuss’s data, but also “the most up-to-date research of anthropologists, psychologists, scientists, human engineers, and medical experts” (2017). Dozens of “Pictorial selectors”—diagrams with windows through which data changes as a user turns a rotary selector—feature a plethora of body types including wheelchair users, children, and pregnant women. The original Humanscale can be purchased today, if you can find it, starting at a few hundred dollars, but in 2017, the design firm IA Collective launched a Kickstarter campaign to reissue the full manual with updated data and expanded figures. On their Kickstarter page, the publishers of this new edition write:

The Humanscale reissue will introduce a new generation of designers, engineers, architects, and up-and-coming inventors to human factors and ergonomics, which are key aspects of user-centered design. It will provide real utility to anyone getting started on their designs by providing simple access to a range of human factors data.

The project, which was launched with a $137,800 goal, raised $326,109 from 1,704 backers.

 *  *  *

The author in his ZOZO suit

After donning my ZOZO suit, I stood in front of my phone in a position not unlike that of Joe and Josephine from The Measure of Man and Woman and Humanscale: looking straight ahead, feet shoulder width apart, arms at my side. Every few seconds, the female voice coming from the app would instruct me to turn to face a number on the imaginary clock lying on the ground under me: “Turn to five o’clock…turn to six o’clock, you’re halfway there.” After 12 photographs, I was allowed to pick my phone back up and wait another few seconds for the data to be processed. The resulting information was presented as a sort of 3D bizarro-Joe, an uncanny valley version of the data’s subject: me.

ZOZO’s tagline reads, “Custom-Fit Clothing for a Size-Free World”. And yet I spent the next ten minutes obsessing over why one arm is longer than the other and how I could possibly have a “38.5 inch” waistline when I buy 32-inch jeans. Falling into ZOZO’s trap, I did what I promised myself (and my wife) I wouldn’t do and purchased a pair of the jeans recommended to me. They weren’t cheap, but at just under $60, they cost well below many “fashion” jeans brands. Plus, according to ZOZO, these would be the best-fitting jeans of my entire life—tailored to me and only me. When they arrived two weeks later, I could not believe how poorly they actually fit (maybe I’m just not with the fashion these days, but to wear these properly, it would seem I have to button them above my navel…?). Upon reaching out for a return authorization, a company representative asked for more detail “to improve the app.” It seems I’m not only dressing up for them, I’m also beta testing.

 *  *  *

On the surface, ZOZO’s service is the anti-Humanscale: bodies are not categorized, they are “accurately” measured. If we consider that Dreyfuss’s project was predicated on categorizing the human body and jettisoning the non-normative (even by presenting 2.5th and 97.5th percentiles, the diagrams still designate these bodies as outliers, not to mention how many bodies are excluded entirely), then does the ZOZO project mean that data will become more inclusive? Putting access to the tool aside (carefully), collecting more data does not mean a more just world. Instead, it means a more vulnerable population.

ZOZO claims to “not sell your data to third parties, ever”, but their Privacy Policy notes that the data can be transferred to “A prospective buyer in the event of a merger, acquisition, or sale of any part of our business or assets.” Also, height, weight, and body measurements may be used “To collect statistical information and use such statistical information for marketing and other research purposes.” And, frankly, in the age of the once-a-week data breach, we must come to terms with the fact that our data is never truly “safe” or “private”.

The author’s measurements, per the ZOZO app

Admittedly, ZOZO complicates where I might normally finish off this piece. Most quantified self tools can be pointed to as just another indication of our cultural obsession with measurement and tracking. But we’re talking about making sure your pants fit. For lots of bodies (myself included), most standard off-the-shelf sizes don’t work. So what’s so bad about throwing on a spandex suit and having your picture taken?

In their 1999 Sorting Things Out, Geoffrey Bowker and Susan Leigh Star declare that “to classify is human” and go on to make an excellent argument about the power exercised through these classifying schemes. If we play out a plausible scenario in which custom-tailored clothes become available via app to those with the physical, financial, and logistical capabilities, what does that mean for those without them? How will it affect the individuals along each step of the supply chain? Who will gain and hold on to access to the data? Alternatively, will custom-fit clothing encourage less unhealthy dieting? Will anxiety over fit go away? What anxiety will take its place?

For now, I’ve got a $4 Halloween costume, some blackmail-worthy photos of me in a spandex suit with white dots, and a pair of jeans that need to be shipped back to Ohio.

Gabi Schaffzin is a PhD candidate in Art History, Theory, and Criticism, with a concentration in Art Practice, at UC San Diego. He is sure he will regret adding photos to this post.

About 60 miles inland from the Pacific Ocean, there is a break in the already existing wall built on the border between the United States and Mexico. When you stand at the Shell Gas Station (the one with the Subway in it) off exit 73 on the I-8, near Jacumba, turn towards the southwest and look at where this opening in the fence begins. Pay attention to the terrain just east of where the last bar of steel juts out of the ground, and it won’t take you long to figure out why the wall stops: anyone who attempts to traverse the 10 miles of wilderness between the last road in Mexico and the Californian freeway must be well equipped, physically and mentally.

The absence of a barrier doesn’t mean that American border patrol agents leave the space unwatched and unpatrolled. The militarization of the US-Mexico border started well before the creation of the Department of Homeland Security, President Clinton’s “war on crime”, or the “war on drugs”. The US Border Patrol was created in 1924 in an effort to keep immigrants from Asian countries out of the United States; during that era, its agents also sought to block illegal shipments of alcohol during Prohibition.

The 1996 “Illegal Immigration Reform and Immigrant Responsibility Act”, signed into law by Clinton, was mainly responsible for authorizing the mass deportation of undocumented migrants, along with a major expansion of the barrier between the US and Mexico and a secondary wall slightly north of the primary structure. After 9/11, billions more (one estimate puts spending at $286 billion since 1986) were poured into the border: Blackhawk helicopters, drones costing $18 million apiece, 20,000 border patrol agents in military-grade Humvees, heat sensors, seismic sensors, motion sensors, and willfully disregarded vigilantes in border-adjacent towns all stand in the way of individuals looking to cross the border—and that’s once they get there.

A large number of the migrants come from Central America to escape political or gang violence. Once in Mexico, options to get to the US border are as dangerous as they are limited: one “popular” way is to hitch a ride on top of a freight train known as La Bestia (The Beast), or El tren de la muerte (The Death Train). Riding this train means risking kidnapping, robbery, or serious injury (limbs are easily severed by obstacles along the train’s route). To reinforce a point made by immigrant and refugee rights activists worldwide: if someone is willing to risk absolutely everything for entry into this country, a place that, with all of its very real and very serious faults, is still safer than the place from which that individual is fleeing, what right do we have to deny them entry, treat them like an animal, arrest them, and/or deport them?

* * *

I’m writing this here because I’m not sure what else to write.

Cyborgology is a pretty laid back operation and I don’t necessarily feel pressure from my editors to post. But I’m listed as a contributor to this community and I’d like to live up to the title. Over the past few months, I’ve had some work and school obligations that have slowed me down, but what’s really kept me from posting since my last essay over six months ago is the absolute fear that (at the risk of being a digital dualist) what I’m going to write here does not have enough to do with what’s going on out there (I want to be very clear here that my colleagues at Cyborgology have written a good number of posts about the current administration and so my fear is not based on whether or not my fellow authors focus on the right issues—they do).

Reading about the immigrant detention centers inspired me to turn to a favorite of Cyborgology authors and prolific authority on discipline, Michel Foucault. I found plenty there from which to draw connections. Similarly, Giorgio Agamben’s State of Exception would be extremely relevant here. And normally, that’s how I would start a post—think of a technology, say, seismic sensors embedded in the desert sand, and then turn to someone who has written abstractly about the sort of apparatus of control embedded within the sensor. Or I’d turn to the use of DNA testing for the reunification of families and consider what it means when an archive of marginalized bodies is being built anew, fortified with the very code of each individual’s physical manifestation.

I just don’t see where my analysis changes the fact that these abominations exist. Foucault was a brilliant historian—a self-proclaimed “archeologist” of power. He set up a multitude of signposts that we can read today to recognize how structures of society organize and control the individual. Has his work changed anything?

This is not an argument against online activism—we know how important and inclusive that element of resistance is. And it’s not a plea to ask you to get into the streets and start punching Nazis (for all I know, you’re already doing that). A few times during the last couple of years, I’ve gone out to the Jacumba wilderness and left water and supplies with an amazing organization called Border Angels. And I’ve attended a few protests. But, primarily, when I fret over what I’m doing to make change, I convince myself that being an artist, historian, and writer is what I do well and what I should keep doing. Is that going to be enough? I try hard to be an ally to the marginalized, but when does allyship fall too short?

* * *

My grandmother was lucky enough to escape Europe while the rest of her immediate family was sent to Auschwitz. Three of my great aunts, through a combination of luck and generosity from otherwise barbaric Nazis, lived through the experience until the camp was liberated. They traveled on foot and by hitchhiking from one Red Cross shelter to another before finally getting back to their family’s house in Czechoslovakia. They discovered that their neighbors had taken over their home and the shop their father had run from within it. “You were dead,” their neighbors proclaimed, “we figured you weren’t coming back. This is ours now. Go back to being dead.”

Walking through the desert near the routes taken by migrants seeking a better life in the United States, you’ll see evidence of those people: empty cans of food, a body-sized imprint in the sand in a crawlspace. One time I saw a Little Mermaid backpack; another time, two large dish sponges with shoelaces and foot imprints—most likely someone trying to walk without leaving footprints or disturbing a seismic sensor. How can I make sure that these people—if they make it through the mountains, and the desert, and past the helicopters and drones, and through the sensors, and to the highway, and out of detention centers—how can I make sure that they aren’t told, “go back to being dead”?

Warning: Mr. Robot spoilers abound (but, come on, what are you doing still not caught up with this show?). 

In my first post for Cyborgology last year, I suggested that perhaps Elliot Alderson, the paranoid and delusional protagonist of USA Network’s Mr. Robot, was the epitome of a Deleuzian Body Without Organs—an extreme state wherein rhythms and intensities not available in the anatomical body provide access to a plane of immanence (though I also incorrectly suggested that Elliot is schizophrenic). I pointed to his own self-tracking technique—a journal—and to how that journal was used to take advantage of him. If you’ve read my other posts here, you won’t be surprised to hear that I then used all of that as a critique of the quantified self. What can I say? When all you read is QS critique, everything you read is QS critique. After making my way through season 3, however, I’d like to revisit Elliot and consider what other sorts of theoretical signposts I might use to understand his character.

I am particularly interested in a scene from s03e04, “eps3.3_metadata.par2”, in which Mr. Robot, after a particularly aggressive rant to Tyrell and Angela, turns back into Elliot. The audio pops and the video skips a bit—Mr. Robot is “glitching”. In most of season 2, viewers had been cued to Elliot’s transitions by light static in the audio track, but show creator Sam Esmail and crew have implemented this trope in progressively more encompassing ways over the course of the series. The use of a glitch in film or TV is nothing new—from the art house film A Colour Box (1935) by Len Lye to The Matrix and Wreck-It Ralph. But in using the glitch to indicate the transition from Elliot to Mr. Robot and back (we rarely see the latter, making the episode four scene even more notable), Esmail and team use the trope to delineate a liminal space, one that might best be characterized by Legacy Russell’s Glitch Feminism.

As defined by Russell on this blog in 2012, Glitch Feminism

embraces the causality of “error”, and turns the gloomy implication of glitch on its ear by acknowledging that an error in a social system that has already been disturbed by economic, racial, social, sexual, and cultural stratification and the imperialist wrecking-ball of globalization—processes that continue to enact violence on all bodies—may not, in fact, be an error at all, but rather a much-needed erratum.

Russell’s work on the topic focuses on the body as a limit to be overcome through slippage—“glitch” comes from the Yiddish for “slippery area” and the German for “to slip”. She came to the concept through Nathan’s digital dualism, wherein the fallacy of IRL as distinct from online distracts from the goings-on both online and AFK. Reading Elliot’s glitches through Russell’s Glitch Feminism Manifesto, then, we see his slips between psyches as corrections—a fix for what he needs in the moment. In episode 6, “”, static, color bars, video artifacts, frame skipping, and abrasive audio cues are all utilized in progressively more jarring ways as Elliot and Mr. Robot fight for control of their shared body, the former seeking to block the destruction put in motion by the latter (Elliot wins that day’s battle but not, of course, the war). The glitch is there for the viewer to understand that a transition is happening, certainly, but also for Elliot’s body that, using Russell’s words, “exist[s] somewhere before arrival upon a final concretized identity.”

I am particularly taken by both Glitch Feminism and Elliot’s character because of the way they each resist a synthetic delineation between the technological and biological. Not because new computing systems are being developed with biological materials, but because old computing systems were built by human emotion. This is the general argument made by Elizabeth Wilson in her 2011 Affect and Artificial Intelligence—that “alliances between human and machine were calibrated through the affects of curiosity, surprise, contempt, interest, fear, and shame.” Throughout the work, she argues for a reconceptualization of AI away from the stereotypical “cool”, emotionless field for mathematicians and computer scientists and into a significantly warmer, more emotional place, eventually suggesting that the proliferation and improvement of AI technologies will increase when all parties involved agree on the aforementioned reframing.

Glitch Feminism embodies this reconceptualization by turning the metaphor of a cold, computationally-focused self on its head—the brain as affectless computer is gone. The body as vessel for emotional processing is brought to the fore. The glitch gives that body a chance against the white patriarchy that has marginalized it to this point. Russell is sure to point out that “Glitch Feminism is not gender-specific.” Elliot’s body, however, as it glitches between his outer persona and Mr. Robot, is being controlled by the women in his life—Darlene spies on him for the FBI, Angela uses him to resurrect her mother, White Rose has co-opted his skills for her dreams of global domination. Whether they understand it (White Rose) or not (Darlene), he is an empty vessel until they act. He is the [unoccupied] embodiment of Melvin Kranzberg’s first law of technology: that it is neither good, nor bad, nor is it neutral.

Today, many AI systems are trained from the bottom up. Siri is not pre-loaded with every possible question. Instead, when one Siri instance learns how to answer a question properly, it passes that information on to every other Siri (so to speak—most of this is happening in centralized, remote applications). The self-correction happening within these systems is a product of the programmers and participants (I’ll let you guess who actually gets compensated for their work…but we’ll leave Marx out of this for now). How, then, might we as participants cause these systems to glitch, correcting them, making way for the underrepresented bodies left behind?

Gabi Schaffzin is pursuing his PhD in Art History, Theory, and Criticism, with a concentration in art practice, at UC San Diego. He will never—NEVER!—shut up about this show.

When the team here at Cyborgology first started working on The Quantified Mind, a collaboratively authored post about the increasing metrification of academic life, production, and “success”, I immediately reached out to Zach Kaiser, a close friend and collaborator. Last year, Zach produced Our Program, a short film narrated by a professor from a large research institution at which a newly implemented set of performance indicators has the full attention of the faculty.

For my post this week, then, I’d like to consider Zach an Artist in Residence at Cyborgology—someone producing and disseminating works that embody the types of cultural phenomena and theories covered on the blog (as it turns out, this is not Zach’s first film featured on Cyborgology). I suppose it’s up to him if he’d like to include the position on his CV. In the following, I would like to present some of my reactions to the film and let Zach respond, hopefully raising questions that can be asked in dialogue with the ones presented at the end of The Quantified Mind. In full disclosure, I am very familiar with Zach’s scholarship and art (I’m listed as a co-author or co-artist on much of it, though not Our Program in particular), so I hope I don’t lead the witness too much here.

But first, the film:

As a classically trained designer teaching in the art department of a Research I school, Zach brings a valuable perspective for a number of reasons: obviously his day-to-day is highly influenced by the metrification trend in academia (especially considering his pre-tenure status), but he has also worked in the commercial realm with companies and organizations enamored with the exact sort of technologically enabled quantification tools and systems (read: big data, et al.) driving the platforms through which academics’ metrics are being tallied. My first reaction to the film, then, is about the use of an object as the main visual here. After all, the film is not about a device—it’s not called Our Ticker or Our Shiny White Box With Seductive Red LEDs, it’s called Our Program; it’s about a cultural phenomenon with, for all intents and purposes, no real consumer-facing physical manifestation (beyond, perhaps, online dashboards or the like).

I’m reminded, however, of a book I recently read, Elizabeth Wilson’s Affect and Artificial Intelligence, a brief but fascinating argument for a reconceptualization of AI away from the stereotypical “cool”, emotionless field for mathematicians and computer scientists and into a significantly warmer, more emotional place. Alongside this main pitch, she also suggests that the proliferation and improvement of AI technologies will increase when all parties involved agree on the aforementioned reframing—that is, when AI is understood to be less Skynet, more PARO. One striking piece of research that stands out to me now in the context of Our Program comes in the chapter discussing ELIZA and PARRY, two conversational programs from the 1960s and early ’70s (a simulated psychotherapist and a simulated paranoid patient, respectively). Wilson references Sherry Turkle (frequently written about on this blog) and her work on humans’ relationships to technology, but ultimately dismisses this research in favor of Byron Reeves and Clifford Nass, who argue that we as a species are drawn to befriend our technological devices, summed up by Wilson as our “direct affiliative inclinations for artificial objects” (95).

ZK: Considering the metrification of academia in the context of affect is something I didn’t originally conceive as part of the work, but I’m reminded here of various efforts (within the humanities) to produce better metrics that are specific to humanities disciplines as opposed to inheriting metrics systems from the “hard” sciences. This strikes me as curious when situated in relationship to PARO or Siri. Ironically, through producing more “humane” metrics, we may end up furthering the idea that humans are fundamentally computational in nature.

A drive to produce more nuanced or, in the case of the humanities, humane metrics, or to make AI more relatable is not about the metrics or the AI themselves but is, I would suggest, about what we think about ourselves as people—whether we are, or are not, at some basic level, computational. The apotheosis of such a belief is a kind of pan-computationalism, where microbes and microchips operate in glorious harmony, not unlike the proclamations made in the poem “All Watched Over by Machines of Loving Grace,” by Richard Brautigan. Such a belief also caters to a neoliberalization of all life underpinned by models of self-interested human behavior that reach back to the early days of game theory.

To me, it’s not necessarily about asking whether we want to have affinities with artificial intelligence or computational objects in general but to what degree our affinities with those things become absorbed into our own ontological space, rendering us equally as computational as those objects. In this way, I see a strong connection between efforts to make scholarly metrics more nuanced, sophisticated, contextual, etc., and an affinity towards a PARO over a Skynet.

GS: If you’ve read my recent posts on the value of using obvious fiction in art and design versus trying to seem “real”, then you won’t be surprised that I hope Zach will discuss how he frames his narrative. This is not a piece of marketing. He does not leave his name off of the film. Nor does he call the piece a “product tour” or “brand video” on his website. That said, it is obviously influenced by his real life experiences in academia, experiences that we recognize as very much not unique in The Quantified Mind. Why fiction then?

ZK: I was recently asked if the “parody” can keep up with “reality.” Career benchmarking in higher education in Europe (like Reappointment, Promotion, and Tenure here in the States) is increasingly metrics-focused. A European colleague once told me about his dissertation committee, which required him to prove his impact via citations before he could graduate. Universities in the UK are using platforms like Simitive Academic Solutions to address “goal setting and alignment” to produce stronger accountability and incentive systems for faculty members.

I feel as though the fiction is a way of grappling with reality. The object, this “ticker” on which the film centers, is somewhat absurd in both its form and purpose. The intent was to make more explicit the link between the kind of (dare I say) neoliberal, market-based nature of faculty metrics and the physical faculty and university themselves: a “stock-ticker” that illustrates whether or not we as faculty members should continue to receive investment from our institutions. This kind of marketization of faculty data is already happening, and is not necessarily “new,” but the kind of control it wields is shifting. The more sophisticated, contextual, and nuanced the metrics become—not just about citations or number of publications, but about everything related to faculty output (e.g., fitness data via partnerships with FitBit and smart furniture manufacturers to determine whether more fit faculty produce more “impact”, other biometric and psychometric indications to help faculty identify causes of stress that decrease productivity, weighting of metrics based on location, discipline, type of institution)—the more administrators will rely on metrics to shape decision-making processes. As long as we develop ways to demonstrate our fundamentally computational nature, the influence of metrics on academia will be a positive feedback loop, with new metrics being developed, new decisions being based on those metrics, and new metrics being developed in response to the consequences of those decisions.

Gabi Schaffzin is pursuing his PhD in Art History, Theory, and Criticism, with a concentration in art practice, at UC San Diego.

Zach Kaiser is Assistant Professor of Graphic Design and Experience Architecture in the Department of Art, Art History, and Design at Michigan State University.

The two, along with other collaborators, have been working on the Culture Industry [dot] Club, a dynamic assemblage of artist-researchers engaged with emergent media practices and deep historical and theoretical research.

When I started this series three weeks ago, my goal was to provide a review/recap of Orphan Black’s final season, tying it to issues of the body, the history and philosophy of science, and the value of fiction. Turns out that last element drew me in, and I was most curious about the way that Orphan Black’s creators, Graeme Manson and John Fawcett, employed their science consultant, Cosima Herter, in order to make the science in the show as “real” as possible, while still developing and producing a piece of work that was very clearly fiction. Along the way, I’ve found myself wanting to bring in other works of narrative-based fiction. I wrote about Mr. Robot, but I have drafts that include Minority Report, Black Mirror, Nathaniel Rich’s 2013 novel, Odds Against Tomorrow, Margaret Atwood’s Oryx & Crake, The Blair Witch Project, and Orson Welles’s 1938 War of the Worlds radio broadcast. Reflecting back on those drafts, I came just short of plotting these works along a matrix consisting of two axes: plausibility and believability. That is, could this not only actually happen, but would a public believe it had? In effect, I began to work out how hyperstitious to consider each of these cultural artifacts.

Deciding that in earnest would take quite a bit of effort on my part, plus some training in sociological methods, not to mention IRB approval. I am willing to argue, however, that medium and channel have a significant effect on the believability of a piece. Not terribly controversial, I know. But when Margaret Atwood writes a novel, it goes into the fiction section and we need not worry that a madman accelerationist capable of doing so has decided to play god. Why, then, does she need to begin the acknowledgements for Year of the Flood, Oryx & Crake’s follow-up, by stating that the novel “is fiction, but the general tendencies and many of the details in it are alarmingly close to fact”?

Hyperstition is a valuable concept because it helps us see that a story like Atwood’s might move out of the fiction section, coming to fruition in another form. This shift is not only promulgated by artists or authors, of course. Advertisers ask you to imagine yourself in a car that will park itself a year before the product is available on the market. Realtors on HGTV or its equivalents are thrilled to hear visitors to an open house begin to “see” their own furniture in the living room of a listed property. My concern as an artist, however, is how to utilize hyperstition in the service of affect, change, and revolution.

Earlier this year, I argued that a popular “sciency-art” is increasingly being produced for the sake of science and not so much for art. But maybe this trend is something that we can appropriate for our own motives. As I noted in my previous post, when Dunne and Raby show Faraday Chair, it’s very clearly a piece of art meant to provoke. But what if they showed it at a furniture show? What if Wodiczko brought his Homeless Vehicle to a TED talk? Would they be selling out? Would they be perpetrating hoaxes? If these works were revealed to be performance pieces, would their adoring publics feel devastated (as one tech scholar did after Horse_ebooks was revealed to not be as algorithmically curated as once thought)?

The reason these methods of presentation would not work is clearer when we return, once again, to Delphi Carstens’s explanation that hyperstitions “fan the flames” of cultural anxiety. When Wodiczko complains that viewers of his work want projects like Homeless Vehicle mass produced, he is responding to the fact that they are not made anxious by it—they feel hopeful. Charlie Brooker’s Black Mirror episodes garner affect through shock value. His challenge is to push far enough without veering into campy horror or implausible absurdity. Galleries and festivals like Transmediale, SLSA, and Science Gallery have made room for this absurdity; now it’s up to us to embrace the “neatness” factor, but certainly not at the expense of affect.

Of course, in the case of Orson Welles’s radio show, reports of the panic that resulted from the broadcast were wildly exaggerated. Jefferson Pooley and Michael J. Socolow write that a wire report was picked up by daily newspapers, hungry for something to boost readership and sow doubt in the reliability of radio as a whole, leading to the promulgation of the myth of the program’s panic-inspiring believability. On the complete flip-side, 61 years later, when Artisan Entertainment released Daniel Myrick and Eduardo Sánchez’s The Blair Witch Project, the studio made it very clear in promotional material that the film was completely fictional. Still, the town of Burkittsville, where the story was purported to have taken place, was inundated with witch hunters for a significant time after a successful theatrical release. The hyperstitious outcomes of a work are, perhaps, out of the hands of the artist.

But we can still give it a shot.

Gabi Schaffzin is a PhD student at UC San Diego. He has lost track of whether his work is considered real, fiction, or fake. 


Last week, I introduced some characters to my argument: Orphan Black and its writing and consulting staff, Mr. Robot and its creators, the Cybernetic Culture Research Unit and Nick Land, accelerationism, and hyperstition. Need a refresher? Find it here. Now, I’d like to take a brief detour in order to introduce another important character here: speculative design.

If you’ve been in or around the design academy in the past decade, you will no doubt have heard the word “speculative” thrown around quite a bit. Colloquially, speculative design (or design fiction, or critical design) uses methodologies and media traditional to the design studio (that is, the division of labor, tools, prototyping, and manufacturing processes common to industrial and graphic design shops) in order to produce visual artifacts from the future. One oft-cited project is the Faraday Chair (1995), designed by Anthony Dunne, who is erroneously but popularly labeled the field’s progenitor. Dunne suggested that radio waves carrying wireless communication would eventually crowd our airspace to the point that we would need a physical respite from their electromagnetic effects. As part of his PhD at the Royal College of Art he, with the help of his collaborator Fiona Raby, installed in a gallery a large plexiglass box with an air tube and a pillow and had a model lie inside, supposedly “protected” from the invisible radiation of wifi and bluetooth messages.

Speculative design, however, is not as new as some would have you think. One of its earlier figures, Krzysztof Wodiczko, grew up in a postwar autocratic Poland—an experience that significantly shapes his work. Trained as an industrial designer, Wodiczko began his career working for a small Polish electronics manufacturer, but quickly turned to his design education as a means to develop pieces which “could interrogate ethical and political voicelessness” (102). It was in this vein that he designed and fabricated the Personal Instrument (1969), featuring a mic’d-up helmet, sound-canceling headphones, and two gloves with light sensors. As the wearer reduces how much light hits the sensors on their hands, sounds from their immediate environment are blocked out, and vice versa. About the piece, Wodiczko writes:

I was in a strange position: a designer employed by the state industrial corporation while trying to establish a critical and ironic dialogue with a real and monstrous designer—the communist state itself—who was in total control of the entire society and treated it as a single work of art or design…In the Personal Instrument, I somehow represented myself, my colleagues, and possibly many others, swimming “freely” in a world of sounds and speech, yet remaining silent. (102)

Eventually, Wodiczko labeled his flavor of design “interrogative” and in 1994 he penned a manifesto of sorts. In it, he declares that interrogative design “takes a risk, explores, articulates, and [responds] to the questionable conditions of life in today’s world, and does so in a questioning manner” (16). One of Wodiczko’s better-known pieces, Homeless Vehicle Project (1988–89), was designed in consultation with homeless individuals in and around Philadelphia and New York City. The piece features a rolling cart, complete with aluminum can storage, a sleeping cubby, and a safety flag to make sure its presence is known from afar. The artist created the Homeless Vehicle

in the context of jungle capitalism, along streets and parks of New York City…[where] the petrified homeless, dying of their wounds or of malnutrition were advocating the benefits of American freedom in front of monuments of Washington and Lafayette, thus revivifying and reanimating these former dead. (xiv)

When asked about its viability as a mass-produced vehicle, Wodiczko leans on the value of a prototype:

“The minute you present a proposal, people think you must be offering a grand vision for a better future.” They can’t see a thing like the Homeless Vehicle…as the “concretisation” of a present problem, a makeshift transitional device, or an aesthetic experiment. Instead, “they think it must be designed for mass production…taking over the cities.” (Wright 1992, 272-273)

In effect, the “people” want to know if it’s real. Wodiczko argues it need not be.

I return, then, to last week’s post and Delphi Carstens on hyperstitions: “the trauma and fear engendered by their cultural ‘makeovers’…merely serve to further empower [their] basic premises and fan the flames.” Surely, seeing Faraday Chair in a gallery in 1995, complete with model and pillow and breathing tube, a viewer understands it is a piece of critical design (or is it art at this point?). Having read the plaque, our gallery visitor walks away considering the health risks of an inevitable flood of electromagnetic waves into their apartment (remember, this is well before “wifi” or “AirPort” or “Bluetooth” became household names). Walking out of the gallery and into the street, however, does a passing Homeless Vehicle engender in our viewer a sense of anxiety for the homeless it proposes to serve? Or, seeing a working prototype in use, do they feel a sense of relief that a “solution” to the homeless “problem” is available, absolving them of any obligation to do their part?

Writing this now, I feel a sense of crisis. Dunne and Raby, having popularized the field of speculative/critical design by making it a central focus of the now-defunct Design Interactions program at RCA, have been busy preparing visions of future western cultures, visions bereft of refugees but complete with “Anarcho-evolutionists” and “Communo-nuclearists”. Wodiczko has been using high-powered projectors to cover monuments and buildings around the world with the recorded stories of members of marginalized communities such as women factory workers in Mexico or US war veterans. Certainly, I don’t wish to suggest that the Polish artist is any sort of martyr—he develops his work while on the Harvard Graduate School of Design faculty payroll. But I do very much want to love his methodology—one that he argues “attempts to heal the numbness that threatens the health of democratic process by pinching and disrupting it, waking it up, and inserting the voice, experiences, and presence of those others who have been silenced, alienated, and marginalized” (xiii).

Are Dunne and Raby trying to “heal” anything? They suggest that by identifying a current sociotechnical trajectory and plotting its future through the creation of material artifacts that present the audience with that future, they can provoke discourse on the desirability of that future. You can see, then, why our old friends, the accelerationists, would prefer this methodology—especially those on the left, who see projects such as Occupy and the Arab Spring as having failed in their very un-futuristic shortsightedness. So instead of trying to figure out who gets to sleep in the warm tents once the snow comes, I guess it’s better to forget about sleeping altogether and just lean into our four-hour work week to build systems of automation that will most certainly save us from late capital’s technocratic grip?

Where are we left then? If we envision projects to address homelessness today, we are absolved of needing to worry about it tomorrow. If, instead, we envision what homelessness looks like tomorrow, then we need not worry about today—our hyperstitious ways will materialize the answers. Our real projects are fake, so we must rely on our fictions? 

If you’ll allow me to return to television (warning: spoilers ahead, tread lightly), consider that all of the accelerationist projects and hyperstitious speculative design pieces being extruded slowly from the 3D printers and laser cutters of the shiny new “makerspaces” of otherwise underfunded art and design departments are not in the service of dismantling heavily capitalized corporate cloning projects or bringing down the E-Corps of meatspace. At first blush, this makes perfect sense. Firstly, we mustn’t upset the backers of said spaces. Secondly, and perhaps more obviously (absurdly?), there isn’t a heavily capitalized corporate cloning project, nor is there a Sino-American conspiracy involving multinational corporations, cryptocurrency, and a Swedish executive-cum-hacker (and if you believe there is, you must obviously sleep with a tinfoil hat on your head).

At this point, I’ve brought you with me down a bit of a rabbit hole. There are questions of scale (are we fixing capitalism or cloning?) and medium (metal, Photoshop, or television? Maybe all three?) and channel (gallery, street, or BBC America?) and more. So in my next post, let’s see if I can’t wrap this up and finally start to answer the questions I posed at the beginning of last week’s piece: when is fiction too fake, reality too real?

Gabi Schaffzin is a PhD student at UC San Diego. Anthony Dunne thought his work was all a big hoax. Krzysztof Wodiczko thought his work was too ironic. He was rejected from a PhD with the former (fortunately) and a job with the latter (disappointingly). 

At what point does a fictional tale of a present day technocapitalist advancement and the characters embroiled in its aftermath turn into a dystopia? Is there ever a clear threshold between the plausible and the absurd? And what responsibility does the artist or author have towards their audience to make clear the realism of the piece?

Spoiler Warning: you may want to tread lightly if you haven’t yet but still plan on watching through season 2 of Mr. Robot and season 5 of Orphan Black.

In Graeme Manson and John Fawcett’s Orphan Black, which recently wrapped its fifth and final season on BBC America, a young con artist discovers she exists for very complicated reasons. She is at once unique in her willingness and ability to protect her family by destroying the systems which created her, while simultaneously living as one of (at least) 274 other women who are genetically identical. Along with their science consultant, the creators and writers of Orphan Black built a world in which capitalists, religious fanatics, a wealthy madman, and scientists (though many characters would cross into more than one category) came together to circumvent ethics, legalities, and well-established scientific notions as they sought wealth, immortality, and the secrets to humankind. Good thing it was all made up.

And yet Manson and Fawcett have never shied away from revealing their reliance on Cosima Herter, the show’s science consultant. Herter, a scholar in the History of Science, Technology, and Medicine, spent her time on the show researching the science referenced, challenging writers to reconsider assumptions they had made about, for instance, the relationship between science and religion, and generally ensuring a tenable story. Manson has said that Herter’s insights “help to inform the big picture even if it’s not overt. So those are important thematic things. We don’t want it to take over the show, but we want it to be such a part of the fabric that you can’t avoid it.” Still, what value does the show’s mostly-believable science1 bring?

The same might be asked about Sam Esmail’s Mr. Robot, in which a relatively unbelievable apocalypse occurs in an extremely believable world. There is a very small gap between what Esmail and his writing team create and what we understand to be our current economic and technocratic situations—at least pre-5/9 hack. Esmail has said that he works hard with consultants to ensure that the technology for the show is plausible, based on non-fictitious products and events.

I’m not as interested in considering here what Orphan Black or Mr. Robot would be like if their writers didn’t ensure a strong plausibility. Instead, I want to consider what they would be like if they pushed even further into the “real”.

* * *

Loosely, the term “hyperstition” refers to the way that new ideas propagate through culture, the way that fictions have the power to shape the “real” future. The term was coined by the Cybernetic Culture Research Unit (Ccru) out of Warwick University in the mid-90s. Ccru was a highly problematic experiment in renegade academia, disbanding almost as quickly as it came together and alienating outsiders and insiders alike along the way. Perhaps the most important concept to have emerged from Ccru, however, was that of accelerationism.

Today, it is generally understood that there are two flavors of accelerationism: the original, “right-wing” version and the newer, leftist variety. The former, developed by Nick Land, one of the founders of Ccru and a philosopher oft-cited in alt-right/neo-Nazi texts, proposes to speed up the capitalist project to the point of technological singularity and ultimate efficiency. The latter, popularized in recent years by Alex Williams and Nick Srnicek, argues that full automation of labor, combined with a universal basic income, means technology will free the working class from capitalism altogether—the traditional left, they claim, will stagnate as long as projects such as Occupy are its chosen path of revolution.

Mr. Robot and Orphan Black become hyperstitious not because their individual premises have necessarily come to fruition, but because, as Delphi Carstens writes of hyperstitions in general, “the trauma and fear engendered by their cultural ‘makeovers’…merely serve to further empower [their] basic premises and fan the flames.” That is, the anxiety produced by these shows might be enough to force an audience to consider their realism. Still, the “realness” of these shows is limited by genre and medium: in order to tell the story from Sarah Manning’s or Elliot Alderson’s perspective, we as viewers must understand immediately that this is a fiction—it is not shot as a documentary or news report.

But, once again, what if they were?

In my next post, I’d like to explore what sorts of efforts are currently made by artists and designers in the name of envisioning and/or making a future. I’d also like to work through what sorts of aesthetic or programmatic decisions leave a viewer considering a piece to be real, fiction, or fake. I will use more examples of various types of art that could be, or seem to be, about a “truth”, and hope eventually to challenge artists to play with the boundaries of when these truths are revealed.

1 There is a minuscule element of the supernatural that helps the clones survive, but I have yet to find anyone angry enough to write about that.

In this post, I’d like to make an argument about a way to understand how the Democratic party seems to be making messaging and policy decisions. An argument like this can’t be made in a vacuum—or in 1,500 words. Nor can any one or even ten reasons be decided upon for why the leaders of a party do what they do. But I recognize a pattern in how the DNC and its leadership have acted over the past decade and I want to work that through here. So please forgive any indication that I am not a policy wonk or political analyst—I do not claim to be, nor do I wish to be either.

In my series on the history of the Quantified Self and eugenics earlier this year, I referenced the Belgian astronomer Adolphe Quetelet, who argued that man could be measured just like the positions of planets. I didn’t have the space to explain it very well back then, but think about it like this: you and, say, 570 of your closest friends have telescopes. At the same time on the same night, you each measure the position of a certain star in the sky. You all come up with roughly the same position, but with distinct and consistent variation. Take those measurements and plot them along a chart, like this:

The number of measurements that fall into position A (14 friends got this measurement), B (21), C (41), and so on is counted and plotted. The astronomer’s error law, the normal distribution, and the Gaussian density function (which are all the same thing) dictate that these values will fall into the familiar bell curve. Most of the measurements (217) fell at position E, which means that your friends who got other measurements were probably wrong. So let’s say that the star you’re measuring is, in fact, at position E.

Now, let’s assume that instead of 571 people taking the same measurement, it’s just you but you’re measuring the height of 571 people. Quetelet would argue (in fact, he did just this in 1842) that the heights of these people (he would call them men…because they were) would fall into the same normal distribution. And, just like position E on the above graph revealed the “real” position of the star, position E on our height graph would reveal the “real” height of a man. After compiling a good number of measurements about this man, he labeled him l’homme moyen, the average man.
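Quetelet’s leap from stargazing to anthropometry can be sketched in a few lines of Python. This is a minimal illustration, not historical data: the “true” position, the error spread, and the seed are all assumptions made for the example.

```python
import random
import statistics

random.seed(42)

TRUE_POSITION = 100.0  # the star's "real" position, in arbitrary units

# 571 observers measure the same star; each measurement carries independent,
# symmetric error, so the results pile up into a normal distribution.
measurements = [random.gauss(TRUE_POSITION, 2.0) for _ in range(571)]

# The astronomer's error law: the center of the pile recovers the true value.
estimate = statistics.mean(measurements)
spread = statistics.stdev(measurements)
print(f"estimated position: {estimate:.2f} (spread {spread:.2f})")

# Quetelet's move is to reuse this same machinery on 571 *heights*, reading
# the center of that bell curve as the "real" height of l'homme moyen.
```

The sleight of hand sits in the last comment: for the star there really is a single true position generating the errors, but for heights the “true value” is a statistical construct that Quetelet reified into an ideal man.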

Remember that this was all happening in the mid-1800s in France and Belgium, a time during which the French monarchy was under upheaval. In 1830, Charles X was forced to abdicate after the July Revolution, and so his cousin, Louis Philippe, became king. Louis Philippe (whose daughter was married to Leopold I, king of Quetelet’s home nation of Belgium) operated under “a juste milieu, in an equal distance from the excesses of popular power and the abuses of royal power” (Antonetti 1994, p 713). Quetelet, often quoting Victor Cousin and the philosopher’s ideal of moderation and compromise, was quite taken by this idea of juste milieu and equally enthusiastic about the application of the astronomer’s law as an instrument of social analysis: that there is a common type of man and that, just like the “real” position of the star or the “real” height of man, that type is found somewhere in the middle of the bell curve. Per Ted Porter (1988), “L’homme moyen, then, was not just a mathematical abstraction, but a moral ideal” (103). Quetelet believed that income inequality could be tied to crime rates, that the rich lived longer because they did not drink as much, and that moderate men tempered their passions and helped regulate birth and death rates. By smoothing out the curves that described man, oscillations of the social body were eliminated and an ideal existence could be achieved.

What, then, does this have to do with the Democratic Party? It is relatively well known that the Dems (that is, DNC-sanctioned campaigns for legislative and executive offices) have been basing many of their decisions on a sophisticated data operation. As Daniel Kreiss described last February on this blog, starting after the failed 2004 presidential election, the DNC began to build and amass a sophisticated database of constituent and voter information. In The Audacity to Win, 2008 Obama campaign manager David Plouffe elucidates how critical projects like the DNC’s (and the campaign’s own data and media programs1) helped the campaign understand which issues voters wanted to hear about, what geographic areas to focus on—down to the precinct level—and which ads to run when. Reportedly, the 2016 Clinton campaign leaned too heavily on their data, eschewing opportunities to campaign in what would eventually prove to be critical markets…like all of Wisconsin.

Obama won on a centrist platform of compromise, one that led to increased civil freedoms like the right to marry, but his tenure as president also saw large banks and corporations make exponential gains thanks to a largely hands-off approach to post-bailout repercussions. And while the ACA is an extremely critical step in the right direction, it is a far cry from a single-payer healthcare system. On the other hand, the Republican party has enjoyed control of both houses of congress ever since 2010, and conservative extremism has taken hold of all three branches of government after Clinton’s centrist platform could carry neither her nor her down-ticket colleagues to office. Meanwhile, in England, we’ve observed an oscillation from one extreme—Thatcherism—to the other—Corbyn-inspired socialism. What might have been considered the “mainstream” Labour party two years ago failed miserably, running on, yes, another centrist platform—even with the help of Obama’s 2008 and 2012 strategy, data, and media team.

Francis Galton, you may remember from the first installment of my eugenics series, took Quetelet’s work and shifted it—literally. Rather than seeking to find the normal man and make him the model, the father of eugenics wanted to work against what he considered to be a “reversion to mediocrity.” So he promoted the reproduction of those on the exceptional edge of the bell curve and…gently suggested that those on the “deficient” end not reproduce. This suggestion, of course, manifested itself in forced sterilization programs that lasted well into the 1970s in the United States. The idea was that by removing the deficient and growing the exceptional, the entire curve is forced to move to the right—to the highest IQs, longest legs, fastest reaction times.
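Galton’s logic can be caricatured as a toy simulation. The trait scale, the spread, and the assumption of perfect heritability below are mine, for illustration only, and deliberately ignore the regression to the mean Galton himself documented.

```python
import random
import statistics

random.seed(0)

# A trait distributed normally across a population, Quetelet-style.
population = [random.gauss(100.0, 15.0) for _ in range(10_000)]

# Galton's intervention: suppose only the upper half of the curve reproduces.
cutoff = statistics.median(population)
parents = [x for x in population if x >= cutoff]

# Children cluster around their parents' values (perfect heritability
# assumed), with fresh variation of their own.
children = [random.gauss(p, 15.0) for p in parents]

# The whole curve moves to the right.
shift = statistics.mean(children) - statistics.mean(population)
print(f"generational shift: +{shift:.1f}")
```

Under this truncated selection, the reproducing half averages roughly 0.8 standard deviations above the population mean, so each generation the bell curve slides rightward—exactly the movement Galton hoped to engineer.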

Let’s, for the sake of argument, go ahead and call the Republican party Galtonian. Sure, the AHCA, the travel ban, the removal of LGBT identity from the census, and all of the other appalling policies in place or being put in place have eugenicist characteristics. But for now, I want to argue that the Republican party has been using an edge case messaging strategy: war with the terrorists on our soil is imminent, so keep them out and arm yourselves; you might get rich, so let’s reduce the top-earners’ taxes; your marriage will be ruined by someone else’s decisions; women get abortions for fun and your daughter is next. Meanwhile, the Democrats want to reach across the aisle and find a happy medium. They want to incorporate the insurance companies’ wishes into the ACA. Bankers are people, too. We’ll never get single-payer or free college tuition or comprehensive gun control done because the “average American voter” doesn’t want it.

I don’t get to see the data that DNC or GOP operatives have. Nor do I believe one side won or lost solely on the quality or quantity of its data. I have some idea, albeit nascent, why the Democrats refuse to come down hard for social programs that are primarily beneficial for the populace over the corporations (hint: Republican candidates aren’t the only rich ones out there). But I do know that the July Monarchy of Louis Philippe only lasted 18 years, during which he survived seven assassination attempts. It’s time to push towards the other end of the bell curve—to shift the message to a polarized edge case: single-payer is the only just system, free education will lift everyone, top earners owe more to society than vice versa and should pay their share, guns do kill people. If the Democratic party wants to continue to let the data dictate the policy, they will never move beyond a juste milieu. They will point to l’homme moyen and say, “this is our target.” The problem is that the target is moving and, unless they take control, then thanks to a general apathy surrounding and rejection of their candidates, it will continue to move to the right.

1In the interest of full disclosure, I worked for a year at Blue State Digital, though not on the Obama or Clinton campaigns, nor does anything I write here violate any sort of non-disclosure agreement.

Gabi Schaffzin is a PhD student at UC San Diego. On this, America’s celebration of independence from the British, he wants you to know that Bernie would have won.

METATOPIA 4.0 – Algoricene (2017) by Jaime Del Val

The 23rd International Symposium on Electronic Art was held in collaboration with the 16th Festival Internacional De La Imagen in Manizales, Colombia in mid-June 2017. The opening ceremony for the conference kicked off with a performance by the artist Jaime Del Val, entitled METATOPIA 4.0 – Algoricene (2017), described by the artist as “a nomadic, interactive and performative environment for outdoors and indoors spaces.” The artist statement goes on (and on) to explain that the piece “merges dynamic physical and digital architectures” in an effort to “def[y] prediction and control in the Big Data Era.” In actuality, Del Val stripped down to his naked body, put himself in a clear mesh tent, projected abstract shapes onto the tent, and danced to what might best be called abstract electronica (think dubstep’s “wubwubwub” without the pop).

Which part of what Del Val presented qualifies as “electronic art”? Was it the music? The projector? The use of the term “Big Data Era”, capitalized (in lieu, perhaps, of scare-quotes) in his entirely glib artist statement? I was similarly confused by Alejandro Brianza’s artist talk, “Underground Soundscapes”, in which he showed a few photos of subway systems around the world, accompanied by sound recordings from each visit. About Brianza’s work and Del Val’s, I wondered: why is this electronic art? In fact, throughout the duration of my visit to the ISEA conference and festival, I found myself asking “why” quite often.

To be sure, there were plenty of projects that were quite obviously “electronic”. Bat-bots (2015), for instance, by Daniel Miller, features a pair of bat-like sculptures, complete with echolocation measurement devices and speakers to emit what would otherwise be inaudible if you were to walk by an actual bat. Self-proclaimed “sound explorer” Franck Vigroux performed a 45-minute DJ set in front of a Malevich-inspired white cross, made of “thousands of individual pixels, which explode in space according to the levels of energy of the audio”; the track sounded much the same as Del Val’s musical accompaniment. ISEA, then, had no shortage of art that is obviously “electronic” in the sense that it had to be plugged in or used computation as a medium. Still, I could not help but wonder “why” again: why was this even made? Why subject your audience to 45 minutes of the same set of particle physics acting on a simple shape? Why reinvent bats?

ISEA is by no means unique in its ability to attract a congregation of technophilic artists or those intrigued by a mix of science and art. For the past three decades and beyond, organizations like Transmediale, Ars Electronica, and Science Gallery have grown to be major curators of “sciency art” the world over. They operate on mission statements that boast about the interactivity and broad cultural appeal of the work. They throw costly events in major cities around the world and smaller gatherings in satellite venues. Some, like Ars, give out coveted prizes for work deemed superior by a panel of (mostly male) jurors. What they lack, however, is an overt acknowledgement of the political nature of what they are doing. Yes, there is the occasional surveillance detector or VR poverty simulator, but what these festivals and the artists showing in them are generally taking advantage of is a facile equation of “art + science = innovation/truth/the future”.

It seems almost anachronistic to argue for art and politics to be considered necessary partners today. In 1984, artist and critic Lucy Lippard wrote that

It is understood by now that all art is ideological and all art is used politically by the right or the left, with the conscious and unconscious assent of the artist. There is no neutral zone. Artists who remain stubbornly uninformed about the social and emotional effects of their images and their connections to other images outside the art context are most easily manipulated by the prevailing systems of distribution, interpretation, and marketing.

The conservative art critic Hilton Kramer was not so sure, arguing that statements such as “There is no neutral zone” would lead to Lionel Trilling’s “‘eventual acquiescence in tyranny’.” Fifteen years earlier, Kramer, a staunch formalist, watched in horror as Lippard and her Conceptualist peers filled galleries from MoMA to LA’s MOCA with politically charged works of art that often implicated viewers as collaborators in the art. MoMA’s 1970 show, Information, featured Vito Acconci’s Service Area, in which the artist had his postal mail forwarded to the museum. “The piece is performed (unawares),” he writes in the show catalogue, “by the postal service…and by the senders of the mail.” The museum guard becomes a “mail guard” and the artist performs the piece by going to pick up his letters. In Hans Haacke’s Poll of MoMA Visitors, the artist asked exhibition visitors to place a ballot in one of two boxes, each answering “yes” or “no” to the question, “Would the fact that Governor Rockefeller has not denounced President Nixon’s Indochina policy be a reason for you not to vote for him in November?”. Haacke didn’t reveal the question until the night before the show opened. This was considered one of the artist’s first “institutional critiques”—works that sought to bring to light the questionable practices of the venue in which they were exhibited (Governor Nelson Rockefeller was brother to David, chairman of the MoMA board, and son to Abby Aldrich Rockefeller, a founder of the museum).

Kramer was unamused. In a particularly scathing review for the New York Times, he called the show “overweeningly intellectual”, making sure to question the artistic value of the work entirely (“There are more than 150 artists—or ‘artists’—from 15 countries”) before declaring the entire show “egregiously boring.” The critic, it seems, was not willing to consider the conceptual and political meaning behind the work, instead taking jabs at its—gasp!—interactive nature: “I am not sure I can give a very accurate or coherent account of what the visitor to this exhibition is invited to look at, listen to, sit down on, clamber over, go to sleep in, write on, stand in front of, read, and otherwise connect with.”

If, nearly fifty years ago, Kramer was bored because he refused to see the depth of the ideological implications in the art, I am bored because I simply cannot find it. Encontros (2017) features two iPhones, screen-to-screen, one showing a video of the brown waters of the Amazon, the other showing the black waters of the Amazon’s Rio Negro tributary. The site at which the two meet—a place of indigenous persecution and slavery since the early 1700s—is a marvel of nature, a limnological metaphor for the clash between cultures, one overpowering the other. The artist statement—signed by fifteen individuals—makes no mention of any sort of geopolitical consideration, instead opting to highlight that “the system searches for real-time information in such a way as to reflect changes in the tides and the phases of the moon.” Projects like Encontros could not only be political; they feel like they should be. This raises the question: do the artists (who, presumably, also write the text accompanying the piece) leave it to me to find the culturally critical element? Is the political in the eye of the beholder?

I would be more inclined to consider this possibility if not for the dearth of ideology-inviting rhetoric in the majority of the programming and literature surrounding each organization’s events. With the notable exception of Transmediale, the mission statements of the festivals in question sprinkle words like “society” and “culture” among pronouncements of the juxtaposition of “Biotechnology and genetic engineering, neurology, robotics, prosthetics and media art” (Ars Electronica) and the ignition of “creativity and discovery where science and art collide” (Science Gallery). Science Gallery, in particular, boasts of turning STEM to STEAM—a dubious cheapening of art in the name of STEM’s focus on education qua employment. In the program’s video appealing to possible funders of “the world’s first university-linked network dedicated to public engagement with science and art”, Luke O’Neill, Director of the Trinity Biomedical Science Institute, declares, “there’s no difference in my mind between an artist and a scientist—we’re all after the truth!” I beg to differ.

Welcome to the fourth and final installment of my series on the history of the Quantified Self. If you’re just joining us, be sure to review parts one, two, and three, wherein I introduced and explored a project that seeks to build a genealogical relationship between an already analogous pair: eugenics and the contemporary Quantified Self movement. The last two posts appear to have, at best, complicated, and at worst, undermined the hypothesis: the critical breaks in the genealogies traced in each post seem more like chasms, making eugenics and QS difficult to connect in any meaningful way. At the root of this break lie the fundamental tenets underlying each movement. Eugenics, with its emphasis on hereditarily passed physical and psychological traits, precludes the possibility that outside, environmental influences may lead to changes in an individual’s bodily or mental makeup. The Quantified Self, on the other hand, is predicated on the belief that, by tracking the variables associated with one’s activities or environment, one might make adjustments to achieve physical or psychological health. On the surface, then, there is an incommensurability between the two fields. By understanding how the technologies of the two movements worked within the predominant forms of Foucauldian governmentality and biopower of their respective times, however, we may be able to resolve this chasm.

First, it is important to recognize how closely intertwined the eugenics movement was with the welfare states of early-twentieth-century Europe and the United States. Per Nils Roll-Hansen in the conclusion to Eugenics and the Welfare State, in the first decade of the 1900s a classical concept of genetics took form in which an individual’s phenotype was understood to be influenced not only by their genotype but by environmental and social factors as well. After being pioneered by conservative evolutionists such as Galton and his cohort of protégés, then, the “reform” eugenics of the 1920s and 1930s was led by scientists looking to jettison the racist reputation of their predecessors through a “renewal of the ‘social contract’ of the movement” (Roll-Hansen 260). In Scandinavia, Britain, and elsewhere in Europe, newly elected Labour governments used legislation to enact the forced sterilization of the “feebleminded” and weak in the name of protecting both that marginalized group and the population as a whole. In England in particular, liberals used “eugenical arguments to disseminate information to the working classes on how they should behave biologically for their own benefit and that of the English ‘race’” (Hasian 115). American liberals used neo-Lamarckian ideas concerning the social influences on human traits to emphasize the importance of “race poison” studies (Hasian 128)—research that “proved,” for example, that cigarettes and alcohol had negative downstream effects on the human race (Hasian 28).

For an understanding of how this type of welfare state came to be, I turn, now, to the eighteenth century, as sovereign power shifted from individuals ruling over principalities and whoever lived inside them to governments overseeing populations understood to live in, travel to, trade with, and war with neighboring lands. In a 1978 lecture at the Collège de France, Michel Foucault outlined this shift in governance, arguing that it ushered in the birth of economies: collections of goods, people, and money that all fell under the sovereignty of a state. Critical to the management of these economies were technologies of counting and tracking—statistics, anthropometrics, and the like. Majia Nadesan, reading Foucault as well as Nikolas Rose, notes that governmentality addresses some key concepts surrounding the organization of society’s technologies, problems, and authorities; it recognizes, too, that individuals are both turned into “self-regulating agents” and/or marginalized as invisible or dangerous (1). In order to explain how hegemonies develop and deploy technologies to control the life of populations, Foucault developed the concept of biopower, “arguably the most pervasive form of power engendering the homologies and systemic regularities across the diverse fields of social life” (Nadesan 3).

Without question, the technologies enabling eugenics and their legislative implementation are prime examples of governmentality and biopower at work—the combination of which can be understood through Foucault’s “biopolitics”. In the biopolitical realm, knowledge of man—at once global, quantitative (i.e., concerning the population), and analytical (i.e., concerning the individual)—is exploited by loci of power to divide, categorize, and act “upon populations in order to securitize the nation” (Nadesan 25). As the nineteenth century came to a close, the negative effects of laissez-faire policies turned the tide towards a more active liberal state, one that enabled citizens to maximize their liberties. Nadesan perfectly sums up where welfare-state sponsored eugenics comes in: “the modern liberal-welfare state utilized biopolitical knowledge and expert authorities to expand its power at the level of the population…while simultaneously these forms of knowledge operated to individualize and subjectify citizens as particular kinds of subjects” (26). This occurred at the expense of the liberties of some individuals, of course, as conceptualizations of the normal and pathological were dispersed throughout the population (Nadesan 26).

As the twentieth century progressed through two World Wars and the biomedical and technological revolutions that accompanied them, psychology, anthropology, and sociology shifted markedly towards the role of the individual’s social experiences in shaping psychology and behavior—a shift exemplified in the two brief histories above. Alongside these new visions of what it means to be human, new technologies of the self (e.g., the self-help personality test, the self-experiment, psychotropics) engendered an empowered, self-governing subject of liberal democracy (Nadesan 149). These technologies of the self (Foucault’s term) ushered in a neoliberal mode of governance—one in which welfare states jettisoned responsibility for the individual. As Nadesan notes, “By stressing ‘self-care,’ the neoliberal state divulges paternalistic responsibility for its subjects but simultaneously holds its subjects responsible for self-government” (33). Enter, then, the Quantified Self: a movement predicated on technologies that enable individuals not only to self-track, but to make changes in their lives—based on the data collected—towards a normative conceptualization of the good, healthy citizen. And while certainly not a prerequisite, sharing that data with others adds “value” to it by enabling comparison and competition, though at the risk of that data being absorbed into surveillance apparatuses.

Eugenics, then, was seemingly predicated on wholesale changes to the collective, while the Quantified Self is based on an individual’s efforts to play their responsible part in society—for the sake of that same collective. Both utilize technologies of governmentality that depend on statistical mechanisms invented, or made mainstream, by Francis Galton. But this relationship is more than just analogous: by tracking the development of technologies of experimentation, behaviorism, psychometrics, and personality classification, we see a complex progression from the welfare state’s “one for all” approach to the neoliberal state’s reliance on self-governance. I have already noted a number of social-welfare-focused programs offered by “reform” eugenicists. In hard-line, “positive” eugenics, those deemed worthy are incentivized to reproduce—see, for example, Galton’s £5,000 wedding gift proposal, as well as Henry Fairfield Osborn’s speech to the Third International Congress of Eugenics, in which he argued for “not more but better Americans” (41). To a eugenicist—even a hard-liner—these types of programs might be considered what William Epstein calls “moral behaviorism—the use of material incentives to promote socially acceptable behavior” (183-4), in this case, reproduction for the sake of the race. The development of behaviorism into self-experimentation and incentivized self-tracking makes a great deal of sense, then, as the neoliberal emphasis on self-care no longer warranted social welfare programs. Nadesan, once again citing Rose, notes that “political authorities sought to ‘act at a distance’ upon the desires and social practices of citizens primarily through the promulgation of biopolitical knowledge, experts, and institutions that promised individual empowerment and self-actualization” (27).
The classificatory power of psychometric testing under the early-twentieth-century welfare state served to exclude and erase those individuals deemed worthy of institutionalization or, worse, unworthy of reproduction. The same technology that enabled those tests now drives the self-informing power of the daily happiness meters and mood surveys of the Quantified Self. Nadesan, this time citing Mitchell Dean, points out neoliberalism’s heavy emphasis on normalizing our social and cultural condition—a normalization centered on the containment and extrication of risk; “concerns for ‘responsibility’ and ‘obligation’ outweigh freedom and rehabilitation” (35). Participating in the Quantified Self, one is under the impression that one’s freedom to excel will be enhanced by the adjustments made thanks to the data one has collected. Welfare states sought to normalize towards compliance through aggregate data; the neoliberal state aggregates through surveillance apparatuses for the sake of risk management. Galton’s psychometrically driven tests classified those worthy of breeding and those not. Tracing the progression of these tests alongside the shift from social-welfare to neoliberal biopolitics, it is easy to recognize and understand the emergence of a market of products heavily reliant on the collection and analysis of personal data.

What is the history of the quantified self a history of? One could point to technological advances in circuitry miniaturization or in big data collection and processing. The proprietary and patented nature of the majority of QS devices precludes certain types of inquiry into their invention and proliferation. But it is not difficult to identify one of QS’s most critical underlying tenets: self-tracking for the purpose of self-improvement through the identification of behavioral and environmental variables critical to one’s physical and psychological makeup. Recognizing the importance of this premise allows us to trace back through the scientific fields that have most strongly influenced the QS movement—from both a consumer and a product standpoint. Doing so, however, reveals a seeming incommensurability between an otherwise analogous pair: QS and eugenics. A eugenical emphasis on heredity sits in direct conflict with a self-tracker’s belief that a focus on environmental factors could change one’s life for the better—even while both are predicated on statistical analysis, both purport to improve the human stock, and both, as argued by Dale Carrico, make assertions about what counts as a “normal” human.

Attempting this genealogical connection, however, reveals a more complicated relationship between the two. What I have outlined over the past few weeks is, I hope, only the beginning of such a project. I chose not to produce a rhetorical analysis of the visual and textual language of efficiency in both movements—from that utilized by the likes of Frederick Taylor and his eugenicist protégés, the Gilbreths, to what Christina Cogdell calls “Biological Efficiency and Streamline Design” in her work Eugenic Design, to the deep trove of efficiency rhetoric deployed by the marketers of commercially available QS devices. Nor did I aim to produce an exhaustive bibliographic lineage. I did, however, seek to use the strong current of self-experimentation in QS to work backwards towards the presence of behaviorism in early-twentieth-century eugenical rhetoric. Then, moving in the opposite direction, I tracked the proliferation of Galtonian psychometrics into mid-century personality test development and, eventually, into the risk-management goals of the neoliberal surveillance state. I hope that what I have argued will lead to a more in-depth investigation of each step along this homological relationship. In the grander scheme, I see this project as part of a critical interrogation of the Quantified Self. By throwing into sharp relief the linkages between eugenics and QS, I seek to encourage resistance to fetishizing the latter’s technologies, their output, and the potential for meaningful change via those technologies.

Gabi Schaffzin is a PhD student at UC San Diego. He swore he’d never bring Foucault into his Cyborgology posts. ¯\_(ツ)_/¯. 


Carrico, Dale. “Two Variations of Contemporary Eugenicist Politics.” Accessed 22 Mar. 2017.

Cogdell, Christina. Eugenic Design: Streamlining America in the 1930s. Philadelphia, Pa, University of Pennsylvania Press, 2010.

Epstein, William M. The Masses Are the Ruling Classes: Policy Romanticism, Democratic Populism, and American Social Welfare. New York, NY, Oxford University Press, 2017.

Foucault, Michel. “Governmentality.” The Foucault Effect Studies in Governmentality, edited by Graham Burchell et al., The University of Chicago Press, Chicago, 1991, pp. 87–104.

Hasian, Marouf Arif. The Rhetoric of Eugenics in Anglo-American Thought. Athens, University of Georgia Press, 1996.

Nadesan, Majia Holmer. Governmentality, Biopower, and Everyday Life. New York, Routledge, 2011.

Perkins, Henry Farnham, and Henry Fairfield Osborn. “Birth Selection versus Birth Control.” A Decade of Progress in Eugenics: Scientific Papers of the Third International Congress of Eugenics, Williams & Wilkins, Baltimore, 1934, pp. 29–41.

Roll-Hansen, Nils. “Conclusion: Scandinavian Eugenics in the International Context.” Eugenics and the Welfare State: Sterilization Policy in Denmark, Sweden, Norway, and Finland, edited by Gunnar Broberg and Nils Roll-Hansen, Michigan State University Press, East Lansing, 2005, pp. 259–271.