At what point does a fictional tale of a present day technocapitalist advancement and the characters embroiled in its aftermath turn into a dystopia? Is there ever a clear threshold between the plausible and the absurd? And what responsibility does the artist or author have towards their audience to make clear the realism of the piece?

Spoiler Warning: you may want to tread lightly if you still plan on watching through season 2 of Mr. Robot and season 5 of Orphan Black but haven’t yet.

In Graeme Manson and John Fawcett’s Orphan Black, which recently wrapped its fifth and final season on BBC America, a young con artist discovers she exists for very complicated reasons. She is at once unique in her willingness and ability to protect her family by destroying the systems that created her, and simultaneously one of (at least) 274 genetically identical women. Along with their science consultant, the creators and writers of Orphan Black built a world in which capitalists, religious fanatics, a wealthy madman, and scientists (though many characters would cross into more than one category) came together to circumvent ethics, legalities, and well-established scientific notions as they sought wealth, immortality, and the secrets of humankind. Good thing it was all made up.

And yet Manson and Fawcett have never shied away from revealing their reliance on Cosima Herter, the show’s science consultant. Herter, a scholar in the History of Science, Technology, and Medicine, spent her time on the show researching the science referenced, challenging writers to reconsider assumptions they’ve made about, for instance, the relationship between science and religion, and generally ensuring a tenable story. Manson has said that Herter’s insights “help to inform the big picture even if it’s not overt. So those are important thematic things. We don’t want it to take over the show, but we want it to be such a part of the fabric that you can’t avoid it.” Still, what value does the show’s mostly-believable science1 bring?

The same might be asked about Sam Esmail’s Mr. Robot, in which a relatively unbelievable apocalypse occurs in an extremely believable world. There is a very small gap between what Esmail and his writing team create and what we understand to be our current economic and technocratic situations—at least pre-5/9 hack. Esmail has said that he works hard with consultants to ensure that the technology for the show is plausible, based on non-fictitious products and events.

I’m not as interested in considering here what Orphan Black or Mr. Robot would be like if their writers didn’t ensure a strong plausibility. Instead, I want to consider what they would be like if they pushed even further into the “real”.

* * *

Loosely, the term “hyperstition” refers to the way that new ideas propagate through culture, the way that fictions have the power to shape the “real” future. The term was coined by the Cybernetic Culture Research Unit (Ccru) out of Warwick University in the mid-90s. Ccru was a highly problematic experiment in renegade academia, disbanding almost as quickly as it came together and alienating outsiders and insiders alike along the way. Perhaps the most important concept to have emerged from Ccru, however, was that of accelerationism.

Today, it is generally understood that there are two flavors of accelerationism: the original, “right-wing” version and the newer, leftist variety. The former, developed by Nick Land, one of the founders of Ccru and a philosopher oft-cited in alt-right/neo-Nazi texts, proposes to speed up the capitalist project to the point of technological singularity and ultimate efficiency. The latter, popularized in recent years by Alex Williams and Nick Srnicek, argues that full automation of labor, combined with a universal basic income, means technology will free the working class from capitalism altogether—the traditional left, they claim, will stagnate as long as projects such as Occupy are its chosen path of revolution.

Mr. Robot and Orphan Black become hyperstitious not because their individual premises have necessarily come to fruition but because, as Delphi Carstens writes of hyperstitions in general, “the trauma and fear engendered by their cultural ‘makeovers’…merely serve to further empower [their] basic premises and fan the flames.” That is, the anxiety produced by these shows might be enough to force an audience to consider their realism. Still, the “realness” of these shows is limited by genre and medium: in order to tell the story from Sarah Manning’s or Elliot Alderson’s perspective, we as viewers must understand immediately that this is a fiction—it is not shot as a documentary or news report.

But, once again, what if they were?

In my next post, I’d like to explore what sorts of efforts are currently made by artists and designers in the name of envisioning and/or making a future. I’d also like to work through what sorts of aesthetic or programmatic decisions leave a viewer considering a piece to be real, fiction, or fake. I will use more examples of various types of art that could be, or seem to be, about a “truth”, and I hope eventually to challenge artists to play with the boundaries of when these truths are revealed.

1 There is a minuscule element of the supernatural that helps the clones survive, but I have yet to find anyone angry enough to write about that.

In this post, I’d like to make an argument about how the Democratic party seems to be making its messaging and policy decisions. An argument like this can’t be made in a vacuum—or in 1,500 words. Nor can any one reason, or even ten, be settled on for why the leaders of a party do what they do. But I recognize a pattern in how the DNC and its leadership have acted over the past decade, and I want to work that through here. So please forgive any indication that I am not a policy wonk or political analyst—I do not claim to be, nor do I wish to be, either.

In my series on the history of the Quantified Self and eugenics earlier this year, I referenced the Belgian astronomer Adolphe Quetelet, who argued that man could be measured just like the positions of the planets. I didn’t have the space to explain it very well back then, but think about it like this: you and, say, 570 of your closest friends have telescopes. At the same time on the same night, you each measure the position of a certain star in the sky. You all come up with roughly the same position, but with distinct and consistent variation. Take those measurements and plot them along a chart, like this:

The number of measurements that fall at position A (14 friends got this measurement), B (21), C (41), and so on is counted and plotted. The astronomer’s error law, the normal distribution, and the Gaussian density function (which are all names for the same thing) dictate that these values will fall along the familiar bell curve. Most of the measurements (217) fell at position E, which means that your friends who got other measurements were probably wrong. So let’s say that the star you’re measuring is, in fact, at position E.

Now, let’s assume that instead of 571 people taking the same measurement, it’s just you but you’re measuring the height of 571 people. Quetelet would argue (in fact, he did just this in 1842) that the heights of these people (he would call them men…because they were) would fall into the same normal distribution. And, just like position E on the above graph revealed the “real” position of the star, position E on our height graph would reveal the “real” height of a man. After compiling a good number of measurements about this man, he labeled him l’homme moyen, the average man.
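Quetelet’s analogy is, at bottom, a claim about arithmetic, and it can be sketched in a few lines of code. Below is a minimal simulation (the seed, the star’s “true” position, the height parameters, and the spread values are all invented for illustration): the same bell curve describes 571 measurements of one star and one measurement each of 571 people, and in both cases the center of the curve is read off as the “real” value.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# 571 measurements of one star's position: a single "true" value plus
# independent observational error -- the astronomer's error law.
true_position = 100.0
star_measurements = true_position + rng.normal(0.0, 2.0, size=571)

# One measurement each of 571 people's heights (cm), per Quetelet's leap:
# treat the population as if it were 571 noisy readings of one "average man".
heights = rng.normal(170.0, 7.0, size=571)

# Both histograms approximate a bell curve; the center is taken as
# the "real" position of the star and the "real" height of man.
print(round(star_measurements.mean(), 1))  # close to the true position, 100.0
print(round(heights.mean(), 1))            # Quetelet's l'homme moyen, near 170
```

The sleight of hand, of course, is in the second step: the star really does have one true position, while “the average man” exists only as the center of the curve.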

Remember that this was all happening in the mid-1800s in France and Belgium, a time during which the French monarchy was under upheaval. In 1830, Charles X was forced to abdicate after the July Revolution, and so his cousin, Louis Philippe, became king. Louis Philippe (whose daughter was married to Leopold I, king of Quetelet’s home nation of Belgium) operated under “a juste milieu, in an equal distance from the excesses of popular power and the abuses of royal power” (Antonetti 1994, p. 713). Quetelet, often quoting Victor Cousin and the philosopher’s ideal of moderation and compromise, was quite taken by this idea of juste milieu and equally enthusiastic about the application of the astronomer’s law as an instrument of social analysis: there is a common type of man and, just like the “real” position of the star or the “real” height of man, that type is found somewhere in the middle of the bell curve. Per Ted Porter (1988), “L’homme moyen, then, was not just a mathematical abstraction, but a moral ideal” (103). Quetelet believed that income inequality could be tied to crime rates, that the rich lived longer because they did not drink as much, and that moderate men tempered their passions and helped regulate birth and death rates. By smoothing out the curves that described man, oscillations of the social body were eliminated and an ideal existence could be achieved.

What, then, does this have to do with the Democratic Party? It is relatively well known that the Dems (that is, DNC-sanctioned campaigns for legislative and executive offices) have been basing many of their decisions on a sophisticated data operation. As Daniel Kreiss described last February on this blog, starting after the failed 2004 presidential election, the DNC began to build and amass a sophisticated database of constituent and voter information. In The Audacity to Win, 2008 Obama campaign manager David Plouffe elucidates how critical projects like the DNC’s (and the campaign’s own data and media programs1) were in helping the campaign understand which issues voters wanted to hear about, which geographic areas to focus on (down to the precinct level), and which ads to run when. Reportedly, the 2016 Clinton campaign leaned too heavily on its data, eschewing opportunities to campaign in what would eventually prove to be critical markets…like all of Wisconsin.

Obama won on a centrist platform of compromise, one that led to increased civil freedoms like the right to marry, but his tenure as president also saw large banks and corporations make exponential gains thanks to a largely hands-off approach to post-bailout repercussions. And while the ACA is an extremely critical step in the right direction, it is a far cry from a single-payer healthcare system. On the other hand, the Republican party has enjoyed control of both houses of Congress ever since 2010, and conservative extremism has taken hold of all three branches of government after Clinton’s centrist platform could carry neither her nor her down-ticket colleagues to office. Meanwhile, in England, we’ve observed an oscillation from one extreme—Thatcherism—to the other—Corbyn-inspired socialism. What might have been considered the “mainstream” Labour party two years ago failed miserably, running on, yes, another centrist platform—even with the help of Obama’s 2008 and 2012 strategy, data, and media team.

Francis Galton, you may remember from the first installment of my eugenics series, took Quetelet’s work and shifted it—literally. Rather than seeking to find the normal man and make him the model, the father of eugenics wanted to work against what he considered to be a “reversion to mediocrity.” So he promoted the reproduction of those on the exceptional edge of the bell curve and…gently suggested that those on the “deficient” end not reproduce. Of course, this suggestion manifested itself in forced sterilization programs that lasted well into the 1970s in the United States. The idea, of course, was that by removing the deficient and growing the exceptional, the entire curve is forced to move to the right—to the highest IQs, longest legs, fastest reaction times.
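The arithmetic behind Galton’s program can be made concrete with a toy simulation (emphatically not a model of heredity; every number here is invented for illustration): remove one tail of a bell curve and over-weight the other, and the average is necessarily dragged to the right.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
trait = rng.normal(100.0, 15.0, size=100_000)  # an IQ-like score, for the sketch

# "Negative" eugenics: exclude the bottom 10% from the next cohort;
# "positive" eugenics: count the top 10% twice (incentivized reproduction).
low, high = np.quantile(trait, [0.10, 0.90])
selected = np.concatenate([trait[trait >= low], trait[trait >= high]])

print(round(trait.mean(), 1))     # the original average
print(round(selected.mean(), 1))  # noticeably shifted to the right
```

Iterate that selection over successive “generations” and the whole curve marches rightward, which is exactly the picture Galton’s heirs had in mind.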

Let’s, for the sake of argument, go ahead and call the Republican party Galtonian. Sure, the AHCA, the travel ban, the removal of LGBT identity from the census, and all of the other appalling policies in place or being put in place have eugenicist characteristics. But for now, I want to argue that the Republican party has been using an edge case messaging strategy: war with the terrorists on our soil is imminent, so keep them out and arm yourselves; you might get rich, so let’s reduce the top-earners’ taxes; your marriage will be ruined by someone else’s decisions; women get abortions for fun and your daughter is next. Meanwhile, the Democrats want to reach across the aisle and find a happy medium. They want to incorporate the insurance companies’ wishes into the ACA. Bankers are people, too. We’ll never get single-payer or free college tuition or comprehensive gun control done because the “average American voter” doesn’t want it.

I don’t get to see the data that DNC or GOP operatives have. Nor do I believe one side won or lost solely on the quality or quantity of its data. I have some idea, albeit nascent, why the Democrats refuse to come down hard for social programs that primarily benefit the populace over the corporations (hint: Republican candidates aren’t the only rich ones out there). But I do know that the July Monarchy of Louis Philippe lasted only 18 years, during which he survived seven assassination attempts. It’s time to push towards the other end of the bell curve—to shift the message to a polarized edge case: single-payer is the only just system, free education will lift everyone, top earners owe more to society than vice versa and should pay their share, guns do kill people. If the Democratic party wants to continue to let the data dictate the policy, they will never move beyond a juste milieu. They will point to l’homme moyen and say, “this is our target.” The problem is that that target is moving and, unless they take control, then thanks to a general apathy surrounding and rejection of their candidates, it will continue to move to the right.

1In the interest of full disclosure, I worked for a year at Blue State Digital, though not on the Obama or Clinton campaigns, nor does anything I write here violate any sort of non-disclosure agreement.

Gabi Schaffzin is a PhD student at UC San Diego. On this, America’s celebration of independence from the British, he wants you to know that Bernie would have won.

METATOPIA 4.0 – Algoricene (2017) by Jaime Del Val

The 23rd International Symposium on Electronic Art was held in collaboration with the 16th Festival Internacional De La Imagen in Manizales, Colombia in mid-June 2017. The opening ceremony for the conference kicked off with a performance by the artist Jaime Del Val, entitled METATOPIA 4.0 – Algoricene (2017), described by the artist as “a nomadic, interactive and performative environment for outdoors and indoors spaces.” The artist statement goes on (and on) to explain that the piece “merges dynamic physical and digital architectures” in an effort to “def[y] prediction and control in the Big Data Era.” In actuality, Del Val stripped down to his naked body, put himself in a clear mesh tent, projected abstract shapes onto the tent, and danced to what might best be called abstract electronica (think dubstep’s “wubwubwub” without the pop).

Which part of what Del Val presented qualifies as “electronic art”? Was it the music? The projector? The use of the term “Big Data Era”, capitalized (in lieu, perhaps, of scare quotes) in his entirely glib artist statement? I was similarly confused by Alejandro Brianza’s artist talk, “Underground Soundscapes”, in which he showed a few photos of subway systems around the world, accompanied by sound recordings from each visit. About Brianza’s work and Del Val’s, I wondered: why is this electronic art? In fact, throughout my visit to the ISEA conference and festival, I found myself asking “why” quite often.

To be sure, there were plenty of projects that were quite obviously “electronic”. Bat-bots (2015), for instance, by Daniel Miller, features a pair of bat-like sculptures, complete with echolocation measurement devices and speakers to emit what might be inaudible if you were to walk by an actual bat. Self-proclaimed “sound explorer” Franck Vigroux performed a 45-minute DJ set in front of a Malevich-inspired white cross, made of “thousands of individual pixels, which explode in space according to the levels of energy of the audio”; the track sounded much the same as Del Val’s musical accompaniment. ISEA, then, had no shortage of art that is obviously “electronic” in the sense that it had to be plugged in or used computation as a medium. Still, I could not help but wonder “why” again: why was this even made? Why subject your audience to 45 minutes of the same set of particle physics acting on a simple shape? Why reinvent bats?

ISEA is by no means unique in its ability to attract a congregation of technophilic artists or those intrigued by a mix of science and art. For the past three decades and beyond, organizations like Transmediale, Ars Electronica, and Science Gallery have grown to be major curators of “sciency art” the world over. They operate on mission statements that boast about the interactivity and broad cultural appeal of the work. They throw costly events in major cities around the world and smaller gatherings in satellite venues. Some, like Ars, give out coveted prizes for work deemed superior by a panel of (mostly male) jurors. What they lack, however, is an overt acknowledgement of the political nature of what they are doing. Yes, there is the occasional surveillance detector or VR poverty simulator, but the general excitement that these festivals and the artists showing in them are taking advantage of is a facile equation of “art + science = innovation/truth/the future”.

It seems almost anachronistic to argue for art and politics to be considered necessary partners today. In 1984, artist and critic Lucy Lippard wrote that

It is understood by now that all art is ideological and all art is used politically by the right or the left, with the conscious and unconscious assent of the artist. There is no neutral zone. Artists who remain stubbornly uninformed about the social and emotional effects of their images and their connections to other images outside the art context are most easily manipulated by the prevailing systems of distribution, interpretation, and marketing.

The conservative art critic Hilton Kramer was not so sure, arguing that statements such as “There is no neutral zone” would lead to Lionel Trilling’s “‘eventual acquiescence in tyranny’.” Fifteen years earlier, Kramer, a staunch formalist, watched in horror as Lippard and her Conceptualist peers filled galleries from MoMA to LA’s MOCA with politically charged works of art that often implicated viewers as collaborators in the art. MoMA’s 1970 show, Information, featured Vito Acconci’s Service Area, in which the artist had his postal mail forwarded to the museum. “The piece is performed (unawares),” he writes in the show catalogue, “by the postal service…and by the senders of the mail.” The museum guard becomes a “mail guard” and the artist performs the piece by going to pick up his letters. In Hans Haacke’s Poll of MoMA Visitors, the artist asked exhibition visitors to place a ballot in one of two boxes, each answering “yes” or “no” to the question, “Would the fact that Governor Rockefeller has not denounced President Nixon’s Indochina policy be a reason for you not to vote for him in November?”. Haacke didn’t reveal the question until the night before the show opened. This was considered one of the artist’s first “institutional critiques”—works that sought to bring to light the questionable practices of the venue in which they were exhibited (Governor Rockefeller was brother to David, a member of the MoMA board, and son to Abby, a founder of the museum).

Kramer was unamused. In a particularly scathing review for the New York Times, he called the show “overweeningly intellectual”, making sure to question the artistic value of the work entirely (“There are more than 150 artists—or ‘artists’—from 15 countries”) before declaring the entire show “egregiously boring.” The critic, it seems, was not willing to consider the conceptual and political meaning behind the work, instead taking jabs at its—gasp!—interactive nature: “I am not sure I can give a very accurate or coherent account of what the visitor to this exhibition is invited to look at, listen to, sit down on, clamber over, go to sleep in, write on, stand in front of, read, and otherwise connect with.”

If, nearly fifty years ago, Kramer was bored because he refused to see the depth of the ideological implications in the art, I am bored because I simply cannot find it. Encontros (2017) features two iPhones, screen-to-screen, one showing a video of the brown waters of the Amazon, the other showing the black waters of the Amazon’s Rio Negro tributary. The site at which the two meet—a place of indigenous persecution and slavery since the early 1700s—is a marvel of nature, a limnological metaphor for the clash between cultures, one overpowering the other. The artist statement—signed by fifteen individuals—makes no mention of any sort of geopolitical consideration, instead opting to highlight that “the system searches for real-time information in such a way as to reflect changes in the tides and the phases of the moon.” Projects like Encontros not only could be political, they feel like they should be. This raises the question, then: do the artists (who, presumably, also write the text to accompany the piece) leave it to me to find the culturally critical element? Is the political in the eye of the beholder?

I would be more inclined to consider this possibility if not for the dearth of ideology-inviting rhetoric in the majority of the programming and literature surrounding each organization’s events. With the notable exception of Transmediale, the mission statements of the festivals in question sprinkle words like “society” and “culture” among pronouncements of the juxtaposition of “Biotechnology and genetic engineering, neurology, robotics, prosthetics and media art” (Ars Electronica) and the ignition of “creativity and discovery where science and art collide” (Science Gallery). Science Gallery, in particular, boasts of turning STEM to STEAM—a dubious cheapening of art in the name of STEM’s focus on education qua employment. In the program’s video appealing to possible funders of “the world’s first university-linked network dedicated to public engagement with science and art”, Luke O’Neill, Director of the Trinity Biomedical Science Institute, declares, “there’s no difference in my mind between an artist and a scientist—we’re all after the truth!” I beg to differ.

Welcome to the fourth and final installment of my series on the history of the Quantified Self. If you’re just joining us, be sure to review parts one, two, and three, wherein I introduced and explored a project that seeks to build a genealogical relationship between an already analogous pair: eugenics and the contemporary Quantified Self movement. The last two posts appear to have, at best, complicated, and at worst, failed the hypothesis: critical breaks along both of the genealogies elucidated within each post seem more like chasms that make eugenics and QS difficult to connect in a meaningful way. At the root of this break seem to be the fundamental tenets underlying each movement. Eugenics, with its emphasis on hereditarily passed physical and psychological traits, precludes the possibility that outside, environmental influences may lead to changes in an individual’s bodily or mental makeup. The Quantified Self, on the other hand, is predicated on the belief that, by tracking the variables associated with one’s activities or environment, one might be able to make adjustments to achieve physical or psychological health. On the surface, then, there is an incommensurability between the two fields. However, by understanding how the technologies of the two movements work in the context of the predominant forms of Foucauldian governmentality and biopower of their respective times, we may be able to resolve this chasm.

First, it is important to recognize how closely intertwined the eugenics movement was with the welfare state of early-twentieth-century Europe and the United States. Per Nils Roll-Hansen in the conclusion to Eugenics and the Welfare State, in the first decade of the 1900s, a classical concept of genetics was formed in which an individual’s phenotype was determined not by genetic makeup alone, but by a combination of genotype and environmental and social factors. After being pioneered by conservative evolutionists such as Galton and his cohort of protégés, then, “reform” eugenics of the 1920s and 1930s was led by scientists looking to jettison the racist reputation of their predecessors through a “renewal of the ‘social contract’ of the movement” (Roll-Hansen 260). In Scandinavia, Britain, and elsewhere in Europe, newly elected Labour governments used legislation to enact the forced sterilization of the “feebleminded” and weak in the name of the protection of both that marginalized group and the population as a whole. In England in particular, liberals used “eugenical arguments to disseminate information to the working classes on how they should behave biologically for their own benefit and that of the English ‘race’” (Hasian 115). American liberals used neo-Lamarckian ideas concerning the social influences on human traits to emphasize the importance of “race poison” studies (Hasian 128)—research that “proved” that, for example, cigarettes and alcohol had negative downstream effects on the human race (Hasian 28).

For an understanding of how this type of welfare state came to be, I turn, now, to the eighteenth century, as sovereign power shifted from individuals ruling over principalities and whoever lived inside them to governments overseeing populations understood to live in, travel to, trade with, and war with neighboring lands. In a 1978 talk to the Collège de France, Michel Foucault outlined this shift in governance, arguing that it ushered in the birth of economies: collections of goods, people, and money that all fell under the sovereignty of a state. Critical to the management of these economies were technologies of counting and tracking—statistics, anthropometrics, and the like. Majia Nadesan, reading Foucault as well as Nikolas Rose, notes that governmentality addresses some key concepts surrounding the organization of society’s technologies, problems, and authorities; it recognizes, too, that individuals are turned into “self-regulating agents” and/or marginalized as invisible or dangerous (1). In order to explain how hegemonies develop and deploy technologies to control the life of populations, Foucault developed the concept of biopower, “arguably the most pervasive form of power engendering the homologies and systemic regularities across the diverse fields of social life” (Nadesan 3).

Without question, the technologies enabling eugenics and their legislative implementation are prime examples of governmentality and biopower at work—the combination of which can be understood through Foucault’s “biopolitics”. In the biopolitical realm, knowledge of man—at once global, quantitative (i.e., concerning the population), and analytical (i.e., concerning the individual)—is exploited by loci of power to divide, categorize, and act “upon populations in order to securitize the nation” (Nadesan 25). As the nineteenth century came to a close, the negative effects of laissez-faire policies turned the tide towards a more active liberal state, one that enabled citizens to maximize their liberties. Nadesan perfectly sums up where welfare-state sponsored eugenics comes in: “the modern liberal-welfare state utilized biopolitical knowledge and expert authorities to expand its power at the level of the population…while simultaneously these forms of knowledge operated to individualize and subjectify citizens as particular kinds of subjects” (26). This occurred at the expense of the liberties of some individuals, of course, as conceptualizations of the normal and pathological were dispersed throughout the population (Nadesan 26).

As the twentieth century progressed through two World Wars and the biomedical and technological revolutions that accompanied them, psychology, anthropology, and sociology saw major shifts towards the social experiences of the individual in shaping psychologies and behaviors—this is something exemplified in the two brief histories above. Alongside these new visions of what it means to be human, new technologies of the self (e.g., the self-help personality test, the self-experiment, psychotropics) engendered an empowered, self-governing subject of liberal democracy (Nadesan 149). These technologies of the self (Foucault’s term) ushered in a neoliberal mode of governance—one in which welfare states jettisoned responsibility for the individual. As Nadesan notes, “By stressing ‘self-care,’ the neoliberal state divulges paternalistic responsibility for its subjects but simultaneously holds its subjects responsible for self-government” (33). Enter, then, the Quantified Self: a movement predicated on the use of technologies which enable individuals not only to self-track, but to make changes in their lives—based on the data collected—towards a normative conceptualization of a good, healthy citizen. And while certainly not a prerequisite, sharing that data with others adds “value” to it by enabling comparison and competition, though at the risk of being utilized by surveillance apparatuses.

Eugenics, then, was seemingly predicated on wholesale changes to the collective, while the Quantified Self is based on an individual’s efforts to play their responsible part in society—for the sake of that same collective. Both utilize technologies of governmentality that depend on statistical mechanisms invented and/or made mainstream by Francis Galton. But this relationship is more than just analogous—by tracking the development of technologies of experimentation, behaviorism, psychometrics, and personality classification, we see a complex progression from the welfare state’s “one for all” approach to the neoliberal state’s reliance on self-governance. I have already noted a number of social-welfare-focused programs offered by “reform” eugenicists. In hard-liner, “positive” eugenics, those deemed worthy are incentivized to reproduce—see, for example, Galton’s £5,000 wedding gift proposal, as well as Henry Fairfield Osborn’s speech to the Third International Congress on Eugenics, in which he argued for “not more but better Americans” (41). To a eugenicist—even a hard-liner—these types of programs might be considered what William Epstein calls “moral behaviorism—the use of material incentives to promote socially acceptable behavior” (183-4), in this case, reproduction for the sake of the race. The development of behaviorism into self-experimentation and incentivized self-tracking makes a great deal of sense, then, as the neoliberal emphasis on self-care no longer warranted social welfare programs. Nadesan, once again citing Rose, notes that “political authorities sought to ‘act at a distance’ upon the desires and social practices of citizens primarily through the promulgation of biopolitical knowledge, experts, and institutions that promised individual empowerment and self-actualization” (27).
The classificatory power of psychometric testing under the early-twentieth-century welfare state served to exclude and erase those individuals deemed worthy of institutionalization or, worse, unworthy of reproduction. The same technology that enabled those tests drives the self-informing power of the daily happiness meters and mood surveys of the Quantified Self. Nadesan, this time citing Mitchell Dean, points out neoliberalism’s heavy emphasis on normalization of our social and cultural condition—a normalization centered on the containment and extrication of risk; “concerns for ‘responsibility’ and ‘obligation’ outweigh freedom and rehabilitation” (35). Participating in the Quantified Self, one is under the impression that one’s freedom to excel will be enhanced by the adjustments made thanks to the data one has collected. Welfare states sought to normalize towards compliance through aggregate data. The neoliberal state aggregates through surveillance apparatuses for the sake of risk management. Galton’s psychometrically driven tests classified those worthy of breeding and those not. Tracing the progression of these tests along with the shift from social-welfare to neoliberal biopolitics, it is easy to recognize and understand the shift into a market based on products heavily reliant on the collection and analysis of personal data.

What is the history of the quantified self a history of? One could point to technological advances in circuitry miniaturization or in big data collection and processing. The proprietary and patented nature of the majority of QS devices precludes certain types of inquiry into their invention and proliferation. But it is not difficult to identify one of QS’s most critical underlying tenets: self-tracking for the purpose of self-improvement through the identification of behavioral and environmental variables critical to one’s physical and psychological makeup. Recognizing the importance of this premise to QS allows us to trace back through the scientific fields which have strongly influenced the QS movement—from both a consumer and product standpoint. Doing so, however, reveals a seeming incommensurability between an otherwise analogous pair: QS and eugenics. A eugenical emphasis on heredity sits in direct conflict with a self-tracker’s belief that a focus on environmental factors could change one’s life for the better—even while both are predicated on statistical analysis, both purport to improve the human stock, and both, as argued by Dale Carrico, make assertions about what is a “normal” human.

A more complicated relationship between the two is revealed upon attempting this genealogical connection. What I have outlined over the past few weeks is, I hope, only the beginning of such a project. I chose not to produce a rhetorical analysis of the visual and textual language of efficiency in both movements—from that utilized by the likes of Frederick Taylor and his eugenicist protégés, the Gilbreths, to what Christina Cogdell calls “Biological Efficiency and Streamline Design” in her work, Eugenic Design, and on into the deep trove of efficiency rhetoric deployed by the marketers of commercially available QS devices. Nor did I aim to produce an exhaustive bibliographic lineage. I did, however, seek to use the strong sense of self-experimentation in QS to work backwards towards the presence of behaviorism in early-twentieth century eugenical rhetoric. Then, moving in the opposite direction, I tracked the proliferation of Galtonian psychometrics into mid-century personality test development and eventually into the risk-management goals of the neoliberal surveillance state. I hope that what I have argued will lead to a more in-depth investigation into each step along this homological relationship. In the grander scheme, I see this project as part of a critical interrogation into the Quantified Self. By throwing into sharp relief the linkages between eugenics and QS, I seek to encourage resistance to fetishizing the latter’s technologies and their output, as well as the potential for meaningful change via those technologies.

Gabi Schaffzin is a PhD student at UC San Diego. He swore he’d never bring Foucault into his Cyborgology posts. ¯\_(ツ)_/¯. 


Carrico, Dale. “Two Variations of Contemporary Eugenicist Politics.” Accessed 22 Mar. 2017.

Cogdell, Christina. Eugenic Design: Streamlining America in the 1930s. Philadelphia, Pa, University of Pennsylvania Press, 2010.

Epstein, William M. The Masses Are the Ruling Classes: Policy Romanticism, Democratic Populism, and American Social Welfare. New York, NY, Oxford University Press, 2017.

Foucault, Michel. “Governmentality.” The Foucault Effect Studies in Governmentality, edited by Graham Burchell et al., The University of Chicago Press, Chicago, 1991, pp. 87–104.

Hasian, Marouf Arif. The Rhetoric of Eugenics in Anglo-American Thought. Athens, University of Georgia Press, 1996.

Nadesan, Majia Holmer. Governmentality, Biopower, and Everyday Life. New York, Routledge, 2011.

Roll-Hansen, Nils. “Conclusion: Scandinavian Eugenics in the International Context.” Eugenics and the Welfare State: Sterilization Policy in Denmark, Sweden, Norway, and in Finland, edited by Gunnar Broberg and Nils Roll-Hansen, Michigan State University Press, East Lansing, 2005, pp. 259–271.

Perkins, Henry Farnham, and Henry Fairfield Osborn. “Birth Selection versus Birth Control.” A Decade of Progress in Eugenics; Scientific Papers of the Third International Congress of Eugenics, Williams & Wilkins, Baltimore, 1934, pp. 29–41.

Welcome to part three of my multi-part series on the history of the Quantified Self as a genealogical descendant of eugenics. In last week’s post, I elucidated Francis Galton’s influence on experimental psychology, arguing that it was, largely, a technological one. In an oft-cited paper from 2013, researcher Melanie Swan argues that “the idea of aggregated data from multiple…self-trackers[, who] share and work collaboratively with their data” will help make that data more valuable—be it to the individual tracking, the physician working with them, the corporation selling the device worn, or any other stakeholder (86). The predictive power of correlation and regression is, no doubt, of great value to these trackers. Harvey Goldstein, in a paper tracing Galton’s contributions to psychometrics, notes that Galton was not the only late-nineteenth-century scientist to believe that genius was passed down hereditarily. He was, however, one of the few to take up the task of designing a study to show genealogical causality regarding character, thanks once again to his correlation coefficient and resultant laws of regression.
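Since correlation and regression recur throughout this series, a concrete sketch may help. Below is a minimal Python illustration of the product-moment correlation coefficient (the statistic descended from Galton’s work and formalized by Pearson), computed over entirely invented self-tracking numbers; the sleep and mood figures are hypothetical, chosen only to show the mechanics:

```python
import math

# Hypothetical self-tracking data (invented for illustration):
# nightly hours of sleep and a self-reported 1-10 mood score.
sleep = [6.0, 7.5, 5.0, 8.0, 6.5, 7.0, 5.5, 8.5]
mood = [5.0, 7.0, 4.0, 8.0, 6.0, 7.0, 5.0, 9.0]

def pearson_r(xs, ys):
    """Product-moment correlation: covariance over the product of spreads."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(sleep, mood)
# Galton's "law of regression": the slope predicting mood from sleep
# is r scaled by the ratio of the two spreads.
slope = r * math.sqrt(sum((y - sum(mood) / len(mood)) ** 2 for y in mood)
                      / sum((x - sum(sleep) / len(sleep)) ** 2 for x in sleep))
print(round(r, 2))  # → 0.99
```

A tracker who computes such a coefficient between, say, coffee intake and happiness is performing exactly the statistical gesture Galton pioneered, whatever the domain.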

Galton’s contributions to psychometrics go beyond the technological, however, into the methodological. In what I might have also included as an example of the scientist’s support for self-experimentation, Galton’s 1879 “Psychometric Experiments” features the results of a word association test performed on himself:

The plan I adopted was to suddenly display a printed word, to allow about a couple of ideas to successively present themselves, and then, by a violent mental revulsion and sudden awakening of attention, to seize upon those ideas before they had faded, and to record them exactly as they were at the moment when they were surprised and grappled with. (426)

Famously, this word association test was used by Carl Jung as he developed methods to classify his subjects into his various psychological types (Paul 82). Eventually, this tool pioneered by Galton was used to build the Myers-Briggs Type Indicator, a 93-question test which plots a test-taker’s personality along multiple axes. Interestingly, the MBTI works against what Nicholas Lemann calls “the first principle of psychometrics…that all distributions bunch up in the middle, in the familiar form of a bell curve” (91). Because of the MBTI’s assumption that individuals are either introverts or extroverts, and so on, resultant data would look like an inverse bell curve, with data bunched up on either end of the axes. Though the test had been conceived of decades prior, Katharine Briggs and Isabel Briggs Myers were finally inspired to finalize the MBTI’s matrices in 1943. The test was, per its creators, intended to help people understand one another—a concern inspired by the onset of World War II, which also provided a more practical reason for its development: helping women who were replacing men in the industrial workplace to find the right “fit” in their new jobs (Myers 208).

Beyond influence in managerial-type personality tests, a Galtonian lineage can be found in the development of the Minnesota Multiphasic Personality Inventory. The 567-item questionnaire was built using a system derived from the nosological methodology of Emil Kraepelin, a German psychiatrist who, in 1921, published a paper arguing for “inner colonization”—what one translator suggests “as being rightly associated with the eugenics movement” (Engstrom and Weber 341). While the MMPI is perhaps the most widely used psychological personality test, it is closely followed by the Sixteen Personality Factor Questionnaire, a 187-item test developed by Raymond Cattell in the 1940s (Paul xii, xiv). The eccentric researcher developed his own language (with words like “Autia”, “Harria”, “Parmia”, and “Zeppia” all referring to different character traits) in order to describe subjects in a novel manner. Cattell’s quirkiness is perhaps not too surprising when his academic pedigree is revealed: he was recruited into psychology by the eugenicist Cyril Burt (Paul 179), who was eventually revealed to have falsified most of his data in twin studies meant to support Galtonian conceptualizations of heredity (Hattie 259). Charles Spearman, Cattell’s academic mentor, was another eugenicist who argued that “‘An accurate measurement of everyone’s intelligence would seem to herald the feasibility of selecting better endowed persons for admission into citizenship—and even for the right of having offspring’” (Paul 179). And while Cattell attempted, after World War II, to walk back his belief in purely hereditary personality traits, he could not resist revisiting his eugenicist ways in his 1972 A New Morality From Science (Paul 180-81).

The history of Galton and eugenics, then, can be traced into the history of personality tests. Once again, we come up against an awkward transition—this time from personality tests into the Quantified Self. Certainly, shades of Galtonian psychometrics are present in QS technologies—that is, the treatment of statistical datasets for the purpose of correlation and prediction. Galton’s word association tests strongly influenced the MBTI, a test that, much like Quantified Self projects, seeks to help a subject make the right decisions in their life, though not through traditional Galtonian statistical tools. The MMPI and 16PFQ, meanwhile, serve psychological evaluative purposes. And while some work has been done to suggest that “mental wellness” can be improved through self-tracking (see Kelley et al., Wolf 2009), much of the self-tracking ethos is based on factors that can be adjusted in order to see a correlative change in the subject (Wolf 2009). That is, by tracking my happiness on a daily basis against the amount of coffee I drink or the places I go, I am acknowledging an environmental approach and declaring that my current psychological state is not set by my genealogy. A gap remains, then, between Galtonian personality tests and QS.

Next week, I’ll conclude the series by suggesting that this gap might be closed with the help of your friend and mine, Michel Foucault. Come back, won’t you?

Gabi Schaffzin is a PhD student at UC San Diego. He hates personality tests—of which he has had to take many, thanks to his past life—because he always ends up smack dab in the middle of whatever silly outcomes are possible. 


Engstrom, E. J., and M. M. Weber. “Classic Text No. 83: ‘On Uprootedness’ by Emil Kraepelin (1921).” History of Psychiatry, vol. 21, no. 3, 2010, pp. 340–350., doi:10.1177/0957154x10376890.

Galton, Francis. “Psychometric Experiments.” Brain, vol. 2, no. 2, 1879, pp. 149–162., doi:10.1093/brain/2.2.149.

Goldstein, Harvey. “Francis Galton, Measurement, Psychometrics and Social Progress.” Assessment in Education: Principles, Policy & Practice, vol. 19, no. 2, 2012, pp. 147–158., doi:10.1080/0969594x.2011.614220.

Hattie, J. (1991). “The Burt Controversy: An essay review of Hearnshaw’s and Joynson’s biographies of Sir Cyril Burt.” Alberta Journal of Educational Research, 37(3), 259-275.

Lemann, Nicholas. The Big Test: the Secret History of the American Meritocracy. New York, Farrar, Straus and Giroux, 2007.

Myers, Isabel Briggs, and Peter B. Myers. Gifts Differing: Understanding Personality Type. Mountain View, CA, Nicholas Brealey Publishing, 2010.

Paul, Annie Murphy. The Cult of Personality: How Personality Tests Are Leading Us to Miseducate Our Children, Mismanage Our Companies, and Misunderstand Ourselves. New York, Free Press, 2004.

Swan, Melanie. “The Quantified Self: Fundamental Disruption in Big Data Science and Biological Discovery.” Big Data, vol. 1, no. 2, 2013, pp. 85–99., doi:10.1089/big.2012.0002.

Wolf, Gary. “Measuring Mood – Current Research and New Ideas.” Quantified Self, 12 Feb. 2009, Accessed 21 Mar. 2017.

Last week, I began an attempt at tracing a genealogical relationship between eugenics and the Quantified Self. I reviewed the history of eugenics and the ways in which statistics, anthropometrics, and psychometrics influenced the pseudoscience. This week, I’d like to begin to trace backwards from QS and towards eugenics. Let me begin, as I did last week, with something quite obvious: the Quantified Self has a great deal to do with one’s self. Stating this, however, helps place QS in a historical context that will prove fruitful in the overall task at hand.

In a study published in 2014, a group of researchers from both the University of Washington and the Microsoft Corporation found that the term “self-experimentation” was used prevalently among their QS-embracing subjects.

“Q-Selfers,” they write, “wanted to draw definitive conclusions from their QS practice—such as identifying correlation…or even causation” (Choe, et al. 1149). Although not performed with “scientific rigor”, this experimentation was about finding meaningful, individualized information with which to take further action (Choe, et al. 1149).

Looking back at the history of self-experimentation in the sciences—in particular, experimental and behavioral psychology—leads to a 1981 paper by Reed College professor and psychologist, Allen Neuringer, entitled, “Self-Experimentation: A Call for Change”. In it, Neuringer argues for a closer emphasis on the self by behaviorists:

If experimental psychologists applied the scientific method to their own lives, they would learn more of importance to everyone, and assist more in the solution of problems, than if they continue to relegate science exclusively to the study of others. The area of inquiry would be relevant to the experimenter’s ongoing life, the subject would be the experimenter, and the dependent variable some aspect of the experimenter’s behavior, overt or covert. (79)

The psychologist goes on to suggest that poets and novelists could use the method to discover what causes love and that “all members of society” will “view their lives as important” thanks to their contributions to scientific progress (93).

Neuringer’s argument is heavily influenced by the work of B. F. Skinner, the father of radical behaviorism—a subset of psychology in which the behavior of a subject (be it human or otherwise) can be “explained through the conditioning…in response to the receipt of rewards or punishments for its actions” (Gillette 114). We can see, then, the influence of both behavioral and experimental psychology on the quantified self: not only do QS devices track, but many of the interfaces built into and around them embrace “gamification”. That is, beyond the watch face or pedometer display, the dashboards displaying results, the emails and alerts presented to subjects, the “competition” features, etc., all embrace what Deborah Lupton calls “the rendering of aspects of using…self-tracking as games…an important dimension of new approaches to self-tracking as part of motivation strategies” (23).

The field of experimental psychology—out of which behaviorism grew when, in 1913, John B. Watson wrote “Psychology as the Behaviorist Views It”—was not specifically an invention of Francis Galton. This is not to say that Galton did not partake in experimental psychology during his eugenic research. In fact, his protégé and biographer, Karl Pearson, cites “a leading psychologist” writing in 1911: “‘Galton deserves to be called the first Englishman to publish work that was strictly what is now called Experimental Psychology, but the development of the movement academically has, I believe, in no way been influenced by him’” (213). Pearson, who included this quote in the 1924 second volume of The Life, Letters and Labours of Francis Galton, goes on to argue that American and English psychological papers are far superior to their continental counterparts thanks directly to Galton’s work on correlation in statistical datasets, though, per Ian Hacking, Pearson later notes that correlation laws may have been identified “much earlier in the Gaussian [or Normal] tradition” (187).

Here we begin to see an awkward situation in our quest to draw a line from Galton and hard-line eugenics (we will differentiate between hard-line and “reform” eugenics further on) to the quantified self movement. Behaviorism sits diametrically opposed to eugenics for a number of reasons. Firstly, it does not distinguish between human and animal beings—certainly a tenet to which Galton and his like would object, given their understanding that humans are the superior species, with a hierarchy of greatness existing within that species as well. Secondly, behaviorism accepts that outside, environmental influences will change the psychology of a subject. In 1971, Skinner argued that “An experimental analysis shifts the determination of behavior from autonomous man to the environment—an environment responsible both for the evolution of the species and for the repertoire acquired by each member” (214). This stands in direct conflict with the eugenical ideal that physical and psychological makeup is determined by heredity. Indeed, the eugenicist Robert Yerkes, otherwise close with Watson, wholly rejected the behaviorist’s views (Hergenhahn 400). Tracing the quantified self’s behaviorist and self-experimental roots, then, leaves us without a very strong connection to the ideologies driving eugenics. Still, using Pearson as a hint, there may be a better path to follow.

So come back next week and we’ll see what else we can dig up in our quest to understand a true history of the Quantified Self.

Gabi Schaffzin is a PhD student at UC San Diego. He has a very good dog named Buckingham. 


Choe, Eun Kyoung, et al. “Understanding Quantified-Selfers’ Practices in Collecting and Exploring Personal Data.” Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems – CHI ’14, 2014, pp. 1143–1152., doi:10.1145/2556288.2557372.

Gillette, Aaron. Eugenics and the Nature-Nurture Debate in the Twentieth Century. New York, Palgrave Macmillan, 2011.

Hacking, Ian. The Taming of Chance. Cambridge, Cambridge University Press, 1990.

Hergenhahn, B. R. An Introduction to the History of Psychology. Belmont, CA, Wadsworth, 2009.

Lupton, Deborah. The Quantified Self: a Sociology of Self-Tracking. Cambridge, UK, Polity, 2016.

Neuringer, Allen. “Self-Experimentation: A Call for Change.” Behaviorism, vol. 9, no. 1, 1981, pp. 79–94., Accessed 19 Mar. 2017.

Pearson, Karl. The Life, Letters and Labours of Francis Galton. Characterisation, Especially by Letters. Index. Cambridge, UP, 1930, Accessed 17 Mar. 2017.

In the past few months, I’ve posted about two works of long-form scholarship on the Quantified Self: Deborah Lupton’s The Quantified Self and Gina Neff and Dawn Nafus’s Self-Tracking. Neff recently edited a volume of essays on QS (Quantified: Biosensing Technologies in Everyday Life, MIT 2016), but I’d like to take a not-so-brief break from reviewing books to address an issue that has been on my mind recently. Most texts that I read about the Quantified Self (be they traditional scholarship or more informal) refer to a meeting in 2007 at the house of Kevin Kelly for the official start to the QS movement. And while, yes, the name “Quantified Self” was coined by Kelly and his colleague Gary Wolf (the former founded Wired, the latter was an editor for the magazine), the practice of self-tracking obviously goes back much further than 10 years. Still, most historical references to the practice often point to Sanctorius of Padua, who, per an oft-cited study by consultant Melanie Swan, “studied energy expenditure in living systems by tracking his weight versus food intake and elimination for 30 years in the 16th century.” Neff and Nafus cite Benjamin Franklin’s practice of keeping a daily record of his time use. These anecdotal histories, however, don’t give us much in terms of understanding what a history of the Quantified Self is actually a history of.

Briefly, what I would like to prove over the course of a few posts is that at the heart of QS are statistics, anthropometrics, and psychometrics. I recognize that it’s not terribly controversial to suggest that these three technologies (I hesitate to call them “fields” here because of how widely they can be applied), all developed over the course of the nineteenth century, are critical to the way that QS works. Good thing, then, that there is a second half to my argument: as I touched upon briefly in my [shameless plug alert] Theorizing the Web talk last week, these three technologies were also critical to the proliferation of eugenics, that pseudoscientific attempt at strengthening the whole of the human race by breeding out or killing off those deemed deficient.

I don’t think it’s very hard to see an analogous relationship between QS and eugenics: both movements are predicated on anthropometrics and psychometrics, comparisons against norms, and the categorization and classification of human bodies as a result of the use of statistical technologies. But an analogy only gets us so far in seeking to build a history. I don’t think we can just jump from Francis Galton’s ramblings at the turn of one century to Kevin Kelly’s at the turn of the next. So what I’m going to attempt here is a sort of Foucauldian genealogy—from what was left of eugenics after its [rightful, though perhaps not as complete as one would hope] marginalization in the 1940s through to QS and the multi-billion dollar industry the movement has inspired.

I hope you’ll stick around for the full ride—it’s going to take a number of weeks. For now, let’s start with a brief introduction to that bastion of Western exceptionalism: the eugenics movement.

Francis Galton had already been interested in heredity and statistics before he read Charles Darwin’s On the Origin of Species upon its publication in 1859. The work, written by his half-cousin, acted as a major inspiration in Galton’s thinking on the way that genius was passed through generations—so much so that Galton spent the remainder of his life working on a theory of hereditary intelligence. His first publication on the topic, “Hereditary Talent and Character” (1865), traced the genealogy of nearly 1,700 men whom he deemed worthy of accolades—a small sample of “the chief men of genius whom the world is known to have produced” (Bulmer 159)—eventually concluding that “Everywhere is the enormous power of hereditary influence forced on our attention” (Galton 1865, 163). Four years later, the essay inspired a full volume, Hereditary Genius, in which Galton utilized Adolphe Quetelet’s statistical law detailing a predictive uniformity in deviation from a normally distributed set of data points—the law of errors.

Much like Darwin’s seminal work, Quetelet’s advancements in statistics played a critical part in the development of Galton’s theories on the hereditary nature of human greatness. Quetelet, a Belgian astronomer, was taken by his predecessors’ work to normalize the variation in error that occurred when the positions of celestial bodies were measured multiple times. Around the same time—that is, in the first half of the nineteenth century—French intellectuals and bureaucrats alike had taken a cue from the Marquis de Condorcet, who had proposed a way to treat moral—or, social—inquiries in a similar manner to the way the physical sciences were approached. Quetelet, combining the moral sciences with normal distributions, began to apply statistical laws of error in distribution to the results of anthropometric measurements across large groups of people: e.g., the chest size of soldiers, the height of school boys. The result, which effectively treated the variation between individual subjects’ measurements in the same manner as a variation in a set of measurements of a single astronomical object, was homme type—the typical man (Hacking 111–12).

In 1889, Galton wrote, “I know of scarcely anything so apt to impress the imagination as the wonderful form of cosmic order expressed by the ‘Law of Frequency of Error’” (66). Six years earlier, in Inquiries Into Human Faculty, he had declared himself interested in topics “more or less connected with that of the cultivation of race” (17, emphasis added)—that is, in eugenics—rather than in its mere observation. Galton’s argument was rather simple, albeit vague: society should encourage the early marriage and reproduction of men of high stature. Per Michael Bulmer, “He suggested that a scheme of marks for family merit should be devised, so that ancestral qualities as well as personal qualities could be taken into account” (82). Once these scores were evaluated, the individuals with top marks would be encouraged to and rewarded for breeding; at one point, he recommended a £5,000 “wedding gift” for the top ten couples in Britain each year, accompanied by a ceremony in Westminster Abbey officiated by the Queen of England (Bulmer 82). This type of selective breeding would eventually be referred to as “positive eugenics”.

The statistical technologies developed by Quetelet and the like were utilized by Galton for more than just the evaluation of which individuals were worthy of reproduction; they also allowed for the prediction of how improvements would permeate through a population. Specifically, he argued that if a normally distributed population (measured on whichever metric, or combination of metrics, he had chosen) reproduced, it would result in another normally distributed population—that is, the bulk of the population would remain average or mediocre (Hacking 183). He called this the law of regression and understood it to severely slow the improvement of a race towards the ideal. However, if one could guarantee that those individuals at the opposite end of the bell curve—that is, the morally, physically, or psychologically deficient—were not reproducing, then an accelerated reproduction of the exceptional could take place (Bulmer 83). Thus was born “negative eugenics”.
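Galton’s regression law is easy to see in a toy simulation. The Python sketch below is a deliberately crude model of my own invention (not Galton’s actual method or data): each offspring inherits only a fraction of the parental deviation from the population mean, plus fresh random variation, so the children of an exceptional group drift back toward mediocrity:

```python
import random

random.seed(42)

# Crude illustrative model (my invention, not Galton's):
# a trait is part heritable, part fresh chance.
HERITABILITY = 0.5
POP_MEAN, POP_SD = 100.0, 15.0

def offspring(parent_trait):
    # Children inherit only a fraction of the parental deviation
    # from the mean, plus new random variation of their own.
    heritable = POP_MEAN + HERITABILITY * (parent_trait - POP_MEAN)
    return random.gauss(heritable, POP_SD * 0.7)

parents = [random.gauss(POP_MEAN, POP_SD) for _ in range(10000)]
elite = [p for p in parents if p > POP_MEAN + POP_SD]  # top tail only
children = [offspring(p) for p in elite]

elite_mean = sum(elite) / len(elite)
child_mean = sum(children) / len(children)
# The children of exceptional parents are, on average, less
# exceptional -- they "regress" toward the population mean.
print(round(elite_mean, 1), round(child_mean, 1))
```

This is the mechanism that frustrated Galton: even breeding exclusively from the top tail, the next generation bunches back up around the average, which is precisely why he reasoned that culling the bottom tail would be needed to shift the whole curve.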

I will revisit the proliferation of eugenics a bit later in this study, but it is important here to note that the historical trail of the active and public implementation of eugenics eventually goes cold somewhere between 1940 and 1945, depending on which country one is looking at. Most obviously, the rise of the Third Reich and its party platform, built primarily on eugenicist policies, had a direct effect on the decline of eugenics towards the midway point of the twentieth century. Previously enacted (and confidently defended) state policies regarding forced sterilization, from Scandinavia to the United States, were eventually struck down and remain embarrassing marks on national histories to this day (Hasian 140), though the last US law did not come off the books until the 1970s.

This is not to suggest that the scientific ethos behind the field—that one’s genetic makeup determines both physical and psychological traits—went completely out of fashion. Instead, I hope it has become obvious, even in this brief overview, that the aforementioned analogies between eugenics and QS are not difficult to draw. But how do we get from one to the other? And am I being crazy in doing so?

The second question is probably up for grabs for a little while. I’ll begin to answer the first one next week, however, when I sketch out a history of self-experimentation and behavioral psychology, moving backwards from the Quantified Self to eugenics. Come back again, won’t you?

Gabi Schaffzin is a PhD student at UC San Diego. Having just returned from the east coast, his jetlag has left him without anything witty to add. 


Bulmer, M. G. Francis Galton: Pioneer of Heredity and Biometry. Baltimore, Johns Hopkins University Press, 2003.

Galton, Francis. “Hereditary Talent and Character.” Macmillan’s Magazine, 1865, pp. 157–327, Accessed 17 Mar. 2017.

Galton, Francis. Natural Inheritance. New York, AMS Press, 1973 (Originally published 1889).

Hacking, Ian. The Taming of Chance. Cambridge, Cambridge University Press, 1990.

Hasian, Marouf Arif. The Rhetoric of Eugenics in Anglo-American Thought. Athens, University of Georgia Press, 1996.

Lupton, Deborah. The Quantified Self: a Sociology of Self-Tracking. Cambridge, UK, Polity, 2016.

Neff, Gina, and Dawn Nafus. Self-Tracking. Cambridge, MIT Press, 2016.

Back in January, I wrote about Deborah Lupton’s The Quantified Self, a recent publication from Polity by the University of Canberra professor of Communication. In that post I mentioned that I planned to read another book on the QS movement from MIT Press, Self-Tracking by Gina Neff, a Communication scholar at the University of Washington, and Dawn Nafus, an anthropologist at Intel. And so I have.

Much like Lupton’s book, Self-Tracking is best utilized as an introduction to the structures and cultural context in which the quantified self operates. The work begins with a relatively broad introduction to what the quantified self is (the authors differentiate between lower-case quantified self as the general self-tracking industry and uppercase Quantified Self as the Meet-Up-ing, annual-conference-ing, ever-proselytizing community) and what practices the term encompasses. Just as in Lupton’s book, we are treated to insight from Cyborgology’s super-famous past contributor, Whitney Erin Boesel, and her “Taxonomy of types of people”. As I noted back in January, however, Lupton uses a great deal of ink giving example after example of QS devices and services; the authors of Self-Tracking sprinkle their examples throughout, which helps the book flow in a significantly more natural manner.

Neff and Nafus also narrow their focus to the health-related aspects of QS. For instance, the pair consider what sorts of problems a doctor might encounter when a patient brings in self-tracked data (spoiler: a whole bunch). In considering how this differs from Lupton’s account, I am tempted to suggest that her analysis touched on a much broader swath of the QS market—but this assumes there is a difference between QS devices and health-related tracking. That is, as I read Self-Tracking, I wondered if there are any QS devices that are not health-related. What is the boundary between the body and health? How are normal bodies and healthy bodies any different? Could a QS device be marketed as something that will help you become something other than healthy?

Most of these questions are not explicitly asked by the authors of Self-Tracking. Lupton, on the other hand, does delve into more theoretical questions of what defines the self—at one point suggesting a QS-enabled prosthesis of selfhood, rendering “self-extension possible” (70) (Neff and Nafus refer to a “prosthesis of feeling” at one point, but this is a different issue). In some respects, reading Quantified Self and Self-Tracking together provides a reader with perhaps the right balance of depth—into the utilization of self-tracking in the service of and complementary to the healthcare industry—and breadth—across multiple theoretical categories of data and selfhood.

Still, one thing I don’t get from either work is the answer to the question: where did this all come from? That is, what is the history of the quantified self a history of? Both Lupton and Neff and Nafus offer anecdotal histories of Benjamin Franklin tracking his wellbeing on a small piece of paper in his pocket or of the launch of the Quantified Self Meet Up in 2007. Neither, however, considers the social or cultural phenomena that led to the proliferation of behavioral modification through self-tracking. This is something I hope to write about in future posts, but for now, I want to make it clear that I am not necessarily faulting these authors for the lack of this history.

Instead, it is important to consider that both books sit in very precarious positions academically. That is, these scholars took a great risk spending so much time and effort to publish in the long form on subject matter that is changing just about weekly. Already, Neff and Nafus’s assertions about FDA regulations feel outdated under the Trump administration (note that Trump has not taken any direct actions regarding the FDA quite yet, but it’s hard to imagine any Obama-era regulations or policies staying intact throughout all of Trump’s time as president, however brief or extensive that may be). This is perhaps why Self-Tracking is part of MIT Press’s “Essential Knowledge Series”, which, per the publisher’s website, “offers concise, accessible overviews of compelling topics…expert syntheses of subjects ranging from the cultural and historical to the scientific and technical.” Even the physical book itself feels temporary—more like a 5”x7” pocket guide than something that belongs on library shelves for the foreseeable future. I think both of these books would be excellent reading for students just learning to question the hegemonic properties of the technologies being heralded for whatever reason their marketers choose.

For now, the search continues for more QS scholarship.

Gabi Schaffzin is a PhD student at UC San Diego in the Visual Arts department. He spent probably too much time fretting over the typography choices of the book he reviewed in this post.

The English translation of Martin Luther and Philipp Melanchthon’s 1523 Deuttung der czwo grewlichen Figuren, Bapstesels czu Rom und Munchkalbs czu Freyberg ijnn Meysszen funden is a 19-page pamphlet describing two monsters: a pope-ass and a monk-calf. The former, a donkey-headed biped with one hand, two hooves, and a chicken’s foot, per Arnold Davidson, represents how “horrible that the Bishop of Rome should be the head of the Church.” The latter, a creature that brings to mind Admiral Ackbar (think, “it’s a trap!”), illustrates the “frivolous prattle” of Catholic Sacraments. Davidson explains, “Both of these monsters were interpreted within the context of a polemic against the Roman church. They were prodigies, signs of God’s wrath against the Church which prophesied its imminent ruin.” Fifty-six years after the pamphlet’s original publication in German, Of two wonderful popish monsters was distributed in English.

More than four centuries after that, in August of last year, five larger-than-life statues of a naked, blonde, bloated man were affixed to the pavement in highly trafficked areas of Cleveland, San Francisco, New York City, Los Angeles, and Seattle.

The statues, made in the likeness of now President Donald Trump, were created by a Las Vegas-based artist, Ginger, using over 300 pounds of clay and silicone and were commissioned by the anonymous graffiti group, Indecline. In an interview with the Washington Post, Ginger noted that he has “a long history of designing monsters for haunted houses and horror movies.” In fact, he explained, Indecline chose Ginger “‘because of my monster-making abilities.’”

What good are monsters? Is it productive to call our new president one? According to Georges Canguilhem, “the existence of monsters calls into question the capacity of life to teach us order…a living being with negative value…whose value is to be a counterpoint.” The opposite of life is not death, per the philosopher, it is the monster. In this sense, portraying Dear Leader as a monster might indeed be productive: we are forced to consider him the antithesis of the “normal”, the opposite of what we actually want or need. Much like the pope-ass and the monk-calf, we understand what is the other, what is not to be sought after. We can tell our children: do not be like this, you will end up with hooves as hands and varicose veins in your legs.

Ambroise Paré’s 1573 On Monsters and Marvels details 13 “causes of monsters,” including “the glory of God…God’s wrath…too great a quantity of seed…too little a quantity [of seed]” and so on. The heavily illustrated volume is, like Of two wonderful popish monsters, a warning (“women sullied by menstrual blood will conceive monsters”) but also a guidebook: here is what causes monsters…avoid these conditions and your offspring will be healthy. “Monsters are things that appear outside the course of Nature,” he writes, “(and are usually signs of some forthcoming misfortune).”

Approaching our president as monster might leave us with too many reasons to look outside of ourselves—outside the course of Nature. If we, instead, consider Donald Trump to be a human being, we might be more likely to reflect on the structural changes required to prevent his ascendancy to begin with. His is not the story of the non-normal. The disgusting and soulless decisions he has already made by this, the fifth day of his tenure, are capable of being perpetrated by someone inside the course of Nature. If we consider the critical distinction here—between monster and not (or, as Canguilhem might suggest, between monster and life)—then we must ask where one begins and the other ends. And if Trump is, in fact, a monster, is it because of his actions or because of his body?

To be sure, Indecline has proven itself to be a vile, self-promoting group of anarchists. So I can’t say I believe they spent much time considering the ethics of what amounts to petty body-shaming. Back in March, Britney Summit-Gil called out a previous Trump-focused body-shame:

The failure to see why it is toxic to critique Trump based on a presumption about his penis is a failure to see the root problems that allow for the perpetuation of genital shaming, and its often horrific consequences. If we can’t see why penis-shaming Trump is bad, how can we tackle systemic sex- and gender-based oppression?

Ensconced in the statues of Trump, The Monster, is a multitude of complex questions about body-shaming, “freak” culture, disability politics, and more—all of which warrant our attention. But in this moment where our country is falling under the leadership of fascism at its worst, these questions are violently distracting. When a man with the soul of a monster sits in the Oval Office, we must remember that he is not a figure of anyone’s imagination, he is not outside the course of Nature. He is a rapist, a criminal, a pathological liar. And now he’s President of the United States. If, as Davidson writes, “the history of monsters encodes a complicated and changing history of emotion, one that helps to reveal to us the structures and limits of the human community,” then no, this man is no monster. He must be seen as inside the limits of the human community, a lesson of what other humans are capable of. And it is from there that we must fight him: not as a fable or marvel, but as a man.

Gabi Schaffzin is a PhD student at UC San Diego. His physical prowess notwithstanding, he’d quite dutifully punch a Nazi in the face.


Until very recently, the majority of texts on the quantified self have been either short-form essays or uncritical manifestos penned by the same neoliberal technocrats whose biohacking dreams we have to thank for self-tracking’s proliferation over the past decade. Last year saw the publication of two books that take a more critical look at QS: Self-Tracking (MIT Press) by a pair of American researchers, Gina Neff and Dawn Nafus, and The Quantified Self (Polity) by Deborah Lupton, a professor in Communications at the University of Canberra in Australia. While I haven’t read Neff and Nafus’s work yet (but plan to do so in the coming months), I did just finish Lupton’s book and think it’s a great place to start for anyone beginning to research the quantified self and its associated movement.

I say that The Quantified Self is a good place to start because Lupton’s emphasis seems to have been on breadth rather than depth. With 302 entries in her bibliography against only 147 pages of body text, the author provides what amounts to an extremely thorough lit review: she cites marketing material from Apple and FitBit alongside an extensive collection of tech-focused cultural critique (there’s even a cameo from Cyborgology’s own Whitney Erin Boesel!). I found the text to be, at times, monotonous—the entire first chapter is a list of projects and products that can be classified as quantified-self related—but at others, reaffirming—“Self-tracking,” she writes on page 68, “represents the apotheosis of the neoliberal entrepreneurial citizen ideal.” Nice.

If, then, the perfect reader of The Quantified Self is an individual whose body of research on QS is still in its nascent stage, I believe Lupton risks doing a disservice on two accounts. Firstly, while she does spend a good number of pages describing “communal self-tracking” (per Lupton, “the consensual sharing of a tracker’s personal data with other people” (130)), the author rarely acknowledges that this is the default modus operandi of the quantified self. That is, collecting a critical mass of individuals’ data in order to average, normalize, compete, rank, and so on, is not only one of the tenets of the QS movement, it is also one of its most dangerous features. In Ian Hacking’s The Taming of Chance, the philosopher elucidates the normalizing power of statistics—the tendency to jettison both the deficient and the exceptional from the bell curve in order to focus on the survival of the masses (or, in this case, the largest customer base). The neoliberal QS project is nothing, then, without communal self-tracking.

Secondly, Lupton refers to “data” throughout the book without ever considering what this data is made up of. That is, while she highlights the various forms self-tracked data might take (photographs, step counts, personal textual records, etc.), we are never asked to consider what it actually is. A FitBit step, for instance, might be calibrated differently from an Apple Watch step or a Garmin step. The bits and bytes in which these kinetic movements are encoded and stored can be translated only by the proprietary software owned and protected by the corporate entities that design and produce the various self-trackers. Ignoring this quality of QS data undermines those who argue in favor of patients and other “self-loggers” gaining access to their “raw data”—what good is a count of my steps if I have no idea how those steps were actually calculated?

These qualms, I recognize, are perhaps a bit specific. And it’s important to acknowledge that this is a text about a rapidly emerging and shifting phenomenon. Personally, as I mentioned above, I found much of Lupton’s work reaffirming: as an early-career academic, it was a bit novel for me to see so many references to essays and articles already in my own bibliography. So I would definitely recommend The Quantified Self for those scholars interested in jumping into the subject matter without strong prior familiarity. Just be sure to take good notes and be ready to build your own reading list.

Gabi Schaffzin is a PhD student at UC San Diego in the Visual Arts department. He finished one full book over his winter break.