METATOPIA 4.0 – Algoricene (2017) by Jaime Del Val

The 23rd International Symposium on Electronic Art was held in collaboration with the 16th Festival Internacional De La Imagen in Manizales, Colombia, in mid-June 2017. The opening ceremony for the conference kicked off with a performance by the artist Jaime Del Val, entitled METATOPIA 4.0 – Algoricene (2017), described by the artist as “a nomadic, interactive and performative environment for outdoors and indoors spaces.” The artist statement goes on (and on) to explain that the piece “merges dynamic physical and digital architectures” in an effort to “def[y] prediction and control in the Big Data Era.” In actuality, Del Val stripped naked, put himself in a clear mesh tent, projected abstract shapes onto the tent, and danced to what might best be called abstract electronica (think dubstep’s “wubwubwub” without the pop).

Which part of what Del Val presented qualifies as “electronic art”? Was it the music? The projector? The use of the term “Big Data Era”, capitalized (in lieu, perhaps, of scare quotes) in his entirely glib artist statement? I was similarly confused by Alejandro Brianza’s artist talk, “Underground Soundscapes”, in which he showed a few photos of subway systems around the world, accompanied by sound recordings from each visit. About Brianza’s work and Del Val’s, I wondered: why is this electronic art? In fact, throughout my visit to the ISEA conference and festival, I found myself asking “why” quite often.

To be sure, there were plenty of projects that were quite obviously “electronic”. Bat-bots (2015) by Daniel Miller, for instance, features a pair of bat-like sculptures, complete with echolocation measurement devices and speakers that emit the sounds you might otherwise not hear were you to walk by an actual bat. Self-proclaimed “sound explorer” Franck Vigroux performed a 45-minute DJ set in front of a Malevich-inspired white cross made of “thousands of individual pixels, which explode in space according to the levels of energy of the audio”; the track sounded much the same as Del Val’s musical accompaniment. ISEA, then, had no shortage of art that is obviously “electronic” in the sense that it had to be plugged in or it used computation as a medium. Still, I could not help but wonder “why” again: why was this even made? Why subject your audience to 45 minutes of the same set of particle physics acting on a simple shape? Why reinvent bats?

ISEA is by no means unique in its ability to attract a congregation of technophilic artists or those intrigued by a mix of science and art. For the past three decades and beyond, organizations like Transmediale, Ars Electronica, and Science Gallery have grown to be major curators of “sciency art” the world over. They operate on mission statements that boast about the interactivity and broad cultural appeal of the work. They throw costly events in major cities around the world and smaller gatherings in satellite venues. Some, like Ars, give out coveted prizes for work deemed superior by a panel of (mostly male) jurors. What they lack, however, is an overt acknowledgement of the political nature of what they are doing. Yes, there is the occasional surveillance detector or VR poverty simulator, but the general excitement that these festivals and their artists trade on is a facile equation: “art + science = innovation/truth/the future”.

It seems almost anachronistic to argue for art and politics to be considered necessary partners today. In 1984, artist and critic Lucy Lippard wrote that

It is understood by now that all art is ideological and all art is used politically by the right or the left, with the conscious and unconscious assent of the artist. There is no neutral zone. Artists who remain stubbornly uninformed about the social and emotional effects of their images and their connections to other images outside the art context are most easily manipulated by the prevailing systems of distribution, interpretation, and marketing.

The conservative art critic Hilton Kramer was not so sure, arguing that statements such as “There is no neutral zone” would lead to Lionel Trilling’s “‘eventual acquiescence in tyranny’.” Fifteen years earlier, Kramer, a staunch formalist, had watched in horror as Lippard and her Conceptualist peers filled galleries, MoMA among them, with politically charged works of art that often implicated viewers as collaborators in the art. MoMA’s 1970 show, Information, featured Vito Acconci’s Service Area, in which the artist had his postal mail forwarded to the museum. “The piece is performed (unawares),” he writes in the show catalogue, “by the postal service…and by the senders of the mail.” The museum guard becomes a “mail guard” and the artist performs the piece by going to pick up his letters. In Hans Haacke’s Poll of MoMA Visitors, the artist asked exhibition visitors to place a ballot in one of two boxes, each answering “yes” or “no” to the question, “Would the fact that Governor Rockefeller has not denounced President Nixon’s Indochina policy be a reason for you not to vote for him in November?” Haacke didn’t reveal the question until the night before the show opened. This was considered one of the artist’s first “institutional critiques”—works that sought to bring to light the questionable practices of the venue in which they were exhibited (Governor Nelson Rockefeller was brother to David, then chairman of the MoMA board, and son to Abby Aldrich Rockefeller, a founder of the museum).

Kramer was unamused. In a particularly scathing review for the New York Times, he called the show “overweeningly intellectual”, making sure to question the artistic value of the work entirely (“There are more than 150 artists—or ‘artists’—from 15 countries”) before declaring the entire show “egregiously boring.” The critic, it seems, was not willing to consider the conceptual and political meaning behind the work, instead taking jabs at its—gasp!—interactive nature: “I am not sure I can give a very accurate or coherent account of what the visitor to this exhibition is invited to look at, listen to, sit down on, clamber over, go to sleep in, write on, stand in front of, read, and otherwise connect with.”

If, nearly fifty years ago, Kramer was bored because he refused to see the depth of the ideological implications in the art, I am bored because I simply cannot find it. Encontros (2017) features two iPhones, screen-to-screen, one showing a video of the brown waters of the Amazon, the other the black waters of the Amazon’s Rio Negro tributary. The site at which the two meet—a place of indigenous persecution and slavery since the early 1700s—is a marvel of nature, a limnological metaphor for the clash between cultures, one overpowering the other. The artist statement—signed by fifteen individuals—makes no mention of any sort of geopolitical consideration, instead opting to highlight that “the system searches for real-time information in such a way as to reflect changes in the tides and the phases of the moon.” Projects like Encontros not only could be political; they feel like they should be. This raises the question: do the artists (who, presumably, also write the text that accompanies the piece) leave it to me to find the culturally critical element? Is the political in the eye of the beholder?

I would be more inclined to consider this possibility if not for the dearth of ideology-inviting rhetoric in the majority of the programming and literature surrounding each organization’s events. With the notable exception of Transmediale, the mission statements of the festivals in question sprinkle words like “society” and “culture” among pronouncements of the juxtaposition of “Biotechnology and genetic engineering, neurology, robotics, prosthetics and media art” (Ars Electronica) and the ignition of “creativity and discovery where science and art collide” (Science Gallery). Science Gallery, in particular, boasts of turning STEM to STEAM—a dubious cheapening of art in the name of STEM’s focus on education qua employment. In the program’s video appealing to possible funders of “the world’s first university-linked network dedicated to public engagement with science and art”, Luke O’Neill, Director of the Trinity Biomedical Science Institute, declares, “there’s no difference in my mind between an artist and a scientist—we’re all after the truth!” I beg to differ.

Welcome to the fourth and final installment of my series on the history of the Quantified Self. If you’re just joining us, be sure to review parts one, two, and three, wherein I introduced and explored a project that seeks to build a genealogical relationship between an already analogous pair: eugenics and the contemporary Quantified Self movement. The last two posts appear to have, at best, complicated, and at worst, refuted the hypothesis: critical breaks along both of the genealogies elucidated within each post seem more like chasms, making eugenics and QS difficult to connect in a meaningful way. At the root of this break lie the fundamental tenets underlying each movement. Eugenics, with its emphasis on hereditarily passed physical and psychological traits, precludes the possibility that outside, environmental influences may lead to changes in an individual’s bodily or mental makeup. The Quantified Self, on the other hand, is predicated on the belief that, by tracking the variables associated with one’s activities or environment, one might be able to make adjustments to achieve physical or psychological health. On the surface, then, there is an incommensurability between the two fields. However, by understanding how the technologies of the two movements work in the context of the predominant form of Foucauldian governmentality and biopower of their respective times, we may be able to resolve this chasm.

First, it is important to recognize how closely intertwined the eugenics movement was with the welfare state of early-twentieth-century Europe and the United States. Per Nils Roll-Hansen in the conclusion to Eugenics and the Welfare State, in the first decade of the 1900s, a classical concept of genetics was formed in which an individual’s phenotype could be influenced not only by their genetic makeup but by a combination of genotype and environmental and social factors. After being pioneered by conservative evolutionists such as Galton and his cohort of protégés, then, “reform” eugenics of the 1920s and 1930s was led by scientists looking to jettison the racist reputation of their predecessors through a “renewal of the ‘social contract’ of the movement” (Roll-Hansen 260). In Scandinavia, Britain, and elsewhere in Europe, newly elected Labour governments used legislation to enact the forced sterilization of the “feebleminded” and weak in the name of the protection of both that marginalized group and the population as a whole. In England in particular, liberals used “eugenical arguments to disseminate information to the working classes on how they should behave biologically for their own benefit and that of the English ‘race’” (Hasian 115). American liberals used neo-Lamarckian ideas concerning the social influences on human traits to emphasize the importance of “race poison” studies (Hasian 128)—research that “proved” that, for example, cigarettes and alcohol had negative downstream effects on the human race (Hasian 28).

For an understanding of how this type of welfare state came to be, I turn now to the eighteenth century, as sovereign power shifted from individuals ruling over principalities and whoever lived inside them to governments overseeing populations understood to live in, travel to, trade with, and war with neighboring lands. In a 1978 lecture at the Collège de France, Michel Foucault outlined this shift in governance, arguing that it ushered in the birth of economies: collections of goods, people, and money that all fell under the sovereignty of a state. Critical to the management of these economies were technologies of counting and tracking—statistics, anthropometrics, and the like. Majia Nadesan, reading Foucault as well as Nikolas Rose, notes that governmentality addresses some key concepts surrounding the organization of society’s technologies, problems, and authorities; it recognizes, too, that individuals are both turned into “self-regulating agents” and/or marginalized as invisible or dangerous (1). In order to explain how hegemonies develop and deploy technologies to control the life of populations, Foucault developed the concept of biopower, “arguably the most pervasive form of power engendering the homologies and systemic regularities across the diverse fields of social life” (Nadesan 3).

Without question, the technologies enabling eugenics and their legislative implementation are prime examples of governmentality and biopower at work—the combination of which can be understood through Foucault’s “biopolitics”. In the biopolitical realm, knowledge of man—at once global, quantitative (i.e., concerning the population), and analytical (i.e., concerning the individual)—is exploited by loci of power to divide, categorize, and act “upon populations in order to securitize the nation” (Nadesan 25). As the nineteenth century came to a close, the negative effects of laissez-faire policies turned the tide towards a more active liberal state, one that enabled citizens to maximize their liberties. Nadesan perfectly sums up where welfare-state-sponsored eugenics comes in: “the modern liberal-welfare state utilized biopolitical knowledge and expert authorities to expand its power at the level of the population…while simultaneously these forms of knowledge operated to individualize and subjectify citizens as particular kinds of subjects” (26). This occurred at the expense of the liberties of some individuals, of course, as conceptualizations of the normal and the pathological were dispersed throughout the population (Nadesan 26).

As the twentieth century progressed through two World Wars and the biomedical and technological revolutions that accompanied them, psychology, anthropology, and sociology saw major shifts towards emphasizing the social experiences of the individual in shaping psychologies and behaviors—shifts exemplified in the two brief histories above. Alongside these new visions of what it means to be human, new technologies of the self (e.g., the self-help personality test, the self-experiment, psychotropics) engendered an empowered, self-governing subject of liberal democracy (Nadesan 149). These technologies of the self (Foucault’s term) ushered in a neoliberal mode of governance—one in which welfare states jettisoned responsibility for the individual. As Nadesan notes, “By stressing ‘self-care,’ the neoliberal state divulges paternalistic responsibility for its subjects but simultaneously holds its subjects responsible for self-government” (33). Enter, then, the Quantified Self: a movement predicated on the use of technologies which enable individuals not only to self-track, but to make changes in their lives—based on the data collected—towards a normative conceptualization of a good, healthy citizen. And while certainly not a prerequisite, sharing that data with others adds “value” to it by enabling comparison and competition, though at the risk of its being utilized by surveillance apparatuses.

Eugenics, then, was seemingly predicated on wholesale changes to the collective, while the Quantified Self is based on an individual’s efforts to play their responsible part in society—for the sake of that same collective. Both utilize technologies of governmentality that depend on statistical mechanisms invented and/or made mainstream by Francis Galton. But this relationship is more than just analogous: by tracking the development of technologies of experimentation, behaviorism, psychometrics, and personality classification, we see a complex progression from the welfare state’s “one for all” approach to the neoliberal state’s reliance on self-governance. I have already noted a number of social-welfare-focused programs offered by “reform” eugenicists. In hard-line, “positive” eugenics, those deemed worthy are incentivized to reproduce—see, for example, Galton’s £5,000 wedding gift proposal, as well as Henry Fairfield Osborn’s speech to the Third International Congress of Eugenics, in which he argued for “not more but better Americans” (41). To a eugenicist—even a hard-liner—these types of programs might be considered what William Epstein calls “moral behaviorism—the use of material incentives to promote socially acceptable behavior” (183-4), in this case, reproduction for the sake of the race. The development of behaviorism into self-experimentation and incentivized self-tracking makes a great deal of sense, then, as the neoliberal emphasis on self-care no longer warranted social welfare programs. Nadesan, once again citing Rose, notes that “political authorities sought to ‘act at a distance’ upon the desires and social practices of citizens primarily through the promulgation of biopolitical knowledge, experts, and institutions that promised individual empowerment and self-actualization” (27).

The classificatory power of psychometric testing under the early-twentieth-century welfare state served to exclude and erase those individuals deemed worthy of institutionalization or, worse, deemed unworthy of reproduction. The same technology that enabled those tests drives the self-informing power of the daily happiness meters and mood surveys of the Quantified Self. Nadesan, this time citing Mitchell Dean, points out neoliberalism’s heavy emphasis on normalization of our social and cultural condition—a normalization centered on the containment and extrication of risk, in which “concerns for ‘responsibility’ and ‘obligation’ outweigh freedom and rehabilitation” (35). Participating in the Quantified Self, one is under the impression that one’s freedom to excel will be enhanced by the adjustments made thanks to the data one has collected. Welfare states sought to normalize towards compliance through aggregate data; the neoliberal state aggregates through surveillance apparatuses for the sake of risk management. Galton’s psychometrically driven tests classified those worthy of breeding and those not. Tracing the progression of these tests alongside the shift from social-welfare to neoliberal biopolitics, it is easy to recognize and understand the shift into a market based on products heavily reliant on the collection and analysis of personal data.

What is the history of the quantified self a history of? One could point to technological advances in circuitry miniaturization or in big data collection and processing. The proprietary and patented nature of the majority of QS devices precludes certain types of inquiry into their invention and proliferation. But it is not difficult to identify one of QS’s most critical underlying tenets: self-tracking for the purpose of self-improvement through the identification of behavioral and environmental variables critical to one’s physical and psychological makeup. Recognizing the importance of this premise to QS allows us to trace back through the scientific fields which have strongly influenced the QS movement—from both a consumer and a product standpoint. Doing so, however, reveals a seeming incommensurability between an otherwise analogous pair: QS and eugenics. A eugenical emphasis on heredity sits in direct conflict with a self-tracker’s belief that a focus on environmental factors could change one’s life for the better—even while both are predicated on statistical analysis, both purport to improve the human stock, and both, as argued by Dale Carrico, make assertions about what is a “normal” human.

A more complicated relationship between the two is revealed upon attempting this genealogical connection. What I have outlined over the past few weeks is, I hope, only the beginning of such a project. I chose not to produce a rhetorical analysis of the visual and textual language of efficiency in both movements—from that utilized by the likes of Frederick Taylor and his eugenicist protégés, the Gilbreths, to what Christina Cogdell calls “Biological Efficiency and Streamline Design” in her work Eugenic Design, and on into the deep trove of efficiency rhetoric deployed by the marketers of commercially available QS devices. Nor did I aim to produce an exhaustive bibliographic lineage. I did, however, seek to use the strong sense of self-experimentation in QS to work backwards towards the presence of behaviorism in early-twentieth-century eugenical rhetoric. Then, moving in the opposite direction, I tracked the proliferation of Galtonian psychometrics into mid-century personality test development and eventually into the risk-management goals of the neoliberal surveillance state. I hope that what I have argued will lead to a more in-depth investigation into each step along this homological relationship. In the grander scheme, I see this project as part of a critical interrogation of the Quantified Self. By throwing into sharp relief the linkages between eugenics and QS, I seek to encourage resistance to fetishizing the latter’s technologies and their output, as well as the potential for meaningful change via those technologies.

Gabi Schaffzin is a PhD student at UC San Diego. He swore he’d never bring Foucault into his Cyborgology posts. ¯\_(ツ)_/¯. 


Carrico, Dale. “Two Variations of Contemporary Eugenicist Politics.” Accessed 22 Mar. 2017.

Cogdell, Christina. Eugenic Design: Streamlining America in the 1930s. Philadelphia, Pa, University of Pennsylvania Press, 2010.

Epstein, William M. The Masses Are the Ruling Classes: Policy Romanticism, Democratic Populism, and American Social Welfare. New York, NY, Oxford University Press, 2017.

Foucault, Michel. “Governmentality.” The Foucault Effect: Studies in Governmentality, edited by Graham Burchell et al., The University of Chicago Press, Chicago, 1991, pp. 87–104.

Hasian, Marouf Arif. The Rhetoric of Eugenics in Anglo-American Thought. Athens, University of Georgia Press, 1996.

Nadesan, Majia Holmer. Governmentality, Biopower, and Everyday Life. New York, Routledge, 2011.

Perkins, Henry Farnham, and Henry Fairfield Osborn. “Birth Selection versus Birth Control.” A Decade of Progress in Eugenics; Scientific Papers of the Third International Congress of Eugenics, Williams & Wilkins, Baltimore, 1934, pp. 29–41.

Roll-Hansen, Nils. “Conclusion: Scandinavian Eugenics in the International Context.” Eugenics and the Welfare State: Sterilization Policy in Denmark, Sweden, Norway, and in Finland, edited by Gunnar Broberg and Nils Roll-Hansen, Michigan State University Press, East Lansing, 2005, pp. 259–271.

Welcome to part three of my multi-part series on the history of the Quantified Self as a genealogical descendant of eugenics. In last week’s post, I elucidated Francis Galton’s influence on experimental psychology, arguing that it was, largely, a technological one. In an oft-cited paper from 2013, researcher Melanie Swan argues that “the idea of aggregated data from multiple…self-trackers[, who] share and work collaboratively with their data” will help make that data more valuable—be it to the individual tracking, the physician working with them, the corporation selling the device worn, or any other stakeholder (86). The predictive power of correlation and regression, then, is of no small value to these trackers. Harvey Goldstein, in a paper tracing Galton’s contributions to psychometrics, notes that Galton was not the only late-nineteenth-century scientist to believe that genius was passed hereditarily. He was, however, one of the few to take up the task of designing a study to show genealogical causality regarding character, thanks once again to his correlation coefficient and resultant laws of regression.

Galton’s contributions to psychometrics go beyond the technological, however, and into the methodological. In what might also have served as an example of the scientist’s support for self-experimentation, Galton’s 1879 “Psychometric Experiments” features the results of a word association test performed on himself:

The plan I adopted was to suddenly display a printed word, to allow about a couple of ideas to successively present themselves, and then, by a violent mental revulsion and sudden awakening of attention, to seize upon those ideas before they had faded, and to record them exactly as they were at the moment when they were surprised and grappled with. (426)

Famously, this word association test was used by Carl Jung as he developed methods to classify his subjects into his various psychological types (Paul 82). Eventually, this tool pioneered by Galton was used to build the Myers-Briggs Type Indicator, a 93-question test which plots a test-taker’s personality along multiple axes. Interestingly, the MBTI works against what Nicholas Lemann calls “the first principle of psychometrics…that all distributions bunch up in the middle, in the familiar form of a bell curve” (91). Because the MBTI assumes that individuals are either introverts or extroverts, and so on, the resultant data would look like an inverse bell curve, with scores bunched up at either end of each axis. Though the test had been conceived of decades prior, Katharine Briggs and Isabel Briggs Myers were finally inspired to finalize the MBTI’s matrices in 1943. The test was, per its creators, intended to help people understand one another—a concern inspired by the onset of World War II, which also provided a more practical reason for its development: helping women who were replacing men in the industrial workplace find the right “fit” in their new jobs (Myers 208).
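To picture the distributional claim (my gloss, not Lemann’s): a test built on true dichotomies would expect scores along each axis to follow something like an equal mixture of two well-separated normal distributions,

$$ f(x) = \tfrac{1}{2}\,\mathcal{N}\!\left(x;\,-m,\,\sigma^{2}\right) + \tfrac{1}{2}\,\mathcal{N}\!\left(x;\,+m,\,\sigma^{2}\right), \qquad m \gg \sigma, $$

a bimodal curve with its mass piled at the two poles and a trough in the middle, rather than the single central hump of Lemann’s “first principle”.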

Beyond influence in managerial-type personality tests, a Galtonian lineage can be found in the development of the Minnesota Multiphasic Personality Inventory. The 567-item questionnaire was built using a system derived from the nosological methodology of Emil Kraepelin, a German psychiatrist who, in 1921, published a paper arguing for “inner colonization”—a concept that one translator describes “as being rightly associated with the eugenics movement” (Engstrom and Weber 341). While the MMPI is perhaps the most widely used psychological personality test, it is closely followed by the Sixteen Personality Factor Questionnaire, a 187-item test developed by Raymond Cattell in the 1940s (Paul xii, xiv). The eccentric researcher developed his own language (with words like “Autia”, “Harria”, “Parmia”, and “Zeppia” all referring to different character traits) in order to describe subjects in a novel manner. Cattell’s quirkiness is perhaps not too surprising when his academic pedigree is revealed: he was recruited into psychology by the eugenicist Cyril Burt (Paul 179), who was eventually revealed to have falsified most of his data in twin studies meant to support Galtonian conceptualizations of heredity (Hattie 259). Charles Spearman, Cattell’s academic mentor, was another eugenicist, one who argued that “‘An accurate measurement of everyone’s intelligence would seem to herald the feasibility of selecting better endowed persons for admission into citizenship—and even for the right of having offspring’” (Paul 179). And while Cattell attempted, after World War II, to walk back his belief in purely hereditary personality traits, he could not resist revisiting his eugenicist ways in his 1972 A New Morality From Science (Paul 180-81).

The history of Galton and eugenics, then, can be traced into the history of personality tests. Once again, though, we come up against an awkward transition—this time from personality tests into the Quantified Self. Certainly, shades of Galtonian psychometrics show themselves to be present in QS technologies—that is, the treatment of statistical datasets for the purpose of correlation and prediction. Galton’s word association tests strongly influenced the MBTI, a test that, much like Quantified Self projects, seeks to help a subject make the right decisions in their life, though not through traditional Galtonian statistical tools. The MMPI and 16PFQ serve psychological evaluative purposes. And while some work has been done to suggest that “mental wellness” can be improved through self-tracking (see Kelley et al., Wolf 2009), much of the self-tracking ethos is based on factors that can be adjusted in order to see a correlative change in the subject (Wolf 2009). That is, by tracking my happiness on a daily basis against the amount of coffee I drink or the places I go, I acknowledge an environmental approach and declare that my current psychological state is not set by my heredity. A gap, then, remains between Galtonian personality tests and QS.
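To make the arithmetic of that environmental approach concrete, here is a minimal sketch, with hypothetical logs rather than data from any actual QS product, of the Galtonian tool a self-tracker implicitly reaches for: the correlation coefficient between two daily records.

```python
# A toy version of the self-tracker's inference: correlate two daily
# logs using the coefficient Galton pioneered and Pearson formalized.
coffee = [1, 3, 2, 4, 0, 2, 5]  # cups per day (hypothetical log)
mood = [4, 7, 5, 8, 3, 6, 9]    # self-rated mood, 1-10 (hypothetical log)

def pearson_r(xs, ys):
    """Covariance of the two logs, normalized by their spreads."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(round(pearson_r(coffee, mood), 2))  # 0.99 for this toy log
```

A high coefficient tells the tracker that mood moves with coffee, an environmental variable one can adjust; as with Galton’s own uses of the tool, it says nothing about causation.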

Next week, I’ll conclude the series by suggesting that this gap might be closed with the help of your friend and mine, Michel Foucault. Come back, won’t you?

Gabi Schaffzin is a PhD student at UC San Diego. He hates personality tests—of which he has had to take many, thanks to his past life—because he always ends up smack dab in the middle of whatever silly outcomes are possible. 


Engstrom, E. J., and M. M. Weber. “Classic Text No. 83: ‘On Uprootedness’ by Emil Kraepelin (1921).” History of Psychiatry, vol. 21, no. 3, 2010, pp. 340–350., doi:10.1177/0957154x10376890.

Galton, Francis. “Psychometric Experiments.” Brain, vol. 2, no. 2, 1879, pp. 149–162., doi:10.1093/brain/2.2.149.

Goldstein, Harvey. “Francis Galton, Measurement, Psychometrics and Social Progress.” Assessment in Education: Principles, Policy & Practice, vol. 19, no. 2, 2012, pp. 147–158., doi:10.1080/0969594x.2011.614220.

Hattie, John. “The Burt Controversy: An Essay Review of Hearnshaw’s and Joynson’s Biographies of Sir Cyril Burt.” Alberta Journal of Educational Research, vol. 37, no. 3, 1991, pp. 259–275.

Lemann, Nicholas. The Big Test: the Secret History of the American Meritocracy. New York, Farrar, Straus and Giroux, 2007.

Myers, Isabel Briggs, and Peter B. Myers. Gifts Differing: Understanding Personality Type. Mountain View, CA, Nicholas Brealey Publishing, 2010.

Paul, Annie Murphy. The Cult of Personality: How Personality Tests Are Leading Us to Miseducate Our Children, Mismanage Our Companies, and Misunderstand Ourselves. New York, Free Press, 2004.

Swan, Melanie. “The Quantified Self: Fundamental Disruption in Big Data Science and Biological Discovery.” Big Data, vol. 1, no. 2, 2013, pp. 85–99., doi:10.1089/big.2012.0002.

Wolf, Gary. “Measuring Mood – Current Research and New Ideas.” Quantified Self, 12 Feb. 2009, Accessed 21 Mar. 2017.

Last week, I began an attempt at tracing a genealogical relationship between eugenics and the Quantified Self. I reviewed the history of eugenics and the ways in which statistics, anthropometrics, and psychometrics influenced the pseudoscience. This week, I’d like to begin to trace backwards from QS and towards eugenics. Let me begin, as I did last week, with something quite obvious: the Quantified Self has a great deal to do with one’s self. Stating this, however, helps place QS in a historical context that will prove fruitful in the overall task at hand.

In a study published in 2014, a group of researchers from the University of Washington and the Microsoft Corporation found that the term “self-experimentation” was prevalent among their QS-embracing subjects.

“Q-Selfers,” they write, “wanted to draw definitive conclusions from their QS practice—such as identifying correlation…or even causation” (Choe, et al. 1149). Although not performed with “scientific rigor”, this experimentation was about finding meaningful, individualized information with which to take further action (Choe, et al. 1149).

Looking back at the history of self-experimentation in the sciences—in particular, experimental and behavioral psychology—leads to a 1981 paper by the Reed College professor and psychologist Allen Neuringer, entitled “Self-Experimentation: A Call for Change”. In it, Neuringer argues for a closer emphasis on the self by behaviorists:

If experimental psychologists applied the scientific method to their own lives, they would learn more of importance to everyone, and assist more in the solution of problems, than if they continue to relegate science exclusively to the study of others. The area of inquiry would be relevant to the experimenter’s ongoing life, the subject would be the experimenter, and the dependent variable some aspect of the experimenter’s behavior, overt or covert. (79)

The psychologist goes on to suggest that poets and novelists could use the method to discover what causes love and that “all members of society” will “view their lives as important” thanks to their contributions to scientific progress (93).

Neuringer’s argument is heavily influenced by the work of B. F. Skinner, the father of radical behaviorism—a subset of psychology in which the behavior of a subject (be it human or otherwise) can be “explained through the conditioning…in response to the receipt of rewards or punishments for its actions” (Gillette 114). We can see, then, a lineage of both behavioral and experimental psychology in the quantified self: not only do QS devices track, but many of the interfaces built into and around them embrace “gamification”. That is, beyond the watch face or pedometer display, the dashboards displaying results, the emails and alerts presented to subjects, the “competition” features, etc., all embrace what Deborah Lupton calls “the rendering of aspects of using…self-tracking as games…an important dimension of new approaches to self-tracking as part of motivation strategies” (23).

The field of experimental psychology—out of which behaviorism grew when, in 1913, John B. Watson wrote “Psychology as the Behaviorist Views It”—was not an invention of Francis Galton’s. This is not to say that Galton did not partake in experimental psychology during his eugenic research. In fact, his protégé and biographer, Karl Pearson, cites “a leading psychologist” writing in 1911: “‘Galton deserves to be called the first Englishman to publish work that was strictly what is now called Experimental Psychology, but the development of the movement academically has, I believe, in no way been influenced by him’” (213). Pearson, who included this quote in the 1924 second volume of The Life, Letters and Labours of Francis Galton, goes on to argue that American and English psychological papers are far superior to their continental counterparts thanks directly to Galton’s work on correlation in statistical datasets, though, per Ian Hacking, Pearson later notes that correlation laws may have been identified “much earlier in the Gaussian [or Normal] tradition” (187).

Here we begin to see an awkward situation in our quest to draw a line from Galton and hard-line eugenics (we will differentiate between hard-line and “reform” eugenics further on) to the quantified self movement. Behaviorism sits diametrically opposed to eugenics for a number of reasons. Firstly, it does not distinguish between human and animal beings—certainly a tenet to which Galton and his like would object, given their understanding that humans are the superior species, with a hierarchy of greatness existing within that species as well. Secondly, behaviorism accepts that outside, environmental influences will change the psychology of a subject. In 1971, Skinner argued that “An experimental analysis shifts the determination of behavior from autonomous man to the environment—an environment responsible both for the evolution of the species and for the repertoire acquired by each member” (214). This stands in direct conflict with the eugenical ideal that physical and psychological makeup is determined by heredity. Indeed, the eugenicist Robert Yerkes, otherwise close with Watson, wholly rejected the behaviorist’s views (Hergenhahn 400). Tracing the quantified self’s behaviorist and self-experimental roots, then, leaves us without a very strong connection to the ideologies driving eugenics. Still, using Pearson as a hint, there may be a better path to follow.

So come back next week and we’ll see what else we can dig up in our quest to understand a true history of the Quantified Self.

Gabi Schaffzin is a PhD student at UC San Diego. He has a very good dog named Buckingham. 


Choe, Eun Kyoung, et al. “Understanding Quantified-Selfers’ Practices in Collecting and Exploring Personal Data.” Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems – CHI ’14, 2014, pp. 1143–1152., doi:10.1145/2556288.2557372.

Gillette, Aaron. Eugenics and the Nature-Nurture Debate in the Twentieth Century. New York, Palgrave Macmillan, 2011.

Hacking, Ian. The Taming of Chance. Cambridge, Cambridge University Press, 1990.

Hergenhahn, B. R. An Introduction to the History of Psychology. Belmont, CA, Wadsworth, 2009.

Lupton, Deborah. The Quantified Self: a Sociology of Self-Tracking. Cambridge, UK, Polity, 2016.

Neuringer, Allen. “Self-Experimentation: A Call for Change.” Behaviorism, vol. 9, no. 1, 1981, pp. 79–94., Accessed 19 Mar. 2017.

Pearson, Karl. The Life, Letters and Labours of Francis Galton: Characterisation, Especially by Letters; Index. Cambridge UP, 1930, Accessed 17 Mar. 2017.

In the past few months, I’ve posted about two works of long-form scholarship on the Quantified Self: Deborah Lupton’s The Quantified Self and Gina Neff and Dawn Nafus’s Self-Tracking. Nafus recently edited a volume of essays on QS (Quantified: Biosensing Technologies in Everyday Life, MIT 2016), but I’d like to take a not-so-brief break from reviewing books to address an issue that has been on my mind recently. Most texts that I read about the Quantified Self (be they traditional scholarship or more informal) refer to a meeting in 2007 at the house of Kevin Kelly as the official start of the QS movement. And while, yes, the name “Quantified Self” was coined by Kelly and his colleague Gary Wolf (the former founded Wired, the latter was an editor for the magazine), the practice of self-tracking obviously goes back much further than 10 years. Still, most historical references to the practice point to Sanctorius of Padua, who, per an oft-cited study by consultant Melanie Swan, “studied energy expenditure in living systems by tracking his weight versus food intake and elimination for 30 years in the 16th century.” Neff and Nafus cite Benjamin Franklin’s practice of keeping a daily record of his time use. These anecdotal histories, however, don’t give us much in terms of understanding what a history of the Quantified Self is actually a history of.

Briefly, what I would like to prove over the course of a few posts is that at the heart of QS are statistics, anthropometrics, and psychometrics. I recognize that it’s not terribly controversial to suggest that these three technologies (I hesitate to call them “fields” here because of how widely they can be applied), all developed over the course of the nineteenth century, are critical to the way that QS works. Good thing, then, that there is a second half to my argument: as I touched upon briefly in my [shameless plug alert] Theorizing the Web talk last week, these three technologies were also critical to the proliferation of eugenics, that pseudoscientific attempt at strengthening the whole of the human race by breeding out or killing off those deemed deficient.

I don’t think it’s very hard to see an analogous relationship between QS and eugenics: both movements are predicated on anthropometrics and psychometrics, comparisons against norms, and the categorization and classification of human bodies as a result of the use of statistical technologies. But an analogy only gets us so far in seeking to build a history. I don’t think we can just jump from Francis Galton’s ramblings at the turn of one century to Kevin Kelly’s at the turn of the next. So what I’m going to attempt here is a sort of Foucauldian genealogy—from what was left of eugenics after its [rightful, though perhaps not as complete as one would hope] marginalization in the 1940s through to QS and the multi-billion dollar industry the movement has inspired.

I hope you’ll stick around for the full ride—it’s going to take a number of weeks. For now, let’s start with a brief introduction to that bastion of Western exceptionalism: the eugenics movement.

Francis Galton had already been interested in heredity and statistics before he read Charles Darwin’s On the Origin of Species upon its publication in 1859. The work, written by his half-cousin, acted as a major inspiration in Galton’s thinking on the way that genius was passed through generations—so much so that Galton spent the remainder of his life working on a theory of hereditary intelligence. His first publication on the topic, “Hereditary Talent and Character” (1865), traced the genealogy of nearly 1,700 men whom he deemed worthy of accolades—a small sample of “the chief men of genius whom the world is known to have produced” (Bulmer 159)—eventually concluding that “Everywhere is the enormous power of hereditary influence forced on our attention” (Galton 1865, 163). Four years later, the essay inspired a full volume, Hereditary Genius, in which Galton utilized Adolphe Quetelet’s statistical law detailing a predictive uniformity in deviation from a normally distributed set of data points—the law of errors.

Much like Darwin’s seminal work, Quetelet’s advancements in statistics played a critical part in the development of Galton’s theories on the hereditary nature of human greatness. Quetelet, a Belgian astronomer, was taken by his predecessors’ work to normalize the variation in error that occurred when the position of a celestial body was measured multiple times. Around the same time—that is, in the first half of the nineteenth century—French intellectuals and bureaucrats alike had taken a cue from the Marquis de Condorcet, who had proposed a way to treat moral—or, social—inquiries in a manner similar to that of the physical sciences. Quetelet, combining the moral sciences with normal distributions, began to apply statistical laws of error in distribution to the results of anthropometric measurements across large groups of people: e.g., the chest sizes of soldiers, the heights of schoolboys. The result, which effectively treated the variation between individual subjects’ measurements in the same manner as the variation in a set of measurements of a single astronomical object, was the homme type—the typical man (Hacking 111–12).
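In modern notation (my gloss; neither Quetelet nor his astronomer predecessors wrote it this way), the “law of errors” he borrowed is simply the normal density:

$$ f(x) = \frac{1}{\sigma\sqrt{2\pi}}\,\exp\!\left(-\frac{(x-\mu)^{2}}{2\sigma^{2}}\right) $$

Quetelet’s conceptual move was to reinterpret the parameters: \(\mu\), once the “true” position of a star obscured by observational error, becomes the typical man, and \(\sigma\), once the imprecision of the instrument, becomes natural variation among individuals.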

In 1889, Galton wrote, “I know of scarcely anything so apt to impress the imagination as the wonderful form of cosmic order expressed by the ‘Law of Frequency of Error’” (66). Six years earlier, in Inquiries into Human Faculty, he had declared himself interested in topics “more or less connected with that of the cultivation of race” (17, emphasis added)—that is, in eugenics rather than its mere observation. Galton’s argument was rather simple, albeit vague: society should encourage the early marriage and reproduction of men of high stature. Per Michael Bulmer, “He suggested that a scheme of marks for family merit should be devised, so that ancestral qualities as well as personal qualities could be taken into account” (82). Once these scores were evaluated, the individuals with top marks would be encouraged to and rewarded for breeding; at one point, he recommended a £5,000 “wedding gift” for the top ten couples in Britain each year, accompanied by a ceremony in Westminster Abbey officiated by the Queen of England (Bulmer 82). This type of selective breeding would eventually be referred to as “positive eugenics”.

The statistical technologies developed by Quetelet and the like were utilized by Galton for more than just the evaluation of which individuals were worthy of reproduction; they also allowed for the prediction of how improvements would permeate through a population. Specifically, he argued that if a normally distributed population (measured upon whichever metric—or combination thereof—he had chosen) reproduced, it would result in another normally distributed population—that is, the bulk of the population would remain average or mediocre (Hacking 183). He called this the law of regression and understood it to severely slow the improvement of a race towards the ideal. However, if one could guarantee that the individuals at the opposite end of the bell curve—that is, the morally, physically, or psychologically deficient—were not reproducing, then an accelerated reproduction of the exceptional could take place (Bulmer 83). Thus was born “negative eugenics”.
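Again in modern notation rather than Galton’s own, and assuming standardized, jointly normal parent and offspring traits, the law of regression says that the expected offspring value is pulled back toward the population mean:

$$ \mathbb{E}[\,Y \mid X = x\,] = \mu + \rho\,(x - \mu), \qquad 0 < \rho < 1, $$

where \(X\) is the parental trait, \(Y\) the offspring’s, and \(\rho\) their correlation. Because \(\rho < 1\), exceptional parents produce, on average, less exceptional children, which is why Galton took “positive” encouragement of the eminent to be a slow road and why preventing reproduction at the lower tail struck him as the accelerant.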

I will revisit the proliferation of eugenics a bit later in this study, but it is important to note here that the historical trail of the active and public implementation of eugenics eventually goes cold somewhere between 1940 and 1945, depending on which country one examines. Most obviously, the rise of the Third Reich—and its party platform built primarily on eugenicist policies—had a direct effect on the decline of eugenics towards the midway point of the twentieth century. Previously enacted (and confidently defended) state policies regarding forced sterilization, from Scandinavia to the United States, were eventually struck down and remain embarrassing marks on national histories to this day (Hasian 140), though the last US law did not come off the books until the 1970s.

This is not to suggest that the scientific ethos behind the field—that one’s genetic makeup determines both physical and psychological traits—went completely out of fashion. Instead, I hope it has become obvious, even in this brief overview, that the aforementioned analogies between eugenics and QS are not difficult to draw. But how do we get from one to the other? And am I being crazy in doing so?

The second question is probably up for grabs for a little while. I’ll begin to answer the first one next week, however, when I sketch out a history of self-experimentation and behavioral psychology, moving backwards from the Quantified Self to eugenics. Come back again, won’t you?

Gabi Schaffzin is a PhD student at UC San Diego. Having just returned from the east coast, his jetlag has left him without anything witty to add. 


Bulmer, M. G. Francis Galton: Pioneer of Heredity and Biometry. Baltimore, Johns Hopkins University Press, 2003.

Galton, Francis. “Hereditary Talent and Character.” Macmillan’s Magazine, vol. 12, 1865, pp. 157–166, 318–327, Accessed 17 Mar. 2017.

Galton, Francis. Natural Inheritance. New York, AMS Press, 1973 (Originally published 1889).

Hacking, Ian. The Taming of Chance. Cambridge, Cambridge University Press, 1990.

Hasian, Marouf Arif. The Rhetoric of Eugenics in Anglo-American Thought. Athens, University of Georgia Press, 1996.

Lupton, Deborah. The Quantified Self: a Sociology of Self-Tracking. Cambridge, UK, Polity, 2016.

Neff, Gina, and Dawn Nafus. Self-Tracking. Cambridge, MIT Press, 2016.

Back in January, I wrote about Deborah Lupton’s The Quantified Self, a recent publication from Polity by the University of Canberra professor of Communication. In that post I mentioned that I planned to read another book on the QS movement from MIT Press: Self-Tracking, by Gina Neff, a Communication scholar out of the University of Washington, and Dawn Nafus, an anthropologist at Intel. And so I have.

Much like Lupton’s book, Self-Tracking is best utilized as an introduction to the structures and cultural context in which the quantified self operates. The work begins with a relatively broad introduction to what the quantified self is (the authors differentiate between the lowercase quantified self, the general self-tracking industry, and the uppercase Quantified Self, the Meet-Up-ing, annual-conference-ing, ever-proselytizing community) and what practices the term encompasses. Just as in Lupton’s book, we are treated to insight from Cyborgology’s super-famous past contributor, Whitney Erin Boesel, and her “Taxonomy of types of people”. As I noted back in January, however, Lupton uses a great deal of ink giving example after example of QS devices and services; the authors of Self-Tracking sprinkle their examples throughout, which helps the book flow in a significantly more natural manner.

Neff and Nafus also narrow their focus to the health-related aspects of QS. For instance, the pair consider what sorts of problems a doctor might encounter when a patient brings in self-tracked data (spoiler: a whole bunch). In considering how this differs from Lupton’s account, I am tempted to suggest that her analysis touched on a much broader swath of the QS market—but to say so assumes there is a difference between QS devices and health-related tracking. That is, as I read Self-Tracking, I wondered whether there are any QS devices that are not health-related. What is the boundary between the body and health? How are normal bodies and healthy bodies any different? Could a QS device be marketed as something that will help you become something other than healthy?

Most of these questions are not explicitly asked by the authors of Self-Tracking. Lupton, on the other hand, does delve into more theoretical questions of what defines the self—at one point suggesting a QS-enabled prosthesis of selfhood, one that renders “self-extension possible” (70) (Neff and Nafus refer to a “prosthesis of feeling” at one point, but this is a different issue). In some respects, reading The Quantified Self and Self-Tracking together provides a reader with perhaps the right balance of depth—into the utilization of self-tracking in the service of, and complementary to, the healthcare industry—and breadth—across multiple theoretical categories of data and selfhood.

Still, one thing I don’t get from either work is the answer to the question: where did this all come from? That is, what is the history of the quantified self a history of? Both Lupton and Neff and Nafus offer anecdotal histories of Benjamin Franklin tracking his wellbeing on a small piece of paper in his pocket or the launch of the Quantified Self Meet Up in 2007. Neither, however, considers the social or cultural phenomena that led to the proliferation of behavioral modification through self-tracking. This is something I hope to write about in future posts, but for now, I want to make it clear that I am not necessarily faulting these authors for the lack of this history.

Instead, it is important to consider that both books sit in very precarious positions academically. That is, these scholars took a great risk in spending so much time and effort to publish in the long form on subject matter that changes just about weekly. Already, Neff and Nafus’s assertions about FDA regulations feel outdated under the Trump administration (note that Trump has not yet taken any direct actions regarding the FDA, but it is hard to imagine Obama-era regulations or policies staying intact throughout Trump’s time as president, however brief or extensive that may be). This is perhaps why Self-Tracking is part of MIT Press’s “Essential Knowledge Series”, which, per the publisher’s website, “offers concise, accessible overviews of compelling topics…expert syntheses of subjects ranging from the cultural and historical to the scientific and technical.” Even the physical book itself feels temporary—more like a 5”x7” pocket guide than something that belongs on library shelves for the foreseeable future. I think both of these books would be excellent reading for students just learning to question the hegemonic properties of the technologies being heralded for whatever reason their marketers choose.

For now, the search continues for more QS scholarship.

Gabi Schaffzin is a PhD student at UC San Diego in the Visual Arts department. He spent probably too much time fretting over the typography choices of the book he reviewed in this post.

The English translation of Martin Luther and Philipp Melanchthon’s 1523 Deuttung der czwo grewlichen Figuren, Bapstesels czu Rom und Munchkalbs czu Freyerbeg ijnn Meysszen funden is a 19-page pamphlet describing two monsters: a pope-ass and a monk-calf. The former, a donkey-headed biped with one hand, two hooves, and a chicken’s foot, represents, per Arnold Davidson, how “horrible that the Bishop of Rome should be the head of the Church.” The latter, a creature that brings to mind Admiral Ackbar (think “it’s a trap!”), illustrates the “frivolous prattle” of Catholic Sacraments. Davidson explains, “Both of these monsters were interpreted within the context of a polemic against the Roman church. They were prodigies, signs of God’s wrath against the Church which prophesied its imminent ruin.” Fifty-six years after the pamphlet’s original publication in German, Of two wonderful popish monsters was distributed in English.

Nearly 500 years after the original pamphlet, in August of last year, five larger-than-life statues of a naked, blonde, bloated man were affixed to the pavement in highly trafficked areas of Cleveland, San Francisco, New York City, Los Angeles, and Seattle.

The statues, made in the likeness of now-President Donald Trump, were created by a Las Vegas-based artist, Ginger, using over 300 pounds of clay and silicone, and were commissioned by the anonymous graffiti group Indecline. In an interview with the Washington Post, Ginger noted that he has “a long history of designing monsters for haunted houses and horror movies.” In fact, he explained, Indecline chose Ginger “‘because of my monster-making abilities.’”

What good are monsters? Is it productive to call our new president one? According to Georges Canguilhem, “the existence of monsters calls into question the capacity of life to teach us order…a living being with negative value…whose value is to be a counterpoint.” The opposite of life, per the philosopher, is not death; it is the monster. In this sense, portraying Dear Leader as a monster might indeed be productive: we are forced to consider him the antithesis of the “normal”, the opposite of what we actually want or need. As with the pope-ass and the monk-calf, we understand what is other, what is not to be sought after. We can tell our children: do not be like this, or you will end up with hooves as hands and varicose veins in your legs.

Ambroise Paré’s 1573 On Monsters and Marvels details 13 “causes of monsters,” including “the glory of God…God’s wrath…too great a quantity of seed…too little a quantity [of seed]” and so on. The heavily illustrated volume is, like Of two wonderful popish monsters, a warning (“women sullied by menstrual blood will conceive monsters”) but also a guidebook: here is what causes monsters…avoid these conditions and your offspring will be healthy. “Monsters are things that appear outside the course of Nature,” he writes, “(and are usually signs of some forthcoming misfortune).”

Approaching our president as monster might leave us with too many reasons to look outside of ourselves—outside the course of Nature. If we, instead, consider Donald Trump to be a human being, we might be more likely to reflect on the structural changes required to prevent his ascendancy to begin with. His story is not the non-normal. The disgusting and soulless decisions he has already made by this, the fifth day of his tenure, are capable of being perpetrated by someone inside the course of Nature. If we consider the critical distinction here—between monster and not (or, as Canguilhem might suggest, between monster and life)—then we must ask where one begins and one ends. And if Trump is, in fact, a monster, is it because of his actions or because of his body?

To be sure, Indecline has proven itself to be a vile, self-promoting group of anarchists. So I can’t say I believe they spent much time considering the ethics of what amounts to petty body-shaming. Back in March, Britney Summit-Gil called out a previous Trump-focused body-shame:

The failure to see why it is toxic to critique Trump based on a presumption about his penis is a failure to see the root problems that allow for the perpetuation of genital shaming, and its often horrific consequences. If we can’t see why penis-shaming Trump is bad, how can we tackle systemic sex- and gender-based oppression?

Ensconced in the statues of Trump, The Monster, is a multitude of complex questions about body-shaming, “freak” culture, disability politics, and more—all of which warrant our attention. But in this moment where our country is falling under the leadership of fascism at its worst, these questions are violently distracting. When a man with the soul of a monster sits in the Oval Office, we must remember that he is not a figure of anyone’s imagination, he is not outside the course of Nature. He is a rapist, a criminal, a pathological liar. And now he’s President of the United States. If, as Davidson writes, “the history of monsters encodes a complicated and changing history of emotion, one that helps to reveal to us the structures and limits of the human community,” then no, this man is no monster. He must be seen as inside the limits of the human community, a lesson of what other humans are capable of. And it is from there that we must fight him: not as a fable or marvel, but as a man.

Gabi Schaffzin is a PhD student at UC San Diego. His physical prowess notwithstanding, he’d quite dutifully punch a Nazi in the face.


Until very recently, the majority of texts on the quantified self have been either short-form essays or uncritical manifestos penned by the same neoliberal technocrats whose biohacking dreams we have to thank for self-tracking’s proliferation over the past decade. Last year saw the publication of two books that take a more critical look at QS: Self-Tracking (MIT Press) by a pair of American researchers, Gina Neff and Dawn Nafus, and The Quantified Self (Polity) by Deborah Lupton, a professor in Communications at the University of Canberra in Australia. I haven’t yet read Neff and Nafus’s work (though I plan to in the coming months), but I did just finish Lupton’s book and think it’s a great place to start for anyone beginning to research the quantified self and its associated movement.

I say that The Quantified Self is a good place to start because Lupton’s emphasis seems to have been on breadth rather than depth. With 302 entries in her bibliography for only 147 pages of body text, the author provides what amounts to an extremely thorough lit review: she cites marketing material from Apple and FitBit alongside an extensive collection of tech-focused cultural critique (there’s even a cameo from Cyborgology’s own Whitney Erin Boesel!). I found the text to be, at times, monotonous—the entire first chapter is a list of projects and products that can be classified as quantified-self-related—but at others, reaffirming—“Self-tracking,” she writes on page 68, “represents the apotheosis of the neoliberal entrepreneurial citizen ideal.” Nice.

If, then, the perfect reader of The Quantified Self is an individual whose body of research on QS is still in its nascent stage, I believe Lupton risks doing a disservice on two accounts. Firstly, while she does spend a good number of pages describing “communal self-tracking” (per Lupton, “the consensual sharing of a tracker’s personal data with other people” (130)), the author rarely acknowledges that this is the default modus operandi of the quantified self. That is, collecting a critical mass of individuals’ data in order to average, normalize, compete, rank, and so on, is not only one of the tenets of the QS movement, it is also one of its most dangerous features. In Ian Hacking’s The Taming of Chance, the philosopher elucidates the normalizing power of statistics—the tendency to jettison both the deficient and the exceptional from the bell curve in order to focus on the survival of the masses (or, in this case, the largest customer base). The neoliberal QS project is nothing, then, without communal self-tracking.
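
A toy sketch (the numbers are wholly invented) of the normalizing move Hacking describes, applied to a hypothetical communal pool of self-tracked step counts:

```python
import numpy as np

# A communal pool of daily step counts: 10,000 hypothetical users.
rng = np.random.default_rng(1)
steps = rng.normal(7000, 1500, size=10_000)

# Standardize against the group: each user becomes a z-score.
mean, std = steps.mean(), steps.std()
z = (steps - mean) / std

# The center of the bell curve defines the "normal" customer base;
# the tails (the deficient and the exceptional alike) fall out of focus.
normal = steps[np.abs(z) <= 2]
print(f"'normal' users: {normal.size} of {steps.size}")
print(f"jettisoned from the curve: {steps.size - normal.size}")
```

The individual's number only becomes meaningful once it is ranked against everyone else's, which is the point: there is no quantified self without the quantified mass.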

Secondly, Lupton refers to “data” throughout the book without ever considering what this data is made up of. That is, while she highlights the various forms self-tracked data might take (photographs, step counts, personal textual records, etc.), we are never asked to consider what it actually is. A FitBit step, for instance, might be calibrated differently from an Apple Watch step or a Garmin step. The bits and bytes in which these kinetic movements are encoded and stored can only be translated by the proprietary software owned and protected by the corporate entities that design and produce the various self-trackers. Ignoring this quality of QS data undermines those who argue in favor of patients and other “self-loggers” gaining access to their “raw data”—what good is a count of my steps if I have no idea how those steps were actually calculated?
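
To make the point concrete, consider a toy sketch (the signal and both thresholds are entirely invented, not drawn from any real device) of how two trackers might report different step counts from the exact same body movement, simply because their calibrations differ:

```python
import numpy as np

# Simulated accelerometer magnitude trace: 30 seconds of walking.
# Each step appears as a spike of varying height above a ~1 g baseline.
rng = np.random.default_rng(0)
t = np.linspace(0, 30, 3000)                      # 100 Hz sample rate
trace = 1.0 + 0.02 * rng.standard_normal(t.size)  # baseline + sensor noise
step_times = np.arange(0.5, 30, 0.6)              # a step every 0.6 s
step_heights = rng.uniform(0.3, 0.9, step_times.size)
for when, height in zip(step_times, step_heights):
    trace += height * np.exp(-((t - when) ** 2) / 0.001)

def count_steps(signal, threshold):
    """Count upward crossings of `threshold`: one per detected step."""
    above = signal > threshold
    return int(np.sum(~above[:-1] & above[1:]))

# Two hypothetical calibrations of the very same movement:
print("Device A (threshold 1.4):", count_steps(trace, 1.4))  # counts soft steps
print("Device B (threshold 1.7):", count_steps(trace, 1.7))  # misses them
```

Whatever a "raw data" export eventually contains, the step count itself is already an interpretation, and the interpreting is done behind a proprietary curtain.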

These qualms, I recognize, are perhaps a bit specific. And it’s important to acknowledge that this is a text about a rapidly emerging and shifting phenomenon. Personally, as I mentioned above, I found much of Lupton’s work affirming: as an early-career academic, it was a bit new for me to see so many references to essays and articles already in my own bibliography. So I would definitely recommend The Quantified Self to those scholars interested in jumping into the subject matter without a strong familiarity with it. Just be sure to take good notes and be ready to build your own reading list.

Gabi Schaffzin is a PhD student at UC San Diego in the Visual Arts department. He finished one full book over his winter break.


Over at The New Inquiry, an excellent piece by Trevor Paglen about machine-readable imagery was recently posted. In “Invisible Images (Your Pictures Are Looking at You)”, Paglen highlights how the algorithmically driven breakdown of photo content is a phenomenon that comes along with digital images. When an image is made of machine-generated pixels rather than chemically-generated gradations, machines can read those pixels, regardless of a human’s ability to do so. With film, machines could not read pre-developed exposures. With bits and bytes, machines have access to image content as soon as it is stored. The scale and speed enabled by this phenomenon, argues Paglen, carry major market- and police-based implications.
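
Paglen's premise is easy to demonstrate: the moment a photo is stored as pixels, any script can begin "reading" it without ever displaying it. A minimal sketch using the Pillow and NumPy libraries (the filename is a placeholder):

```python
from PIL import Image
import numpy as np

# To a machine, a stored digital photo is just an array of numbers,
# readable the moment it hits the disk.
pixels = np.asarray(Image.open("photo.jpg").convert("RGB"))  # placeholder path

height, width, _ = pixels.shape
print(f"{width}x{height} image, {pixels.size} machine-readable values")

# Trivial "analyses" a machine can run with no human in the loop:
print("mean brightness:", pixels.mean())
print("dominant channel:", "RGB"[pixels.mean(axis=(0, 1)).argmax()])
```

The real systems Paglen describes run far richer analyses than these, of course, but the door they walk through is the same: the image is born legible to software.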

Overall, I really enjoyed the essay—Paglen does an excellent job of highlighting how systems that take advantage of machine-readable photographs work, as well as outlining the day-to-day implications of the widespread use of these systems. There is room, however, for some historical context surrounding systematic photographic analysis and what it has meant for the unsuspecting public.

Specifically, I’d like to point to Allan Sekula’s landmark 1986 essay, “The Body and the Archive”, as a way to understand the socio-political history of a data-based understanding of photography. In it, Sekula argues that photographic archives are important centers of power. He uses Alphonse Bertillon and Francis Galton as perfect examples of such: the former devised the anthropometric identification system (and the standardized mug shot) adopted by police forces, the latter is the father of eugenics and—most relevant to Sekula—the inventor of composite portraiture.

So when Paglen notes that “all computer vision systems produce mathematical abstractions from the images they’re analyzing, and the qualities of those abstractions are guided by the kind of metadata the algorithm is trying to read,” I can’t help but think about the projects by Bertillon and Galton. These two researchers believed that mathematical abstraction would provide a truth—one from the aggregation of a mass of individual metrics, the other from a composition of the same, but in photographic form.

Certainly, Paglen has read Sekula’s piece—the New Inquiry essay often references “visual culture of the past” or “classical visual culture” and “The Body and the Archive” played a major part in the development of visual culture studies. And it’s important to note that my goal in referencing the 1986 piece is not to dismiss Paglen’s concerns as “nothing new.” Rather, I think it’s important to consider the “not-new-ness” of the socio-political implications of these image-reading systems (see: 19th century scientists trying to determine the “average criminal face”) alongside the increased speed and “accuracy” of the technology. That is, this is something humans have been trying to do for hundreds of years, but now it is more widely integrated into our day-to-day.

At the end of his essay, Paglen offers a few calls to action:

To mediate against the optimizations and predations of a machinic landscape, one must create deliberate inefficiencies and spheres of life removed from market and political predations–“safe houses” in the invisible digital sphere. It is in inefficiency, experimentation, self-expression, and often law-breaking that freedom and political self-representation can be found.

I really like these suggestions, though I’d offer one more: re-creation. That is, what if we asked our students to recreate the type of abstracting experiments performed by the likes of Galton and Bertillon, but using today’s technology? Better yet, what if we asked them to recreate today’s machine-reading systems using 19th century tools? This sort of historical-fictive practice doesn’t require students’ experiments to “work”, per se. Rather, it asks them to consider the steps taken and decisions made along the way. The whys and hows and wheres. In taking on this task, students might more concretely grasp the subjectivity inherent in our present-day systems by calling out the individual decisions that need to be made during their development. We might illustrate possible motives behind projects like Google DeepDream or Facebook’s DeepFace.
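
As a hint of what the first exercise might involve, here is a minimal sketch of a Galton-style composite built with today's tools. It assumes a hypothetical folder of roughly aligned portrait photos; the decisions baked into it (grayscale, a fixed resolution, a simple mean) are precisely the kind students would have to notice and justify:

```python
from pathlib import Path
from PIL import Image
import numpy as np

# Galton's composite portraiture, redone digitally: average many faces
# into one "type". Assumes ./faces/ holds roughly aligned portraits.
SIZE = (256, 256)

stack = [
    np.asarray(Image.open(p).convert("L").resize(SIZE), dtype=np.float64)
    for p in sorted(Path("faces").glob("*.jpg"))
]

# Every choice here -- grayscale, resolution, a simple pixelwise mean --
# stands in for a decision Galton made optically (exposure times,
# registration marks, which faces counted as belonging to a "type").
composite = np.mean(stack, axis=0).astype(np.uint8)
Image.fromarray(composite).save("composite.png")
print(f"averaged {len(stack)} faces into composite.png")
```

Even in a dozen lines, the subjectivity is hard to miss: who selected the faces, who decided they were comparable, and who gets to name what the blurry average "shows"?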

Embedded within our new algorithmic watchmen is a plethora of stakeholders and the things they want or need. Paglen, unfortunately, doesn’t do a very good job reminding us of this (he paints a picture, so to speak, of machines reading machines, but forgets that said machines must be programmed by humans at some point). And I’d be curious to know what he had in mind when he refers to “safe houses” without “market or political predations” (as a colleague recently reminded me, even the Tor project can thank the US government for its existence).

To conclude, I’d like to highlight an important project by the artist Zach Blas, Facial Weaponization Suite (2011-2014). The piece is meant as a protest against facial recognition software in consumer-level devices, corporate and governmental security systems, and research efforts. “One mask,” writes Blas, “the Fag Face Mask, generated from the biometric facial data of many queer men’s faces, is a response to scientific studies that link determining sexual orientation through rapid facial recognition techniques.” Blas uses composite 3D scans of faces to build masks that confuse facial recognition systems.

Facial Weaponization Suite by Zach Blas

This project is important here for two reasons: firstly, it’s an example of exactly the kind of thing Paglen says won’t work (“In the long run, developing visual strategies to defeat machine vision algorithms is a losing strategy,” he writes). But that’s only true if you see Facial Weaponization Suite as simply a means to confuse the software. On the other hand, if you recognize the performative nature of the work—individuals walking around in public wearing bright pink masks of amorphous blobs—you quickly understand that the piece can also confuse humans, i.e., bystanders, hopefully bringing an awareness of these machinic systems to the fore.

Wearing the masks in public, however, can be a violation of some state penal codes, which brings me to my second point. Understanding the technology here is not enough. Rather, the technology must be studied in a way that incorporates multiple disciplines: history, of course, but also law, biomedicine, communication hierarchies and infrastructure, and so on.

To be clear, I see Paglen’s essay as an excellent starting point. It begins to bring to our attention what makes our machine-readable world particularly dangerous without tripping any apocalyptic warning sirens. Let’s see if we can’t take it a step further, however, by taking a few steps back.

Gabi Schaffzin is a PhD student in Visual Arts, Art Practice at UC San Diego. He wears his sunglasses at night.


23andMe Co-Founder Anne Wojcicki, by Thomas Hawk on Flickr

Anne Wojcicki thinks it’s “incredibly meaningful” to honor scientists who are “purists” who “love what they do” and have “never looked for any kind of celebrity.” So she and a slew of other Silicon Valley technocrats gathered to recognize these altruistic innovators at the NASA Ames Research Center in Mountain View last week by giving them a spotlight on primetime network television and also $3 million each. At the event, called the Breakthrough Prize ceremony, the 23andMe CEO sat down with a reporter from Bloomberg to discuss the award, which, per her interviewer, should “empower scientists just like technologists are empowered in Silicon Valley.”

It is most likely wishful thinking to presume that the curriculum for a Yale bachelor of science in molecular biology—of which Wojcicki is a recipient—would include the likes of Ludwik Fleck or Bruno Latour. The former, a physician and biologist, was the author of Genesis and Development of a Scientific Fact, originally published in German in 1935, though not translated into English until 1979. In it, Fleck tracks the history of research around syphilis, eventually outlining the concept of a “thought-collective”, a way to consider the social act of cognition—that is, how an idea changes and is passed down through history, from and to different individuals and circles. Syphilis as it was first known at the end of the 15th century, argues Fleck, was not the same syphilis that was cured nearly 500 years later. Latour, whose breakthrough work, Laboratory Life: The Construction of Scientific Facts (cowritten with Steve Woolgar), was published the same year as the English translation of Genesis and Development, is most famous for enacting a sociology of science based on ethnography. He and Woolgar spent time in a laboratory watching how science is made—from discussions regarding funding and publishing to actual work at lab benches.

Reading Fleck and Latour helps us realize that celebrating the individual is counter to how science works. Then again, to argue that the Breakthrough Prize should be more focused on the collective, or that we should jettison the fantasy of a mad scientist isolated in a lab somewhere, is to pretend that the Nobel Prize and the MacArthur Genius Grant are not two of the highest honors bestowed in the field. But I have no interest in further critiquing this silly award show (which you can catch on Fox this Sunday night at 8/7c!). Instead, I think it’s worth paying close attention to what individuals like Wojcicki are saying and doing when it comes to how they see science in action—a science they want us to believe is hindered by any attempt to critique it through social and political lenses. One that is revolutionary in its own right, performed for the sake of truth, regardless of ulterior, capitalist motives.

During the same Bloomberg interview, when asked for her thoughts on the impending Trump administration, Wojcicki offered that “I’m a wait and see [kind of person]. I want to be able to judge once things are happening.” This was December 4, 2016—26 days after Trump was elected and started building his cabinet. Nine hundred and fifty-six days after he tweeted that there are “many such cases” of vaccines causing “AUTISM”. One thousand two hundred and eighty-seven days since he argued that “Fracking poses ZERO health risks.” And 1,463 days after he declared that “The concept of global warming was created by and for the Chinese in order to make U.S. manufacturing non-competitive.”1 What, exactly, is Wojcicki waiting for?

According to the Silicon Valley executive, she’s waiting to find out who is going to determine the rules that govern her business: the heads of the Food and Drug Administration and Health and Human Services. She notes that she is glad to have found out who will run HHS, though she doesn’t offer her opinions on the nomination of Representative Tom Price (R-Ga.)—a man whose career has been marked by, per The Huffington Post, “a constant…hostility to government interference with the practice of medicine.” Instead, she declares that she is “excited about the idea of potentially more freedoms.” Freedoms, one assumes, to go back to doing what made her company famous to begin with: using a customer’s DNA to provide them with their probability of getting sick. 23andMe was ordered to stop doing exactly that when, in a November 2013 letter, the FDA declared that “Most of the intended uses for [23andMe results]…have not been classified and thus require premarket approval or de novo classification.” Simply put, the FDA didn’t think it was appropriate for a company to tell its customers things that a doctor should be saying. This is, of course, the same FDA that is reportedly set to be run by Jim O’Neill, noted venture capitalist, libertarian, and Peter Thiel colleague.

It’s worth pointing out here that, per the FEC database, Wojcicki has given about a quarter-million dollars’ worth of donations to Democratic Party candidates and committees over the past couple of years. This is a critical point, not because I think recognizing her support for the Clinton campaign and others is any sort of saving grace. Rather, we have to realize that the kind of rhetoric used here by Wojcicki and others—about empowering “scientists just like technologists” or believing that “with things like the Breakthrough Prize…it doesn’t matter what the government is saying as much”—is not partisan. In an interview a week earlier, she argued that an education system “decentralized down to the individual” will empower our next generation of scientists.

This is someone in charge of a private company collecting and storing over a million individuals’ DNA data. And while she notes that the company does not sell that data to large biotech and pharmaceutical companies, it charges quite a premium to engage in “research projects” with those companies, eventually sharing “anonymized” records with them. Combine this with the Breakthrough Prize and 23andMe becomes the gatekeeper and funder for research (not to mention the supplier—their recent “Genotyping Services for Research” offering lets universities and other labs purchase kits for study participants, effectively outsourcing their genotyping capabilities to Mountain View).

When Jonas Salk—whose institute was the subject of Latour’s Laboratory Life—championed the development and reproduction of a polio vaccine, he didn’t patent it. That’s not to say, however, that he and his fellow researchers weren’t properly funded (though it’s worth noting he never won the Nobel Prize). Instead, money came pouring in from donations collected by an organization called the National Foundation for Infantile Paralysis, founded by FDR and eventually renamed the March of Dimes due to the small donations it received from citizens. To suggest that today’s scientists use Salk as some sort of altruistic model is naive and not at all the goal of this blog post. But what are we left with when education and research and science are all “decentralized down to the individual”? This is a dangerously ahistorical and anti-communal approach to science. What sort of rights or powers do we give up when we acquiesce to a system of research based on market-values and, as one Forbes contributor suggests we do, buy into a system that “gives real scientists more celebrity treatment through awards shows, television, movies, advertisements and other means”? What happens when we treat science like a business, government like a menace, and the individual as the only way forward?

1. I won’t link directly to Trump’s tweets, but for sources on my quotes, please see this piece from Scientific American.

Gabi Schaffzin is not a scientist, though he once played the Wizard of Oz in a fifth grade production.