When the team here at Cyborgology first started working on The Quantified Mind, a collaboratively authored post about the increasing metrification of academic life, production, and “success”, I immediately reached out to Zach Kaiser, a close friend and collaborator. Last year, Zach produced Our Program, a short film narrated by a professor from a large research institution at which a newly implemented set of performance indicators has the full attention of the faculty.

For my post this week, then, I’d like to consider Zach an Artist in Residence at Cyborgology—someone producing and disseminating works that embody the types of cultural phenomena and theories covered on the blog (as it turns out, this is not Zach’s first film featured on Cyborgology). I suppose it’s up to him whether he’d like to include the position on his CV. In what follows, I present some of my reactions to the film and let Zach respond, hopefully raising questions that can be asked in dialogue with the ones presented at the end of The Quantified Mind. In full disclosure, I am very familiar with Zach’s scholarship and art (I’m listed as a co-author or co-artist on much of it, though not Our Program in particular), so I hope I don’t lead the witness too much here.

But first, the film:


As a classically trained designer teaching in the art department of a Research I school, Zach brings a valuable perspective here for a number of reasons: obviously his day-to-day is highly influenced by the metrification trend in academia (especially considering his pre-tenure status), but he has also worked in the commercial realm with companies and organizations enamored with the exact sort of technologically enabled quantification tools and systems (read: big data, etc.) driving the platforms through which academics’ metrics are being tallied. My first reaction to the film, then, is about the use of an object as the main visual here. After all, the film is not about a device—it’s not called Our Ticker or Our Shiny White Box With Seductive Red LEDs, it’s called Our Program; it’s about a cultural phenomenon with, for all intents and purposes, no real consumer-facing physical manifestation (beyond, perhaps, online dashboards or the like).

I’m reminded, however, of a book I recently read, Elizabeth Wilson’s Affect and Artificial Intelligence, a brief but fascinating argument for a reconceptualization of AI away from the stereotypical “cool”, emotionless field for mathematicians and computer scientists and into a significantly warmer, more emotional place. Alongside this main pitch, she also suggests that the proliferation and improvement of AI technologies will increase when all parties involved agree on the aforementioned reframing—that is, when AI is understood to be less Skynet, more PARO. One striking piece of research that stands out to me now in the context of Our Program comes in the chapter discussing ELIZA and PARRY, two conversational programs from the 1960s and early ’70s (the former a simulated psychotherapist, the latter a simulated paranoid patient). Wilson references Sherry Turkle (frequently written about on this blog) and her work on humans’ relationships to technology, but ultimately dismisses this research in favor of that of Byron Reeves and Clifford Nass, who argue that we as a species are drawn to befriend our technological devices, summed up by Wilson as our “direct affiliative inclinations for artificial objects” (95).

ZK: Considering the metrification of academia in the context of affect is something I didn’t originally conceive as part of the work, but I’m reminded here of various efforts (within the humanities) to produce better metrics that are specific to humanities disciplines as opposed to inheriting metrics systems from the “hard” sciences. This strikes me as curious when situated in relationship to PARO or Siri. Ironically, through producing more “humane” metrics, we may end up furthering the idea that humans are fundamentally computational in nature.

A drive to produce more nuanced or, in the case of the humanities, humane metrics, or to make AI more relatable is not about the metrics or the AI themselves but is, I would suggest, about what we think about ourselves as people—whether we are, or are not, at some basic level, computational. The apotheosis of such a belief is a kind of pan-computationalism, where microbes and microchips operate in glorious harmony, not unlike the proclamations made in the poem “All Watched Over by Machines of Loving Grace,” by Richard Brautigan. Such a belief also caters to a neoliberalization of all life underpinned by models of self-interested human behavior that reach back to the early days of game theory.

To me, it’s not necessarily about asking whether we want to have affinities with artificial intelligence or computational objects in general but to what degree our affinities with those things become absorbed into our own ontological space, rendering us as computational as those objects. In this way, I see a strong connection between efforts to make scholarly metrics more nuanced, sophisticated, contextual, etc., and an affinity towards a PARO over a Skynet.

GS: If you’ve read my recent posts on the value of using obvious fiction in art and design versus trying to seem “real”, then you won’t be surprised that I hope Zach will discuss how he frames his narrative. This is not a piece of marketing. He does not leave his name off of the film. Nor does he call the piece a “product tour” or “brand video” on his website. That said, it is obviously influenced by his real life experiences in academia, experiences that we recognize as very much not unique in The Quantified Mind. Why fiction then?

ZK: I was recently asked if the “parody” can keep up with “reality.” Career benchmarking in higher education in Europe (like Reappointment, Promotion, and Tenure here in the States) is increasingly metrics-focused. A European colleague once told me about his dissertation committee, which required him to prove his impact via citations before he could graduate. Universities in the UK are using platforms like Simitive Academic Solutions to address “goal setting and alignment” to produce stronger accountability and incentive systems for faculty members.

I feel as though the fiction is a way of grappling with reality. The object, this “ticker” on which the film centers, is somewhat absurd in both its form and purpose. The intent was to make more explicit the link between the kind of (dare I say) neoliberal, market-based nature of faculty metrics and the physical faculty and university themselves: a “stock-ticker” that illustrates whether or not we as faculty members should continue to receive investment from our institutions. This kind of marketization of faculty data is already happening, and is not necessarily “new,” but the kind of control it wields is shifting. The more sophisticated, contextual, and nuanced the metrics become—not just about citations or number of publications, but about everything related to faculty output (e.g., fitness data via partnerships with FitBit and smart furniture manufacturers to determine whether more fit faculty produce more “impact”, other biometric and psychometric indications to help faculty identify causes of stress that decrease productivity, weighting of metrics based on location, discipline, type of institution)—the more administrators will rely on metrics to shape decision-making processes. As long as we develop ways to demonstrate our fundamentally computational nature, the influence of metrics on academia will be a positive feedback loop, with new metrics being developed, new decisions being based on those metrics, and new metrics being developed in response to the consequences of those decisions.

Gabi Schaffzin is pursuing his PhD in Art History, Theory, and Criticism, with a concentration in art practice, at UC San Diego.

Zach Kaiser is Assistant Professor of Graphic Design and Experience Architecture in the Department of Art, Art History, and Design at Michigan State University.

The two, along with other collaborators, have been working on the Culture Industry [dot] Club, a dynamic assemblage of artist-researchers engaged with emergent media practices and deep historical and theoretical research.

When I started this series three weeks ago, my goal was to provide a review/recap of Orphan Black’s final season, tying it to issues of the body, history and philosophy of science, and the value of fiction. It turns out that last element drew me in most: I was curious about the way that Orphan Black’s creators, Graeme Manson and John Fawcett, employed their science consultant, Cosima Herter, in order to make the science in the show as “real” as possible, while still developing and producing a piece of work that was very clearly fiction. Along the way, I’ve found myself wanting to bring in other works of narrative-based fiction. I wrote about Mr. Robot, but I have drafts that include Minority Report, Black Mirror, Nathaniel Rich’s 2013 novel, Odds Against Tomorrow, Margaret Atwood’s Oryx & Crake, The Blair Witch Project, and Orson Welles’s 1938 War of the Worlds radio broadcast. Reflecting back on those drafts, I came just short of plotting these works along a matrix consisting of two axes: plausibility and believability. That is, could this not only actually happen, but would a public believe it had? In effect, I began to work out how hyperstitious to consider each of these cultural artifacts.

Deciding that in earnest would take quite a bit of effort on my part, plus some training in sociological methods, not to mention IRB approval. I am willing to argue, however, that medium and channel have a significant effect on the believability of a piece. Not terribly controversial, I know. But when Margaret Atwood writes a novel, it goes into the fiction section and we need not worry that a madman accelerationist capable of doing so has decided to play god. Why, then, does she need to begin the acknowledgements for Year of the Flood, Oryx & Crake’s follow-up, by stating that the novel “is fiction, but the general tendencies and many of the details in it are alarmingly close to fact”?

Hyperstition is a valuable concept because it helps us see that a story like Atwood’s might move out of the fiction section, coming to fruition in another form. This shift is not only promulgated by artists or authors, of course. Advertisers ask you to imagine yourself in a car that will park itself a year before the product is available on the market. Realtors on HGTV or its equivalents are thrilled to hear visitors to an open house begin to “see” their own furniture in the living room of a listed property. My concern as an artist, however, is how to utilize hyperstition in the service of affect, change, and revolution.

Earlier this year, I argued that a popular “sciency-art” is increasingly being produced for the sake of science and not so much for art. But maybe this trend is something that we can appropriate for our own purposes. As I noted in my previous post, when Dunne and Raby show Faraday Chair, it’s very clearly a piece of art meant to provoke. But what if they showed it at a furniture show? What if Wodiczko brought his Homeless Vehicle to a TED talk? Would they be selling out? Would they be perpetrating hoaxes? If the works were found out to be performance pieces, would their adoring publics feel devastated (as one tech scholar did after Horse_ebooks was revealed to not be as algorithmically curated as once thought)?

The reason these methods of presentation would not work is clearer when we return, once again, to Delphi Carstens’s explanation that hyperstitions “fan the flames” of cultural anxiety. When Wodiczko complains that viewers of his work want projects like Homeless Vehicle mass produced, he is responding to the fact that they are not made anxious by it—they feel hopeful. Charlie Brooker’s Black Mirror episodes garner affect through shock value. His challenge is to push that shock far enough without veering into campy horror or implausible absurdity. Galleries and festivals like Transmediale, SLSA, and Science Gallery have made room for this absurdity; now it’s up to us to embrace the “neatness” factor, but certainly not at the expense of affect.

Of course, in the case of Orson Welles’s radio show, the reports of the panic that resulted from the broadcast were wildly exaggerated. Jefferson Pooley and Michael J. Socolow write that a wire report was picked up by daily newspapers, hungry for something to boost readership and sow doubt in the reliability of radio as a whole, leading to the promulgation of the myth of the program’s panic-inspiring believability. On the complete flip side, 61 years later, when Artisan Entertainment released Daniel Myrick and Eduardo Sánchez’s The Blair Witch Project, the studio made it very clear in promotional material that the film was completely fictional. Still, the town of Burkittsville, where the story was purported to have taken place, was inundated with witch hunters for a significant time after the film’s successful theatrical release. The hyperstitious outcomes of a work are, perhaps, out of the hands of the artist.

But we can still give it a shot.

Gabi Schaffzin is a PhD student at UC San Diego. He has lost track of whether his work is considered real, fiction, or fake. 



Last week, I introduced some characters to my argument: Orphan Black and its writing and consulting staff, Mr. Robot and its creators, the Cybernetic Culture Research Unit and Nick Land, accelerationism, and hyperstition. Need a refresher? Find it here. Now, I’d like to take a brief detour in order to introduce another important character here: speculative design.

If you’ve been in or around the design academy in the past decade, you will no doubt have heard the word “speculative” thrown around quite a bit. Colloquially, speculative design (or design fiction, or critical design) uses methodology and mediums traditional to the design studio (that is, the division of labor, tools, prototyping, and manufacturing processes common to industrial and graphic design shops) in order to produce visual artifacts from the future. One oft-cited project is the Faraday Chair (1995), designed by Anthony Dunne and erroneously but popularly labeled as the field’s progenitor. Dunne suggested that radio waves carrying wireless communication would eventually crowd our airspace to the point that we would need a physical respite from their electromagnetic effect. As part of his PhD at the Royal College of Art, he, with the help of his collaborator Fiona Raby, installed in a gallery a large plexiglass box with an air tube and a pillow and had a model lie inside, supposedly “protected” from the invisible radiation of wifi and bluetooth messages.

Speculative design, however, is not as new as some will have you think. One of its earlier figures, Krzysztof Wodiczko, grew up in a post-war, autocratic Poland—an experience that significantly shapes his work. Trained as an industrial designer, Wodiczko began his career working for a small Polish electronics manufacturer, but quickly turned to his design education as a means to develop pieces which “could interrogate ethical and political voicelessness” (102). It was in this vein that he designed and fabricated the Personal Instrument (1969), featuring a mic’d up helmet, sound-canceling headphones, and two gloves with light sensors. As the wearer reduces how much light hits the sensors on their hands, sounds from their immediate environment are blocked out, and vice versa. About the piece, Wodiczko writes:

I was in a strange position: a designer employed by the state industrial corporation while trying to establish a critical and ironic dialogue with a real and monstrous designer—the communist state itself—who was in total control of the entire society and treated it as a single work of art or design…In the Personal Instrument, I somehow represented myself, my colleagues, and possibly many others, swimming “freely” in a world of sounds and speech, yet remaining silent. (102)

Eventually, Wodiczko labeled his flavor of design “interrogative” and in 1994 he penned a manifesto of sorts. In it, he declares that interrogative design “takes a risk, explores, articulates, and [responds] to the questionable conditions of life in today’s world, and does so in a questioning manner” (16). One of Wodiczko’s more well-known pieces, Homeless Vehicle Project (1988–89), was designed in consultation with homeless individuals in and around Philadelphia and New York City. The piece features a rolling cart, complete with aluminum can storage, sleeping cubby, and safety flag to make sure its presence is known from afar. The artist created the Homeless Vehicle

in the context of jungle capitalism, along streets and parks of New York City…[where] the petrified homeless, dying of their wounds or of malnutrition were advocating the benefits of American freedom in front of monuments of Washington and Lafayette, thus revivifying and reanimating these former dead. (xiv)

When asked about its viability as a mass-produced vehicle, Wodiczko leans on the value of a prototype:

“The minute you present a proposal, people think you must be offering a grand vision for a better future.” They can’t see a thing like the Homeless Vehicle…as the “concretisation” of a present problem, a makeshift transitional device, or an aesthetic experiment. Instead, “they think it must be designed for mass production…taking over the cities.” (Wright 1992, 272-273)

In effect, the “people” want to know if it’s real. Wodiczko argues it need not be.

I return, then, to last week’s post and Delphi Carstens on hyperstitions: “the trauma and fear engendered by their cultural ‘makeovers’…merely serve to further empower [their] basic premises and fan the flames.” Surely, seeing Faraday Chair in a gallery in 1995, complete with model and pillow and breathing tube, a viewer understands it is a piece of critical design (or is it art at this point?). Having read the plaque, our gallery visitor walks away considering the health risks of an inevitable flood of electromagnetic waves into their apartment (remember, this is well before “wifi” or “AirPort” or “Bluetooth” became household names). Walking out of the gallery and into the street, however, does a passing Homeless Vehicle engender in our viewer a sense of anxiety for the homeless it proposes to serve? Or, seeing a working prototype in use, do they feel a sense of relief that a “solution” to the homeless “problem” is available, relieving them of any obligation to do their part?

Writing this now, I feel a sense of crisis. Dunne and Raby, having popularized the field of speculative/critical design by making it a central focus of the now defunct Design Interactions program at RCA, have been busy preparing visions of future western cultures, visions bereft of refugees but complete with “Anarcho-evolutionists” and “Communo-nuclearists”. Wodiczko has been using high-powered projectors to cover monuments and buildings around the world with the recorded stories of members of marginalized communities such as women factory workers in Mexico or US war veterans. Certainly, I don’t wish to suggest that the Polish artist is any sort of martyr—he develops his work while on the Harvard Graduate School of Design faculty payroll. But I do very much want to love his methodology—one that he argues “attempts to heal the numbness that threatens the health of democratic process by pinching and disrupting it, waking it up, and inserting the voice, experiences, and presence of those others who have been silenced, alienated, and marginalized” (xiii).

Are Dunne and Raby trying to “heal” anything? They suggest that by identifying a current sociotechnical trajectory and trying to plot its future through the creation of material artifacts that present the audience with that future, they can provoke discourse on the desirability of that future. You can see, then, why our old friends, the accelerationists, would prefer this methodology—especially those on the left, who see projects such as Occupy and the Arab Spring as having failed in their very un-futuristic shortsightedness. So instead of trying to figure out who gets to sleep in the warm tents once the snow comes, I guess it’s better to forget about sleeping altogether and just lean into our four-hour work week to build systems of automation that will most certainly save us from late capital’s technocratic grip?

Where are we left then? If we envision projects to address homelessness today, we are absolved of needing to worry about it tomorrow. If, instead, we envision what homelessness looks like tomorrow, then we need not worry about today—our hyperstitious ways will materialize the answers. Our real projects are fake, so we must rely on our fictions? 

If you’ll allow me to return to television (warning: spoilers ahead, tread lightly), consider that all of the accelerationist projects and hyperstitious speculative design pieces being extruded slowly through the 3D printers and from the laser cutters of the shiny new “makerspaces” of otherwise underfunded art and design departments are not in the service of dismantling heavily capitalized corporate cloning projects or bringing down the E-Corps of meatspace. At first blush, this makes perfect sense. Firstly, we mustn’t upset the backers of said spaces. Secondly, and perhaps more obviously (absurdly?), there isn’t a heavily capitalized corporate cloning project, nor is there a Sino-American conspiracy involving multinational corporations, cryptocurrency, and a Swedish executive-cum-hacker (and if you believe there is, you must obviously sleep with a tinfoil hat on your head).

At this point, I’ve brought you with me down a bit of a rabbit hole. There are questions of scale (are we fixing capitalism or cloning?) and medium (metal, Photoshop, or television? Maybe all three?) and channel (gallery, street, or BBC America?) and more. So in my next post, let’s see if I can’t wrap this up and finally start to answer the questions I posed at the beginning of last week’s piece: when is fiction too fake, reality too real?

Gabi Schaffzin is a PhD student at UC San Diego. Anthony Dunne thought his work was all a big hoax. Krzysztof Wodiczko thought his work was too ironic. He was rejected from a PhD with the former (fortunately) and a job with the latter (disappointingly). 

At what point does a fictional tale of a present day technocapitalist advancement and the characters embroiled in its aftermath turn into a dystopia? Is there ever a clear threshold between the plausible and the absurd? And what responsibility does the artist or author have towards their audience to make clear the realism of the piece?

Spoiler Warning: tread lightly if you haven’t yet watched through season 2 of Mr. Robot and season 5 of Orphan Black but still plan to.

In Graeme Manson and John Fawcett’s Orphan Black, which recently wrapped its fifth and last season on BBC America, a young con artist discovers she exists for very complicated reasons. She is at once unique in her willingness and ability to protect her family by destroying the systems which created her, while simultaneously living as one of (at least) 274 other women who are genetically identical. Along with their science consultant, the creators and writers of Orphan Black built a world in which capitalists, religious fanatics, a wealthy madman, and scientists (though many characters would cross into more than one category) came together to circumvent ethics, legalities, and well-established scientific notions as they sought wealth, immortality, and the secrets to humankind. Good thing it was all made up.

And yet Manson and Fawcett have never shied away from revealing their reliance on Cosima Herter, the show’s science consultant. Herter, a scholar in the History of Science, Technology, and Medicine, spent her time on the show researching the science referenced, challenging writers to reconsider assumptions they’d made about, for instance, the relationship between science and religion, and generally ensuring a tenable story. Manson has said that Herter’s insights “help to inform the big picture even if it’s not overt. So those are important thematic things. We don’t want it to take over the show, but we want it to be such a part of the fabric that you can’t avoid it.” Still, what value does the show’s mostly-believable science¹ bring?

The same might be asked about Sam Esmail’s Mr. Robot, in which a relatively unbelievable apocalypse occurs in an extremely believable world. There is a very small gap between what Esmail and his writing team create and what we understand to be our current economic and technocratic situations—at least pre-5/9 hack. Esmail has said that he works hard with consultants to ensure that the technology for the show is plausible, based on non-fictitious products and events.

I’m not as interested in considering here what Orphan Black or Mr. Robot would be like if their writers didn’t ensure a strong plausibility. Instead, I want to consider what they would be like if they pushed even further into the “real”.

* * *

Loosely, the term “hyperstition” refers to the way that new ideas propagate through culture, the way that fictions have the power to shape the “real” future. The term was coined by the Cybernetic Culture Research Unit (Ccru) out of Warwick University in the mid-90s. Ccru was a highly problematic experiment in renegade academia, disbanding almost as quickly as it came together and alienating outsiders and insiders alike along the way. Perhaps the most important concept to have emerged from Ccru, however, was that of accelerationism.

Today, it is generally understood that there are two flavors of accelerationism: the original, “right-wing” version and the newer, leftist variety. The former, developed by Nick Land, one of the founders of Ccru and a philosopher oft-cited in alt-right/neo-Nazi texts, proposes to speed up the capitalist project to the point of technological singularity and ultimate efficiency. The latter, popularized in recent years by Alex Williams and Nick Srnicek, argues that full automation of labor, combined with a universal basic income, means technology will free the working class from capitalism altogether—the traditional left, they claim, will stagnate as long as projects such as Occupy are its chosen path of revolution.

Mr. Robot and Orphan Black become hyperstitious not because their individual premises have necessarily come to fruition, but because, as Delphi Carstens writes of hyperstitions in general, “the trauma and fear engendered by their cultural ‘makeovers’…merely serve to further empower [their] basic premises and fan the flames.” That is, the anxiety produced by these shows might be enough to force an audience to consider their realism. Still, the “realness” of these shows is limited by genre and medium. In order to tell the story from Sarah Manning’s or Elliot Alderson’s perspective, we as viewers must understand immediately that this is a fiction—it is not shot as a documentary or news report.

But, once again, what if they were?

In my next post, I’d like to explore what sorts of efforts are currently made by artists and designers in the name of envisioning and/or making a future. I’d also like to work through what sorts of aesthetic or programmatic decisions leave a viewer considering a piece to be real, fiction, or fake. I will use more examples of various types of art that could be or seem to be about a “truth” and hope eventually to challenge artists to play with the boundaries of when these truths are revealed.

¹ There is a minuscule element of the supernatural that helps the clones survive, but I have yet to find anyone angry enough to write about that.

In this post, I’d like to make an argument about a way to understand how the Democratic party seems to be making messaging and policy decisions. An argument like this can’t be made in a vacuum—or in 1,500 words. Nor can any one or even ten reasons be decided upon for why the leaders of a party do what they do. But I recognize a pattern in how the DNC and its leadership have acted over the past decade and I want to work that through here. So please forgive any indication that I am not a policy wonk or political analyst—I do not claim to be, nor do I wish to be either.

In my series on the history of the Quantified Self and eugenics earlier this year, I referenced the Belgian astronomer, Adolphe Quetelet, who argued that man could be measured just like the positions of planets. I didn’t have the space to explain it very well back then, but think about it like this: you and, say, 570 of your closest friends have telescopes. At the same time on the same night, you each measure the position of a certain star in the sky. You all come up with roughly the same position, but with distinct and consistent variation. Take those measurements and plot them along a chart, like this:

[Figure: a bell-curve chart tallying how many measurements fall at each position, A through E and beyond]

The number of measurements that fall at position A (14 friends got this measurement), B (21), C (41), etc. is counted and plotted. The astronomer’s error law, normal distribution, and Gaussian density function (all names for the same thing) dictate that these values will fall along the familiar bell curve. Most of the measurements (217) fell at position E, which means that your friends who got other measurements were probably wrong. So let’s say that the star you’re measuring is, in fact, at position E.
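For readers who want the formula behind all three of those names (given here in modern notation, not anything Quetelet himself wrote), the density in question is

\[
f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}},
\]

where \( \mu \) is the “true” value around which the measurements cluster (position E) and \( \sigma \) captures the spread of the observational errors.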

Now, let’s assume that instead of 571 people taking the same measurement, it’s just you, but you’re measuring the height of 571 people. Quetelet would argue (in fact, he did just this in 1842) that the heights of these people (he would call them men…because they were) would fall into the same normal distribution. And, just like position E on the above graph revealed the “real” position of the star, position E on our height graph would reveal the “real” height of a man. After compiling a good number of measurements about this man, he labeled him l’homme moyen, the average man.

Remember that this was all happening in the mid-1800s in France and Belgium, a time during which the French monarchy was in upheaval. In 1830, Charles X was forced to abdicate after the July Revolution, and so his cousin, Louis Philippe, became king. Louis Philippe (whose daughter was married to Leopold I, king of Quetelet’s home nation Belgium) operated under “a juste milieu, in an equal distance from the excesses of popular power and the abuses of royal power” (Antonetti 1994, p. 713). Quetelet, often quoting Victor Cousin and the philosopher’s ideal of moderation and compromise, was quite taken by this idea of juste milieu and equally enthusiastic about the application of the astronomer’s law as an instrument of social analysis: that there is a common type of man and that, just like the “real” position of the planet or the “real” height of man, that type is found somewhere in the middle of the bell curve. Per Ted Porter (1988), “L’homme moyen, then, was not just a mathematical abstraction, but a moral ideal” (103). Quetelet believed that income inequality could be tied to crime rates, that the rich lived longer because they did not drink as much, and that moderate men tempered their passion and helped regulate birth and death rates. By smoothing out the curves that described man, he believed, oscillations of the social body would be eliminated and an ideal existence could be achieved.

What, then, does this have to do with the Democratic Party? It is a relatively well-known history that the Dems (that is, DNC-sanctioned campaigns for legislative and executive offices) have been basing many of their decisions on a sophisticated data operation. As Daniel Kreiss described last February on this blog, starting after the failed 2004 presidential election, the DNC began to build and amass a sophisticated database of constituent and voter information. In The Audacity to Win, 2008 Obama campaign manager David Plouffe elucidates how critical projects like the DNC’s (and the campaign’s own data and media programs¹) helped the campaign understand which issues voters wanted to hear about, what geographic areas to focus on (down to the precinct level), and which ads to run when. Reportedly, the 2016 Clinton campaign leaned too heavily on its data, eschewing opportunities to campaign in what would eventually prove to be critical markets…like all of Wisconsin.

Obama won on a centrist platform of compromise, one that led to increased civil freedoms like the right to marry, but his tenure as president also saw large banks and corporations make exponential gains thanks to a largely hands-off approach to post-bailout repercussions. And while the ACA is an extremely critical step in the right direction, it is a far cry from a single-payer healthcare system. On the other hand, the Republican party has controlled the House since 2010 (and both houses of Congress since 2014), and conservative extremism has taken hold of all three branches of government after Clinton’s centrist platform could carry neither her nor her down-ticket colleagues to office. Meanwhile, in England, we’ve observed an oscillation from one extreme—Thatcherism—to the other—Corbyn-inspired Socialism. What might have been considered the “mainstream” Labour party two years ago failed miserably, running on, yes, another centrist platform—even with the help of Obama’s 2008 and 2012 strategy, data, and media team.

Francis Galton, you may remember from the first installment of my eugenics series, took Quetelet’s work and shifted it—literally. Rather than seeking to find the normal man and make him the model, the father of eugenics wanted to work against what he considered to be a “reversion to mediocrity.” So he promoted the reproduction of those on the exceptional edge of the bell curve and…gently suggested that those on the “deficient” end not reproduce. Of course, this suggestion manifested itself in forced sterilization programs that lasted well into the 1970s in the United States. The idea, of course, was that by removing the deficient and growing the exceptional, the entire curve is forced to move to the right—to the highest IQs, longest legs, fastest reaction times.

Let’s, for the sake of argument, go ahead and call the Republican party Galtonian. Sure, the AHCA, the travel ban, the removal of LGBT identity from the census, and all of the other appalling policies in place or being put in place have eugenicist characteristics. But for now, I want to argue that the Republican party has been using an edge case messaging strategy: war with the terrorists on our soil is imminent, so keep them out and arm yourselves; you might get rich, so let’s reduce the top-earners’ taxes; your marriage will be ruined by someone else’s decisions; women get abortions for fun and your daughter is next. Meanwhile, the Democrats want to reach across the aisle and find a happy medium. They want to incorporate the insurance companies’ wishes into the ACA. Bankers are people, too. We’ll never get single-payer or free college tuition or comprehensive gun control done because the “average American voter” doesn’t want it.

I don’t get to see the data that DNC or GOP operatives have. Nor do I believe one side won or lost solely on the quality or quantity of its data. I have some idea, albeit nascent, why the Democrats refuse to come down hard for social programs that are primarily beneficial for the populace over the corporations (hint: Republican candidates aren’t the only rich ones out there). But I do know that the July Monarchy of Louis Philippe only lasted 18 years, during which he survived seven assassination attempts. It’s time to push towards the other end of the bell curve—to shift the message to a polarized edge case: single-payer is the only just system, free education will lift everyone, top earners owe more to society than vice versa and should pay their share, guns do kill people. If the Democratic party wants to continue to let the data dictate the policy, they will never move beyond a juste milieu. They will point to l’homme moyen and say, “this is our target.” The problem is that the target is moving, and unless they take control, a general apathy surrounding, and rejection of, their candidates will keep it moving to the right.

¹ In the interest of full disclosure, I worked for a year at Blue State Digital, though not on the Obama or Clinton campaigns, nor does anything I write here violate any sort of non-disclosure agreement.

Gabi Schaffzin is a PhD student at UC San Diego. On this, America’s celebration of independence from the British, he wants you to know that Bernie would have won.

METATOPIA 4.0 – Algoricene (2017) by Jaime Del Val

The 23rd International Symposium on Electronic Art was held in collaboration with the 16th Festival Internacional De La Imagen in Manizales, Colombia, in mid-June 2017. The opening ceremony for the conference kicked off with a performance by the artist Jaime Del Val, entitled METATOPIA 4.0 – Algoricene (2017), described by the artist as “a nomadic, interactive and performative environment for outdoors and indoors spaces.” The artist statement goes on (and on) to explain that the piece “merges dynamic physical and digital architectures” in an effort to “def[y] prediction and control in the Big Data Era.” In actuality, Del Val stripped naked, put himself in a clear mesh tent, projected abstract shapes onto the tent, and danced to what might best be called abstract electronica (think dubstep’s “wubwubwub” without the pop).

Which part of what Del Val presented qualifies as “electronic art”? Was it the music? The projector? The use of the term “Big Data Era”, capitalized (in lieu, perhaps, of scare-quotes) in his entirely glib artist statement? I was similarly confused by Alejandro Brianza’s artist talk, “Underground Soundscapes”, in which he showed a few photos of subway systems around the world, accompanied by sound recordings from each visit. About Brianza’s work and Del Val’s, I wondered: why is this electronic art? In fact, throughout my visit to the ISEA conference and festival, I found myself asking “why” quite often.

To be sure, there were plenty of projects that were quite obviously “electronic”. Bat-bots (2015), for instance, by Daniel Miller, features a pair of bat-like sculptures, complete with echolocation measurement devices and speakers that emit what would otherwise be inaudible were you to walk by an actual bat. Self-proclaimed “sound explorer” Franck Vigroux performed a 45-minute DJ set in front of a Malevich-inspired white cross, made of “thousands of individual pixels, which explode in space according to the levels of energy of the audio”; the track sounded much the same as Del Val’s musical accompaniment. ISEA, then, had no shortage of art that is obviously “electronic” in the sense that it had to be plugged in or it used computation as a medium. Still, I could not help but wonder “why” again: why was this even made? Why subject your audience to 45 minutes of the same set of particle physics acting on a simple shape? Why reinvent bats?

ISEA is by no means unique in its ability to attract a congregation of technophilic artists or those intrigued by a mix of science and art. For more than three decades, organizations like Transmediale, Ars Electronica, and Science Gallery have grown to be major curators of “sciency art” the world over. They operate on mission statements that boast about the interactivity and broad cultural appeal of the work. They throw costly events in major cities around the world and smaller gatherings in satellite venues. Some, like Ars, give out coveted prizes for work deemed superior by a panel of (mostly male) jurors. What they lack, however, is an overt acknowledgement of the political nature of what they are doing. Yes, there is the occasional surveillance detector or VR poverty simulator, but the general excitement that these festivals, and the artists showing in them, take advantage of is a facile equation of “art + science = innovation/truth/the future”.

It seems almost anachronistic to argue for art and politics to be considered necessary partners today. In 1984, artist and critic Lucy Lippard wrote that

It is understood by now that all art is ideological and all art is used politically by the right or the left, with the conscious and unconscious assent of the artist. There is no neutral zone. Artists who remain stubbornly uninformed about the social and emotional effects of their images and their connections to other images outside the art context are most easily manipulated by the prevailing systems of distribution, interpretation, and marketing.

The conservative art critic Hilton Kramer was not so sure, arguing that statements such as “There is no neutral zone” would lead to Lionel Trilling’s “‘eventual acquiescence in tyranny’.” Fifteen years earlier, Kramer, a staunch formalist, watched in horror as Lippard and her Conceptualist peers filled galleries from MoMA to LA’s MOCA with politically charged works of art that often implicated viewers as collaborators in the art. MoMA’s 1970 show, Information, featured Vito Acconci’s Service Area, in which the artist had his postal mail forwarded to the museum. “The piece is performed (unawares),” he writes in the show catalogue, “by the postal service…and by the senders of the mail.” The museum guard becomes a “mail guard” and the artist performs the piece by going to pick up his letters. In Hans Haacke’s Poll of MoMA Visitors, the artist asked exhibition visitors to place a ballot in one of two boxes, each answering “yes” or “no” to the question, “Would the fact that Governor Rockefeller has not denounced President Nixon’s Indochina policy be a reason for you not to vote for him in November?” Haacke didn’t reveal the question until the night before the show opened. This was considered one of the artist’s first “institutional critiques”—works that sought to bring to light the questionable practices of the venue in which they were exhibited (Governor Nelson Rockefeller was brother to David, chairman of the MoMA board, and son to Abby Aldrich Rockefeller, a founder of the museum).

Kramer was unamused. In a particularly scathing review for the New York Times, he called the show “overweeningly intellectual”, making sure to question the artistic value of the work entirely (“There are more than 150 artists—or ‘artists’—from 15 countries”) before declaring the entire show “egregiously boring.” The critic, it seems, was not willing to consider the conceptual and political meaning behind the work, instead taking jabs at its—gasp!—interactive nature: “I am not sure I can give a very accurate or coherent account of what the visitor to this exhibition is invited to look at, listen to, sit down on, clamber over, go to sleep in, write on, stand in front of, read, and otherwise connect with.”

If, nearly fifty years ago, Kramer was bored because he refused to see the depth of the ideological implications in the art, I am bored because I simply cannot find it. Encontros (2017) features two iPhones, screen-to-screen, one showing a video of the brown waters of the Amazon, the other showing the black waters of the Amazon’s Rio Negro tributary. The site at which the two meet—a place of indigenous persecution and slavery since the early 1700s—is a marvel of nature, a limnological metaphor of the clash between cultures, one overpowering the other. The artist statement—signed by fifteen individuals—makes no mention of any sort of geopolitical consideration, instead opting to highlight that “the system searches for real-time information in such a way as to reflect changes in the tides and the phases of the moon.” Projects like Encontros not only could be political, they feel like they should be. This raises the question, then: do the artists (who, presumably, also write the text to accompany the piece) leave it to me to find the culturally critical element? Is the political in the eye of the beholder?

I would be more inclined to consider this possibility if not for the dearth of ideology-inviting rhetoric in the majority of the programming and literature surrounding each organization’s events. With the notable exception of Transmediale, the mission statements of the festivals in question sprinkle words like “society” and “culture” among pronouncements of the juxtaposition of “Biotechnology and genetic engineering, neurology, robotics, prosthetics and media art” (Ars Electronica) and the ignition of “creativity and discovery where science and art collide” (Science Gallery). Science Gallery, in particular, boasts of turning STEM to STEAM—a dubious cheapening of art in the name of STEM’s focus on education qua employment. In the program’s video appealing to possible funders of “the world’s first university-linked network dedicated to public engagement with science and art”, Luke O’Neill, Director of the Trinity Biomedical Science Institute, declares, “there’s no difference in my mind between an artist and a scientist—we’re all after the truth!” I beg to differ.

Welcome to the fourth and final installment of my series on the history of the Quantified Self. If you’re just joining us, be sure to review parts one, two, and three, wherein I introduced and explored a project that seeks to build a genealogical relationship between an already analogous pair: eugenics and the contemporary Quantified Self movement. The last two posts appear to have, at best, complicated, and at worst, refuted the hypothesis: critical breaks along both of the genealogies elucidated within each post seem more like chasms which make eugenics and QS difficult to connect in a meaningful way. At the root of this break seem to be the fundamental tenets underlying each movement. Eugenics, with its emphasis on hereditarily passed physical and psychological traits, precludes the possibility that outside, environmental influences may lead to changes in an individual’s bodily or mental makeup. The Quantified Self, on the other hand, is predicated on the belief that, by tracking the variables associated with one’s activities or environment, one might be able to make adjustments to achieve physical or psychological health. On the surface, then, there is an incommensurability between the two fields. However, by understanding how the technologies of the two movements work in the context of the predominant form of Foucauldian governmentality and biopower of their respective times, we may be able to bridge this chasm.

First, it is important to recognize how closely intertwined the eugenics movement was with the welfare state of early-twentieth century Europe and the United States. Per Nils Roll-Hansen in the conclusion to Eugenics and the Welfare State, in the first decade of the 1900s, a classical concept of genetics was formed in which an individual’s phenotype could be influenced not only by their genetic makeup, but by a combination of genotype and environmental and social factors. After the movement’s pioneering by conservative evolutionists such as Galton and his cohort of protégés, then, the “reform” eugenics of the 1920s and 1930s was led by scientists looking to jettison the racist reputation of their predecessors through a “renewal of the ‘social contract’ of the movement” (Roll-Hansen 260). In Scandinavia, Britain, and elsewhere in Europe, newly elected Labour governments used legislation to enact the forced sterilization of the “feebleminded” and weak in the name of the protection of both that marginalized group and the population as a whole. In England in particular, liberals used “eugenical arguments to disseminate information to the working classes on how they should behave biologically for their own benefit and that of the English ‘race’” (Hasian 115). American liberals used neo-Lamarckian ideas concerning the social influences on human traits to emphasize the importance of “race poison” studies (Hasian 128)—research that “proved” that, for example, cigarettes and alcohol had negative downstream effects on the human race (Hasian 28).

For an understanding of how this type of welfare state came to be, I turn, now, to the eighteenth century, as sovereign power shifted from individuals ruling over principalities and whoever lived inside of them to governments overseeing populations understood to live in, travel to, trade with, and war with neighboring lands. In a 1978 talk at the Collège de France, Michel Foucault outlined this shift in governance, arguing that it ushered in the birth of economies: collections of goods, people, and money that all fell under the sovereignty of a state. Critical to the management of these economies were technologies of counting and tracking—statistics, anthropometrics, and the like. Majia Nadesan, reading Foucault as well as Nikolas Rose, notes that governmentality addresses some key concepts surrounding the organization of society’s technologies, problems, and authorities; it recognizes, too, that individuals are either turned into “self-regulating agents” or marginalized as invisible or dangerous (1). In order to explain how hegemonies develop and deploy technologies to control the life of populations, Foucault developed the concept of biopower, “arguably the most pervasive form of power engendering the homologies and systemic regularities across the diverse fields of social life” (Nadesan 3).

Without question, the technologies enabling eugenics and their legislative implementation are prime examples of governmentality and biopower at work—the combination of which can be understood through Foucault’s “biopolitics”. In the biopolitical realm, knowledge of man—at once global, quantitative (i.e., concerning the population), and analytical (i.e., concerning the individual)—is exploited by loci of power to divide, categorize, and act “upon populations in order to securitize the nation” (Nadesan 25). As the nineteenth century came to a close, the negative effects of laissez-faire policies turned the tide towards a more active liberal state, one that enabled citizens to maximize their liberties. Nadesan perfectly sums up where welfare-state sponsored eugenics comes in: “the modern liberal-welfare state utilized biopolitical knowledge and expert authorities to expand its power at the level of the population…while simultaneously these forms of knowledge operated to individualize and subjectify citizens as particular kinds of subjects” (26). This occurred at the expense of the liberties of some individuals, of course, as conceptualizations of the normal and pathological were dispersed throughout the population (Nadesan 26).

As the twentieth century progressed through two World Wars and the biomedical and technological revolutions that accompanied them, psychology, anthropology, and sociology saw major shifts towards the role of the individual’s social experiences in shaping psychologies and behaviors—something exemplified in the two brief histories above. Alongside these new visions of what it means to be human, new technologies of the self (e.g., the self-help personality test, the self-experiment, psychotropics) engendered an empowered, self-governing subject of liberal democracy (Nadesan 149). These technologies of the self (Foucault’s term) ushered in a neoliberal mode of governance—one in which welfare states jettisoned responsibility for the individual. As Nadesan notes, “By stressing ‘self-care,’ the neoliberal state divulges paternalistic responsibility for its subjects but simultaneously holds its subjects responsible for self-government” (33). Enter, then, the Quantified Self: a movement predicated on the use of technologies which enable individuals not only to self-track, but to make changes in their lives—based on the data collected—towards a normative conceptualization of a good, healthy citizen. And while certainly not a prerequisite, sharing that data with others adds “value” to it by enabling comparison and competition, though at the risk of being utilized by surveillance apparatuses.

Eugenics, then, was seemingly predicated on wholesale changes to the collective while Quantified Self is based on an individual’s efforts to play their responsible part in society—for the sake of that same collective. Both utilize technologies of governmentality that depend on statistical mechanisms invented and/or made mainstream by Francis Galton. But this relationship is more than just analogous—by tracking the development of technologies of experimentation, behaviorism, psychometrics, and personality classification, we see a complex progression from the welfare state’s “one for all” approach to the neoliberal state’s reliance on self-governance. I have already noted a number of social-welfare focused programs offered by “reform” eugenicists. In hard-liner, “positive” eugenics, those deemed worthy are incentivized to reproduce—see, for example, Galton’s £5,000 wedding gift proposal, as well as Henry Fairfield Osborn’s speech to the Third International Congress on Eugenics, in which he argued for “not more but better Americans” (41). To a eugenicist—even a hard-liner—these types of programs might be considered what William Epstein calls “moral behaviorism—the use of material incentives to promote socially acceptable behavior” (183-4), in this case, reproduction for the sake of the race. The development of behaviorism into self-experimentation and incentivized self-tracking makes a great deal of sense, then, as the neoliberal emphasis on self-care no longer warranted social welfare programs. Nadesan, once again citing Rose, notes that “political authorities sought to ‘act at a distance’ upon the desires and social practices of citizens primarily through the promulgation of biopolitical knowledge, experts, and institutions that promised individual empowerment and self-actualization” (27). The classificatory power of psychometric testing under the early-twentieth century welfare state served to exclude and erase those individuals deemed worthy of institutionalization or, worse, deemed unworthy of reproduction. The same technology which enabled these tests drives the self-informing power of the daily happiness meters and mood surveys of the Quantified Self. Nadesan, this time citing Mitchell Dean, points out neoliberalism’s heavy emphasis on normalization of our social and cultural condition—a normalization centered around containment and extrication of risk; “concerns for ‘responsibility’ and ‘obligation’ outweigh freedom and rehabilitation” (35). Participating in the Quantified Self, one is under the impression that their freedom to excel will be enhanced by the adjustments made thanks to the data they have collected. Welfare states sought to normalize towards compliance through aggregate data. The neoliberal state aggregates through surveillance apparatuses for the sake of risk management. Galton’s psychometrically driven tests classified those worthy of breeding and those not. Tracing the progression of these tests along with the shift from social-welfare to neoliberal biopolitics, it is easy to recognize and understand the shift into a market based on products heavily reliant on the collection and analysis of personal data.

What is the history of the quantified self a history of? One could point to technological advances in circuitry miniaturization or in big data collection and processing. The proprietary and patented nature of the majority of QS devices precludes certain types of inquiry into their invention and proliferation. But it is not difficult to identify one of QS’s most critical underlying tenets: self-tracking for the purpose of self-improvement through the identification of behavioral and environmental variables critical to one’s physical and psychological makeup. Recognizing the importance of this premise to QS allows us to trace back through the scientific fields which have strongly influenced the QS movement—from both a consumer and product standpoint. Doing so, however, reveals a seeming incommensurability between an otherwise analogous pair: QS and eugenics. A eugenical emphasis on heredity sits in direct conflict with a self-tracker’s belief that a focus on environmental factors could change one’s life for the better—even while both are predicated on statistical analysis, both purport to improve the human stock, and both, as argued by Dale Carrico, make assertions about what a “normal” human is.

A more complicated relationship between the two is revealed upon attempting this genealogical connection. What I have outlined over the past few weeks is, I hope, only the beginning of such a project. I chose not to produce a rhetorical analysis of the visual and textual language of efficiency in both movements—from that utilized by the likes of Frederick Taylor and his eugenicist protégés, the Gilbreths, to what Christina Cogdell calls “Biological Efficiency and Streamline Design” in her work, Eugenic Design, and into a deep trove of rhetoric around efficiency utilized by market-available QS device marketers. Nor did I aim to produce an exhaustive bibliographic lineage. I did, however, seek to use the strong sense of self-experimentation in QS to work backwards towards the presence of behaviorism in early-twentieth century eugenical rhetoric. Then, moving in the opposite direction, I tracked the proliferation of Galtonian psychometrics into mid-century personality test development and eventually into the risk-management goals of the neoliberal surveillance state. I hope that what I have argued will lead to a more in-depth investigation into each step along this homological relationship. In the grander scheme, I see this project as part of a critical interrogation into the Quantified Self. By throwing into sharp relief the linkages between eugenics and QS, I seek to encourage resistance to fetishizing the latter’s technologies and their output, as well as the potential for meaningful change via those technologies.

Gabi Schaffzin is a PhD student at UC San Diego. He swore he’d never bring Foucault into his Cyborgology posts. ¯\_(ツ)_/¯. 


References

Carrico, Dale. “Two Variations of Contemporary Eugenicist Politics.” Amor Mundi, Jan. 2008, amormundi.blogspot.com/2008/01/two-variations-of-contemporary.html. Accessed 22 Mar. 2017.

Cogdell, Christina. Eugenic Design: Streamlining America in the 1930s. Philadelphia, Pa, University of Pennsylvania Press, 2010.

Epstein, William M. The Masses Are the Ruling Classes: Policy Romanticism, Democratic Populism, and American Social Welfare. New York, NY, Oxford University Press, 2017.

Foucault, Michel. “Governmentality.” The Foucault Effect Studies in Governmentality, edited by Graham Burchell et al., The University of Chicago Press, Chicago, 1991, pp. 87–104.

Hasian, Marouf Arif. The Rhetoric of Eugenics in Anglo-American Thought. Athens, University of Georgia Press, 1996.

Nadesan, Majia Holmer. Governmentality, Biopower, and Everyday Life. New York, Routledge, 2011.

Perkins, Henry Farnham, and Henry Fairfield Osborn. “Birth Selection versus Birth Control.” A Decade of Progress in Eugenics; Scientific Papers of the Third International Congress of Eugenics, Williams & Wilkins, Baltimore, 1934, pp. 29–41.

Roll-Hansen, Nils. “Conclusion: Scandinavian Eugenics in the International Context.” Eugenics and the Welfare State: Sterilization Policy in Denmark, Sweden, Norway, and in Finland, edited by Gunnar Broberg and Nils Roll-Hansen, Michigan State University Press, East Lansing, 2005, pp. 259–271.

Welcome to part three of my multi-part series on the history of the Quantified Self as a genealogical descendant of eugenics. In last week’s post, I elucidated Francis Galton’s influence on experimental psychology, arguing that it was, largely, a technological one. In an oft-cited paper from 2013, researcher Melanie Swan argues that “the idea of aggregated data from multiple…self-trackers[, who] share and work collaboratively with their data” will help make that data more valuable—be it to the individual tracking, the physician working with them, the corporation selling the device worn, or any other stakeholder (86). Little wonder, then, that the predictive power of correlation and regression holds such value for these trackers. Harvey Goldstein, in a paper tracing Galton’s contributions to psychometrics, notes that Galton was not the only late-nineteenth-century scientist to believe that genius was passed hereditarily. He was, however, one of the few to take up the task of designing a study to show genealogical causality regarding character, thanks once again to his correlation coefficient and resultant laws of regression.

Galton’s contributions to psychometrics go beyond the technological, however, and into the methodological. In what might also have served as an example of the scientist’s support for self-experimentation, Galton’s 1879 “Psychometric Experiments” features the results of a word association test performed on himself:

The plan I adopted was to suddenly display a printed word, to allow about a couple of ideas to successively present themselves, and then, by a violent mental revulsion and sudden awakening of attention, to seize upon those ideas before they had faded, and to record them exactly as they were at the moment when they were surprised and grappled with. (426)

Famously, this word association test was used by Carl Jung as he developed methods to classify his subjects into his various psychological types (Paul 82). Eventually, the tool pioneered by Galton was used to build the Myers-Briggs Type Indicator, a 93-question test which plots a test-taker’s personality along multiple axes. Interestingly, the MBTI works against what Nicholas Lemann calls “the first principle of psychometrics…that all distributions bunch up in the middle, in the familiar form of a bell curve” (91). Because the MBTI assumes that individuals are either introverts or extroverts, and so on, its resultant data would look like an inverse bell curve, with scores bunched up at either end of each axis. Though the test had been conceived of decades prior, Katherine Briggs and Isabel Briggs Myers were finally inspired to finalize the MBTI’s matrices in 1943. The test was, per its creators, intended to help people understand one another—a concern inspired by the onset of World War II, which also provided a more practical reason for its development: helping women who were replacing men in the industrial workplace find the right “fit” in their new jobs (Myers 208).
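
For the statistically curious, here is a minimal sketch in Python (synthetic numbers, not actual MBTI data) of the difference between Lemann’s “first principle” and the distribution the MBTI’s forced dichotomies imply:

```python
import numpy as np

# Synthetic illustration only -- not actual MBTI results.
rng = np.random.default_rng(0)

# Psychometrics' "first principle": a continuously measured trait
# (here, a hypothetical introversion/extraversion score) bunches up
# in the middle of its range -- the familiar bell curve.
latent = rng.normal(loc=0.0, scale=1.0, size=10_000)
counts, _ = np.histogram(latent, bins=9, range=(-3, 3))
print("continuous scores per bin:", counts)  # largest counts sit in the center bins

# An MBTI-style instrument instead forces every respondent to one pole
# or the other, so reported "types" pile up at the two ends of each axis.
types = np.where(latent < 0, "I", "E")
print({t: int((types == t).sum()) for t in ("I", "E")})
```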

Beyond influence in managerial-type personality tests, a Galtonian lineage can be found in the development of the Minnesota Multiphasic Personality Inventory. The 567-item questionnaire was built using a system derived from the nosological methodology of Emil Kraepelin, a German psychiatrist who, in 1921, published a paper arguing for “inner colonization”—what one translator suggests “as being rightly associated with the eugenics movement” (Engstrom and Weber 341). While the MMPI is perhaps the most widely used psychological personality test, it is closely followed by the Sixteen Personality Factor Questionnaire, a 187-item test developed by Raymond Cattell in the 1940s (Paul xii, xiv). The eccentric researcher developed his own language (with words like “Autia”, “Harria”, “Parmia”, and “Zeppia” all referring to different character traits) in order to describe subjects in a novel manner. Cattell’s quirkiness is perhaps not too surprising when his academic pedigree is revealed: he was recruited into psychology by the eugenicist Cyril Burt (Paul 179), who was eventually revealed to have falsified most of his data in twin studies meant to support Galtonian conceptualizations of heredity (Hattie 259). Charles Spearman, Cattell’s academic mentor, was another eugenicist, one who argued that “‘An accurate measurement of everyone’s intelligence would seem to herald the feasibility of selecting better endowed persons for admission into citizenship—and even for the right of having offspring’” (Paul 179). And while Cattell attempted, after World War II, to walk back his belief in purely hereditary personality traits, he could not resist revisiting his eugenicist ways in his 1972 A New Morality From Science (Paul 180-81).

The history of Galton and eugenics, then, can be traced into the history of personality tests. Once again, we come up against an awkward transition—this time from personality tests into the Quantified Self. Certainly, shades of Galtonian psychometrics show themselves to be present in QS technologies—that is, the treatment of statistical datasets for the purpose of correlation and prediction. Galton’s word association tests strongly influenced the MBTI, a test that, much like Quantified Self projects, seeks to help a subject make the right decisions in their life, though not through traditional Galtonian statistical tools. The MMPI and 16PFQ, meanwhile, serve psychological evaluative purposes. And while some work has been done to suggest that “mental wellness” can be improved through self-tracking (see Kelley et al., Wolf 2009), much of the self-tracking ethos is based on factors that can be adjusted in order to see a correlative change in the subject (Wolf 2009). That is, by tracking my happiness on a daily basis against the amount of coffee I drink or the places I go, I am adopting an environmental approach and declaring that my current psychological state is not set by my genealogy. A gap, then, between Galtonian personality tests and QS.
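
To make that correlational move concrete, here is a minimal sketch in Python, using invented numbers in place of a real tracker’s log; Pearson’s r, fittingly, is the Galtonian coefficient doing the work:

```python
import numpy as np

# A hypothetical two-week self-tracking log (invented numbers):
# daily cups of coffee and self-rated happiness on a 1-10 scale.
coffee    = np.array([1, 2, 3, 0, 2, 4, 1, 3, 2, 0, 1, 4, 3, 2])
happiness = np.array([5, 6, 8, 4, 6, 9, 5, 7, 6, 3, 5, 8, 7, 6])

# Pearson's correlation coefficient is what licenses the self-tracker's
# claim that an environmental variable moves a psychological one.
r = np.corrcoef(coffee, happiness)[0, 1]
print(f"r = {r:.2f}")  # close to 1 suggests (but does not prove) a link
```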

Next week, I’ll conclude the series by suggesting that this gap might be closed with the help of your friend and mine, Michel Foucault. Come back, won’t you?

Gabi Schaffzin is a PhD student at UC San Diego. He hates personality tests—of which he has had to take many, thanks to his past life—because he always ends up smack dab in the middle of whatever silly outcomes are possible. 


References

Engstrom, E. J., and M. M. Weber. “Classic Text No. 83: ‘On Uprootedness’ by Emil Kraepelin (1921).” History of Psychiatry, vol. 21, no. 3, 2010, pp. 340–350., doi:10.1177/0957154X10376890.

Galton, Francis. “Psychometric Experiments.” Brain, vol. 2, no. 2, 1879, pp. 149–162., doi:10.1093/brain/2.2.149.

Goldstein, Harvey. “Francis Galton, Measurement, Psychometrics and Social Progress.” Assessment in Education: Principles, Policy & Practice, vol. 19, no. 2, 2012, pp. 147–158., doi:10.1080/0969594x.2011.614220.

Hattie, J. “The Burt Controversy: An Essay Review of Hearnshaw’s and Joynson’s Biographies of Sir Cyril Burt.” Alberta Journal of Educational Research, vol. 37, no. 3, 1991, pp. 259–275.

Lemann, Nicholas. The Big Test: the Secret History of the American Meritocracy. New York, Farrar, Straus and Giroux, 2007.

Myers, Isabel Briggs, and Peter B. Myers. Gifts Differing: Understanding Personality Type. Mountain View, CA, Nicholas Brealey Publishing, 2010.

Paul, Annie Murphy. The Cult of Personality: How Personality Tests Are Leading Us to Miseducate Our Children, Mismanage Our Companies, and Misunderstand Ourselves. New York, Free Press, 2004.

Swan, Melanie. “The Quantified Self: Fundamental Disruption in Big Data Science and Biological Discovery.” Big Data, vol. 1, no. 2, 2013, pp. 85–99., doi:10.1089/big.2012.0002.

Wolf, Gary. “Measuring Mood – Current Research and New Ideas.” Quantified Self, 12 Feb. 2009, quantifiedself.com/2009/02/measuring-mood-current-resea/. Accessed 21 Mar. 2017.

Last week, I began an attempt at tracing a genealogical relationship between eugenics and the Quantified Self. I reviewed the history of eugenics and the ways in which statistics, anthropometrics, and psychometrics influenced the pseudoscience. This week, I’d like to begin to trace backwards from QS toward eugenics. Let me begin, as I did last week, with something quite obvious: the Quantified Self has a great deal to do with one’s self. Stating this, however, helps place QS in a historical context that will prove fruitful in the overall task at hand.

In a study published in 2014, a group of researchers from both the University of Washington and the Microsoft Corporation found that the term “self-experimentation” was prevalent among their QS-embracing subjects.

“Q-Selfers,” they write, “wanted to draw definitive conclusions from their QS practice—such as identifying correlation…or even causation” (Choe, et al. 1149). Although not performed with “scientific rigor”, this experimentation was about finding meaningful, individualized information with which to take further action (Choe, et al. 1149).

Looking back at the history of self-experimentation in the sciences—in particular, experimental and behavioral psychology—leads to a 1981 paper by Reed College professor and psychologist Allen Neuringer, entitled “Self-Experimentation: A Call for Change”. In it, Neuringer argues for a closer emphasis on the self by behaviorists:

If experimental psychologists applied the scientific method to their own lives, they would learn more of importance to everyone, and assist more in the solution of problems, than if they continue to relegate science exclusively to the study of others. The area of inquiry would be relevant to the experimenter’s ongoing life, the subject would be the experimenter, and the dependent variable some aspect of the experimenter’s behavior, overt or covert. (79)

The psychologist goes on to suggest that poets and novelists could use the method to discover what causes love and that “all members of society” will “view their lives as important” thanks to their contributions to scientific progress (93).

Neuringer’s argument is heavily influenced by the work of B. F. Skinner, the father of radical behaviorism—a subset of psychology in which the behavior of a subject (be it human or otherwise) can be “explained through the conditioning…in response to the receipt of rewards or punishments for its actions” (Gillette 114). We can see, then, the lineage of both behavioral and experimental psychology in the Quantified Self: not only do QS devices track, but many of the interfaces built into and around them embrace “gamification”. That is, beyond the watch face or pedometer display, the dashboards displaying results, the emails and alerts presented to subjects, the “competition” features, etc., all embrace what Deborah Lupton calls “the rendering of aspects of using…self-tracking as games…an important dimension of new approaches to self-tracking as part of motivation strategies” (23).

The field of experimental psychology, out of which behaviorism grew when, in 1913, John B. Watson wrote “Psychology as the Behaviorist Views It”, was not an invention of Francis Galton’s. This is not to say that Galton did not partake in experimental psychology during his eugenic research. In fact, his protégé and biographer, Karl Pearson, cites “a leading psychologist” writing in 1911: “‘Galton deserves to be called the first Englishman to publish work that was strictly what is now called Experimental Psychology, but the development of the movement academically has, I believe, in no way been influenced by him’” (213). Pearson, who included this quote in the 1924 second volume of The Life, Letters and Labours of Francis Galton, goes on to argue that American and English psychological papers are far superior to their continental counterparts thanks directly to Galton’s work on correlation in statistical datasets, though, per Ian Hacking, Pearson later notes that correlation laws may have been identified “much earlier in the Gaussian [or Normal] tradition” (187).

Here we begin to see an awkward situation in our quest to draw a line from Galton and hard-line eugenics (we will differentiate between hard-line and “reform” eugenics further on) to the Quantified Self movement. Behaviorism sits diametrically opposed to eugenics for a number of reasons. Firstly, it does not distinguish between human and animal beings—certainly a tenet to which Galton and his like would object, given their understanding that humans are the superior species and that a hierarchy of greatness exists within that species as well. Secondly, behaviorism accepts that outside, environmental influences will change the psychology of a subject. In 1971, Skinner argued that “An experimental analysis shifts the determination of behavior from autonomous man to the environment—an environment responsible both for the evolution of the species and for the repertoire acquired by each member” (214). This stands in direct conflict with the eugenical ideal that physical and psychological makeup is determined by heredity. Indeed, the eugenicist Robert Yerkes, otherwise close with Watson, wholly rejected the behaviorist’s views (Hergenhahn 400). Tracing the Quantified Self’s behaviorist and self-experimental roots, then, leaves us without a very strong connection to the ideologies driving eugenics. Still, using Pearson as a hint, there may be a better path to follow.

So come back next week and we’ll see what else we can dig up in our quest to understand a true history of the Quantified Self.

Gabi Schaffzin is a PhD student at UC San Diego. He has a very good dog named Buckingham. 


References

Choe, Eun Kyoung, et al. “Understanding Quantified-Selfers’ Practices in Collecting and Exploring Personal Data.” Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems – CHI ’14, 2014, pp. 1143–1152., doi:10.1145/2556288.2557372.

Gillette, Aaron. Eugenics and the Nature-Nurture Debate in the Twentieth Century. New York, Palgrave Macmillan, 2011.

Hacking, Ian. The Taming of Chance. Cambridge, Cambridge University Press, 1990.

Hergenhahn, B. R. An Introduction to the History of Psychology. Belmont, CA, Wadsworth, 2009.

Lupton, Deborah. The Quantified Self: a Sociology of Self-Tracking. Cambridge, UK, Polity, 2016.

Neuringer, Allen. “Self-Experimentation: A Call for Change.” Behaviorism, vol. 9, no. 1, 1981, pp. 79–94., academic.reed.edu/psychology/docs/SelfExperimentation.pdf. Accessed 19 Mar. 2017.

Pearson, Karl. The Life, Letters and Labours of Francis Galton. Characterisation, Especially by Letters. Index. Cambridge, Cambridge University Press, 1930, galton.org/pearson/index.html. Accessed 17 Mar. 2017.

In the past few months, I’ve posted about two works of long-form scholarship on the Quantified Self: Deborah Lupton’s The Quantified Self and Gina Neff and Dawn Nafus’s Self-Tracking. Nafus recently edited a volume of essays on QS (Quantified: Biosensing Technologies in Everyday Life, MIT 2016), but I’d like to take a not-so-brief break from reviewing books to address an issue that has been on my mind recently. Most texts that I read about the Quantified Self (be they traditional scholarship or more informal) refer to a 2007 meeting at the house of Kevin Kelly as the official start of the QS movement. And while, yes, the name “Quantified Self” was coined by Kelly and his colleague Gary Wolf (the former founded Wired, the latter was an editor for the magazine), the practice of self-tracking obviously goes back much further than ten years. Still, historical references to the practice often point to Sanctorius of Padua, who, per an oft-cited study by consultant Melanie Swan, “studied energy expenditure in living systems by tracking his weight versus food intake and elimination for 30 years in the 16th century.” Neff and Nafus cite Benjamin Franklin’s practice of keeping a daily record of his time use. These anecdotal histories, however, don’t give us much in terms of understanding what a history of the Quantified Self is actually a history of.

Briefly, what I would like to prove over the course of a few posts is that at the heart of QS are statistics, anthropometrics, and psychometrics. I recognize that it’s not terribly controversial to suggest that these three technologies (I hesitate to call them “fields” here because of how widely they can be applied), all developed over the course of the nineteenth century, are critical to the way that QS works. Good thing, then, that there is a second half to my argument: as I touched upon briefly in my [shameless plug alert] Theorizing the Web talk last week, these three technologies were also critical to the proliferation of eugenics, that pseudoscientific attempt at strengthening the whole of the human race by breeding out or killing off those deemed deficient.

I don’t think it’s very hard to see an analogous relationship between QS and eugenics: both movements are predicated on anthropometrics and psychometrics, comparisons against norms, and the categorization and classification of human bodies as a result of the use of statistical technologies. But an analogy only gets us so far in seeking to build a history. I don’t think we can just jump from Francis Galton’s ramblings at the turn of one century to Kevin Kelly’s at the turn of the next. So what I’m going to attempt here is a sort of Foucauldian genealogy—from what was left of eugenics after its [rightful, though perhaps not as complete as one would hope] marginalization in the 1940s through to QS and the multi-billion dollar industry the movement has inspired.

I hope you’ll stick around for the full ride—it’s going to take a number of weeks. For now, let’s start with a brief introduction to that bastion of Western exceptionalism: the eugenics movement.

Francis Galton had already been interested in heredity and statistics before he read Charles Darwin’s On the Origin of Species upon its publication in 1859. The work, written by his half-cousin, acted as a major inspiration in Galton’s thinking on the way that genius was passed through generations—so much so that Galton spent the remainder of his life working on a theory of hereditary intelligence. His first publication on the topic, “Hereditary Talent and Character” (1865), traced the genealogy of nearly 1,700 men whom he deemed worthy of accolades—a small sample of “the chief men of genius whom the world is known to have produced” (Bulmer 159)—eventually concluding that “Everywhere is the enormous power of hereditary influence forced on our attention” (Galton 1865, 163). Four years later, the essay inspired a full volume, Hereditary Genius, in which Galton utilized Adolphe Quetelet’s statistical law detailing a predictive uniformity in deviation from a normally distributed set of data points—the law of errors.

Much like Darwin’s seminal work, Quetelet’s advancements in statistics played a critical part in the development of Galton’s theories on the hereditary nature of human greatness. Quetelet, a Belgian astronomer, was taken by his predecessors’ work to normalize the variation in error that occurred when the positions of celestial bodies were measured multiple times. Around the same time—that is, in the first half of the nineteenth century—French intellectuals and bureaucrats alike had taken a cue from the Marquis de Condorcet, who had proposed a way to treat moral—or, social—inquiries in a similar manner to the way the physical sciences were approached. Quetelet, combining the moral sciences with normal distributions, began to apply statistical laws of error in distribution to the results of anthropometric measurements across large groups of people: e.g., the chest size of soldiers, the height of school boys. The result, which effectively treated the variation between individual subjects’ measurements in the same manner as a variation in a set of measurements of a single astronomical object, was l’homme type—the typical man (Hacking 111–12).
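
For reference, the “law of errors” Quetelet borrowed is what we now call the normal density; the same curve describes repeated measurements of a single star and single measurements of many soldiers:

```latex
% The law of errors (the normal density): observations x scatter around
% a true value (mu) with spread (sigma), whether x is one star measured
% many times or many chests each measured once.
f(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)
```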

In 1889, Galton wrote, “I know of scarcely anything so apt to impress the imagination as the wonderful form of cosmic order expressed by the ‘Law of Frequency of Error’” (66). Six years earlier, in Inquiries Into Human Faculty, he had declared that he was interested in topics “more or less connected with that of the cultivation of race” (17, emphasis added)—that is, in eugenics—rather than in its mere observation. Galton’s argument was rather simple, albeit vague: society should encourage the early marriage and reproduction of men of high stature. Per Michael Bulmer, “He suggested that a scheme of marks for family merit should be devised, so that ancestral qualities as well as personal qualities could be taken into account” (82). Once these scores were evaluated, the individuals with top marks would be encouraged to and rewarded for breeding; at one point, he recommended a £5,000 “wedding gift” for the top ten couples in Britain each year, accompanied by a ceremony in Westminster Abbey officiated by the Queen of England (Bulmer 82). This type of selective breeding would eventually be referred to as “positive eugenics”.

The statistical technologies developed by Quetelet and the like were utilized by Galton for more than just the evaluation of which individuals were worthy of reproduction; they also allowed for the prediction of how improvements would permeate through a population. Specifically, he argued that if a normally distributed population (measured upon whichever metric, or combination thereof, he had chosen) reproduced, it would result in another normally distributed population—that is, the bulk of the population would remain average or mediocre (Hacking 183). He called this the law of regression and understood it to severely slow the improvement of a race towards the ideal. However, if one could guarantee that those individuals at the opposite end of the bell curve—that is, the morally, physically, or psychologically deficient—were not reproducing, then an accelerated reproduction of the exceptional could take place (Bulmer 83). Thus was born “negative eugenics”.
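
Galton’s law of regression is easy to reproduce with synthetic data: give offspring only a fraction of the parental deviation plus noise, and the children of the exceptional drift back toward mediocrity while the population as a whole stays normally distributed. A rough Python sketch, with an invented parent-offspring coefficient:

```python
import numpy as np

# Synthetic sketch of Galton's law of regression (invented coefficient).
rng = np.random.default_rng(1)
r = 0.5  # hypothetical parent-offspring correlation

parents = rng.normal(0.0, 1.0, 100_000)
# Offspring inherit only a fraction of the parental deviation; the noise
# term keeps the offspring population normally distributed as well.
offspring = r * parents + rng.normal(0.0, np.sqrt(1 - r**2), 100_000)

exceptional = parents > 2.0  # the tail Galton wanted to breed from
print("mean of exceptional parents: %.2f" % parents[exceptional].mean())
print("mean of their offspring:     %.2f" % offspring[exceptional].mean())
# The offspring mean falls back toward the population average of 0 --
# regression toward mediocrity, hence Galton's case for "negative eugenics".
```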

I will revisit the proliferation of eugenics a bit later in this study, but it is important here to note that the historical trail of the active and public implementation of eugenics eventually goes cold somewhere between 1940 and 1945, depending on which country one examines. Most obviously, the rise of the Third Reich and its party platform, built primarily on eugenicist policies, had a direct effect on the decline of eugenics towards the midway point of the twentieth century. Previously enacted (and confidently defended) state policies regarding forced sterilization, from Scandinavia to the United States, were eventually struck down and remain embarrassing marks on national histories to this day (Hasian 140), though the last US law did not come off the books until the 1970s.

This is not to suggest that the scientific ethos behind the field—that one’s genetic makeup determines both physical and psychological traits—went completely out of fashion. Instead, I hope it has become obvious, even in this brief overview, that the aforementioned analogies between eugenics and QS are not difficult to draw. But how do we get from one to the other? And am I being crazy in doing so?

The second question is probably up for grabs for a little while. I’ll begin to answer the first one next week, however, when I sketch out a history of self-experimentation and behavioral psychology, moving backwards from the Quantified Self to eugenics. Come back again, won’t you?

Gabi Schaffzin is a PhD student at UC San Diego. Having just returned from the east coast, his jetlag has left him without anything witty to add. 


References

Bulmer, M. G. Francis Galton: Pioneer of Heredity and Biometry. Baltimore, Johns Hopkins University Press, 2003.

Galton, Francis. “Hereditary Talent and Character.” Macmillan’s Magazine, 1865, pp. 157–327, galton.org/essays/1860-1869/galton-1865-hereditary-talent.pdf. Accessed 17 Mar. 2017.

Galton, Francis. Natural Inheritance. New York, AMS Press, 1973 (Originally published 1889).

Hacking, Ian. The Taming of Chance. Cambridge, Cambridge University Press, 1990.

Hasian, Marouf Arif. The Rhetoric of Eugenics in Anglo-American Thought. Athens, University of Georgia Press, 1996.

Lupton, Deborah. The Quantified Self: a Sociology of Self-Tracking. Cambridge, UK, Polity, 2016.

Neff, Gina, and Dawn Nafus. Self-Tracking. Cambridge, MIT Press, 2016.