A few weeks ago, Apple and Facebook announced a new benefit for women employees: if you are a woman with functioning ovaries and would like to freeze your eggs to hold off having biological children while you prioritize your job, the company will pay for the procedure. (I know there are a lot of qualifiers in that sentence, but they are important qualifiers that often get overlooked: not all women have ovaries, not all children are biologically related to their parents, and so on.)

Jenny wrote about this when the news first broke. There, she argued:

All reproductive technologies carry politics of gender and power, and in the U.S., these gender-power politics are embedded in the logics of capitalism. It is therefore only within an unequal gendered system of capitalist logic that we can evaluate the political agenda of particular technologies and their implementation.

She’s absolutely correct. In this post I want to flesh out Jenny’s argument a bit and think about the specific “gendered system of capitalist logic” this policy both reflects and reproduces.

This policy is an extremely clear example of the shifting relationship between (a) gender as identity and (b) patriarchy as system of social organization. Traditionally, Western patriarchy uses a hierarchical, binary, sex-based gender system to group society into two (internally heterogeneous with respect to race/class/sexuality/etc) categories and to position those categories in relationship to one another: men on top, women on bottom. The kind of sexed body you have determines the kind of gender you ought to exhibit, and your position/role in society. Male body = masculine gender = full member of patriarchal society, on the one hand, female body = feminine gender = not a real or full member of patriarchal society, on the other.

But today this airtight sex → gender → patriarchal-role logic is getting loosened up. People with women’s bodies are being granted “masculine” gender privileges that used to be reserved only for people with men’s bodies. For example, the physiological process of pregnancy prevents women who choose to get pregnant from fully meeting what are considered normal expectations for full-time employment: at some point, you gotta take some time off for prenatal care, maternity leave, and so on. In other words, the fact of having a body that reproduces via pregnancy was what excluded women from full, “normal” participation in the workforce. Institutions and expectations are structured so that having a woman’s body is a barrier to participation and success.

What the Apple/FB policy does is remove that traditional barrier…for some people with women’s bodies. It gives some people with certain kinds of female bodies access to the privileges of masculine gender in patriarchy. In this case, it’s the privilege of both reproductive and economic self-determination. Women’s embodiment is no longer necessarily an impediment to masculine privilege and full membership in patriarchal society. This policy does nothing to help trans women address the specific kinds of gendered oppression they face: it only helps women with the most ‘normal’ bodies (bodies that are destined for pregnancy). And that’s the point: only some women benefit.

But what kinds of people, with what kinds of women’s bodies can and do take advantage of this leveling of the sexed-body playing field? Well, think about it: this is a benefit for full-time employees working middle-class jobs–you know, people who already have a lot of access to other kinds of privilege, the kinds of privilege that land you a really great job. (What percentage of Apple’s or FB’s female employees are black women?) If you don’t have access to these other forms of privilege, then having a woman’s body is still something that denies you access to things like reproductive and economic self-determination. Women’s embodiment makes you susceptible to feminization, that is, to patriarchal marginalization, domination, and harm.

Remember, it’s not that these businesses have changed to accommodate different kinds of bodies and embodiment, but that they’ve allowed some people to better and more completely adapt to the (patriarchal) norms they’ve already set. In other words, policies like this allow some women to adapt their bodies to be more perfectly in synch with the demands capitalism makes on workers. The egg-freezing policy exempts the most privileged women in our society from the harmful effects of feminization. It does nothing to de-center masculine privilege or challenge patriarchy. It just helps some women with certain kinds of bodies behave more like masculine members of patriarchal society. And if feminism is about destroying patriarchy, then that ain’t feminism.

Contemporary patriarchy grants some women (with certain kinds of gendered bodies) full access to patriarchal privilege to trick us into thinking patriarchy is over and that we don’t need feminism anymore. The egg-freezing program is just one tiny example of this much broader, deeper shift in the “gendered system of capitalist logic.” Increasingly, we’re putting in place policies and practices that exempt some “women” from the harmful effects of feminization. But this also has a side-effect: it intensifies the harm that feminization does to those it still affects. Often, this intensification takes the form of victim-blaming, of making individuals appear to be responsible for structural phenomena: if you still feel the negative effects of feminization, it must be because you didn’t make the right choices, the choices that would put you in a position to access the benefits of patriarchal masculinity. Patriarchy, as the institutionalized system of masculine privilege, recedes behind the veil of “individual responsibility” and “choice.”

By disarticulating (some kinds of) women’s bodies from the negative effects of feminization, patriarchy can disingenuously claim “#notallwomen”: not all women are marginalized, not all women hit the glass ceiling, women as a class aren’t oppressed. And as we know, “notall[X]” discourse is really just about preserving the status quo by failing to interrogate the institutionalized character of oppression.

 

This is a cross-post from It’s Her Factory.

 

Is data “vibrant” in the new materialist sense? That is, does it exhibit the “agency” or power that living things have to affect other things? It may not materially vibrate in the way sound waves do, but in its interaction with other phenomena (especially other data), data does exhibit the liveliness new materialists attribute to all things. In fact, some data scientists use concepts of “vibrancy” to describe data’s post- and extra-human capacities to perceive, know, and act.

Intel Vibrant Data from Incubate Design on Vimeo.

For example, in 2013 Intel released a video called “Vibrant Data.” The video begins by contrasting linear perception with networked perception, and by arguing that data is “a kind of augmented intuition” that can overcome the limitations of linearly focused perception, which phenomenologist Alia Al-Saji calls “objectifying vision.” Focusing on the persistence of a single signal through time, linear perception overlooks resonances among signals. In other words, by tuning into the primary signal, linear perception tunes out this signal’s overtones. Or, when we treat our lives as linear paths of first-person perspective conscious intentionality, we can only relate to those whose paths directly cross ours. It’s difficult if not impossible to find people whose patterns of behavior are in synch with ours if our paths don’t directly intersect. For example, if I go to the campus coffee shop on Tuesdays and Thursdays, and a researcher with similar interests goes to the campus coffee shop on Mondays and Wednesdays, we won’t know that it might be a good idea to talk about our work over coffee because our behavioral patterns, though strongly resonant, do not directly intersect. Data tunes into these resonances, to behavioral patterns beyond the spectrum of first-person subjective intentionality. As the Intel video suggests, data can connect us to people with “similar interests” and “overlapping circles of friends” but with whom we have not yet “crossed paths.”
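To make the “resonance without intersection” idea concrete, here’s a toy sketch. This is my own construction, not Intel’s actual system, and every name and number in it is invented: two researchers whose coffee-shop schedules never overlap, but whose interest profiles strongly correlate. A similarity measure over behavioral data “hears” the resonance that path-crossing perception misses.

```python
from math import sqrt

# Days each person visits the campus coffee shop; the sets never overlap,
# so their paths never directly cross.
visits_a = {"Tue", "Thu"}
visits_b = {"Mon", "Wed"}

# Topics each person reads about, as rough (invented) frequency counts.
interests_a = {"sound studies": 9, "biopolitics": 7, "pop music": 5}
interests_b = {"sound studies": 8, "biopolitics": 6, "film": 2}

def cosine(u, v):
    """Cosine similarity between two sparse frequency vectors."""
    keys = set(u) | set(v)
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in keys)
    norm_u = sqrt(sum(x * x for x in u.values()))
    norm_v = sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

paths_cross = bool(visits_a & visits_b)   # False: they never meet in person
resonance = cosine(interests_a, interests_b)  # high: their patterns are in synch

print(paths_cross, round(resonance, 2))
```

The point of the sketch is only that the similarity computation needs no shared location or moment: it operates on the behavioral traces themselves, which is exactly the register in which first-person linear perception has nothing to see.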

Intel’s video illustrates this with a story about “Veronica.” Interestingly, the Intel video uses music as a vehicle for data-augmented sociality: Veronica “listens to music most of the day” and thinks of her life “like a soundtrack.” Vibrant data finds Veronica a new favorite band, gets her to their concert in a town several hundred miles away, and in the process connects her to friends new and old. Data–or rather, “Veronica’s data”–knows that she likes this band, that it has an upcoming show in the region, that she won’t want to drive there, that somebody can fly her there, and that at the show she’s likely to run into some high school friends she’s lost touch with. It infers all these things from Veronica’s established patterns of behavior: what music she listens to, her transportation habits, and so on. Veronica’s Data both perceives and acts on information that’s imperceptible to and unknowable from Veronica’s first-person subjective perspective. In this way, as the video’s voice-over tells us, vibrant data gives us access to “experiences, connections, and possibilities we can’t begin to imagine.” Because data can access and process distributed networks of information that are invisible to the Modern subject’s first-person linear gaze, it can bring us more in tune with ourselves, with our surrounding environment, and with one another. And this in-tune-ness, the warm resonance with which vibrant data enriches our life, is described with reference to the “effervescence in the air” at a rock show. The affective and aesthetic high we get from listening to music we like with people we like is Intel’s metaphor for vibrancy.

Though Intel describes vibrancy with metaphors of aesthetic pleasure, it seems to function more like a Marcusean performance principle for algorithmic data processing. More precisely, vibrancy is a pleasure principle for us, but a performance principle for our data. Veronica–and all of us for whom she’s the surrogate–has access to this “effervescence” because “vibrant data is hard at work.” In fact, the full narration is: “there’s an effervescence in the air, our vibrant data hard at work bringing us experiences, connections, and possibilities we can’t begin to imagine.” Mining all our noise for the most resonant signals buried in it, data performs vibrancy in order to nurture and enrich our lives. Sure, vibrant data has agency: Intel’s video makes “data” (or, “Veronica’s data”) the subject of sentences: it “notices,” “decides,” “buys,” “knows Veronica,” “takes a leap,” “suggests,” and so on. But is it granted this agency only to indenture it in service to us? Do we grant data vibrancy so we can extract surplus, erm, vibrancy, from its hard work? This is, after all, what happened with white women in the US: they were given access to wage labor so they could be exploited by capitalist patriarchy not just as unwaged domestic workers, but also as feminized wage laborers.

This story of vibrant data completely obscures the fact that “vibrant” data can be and is used to make some people’s lives more precarious, to subject them to disempowering and immobilizing surveillance. Credit data and NSA data are just as “vibrant” as Intel’s data. The “Vibrant Data Project” is much more aware of data’s ambivalent political potential. That’s why they emphasize vibrancy as a method of “democratizing data.” In an interview with the TED blog, founder Eric Berlow argues that “a more vibrant data system…encourages more people to participate.” Democracy, here, means participation. This is a nearly textbook example of liberal democratic theory: participation, often in the form of having a “voice,” is both necessary and sufficient for enfranchisement.

However, as Jacques Ranciere has argued, the very data science that Barlow and his Vibrant Data Project collaborators appeal to has transformed participation and envoicement into post- (which is to say, anti-) democratic practices. That is, data science has made participatory envoicement the very means of de-democratization. To explain, here’s a quote from a blog post I wrote on the topic. According to Ranciere,

 

Data is “the conjunction of science and the media” which understands itself as “exhaustively presenting the people and its parts and bringing the count of those parts in line with the image of the whole” ([Disagreement] 103). Data isn’t treated as a symbol or signifier of the facts, but as a measurement of the facts themselves….Ever-advancing technology “is supposed to liberate the new community as a multiplicity of local rationalities and ethnic, sexual, religious, cultural, or aesthetic minorities” (104). Twitter, for example, supposedly gives voice and access to people who are otherwise closed out of corporate [mass] media….The (supposed) advantage of “data” is that it allows us to think that we’ve solved all problems of justice, that we live in a post-racial, post-feminist, classless society, in a flat and perfectly meritocratic world. It looks like everyone is included, that everyone has a voice and that their voices count. From this perspective, the only injustices are making false claims about exclusion, marginalization, and oppression (e.g., calling out sexism gets interpreted as itself sexist).

The story we tell ourselves about data, that it is a means to universal envoicement, is itself the mechanism of post-democratic disenfranchisement. Following Ranciere’s model, we could say that data cannot make oppression or exclusion legible as wrongs. So, though increasing data’s “vibrancy” might strengthen post-democratic institutions and modes of governmentality, it does not ameliorate oppression so much as naturalize it.

When data scientists talk about data’s vibrancy, they’re using vibrancy as a metaphor for agency, either of data itself (as in the Intel video), or of “we the data” (as in the Vibrant Data Project), that leads to a more dynamic, participatory, interactive, indeed, “effervescent” life. This effervescence is the affective, aesthetic pleasure that emerges from inclusion and participation in society. It is the feeling of being alive, that is, of having one’s life supported and facilitated by hegemonic institutions. This “effervescence” might also be understood as what Cristina Beltran identifies as “a kind of beauty that is experienced as a form of visible certitude” or “proof that we have collectively moved beyond prejudice and inequality and now live in a ‘post-feminist’ and ‘postracial’ era with institutions that are now fundamentally fair and accessible”(137-8). [1] That is, it’s the euphoria or effervescence of feeling like one lives and participates in a truly inclusive, democratic society. From this perspective, it’s pretty easy to see how vibrant data is quintessentially biopolitical: it’s the use of statistics to manage and optimize the “life” of the population, of the “we” who are data.

As a technology, biopolitics is politically ambivalent: it can be applied in reactionary and radical ways. The effect of its application depends on a lot of things, including the material-historical situation in which it is applied, and how two key variables are defined. These variables are “life” and the “population”: what kinds of living count as healthy, viable lives, and, given the material-historical situation, whose ways of living most easily fit that definition.

The concept of vibrant data naturalizes those variables–that is, it turns them into constants. The metaphor of “vibrancy” defines life as something that is flexible, resilient, and agentially interactive. For example, as Berlow puts it, “vibrant” things are “moving parts” that “influence one another.” Data is “vibrant” when and because it affects other things, like data and, eventually, behavior. This definition of “life” takes the phenomenological life experience of the most privileged members of society as the universalized, generalized mode of life as such. It overlooks the fact that these very same technologies fix oppressed groups in cycles of, as Stephen Dillon puts it, “immobility”: “The neoliberal state requires the management, regulation, and immobilization of surplus or expendable populations” (118; emphasis mine). [2] Data profiles characteristic of oppressed populations, like poor credit scores, poor standardized test scores, and prison records, can make it difficult to access things like internet and/or wireless service, student loans, transportation, housing, and a lot of other things one needs to participate in the economy and the digital public sphere, to have an effect on others, to be a “moving part” of society that “influences” other parts. Again, this is classic biopolitics: the vibrancy and vitality of what appears to be the whole of the population is supported by the immobility and social death of those whose styles of living cannot be brought in phase with normative/hegemonic vibrations.

 

Data–or rather, algorithmically processed big data–does not literally, materially vibrate or resonate. Data’s vibrancy is just a metaphor for its liveliness, for its ability to come alive in support of the lives of those of us who are included in the “we” of “we the data.” Vibrant data is one example of how new materialist ontologies support white supremacist, patriarchal political projects.

 

[1] Beltran, Cristina. “Racial Presence Versus Racial Justice: The Affective Power of an Aesthetic Condition.” Du Bois Review 11:1 (2014): 137-158.

[2] Dillon, Stephen. “Possessed by Death: The Neoliberal-Carceral State, Black Feminism, and the Afterlife of Slavery.” Radical History Review 112 (Winter 2012): 113-124.

[Embedded video: the BeatsMusic touch interface]

In an earlier post, I talked about Apple’s 86ing of the iPod Classic, the one with the clickwheel interface rather than the touchscreen interface. There was plenty of iPod nostalgia as news of the clickwheel iPod’s discontinuation spread, including this piece, which focuses on the aesthetics of the clickwheel as an interface.

Though the touchscreen is often seen as replacing the clickwheel, I think the clickwheel has influenced touchscreen music interfaces. As you can see in the video above, the BeatsMusic touch interface echoes the iPod clickwheel. Just as you use the iPod clickwheel to fast-forward or rewind or jump around in a track (press the center key, then slide back or forward on the wheel to place the cursor on the track’s progress bar at the bottom of the screen), you use Beats’ circular touch interface to fast-forward or rewind the currently-playing track. (Beats is owned by Apple, so this resonance isn’t surprising; however, I don’t know if the Beats touchscreen wheel was developed before they were acquired by Apple.) So, to paraphrase a line from L.A. Style’s “James Brown Is Dead,” maybe we shouldn’t be misled when the newsman said the iPod clickwheel is dead?

You could argue that the Beats interface also echoes turntable interfaces…and that’s not wrong, but when I scroll around the circular wheel on my iPhone 5’s touch screen, that much more closely and directly echoes the iPod classic than it does a record turntable. In fact, I’d argue that the clickwheel itself echoes turntablism (has anyone written on this?), so the resonance with turntables is included in the Beats interface’s resonance with the iPod clickwheel.

Clearly the clickwheel has had a lasting impact on digital music interface design. Are there other examples, besides the BeatsMusic interface, that y’all can think of?

You may have heard that academic philosophy is in the middle of an identity crisis. The Philosophical Gourmet Report (aka the Leiter Report, after its founder Brian Leiter) has been central to English-language academic philosophy’s self-concept. It has defined what counts as good and/or real philosophy for nearly 20 years. But in the last few weeks the administration and the validity of the PGR have been called into question by the parts of the discipline that had, up till now, supported it. If you want to read up on the scandal and the ensuing debate, check out Leigh Johnson’s “Archive of the Meltdown.”

At the heart of the debate is whether ranking philosophy departments and programs is something we ought to do in the first place. For reasons articulated here and here, I don’t think ranking philosophy’s (or any other discipline’s) programs is something we ought to do. Rankings actively discourage meaningful diversification of a very non-diverse discipline, and help reinforce existing inequities.

One thing that actively encourages meaningful diversification of philosophers and philosophical practices is PIKSI, the Philosophy in an Inclusive Key Summer Institute. This is a summer program for philosophy students from groups traditionally underrepresented in the discipline; the point is to give them the encouragement and tools they need to successfully apply to graduate school in philosophy, and thus help remedy the discipline’s “pipeline problem.”

However, though the American Philosophical Association has traditionally funded PIKSI, this year it chose not to. So, PIKSI’s organizers are running a crowdfunding campaign. Herreticat from the XCPhilosophy blog does a great job explaining why you ought to donate to PIKSI:

The subject matter of PIKSI is also crucial to its success. By focusing on the relationship between lived experience and philosophical reflection, the institute emphasizes the importance of students bringing their own concerns and questions to the table. Many students remark that the institute is the first time they learn there are “philosophers like them” and that they could have a role to play in philosophy. It is often the first time students participate in seminar discussions about anti-racist and feminist philosophical work. The testimonials in the PIKSI video also demonstrate the importance of this approach.

If you are a first generation college student, PIKSI could entail learning about what going to graduate school even means. If you are a low income student, it might mean learning concrete details about graduate stipends and having conversations with people who understand what it is like to have to support your family while in grad school. It involves talking to other people who get it about what it is like to be the only queer person of color, or woman with a disability, or first generation college student, and how to find the community that will allow you to not just get through, but thrive. It gives you email addresses and phone numbers and support networks.

Here’s where to go to donate. I know most of Cyborgology’s readers aren’t academic philosophers, but you should still care about diversity in philosophy (a) if you care about ideas, and/or (b) if you care about social justice in general.

This is cross-posted from xcphilosophy.

Traditionally, social identities (race, gender, class, sexuality, etc.) use outward appearance as the basis for inferences about inner content, character, or quality. Kant and Hegel, for example, inferred races’ defining characteristics from the physical geography of their ‘native’ territories: the outward appearance of one’s homeland determines one’s laziness or industriousness. [1] Stereotypes connect outward appearance to personality and (in)capability; your bodily morphology is a key to understanding if you’re good at math, dancing, caring, car repair, etc. Stereotypes are an example of what Linda Martin Alcoff calls “our ‘visible’ and acknowledged identity” (92). The attribution of social identity is a two-step process. First, we use visual cues about the subject’s physical features, behavior, and comportment to classify them by identity category. We then make inferences about their character, moral and intellectual capacity, tastes, and other personal attributes, based on their group classification. As Alcoff puts it, visual appearance is taken “to be the determinate constitutive factor of subjectivity, indicating personal character traits and internal constitution” (183). She continues, “visible difference, which is materially present even if its meanings are not, can be used to signify or provide purported access to a subjectivity through observable, ‘natural’ attributes, to provide a window on the interiority of the self” (192). An identity is a “social identity” when outward appearance (i.e., group membership) itself is a sufficient basis for inferring/attributing someone’s “internal constitution.” As David Halperin argues, what makes “sexuality” different from “gender inversion” and other models for understanding same-sex object choice is that “sexuality” includes this move from outward features to “interior” life.
Though we may identify people as, say, Cubs fans or students, we don’t usually use that identity as the basis for making inferences about their internal constitution. Social identities are defined by their dualist logic of interpretation or representation: the outer appearance is a signifier of otherwise imperceptible inner content.

Social identities employ a specific type of vision, which Alia Al-Saji calls “objectifying vision” (375). This method of seeing is objectifying because it is “merely a matter of re-cognition, the objectivation and categorization of the visible into clear-cut solids, into objects with definite contours and uses” (375). Objectifying vision treats each visible thing as a token instance of a type. Each type has a set of distinct visual properties, and these properties are the basis upon which tokens are classified by type. According to this method of vision, seeing is classifying. Objectifying vision, in other words, only sees (stereo)types. As Alcoff argues, “Racism makes productive use of this look, using learned visual cues to demarcate and organize human kinds. Recall the suggestion from Goldberg and West that the genealogy of race itself emerged simultaneous to the ocularcentric tendencies of the Western episteme, in which the criterion for knowledge was classifiability, which in turn required visible difference” (198). Social identities are visual technologies because a certain Enlightenment concept of vision–what Al-Saji called “objectifying vision”–is the best and most efficient means to accomplish this type of “classical episteme” (to use Foucault’s term from The Order of Things) classification. Modern society was organized according to this type of classification (fixed differences in hierarchical relation), so objectifying vision was central to modernity’s white supremacist, patriarchal, capitalist social order.

This leads Alcoff to speculate that de-centering (objectifying) vision would in turn de-center white supremacy:

Without the operation through sight, then, perhaps race would truly wither away, or mutate into less oppressive forms of social identity such as ethnicity and culture, which make references to the histories, experiences, and productions of a people, to their subjective lives, in other words, and not merely to objective and arbitrary bodily features (198).

In other words, changing how we see would change how society is organized. With the rise of big data, we have, in fact, changed how we see, and this shift coincides with the reorganization of society into post-identity MRWaSP. Just as algorithmic visualization supplements objectifying vision, what John Cheney-Lippold calls “algorithmic identities” supplement and in some cases replace social identities. These algorithmic identities sound a lot like what Alcoff describes in the preceding quote as a positive liberation from what’s oppressive about traditional social identities:

These computer algorithms have the capacity to infer categories of identity upon users based largely on their web-surfing habits…using computer code, statistics, and surveillance to construct categories within populations according to users’ surveilled internet history (164).

Identity is not assigned based on visible body features, but according to one’s history and subjective life. Algorithmic identities, especially because they are designed to serve the interests of the state and capitalism, are not the liberation from what’s oppressive about social identities (they often work in concert). They’re just an upgrade on white supremacist patriarchy, a technology that allows it to operate more efficiently according to new ideals and methods of social organization.

Like social identities, algorithmic identities turn on an inference from observed traits. However, instead of using visible identity as the basis of inference, algorithmic identities infer identity itself. As Cheney-Lippold argues, “categories of identity are being inferred upon individuals based on their web use…code and algorithm are the engines behind such inference” (165). So, algorithms are programmed to infer both (a) what identity category an individual user profile fits within, and (b) the parameters of that identity category itself. A gender algorithm “can name X as male, [but] it can also develop what ‘male’ may come to be defined as online” (167). “Maleness” as a category includes whatever behaviors that are statistically correlated with reliably identified “male” profiles: “maleness” is whatever “males” do. This is a circular definition, and that’s a feature not a bug. Defining gender in this way, algorithms can track shifting patterns of use within a group, and/or in respect to a specific behavior. For example, they could distinguish between instances in which My Little Pony Friendship Is Magic is an index of feminine or masculine behavior (when a fan is a young girl, and when a fan is a “Brony”). Identity is a matter of “statistical correlation” (170) within a (sub) group and between an individual profile and a group. (I’ll talk about the exact nature of this correlation a bit below.)
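Cheney-Lippold doesn’t give an implementation, but the circular logic he describes can be sketched in a few lines of toy code. Everything here, from the behavior labels to the match threshold, is my own invented illustration, not his (or any real ad platform’s) method: the category “male” is just the statistical pattern of already-labeled “male” profiles, and each newly classified profile feeds back into that pattern.

```python
from collections import Counter

# Behaviors observed for profiles the system already labels "male"
# (invented examples).
labeled_male = [
    {"sports scores", "tech reviews"},
    {"tech reviews", "car forums"},
]

def category_profile(members):
    """The category is nothing but the statistical pattern of its members:
    each behavior's share of the labeled profiles it appears in."""
    counts = Counter(b for m in members for b in m)
    total = len(members)
    return {b: c / total for b, c in counts.items()}

def match_score(behaviors, profile):
    """How strongly a new profile's behaviors correlate with the category."""
    return sum(profile.get(b, 0.0) for b in behaviors) / max(len(behaviors), 1)

maleness = category_profile(labeled_male)
new_user = {"tech reviews", "my little pony"}

score = match_score(new_user, maleness)
if score > 0.3:                                # arbitrary threshold
    labeled_male.append(new_user)              # feedback loop: the profile
    maleness = category_profile(labeled_male)  # now helps define "maleness"

print(round(score, 2), "my little pony" in maleness)
```

Note what the feedback step does: once the Brony profile is classified “male,” My Little Pony fandom becomes part of what “maleness” statistically is. The circularity (“maleness” is whatever “males” do) is built into the loop itself.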

This algorithmic practice of identity formation and assignment is just the tool that a post-identity society needs. As Cheney-Lippold argues, “algorithms allow a shift to a more flexible and functional definition of the category, one that de-essentializes gender from its corporeal and societal forms and determinations” (170). Algorithmic gender isn’t essentialist because gender categories have no necessary properties and are constantly open to reinterpretation. An identity is a feedback loop of mutual renegotiation between the category and individual instances. So, as long as an individual is sufficiently (“statistically”) masculine or feminine in their online behavior, they are that gender–regardless, for example, of their meatspace “sex.” As long as the data you generate falls into recognizably “male” or “female” patterns, then you assume that gender role. Because gender is de-essentialized, it seems like an individual “choice” and not a biologically determined fact. Anyone, as long as they act and behave in the proper ways, can access the privileges of maleness. This is part of what makes algorithmic identity “post-identity”: privileged categories aren’t de-centered, just expanded a bit and made superficially more inclusive.

Back to the point about the exact nature of the “correlation” between individual and group. Cheney-Lippold’s main argument is that identity categories are now in a mutually-adaptive relationship with (in)dividual data points and profiles. Instead of using disciplinary technologies to compel exact individual conformity to a static, categorical norm, algorithmic technologies seek to “modulate” both (in)dividual behavior and group definition so they synch up as efficiently as possible. The whole point of dataveillance and algorithmic marketing is to “tailor results according to user categorizations…based on the observed web habits of ‘typical’ women and men” (171). For example, targeted online marketing is more interested in finding the ad that most accurately captures my patterns of gendered behavior than compelling or enforcing a single kind of gendered behavior. This is why, as a partnered, graduate-educated, white woman in her mid-30s, I get tons of ads for both baby and fertility products and services. Though statistically those products and services are relevant for women of my demographic, they’re not relevant to me (I don’t have or want kids)…and Facebook really, really wants to know these ads aren’t relevant, and why they aren’t relevant. There are feedback boxes I can and have clicked to get rid of all the baby content in my News Feed. Demanding conformity to one and only one feminine ideal is less profitable for Facebook than it is to tailor their ads to more accurately reflect my style of gender performance. They would much rather send me ads for the combat boots I probably will click through and buy than the diapers or pregnancy tests I won’t. Big data capital wants to get in synch with you just as much as post-identity MRWaSP wants you to synch up with it. [2] Cheney-Lippold calls this process of mutual adaptation “modulation” (168). 
A type of “perpetual training” (169) of both us and the algorithms that monitor us and send us information, modulation compels us to temper ourselves by the scales set out by algorithmic capitalism, but it also re-tunes these algorithms to fall more efficiently in phase with the segments of the population it needs to control.

The algorithms you synch up with determine the kinds of opportunities and resources that will be directed your way, and the number of extra hoops you will need to jump through (or not) to be able to access them. Modulation “predicts our lives as users by tethering the potential for alternative futures to our previous actions as users” (Cheney-Lippold 169). Your past patterns of behavior determine the opportunities offered you, and the resources you’re given to realize those opportunities. Think about credit scores: your past payment and employment history determines your access to credit (and thus to housing, transportation, even medical care). Credit history determines the cost at which future opportunity comes–opportunities are technically open to all, but at a higher cost to those who fall out of phase with the most healthful, profitable, privileged algorithmic identities. Such algorithmic governmentality “configures life by tailoring its conditions of possibility” (Cheney-Lippold 169): the algorithms don’t tell you what to do (or not to do); they open specific kinds of futures for you.

Modulation is a particularly efficient way of neutralizing and domesticating the resistance that performative iterability posed to disciplinary power. As Butler famously argued, discipline compels perfect conformity to a norm, but because we constantly have to re-perform these norms (i.e., re-iterate them across time), we often fail to embody norms: some days I look really femme, other days, not so much. Because disciplinary norms are performative, they give rise to unexpected, extra-disciplinary iterations; in this light, performativity is the critical, inventive styles of embodiment that emerge due to the failure of exact disciplinary conformity over time. Where disciplinary power is concerned, time is the technical bug that, for activists, is a feature. Modulation takes time and makes it a feature for MRWaSP–it co-opts the iterative “noise” performativity made around and outside disciplinary signal. As Cheney-Lippold explains, with modulation “the implicit disorder of data collected about an individual is organized, defined, and made valuable by algorithmically assigning meaning to user behavior–and in turn limiting the potential excesses of meanings that raw data offer” (170). Performative iterations made noise because they didn’t synch up with the static disciplinary norm; algorithmic modulation, however, accommodates identity norms to capture individual iterability over time and make rational previously irrational styles of gender performance. Time is no longer a site outside perfect control; with algorithmic modulation, time itself is the medium of control (modulation can only happen over time). 
[For those of you who have been following my critique of Elizabeth Grosz, this is where the rubber meets the road: her model of ‘time’ and the politics of dynamic differentiation is really just algorithmic identity as ontology, an ontology of data mining…which, notably, Cheney-Lippold defines as “the practice of finding patterns within the chaos of raw data” (169; emphasis mine).]

Objectifying vision and data mining are two very different technologies of identity. Social identities and algorithmic identities need to be understood in their concrete specificity. However, they also interact with one another. Algorithmic identities haven’t replaced social identities; they’ve just supplemented them. For example, your visible social identity still plays a huge role in how other users interact with you online. People who are perceived to be women get a lot more harassment than people perceived to be men, and white women and women of color experience different kinds of harassment. Similarly, some informal experimentation by women of color activists on Twitter strongly suggests that the visible social identity of the person in your avatar picture determines the way people interact with you. When @sueypark & @BlackGirlDanger switched from female to male profile pics, there was a marked reduction in trolling of their accounts. Interactions with other users create data, which then feeds into your algorithmic identity–so social and algorithmic identities are interrelated, not separate.


One final point: Both Alcoff and Al-Saji argue that vision is itself a more dynamic process than the concept of objectifying vision admits. Objectifying vision is itself a process of visualization (i.e., of the habits, comportments, and implicit knowledges that form one’s “interpretive horizon,” to use Alcoff’s term). In other words, their analyses suggest that the kind of vision at work in visual social identities is more like algorithmic visualization than modernist concepts of vision have led us to believe. This leaves me with a few questions: (1) What’s being left out of our accounts of algorithmic identity? When we say identity works in these ways (modulation, etc), what if any parts of the process are we obscuring? Or, just as the story we told ourselves about “objectifying vision” was only part of the picture, what is missing from the story we’re telling ourselves about algorithmic identities? (2) Is this meta-account of objectifying vision and its relationship to social identity only possible in an epistemic context that also makes algorithmic visualization possible? Or, what’s the relationship between feminist critiques & revisions of visual social identities, and these new types and forms of (post)identity? (3) How can we take feminist philosophers’ recent-ish attempts to redefine “vision” and its relation to identity, and the related moves to center affect and implicit knowledge and situate them not only as alternatives to modernist theories of social identity, but also as either descriptive and/or critical accounts of post-identity? 
I am concerned that many thinkers treat a shift in objects of inquiry/analysis–affect, matter, things/objects, etc.–as a sufficient break with hegemonic institutions, when in fact hegemonic institutions themselves have “modulated” in accord with the same underlying shifts that inform the institution of “theory.” But I hope I’ve shown one reason why this switch in objects of analysis isn’t itself sufficient to critique contemporary MRWaSP capitalism. How then do we “modulate” our theoretical practices to account for shifts in the politics of identity?

[1] As Kant argues, “The bulging, raised area under the eyes and the half-closed squinting eyes themselves seem to guard this same part of the face partly against the parching cold of the air and partly against the light of the snow…This part of the face seems indeed to be so well arranged that it could just as well be viewed as the natural effect of the climate” (11).

[2] “Instead of standards of maleness defining and disciplining bodies according to an ideal type of maleness, standards of maleness can be suggested to users based on one’s presumed digital identity, from which the success of identification can be measured according to ad click-through rates, page views, and other assorted feedback mechanisms” (Cheney-Lippold 171).

 

In addition to launching the watch & new iPhone, this week Apple also discontinued the iPod Classic–the touch-wheel iPod. To be honest, I’m really sad to see it go. What will I do when my currently 7-year-old 80GB iPod Classic goes kaput? I use the thing nearly every day. It’s my second iPod; my first was an 8GB second-gen that I scrimped and saved for as a grad student. I still remember buying it at the Mac store on Michigan Avenue. Obviously this thing had a huge impact on me.

Because it (and iTunes) had such a huge impact on how people related to music, the iPod also had a huge impact on how people think about sound and music. The iPod featured in theories of listening, of aesthetics, of the music industry, of subjectivity, and plenty of other things. So, though the iPod Classic may be dead, it lives on in theory.

I thought it would be helpful to make a crowdsourced bibliography of scholarship and criticism on/about/inspired by the iPod. Here’s the gdoc. Please contribute!


 

Twitter recently made analytics available for free to all users. One of the free metrics is the gender distribution of your followers. This metric is flawed in a lot of ways (most obviously because it’s binary: there are only men and women). Most puzzling, however, is how Twitter determines an account-holder’s gender. Users don’t have to self-identify–in fact, there’s not even an option to do this.

As both this post about the gender metric and this post from Twitter about its gender-targeted marketing show, Twitter treats gender as an emergent pattern of behavior. As the latter explains, users are thought to send “signals”–such as “user profile names or the accounts she or he follows”–that “have proven effective in inferring gender.”

Classically, one’s body (physiology, phenotype) was the ‘signal’ from which one inferred gendered (or raced) behavior: vagina = nurturing, scrotum = likes video games. In this model, gender is a fixed characteristic inherent in sexed bodies. The kind of body you had determined the patterns of behavior you exhibited.

Twitter’s approach to gender is an example of a broader shift in our understanding of gender (and social identities more generally): genders are not fixed characteristics, but emergent properties. This understanding of gender is different than the traditional one, but it’s not clear that it’s any better. For example, we only recognize something as a pattern if it resonates with other patterns we’ve been habituated to recognize as such (what Reich describes here as “rational” moments). Twitter isn’t crunching numbers to figure out what different kinds of gender patterns people follow; rather, they’re listening for users who fall in phase with already-set “masculine” and “feminine” vibes. (As they say, “where we can’t predict gender reliably, we don’t.”) To count and be treated as a full person/user, you have to exhibit legibly gendered patterns of behavior. Otherwise, you’re effectively non-existent, just irrational noise.
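To make the mechanics of this “listening for legible patterns” concrete, here is a toy sketch of signal-based gender inference with an abstain option. Everything in it is hypothetical–the account names, the probability tables, the threshold–and Twitter’s actual models are proprietary; the point is purely structural: the classifier only assigns a gender to users whose “signals” fall in phase with patterns it has already learned, and treats everyone else as unclassifiable.

```python
# Toy sketch (NOT Twitter's actual pipeline) of behavioral gender
# inference with abstention: score a user's followed accounts against
# hypothetical per-gender frequency tables, and refuse to guess when
# the margin between hypotheses is too small.
import math

# Hypothetical P(signal | gender) tables, as if learned from labeled users.
SIGNAL_FREQS = {
    "male":   {"@espn": 0.6, "@ign": 0.5, "@vogue": 0.1},
    "female": {"@espn": 0.2, "@ign": 0.1, "@vogue": 0.7},
}

def infer_gender(follows, threshold=1.0):
    """Return 'male', 'female', or None (abstain) from followed accounts."""
    scores = {}
    for gender, freqs in SIGNAL_FREQS.items():
        # Sum log-probabilities; unseen signals get a small floor value.
        scores[gender] = sum(math.log(freqs.get(f, 0.05)) for f in follows)
    best, other = sorted(scores, key=scores.get, reverse=True)
    # Abstain when the evidence is ambiguous --
    # "where we can't predict gender reliably, we don't."
    if scores[best] - scores[other] < threshold:
        return None
    return best
```

A user who follows @espn and @ign comes out “male,” one who follows @vogue comes out “female,” and one who follows both @espn and @vogue gets no gender at all: the abstention branch is the formal version of being “effectively non-existent, just irrational noise.”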
To close with an aside: Has anyone written about the relationship between these gender-predicting algorithms and the parlour game on which Turing based his test? The game was about determining whether or not one’s interlocutor was a woman.

 

I want to think about the relationship two recent-ish articles draw between big data and social “harmony.” Why is big data something that we think is well-suited to facilitate a harmonious society? Or, when we think about applying big data to the control and regulation of society (which is something distinct from, but which can overlap with, the legal control and regulation of the state and its citizens), why is “harmony” the ideal we think it will achieve? Why is a data-driven society a “harmonious” society, and not, say, a just society or a peaceful society or a healthy society? Why “harmony” and not some other ideal?

Last month in Foreign Policy, Shane Harris wrote about Singapore’s Risk Assessment and Horizon Scanning (RAHS) project. This project is already in place, sniffing out possible terrorist or public health threats. But, in response to recent election results in which the ruling party received less than near-unanimous support (which is interpreted as a sign of social discord), the government has extended the reach of this program

to analyze Facebook posts, Twitter messages, and other social media in an attempt to “gauge the nation’s mood” about everything from government social programs to the potential for civil unrest. In other words, Singapore has become a laboratory not only for testing how mass surveillance and big-data analysis might prevent terrorism, but for determining whether technology can be used to engineer a more harmonious society. [emphasis mine]

This isn’t just about predicting unruly and disruptive behavior, like bombing a public event. It’s about tuning the general mood to minimize social discord. The aim is to temper everyone’s temper, to monitor and influence not just what people do, but how they feel. And “harmony” is the metaphor Harris and others use to describe the status of a society’s affective temperament, its “national mood.” (Why is harmony the preferred metaphor for affect? Well, I’ve got a paper about that here.)

Harris emphasizes that Singaporeans generally think that finely-tuned social harmony is the one thing that keeps the tiny city-state from tumbling into chaos. [1] In a context where resources are extremely scarce–there’s very little land, and little to no domestic water, food, or energy sources–harmony is crucial. It’s what makes society sufficiently productive so that it can generate enough commercial and tax revenue to buy and import the things it can’t cultivate domestically (and by domestically, I really mean domestically, as in, by ‘housework’ or the un/low-waged labor traditionally done by women and slaves/servants). Harmony is what makes commercial processes efficient enough to make up for what’s lost when you don’t have a ‘domestic’ supply chain.

***

This situation of scarcity–indeed, austerity after the loss of mothers’ work, mother nature’s work in providing food, water, and energy–is exactly what’s depicted in the much-thinkpieced film Snowpiercer. There, the whole human species is stuck on a train that circles the frozen, sterile earth. Everything–food, water, energy–must come from the train itself, because the earth can no longer be exploited for resources (so the train exploits people more explicitly than societies that have both nature and people to exploit). In order for this setup to work, everything, especially and including the population of humans, must be curated in a careful balance. This isn’t just population management, but the carefully curated balance among different parts of the population. As the film repeatedly emphasizes, the train will work only if everyone stays in their assigned place and does only what that station requires. In other words, social harmony exists when all the parts of the community are in proper proportion.

This idea of social harmony as proportion among parts is straight outta Plato’s Republic: remember the myth of the metals, the division of society into gold, silver, and bronze? The ideal city, for Plato, was one that embodied the proper proportion among its parts. Not coincidentally, ancient Greeks thought musical harmony was also the expression of proportional relationships among the parts of a musical instrument. So, in Plato as in Snowpiercer, social harmony was a matter of proportionality. But Snowpiercer implies that this idea(l) of harmonic proportionality is something much more contemporary than Plato. With Tilda Swinton’s obvious Margaret Thatcher caricature as this ideology’s main mouthpiece, the film implies that proportional social harmony is the idea(l) that informs Thatcher-style neoliberalism.

As I have argued before, this neoliberal upgrade on Plato is also, as Jacques Ranciere argues, the ideal that informs data science. He argues:

The science of opinion…this process of specularization where an opinion sees itself in the mirror held up by science to reveal to it its identity with itself…It is the paradoxical realization of [Platonic metaphysics and archipolitics]: that community governed by science that puts everyone in their place, with the right opinion to match. The science of simulations of opinion is the perfect realization of the empty virtue Plato called sophrosune: the fact of each person’s being in their place, going about their own business there, and having the opinion identical to the fact of being in that place and doing only what there is to do there” (Disagreement 105-6).

Data science (what Ranciere calls the “science of opinion,” i.e., the science of opinion polls, which we can also call the science of the “national mood”) is the tool that allows us to listen for, measure, and maintain a particularly neoliberal kind of social harmony. Not harmony as proportion, but harmony as dynamic patterning. Dynamic patterning is how contemporary physics understands sound to work: sound is the dynamic patterning of pressure waves. Dynamic patterning is also what data science listens for–the patterns that emerge as signal out of all the noise.

So, what I want to suggest is that what Alex Pentland calls “social physics” or, “the reliable, mathematical connections between information and idea flow…and people’s behavior” (2), is modeled–implicitly–on the physics of sound. Instead of a geometric mathematics of proportion, social physics is a statistical mathematics of emergence or “dynamic patterning,” to use Julian Henrique’s definition of “sounding.” For example, Pentland says “there are patterns in these individual transactions that drive phenomena such as financial crashes and Arab springs” (10); the role of social physics is to find these patterns, “analyzing patterns within these digital bread crumbs” (5) to find the signal amid a bunch of data-noise. “Social Physics” updates the idea of the “harmony of the spheres” for the 21st century: this harmony is just statistical not geometric, grounded in contemporary acoustics instead of ancient philosophy.

I’m still working my way through Pentland’s book, but for now I want to turn to Nicholas Carr’s review of it. Carr’s review consistently relies on sonic metaphors to describe the “social physics” Pentland theorizes. For example, in the introductory paragraph, Carr notes that Marshall McLuhan “predicted that the machines eventually would be deployed to fine-tune society’s workings” (emphasis mine). Or later, summarizing Pentland’s argument, Carr writes:

If people react predictably to social influences, then governments and businesses can use computers to develop and deliver carefully tailored incentives, such as messages of praise or small cash payments, to “tune” the flows of influence in a group and thereby modify the habits of its members.

What’s getting tuned? As above in Singapore, people’s moods and affects are what social physics listens for and tunes. As Carr notes, Pentland’s studies “measure not only the chains of communication and influence within an organization but also “personal energy levels” and traits such as “extraversion and empathy.””

As Carr reports it, tuning these affects and moods–people’s ‘opinions’ rather than just their actions–is what leads to a harmonious society: “group-based incentive programs can make communities more harmonious and creative. “Our main insight,” [Pentland] reports, “is that by targeting [an] individual’s peers, peer pressure can amplify the desired effect of a reward on the target individual.””

***

Positive or negative reinforcement of behavior through peer pressure…hmmm…this sounds a whole lot like what JS Mill advocates in Chapter 4 of On Liberty. This chapter is about the “limits of the authority of society over the individual.” Here, Mill argues in language that should clearly resonate with the above discussion of Plato & Snowpiercer:

Each [individual and society] will receive its proper share, if each has that which particularly concerns it. To individuality should belong the part of life in which it is chiefly the individual that is interested; to society, the part that chiefly interests society (69).

So what interests the individual, and what interests society? Well, the individual is interested in things that affect him and nobody else (that’s what the first three chapters argue); society is interested in optimizing its health. According to Mill, laws protect individual liberty; they limit individuals and the government from interfering in things in which it is chiefly the individual that is interested. However, as Mill notes, “The acts of an individual may be hurtful to others, or wanting in due consideration for their welfare, without going to the length of violating any of their constituted rights. The offender may then be justly punished by opinion, though not by law” (69). Opinion is how society manages behaviors it cannot and ought not prohibit or require, but nevertheless needs to encourage or discourage: “In these modes a person may suffer very severe penalties at the hands of others for faults which directly concern only himself; but he suffers these penalties only in so far as they are the natural and, as it were, spontaneous consequence of the faults themselves, not because they are personally inflicted on him for the sake of punishment” (72). Arguing that the negative consequences of being out of synch with dominant mood or opinion are the direct effect of discordant behavior, Mill rationalizes his way out of the liberal principle of non-interference by blaming the victim. Society can regulate individuals in this way because they were asking for it, more or less. He concludes, “Any inconveniences which are strictly inseparable from the unfavorable judgment of others, are the only ones to which a person should ever be subjected for that portion of his conduct and character which concerns his own good, but which does not affect the interests of others in their relations with him” (72).

Mill has recognized that the liberal principle (the law can interfere with individual liberty only in matters that have an effect on society), if followed strictly, would lead to social upheaval. So, in order to maintain the status quo–e.g., white bourgeois standards of behavior, taste, comportment, gendered behavior, etc.–the law (“reprobation which is due to him for an offense against the rights of others” (73)) needs to be supplemented by opinion, by the “loss of consideration a person may rightly incur by defect of prudence or personal dignity” (73).

But how does Mill relate to social physics and the science of opinion? Well, it seems that data science, in tracking and tuning people’s moods and affects, is doing the work of opinion-regulation that Mill thinks is necessary for ‘social harmony.’ Indeed, as Carr points out, social physics “will tend to perpetuate existing social structures and dynamics. It will encourage us to optimize the status quo rather than challenge it.”

In both Harris’s article and Pentland’s book, concepts of individual liberty are seen as things that impede this harmony, or rather, they impede our ability to listen and adjust for this harmony. According to Harris, “many current and former U.S. officials have come to see Singapore as a model for how they’d build an intelligence apparatus if privacy laws and a long tradition of civil liberties weren’t standing in the way” (emphasis mine). Similarly, Pentland argues that, given what a more Marxist theorist would call the current relations of production, i.e., the current state of material-technical existence, “we can no longer think of ourselves as individuals reaching carefully considered decisions; we must include the dynamic social effects that influence individual decisions” (3). So “harmony” is a way of describing the overall behavior of a population, the concord or discord of individuals as they intertwine with and rub up against one another, as their behaviors fall in and out of synch.

Mill has already made–in 1859 no less–the argument that rationalizes the sacrifice of individual liberty for social harmony: as long as such harmony is enforced as a matter of opinion rather than a matter of law, then nobody’s violating anybody’s individual rights or liberties. This is, however, a crap argument, one designed to limit the possibly revolutionary effects of actually granting individual liberty as more than a merely formal, procedural thing (emancipating people really, not just politically, to use Marx’s distinction). For example, a careful, critical reading of On Liberty shows that Mill’s argument only works if large groups of people–mainly Asians–don’t get individual liberty in the first place. [2] So, critiquing Mill’s argument may help us show why updated data-science versions of it are crap, too. (And, I don’t think the solution is to shore up individual liberty–cause remember, individual liberty is exclusionary to begin with–but to think of something that’s both better than the old ideas, and more suited to new material/technical realities.)

***

Big data, social physics, Snowpiercer, Plato, JS Mill–on the one hand this post is all over the place. But what I’ve tried to do is unpack the ideals that inform and often justify/rationalize data science forays into social management, to show just what kind of society data science thinks it can make for us, and why that society might be less than ideal.

[1] Harris writes, “Singapore’s 3.8 million citizens and permanent residents — a mix of ethnic Chinese, Indians, and Malays who live crammed into 716 square kilometers along with another 1.5 million nonresident immigrants and foreign workers — are perpetually on a knife’s edge between harmony and chaos.”

[2] “It is, perhaps, hardly necessary to say that this doctrine is meant to apply only to human beings in the full maturity of their faculties…We may leave out of consideration those backward states of society in which the race itself may be considered in its nonage…Despotism is a legitimate mode of government in dealing with barbarians” (14).

This is a cross-post from Its Her Factory.

 

The neoliberal subject is supposed to make economically rational calculations about how she spends her time, her money, and her energy. Do I spend my time working, or would I get a better return doing something else, like sleeping or going out? Partying hard and going gaga might be a good investment if it helps you work smarter and more efficiently, if it builds your brand, if you need a release, and so on. But the effect of this is that every decision–even the decision to have fun, or the decisions you make about what is fun, while having fun–is now work. It’s not that you’re choosing to have fun instead of doing work, but that having fun is its own type of work. If you’re lucky, you get the return on that investment. If you’re less lucky, that return goes to someone else (e.g., I’ve talked about the way clubbing has become a type of outsourced labor here at Cyborgology).

In this context, Katy Perry’s new single “This Is How We Do” sounds like a defense of the wanton disregard for economic rationality. In the bridge (and sounding like she’s doing her best to channel P!nk), Perry praises a bunch of economically irrational activities in the form of shout-outs to

The ladies at breakfast…in last night’s dress

All you kids who still have their cars at the club valet…and its Tuesday

All you kids buying bottle service with your rent money

All you people going to bed with a 10, and waking up with a 2

The last two–spending money on overpriced booze rather than housing, and sleeping with someone who is quantitatively unattractive–really resonate with the idea of economic calculation. All of these decisions are economically irrational because they give you diminishing returns. Imagine the disappointment (and, perhaps, shame or self-disgust) of waking up next to that person you realize you’re not attracted to at all.

There’s also a musical representation of miscalculation at the end of the song. The last iteration of the chorus sounds like it’s going to conclude with a fade-out. But at about 2:56 in the YouTube video posted to Perry’s official account, Perry says “Wait, what? Bring the beat back,” and we get about half a minute more of instrumental coda. In bringing the beat back, the song goes past the point of diminishing returns–it’s really likely, IMO, that this last 30 seconds will get cut in radio airplay. Even in the video this section feels like filler–Perry walks to the background and lies down in the dark as animated ice cream cones twerk in the foreground. So, both the lyrics and the composition give examples of economic irrationality, that is, of pushing something fun past its point of diminishing returns.

Instead of arguing for the benefits of such irrationality, for its positive contributions to individual or social life, the song argues for its normalcy, for its lack of perceptible effect. It doesn’t treat over-the-top partying like something that’s ecstatic or extraordinarily pleasurable, but something that’s mundane. In effect, “This Is” defends economic irrationality as non-disruptive, either to society or to “our” ability to function in it.

You can hear this defense strategy in the song’s music. Especially with the slowed-down sample of the song’s title, “This Is How We Do” sounds like Perry’s answer to Miley’s sizurpy “We Can’t Stop.” Perry’s song has a similarly muted soar, and what I’ve argued here is its concomitant first-person-plural perspective. But what’s really interesting is what Perry sings over that muted soar: she repeats the phrase “it’s no big deal” four times. The song phones in its soars because they’re no big deal. While such irrationality might feel overwhelming to people who don’t “do” like us, from “our” perspective we’re so habituated to it that this irrationality barely even rises to the level of perception. What some think is irrational excess is, for us, just another day.

The song’s structure reflects the regularization of otherwise irregular excess. The two NBD soars aren’t even the song’s main climax–they’re just the chorus…a regular, repeated part of the song. The biggest musical moment is at the end of the bridge, when Perry finally puts some support behind her voice and wails “RENT MO-NAY”; this is followed by some sounds of a cheering, whistling crowd (and, um, a really puzzling picture of Aretha Franklin singing at the first Obama inaugural. I get the R-E-S-P-E-C-T analogy, but, um, otherwise the video’s use of this image just seems gratuitous and racist). There’s like a hyper-abbreviated soar in the last few beats of the bridge to lead us back to the final iteration of the chorus, a sort of pale echo of the earlier soars.


Such economic irrationality is “no big deal” only when it’s performed by specific kinds of bodies in very particular circumstances. Just think for a minute about the absolutely huge deal made about “welfare queens”–implicitly black women who make supposedly economically irrational decisions like buying alcohol, beauty services, or even junk food. According to this anti-welfare perspective, such purchases are bad returns on taxpayer investment because they are wasteful–they bring enjoyment and relief to black women, rather than the (generally white) ‘taxpayer.’  This Jezebel post shows plenty of examples of these anti-welfare memes, and does a decent take-down of them.

The ability to fuck up and not be punished is like the definition of privilege (e.g., men getting away with rape, whites getting away with murder, “I Fought the Law and I Won,” etc etc). So perhaps what “This Is How We Do” is really about is affirming the privilege of those whose economically irrational behavior passes as “no big deal”?



Oh, and p.s.: don’t even get me started on the racist appropriation in the video.


 

Sound happens when things vibrate, displacing air and creating pressure waves that fall within the spectrum of waves the human ear can detect.

Researchers at MIT, working with Microsoft & Adobe, have developed an algorithm that reads video recordings of vibrating objects more or less like a microphone reads the vibrations of a diaphragm. I like to think it turns the world into a record: instead of vibrations etched in vinyl, the algorithm reads vibrations etched in pixels of light–it’s a video phonograph, something that lets us hear the sounds written in the recorded motion of objects. As researcher Abe Davis explains,

We’re recovering sounds from objects. That gives us a lot of information about the sound that’s going on around the object, but it also gives us a lot of information about the object itself, because different objects are going to respond to sound in different ways.

So, this process gives us information about both the ambient audio environment and the materiality of the video-recorded objects. That's a lot of information, and it could obviously be used for all sorts of surveillance, which will likely be people's primary concern with this practice.

But I think this is about a lot more than surveillance. This research reflects some general trends that cut across theory, pop culture, and media/tech:

1. The privileging of, and close interrelationship between, sound and materiality. “New Materialism” is a really trendy field in the theoretical humanities and cultural studies. But if you read new materialists carefully, they rely on a lot of sonic vocabulary (“attunement” is probably the most common of these terms), and these sonic terms do a lot of theoretical work. I would argue (well, I AM arguing in this new manuscript I’m writing) that one of the things that is “new” about this “materialism” is its sonic, rather than visual, epistemology. We’re re-conceiving what matter is and how it works, and to do this we’re relying on a very specific understanding of what sound is and how it works. The unexamined question here is, obviously: so what’s this understanding of sound? I think close readings of studies like this one can help us unpack how advances in technology are impacting the dominant concepts of “sound” and “listening,” and how these concepts in turn inform new materialist theory.

2. Our dominant, commonsense concepts of sound and listening are changing as technology changes. But these technologies are tied to older ones–in this case, I can see connections to telephony and optical sound (used most famously on early sound films).

The process the researchers use to process the video frames seems related to AutoTune: instead of smoothing out off-pitch parts (which is what AutoTune does), this MIT algorithm amplifies sonic irregularities to help differentiate among sonic events and their qualities (timbre, rhythm, articulation, etc.). [1] It finds these irregularities not by enlarging images or seeking greater visual definition (e.g., by using a higher frame rate), not by bringing things into focus, but by paying attention to the parts of the video that are most indeterminate, fuzzy, and glitchy. As Hardesty explains on the MIT news site:

That technique passes successive frames of video through a battery of image filters, which are used to measure fluctuations, such as the changing color values at boundaries, at several different orientations — say, horizontal, vertical, and diagonal — and several different scales…Slight distortions of the edges of objects in conventional video, though invisible to the naked eye, contain information about the objects’ high-frequency vibration. And that information is enough to yield a murky but potentially useful audio signal.

So, as I understand it, the technique finds a lot of different ways to process the visual noise in the video images, and from this database of visual noise it pulls out the profile of vibrations that would generate the most probable, “common sense” auditory signal. The algorithm produces lots and lots of visual noise, because the more visual data it can collect, the more accurate a rendition of the audio signal it can produce.
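A toy numerical sketch can make the basic idea concrete. This is a heavily simplified stand-in for the MIT pipeline, not the researchers' actual method: it assumes a single soft edge in the frame and uses a basic brightness-constancy motion estimate rather than their bank of oriented, multi-scale filters. A hidden tone nudges the edge by a fraction of a pixel per frame, and the waveform is recovered from the resulting tiny intensity fluctuations:

```python
import numpy as np

# Hypothetical simulation: a sound wave displaces an object's edge by
# sub-pixel amounts, and we read the waveform back out of the frames.
rng = np.random.default_rng(0)
n_frames, width = 2000, 64
x = np.arange(width)

# "Audio": a tone, expressed as sub-pixel displacements per frame.
audio = 0.1 * np.sin(2 * np.pi * 0.11 * np.arange(n_frames))

def frame(shift):
    """A soft edge displaced by `shift` pixels, plus sensor noise
    (the 'visual noise' the algorithm mines for signal)."""
    return 1 / (1 + np.exp(-(x - width / 2 - shift))) + rng.normal(0, 0.01, width)

frames = np.stack([frame(s) for s in audio])

# Brightness-constancy estimate: dI/dt ~ -v * dI/dx, so
# v ~ -sum(dI/dt * dI/dx) / sum((dI/dx)^2), summed over pixels.
ref = frames.mean(axis=0)        # stand-in for the undisplaced edge
grad = np.gradient(ref)          # spatial gradient of the reference
recovered = -(frames - ref) @ grad / (grad @ grad)

# The recovered signal tracks the hidden audio (up to noise).
corr = np.corrcoef(audio, recovered)[0, 1]
print(f"correlation with source audio: {corr:.3f}")
```

Even this crude version shows the inversion the post describes: the per-pixel noise is never removed, it is pooled across the whole frame so that the coherent vibration buried in it stands out.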

If we traditionally understand listening as eliminating noise that distracts us from the signal (i.e., as focusing), this practice re-imagines listening as multiplying (visual) noise to find the (audio) signal hidden in it. Instead of tuning noise out, we amplify it so that the hidden ‘harmonics’ emerge from all the irrational noise. Listening means extracting signal from a database. Our ears alone are incapable of processing all that noise–which isn’t even auditory noise in the first place. Listening isn’t something our ears do–it’s something algorithms do. Nowhere is this more evident than in the MIT video, which shows a smartphone Shazamming the “Ice Ice Baby” riff recovered from one of their experiments. The proof of their experiment isn’t whether the audience can recognize the sound they recovered from the visual mic, but whether Shazam’s bots can.

We often talk about algorithms “visualizing” data. But what does it mean to understand them as “listening” to us, as “hearing” data, making it comprehensible to our puny, limited senses?

 
[1] Hardesty explains: “So the researchers borrowed a technique from earlier work on algorithms that amplify minuscule variations in video, making visible previously undetectable motions: the breathing of an infant in the neonatal ward of a hospital, or the pulse in a subject’s wrist.”

 

Robin is on Twitter as @doctaj.