With the latest round of thinkpieces about Rihanna’s BBHMM video, it seems like we’ve finally reached peak “Is it feminist?” I mean, it’s been a long road up to this peak, but this question feels like it’s growing stale and exhausting its ability to generate clicks.

“Is it feminist?” has always been a disciplining and normalizing question, one that centers particular kinds of women (privileged women) as the proper subject of feminism, and so on. This is what academic feminist theory learned in the 1990s, right? Anyway, “Is it feminist?” might be a productive question when feminism is itself a minority discourse, but in the era of Branded Post-Feminism(™), “Is it feminist?” is more normalizing than not. To get a bit theoretical about it: “Is it feminist?” used to serve as an instance of Rancièrian disagreement. The question used to disrupt and at least give a little pause to hegemonic modes of thought and practice. But it’s not disruptive anymore; its disruption has itself been normalized. (Think, for example, of how “disruption” in general is fetishized as a term for innovation.)

But “Is it feminist?” is not the only way to start a feminist analysis or to think critically about gender politics. “What is it?” or “Is X a Y?” is like the oldest question in philosophy; “ti esti…?” (“what is…?”) is Socrates’ whole M.O. Philosophers have developed a lot more types of inquiry since then. We could, for example, ask “What is gendered and how?” or “What are the gendered components of this and how do they interact?” or, as Cynthia Enloe puts it, we can ask “Where are the women?” or “Where are the gender minorities?” or “Where are the nonbinary people?”

All the way back in 1949, Simone de Beauvoir identified the problems with “What is…?” or “Is it…?”-style questions, and offered some alternative types of questions to ask instead. She begins the introduction to The Second Sex with a critique of these questions. Modeling the first part of the introduction after a Platonic dialogue, Beauvoir repeatedly asks “What is a woman?”: Biology? Nope. Metaphysical essence? Nope. Something made up, a false belief we should just get rid of? Nope. “Woman” is, Beauvoir argues, a situation in patriarchal power relations: “She is determined and differentiated in relation to man, while he is not in relation to her; she is the inessential in front of the essential. He is the Subject; he is the Absolute. She is the Other” (26). The “What is/Is it…?” questions get the ontology wrong. (See the “scope of the verb ‘to be’” discussion on p.33 of TSS…from a Beauvoirian perspective “Is it…?” questions are all asking after “serious [that is, predetermined] values” and are thus all grounded in bad faith.) Woman/feminist isn’t a definite thing or feature or set of features; it’s a status in a particular type of gendered social and epistemological structure. So, as Beauvoir concludes:

But what singularly defines the situation of woman is that being, like all humans, an autonomous freedom, she discovers and chooses herself in a world where men force her to assume herself as Other: an attempt is made to freeze her as an object and doom her to immanence, since her transcendence will be forever transcended by another essential and sovereign consciousness. Woman’s drama lies in this conflict between the fundamental claim of every subject, which always posits itself as essential, and the demands of a situation that constitutes her as inessential. How, in the feminine condition, can a human being accomplish herself? What paths are open to her? Which ones lead to dead ends? How can she find independence within dependence? What circumstances limit women’s freedom and can she overcome them? These are the fundamental questions we would like to elucidate. (37).

Following Beauvoir, we could say this: “feminist” is a situation or relational status. Something cannot “be” feminist. It can assist or impede our ongoing reproduction of patriarchy–it can do things. Notice the questions Beauvoir asks at the end of this quote: they’re all action-oriented: What can one do? What does the material situation allow? How might one effectively change the concrete reality of patriarchy so that nobody finds themselves in this contradictory concrete status of feminization? Beauvoir’s questions are also contextually dependent: whether or not something assists or impedes the ongoing reproduction of patriarchy depends on the concrete specifics of that particular situation, how patriarchy manifests itself there and then.

So, those are a few feminist questions you can ask instead of “Is it feminist?” Do y’all have some favorites to add to the list?

For as influential as Attali’s Noise has been, most scholars have sidestepped its central claim: “music is prophecy” (11). It feels really undersupported; Attali asserts that music anticipates or foreshadows social change, but he doesn’t seem to provide anything more solid than correlations as evidence. As Eric Drott says in his just-published article in Critical Inquiry, “the book never fully spells out the mechanisms by which music performs its prophetic function” (725). I think Attali does spell out this mechanism. Music is prophecy because its physical structure–sound waves–is isomorphic with the physical structure of economic forecasts (probability functions, which are graphed as sine curves). Attali thinks both music and statistical forecasts are made of the same stuff, and thus music is predictive in the sense that Amazon’s recommendation bot is predictive.


I’ve been revising this article on Noise, Foucault, & biopolitical neoliberalism for my next book project. My analysis focuses on Attali’s claim that the logic of the market, as understood by 1970s macroeconomic theory, is isomorphic with the logic of sound waves. Macroeconomics and acoustics study, essentially, the same phenomena. As Attali puts it, ‘non-harmonic music’ (Noise 115) makes ‘the laws of acoustics. . . the mode of production of a new sound matter’, and in so doing, ‘displays all of the characteristics of the technocracy managing the great machines of the repetitive economy’ (113). The laws of acoustics are isomorphic with the “rules” of biopolitical governmentality and financialized political economy–that is, with statistical forecasts. The mechanisms “introduced by biopolitics” to understand and manage populations “include forecasts, statistical estimates, and overall measures” (SMBD 246). Similarly, the methods economists use to understand the “repetitive” (Attali’s term for late capitalism) market include a “macrostatistical and global, aleatory view, in terms of probabilities and statistical groups” (Attali, Social Text, 11). The logic of forecasting and financialization mimics the logic of auditory signals (at least as contemporary physics understands this latter logic)–for example, both probability functions and sound frequencies are visualized as sine waves. Just as harmonics emerge from dynamically interacting frequencies, predictable, reliable ‘signal’ emerges–as life, as human capital, as a data forecast, a data self–from dynamically interacting streams of data.
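To make the signal-from-noise half of this analogy concrete, here’s a toy sketch (mine, not Attali’s, and every number in it is arbitrary): a reliable “signal” (a single sine frequency) is pulled out of a noisy stream of data with the same statistical machinery a forecaster would use.

    import numpy as np

    # Toy illustration, not Attali's own method: recover a predictable
    # "signal" (one sine frequency) from a noisy data stream, the same
    # move a statistical forecast makes. All parameters are arbitrary.
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 1000, endpoint=False)
    signal = np.sin(2 * np.pi * 5 * t)             # the underlying 5 Hz sine
    noisy = signal + rng.normal(0.0, 1.0, t.size)  # buried in noise

    spectrum = np.abs(np.fft.rfft(noisy))
    freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
    print(freqs[spectrum.argmax()])                # 5.0: the forecastable signal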


So, because he thinks sound and statistical forecasts are more or less identical in structure, Attali can then argue that music is predictive, that “our music foretells our future” (Noise 11). Writing in 1977, Attali lacks databases and fast, massive-scale distributed computer processing, so he uses music, which, like big-data number crunching, “explores, much faster than material reality can, the entire range of possibilities in a given code” (Noise 11). Music, for Attali, is like an algorithm that predicts where society will go next: it crunches all the variables and figures out which combination is most probable. Writing in 2014, Attali further explains that this ability to crunch variables and determine the most probable outcome is what makes music similar to finance: “We could also explore the reason why music could be seen as predictive: as an immaterial activity, it explores more rapidly than any other the realm of potentials. In that sense, it is not far from another quasi immaterial activity, finance, which is also very often an excellent predictive tool.” Also in 2014, Attali gave a lecture at Harvard titled “Music As A Predictive Science,” in which he talks about Noise, his intentions in writing it, and whether his claims about the future were accurate. He repeatedly refers to his project in Noise as “forecasting.” Forecasting is the same term Nate Silver uses to describe what big data analytics does. In a sense, Attali scooped Silver by more than 30 years; Noise uses music in the same way that The Signal and the Noise uses data.
This is widely (and rightly) taken to be the point where Noise jumps the shark into pseudo-rationality: music seems no better suited to predict the future than astrology is. But data forecasting is also pseudo-rational. Attali’s method seems obviously outlandish because it, unlike big data forecasting, can’t hide behind the mantle of scientific objectivity. Privileging noise, understanding music as a market that is predictable and whose future can be forecast, Attali’s analysis of the history of western art music employs some of the central principles of neoliberal economic theory.

According to Steven Shaviro, the combination of digital media and neoliberal capitalism has changed the way movies are composed–their underlying logic. I’ve argued that these changes in film composition parallel recent-ish changes in pop music song composition. Brostep sounds like Transformers having sex because, well, Skrillex and Michael Bay are using the same basic methods to achieve the same general aesthetic. (Seriously, there’s a “Transformers having sex” tag on Mixcloud.) This 2011 video mashes a Transformers clip with a brostep song, and in the same way that 2 Many DJs showed that “Smells Like Teen Spirit” and “Bootylicious” are effectively the same song structure, it shows that Bay and brostep are effectively the same compositional structure.

[embedded video]

But that’s all a preface to what I really want to talk about: Taylor Swift’s “Bad Blood” video.

[embedded video]

Like “We Are Never, Ever Getting Back Together,” “Bad Blood” is another Max Martin-produced pop-dubstep track, with verses and choruses organized around a soar or a drop. The first soar/drop happens as Swift’s character is getting suited up by the Trinity, around 1:20-1:30. Here the handclaps rhythmically intensify till a drop, but a drop with no wobble. We just land on the downbeat as Swift sings “now,” and the bass and percussion come back in. [1] This is repeated at 2:15. These soars take us from the verses into the choruses; they’re mini-climaxes.

The main drop happens at the end of the bridge leading into the final chorus (the same place it is in WANEGBT and in “Shake It Off”). As in “Shake It Off,” Swift’s voice provides the wobble. Here, it’s where she sings, on a single repeated pitch, “Blood runs thin.” As she sings the drop, Catastrophe (Swift) appears as a redhead, and Kendrick Lamar, who raps the verses, has disappeared, visually, from the rest of the video, as Catastrophe leads a party of all her gal pals in a final showdown with Arsyn (Gomez). This drop is the most musically important part of the song, just as this shot is the most visually and narratively important. To explain why it is the most visually and narratively important piece of the video, let me first pull back and contextualize the video in terms of the rest of 1989, the album on which “Bad Blood” appears.

This is the second catalog video from 1989. “Shake It Off” catalogues different styles of femininity, each associated with a different musical genre (and sometimes a music video, such as the “Hey Mickey!” reference); Swift’s character transcends these feminine stereotypes, embodying what is supposedly her own distinct and quirky self. In the same way that “Shake It Off” catalogues both genres of femininity and music/music video genres, “Bad Blood” catalogues both kinds of women and movie references. The video references a slew of sci-fi/action films: Kill Bill, Tron, The Fifth Element, Sin City, etc. (even Kingsman?). Similarly, a panoply of women celebrities (Lena Dunham, Ellie Goulding, Mariska Hargitay, Jessica Alba, just to name a few) practice a skill with Catastrophe (maybe she’s learning from them?), a skill they can use to battle her frenemy Arsyn; so again, as in “Shake It Off,” Swift’s character sublates a whole bunch of different femininities into her own identity: she can be the transcendent mix of all women, she can do everything, but the rest of her crew can do only one thing each. And this sublation is represented, visually, by the change in hair color from blonde to red.

Narratively, this hair color change happens when Kendrick Lamar visually drops out of the video. He rapped the verses, and throughout the soar up to the main drop, he and Catastrophe appear in complementary half-face shots, as though they were two halves of the same face, one white, one black; one woman, one man. Could Catastrophe’s red hair be an indication that she’s also sublated Kendrick and his (very very respectable) blackness into her multiracial feminine mix? Could this sublation of Kendrick, his blackness, and his rapping be part of her broader effort to distance herself from genre narrowness and claim instead a pluralist pop omnivorousness?

I think so…mainly because “Bad Blood”’s invisible reference is Michael Jackson’s “Jam.” “Bad Blood” is Swift’s first song featuring a black rapper. (She was featured on B.O.B.’s “Both Of Us,” but that was his track, on his album.) “Jam” was Jackson’s first and most prominent track with a rap feature. Tamara Roberts argues that “Jam,” like many of Jackson’s songs, has “a sound that consisted of the transracial base that was his musical heritage punctuated by carefully wielded hyperracial sounds such as hard rock guitar and rap vocals” (26). “Jam” uses hyperracially coded sonic features, like electric rock guitar (which reads white) or rap (which reads black), in combination to establish and reaffirm that pop is a transracial mix of racially-coded genres. Thus, as Roberts continues, “when he supposedly integrated MTV in 1982, Jackson did not racially cross over but redefined what the mainstream was: a space in which an interracial and intercultural musical past gets filtered through a hyperracial frame” (26). So, since the 1980s, “pop” has been a genre that sublates racially coded parts into a mainstream mix: it calls on hyperracially coded elements only to both negate-and-preserve them (hence this Hegelian language of sublation). Roberts describes this as “the multiracial/cultural legacy of Jackson’s pop kingdom, in which contemporary artists not only imagine a vast world of racialized sounds in their library but also weave them together with self-conscious acknowledgement of their juxtaposition” (36). In this light, “Bad Blood’s” use of Kendrick is part of 1989’s sustained campaign to establish Swift as a pop artist: she is pop in the same way that the King of Pop is. Swift uses hyperracially coded elements to demonstrate a transracial (but not necessarily non-white-supremacist) mainstream mix.

Let’s go back to the drop. I talked a little about the video’s visual narrative above, but here I want to focus on the music. Leading up to the drop, there’s a bridge focused mainly on Swift’s lyrical, melodic vocals; most of the dubsteppy instruments drop out. KL punctuates the end of the first repetition; this is the first time he and TS appear in split-faced split screen. In the second repetition, TS punctuates the fourth beat of every measure with a Lumineers-style “Hey!,” and KL responds on the and of four with an “aaaah”; when this happens, they’re in split screens, but Janus-headed rather than split-faced (notably, they’ve switched sides; TS is now on the right, KL on the left). At the climax just before the drop, TS and KL appear full-faced in split screen, and sing in unison “If you live like that…”; this leads directly to the drop. So we have all these musical combinations of TS & KL that culminate in red-headed Catastrophe–she doesn’t just sublate all the different types of women/femininity, but Kendrick and his (totally respectable) blackness, too. Catastrophe, like Taylor, is the strong, harmonious mix of diverse capacities, genres, types, whatever–she has all the different variables in the right arrangement. Contrast this with Arsyn and her gas-mask-wearing army of mostly faceless women covered in all black everything. Catastrophe is bright and variable, Arsyn is dark and monotonous. This scene is an almost too-convenient illustration of Jared Sexton’s claim that white supremacy has shifted the color line from a white/non-white binary (where whiteness is to be kept pure, one drop of non-white blood makes you not white) to a non-black/black binary (where queer/non-bourgeois blackness is to be kept from contaminating the otherwise healthy pluralist mix).

The song is clearly part of 1989’s sustained campaign to situate TS as pop, that is, as belonging to a genre that calls on racialized (and gendered–dubstep clearly reads as masculine, aka “bros need their drops!”) musical genre markers in order to present itself as transcending the very differences it (re)articulates as narrow and limited. At this drop, the video reflects this narrative of TS-as-pluralist-heroine.

But the two drops in the beginning of the video undercut the centrality of the song’s main drop in the video’s structure. The video changes the composition of the song: it gives us two drops before the song even starts. First, the opening scene goes from a shot of the London skyline, birthplace of dubstep (the faint British police siren in the background is reminiscent of The Clash’s “White Riot,” which also opens with just a police siren…is this Swift getting a girl riot of her own? We know how much she likes to Lean In.), to a shot of an office [2]. A body lands on an office desk, its thud coinciding with our first gut-punching bass hit, which echoes as we’re introduced to Catastrophe and Arsyn in turn. We then get a very post-Williams/Elfman superhero-film-score march version of the “Bad Blood” hook as Catastrophe and Arsyn set up the narrative of the video: they collaborate in the beginning, but at the end of the introduction Arsyn turns on Catastrophe. This betrayal gives us the second drop: Arsyn kicks Catastrophe out a window. We get a weak wobble right as Catastrophe crashes through the window, but when her body hits a car below, there is no cinematic sound for the impact, nor any representation of the crash in the accompanying music, as there is in the opening scene.

Kahn gives us two drops with actual bass (and maybe sub-bass? I’ve only listened on earbuds so I don’t know. But they sound as fully resonant and bone-shaking as any big-budget action movie soundtrack.). Compared to TS’s sung treble drop, these prefatory drops sound more powerful (and, maybe, dare I say, “authentic”?) than her popped-up version at the song’s climax. So, even though the big explosion scene at the end might be the video’s narrative climax, the first two drops are its sonic climax.  As TS and Max Martin wrote the song, “Bad Blood” climaxes like a conventional pop song does: late, after two smaller climaxes. As Kahn reworks it, “Bad Blood” climaxes like a lot of contemporary EDM does: early and often. Kahn’s soundtrack de-centers the sung drop as the musical or sonic climax of “Bad Blood.” If today most action movies are non-narrative, following something like the compositional logic of a dubstep track more than the expository logic of traditional narrative, then we hear the organization more than we see it (this is Shaviro’s point). And the organization we hear doesn’t treat Catastrophe as the sublation of all the racialized, gendered genres into one pop mix.

And perhaps in so doing, in making both the song and the video work more like dubstep and less like pop, Kahn’s video de-centers, or at least destabilizes, makes wobbly, “Bad Blood’s” racial narrative. Instead of transcending hyperracialized genre markers, “Bad Blood” merely follows, anticlimactically, from them. I mean, doesn’t Kendrick’s first verse say: “I don’t hate you but I hate to critique, overrate you/These beats of a dark heart/use basslines to replace you”?


[1] Right around 1:32-1:33, Swift sings a staccato “Hey!”–she’s invoking a pop trend that started a few years ago with The Lumineers’ “Ho Hey.” Such “Hey!”s also appear on Katy Perry’s “Dark Horse” (just as “Bad Blood” is Swift’s track with a black male rapper, “Dark Horse” was Perry’s, with Juicy J). This perhaps lends some credence to the idea that the person Swift has bad blood with is indeed Perry. Less gossipy and more music-y: this percussive “Hey!” is a common feature of a lot of pop songs, whereas trap employs “Hey!” in a different way. In trap songs, there’s a sample of a male choral “Hey!” that usually gets played on every offbeat. So unlike Chris Molanphy, I see the “Hey!” in BB as fundamentally different from the DJ Mustard-style trap “Hey!”
[2] There’s a lot more to say here about the particular shot of London we get–Canary Wharf–and its pre- and post-gentrification relationship to early grime and dubstep, and how that might also affect how we hear the song. But I’d want to reread Dan Hancox’s Dizzee book before I said anything more than “hey, this probably matters.”

[embedded video]

So, when you talk about DNA with respect to music, THIS is the first thing that comes to MY mind.

This is cross-posted at It’s Her Factory.

There are a lot of reasons to headdesk over this 538 video about Pandora’s Music Genome Project and its application to music therapy. There’s the video itself: some people in my twitter TL found its cinematography too precious. There’s the project it details: a big data project that uncritically draws assumptions about music from 18th century European music theory (CPE Bach actually wrote the book on tonal harmony), and assumptions about structure, organization, and relationships from genetics.

In addition to the technical problems with the project (that is, its uncritical reliance on Western music theory…whiiiiiich is also racist, in the sense of normatively white supremacist), the Music Genome Project is, I want to suggest, racist. Its genetics-based approach is too, too resonant with 19th century race science, and its therapeutic application (the second half of the video is entirely about this) is pretty clearly biopolitically racist.

As actual biologists have argued, the MGP isn’t genetics in any technical scientific sense: the “atomistic” bits of musical data it records are not genes, in large part because they are not heritable (that is, genes are causal in a way that the musical characteristics the MGP breaks down into “genes” are not). [1] As David Morrison points out, “the study of musical attributes is clearly a study of phenotype not genotype.” Phenotype is outwardly observable characteristics–races are phenotypes, for example. Genotype is genetic structure or organization. The move 19th century race science made was to conflate phenotype with genotype, to think that people with different visible characteristics (skin color, hair texture, skull shape, nose shape, etc.) were of different genotypes or “races” (race in the genetic sense is a subdivision of a species, basically like a breed of dog). This conflation of phenotype with genotype is, we now know, bad genetics, and, uh, racist. (The lack of genetic differences among race-based phenotypes is generally what analytic critical philosophers of race referred to in the 1980s and 1990s when they talked about race not being “real.”)

So, the study of phenotype itself is not necessarily racist–it’s white supremacy that makes phenotypical differences racist, not the phenotypical differences themselves (or, it’s power, not objective features). So the MGP isn’t racist for studying musical phenotype–it’s racist for studying phenotype in a way that privileges white phenotypes.

How does the MGP privilege whiteness and white musical phenotypes? There are any number of ways. First, as Nick Seaver pointed out in our conversation about this video, it uses the Western record industry’s genre categories as its basic “phenotypes”–classical (which means Western art music, not, say, Indian or Chinese art music…the art music of non-Western cultures gets folded into the vernacular category ‘world’), pop/rock, jazz, hip hop, world, and so on. These are phenotypes as white culture hears them; they are modeled on white supremacist racial categories. Genre has always, always been tied to race and class, just as radio format is unabashedly tied to demographics (there’s an interesting question here about radio:demographics::streaming:psychographics, but I’ll leave that aside for now). Let’s not forget that for a good part of the 20th century, record stores were divided into two genres: “music” and “race music.”

Not only are the genres identified along white supremacist articulations of the color line, but the musical “genes” and traits the MGP identifies as “important” reflect European understandings of what music is and how it breaks down into fundamental elements. CPE Bach, cited in the introduction as an influence, literally wrote the book on 18th century tonal harmony (Jean-Philippe Rameau wrote the other). Melody, Harmony, Rhythm, Meter, Timbre, Texture, Form–these are what every gen-ed music appreciation textbook lays out as the fundamental elements of music. But just as genetic research identified two sex chromosomes (X and Y) not because binary sex is an objective feature of DNA as such, but because binary gender/sex is a key organizing principle of Western thought (which organized how we made sense of genetic information), music research homes in on these elements not because of anything about the objective properties of sound, but because of centuries of practice and convention. As far as I can tell, the music theory behind the MGP seems fairly art-music focused. This matters because, as many pop music scholars argue, Western art music theory (its concepts and methods) isn’t really adequate to analyze Western pop music, let alone anything else. Pop music, for example, is a multimedia art in which things like the video impact how we hear the music; so, an analysis that just focused on the musical elements would miss key features of that work, features that impact how the music works and is interpreted.
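To see how that framing gets baked in at the level of implementation, here’s a minimal sketch of MGP-style “genome” matching (the attribute names, the numbers, and the similarity measure are all hypothetical; Pandora’s actual feature set and weights aren’t public):

    import numpy as np

    # Hypothetical MGP-style attribute ("gene") list. The list itself
    # hard-codes the music-appreciation-textbook categories: anything
    # not on the list simply doesn't exist for the recommender.
    ATTRIBUTES = ["melody", "harmony", "rhythm", "meter", "timbre", "texture", "form"]

    def similarity(song_a: np.ndarray, song_b: np.ndarray) -> float:
        """Cosine similarity between two attribute vectors."""
        return float(np.dot(song_a, song_b) /
                     (np.linalg.norm(song_a) * np.linalg.norm(song_b)))

    # Hand-coded ratings, one value per entry in ATTRIBUTES.
    song_a = np.array([0.9, 0.8, 0.3, 0.5, 0.4, 0.6, 0.7])
    song_b = np.array([0.8, 0.9, 0.2, 0.5, 0.5, 0.5, 0.6])
    print(similarity(song_a, song_b))

Whatever the numbers are, two songs can only ever resemble each other along the axes the analysts chose to encode; anything off the list (a video, a scene, a history of use) is structurally invisible to the match.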

The misapplication of the genome and genetics metaphors allows the MGP to naturalize white supremacist racial hierarchies as objective features of sounds themselves. That’s how the MGP itself relies on old-school racism. But then in the video Gasser, the head of the MGP, talks about how he applies this “genetic” research to music therapy, to create, as it were, eugenic music–music “that can make our lives healthier,” as he says. (This discussion begins about three minutes into the video.) This is where the MGP starts to get biopolitically racist.

As Sylvia Wynter explains in her “No Humans Involved” essay, “our present epistemological order…is now that of securing the material well being of the biologized Body of the Nation” (64). Biopolitics establishes an ideal of “life” or “health,” and then uses race to distinguish between those whose lives it’s in society’s interest to promote, and those whose lives it’s in society’s interest to quarantine and/or eliminate. In other words, it separates out eugenic from dysgenic populations. And it does this by using what Wynter calls the “technocultural fallacy,” which is the fallacy of mistaking the “imperative of securing the overall conditions of existence (cultural, religious, representational and through their mediation, material) of each local culture’s represented conception of the Self” (49) for “the imperative common to all organic species of securing the material basis of their existence” (49). Biopolitics naturalizes what white bourgeois culture takes as the conditions of its own existence and flourishing as the basic biological conditions of “life” itself: whatever sustains white cisheteropatriarchal bourgeois culture is thought to be the basis of all organic life.

The MGP naturalizes what white bourgeois culture (the culture of institutions like the academy, medicine, etc.) understands as the conditions of the existence of its musical practices and its experience of musical pleasure as the basic empirical features of “music” itself–namely, the “genes” the MGP tracks. Gasser’s search for the “healthiest” music studied not every musical genome/genre, but only three–classical, pop/rock, and jazz. These three genres are the ones most embedded in ‘official’ (white, patriarchal, bourgeois) institutions like symphonies, museums (the Rock n Roll Hall of Fame, the EMP museum, etc.), and state-funded or supported “public” institutions like PBS, NPR, school music curricula, and so on. These are the most culturally and institutionally “white” genres. He didn’t even include R&B, hip hop, trance or ambient or new age (three genres that seem obviously relevant to some sort of therapeutic project, right?), techno, country, bluegrass, Indian film music, J pop, any one of a bazillion Latin genres, reggae–none of these genres even made it into the original study. So this wasn’t a study of music, it was a study of the three most institutionally white genres of music. There’s a racialized distinction between eugenic and implicitly dysgenic genres/phenotypes of music built into the very framework of the study.

And analyzing these three most institutionally white genres, Gasser found that they had some particularly eugenic phenotypical traits: “slow, heartbeat-paced tempo. Consonant harmony. Lyrical and sustained melody. Occasional bursts of rhythmic energy. The use of strings…Classical harmonies with a rock groove.” [2] These were what the MGP found to be the most “healing” and therapeutic sounds. But they’re also, uh, descriptors of most of the white canon, be it ‘classical’ art music or ‘classic’ rock. Conveniently, the most eugenic musical phenotypical traits are the whitest ones! This reminds me of 19th century phrenological charts that claimed to show correlations between skull size and shape and intelligence; of course the whitest people had the most intelligent skulls and blacks the least. In both cases, “science” tells us that white phenotypical traits are the most eugenic. That last phenotypical feature, “classical harmonies with a rock groove” exhibits a specifically neoliberal approach to whiteness, which doesn’t treat white as a category to be kept pure, but which treats whiteness as a healthy, indeed, “multicultural” and “diverse” mix. Jared Sexton talks about this as a shift from a white/non-white binary (where whiteness is exclusive and pure) to a non-black/black binary (where the privileged/white category is mixed with everything but blackness, which is seen as dysgenic). [3] According to Gasser’s research, the most eugenic or “healing” music is that which feels and sounds most white. (The opening chords of the piece are really neo-Coplandesque, there’s some straight up harmonic resolution at the end of the video…).

According to Wynter, the fallacy of “supraculturalism” works in tandem with the technocultural fallacy. This “second fallacy…mistakes the representation for the reality, the map for the territory…narratively instituted culture-specific discursive programs” for the “genetic programs specific to its genome” (49-50). The supracultural fallacy mistakes culturally and historically local accounts of human life for objective biological fact. The MGP does just this–it takes a historically and culturally local account of what “music” is (in fact, the very IDEA of music as such, as ‘art,’ is not a universal idea!) and naturalizes it as a genomic fact. Because it naturalizes a white, bourgeois (patriarchal, too) “map,” the MGP concludes that the kinds of music most “therapeutic” for people who embody white/bourgeois/patriarchal norms are in fact the most eugenic kinds of music for all people.

All people–so now we’re back to fallacious generalizations. For someone working for a streaming service that treats taste as absolutely individual (the algorithms are supposed to tailor the stream specifically for you), it seems weird that Gasser bases his therapeutic project on a generalized, universal “we” with a consistent, coherent taste. The MGP (and not centuries of music criticism and musicology) is what has finally explained what Gasser calls “our musical taste.” But who’s this “we”? Well, Gasser seems to generalize from himself and the people he interacts with. Gasser begins by talking about how his favorite thing about music is how it communicates, and then quotes/paraphrases CPE Bach: “You can’t expect to move the audience unless you yourself are moved.” This reveals a presumption of commonality with his audience, that musicians and audience are of similar enough background to be moved by the same things. (Shit, even the idea that music is expressive is a pretty culturally specific idea.) So we really need to be asking: What music is most eugenic to whom? (I’m reminded here of Mara Mills’s work on the way that test subjects’ bodies become built into audio technologies…)

The video also includes its share of the supracultural fallacy. It mistakes one kind of musical cartography for the “territory” of music itself. Throughout the video there are images of traditional Western notation–either handwritten or digital. What’s this doing? On the one hand, it’s appealing to the respectability of Western Art Music. On the other hand, it’s naturalizing the data-derived findings of the MGP—we never see any analytics, we only see staves and notes and accidentals and key signatures. The constant references to traditional Western notation also reveal the project’s privileging of the kinds of music that can be notated this way. Plenty of Western pop and non-Western genres aren’t best notated in traditional Western terms. (I mean, it’s probably possible to notate something like Skream’s “Midnight Request Line” or a PercTrax release–stuff originally crafted on a DAW–but traditional Western notation just isn’t equipped to deal with all the variables at work in, say, an Ableton session.)

Wynter argues that “the ostensibly evolutionarily determined genetic organizing principle of our Liberal Humanist own as expressed in the empirical hierarchies of race and class (together with the kind of gender role allocation between men and women needed to keep these systemic hierarchies in place) is as fundamentally secured by our present disciplines of the Humanities and Social Sciences” (53). Based on what just went down with the MGP, we can update her claim to include data science. The MGP is just one tiny bit of evidence that “the ostensibly evolutionarily determined genetic organizing principle” of Neoliberal Biopolitics “as expressed in the empirical hierarchies of race, class,” and gender is “fundamentally secured by our present disciplines” of knowledge, including data analytics. So yeah, the Music Genome Project is racist; the white supremacy is built into the very DNA of the research methods, so to speak.


[1] Genes are heritable: they are transmitted directly from one generation to the next. Musical influence obviously happens (Missy sampled Cybotron’s “Clear,” Pitbull constantly references “Rapper’s Delight,” the first B-52s record sounds a lot like Gang of Four’s Entertainment!, Ministry’s With Sympathy sounds like a survey of then-current British post-punk…), but influence, sampling, and even remixing aren’t heritability. Sampling comes closest to genetic heritability, because there is concrete material that gets lifted and reworked, but it’s still not inheritance in the genetic sense for a number of reasons: it’s not the result of sexual reproduction; there is no necessary and/or sufficient set of alleles or genes that must be transmitted–a song can, for example, simply not have harmony or melody at all; samples are not expressions of traits–samples are formal elements that don’t necessarily express anything. And to be honest, using sexual reproduction as a model for musical influence would import a TON of crappy (hetero)sexist assumptions into our understanding of musical influence and composition.


[2] A larger-scale feature Gasser names is a slow build to a climax: “striving toward apotheosis,” or “climax. And you can’t get there right away; there has to be a gradual unwinding. Because if you’re going to get to the point of having those actual chills, it has to be earned.” This idea of delayed gratification (uh, totally Freudian; see: Civilization & Its Discontents), of building dissonance to a climactic resolution, is central to tonal harmony and to some kinds of pop composition (though not all–some pop songs are really repetitive)…and, as plenty of music theorists and musicologists have shown, it’s often done in really racist and misogynist ways (I’m thinking here especially of McClary’s Feminine Endings). The idea that music should slowly build to a big climax–that’s just one way to build climaxes! The recent spate of EDM-influenced pop usually climaxes in the first chorus (early and often, like voting in Chicago, lol). Also, DJ sets generally build to a climax, but in ways that are different from the way classical, pop/rock, and jazz pieces do. So again, this is just a really narrow way of thinking about (a) compositional form in general and (b) how to climax a song/how to handle tension-and-release devices.
[3] Wynter puts that non-black/black binary in these terms: It is a “differential…between the suburban category of the owners and job-holders on the one hand (of all races including the Cosby-Huxtable and A Different World Black Americans), and the Black non-owners and non-jobholders on the other. Consequently, since the Sixties, this new variant of the eugenic/dysgenic status organizing principle has been expressed primarily by the growing lifestyle differential between the suburban middle classes (who are metonymically White), and the inner city category of the Post-Industrial Jobless (who are metonymically young black males). Where the category of the owners/jobholders are, of whatever race, assimilated to the category of ‘Whites,’ the opposed category of the non-owners, and the non-jobholders are assimilated to the category of the ‘young Black males’” (53).


School’s out for the summer, as they say (well, sorta–I’m still teaching a little), which means I’m back here regularly. I’m really excited to be back! It’s been about a month, but I still wanted to post my reflections on the TtW music panel.

The music panel was so much fun–Sasha was a fantastic moderator with interesting and thoughtful questions, and all my co-panelists had such insightful things to say. The audience questions were also fantastic.

One thing I really, really appreciated about the panel was that we stayed largely away from questions of distribution. It can feel like most discussions of music and the web focus on the web as a location or method of distribution: downloads, streaming, piracy, and so on. Both music critics and tech writers have been talking about this for nearly two decades. The topic gets a lot of attention because it sits right at the center of two of our biggest social institutions: capitalism (how to make money with music) and the state (intellectual property, law, etc.). Though there are still important conversations to be had about this (Eric Harvey’s work is a good example), this approach can sometimes feel tired and overplayed, on the one hand, and like it eclipses other, equally important questions about music and the web, on the other. We don’t just use the Internet to distribute music.

So instead of focusing on the web as a platform for distributing music, the panel focused on music and the social web. We talked about tons of stuff, but here are the questions that stood out in my mind (and its admittedly imperfect memory):

  • Music making, listening, and fandom are necessarily and inherently social. How does the sociality of social media impact the way music and music fandom works? How does the sociality of social media impact the work of being a celebrity artist?
  • Is music just fungible content designed to generate interaction, which is the real point?
  • How does a musician’s social media presence impact the interpretive contest over their songs? Or are songs just part of their online portfolio/brand?
  • Someone raised the issue of text-based memes based on song lyrics. From a songwriting and composition perspective, has the lyrical meme replaced the sonic hook as the focal point of a pop song? Sasha pointed out that the lyrics can recall a sonic memory, but it’s not the sonic part that’s driving the spread/loop/virality. Is there a difference between an earworm and a viral meme? (Does the parasite vs. virus metaphor get us anywhere with this question?)
  • The social web is focused on interactivity (because that’s what generates sellable data). How does the interactivity of social media manifest in, say, the composition of songs? The relationships between fans and artists? The business model of the music industry?
  • Though we didn’t talk about distribution, we did talk about labor. How are the labor practices of digital social media adopted/incorporated by the music industry? How is fan labor monetized? What are the ethics of that? One audience member asked about the role of labels nowadays, and I suggested that labels are for labor–they do the secretarial grunt work of making and marketing and promoting a record. Like secretarial work, participatory interaction is historically feminized labor (e.g., wives and daughters playing piano in the parlor). Looking back, I wonder about the gendering of musical labor on the social web. Is it all feminized, in line with the broader feminization of labor under real subsumption? Or is more ‘expert’ work–I’m thinking especially of annotation here, but maybe also some kinds of curation–masculinized (in that it’s more prestigious and might bring more benefits)?
  • There was another question from the audience about cell phone bans at concerts. I argued that this has a social rather than an aesthetic function: concert etiquette has always been a way of establishing status distinctions among different audiences, and cell phone bans are, I think, part of this tradition. More elite audiences eschew the use of cell phones at concerts.
  • I wish we talked more about gender and race. Especially because race (ok, white supremacy) is fundamental to both the economics and the aesthetics of pop music in the US, and because we’re starting to understand how race is a factor in the economics, aesthetics, and dynamics of social media, the “race/music/social web” conversation really, really needs to happen.

Listening back to the panel, you might be excused for thinking it was actually a panel about Drake. He was the central reference point for just about every idea and question we talked about. After the panel I told my friend “so, uh, I think I need to write about why this ended up being the Drake panel.” Why does pop music + the web = Drake? (‘Scuse me while I Drake that for myself). In 2013, one study found that Drake was the most talked-about rapper on social media.

Is there something about Drake that makes him distinctively appealing to various web media (viral memes, hashtags, lmdraketfy.com, etc.)? Or, is there something distinctive about Drake that makes his spread through web media more visible, more appealing to the kind of people who are at TtW? (I.e., why Drake and not Beyoncé or Rihanna?) At the level of stereotypes, Drake is “the black guy with feelings.” Is there something about social media that just resonates with the kind of artist Drake is? Given the importance of affect and interpretation (annotation, explainers, thinkpieces, etc.) on the social web, is there something (something racialized and gendered) about Drake’s emotive, feelings-y persona that makes him meme-able, annotate-able (to white people)?


This post is co-authored with Justin D Burton.

File under “not at all surprising”: we are pretty sure Beats Music is sniffing users’ Gmail and feeding that info into their “Just For You” recommendations. A few weeks ago I (Robin) mentioned to Justin that I thought this was happening to me, and then he discovered that it’s likely happened to him, too.

Justin: I found a most welcome message in my inbox a few days ago. I teach popular music at Rider University, and a friend who knows 1) winter break is hyper-writing time and 2) I’m always on the lookout for writing and thinking music (you know, a friend) was recommending a recent Hot Since 82 mix I might try. I wrote back to coolly express my gratitude (“OMG! Thanks so much for this!!!”), made a mental note to download it when I was back from traveling, and went back to thinking of snarky things to say about year-end music lists. A few days later, as I scrolled through my Beats “Just For You” section, hoping to find the perfect soundtrack for my morning coffee (*not* The-Dream), there was Hot Since 82’s 2014 album, Knee Deep in Sound. I’ve only been using Beats for a couple of months, so my “Just For You” list is culled from my listening habits in recent weeks (mostly Nicki Minaj, Azealia Banks, and Rihanna…okay, fine, also Drake) and music I may or may not much like but play in the classroom to critique with my students (this is how The-Dream and most of my rock recommendations find their way to being “just for me”). In other words, I’m very interested in Hot Since 82, but it’s not likely Beats would know that yet. Unless, of course, Beats had peeped my email. I like that Hot Since 82 is part of my profile now–Data Claus stuffed some much-appreciated variety in my JFY stocking. But I have this feeling that maybe Beats reading my Gmail isn’t always going to work out so well…

Robin: I made the mistake of hate-watching an episode of Dave Grohl’s HBO series “Foo Fighters: Sonic Highways,” and then I made the further mistake of emailing someone about how awful it was, the Rock Dads and the nationalism and the interview with President Obama, like he was a musicology Ph.D. not a J.D. I mentioned Dave Grohl by name in the email, and then within a few days, the Foo Fighters–who I have never listened to on Beats (nor have I listened to Nirvana…L7 or Ministry are about as close as I get)–were all over my “Just For You” recommendations. It felt almost like the algorithmic version of the U2 album appearing in my iTunes: here is some music that I really, really don’t want in my digital space, crowding up my music feed. (I mean, it’s probably not coincidental that Apple is behind both the U2 album and Beats.) It also felt a bit like a Rickroll: I was surprised with unwanted music where I least expected it to show up. (In retrospect, the old internet meme of Rickrolling seems like it foreshadows late 2014’s series of unwanted media objects dropped in users’ feeds or libraries.) More importantly, Beats’ recommendation algorithm seems to be weighing my emails more heavily than my actual behavior in the app itself (my favoriting, my searches, what I actually listen to)–but I’d need to know more about it before saying anything more definitive.

Like most uses of big data methods, Beats’ data-mining can make life more convenient (in Justin’s case) or more of a pain (in Robin’s case). Beats’ Gmail algorithm seems unable to distinguish between positive and negative mentions of artists: it couldn’t tell the difference between Justin’s praise and Robin’s disdain. It treated every mention as an endorsement or “like.” The Beats bot’s confusion about our actual preferences isn’t too surprising in a social mediascape that builds data profiles out of primarily positive affect. Our affective interactions on Facebook, for instance, are notoriously governed by the Like button. Reading together Sarah’s and Jenny’s recent posts about Facebook’s Year in Review videos, we’re confronted with a platform that has mined us to the point that it assumes the right to tell us a story about our own lives. And that story resonates through a filter of compulsory happiness, a narrow band of thumbs up that are meant to deflect critique, anger, grief, WTFs. When the Beats algorithm populates our JFY lists with artists referenced in our Gmail correspondence, it seems to be calibrated according to the same assumption: mention = like. Instead of compulsory happiness, Beats filters our chatter through compulsory fandom.
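Here’s a minimal sketch of what such a mention = like heuristic could look like (purely hypothetical code; we obviously have no access to Beats’ actual pipeline):

    import re

    # Hypothetical "mention = like" mining: every artist name that appears
    # in an email is recorded as an endorsement. Nothing here can tell
    # "this mix is great" apart from "this series is awful."
    KNOWN_ARTISTS = ["Hot Since 82", "Foo Fighters", "The-Dream"]

    def mined_likes(email_text: str) -> list[str]:
        return [artist for artist in KNOWN_ARTISTS
                if re.search(re.escape(artist), email_text, re.IGNORECASE)]

    print(mined_likes("That Hot Since 82 mix is perfect writing music"))
    print(mined_likes("Dave Grohl's Foo Fighters series is so awful"))
    # Both emails yield a "like"; sentiment never enters into it.

Telling Justin’s praise apart from Robin’s disdain would require a sentiment-analysis step that the recommendations behave as if they’re skipping.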

But, um, WHY? Why both assume and enforce compulsory fandom? There has to be some good business reason for this, right?

We’re used to social media collecting data bits about us for circulation in the marketplace. But Beats is ad-free, a subscription-only service. If Beats already has our money and doesn’t advertise other products to us, what does it gain by listening to our email conversations that it can’t gain from our listening and liking habits inside the app? And why does it hear these conversations as compulsory fandom?

It may just be that, given the limits and affordances of most big data business, compulsory fandom is the only way Beats can hear these conversations. It may be that there’s no good, reliable, or cost-effective way for Beats to hear mentions of an artist as anything other than an endorsement.

It may be that we’ve become so accustomed to algorithms tailoring streams–of social media posts, of advertisements, of “users also bought”–to our idiosyncratic tastes (or finely-tuned psychographic categories) that we expect Beats to adapt to us. Perhaps Gmail is just another source of input for Beats to use to make its tailoring more accurate than its competitors’?

Emailing my friend or coworker about an artist might count as a “recommendation within a personal network,” which is generally thought to be “far stronger than a non-personal recommendation,” such as one made by a bot or by a casual acquaintance. If so, then perhaps Gmail can be thought of as a hyper-personal social media platform, one where the “recommendation” found in a message would be read as the particularly substantive “wisdom of a friend,” somebody who you bothered to email and not just hit up on FB Messenger, on Twitter, or via text.

So, even though Beats is (probably) paying Google for all this Gmail data, it’s a sound investment for them if it helps them stand out from the competition and offer recommendations that really, really do appear “Just For You.” Though sometimes it ends up feeling like a specially-designed troll rather than an uncannily helpful suggestion. But perhaps that’s an acceptable risk for them? Maybe annoying some people some of the time is actually just the cost of delivering what Beats thinks is a better, higher-quality service? Perhaps, for users, compulsory fandom is the cost of having a more custom-tailored stream? Now that we know that Beats is listening back, we’ll start writing about music we don’t want to appear in our Beats streams using some of the same techniques that social media users employ to stay out of search results: next time, it’s “this F00 F1ght3rs series is barf” or “ugh, this Th3-Dr34m song.” (There’s an “Ima be fresh as hell if the Beats listenin’” joke in here…) In order to curate a pleasing stream of music on Beats, our email habits have to change. And that’s the other side of algorithmic culture: not only do algorithms adapt themselves to us, but we adapt ourselves to them.


Recent shifts in the aesthetic value of audio loudness are a symptom of broader shifts in attitudes about social harmony and techniques for managing social “noise.” Put simply, this shift is from maximalism to responsive variability. (“Responsive variability” is the ability to express a spectrum of features or levels of intensity, whatever is called for by constantly changing conditions. You could call it something like dynamism, but, given the focus of this article on musical dynamics (loudness and softness), I thought that term would be too confusing.) It tracks different phases in “creative destruction” or deregulation–that is, in neoliberal techniques for managing society. In the maximalist approach, generating noise is itself profitable–there has to be destruction for there to be creation, “shocks” for capitalism to transform into surplus value; the more shocks, the more opportunities to profit. However, what happens when you max out maximalism? What do you do next? That’s what responsive variability is: a way to get more surplus aesthetic, economic, and political value from maxed-out noise. (To Jeffrey Nealon’s expansion → intensification model of capitalism, I’d add → responsive variability. He argues that expansion has been maxed out as a way to generate profits–that’s the result of, among other things, globalization. Intensification is how capitalism adapts–instead of conquering new, raw materials and markets, it invests more fully in what already exists. But once investment is maxed out, then, I think, comes responsive variability: responsiveness and adaptation are optimized.)

Maximal audio loudness was really fashionable in the late 1990s and first decade of the 21st century. Due to both advances in recording and transmission technology (CDs, mp3s), and an increasingly competitive audio landscape, especially on the broadcast radio dial, “loud” mixes were thought to accomplish things that more dynamic mixes couldn’t.

Loud mixes compress audio files so that the amplitude of all the frequencies is (more or less) uniform–i.e., uniformly maxed-out. Or, as Sreedhar puts it, compression “reduc[es] the dynamic range of a song so that the entire song could be amplified to a greater extent before it pushed the physical limits of the medium…Peak levels were brought down…[and] the entire waveform was amplified.” This way, a song, album, or playlist sounds like it has a consistent level of maximum sonic intensity throughout. This helps a song cut through an otherwise noisy environment; just as a loud mix on a store’s Muzak can pierce through the din of the crowd, a loud mix on the radio can help one station stand out from its competitors on the dial. For much of its history, the recording industry thought that loudness correlated to sales and popularity.
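As a rough illustration, here’s a toy version of that “bring the peaks down, then turn the whole thing up” move (a sketch over a bare array of samples, assuming nothing about any real mastering chain):

    import numpy as np

    def loudness_maximize(samples: np.ndarray, threshold: float = 0.5,
                          ratio: float = 4.0) -> np.ndarray:
        """Toy 'loudness war' mastering: compress peaks, then amplify."""
        # Bring peak levels down: anything over the threshold is scaled back.
        over = np.abs(samples) > threshold
        compressed = np.where(
            over,
            np.sign(samples) * (threshold + (np.abs(samples) - threshold) / ratio),
            samples,
        )
        # Amplify the entire waveform so the new peak hits the medium's ceiling.
        return compressed / np.max(np.abs(compressed))

    # A "dynamic" track: quiet verse, loud chorus.
    t = np.linspace(0.0, 1.0, 44100)
    song = np.concatenate([0.2 * np.sin(880 * t), 1.0 * np.sin(880 * t)])
    loud = loudness_maximize(song)
    print(np.abs(song[44100:]).max() / np.abs(song[:44100]).max())  # ~5.0
    print(np.abs(loud[44100:]).max() / np.abs(loud[:44100]).max())  # ~3.1

After processing, the quiet passage sits nearly as high as the chorus: the whole track is “louder,” but the distance between soft and loud (the dynamic range) has been crushed.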

But many now consider loudness to be passé and even regressive. Framing it as a matter of “tearing down the wall of noise,” Sreedhar’s article treats loudness as the audio equivalent of the Berlin Wall–a remnant of an obsolete way of doing things, something that must be (creatively) destroyed so that something more healthy, dynamic, and resilient can rise from its dust. Similarly, the organizers of Dynamic Range Day argue that the loudness war is a “sonic arms race” that “makes no sense in the 21st century.” (What’s with the Cold War metaphors?) Maximal loudness, in their view, offers no advantages–according to the research they cite, it neither sells better, nor do average listeners think it sounds better. In fact, critics often claim overcompression damages both our hearing (maybe not our ears, but our discernment) and the music (making it less robust and expressive). Loudness is, in other words, unhealthy, both for us and for music.

As Sreedhar puts it,

many listeners have subconsciously felt the effects of overcompressed songs in the form of auditory fatigue, where it actually becomes tiring to continue listening to the music. ‘You want music that breathes. If the music has stopped breathing, and it’s a continuous wall of sound, that will be fatiguing’ says Katz. ‘If you listen to it loudly as well, it will potentially damage your ears before the older music did because the older music had room to breathe.’

At the end of 2014, we are well aware that breathing room is a completely politicized space: Eric Garner didn’t get it, cops do. “Room to breathe” is the benefit the most privileged members of society get by hoarding all the breathing room, that is, by violently restricting the movement, flexibility, dynamism, and health of oppressed groups. For example, in the era of hyperemployment, the ability to sit down and take a breather, or even to take the time to get a full night’s sleep, to exercise, to care for your body and not run it into the ground–that is what privilege looks like (privilege bought on the backs of people who will now have even less space to breathe, like the domestic/service workers, often women of color, whose labor lets upper middle class white women Lean In). “Room to breathe” is one way of expressing the dynamic range that neoliberalism’s ideally healthy, flexible subjects ought to have. So, it makes sense that this ideal gets applied to music aesthetics, too. Just as we ought to be flexible and have range (and restricting dynamism is one way to reproduce relations of domination), music ought to be flexible and have range.

By now it is well-known that women, especially women of color who express feminist and anti-racist views on social media, are commonly represented as lacking actual dynamic range, as having voices that are always too loud. As Goldie Taylor writes, unlike a white woman pictured shouting in a cop’s face as an act of protest, “even if I were inclined, I couldn’t shout at a police officer—not in his face, not from across the street,” because, as a black woman, her shouting would not be read as legitimate protest but as excessively violent and criminal behavior. White supremacy grants white people the ability to be understood as expressing a dynamic range; whites can legitimately shout because we hear them/ourselves as the norm. At the same time, white supremacy paints black people as always-already too loud: as Taylor notes, Eric Garner wasn’t doing anything illegal when he was killed–other than, well, existing as a black body in public space. White supremacy made it seem that, because Garner’s voice emanated from a black body, it was already shouting, already taking up too much “breathing room,” and thus needed to be muted to restore the proper “dynamic range” of a white supremacist public space.

Taylor continues, “merely mention the word privilege, specifically white privilege, anywhere in the public square—including on social media—and one is likely to be mocked.” These voices feel too loud because, from the perspective of their critics, they are supposedly both (a) lacking in range–they stay fixated on one supposedly overblown issue (social justice)–and (b) overrepresented in the overall mix of voices. Feminists on social media are charged with the same flaws attributed to overcompressed music (here by Sreedhar): “When the dynamic range of a song is heavily reduced for the sake of achieving loudness, the sound becomes analogous to someone constantly shouting everything he or she says. Not only is all impact lost, but the constant level of the sound is fatiguing to the ear.” Compression feels like someone “shouting” at you in all caps; this both diminishes the effectiveness of the speech and, above all, is unhealthy and “fatiguing” for those subjected to it. Similarly, liberal critics of women of color activists often characterize them as hostile, uncivil, or overly aggressive in tone, which supposedly diminishes the impact of their work, upsets the proper and healthy process of social change, and fatigues the public. Just as overcompressed music is thought to “sacrifice…the natural ebb and flow of music” (Sreedhar), feminist activists are thought to “sacrifice…the natural ebb and flow” of social harmony. But that’s the point. They’re sacrificing what white supremacist patriarchy has naturalized as the “ebb and flow” of everyday life.

But this “ebb and flow” is totally artificial. It just feels “natural” because we’ve grown accustomed to it as a kind of second nature. This ebb and flow is also what algorithmic technical and cultural practices are designed to manage and reproduce. That is, they (re)produce whatever “ebb and flow” optimizes a specific outcome–like user interaction, which optimizes data production, which ultimately optimizes surplus value extraction.

It’s not too hard to see how an unfiltered social media feed–like OG Twitter–might seem like overcompressed music. Linear-temporal, unfiltered Twitter TLs work like compression: each frequency/user’s stream of tweets is brought up to the same “level” of loudness or visibility–at its specific moment of expression, each rises all the way to the top. But just as overcompressed songs kill dynamic range and upset the balance between what “ought” to be quiet and what “ought” to be loud, unfiltered social media feeds supposedly upset the balance between what “ought” to be quiet and what “ought” to be loud, what “ought” to remain buried in the rest of the noise and what “ought” to cut through as clear signal. (Though what this norm “ought” to be is, of course, the underlying power issue here.) So in an era where all individuals can be egregiously loud, we need technologies and practices to moderate the inappropriately, fatiguingly loud voices, and amplify the ones whose voices contribute to the so-called health of that population.
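To caricature the two feed logics in code–purely an illustration of the analogy, not any platform’s actual ranking; the Tweet fields and scores here are invented:

```python
from dataclasses import dataclass

@dataclass
class Tweet:
    author: str
    timestamp: float         # when it was posted
    engagement_score: float  # a hypothetical platform-defined metric

def chronological_feed(tweets):
    # "Compression": every tweet rises all the way to the top at its
    # moment of expression; each voice is equally "loud."
    return sorted(tweets, key=lambda t: t.timestamp, reverse=True)

def ranked_feed(tweets):
    # "Loudness normalization": a platform-defined score decides which
    # voices cut through as signal and which stay buried as noise.
    return sorted(tweets, key=lambda t: t.engagement_score, reverse=True)

feed = [Tweet("activist", 3.0, 0.2), Tweet("brand", 1.0, 0.9)]
print([t.author for t in chronological_feed(feed)])  # ['activist', 'brand']
print([t.author for t in ranked_feed(feed)])         # ['brand', 'activist']
```

The politics, of course, live entirely inside that scoring function: whoever defines it defines what “ought” to be loud.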

Many digital music players and streaming services have algorithms that cut overly loud tracks down to size. There’s Replay Gain, which is pretty popular, and Apple’s Sound Check; neither makes any individual track more dynamic, but instead they tame overly loud tracks and bring the overall level of the mix/library/stream to an average consistency. In a way, these are sonic analogues to social media’s feed algorithms–they restore the “proper” balance of signal and noise by moderating overly loud voices, voices that generate user/listener responses that don’t contribute to the “health” of whatever institution or outcome they’re supposed to be contributing to. These tools also seem to work a lot like compression itself–instead of bringing everything in a single track to the same overall level of loudness, they bring everything in a playlist, stream, or library to the same overall level of loudness. Is the difference between dynamic compression for loudness and algorithmic loudness normalization simply the level at which loudness normalization is applied?
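Here’s a rough sketch of that structural difference, assuming a crude RMS loudness proxy and an arbitrary target level. This is not ReplayGain’s or Sound Check’s actual algorithm (ReplayGain proper uses equal-loudness filtering and statistical measurement); it just shows the shape of the operation: one gain value per track, no change to any track’s internal dynamics.

```python
import math

def rms_db(samples):
    """A very rough loudness proxy: RMS level in decibels."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

def normalization_gain(samples, target_db=-14.0):
    """How many dB to boost or cut this track to hit the shared target."""
    return target_db - rms_db(samples)

def apply_gain(samples, gain_db):
    factor = 10 ** (gain_db / 20)
    return [s * factor for s in samples]

# A loud track gets turned down; a quiet one gets turned up --
# but neither becomes internally more or less dynamic.
loud_track = [0.9, -0.95, 0.92, -0.9]
quiet_track = [0.1, -0.12, 0.11, -0.1]
for track in (loud_track, quiet_track):
    gain = normalization_gain(track)
    print(f"{gain:+.1f} dB", [round(s, 2) for s in apply_gain(track, gain)])
```

Structurally, it really is compression one level up: the “samples” being evened out are whole tracks rather than moments within a track.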

Dynamic compression and dynamic range aren’t just about music, or hearing, or audio engineering. The aesthetic and technical issues in the compression-vs-range debate are local manifestations of broader values, ideals, and norms. The era of YOLO is over. Dynamic range, or the ability to responsively attune oneself to variable conditions and express a spectrum of intensity, is generally thought to be more “healthy” than full-throttle maximalization–this is why there are things like “digital detox” practices and rhetoric about “work/life balance” and so on. At the same time, range is only granted to those with specific kinds of intersecting privilege. Though the discourse of precarity might encourage us to understand it as an experience of deficit, perhaps it is better understood, at least for now, as an experience of maximal loudness, of always being all the way on, of never getting a rest, never having the luxury of expressing or experiencing a range of intensities.

 

This is a cross-post from Its Her Factory.


Frank Swain has a hearing aid that sonifies ambient WiFi signals. A Bluetooth-enabled digital hearing aid paired with a specially programmed iPhone (and its WiFi detector), the device, named Phantom Terrains, “translate[s] the characteristics of wireless networks into sound….Network identifiers, data rates and encryption modes are translated into sonic parameters, with familiar networks becoming recognizable by their auditory representations.” The effect, Swain says, is “something similar to Google Glass – an always-on, networked tool that can seamlessly stream data and audio into your world.” (I’ll leave the accuracy of this comparison to people who have thought more about Glass than I have.)

Why would anyone want to do this? What’s the point of being able to sense, to detect and interpret, the flows of data that are transmitted in your environment? For Swain and his collaborator Daniel Jones, data transmissions are just as much a part of the material, engineered, designed, and planned environment as roads, pipes, and buildings are. We exist in a “digital landscape,” and just like all landscapes, this one has a social meaning and a politics. “Just as the architecture of nearby buildings gives insight to their origin and purpose, we can begin to understand the social world by examining the network landscape.”

But why hearing? Why is audition the (best? easiest?) medium for modulating the WiFi/data spectrum into part of the spectrum more or less “normal” human bodies can interface with? Why is “hearing” the “platform for augmented reality that can immerse us in continuous, dynamic streams of data”? [Aside: Does the platform metaphor frame hearing as an interface for data? And what conceptual work does this metaphor do? Is it a sort of mise-en-place that sets up the argument, makes it easier and faster to put together?]

As Swain writes in the New Scientist,

Hearing is a fantastic platform for interpreting dynamic, continuous, broad spectrum data. Unlike glasses, which simply bring the world into focus, digital hearing aids strive to recreate the soundscape, amplifying useful sound and suppressing noise. As this changes by the second, sorting one from the other requires a lot of programming.

Hearing is the medium for translating data into humanly perceptible form because it’s the best input mechanism we have for the kind of substance that data materially manifests as. Contemporary science understands hearing as a “problem of signal processing” (Mills 332). Because more than a century of research into hearing (which was, as Jonathan Sterne and Mara Mills have shown so elegantly, completely tied to technology R&D in the communications industry) has led us to understand hearing itself as dynamic, continuous, broad spectrum signal processing, what better medium for representing data could there be?

The practice of translating between electrical signals and human sense perception is rooted in practices and technologies of hearing. As Mills writes, “Electroacoustics has been at the forefront of signal engineering and signal processing since ‘the transducing 1870s,’ when the development of the telephone marked the first successful conversion of a sensuous phenomenon (sound) into electrical form and back again” (321). Our concepts of and techniques for translating between electrical signals and embodied human (or, mostly human–cats apparently once played a huge role in electroacoustic research) perception are significantly shaped by our understanding of sound and hearing. Thus, as Sterne writes, “the institutional and technical protocols of telephony also helped frame…the basic idea of information that subtends the whole swath of ‘algorithmic culture.’”

So, at one level, it’s pretty obvious why hearing is the best and easiest way to perceive data: a couple centuries of scientific research, both in audition and in signal processing technology, have constructed and cemented an extremely close relationship between hearing and electronic signal processing.

The whole point is that this is a very particular concept of hearing, a culturally, historically, and materially/technologically local idea of what sound is, how “normal” bodies work, and how we interpret information.

There are some assumptions about listening and musicality embedded in Swain and Jones’s own understanding of their project…and thus also in Phantom Terrains itself.

Phantom Terrains relies on listeners’ acculturation to/literacy in very specific musical conventions. Or, its sonification makes sense and is legible to listeners because it follows some of the same basic formal or structural/organizational conventions that most Western music does (pre-20th century art music, blues-based pop and folk, etc.). For example, “the strength of the signal, direction, name and security level on these [WiFi signals] are translated into an audio stream,” and musical elements like pitch, rhythm, timbre, and melody are the terms used to translate digital signal parameters (strength, direction, name, security level, etc.) into audio signal parameters. Phantom Terrains relies on listeners’ already-developed capacities to listen for and interpret sounds in terms of pitch, rhythm, timbre, etc. For example, it builds on the convention of treating melodic material as the sonic foreground and percussive rhythm as sonic background. Swain describes the stream as “made up of a foreground and background layer: distant signals click and pop like hits on a Geiger counter, while the strongest bleat their network ID in a looped melody.” This separation of rhythm (percussive clicks and pops) and pitched, concatenated melody should be easily legible to people accustomed to listening to blues/jazz/rock music, or European classical music (the separation of a less active background and more melodically active foreground builds on 19th century ideas of foreground and background in European art music). In other words, Phantom Terrains organizes its sonic material in ways that most Western music organizes its sonic material.
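As a thought experiment, a mapping of this kind might look like the sketch below. To be clear, this is not Phantom Terrains’ actual code: the field names, timbre table, and foreground threshold are all invented; only the general mapping (signal parameters to musical parameters, strong signals to foreground melody, weak ones to background clicks) comes from Swain’s description.

```python
def network_to_sound(network):
    """Map a WiFi network's attributes onto (invented) musical parameters."""
    # Hash the SSID deterministically to one of 12 pitch classes, so a
    # familiar network becomes recognizable by its melody.
    pitch_class = sum(map(ord, network["ssid"])) % 12
    return {
        "amplitude": network["strength"],   # stronger signal -> louder voice
        "pitch_class": pitch_class,
        # Encryption mode mapped to timbre (an invented convention).
        "timbre": {"open": "sine", "wep": "square", "wpa2": "saw"}.get(
            network["security"], "noise"),
        # The foreground/background convention: weak, distant signals
        # render as unpitched Geiger-counter clicks rather than melody.
        "role": "melody" if network["strength"] > 0.5 else "clicks",
    }

print(network_to_sound({"ssid": "CoffeeShop", "strength": 0.8,
                        "security": "wpa2"}))
```

Notice how much Western musical convention is baked into even this toy version: pitch as identity, loudness as salience, melody as foreground.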

The device also builds on established concepts and practices of listening. In the New Scientist piece, Swain describes listening in two different ways: as being “attuned for discordant melodies,” and as a kind of forced exposure to noise that one must learn to “tolerate.” “Most people,” he writes, “would balk at the idea of being forced to listen to the hum and crackle of invisible fields all day. How long I will tolerate the additional noise in my soundscape remains to be seen.” I’ll get to the attunement description shortly; right now I want to focus on listening as noise filtering or noise tolerance. It seems to me that noise filtering and tolerance is the basic, fundamental condition of listening in contemporary US society (and has been for 100+ years…just think about Russolo’s The Art of Noises, published in 1913). There’s SO MUCH NOISE: vehicles, animals, the weather, other people, ubiquitous music, appliances and electronic devices, machines (fridge, HVAC, etc.)…In order to hear any one thing, any one signal–someone’s voice, a recording or broadcast–we have to filter out all the surrounding noise. And buildings are built, nowadays, to help us do this. Architects incorporate white noise into their designs so that it covers over disruptive noises: HVAC sounds can mute the conversation in the office next to you, making it easier to focus on your own work; restaurant muzak masks nearby conversations so it’s easier to hone in on the people at your table; there are a bazillion “8 hours of fan noise” videos on YouTube to mask the night’s bumps and help you sleep. Noise doesn’t necessarily distract us from signal; it can help us hone in on the culturally and situationally most salient ones. All this is to say: I don’t think the “extra” layer of sound Phantom Terrains adds to “normal” human bodily experience will ultimately be that distracting. As with all other parts of our sonic environments, we’ll figure out how and when to tune in, and how and where to let it fade into the unremarked-on and not entirely conscious background. We just need to develop those practices, and internalize them as preconscious, habitual behaviors. Our senses process large quantities of information in real-time: they’re wetware signal/noise filters. To ‘hear’ data, we’d just have to re-tune our bodies–which will take time, and a lot of negotiation of different practices till we settle on some conventions, but it could happen.

Swain also describes listening as a practice of picking dynamically emergent patterns out of swarms of information: “we could one day listen to the churning mass of numbers in real time, our ears attuned for discordant melodies.” Let’s unpack this: what are we listening to, and how do we hone in on that? We’re listening to “a churning mass of numbers”–so, we’re not really listening to sounds, but to masses of dynamic data. We focus our hearing by “attun[ing] to discordant melodies”–by paying special attention to the out-of-phase patterns (overtones) that emerge from all that noisy data. “Discordant melodies” aren’t noise–they’re patterned/rational enough to be recognizable as a so-called melody; but they’re not smooth signal, either–they’re “dis-cordant.” They are, it seems, comparable to harmonics or partials, the sounds that dynamically emerge from the interaction of sound waves (harmonics are frequencies that are whole-number multiples of the fundamental tone; inharmonic partials are not whole-number multiples of the fundamental). To be more precise, these “discordant melodies” seem to most closely resemble inharmonic partials: because they are not resolvable into whole-number multiples of the fundamental frequency, they vibrate slightly out of phase with the fundamental, and thus produce a slight sense of dissonance. Swain’s framing of listening treats data as something that behaves like sound–dynamic flows of data work like we think sound works, so it makes sense that they ought to be translatable into actual, audible sounds. Phantom Terrains treats data as just another part of the frequency spectrum that lies just outside the terrain of “normal” human hearing.
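The acoustics behind that distinction can be made concrete with one standard formula: an ideal harmonic series puts partials at exact whole-number multiples of the fundamental, while a stiff string (a piano string, say) stretches them into inharmonic partials, f_n = n·f0·√(1 + Bn²), where B is an inharmonicity coefficient. A small sketch, with an invented but plausible B:

```python
import math

f0 = 110.0  # fundamental frequency in Hz
B = 0.0004  # inharmonicity coefficient (illustrative value)

for n in range(1, 6):
    harmonic = n * f0                            # ideal whole-number multiple
    partial = n * f0 * math.sqrt(1 + B * n * n)  # stretched, "discordant"
    print(f"n={n}: harmonic {harmonic:7.2f} Hz, "
          f"partial {partial:7.2f} Hz (+{partial - harmonic:.2f} Hz)")
```

The higher partials drift farther from the harmonic grid, which is roughly what “slightly out of phase, slightly dissonant” names.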

The kind of listening that Phantom Terrains performs is really, really similar to the kind of listening or surveillance big data/big social media makes possible. I’ve written about that here and here.

Phantom Terrains is just more evidence that we (and who exactly this ‘we’ is bears more scrutiny) intuitively think data is something to be heard–that data behaves, more or less, like sound, and that we can physically (rather than just cognitively) interface with data by adjusting our ears a bit.

But why the physical interface? I think part of it is this: More conventional ways of interacting with data are propositional: they’re equations, statistics, ratios, charts, graphs, and so on. To say that they’re propositional means that they are coded in the form of words, concepts, and/or symbols. They require explicit, intentional, consciously thematized thought for interpretation. Physical interfaces don’t necessarily require words or concepts: you physically interface with the bicycle you’re riding. So, though you are ‘thinking’ about riding that bike, you’re not doing so in terms of words and concepts, but in more kinesthetic and haptic terms. When I’m riding a bike, I don’t think “whoops, off balance, better adjust”–I just intuitively notice the balance problem and adjust, seemingly automatically. I can also ride a bike (or drive, or walk, or make coffee, or fold laundry, or do any number of things) while doing something else that requires more focused, explicit intention, like talking or listening to music. So, these kinesthetic knowledges can themselves be running in (what seems like) the background while our more (seemingly) foreground-focused cognitive processes take care of other business.

Phantom Terrains not only assimilates concepts and practices of data production and consumption to already pretty normalized concepts and practices of human embodiment; its physical interface also naturalizes specific ways of relating to and interfacing with data, sedimenting contested and historically/culturally local knowledges in our bodies as though they were natural, commonsense, instinctual capacities. In a way, the physical interface makes our relationship with data manifest in the same way that our relationships with white supremacy and patriarchy manifest–written in, on, and through our bodies, as a mode of embodiment and embodied knowledge. As Mills argues, “all technical scripts are ‘ability scripts,’ and as such they exclude or obstruct other capabilities” (323). So, the question we ought to ask is: what abilities are activated by physically interfacing with data in the form of sound, and what capabilities are excluded and obstructed?

 

It’s Thanksgiving, at least in the US. Originally, I had planned to do a post about feminized digital labor specifically related to “holiday” preparation, but I spent all my time finishing up another project for my own blog–a holiday weekend longread about a new academic book on, among other things, race/gender/sexuality and posthumanisms. One of the main things my post tries to do is work through the way the book distinguishes between the kinds of posthumanisms that actually do anti-racist, feminist work, and the kinds of posthumanisms that merely reinforce and strengthen white supremacist patriarchy.

Because this is Cyborgology, after all, at least some of y’all are probably interested in posthuman theory. So, I thought I’d share with y’all the post I put up on Its Her Factory. It’s definitely about theory, and it’s about an academic book, but I think it’s accessible enough and grounded enough to be of interest to at least some of the Cyborgology audience. Here’s the introduction to that post, which you can find in full here:

“Habeas viscus” is Alexander Weheliye’s term for the queerly racialized mobilities activated by the erstwhile immobilization of “exceptional” populations (populations that, in his terms, are neither the fully human nor the not-quite-human, but the absolutely non-human). “Exceptional” populations are the ones filtered out of a biopolitically healthy society. Seen as individually and collectively incapable of reform or adaptation, of currently or potentially embodying the dynamism and flexibility that are thought to characterize neoliberalism’s healthy, successful subjects, exceptional populations are subjected to various techniques–like surveillance, quarantine (e.g., in the PIC), debt–that produce the material, social immobility they claim to manage.

For example, the editorial choices in the Hollaback! Project’s “10 Hours of Walking in NYC as a Woman” video render black and Latino men exceptions to post-feminist MRWaSP society. The video claims to document the extensive but mundane nature of misogynist street harassment that (otherwise privileged white) women face while in public. However, as many critics noted, the video does not depict any white male perpetrators. It thus presents men of color as solely responsible for (white) women’s street harassment, as embodying a “backwards” masculinity that is out of synch with post-feminist society. This video contributes to the stereotype that “urban” men of color–poor and working-class black and Latino men–both will not and cannot adapt to keep pace with contemporary social norms and mores…that they drag “us” down and hold “us” back. (“Us” here being the “healthy” and/or treatable portions of the population.) The mobility of thusly racialized, gendered, sexualized bodies on city streets and the unrestricted transmission of their speech in public space thus appears to be something that prevents society from moving forward. In order to keep society healthy, their excessive mobility must be reined in. Producing the immobility it claims to manage, the video reinforces the idea that poor and working-class black and Latino men are psychologically and culturally inflexible.

On top of this psychological and cultural immobility, the video has been interpreted as evidence that the criminal justice system ought to more intensively restrict the social and material mobility of “urban” black and Latino men. In response to the video, the New York Times published a roundtable on the question “Do We Need A Law Against Catcalling?” As many feminists pointed out, because of the already racist structure of the Prison Industrial Complex, carceral solutions such as this one would further intensify the already extensive and excessive immobilization of poor and working-class black and Latino men.

So, the politics of exception produces “bare life” as immobile, inflexible, rigid, the opposite of “vibrant” matter. If biopolitical MRWaSP optimizes the life, the ‘vibrancy’ of the human or not-quite-human populations, it dampens and masks the vibes emanating from exceptional, non-human populations. Weheliye’s point is, to be a bit reductive, this: queer perceptual practices such as habeas viscus can tune into the “exceptional” vibes that hegemonic institutions mask from (mostly) human perception. Though white European thinkers tend to present exceptional populations as non-vibrant, as absolutely dead (e.g., Agamben’s Muselmann), it is more accurate to think of them as queerly un-dead, as emitting signal that conventionally-tuned ears can’t recognize. Whereas lots of white feminist materialists want to grant agency and “voice” to Modernity’s not-quite-human others, to integrate legibly ‘vibrant’ matter into a more mobile concept of (post)humanity, Weheliye draws our attention to the “anti-matter” force of the flesh/habeas viscus, to the ways of not-being (human) and not-living (biopolitically) that exceptional populations have practiced for centuries (if not millennia).

 

So it’s pretty hard to find a critique of Taylor Swift’s new record that isn’t also (or mostly) misogynist. As they say in academia, consider this piece an attempt to fill that gap in the literature. I may get to the actual record later, but for now I want to think about her business model.

Swift made headlines this week for two different, but ultimately, I think, related moves. First, she pulled her music from the free streaming part of Spotify. In an interview with Yahoo Music, she explained that she was

not willing to contribute my life’s work to an experiment that I don’t feel fairly compensates the writers, producers, artists, and creators of this music. And I just don’t agree with perpetuating the perception that music has no value and should be free.

In an economy that has made free labor a de facto requirement for middle-class and creative jobs, Swift’s claim about fair compensation seems, on the one hand, laudable. From this perspective, she’s pushing back on the increasing demand for unwaged labor. But then we have to ask, on Spotify, whose labor is free? What about the fan labor of training the streaming algorithms? Of liking and unliking, skipping and playlist building? Swift doesn’t mention the unfairness of this sort of free labor. In her view, “art” deserves to be compensated…but maybe fan labor, which is a kind of care work, doesn’t deserve such ‘fair’ compensation. Or, to use some of Swift’s own language, the “amount of heart and soul an artist has bled into a body of work” is deserving of fair compensation, but the affective labor of fandom isn’t? From this perspective, Swift’s refusal to perform free labor sounds a lot like bourgeois white feminist demands for waged labor that then pass the underwaged care work off to less privileged women. (Eric Harvey has some really incisive things to say about Swift v Spotify here.)

It’s really, really interesting to see how she uses the discourse of “art” to distinguish between her creative affective labor and fans’ affective labor. In her WSJ op-ed from earlier this year, she argues: “Music is art, and art is important and rare. Important, rare things are valuable. Valuable things should be paid for.” Swift’s argument implies that there are some valueless things, some things that don’t have to be “paid for” because they aren’t important. So, just as race and class work to construct some women as rapeable (i.e., as valuable property) and some women as unrapeable (i.e., there for the taking, without consequence), [1] just as art/craft hierarchies have historically (I mean, for several centuries, at least in the West) constructed some people’s labor as important and valuable and other people’s labor as mundane, Swift’s appeals to art and value distinguish between affective labor that deserves compensation, and affective labor that, by implication, ought to remain uncompensated.

In fact, her whole argument centers on a metaphor of monogamous marriage:

Some music is just for fun, a passing fling (the ones they dance to at clubs and parties for a month while the song is a huge radio hit, that they will soon forget they ever danced to). Some songs and albums represent seasons of our lives, like relationships that we hold dear in our memories but had their time and place in the past. However, some artists will be like finding “the one.”

People buy records by “the one.” Marriage is and pretty much always has been a cultural practice for maintaining the racist, cis/heterosexist distribution of wealth and property. So OF COURSE it makes sense to compare a viable music business model, one that puts and keeps the money in the hands of the people who’ve always had it, to marriage. When Swift says “I believe couples can stay in love for decades if they just continue to surprise each other, so why can’t this love affair exist between an artist and their fans?” I really just want to ask her to read Carole Pateman’s chapter on “The Marriage Contract.” It shows how marriage is basically a relationship for expropriating women’s property (or, more technically, property-in-person) from them. In Swift’s analogy, fans are like married women, artists like the husbands who reap the benefits of their labor.

The second thing Swift did this week was release a video, “Blank Space.” But this isn’t just any video. She released an interactive app to go along with the video. As this Wired article remarks, the app transforms the music video into something like a video game experience. Music videos have been prosumery “interactive” for a while now–think of how many fan re-edits, vids, lip dubs, and lyric videos there are on YouTube. This is just an attempt to channel some of the money (and data? Is this app anything like the Jay Z/Samsung surveillance album?) into the artist’s and label’s pockets, rather than YouTube/Google’s pockets. As WIRED put it, “the Blank Space app is, unlike Spotify, a way for Swift to dictate the terms of an experiment and be at the forefront of a new marketing frontier.”

What these two moves share is the underlying view that some kinds of affective labor and digital interactivity are good–the kinds that Swift can both control and extract the most surplus value from–and some kinds of interactivity are bad–the kinds that Swift doesn’t control and extract enough surplus value from. The bad kinds feminize Swift–they put her in the position of feminized laborer, of wife. We can think of Swift’s two moves this week as attempts to Lean In, that is, to pull herself out of structural/economic feminization.

 

[1] In this light, Swift’s own elision of music/art and femininity is really telling. In the WSJ piece, she says “My hope for the future, not just in the music industry, but in every young girl I meet…is that they all realize their worth and ask for it.”