Three articles came out this week that help me develop my concept of droning as a general type of surveillance that differs in important ways from the more traditional concept of “the gaze” or, more academically, “panopticism.” There’s Molly Crabapple’s post on Rhizome, the NYTimes article about consumer surveillance, and my colleague Gordon Hull’s post about the recent NSA legal rulings over on NewAPPS. Thinking with and through these three articles helps me clarify a few things about the difference between droning and gazing: (1) droning is more like visualization than like “the gaze”–that is, droning “watches” patterns and relationships among individual “gazes,” patterns that are emergent properties of algorithmic number-crunching; and (2) though the metaphor of “the gaze” works because the micro- and macro-levels are parallel/homologous, droning exists only at the macro-level; individual people can run droning processes, but only if they’re plugged into crowds (data streams or sets aggregating multiple micro- or individual perspectives).

I. Glass

Google Glass is a great illustration of the way droning layers itself on top of the infrastructure (both technological and cultural, like behaviors and norms for social interaction) of the gaze. Google Glass takes the gaze–individual sight–and uses it as the medium for data generation and collection. It superimposes droning onto the panoptic visual gaze; in this way droning is “super-panoptic” (to use Jasbir Puar’s term).

Crabapple’s article makes this superimposition really clear. On the one hand, Google Glass broadcasts her individual gaze: “Google Glass lets the government see the world from my perspective. With Glass Gaze, I was giving the network the same opportunity.” But, on the other hand, “the network quantifies eyeballs. It can’t see what’s behind the eyes.” Her individual gaze is broadcast, but it’s just the medium for another type of cultural and economic production, in the same way that paint (pigment + emulsifier) is a medium for the production of a painting. The message is the medium, in a way–a medium for the production of datasets, which are then visualized (that is, algorithmically processed). We can’t see the message until the algorithms visualize it for us. We rely on drones–here, algorithms–to see in the first place. And those drones need data: “INPUT! More input!”

Interlude: data

Over on NewAPPS, my UNC Charlotte colleague Gordon Hull has been writing about the recent court rulings on NSA surveillance programs. He argues that “data” is different from “information”–information is meaningful (it has semantic content) in and of itself, whereas “data” is meaningful only in relation to other data; but we can only see that relationship when we process the data through algorithms designed to pick these relationships out of the enormous haystacks of data we’re constantly collecting. As he explains:

This is the data/information distinction at work: the data by itself (or in a vacuum) is meaningless – and may even be meaningless forever – but you cannot even know whether it will rise to the level of information until after you run the analytics (hence my claim that privacy arrives too late).  In this, I think, big data is charting new territory, insofar as older kinds of surveillance did not extensively collect material that was not obviously meaningful in some way or another.

So this new kind of surveillance isn’t inserting itself into our lives by invading our privacy and listening to or stealing our information. Rather, it compels and rewards us for generating data. We have to feed the algorithms. That’s one way to read Crabapple’s claim that “In a networked world, we’re all sharecroppers for Google.” (And, somewhat as an aside: if labor is historically less protected than privacy, perhaps that’s one of the insidious effects of this type of surveillance–it both eviscerates privacy (as Hull argues) and turns surveillance into labor performed by the surveilled.)

As I read Hull, he’s arguing that “data” is necessarily big–”all data needs to be available for collection, since we can never know what data is going to be meaningful.” So, not only does the government need to collect all the data that’s made, but it’s actively interested in getting us to produce all the data that could possibly exist.

II. “More eyes, different eyes”

One of the really interesting things Crabapple’s drawing performance does is perspectivally multiply the individual user’s “gaze.” She explains: “I was caught between focusing on the physical girl, the physical paper and the show that was being streamed through my eyes.” With three different perspectives before her eyes, Crabapple’s experience mimics, in scaled-down form, droning’s “perspectival” vision.

Droning, as a type of surveillance, isn’t the perspective of one individual viewer. It’s not the Renaissance vanishing-point perspective that you learn about in high school art class. That sort of perspective is oriented to the gaze of a single viewer. Droning is perspectival in Nietzsche’s sense of the term: it aggregates multiple single-viewer perspectives in a way that supposedly provides a more circumspect, more accurate account than any single perspective could. He argues,

we can use the difference in perspectives and affective interpretations for knowledge…There is only a perspectival seeing, only a perspectival ‘knowing’:…the more eyes, various eyes we are able to use for the same thing, the more complete will be the ‘concept’ of the thing, our ‘objectivity’ (Genealogy of Morals, Second Essay section 12).

This goes back to what I said earlier, via Hull, that data has to be big—the most valuable knowledge isn’t “information”–the content of an individual perspective, what a gaze sees–but the “difference in perspectives,” the relationships among “more, various” eyes. For example, when I use Yelp or TripAdvisor or some other internet ratings site, I don’t trust any one reviewer over another–I’m looking for consistent patterns across reviews (e.g., did everyone describe similar problems?). In order for there to be patterns, there has to be a largeish pool of data, enough reviews to establish a trend.
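The review example can be made concrete with a toy sketch in Python (the reviews and the list of complaint terms are entirely made up for illustration). The point is that no single review is trusted on its own; a “problem” only becomes visible as a pattern when the same complaint recurs across a large enough pool:

```python
from collections import Counter

# Hypothetical reviews -- invented for illustration only.
reviews = [
    "great food but slow service",
    "slow service, cold fries",
    "loved the burgers, service was slow",
    "fantastic atmosphere",
]

COMPLAINT_TERMS = {"slow", "cold", "rude", "dirty"}

def recurring_complaints(reviews, min_mentions=2):
    """Count how many separate reviews mention each complaint term,
    and keep only the terms that recur across the pool. The 'pattern'
    exists only at the level of the aggregate, not in any one review."""
    counts = Counter()
    for r in reviews:
        words = set(r.replace(",", "").split())
        counts.update(COMPLAINT_TERMS & words)
    return {term for term, n in counts.items() if n >= min_mentions}

trend = recurring_complaints(reviews)  # → {"slow"}
```

Here “cold” appears only once, so it stays noise; “slow” recurs in three separate reviews and so rises to the level of a trend–but only because the pool of reviews was big enough for recurrence to be measurable at all.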

So, droning needs as many “eyes” as possible to generate data. Enter Tuesday’s NYTimes article by Quentin Hardy on the growing ubiquity of webcams, webcams attached, even, to tortoises. Tortoisecam, according to Hardy,

illustrates the increasing surveillance of nearly everything by private citizens…[T]he sheer amount of private material means an enormous amount of meaningful behavior, from the whimsical to the criminal, is being stored as never before, and then edited and broadcast.

Droning outsources regular old panoptic surveillance to private citizens, often performed in their leisure time and/or as second-shift labor (e.g., nannycams). As a matter of droning, the issue with this widespread private surveillance isn’t privacy (to extend Hull’s argument), because droning isn’t collecting information–it isn’t interested in the camera’s gaze. [1] Rather, droning is interested in the data created by the camera. It actively encourages the proliferation of private surveillance cams because it needs “more eyes, various eyes,” as many eyes as it can get…even from tortoises.[2]

III. So Why Call It Droning?

At first glance, the kind of surveillance I call “droning” doesn’t seem to be closely related to autonomous aerial vehicles. However, I think “droning” is the right term for a few reasons:

First, it’s omnipresent. It’s a constant background, like a musical drone in, say, Indian classical music.

Second, it’s an autonomous process run by data-generating, data-saving, and above all number-crunching hardware and software. It doesn’t need to be operated by, or to correspond to, the gaze/perspective of an individual human being.

Third, as I discussed above, droning happens at the level of the swarm, the flock, the population. UAVs (unmanned aerial vehicles) are operated by teams, and they rely on both living and autonomous/machinic/digital team members. The vehicle is just the representative of its constituents, its team. Droning is not something one person does to another, or a tyranny of a “majority” over a minority; droning is of, by, and for ‘the people’. [3]

[1] Because droning isn’t in the camera’s gaze, droning works differently than the “male gaze,” at least as it is classically conceived in feminist film theory. Mulvey argues that the camera’s gaze is what sutures the male gaze, what makes it seem and feel authoritative. But droning isn’t in the camera’s gaze; it’s in the camera’s metadata. In my forthcoming book with Zer0 Books, I have a chapter that discusses the ways the male gaze has been reworked by contemporary media and by biopolitics.

[2] Hardy’s Times article also quotes Evan Selinger, an associate professor of the philosophy of technology at the Rochester Institute of Technology: “Should the contractor like being seen all the time? What happens to the family unit? Sometimes the key to overcoming resentment is being able to forget things.” In this last sentence Selinger is referring to the opening sections of the Second Essay of Nietzsche’s Genealogy–the same text I cited above regarding perspectivism. There, Nietzsche argues that “bad conscience” or “ressentiment” is due, in part, to the inability to “forget” or “get over things.” (I know that’s a massive oversimplification of his argument, but you can read it for yourself in the above-linked copy of the text.) Maybe Selinger has been edited/quoted in a way that misrepresents his claim, but I don’t think big/ubiquitous surveillance fails to forget. Droning forgets–in fact, the data/information distinction is a great illustration of precisely the kind of Nietzschean “active forgetting” Selinger alludes to in his comment. There is an explicit choice to let data sit fallow as just data (not information); that choice is coded into the algorithm itself.

[3] Is there any consensus on what a flock of drones is called? I asked about this on Twitter earlier this week, and Sarah Jeong suggested calling it “an unconstitutionality of drones,” which I may like even better than my suggestion of calling it a “murder of drones,” after a murder of crows.

 

Robin is on Twitter as @doctaj.

If drone sexuality means machines telling us who we are and what we want, then dating site algorithms are drone sexuality, right?

As I said last week, I’m responding to Sarah’s recent series of posts on drone sexuality. In this post, I want to follow through/push one of Sarah’s concerns about the way her account relied on binaries–both gender binaries (masculine/feminine) and subject/object binaries. I don’t know if Sarah would want to follow my argument all the way, but, that’s one thing that’s great about thinking with someone–you can develop different but related versions of a theory, and more fully explore the intellectual territory around an issue, topic, or question.

What if droning isn’t something “masculine” phenomena do to “feminine” ones, but a process that everyone/everything undergoes, and, in sifting out the erstwhile winners from losers, distributes gender privilege? In other words, droning is a set of processes that dole out benefits to “normally” gendered/sexually oriented phenomena (masculine, cis-gendered, homo- and hetero-normative, white, bourgeois ones), and that subject “abnormally” gendered/sexually oriented phenomena (feminine, trans*, queer, non-white, working class) to increased vulnerability and death?

Before I get into my argument, I want to first summarize what I understand Sarah’s argument to be. I want to make sure I’m not grossly misreading her ideas (or, maybe, just clarify my reading so Sarah can then point out my mistakes/our points of divergence).

Sarah is asking two main questions: (1) “what happens to sexuality in a surveillance state,” and (2) “What happens when being known isn’t the task of human beings but of machines?” This second question shows, I think, that Sarah’s actually asking about what happens to sexuality under a specific type of ubiquitous surveillance–surveillance performed by machines, through machines, and in which machines tell us not just “the” truth, but our truths, who I really am as a person (I’m riffing here a bit on Foucault’s History of Sexuality v1, in which he argues that we think our sexuality contains some deep inner truth about ourselves). Rob Horning has done some really excellent work on this question of machines telling us our truths, telling us who we really are as people. We barf data into the algorithms, and they spit out our “selves” for us and everyone else to see. The important thing to take from Rob’s work is that these “machines” include algorithms and big data; algorithms also drone. But, back to Sarah’s question: these machines aren’t just surveilling us, they’re producing knowledge. (And, to connect back to the sexuality question, there’s that nice resonance between “knowledge” in the epistemic sense and “knowledge” in the carnal, “biblical” sense.) So, drones (the surveillance machines) produce us as beings capable of knowing ourselves (i.e., conscious, self-aware subjects), and as beings capable of being known carnally, i.e., as sexual subjects. I’ll return to the question of how drones produce us as knowing and knowable/known in a bit.

Importantly, as Sarah emphasizes, this surveillance is not limited to the state.

I don’t think we always explicitly identify the surveillant power of drones specifically with a state. I think that drones are both vaguer and more flexible than that, and for me the idea of droneness is something that isn’t reliant on a state for its existence. A drone itself is a manifestation of and a symbol for potentially any and all forms of surveillance, power, violence, control.

…and, of course, pleasure. So, “drone” is Sarah’s term for a general condition–the contemporary configuration of power, domination, control, and maybe even resistance(?). Drones can, in this account, be specific instruments used to maintain and intensify that configuration (e.g., autonomous aircraft, data algorithms, watch lists, etc.). But, if droning is also a “symbol for potentially any and all forms” this configuration of power can take, then droning is also a condition, the contemporary condition produced and maintained by these instruments.

I want to think about droning as a condition not just because this builds on my earlier account of droning, but, more importantly for my purposes here, because I think it helps show how Sarah’s account can work without relying on the binaries that she found troublesome. Sarah says:

The Gaze of a drone is penetrative, because all Gazes are fundamentally penetrative. Sexual violence is gendered: the aggressive performance of violence is masculine performance, and suffering the consequences of violence is constructed as a feminine act. Likewise, traditional forms of sexual power and control. Cisgendered men are powerful; women are weak and submissive. Men watch; women are available for the watching.

I should note here that I’m treating this as more of a binary than I’m strictly comfortable with, and in future I hope this framework can be expanded to allow for a better approach to the diversity of gender, because I think there’s some fascinating stuff going on there.

So, in her post Sarah framed droning as something the privileged do to the less privileged: “men watch; women are available for watching.” But what if droning isn’t something we do (or is done to us by others), but something that’s more like a precondition for our action (or interaction)? What if droning is, as I mentioned earlier, something that makes us knowing and knowable? In other words, what if droning is the process that positions us as people with specific identities (knowing) and desires for (knowable) particular sorts of people with specific identities and desires? Droning, in other words, would be what determines your position in white cisgender hetero- and homo-normative patriarchy. Let me try to explain (and again, this is pretty provisional, so if it doesn’t make sense lemme know!):

As I see it, droning is a configuration of the relations of social, political, economic, and ideological production. It’s like a musical drone in the sense that it’s the constant, consistent background that gives shape to the middle and foreground. Or, it’s the field that lays out all the possibilities for gameplay. Navigating our way through these relations, individual people emerge with (a) an identity (however normative or queer that identity might be) and (b) a position in relation to others, and in relation to hegemonic institutions like the state, patriarchy/rape culture, white supremacy, etc. I think this gets us out of the need to think strictly in binary terms, and it doesn’t frame droning or surveillance as something one person or group does to another (“woman as image, man as bearer of the look,” as Laura Mulvey puts it), but as something that affects us all. And those affects (and effects) don’t have to manifest in strictly binary terms; we’re produced as men, women, trans*, queer, and everything in between. In other words, navigating through these relations, we come to know who we are, and we become legible to others as that sort of being. Our identities and our social status are less like pregiven essences and more like emergent properties, properties that can’t be known or foreseen without the intervention of, say, number-crunching algorithms.

Droning is done to us all, and often demands our constant participation and complicity. But some people will be better situated to recover from and/or be exempted from it, while those of lower (socioeconomic, citizenship, racial, gender, ability/health, etc.) status aren’t as well-equipped to recover, and don’t get a de facto pass. Just think about the TSA and stop & frisk. For the white, cisgendered, and able-bodied, TSA screenings are a common inconvenience in contemporary middle-class life; for non-white, trans* and genderqueer people, and people with disabilities, routine TSA screening can be violent and violating, and, obviously, people from these groups are often subject to more extensive screening. Similarly, stop & frisk technically applies to anyone who behaves “suspiciously,” but really it applies to black and latino men. I really don’t think I need to go into all the “outs” white people get from encounters with law enforcement (I have cried my way out of a ticket in the past…). Those who most easily and successfully recover from and/or avoid the negative effects of droning (often because they disproportionately benefit from droning), these are the people who count (i.e., who are known and knowable) as “white,” as “masculine,” as “cisgendered,” that is, as members of the privileged classes. Those who cannot successfully recover from and avoid the negative effects of droning (often because they disproportionately experience its harms), these are the people who count, who are known and knowable as non-white, feminine, trans*, queer, undocumented, poor, fat, disabled…

In this way, droning is the set of processes and practices that produce micro-level phenomena (individual people as raced, gendered, sexual subjects) for the purpose of maintaining a macro-level society that naturalizes and privileges whiteness, masculinity, and hetero/homonormativity (that is, non-queer sexuality and cis gender identity). If we traditionally think of (panoptic) surveillance or the gaze as controlling and restricting individual behavior, droning doesn’t control or restrict individual behavior; rather, it allows individuals to respond to a situation in whatever way they like. Droning is the stacking of the deck so that only certain kinds of responses from certain kinds of people will be successful. Put in market or economic terms, the gaze is regulatory, but droning is deregulatory.

As I see it, “droning” could include phenomena such as NSA/big data surveillance, autonomous aircraft strikes, traffic cams, watch lists, TSA checkpoints, drug, fitness, and credit “testing” or “checking” (for social services, for health insurance, for employment), stop & frisk, even something like austerity politics. This understanding may be too metaphorical…for example, some uses of actual drones might not fall under this definition of droning (e.g., the Amazon delivery drones might not). But maybe this is something we should talk about.

I don’t at all disagree with Sarah’s original post. What I did was read a bit into some of her earlier claims about what drones are and how they work to try to push her analysis past some of its limitations (which I think were really just made for convenience and clarity in trying to pin down the original idea). In other words, what I think I did here was to show how Sarah’s own idea can avoid the binary problem she diagnoses. I want to emphasize that her posts are wonderfully incisive and provocative and have really spurred me to think long and hard about my own ideas on this.

If I can get my act together next week (which is our first week of classes at UNC Charlotte), I will try to say something about the role of transgression in Sarah’s posts.

Robin is on Twitter as @doctaj.

 

the same track, compressed more and more each remastering

This is a cross-post from Its Her Factory.

The recent shift in the aesthetic value of audio loudness is a symptom of broader shifts in attitudes about social harmony and techniques for managing social “noise.” Put simply, this shift is from maximalism to responsive variability. (“Responsive variability” is the ability to express a spectrum of features or levels of intensity–whatever is called for by constantly changing conditions. You could call it something like dynamism, but, given this article’s focus on musical dynamics (loudness and softness), I thought that term would be too confusing.) It tracks different phases in “creative destruction” or deregulation–that is, in neoliberal techniques for managing society. In the maximalist approach, generating noise is itself profitable–there has to be destruction for there to be creation, “shocks” for capitalism to transform into surplus value; the more shocks, the more opportunities to profit. However, what happens when you max out maximalism? What do you do next? That’s what responsive variability is: a way to get more surplus aesthetic, economic, and political value from maxed-out noise. (To Jeffrey Nealon’s expansion → intensification model of capitalism, I’d add → responsive variability. He argues that expansion has been maxed out as a way to generate profits–that’s the result of, among other things, globalization. Intensification is how capitalism adapts–instead of conquering new raw materials and markets, it invests more fully in what already exists. But once investment is maxed out, then, I think, comes responsive variability: responsiveness and adaptation are optimized.)

Maximal audio loudness was really fashionable in the late 1990s and first decade of the 21st century. Due to both advances in recording and transmission technology (CDs, mp3s), and an increasingly competitive audio landscape, especially on the broadcast radio dial, “loud” mixes were thought to accomplish things that more dynamic mixes couldn’t.

Loud mixes compress audio files so that the amplitude of all the frequencies is (more or less) uniform–i.e., uniformly maxed-out. Or, as Sreedhar puts it, compression “reduc[es] the dynamic range of a song so that the entire song could be amplified to a greater extent before it pushed the physical limits of the medium…Peak levels were brought down…[and] the entire waveform was amplified.” This way, a song, album, or playlist sounds like it has a consistent level of maximum sonic intensity throughout. This helps a song cut through an otherwise noisy environment; just as a loud mix on a store’s Muzak can pierce through the din of the crowd, a loud mix on the radio can help one station stand out from its competitors on the dial. For much of its history, the recording industry thought that loudness correlated to sales and popularity.
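As a rough illustration of what Sreedhar describes–peaks brought down, then the entire waveform amplified–here is a toy “loudness maximizer” in Python. The threshold and ratio values are arbitrary, and a real mastering compressor works on a dB scale with attack/release times that this sketch ignores:

```python
import numpy as np

def loudness_maximize(signal, threshold=0.5, ratio=4.0):
    """Crude 'loudness war' mastering chain: compress the peaks,
    then apply make-up gain to the whole waveform.

    signal: float samples in [-1, 1]; threshold/ratio are illustrative.
    """
    # 1. Compression: attenuate anything above the threshold.
    mag = np.abs(signal)
    compressed = np.where(
        mag > threshold,
        np.sign(signal) * (threshold + (mag - threshold) / ratio),
        signal,
    )
    # 2. Make-up gain: bring the reduced peaks back to full scale,
    #    dragging the quiet passages up along with them.
    return compressed / np.max(np.abs(compressed))

quiet = 0.1 * np.sin(np.linspace(0, 20, 1000))   # soft passage
loud = 0.9 * np.sin(np.linspace(0, 20, 1000))    # loud passage
track = np.concatenate([quiet, loud])
mastered = loudness_maximize(track)

# The gap between the quiet and loud sections shrinks:
dynamic_range_before = np.max(np.abs(loud)) / np.max(np.abs(quiet))
dynamic_range_after = (np.max(np.abs(mastered[1000:]))
                       / np.max(np.abs(mastered[:1000])))
```

After this treatment the track’s peaks sit at full scale and the soft passage is noticeably closer in level to the loud one–which is exactly the “consistent level of maximum sonic intensity” the loudness war was after.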

But many now consider loudness to be passé and even regressive. Framing it as a matter of “tearing down the wall of noise,” Sreedhar’s article treats loudness as the audio equivalent of the Berlin Wall–a remnant of an obsolete way of doing things, something that must be (creatively) destroyed so that something more healthy, dynamic, and resilient can rise from its dust. Similarly, the organizers of Dynamic Range Day argue that the loudness war is a “sonic arms race” that “makes no sense in the 21st century.” (What’s with the Cold War metaphors?) Maximal loudness, in their view, offers no advantages–according to the research they cite, it neither sells better, nor do average listeners think it sounds better. In fact, critics often claim overcompression damages both our hearing (maybe not our ears, but our discernment) and the music (making it less robust and expressive). Loudness is, in other words, unhealthy, both for us and for music.

As Sreedhar puts it,

many listeners have subconsciously felt the effects of overcompressed songs in the form of auditory fatigue, where it actually becomes tiring to continue listening to the music. ‘You want music that breathes. If the music has stopped breathing, and it’s a continuous wall of sound, that will be fatiguing’ says Katz. ‘If you listen to it loudly as well, it will potentially damage your ears before the older music did because the older music had room to breathe.’

At the end of 2014, we are well aware that breathing room is a completely politicized space: Eric Garner didn’t get it, cops do. “Room to breathe” is the benefit the most privileged members of society get by hoarding all the breathing room, that is, by violently restricting the movement, flexibility, dynamism, and health of oppressed groups. For example, in the era of hyperemployment, the ability to sit down and take a breather, or even to take the time to get a full night’s sleep, to exercise, to care for your body and not run it into the ground–that is what privilege looks like (privilege bought on the backs of people who will now have even less space to breathe: think of upper-middle-class white women who can Lean In because they rely on domestic/service labor, often performed by women of color). “Room to breathe” is one way of expressing the dynamic range that neoliberalism’s ideally healthy, flexible subjects ought to have. So, it makes sense that this ideal gets applied to music aesthetics, too. Just as we ought to be flexible and have range (and restricting dynamism is one way to reproduce relations of domination), music ought to be flexible and have range.

By now it is well known that women, especially women of color who express feminist and anti-racist views on social media, are commonly represented as lacking actual dynamic range, as having voices that are always too loud. As Goldie Taylor writes, unlike a white woman pictured shouting in a cop’s face as an act of protest, “even if I were inclined, I couldn’t shout at a police officer—not in his face, not from across the street,” because, as a black woman, her shouting would not be read as legitimate protest but as excessively violent and criminal behavior. White supremacy grants white people the ability to be understood as expressing a dynamic range; whites can legitimately shout because we hear them/ourselves as mainly normalized. At the same time, white supremacy paints black people as always-already too loud: as Taylor notes, Eric Garner wasn’t doing anything illegal when he was killed–other than, well, existing as a black body in public space. White supremacy made it seem that because Garner’s voice emanated from a black body, it was already shouting, already taking up too much “breathing room,” and thus needed to be muted to restore the proper “dynamic range” of a white supremacist public space.

Taylor continues, “merely mention the word privilege, specifically white privilege, anywhere in the public square—including on social media—and one is likely to be mocked.” These voices feel too loud because, from the perspective of their critics, they are supposedly both (a) lacking in range–they stay fixated on one supposedly overblown issue (social justice)–and (b) overrepresented among the overall mix of voices. Feminists on social media are charged with the same flaws attributed to overcompressed music (here by Sreedhar): “When the dynamic range of a song is heavily reduced for the sake of achieving loudness, the sound becomes analogous to someone constantly shouting everything he or she says. Not only is all impact lost, but the constant level of the sound is fatiguing to the ear.” Compression feels like someone “shouting” at you in all caps; this both diminishes the effectiveness of the speech and, above all, is unhealthy and “fatiguing” for those subjected to it. Similarly, liberal critics of women of color activists often characterize them as hostile, uncivil, or overly aggressive in tone, which supposedly diminishes the impact of their work, upsets the proper and healthy process of social change, and fatigues the public. Just as overcompressed music is thought to “sacrifice…the natural ebb and flow of music” (Sreedhar), feminist activists are thought to “sacrifice…the natural ebb and flow” of social harmony. But that’s the point. They’re sacrificing what white supremacist patriarchy has naturalized as the “ebb and flow” of everyday life.

But this “ebb and flow” is totally artificial. It just feels “natural” because we’ve grown accustomed to it as a kind of second nature. This ebb and flow is also what algorithmic technical and cultural practices are designed to manage and reproduce. That is, they (re)produce whatever “ebb and flow” optimizes a specific outcome–like user interaction, which optimizes data production, which ultimately optimizes surplus value extraction.

It’s not too hard to see how an unfiltered social media feed–like OG Twitter–might seem like overcompressed music. Linear-temporal, unfiltered Twitter TLs work like compression: each frequency/user’s stream of tweets is brought up to the same “level” of loudness or visibility–at its specific moment of expression, each rises all the way to the top. But just as overcompressed songs kill dynamic range and upset the balance between what “ought” to be quiet and what “ought” to be loud, unfiltered social media feeds supposedly upset the balance between what “ought” to be quiet and what “ought” to be loud, what “ought” to remain buried in the rest of the noise and what “ought” to cut through as clear signal. (Though what this norm “ought” to be is, of course, the underlying power issue here.) So in an era where all individuals can be egregiously loud, we need technologies and practices to moderate the inappropriately, fatiguingly loud voices, and amplify the ones whose voices contribute to the so-called health of that population.

Many digital music players and streaming services have algorithms that cut overly loud tracks down to size. There’s Replay Gain, which is pretty popular, and Apple’s Sound Check; neither makes any individual track more dynamic, but instead they tame overly loud tracks and bring the overall level of the mix/library/stream to an average consistency. In a way, these are sonic analogues to social media’s feed algorithms–they restore the “proper” balance of signal and noise by moderating overly loud voices, voices that generate user/listener responses that don’t contribute to the “health” of whatever institution or outcome they’re supposed to be contributing to. Replay Gain and Sound Check also seem to work a lot like compression–but instead of bringing everything in a single track to the same overall level of loudness, they bring everything in a playlist, stream, or library to the same overall level of loudness. Is the difference between dynamic compression for loudness and algorithmic loudness normalization simply the level at which loudness normalization is applied–the individual track versus the overall mix, the individual subject versus the population?
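The distinction can be sketched in a few lines of Python. This is a toy model, not Replay Gain’s actual algorithm (which uses a psychoacoustically weighted loudness measure rather than raw RMS, and the target level here is arbitrary): each track gets its own fixed playback gain so the whole library averages out to one reference level, while the dynamics inside each track are left untouched:

```python
import numpy as np

TARGET_RMS = 0.1  # arbitrary reference loudness for this sketch

def track_gain(signal):
    """Fixed per-track gain that would bring the track's average (RMS)
    level to the target; the stored waveform itself is never modified."""
    return TARGET_RMS / np.sqrt(np.mean(signal ** 2))

def normalize_library(tracks):
    """Apply each track's own gain at playback time: quiet and loud
    tracks end up at the same average loudness across the library,
    but each track keeps its internal dynamic range."""
    return [track_gain(t) * t for t in tracks]

t = np.linspace(0, 100, 10000)
quiet_track = 0.1 * np.sin(t)
loud_track = 0.9 * np.sin(t)
library = normalize_library([quiet_track, loud_track])
```

The contrast with loudness-war compression is that the gain here is a single constant per track, applied at the level of the library rather than sample-by-sample within a track–the “population” is leveled, not the individual.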

Dynamic compression and dynamic range aren’t just about music, or hearing, or audio engineering. The aesthetic and technical issues in the compression-vs-range debate are local manifestations of broader values, ideals, and norms. The era of YOLO is over. Dynamic range, or the ability to responsively attune oneself to variable conditions and express a spectrum of intensity, is generally thought to be more “healthy” than full-throttle maximalization–this is why there are things like “digital detox” practices and rhetoric about “work/life balance” and so on. At the same time, range is only granted to those with specific kinds of intersecting privilege. Though the discourse of precarity might encourage us to understand it as an experience of deficit, perhaps it is better understood, at least for now, as an experience of maximal loudness, of always being all the way on, of never getting a rest, never having the luxury of expressing or experiencing a range of intensities.

I’m working my way through a response to Sarah’s incisive and provocative posts on Drone Sexuality. But I realized that I need to get some preliminary arguments on the table before I get into the thick of my response. In particular, I want to focus on what Sarah identifies as the ambivalence at the center of drone/cyborg eroticism; this ambivalence is, as I argue in my article excerpted below, deeply racialized. In what follows I’ll first explain my reading of Sarah’s point and then follow that up with the relevant excerpt from the article.

In her second post in the Drone Sexuality series, Sarah argues:

I think something particular is going on when cyborgs are sexualized. Transgression is erotic in itself, often powerfully so, and we tend to construct the blurring of the line between human and non-human as strongly taboo. Like all sexual taboos, we feel ambivalent toward it, experiencing fear and revulsion at the same time as we’re fascinated and deeply attracted by the idea…So cyborgian transgressiveness is exactly why we find it so sexy. A sexualized cyborg is at once submissive and potentially dominant, alluring and threatening, subservient and powerful. (emphasis mine)

Sarah’s claim that cyborgs are sexualized, and that this sexualization manifests as an ambivalence, as a tension between submission and dominance, allure and threat, is, I think, absolutely correct. I’ll give some evidence to support her claim, and my assessment of her claim, in the long passage that follows below. I want to push Sarah’s claim further, and consider how this ambivalence is racialized in terms of a black/white binary. I should clarify that I’m talking about race primarily as a system of social organization and less as a matter of personal identity. “White” and “black” express an individual’s, group’s, or phenomenon’s position in white supremacist society: those whom white supremacy benefits are “white,” those whom it oppresses are “black.” The tl;dr of this passage is that we whiten the beneficial, alluring, submissive and subservient aspects of cyborg sexuality, and we blacken the dominant, threatening, and powerful aspects of cyborg sexuality. In other words, we parse our ambivalence about cyborg sexuality along a racialized virgin-whore dichotomy.

What follows is an excerpt from my article “Robo-Diva R&B” from the Journal of Popular Music Studies. You can read the full thing here (and you should! It’s about Beyonce & Rihanna, too.)

 

In his reading of Fritz Lang’s Metropolis, Andreas Huyssen identifies, in the Maria-robot, “the unity of an active and destructive female sexuality and the destructive potential of technology” (77). For Huyssen, the robot expresses early-twentieth-century fears of technology in terms of patriarchy’s fear of female sexuality: both are seen as objects simultaneously desirable and horrifying, as alienating, overwhelming, and, when not strictly disciplined, potentially destructive. “The expressionist fear of a threatening technology which oppresses the workers is displaced and reconstructed as the threat female sexuality poses to men and, ironically, technology” (Huyssen 77). Refusing conventional stereotypes that equate femininity with nature and masculinity with technology, the film thus emphasizes the perceived “common denominator” underlying female sexuality and technology in early twentieth-century Western patriarchy, that is, that they both, when left unchecked, threaten the dominant order. “Woman, nature, machine” function, according to Huyssen, as “a mesh of significations which all had one thing in common: otherness; by their very existence they raised fears and threatened male authority and control” (70).

Now, neither women nor technology are always viewed with hostility; indeed, this “otherness” invests them not only with fear, but with desirability. Huyssen maps capitalist patriarchy’s ambivalence about technology onto the well-known virgin/whore dichotomy—a dichotomy that, as many feminists have demonstrated, is deeply racialized. Thus, I argue that race is an essential element in understanding the sexual–techno politics at work in Metropolis: insofar as white culture hypersexualizes the black female body, and black female sexuality is considered to be a countercivilizing force (witness the Moynihan Report), the robo-woman of Metropolis is most correctly read as a black woman.

Black feminists have long noted that black female sexuality is stereotypically represented as inherently “abnormal” and “excessive.” From Saartje Baartman to Lil’ Kim to Beyoncé, any number of particular black women have represented, to/for white patriarchy, extreme, disproportionate sexuality. Describing “a sexual hierarchy in operation that holds certain female bodies in higher regard than others” (368), Kimberlé Crenshaw explains that

blacks have long been portrayed as more sexual, more earthy, more gratification-oriented; these sexualized images of race intersect with norms of women’s sexuality, norms that are used to distinguish good women from bad, madonnas from whores. Thus, black women are essentially prepackaged as bad women (Crenshaw 1995:369).

Following Crenshaw’s account, virgin/whore dichotomies are racialized such that white women are considered asexual, whereas black women are believed to be excessively sexual. Because they are fundamentally “passive” with respect to their desires (or better yet, have no desires at all), “good” white women are less threatening to the white patriarchy than “bad” black women and their “active” desires. Indeed, insofar as white women are read as, by virtue of their whiteness, more removed from their bodies than black women, the former are less threatening to white patriarchy because their whiteness buffers and tempers their feminine “immediacy” with embodiment, sexuality, and nature. Richard Dyer argues that “the white woman . . . was not supposed to have [sexual] drives in the first place . . . The model for white women is the Virgin Mary, a pure vessel for reproduction who is unsullied by the dark drives that reproduction entails” (Dyer 1997:29). Lacking the strength and moral fortitude supposedly contained in whiteness, black women do not possess the capacity to control their otherwise violent desires, and are thus threats to civilized society.

In this light, Huyssen’s discussion of the virgin/whore dichotomy at work in Metropolis involves not only gender, but also the intersection of gender with race. Huyssen’s argument about the relationship between female sexuality and technology rests on the distinction between

two age-old patriarchal images of women which, again, are hooked up with two homologous views of technology . . . The myth of the dualistic nature of woman as either asexual virgin-mother or prostitute-vamp is projected onto technology which appears as either neutral and obedient or as inherently threatening and out of control (73).

The overarching duality, which is applied to both patriarchal perceptions of women and technology, is the “controlled” versus the “uncontrollable.” Huyssen’s analysis of the “homology” between femininity and technology is incomplete insofar as it presents a very abstract notion of “femininity,” one that overlooks the ways in which white privilege and racism have determined which sorts of women are seen as “neutral and obedient” and which sorts are considered “inherently threatening and out of control.”

A better account of gender would recognize its fundamental intersection with race, and would thus expand the “homology” to include white privilege: technology and female sexuality, when in white bodies (individual and social), ensure the progress and development of civilization; technology and female sexuality, when in black bodies (individual and social), corrupt civilization. Thus, if “the machine vamp in Metropolis . . . embodies the unity of an active and destructive female sexuality and the destructive potential of technology” (Huyssen 77), then it is clear that the machine-woman is, for all intents and purposes, black. Even though the machine-woman is based on Maria, who is white, insofar as the machine is Maria’s opposite and represents an “abnormal” (Huyssen 77) sexuality, it is still consistent to read the machine as black, since, historically, stereotypical white femininity and black femininity developed via their opposition; the “good” white girl and the “bad” black girl were defined against one another, as opposites. While omitting the language of race, Huyssen makes this point in different terms: “Rather than keeping the ‘good,’ asexual virgin Maria categorically apart from the ‘evil’ sexual vamp,” he argues “we become aware of the dialectical relationship of these two stereotypes” (79).

 

“Reach out and touch someone” is an old telephone ad slogan; even regular old telephony is a medium for social interaction.

Over on Vice Motherboard, Michael Byrne recently wrote about his desire for “an Instagram of sound.” He says:

What I want is a place to hear things that people record in the spaces around them. This seems reasonable to me: An app with just one button to record and another to share. I’d have fewer “friends” than on Instagram, in the realm of sound, but there would surely be some. And some who use the app would be pushed to find better and more interesting sounds, and to appreciate those sounds in new and different ways.

There are already such apps–Audioboo is the one I use (there are plenty of others, as summarized here). Audioboo is a social network for sound-sharing; people follow me on Audioboo, but I’ve also linked my account to Twitter so I can tweet sound clips and share them with my Twitter followers, just like I would with Instagram (if, that is, I used Instagram with any regularity). I wish it were as popular as Instagram, Snapchat, and Vine…but it’s not.

I don’t think this relative lack of popularity is primarily due to the fact that, as Byrne argues, we’re trained to use vision as our dominant sense. Certainly that’s part of it, but that’s not the only (and perhaps not even the primary) reason. I think sound recording is a different medium than both photography and even Vine’s short-attention-span videography, and that maybe this medium isn’t as well-suited as photography and videography are to the kinds of tasks we generally want to accomplish on social media. So, the controlling factor here is social media, not auditory or visual content–they’re just means to the end of social mediation.

How do pictures and 6-second videos function socially? Though people do use social media in “off-label” ways, these platforms reward virality over comprehension/comprehensiveness. Twitter counts followers, favorites, retweets, and mentions, and Facebook counts likes, shares, comments, and friends. The design of these platforms prioritizes the volume and pace of interaction (i.e., the sociality) over the digestion of content as such. Content is just a means for socialization (an interesting parallel to commodity fetishism, in which commodities are means for transacting social relations). Because it is just a means, the best, most easily shareable content is the kind you can skim and scan very efficiently. I’m always scanning an article rapidly and then retweeting or sharing it on Facebook without really spending any significant time digesting it. It’s harder to fast forward through a sound than it is to scan a text (though the visible waveform images on SoundCloud make this easier–I can pick out the climax or the best part of a song by making educated guesses on the basis of the audio’s visualization). Sounds might be less efficient means of accomplishing the social work tasked to social media.

We do have something akin to a sonic or musical version of the Vine, at least at the level of format: ringtones. These are short snippets of songs or sounds, often looped, that we use to communicate something about ourselves to others–our favorite song, our business-minded, no-frills attitude, whatever. (FWIW, my text sound is the two chimes at the beginning of Depeche Mode’s “Personal Jesus” because I want to signal to other sound geeks and music fans…and maybe now the title of this post makes more sense?) But ringtones have a different social function than Vines, because they’re not shared or liked on social media. They might alert us to social media/SMS/telephonic activity, drawing us back in to conversations (and, in this way, are a nice example of sonically augmented reality), but the ringtones are not the tokens passed around in conversation that facilitate that conversation. From another perspective, ringtones also have a different capitalist function: they’re not the means to generate social media activity, which is what Facebook et al farm and sell as data. I think this is another important reason why Audioboo and their competitors aren’t as widely used as Instagram and Snapchat–it may not be as easy or as profitable to generate sellable data from sound-sharing.

Byrne suggests additional reasons why audio social media hasn’t taken off. I want to address two of them. First, he argues that most photographs are representational (they are “of something,” as Byrne puts it), whereas sounds are not necessarily representational. This is not exactly untrue, but it’s not entirely correct, either. I would argue that most people hear and listen primarily for content. In my experience, laypeople (i.e., my students) don’t just listen abstractly to sound as such, they try to figure out who or what is making the sound, or what the sound means. We also tend to reduce “sound” to either music or human speech/voice. For example, I was guest teaching a Sociology of Gender course at Luther College this past fall, and I asked the students to think of the ways in which non-musical sounds are gendered, or the ways in which we hear gender. The first round of responses was all about human vocalization and speech. Their assumption was that the main or only type of non-musical sound was human speech; music and speech, that is, are the two types of sounds that are about something, that have a content. The students didn’t even consider the possibility that there might be abstract, content-less sounds. All this is to say: I don’t think most people hear and listen that differently than they see.

Second, Byrne suggests that photography tends to (literally) focus on a subject, picking it out of its ambient environment. He argues:

Ambience is a realm where the visual and the auditory match up pretty well in terms of ability to represent. It’s here that sound might even be able to beat out sight. This is because ambience is interested in a subject AND all of the junk around that subject equally. In other words, it’s interested in everything—and everything doesn’t have a subject. Put another way, there’s no primacy in “everything,” however that actually works out in reality. It’s a stew.

According to Byrne, Instagrammed pictures have a subject, but audio recordings can’t focus so narrowly and capture “everything.” But sound isn’t inherently any more ambient than photography is. The difference Byrne notices seems to be more an attribute of the uneven development of video and audio recording tech in the average consumer smartphone. Your average smartphone has a really great camera but a not-so-great microphone for picking out individual sonic phenomena/subjects. (You can get mini synth and mixer apps that will let you “filter” those sounds just like you filter your Instagram photo.) If every smartphone came with, say, a bluetooth remote mic, we could easily focus our sound recording, too.

This idea of sonic ambience links Byrne’s post with Wayne Marshall’s great post on hours-long YouTube videos of ambient sounds, which he calls (brilliantly) vacuumtube. (And, you should open that link in another tab and go read the post–it’s really great.) Vacuumtube is an example of the kind of sound recording and sharing that Byrne discusses in his post. The “9 Hours of Suck” video Marshall cites has 40,000+ views, 45 comments, 107 likes, and 12 dislikes. However, because these videos are sooooooo long, Vacuumtube seems to be causally tied to YouTube’s platform. They are, as Marshall argues, “native” to YouTube:

This particular form’s nativity is, of course, directly related to one relatively big affordance: the unprecedented access to time that YouTube now provides. People, especially the non-Warhol sort, just didn’t typically make 7-12 hour films very frequently prior to the advent of unlimited time on YouTube. So one emerging “native” dimension of vernacular video we might lay at YouTube’s feet is the sudden desire to exploit the “platform” as something other than a visual medium — but not just as a jukebox, rather as a long duration white noise machine (or pink, if you prefer).

Because YouTube, unlike, say, Soundcloud or AudioBoo, has no limitations on the length of recording you can post, it is a better medium for really, really long ambient sound recordings. I wonder: Is there no “Instagram of sound” because the Instagram platform isn’t best suited to the kinds of social things we want to do with (ambient) sound recording?

So, my question boils down to this: There may not yet be an “Instagram of sound”–but is this due to the nature of sound and/or photography, or is this due to the nature of social media, as a type of sociality, as a media platform, and as a business model?

 

Robin is on Twitter as @doctaj. 

Nic Endo, noise musician and member of Atari Teenage Riot

Over on Sounding Out, Primus Luta has finished the third installment (which I’ll refer to as LEP) in a superb series of posts on live electronic music performance. The aim of the series is to develop a “usable aesthetic language” to describe live electronic performance. In this post I want to summarize some of Luta’s argument–which is fascinating–and then push his project past his stated philosophical limitations. Even though Luta’s aesthetic language is still strongly indebted to modernist values and ideas (like “agency” and “virtuosity”), can we push his analysis beyond the frame of modernist aesthetics? Can live electronic music performance help us think about what an object-oriented aesthetics or a compositionist aesthetics might entail? From these perspectives, which aren’t very interested in subject-centered values like agency and virtuosity, what values and ideals would we use to evaluate electronic music?

It is challenging to evaluate live electronic music performance because, as Luta argues, our conventional aesthetic categories, i.e., the concepts we use to analyze and assess live jazz (and, I think, by extension other blues-based genres like rock or R&B), don’t translate to the material conditions of electronic music composition and performance. For example, electronic musicians use an entirely different kind of instrument than classical, jazz, and blues/rock musicians do. Luta explains: “Where we typically think of an instrument as singular, within live electronic music, it is perhaps best to think of the individual components (eg turntables and drum machines) as the musical objects of the live rig as instrument” (LEP). So, the physical body of this rig is more like an assemblage (a modular collection) and less like an organ (an integrated whole). And a rig contains automated modules, whereas a trumpet cannot be automated (unless, that is, you have Asimo play it…but there Asimo is automated, not the trumpet itself). Because the rig has different material properties, it requires different techniques and methods to play it…techniques and methods that can’t adequately be assessed by concepts such as agency and virtuosity, at least as they are traditionally conceived. Traditional concepts focus on the artist’s direct physical manipulation of a passive, inert instrument. Electronic rigs, however, can play back, automatically. If your aesthetic language evaluates only direct physical manipulation, how do you judge the quality of a performance that relies, or could rely, in part or in whole, on automation? The reason we have difficulty evaluating electronic music performance is, as Luta argues, that “we cannot translate what we hear directly to his [the artist’s] agency” (LEP). Or, as he puts it in the first post in the series, “It is impossible to defend an artist who has been called a hack without the language through which to express their proficiency” (TPL).

Well, to do that, Luta argues, you need to first distinguish the different ways that direct physical manipulation and automation are used to play an electronic rig. He identifies four such ways:

1. “the manipulation of fixed pre-recorded sound sources into variable performances”

2. “the physical manipulation of electronic instruments into variable performances”

3. “the manipulation of electronic instruments into variable performances by the programming of machines”

4. “an integrated category that can be expanded to include any and all combinations of the previous three”

One thing I find particularly valuable about Luta’s taxonomy is that it clearly demonstrates how the difference between physical manipulation and machine automation is aesthetically meaningful without being aesthetically (or ethically) normative. The different techniques can be used, compositionally and performatively, to say something musically, just as a trumpet player uses a mute or a plunger to say something musically. As Luta explains,

One can see a performer playing the drum machine with pads and correlate that physicality of it with the sound produced and then see them shift to playing the turntable and know that the drum machine has shifted to a machine performance. In this example the visual cues would be clear indicators, but if one is familiar with the distinctions the shifts can be noticed just from the audio.

The aesthetic significance of physical or automated manipulation isn’t indexed to some external standard; rather, it is internal to the composition itself. For example, the mode of manipulation can be used to mark different formal sections of the composition, or can be a method of varying a hook. The distinction between direct physical manipulation and mechanical automation is aesthetically significant without being normative–that is, neither direct physical manipulation nor mechanical automation is aesthetically or ethically superior to the other (it’s not a digital dualism).*

But these distinctions all still center the performer’s agency or virtuosity. “The variable possibilities of this type of set, even while not exploiting the breadth of variable possibilities presented by the gear, clearly points to the artist’s agency in performance” (LEP). His taxonomy is a more nuanced way of analyzing artist agency in performance, but that agency is still the underlying concept and value driving Luta’s project. “Artist agency in performance” (LEP) is still the foundation of his inquiry. But why? Why reduce aesthetics to agency? Or, why frame aesthetic judgment as an evaluation of the artist’s agency, and not of other things?

His analysis prioritizes artist agency because jazz aesthetics do. Luta works from the perspective that “electronic music is the new jazz.” But arguing this tethers his analysis of electronic music to jazz’s Afro-modernist aesthetics: he’s evaluating electronic music in the same terms, concepts, and ideological perspective used to evaluate jazz. Take, for example, the way Luta’s project centers “artist agency in performance”. This idea of human subjectivity, agency and virtuosity is a modernist value. Luta’s project translates the values of modernist jazz aesthetics into terms that accurately describe the material and cultural conditions of electronic music performance and production. Or, more simply, he transposes jazz aesthetics into terms compatible with electronic instruments and genres.

But what about an analysis of electronic music that isn’t beholden to modernist aesthetics? What about an aesthetic that, say, de-centered human subjectivity (what philosophers would call an anti-correlationist perspective)? Such an OOO (object-oriented ontology) style analysis would treat human minds and bodies as objects among other objects, such as Max patches, sequencers, synth patches, and other “mechanical” musical objects. Or, if you prefer Bruno Latour, what would a compositionist approach to electronic music aesthetics look like?**

Luta’s project–which is important, interesting, and valuable–limits our evaluative language to terms that evaluate the artist or the performer. But could we develop another language to evaluate electronic music, one that wasn’t so artist/subject-centered, a more anti-correlationist or object-oriented language in which to evaluate electronic music and electronic music performance? Though any good OOO-er would argue that my oboe and my reeds are performing objects just as much as I am, electronic music, with its Max patches and sequencers, seems to offer distinct and productive opportunities to think about musical objects as performers.

* “The key difference between unsynchronized and synchronized performance rigs is the amount of control over the performance that can be left to the machines. The benefit of synchronized performance rigs is that they allow for greater complexity either in configuration or control. The value of unsynchronized performance rigs is they have a more natural and less mechanized feel, as the timing flows from the performer’s physical body. Neither could be understood as better than the other, but in general they do make for separate kinds of listening experiences, which the listener should be aware of in evaluating a performance. Expectations should shift depending upon whether or not a performance rig is synchronized” (LEP, emphasis mine).

**In some senses, these questions are attempts to extend Adam Harper’s project in Infinite Music: it’s not just that everyone is part of the musical event/performance, everything is.

Robin is on twitter as @doctaj.

 

A longer, more academic version of this post appears at Its Her Factory.

This post follows up on my earlier post about a culture of moderation. Here I want to consider one aspect of this contemporary focus on moderation: the idea of “balance.” We talk about work/life balance, the “balance” between individual freedom and national security, and, as Jenny notes, the “balance” between tech use and abstention.

This language of balance was particularly prominent in recent discussions of NSA spying. In fact, “balance” is the term the Obama administration uses to justify and rationalize government surveillance. In an August 2013 news conference, President Obama said “we have to strike the right balance between protecting our security and preserving our freedoms.” When he was interviewed by NBC’s Andrea Mitchell, Director of National Intelligence James Clapper elaborated on the administration’s concept of “balance” (this starts in the video around 6:30).

Clapper says: “The challenge for us is navigating between those two poles. It’s not a balance, it’s not either/or, there has to be that balance so that we protect the country and also protect civil liberties.” Though he appears to contradict himself in this statement (it’s not a balance, it is a balance), I read this as a contrast between two different concepts of balance. On the one hand, “balance” could mean the average of two extremes, an either/or; on the other hand, “balance” could mean a dynamically-adjusting continuum (the kind of balancing done, for example, by an audio equalizer or an electrical resistor). What Clapper is arguing, I think, is that the “balance” the government must strike is of the latter type, not the former type. Mitchell’s follow-up reinforces this reading. She says, paraphrasing President Obama, “You can’t have a 100% security and then you have 100% privacy and then 0% inconvenience.” Security and freedom/privacy are not absolutes or binary opposites; rather, they’re like asymptotes or limits on a continuum, limits that you can approach but never reach.

According to this model of balance, the most just approach is the one that finds the sweet spot, or, in Clapper’s language, “the exact tipping point” in which both freedom and security are maximized without one pushing the other beyond the point of diminishing returns (the Boston Marathon bombings are the example Clapper cites of diminishing returns).

This language is echoed in Ludacris’s song and video “Rest of My Life.” I’ve written about this here, but the tl;dr is: the image of a cresting wave captures the ideal of maximizing risk and reward, a life pushed to its limits, or, what I call elsewhere, “living on the edge of burnout.” The lyrics constantly evoke this ideal:

…What the hell is a life worth living if it’s not on the edge

…Tryin to keep my balance, I’m twisted, so just in case I fall/written on my

tombstone should be ‘women, weed, and alcohol’

…I go for broke…I’m willing to bet…

So if balance is the ideal, how is it achieved? Clapper argues that the best way to preserve the balance between individual freedom and state security is “to be as precise as we can be” in selecting “what we actually need to ‘read’.” In this view, a just society is one that most accurately filters the signal (useful information, or information whose use reaps a profit) from the noise (information that’s not profitable, info with too high an opportunity cost). This echoes Nate Silver’s argument in his book about big data, The Signal and the Noise. “Information is no longer a scarce commodity; we have more of it than we know what to do with. But relatively little of it is useful…We have to be terribly selective about the information we choose to remember” (34/26 of Kindle version). Filtering signal from noise (“the truth” from “what distracts us from the truth,” Silver 35) is, according to Silver, the epistemological, social, and ethical problem facing us today.

Ok, so, filtering. But what are the filters? Clapper and Silver suggest utility as a filter. But that just pushes the question back: useful for whom, and in what way? In the end, it all depends on the final balance you want to achieve. Think back to the audio equalizer example: some equalizers let you choose among different acoustic profiles–rock arena, concert hall, etc. In social and political terms, these profiles might be something like the relative distribution of wealth, whiteness, vulnerability, health, etc. In other words, you can strike a balance that distributes risk and rewards at specific levels for specific populations. For example, it is now common for US government policy to distribute risk to individuals and rewards to corporations (e.g., bailouts for banks but not borrowers). Stop-and-frisk and stand-your-ground laws are other examples: risk is distributed to people of color, reward (security, survival) to white citizens and to the state/police. We choose the filters that produce the distribution of risk and reward (i.e., the social profile) that is most beneficial to society’s powerful and privileged groups. So, this concept of “balance” isn’t really about protecting individual rights, liberties, or privacy; rather, it’s about maintaining the overall mix or balance of relationships that allow society to function at maximum efficiency and productivity. (This is why, as Jenny notes in her above-linked post, at the individual level this ideal of balance is really only an ethical imperative for privileged elites–a well-balanced society maximizes the capacities of its most privileged members. For this overall balance to work, oppressed groups need to live relatively “unbalanced” lives, lives that can never really get ahead.) This means that “privacy” as an individual liberty is more or less a red herring–it distracts us from the real focus and intent of contemporary ideology and practice.
Robin is on twitter as @doctaj.


Does digital technology, especially insofar as it is masculinized or seen as gender-neutral (which are generally the same thing: mankind, postman, etc.), resignify the gendered stigma conventionally attached to care work, affective work, and other sorts of feminized work that never quite counts as “real” labor?

This question comes out of a conversation I was having with some of my graduate students about Karen Gregory’s recent response to Ian Bogost’s Atlantic piece on hyperemployment. I don’t have an answer for this question, but I think it’s very important to consider. (Somebody’s probably already written a fabulous piece on it–and if they have, please point me to it!) So, in this post I want to set up the question for further discussion.

The gist of Bogost’s concept of hyperemployment is this: if we are employed, we all work all the time. Digital technologies have made it easy for second, third, fourth (and so on ad infinitum) shifts to be built into every job (middle-class, managerial-style, non-retail or food-service job, presumably). He writes:

It’s easy to see email as unwelcome obligations, but too rarely do we take that obligation to its logical if obvious conclusion: those obligations are increasingly akin to another job—or better, many other jobs. For those of us lucky enough to be employed, we’re really hyperemployed—committed to our usual jobs and many other jobs as well. It goes without saying that we’re not being paid for all these jobs, but pay is almost beside the point, because the real cost of hyperemployment is time. We are doing all those things others aren’t doing instead of all the things we are competent at doing. And if we fail to do them, whether through active resistance or simple overwhelm, we alone suffer for it:  the schedules don’t get made, the paperwork doesn’t get mailed, the proposals don’t get printed, and on and on.

Gregory’s point–which I fully agree with–is that women and minorities have always had a second (and third, and fourth) shift. They’ve always been expected to do the things like make schedules, mail paperwork, and reproduce the conditions for productive labor generally. She writes:

I am wondering if what Bogost is drawing attention to has less to do with “employment” than with the uneven redistribution and privatization of the labor of social reproduction, an antagonism that feminist theorists have been writing about for more than thirty years…This tacit agreement, however, extends beyond social media and e-mail and is really a form of housework and maintenance for our daily lives.

For more than thirty years, Marxist feminists have been arguing that women’s unpaid labor–housework, reproduction, etc.–is a prerequisite for capitalist wage labor, surplus value extraction, and profit-making. Capital can extract surplus value from waged labor only because the wage laborer is supported by (extracts surplus value from) unwaged labor, mainly in the form of the wife. Gregory’s argument is that what Bogost is pointing to isn’t a new phenomenon so much as a reconfiguration of an ongoing practice: we are all our own wives and moms, so to speak. Indeed, as Bogost’s example suggests, our smartphones wake us up, not our moms, just as emails accomplish a lot of the relational work (scheduling, reminding, checking in, etc.) conventionally performed by women.

Women are trained from a young age to perform this relational, caregiving, extra-shift work. Femininity–the gender ideal and norm–is the technology that helps women perform these tasks with ease and efficiency. Conforming to feminine ideals like cuteness, neatness, cleanliness, attention to (self)presentation, receptivity to others, and so on, trains you in the skills you need to accomplish feminized care/second+ shift work. Need to persuade people to do unpleasant things (like get out of bed)? It helps to be cute and/or nurturing! Need to create a clearly legible calendar or schedule that represents a family’s hectic and convoluted schedule? It helps to have neat handwriting, fine motor skills, and design sense (which girls of my generation definitely learned by, say, passing elaborately-decorated and folded notes between classes)! You get the idea.

Now that “men” (by which I mean, masculinized or non-feminine subjects) are also expected to perform these sorts of tasks as part of their hyperemployment contracts, we rely on technologies other than femininity to assist us in accomplishing this work. Just think about the ways personal computers and smartphones regendered and re-classed secretarial labor. Typing isn’t feminized and classed in the way it once was (my mom’s boss’s wife still won’t type her own emails, because typing is for secretaries, not bourgeois housewives). Typing is universal, at least among the educated middle- and upper-classes. (At this point I want to go reread Sadie Plant’s Zeroes and Ones, which is an old-ish but great book about technology, gender, femininity, and women.)

So where does this leave femininity? I wonder if femininity functions as a way of disaggregating valuable ‘second shift’ work (qua hyperemployment) from valueless but still socially and economically necessary second shift work? There are definitely feminized ways of using these technologies that enable hyperemployment: texting sometimes gets derided as teen girl excess, Pinterest seems to be heavily feminized, etc. How do contemporary ideals of femininity train girls’ bodies to relate to technology in specifically feminized ways, ways that are tied to class, race, ability, etc.? (e.g., “good girls use technology wisely in their STEM careers, but bad girls use it excessively in their texting/shopping/selfies/etc.”) How might thinking about the feminization of digital technologies/platforms/etc. help us think about Gregory’s question: “I am wondering what solidarities can be drawn among bodies, selves, and data (and other nonhuman actors)—solidarities that might really take care of all of us”?

Robin is on twitter as @doctaj.

This is a cross-post from Its Her Factory.

With the 50th anniversary fete this weekend, I thought a quick post on Doctor Who and (mainly pop) music was in order.

First, the arrangement–and the signature sound–of the original theme song was realized by none other than the amazing Delia Derbyshire. If you want to read and learn more about her career and her work on the Dr Who theme, check here (for an amazingly well-stocked website full of interviews, work, photos, etc.), here, and the latter part of this documentary.


Second, this theme inspired the KLF/JAMMs’ big pop music coup, The Timelords’ hit “Doctorin’ the Tardis.” Basically, it’s a mash-up of Gary Glitter’s “Rock n Roll Part 2” with the melody of the Dr. Who theme. Here’s the video, but these Top of the Pops performances are better. The first one makes the KLF/JAMMs connection pretty visually evident, and even features Gary Glitter.

The awesome thing about “Doctorin the Tardis” was that it was the conceptual basis for “The Manual,” The KLF/K Foundation’s quasi-satirical philosophical and technical treatise on “how to have a number one the easy way.” It also provided the cash that was set alight in the (again quasi-satirical) performance art piece documented in “The K Foundation Burn A Million Quid”:


Next, there’s Rotersand’s “Exterminate Annihilate Destroy,” an industrial track that samples Dalek vocals.


And last (but not least–there are plenty of other references to and uses of the Whoniverse in pop music) there’s the reference to Daleks in The Clash’s “Remote Control” (listen at about 2:31–“gonna be a Dalek”).

With the Lily Allen “Hard Out Here” video blowing up teh interwebs this week, I wanted to briefly revisit and expand on my earlier piece on “trolling as the new Love & Theft.” You can read it here. In particular, I want to expand my argument there in the direction of a conversation Nathan and I were having Tuesday over Twitter. We were talking about the idea of “coincidental consumption.” There Nathan defined it as a “passive byproduct of the sharing economy.” I don’t think it’s entirely passive…or active–it’s a both/neither case, as I see it. Coincidental consumption requires activity, input, and attention; it’s just that all these are indirect, or, if direct, momentary digressions. It’s a co-incidental consumption: it happens together with the primary mode of attention and address, but as a secondary or tertiary (and so on) concern.

There’s a case to be made that these coincidental modes of attention are necessary features of social media labor qua sharing. For example, one might share an article without really reading it. However, there still has to be content there, somewhere, both as the thing that’s shared, and as a thing that’s distinct enough from other things to warrant sharing. So coincidentally-attended-to content matters, even though the demands of the sharing economy focus our attention on other things and thus prevent us from seeing content as anything but a blur. I think the effect of this is that content has to be ‘loud’ or distinct enough to make an impact even when it’s blurry and out of focus.

This brings me back to the Lily Allen video. It’s such a hot topic because it has ignited a debate between white feminist apologists and black/women-of-color/intersectional feminist critics. (Just to be clear, I’m using ‘white’ and ‘black’ to define types of feminist theorizing, not necessarily or only the identities of the feminist theorists.) Is the video a feminist anthem about overcoming the male gaze and the culture industry’s objectification of women, or is the video a racist work that blames black culture for mainstream pop culture’s misogyny, and that frames black women as incapable of overcoming this misogyny as Allen’s narrator claims to do? I’m not going to address these questions here. That’s a topic for Its Her Factory (which I may get to someday); however, if you want to know my thoughts, @bat020 storified them here (thanks!!).

What I want to think about is the coincidental role of music in the contemporary music industry. It seems like music videos and/or performances are increasingly common fodder for so-called “thinkpieces” (and if anyone has any research on where this term comes from or how/when/why it was popularized, I would LOVE to know!). There’s Allen, Miley’s VMA performance, Robin Thicke’s “Blurred Lines,” Lorde’s “Royals,” Macklemore’s “Same Love,” and on and on and on. But what’s interesting to me is that most of these thinkpieces discuss the social and political implications of these pieces without talking about the actual music–as though the music was somehow separable from the social and political work these songs and videos accomplish. We need to think very carefully about what gets lost, what’s obscured, when we focus exclusively on the visual and lyrical content of these music videos/performances.

Has music become a means of producing social capital? Is music just a coincidental product in the means of social media production? Or is it that, if music is to be profitable these days, it has to be relegated to ‘coincidental’ status? Record companies aren’t selling music so much as platforms for generating social/social media capital. Surplus value doesn’t come from selling records (commodifying the labor of the performers), but from social media labor (i.e., from the labor of fans) or from speculation on the stock price of, say, Twitter. Maybe coincidental consumption is a definitive feature of contemporary/late-capitalist labor? Classically, labor involves directly consuming a raw material in a way that transforms it: butter, sugar, flour, eggs, and chocolate plus mixing and heat become cookies. Social media labor coincidentally consumes some or all of what it shares and/or comments on. I suspect that coincidental consumption can also be found in more classical forms of labor and political economy; late capitalist social media labor and the sharing economy make it particularly prominent.

So, here is a demand for material about which to write thinkpieces. Maybe one function of the contemporary pop star is to supply this material? Music is part of the social media supply chain?

And there’s a distinction, I think, between Gaga-style shock and the “trolling” I identified in my previous post. Gaga courts fame by performing excess–weirdness, disgust, avant-garde fashion, etc. But trolling isn’t about excess or vanguardism. In fact, these troll-gaze (can I coin that as a genre, lol?) songs are more ambiguous than avant-garde. To crib a bit from Le Tigre, troll-gaze songs have to straddle the “What’s your take on XYZ? Misogynist? Genius?” line. There have to be at least two irreducible, individually defensible positions or sides. That allows there to be ongoing back-and-forth debate. These songs court that debate.

Trollgaze songs are just one example of a broader meme-ification of pop music. There are some songs, like Katy Perry’s recent single “Roar,” that seem custom-designed for lip-dub videos; “Roar” was often promoted through lip-dub contests, e.g., on Good Morning America. These songs are crafted so that people will prosume them–use them as the basis for their own reworking or remix. I think this type of prosumption is different than coincidental consumption because it involves the more classic type of labor-as-transformation rather than just or solely labor-as-sharing. Thinkpieces are also this type of labor-as-transformation–authors transform media objects into their own arguments, ideas, etc.

I’ve identified, so far, two types of coincidental consumption: first, when all content is coincidental to sharing (e.g., sharing an article without really reading it); second, when some aspects of the content are coincidental to others (e.g., the song is coincidental to the video). I’ve also argued that coincidental consumption/sharing labor is different from prosumer labor-as-transformation. Though, at a certain level, I just wish music was more central and less coincidental to our consumption and our thinkpieces.
Robin is on twitter as @doctaj.