This is a cross-post from It's Her Factory.
Frank Swain has a hearing aid that sonifies ambient WiFi signals. The device, named Phantom Terrains, pairs a Bluetooth-enabled digital hearing aid with a specially programmed iPhone (and its WiFi detector) to “translate the characteristics of wireless networks into sound….Network identifiers, data rates and encryption modes are translated into sonic parameters, with familiar networks becoming recognizable by their auditory representations.” The effect, Swain says, is “something similar to Google Glass – an always-on, networked tool that can seamlessly stream data and audio into your world.” (I’ll leave the accuracy of this comparison to people who have thought more about Glass than I have.)
Why would anyone want to do this? What’s the point of being able to sense, to detect and interpret, the flows of data that are transmitted in your environment? For Swain and his collaborator Daniel Jones, data transmissions are just as much a part of the material, engineered, designed, and planned environment as roads, pipes, and buildings are. We exist in a “digital landscape,” and just like all landscapes, this one has a social meaning and a politics. “Just as the architecture of nearby buildings gives insight to their origin and purpose, we can begin to understand the social world by examining the network landscape.”
But why hearing? Why is audition the (best? easiest?) medium for modulating the WiFi/data spectrum into part of the spectrum more or less “normal” human bodies can interface with? Why is “hearing” the “platform for augmented reality that can immerse us in continuous, dynamic streams of data”? [Aside: Does the platform metaphor frame hearing as an interface for data? And what conceptual work does this metaphor do? Is it a sort of mise-en-place that sets up the argument, makes it easier and faster to put together?]
As Swain writes in the New Scientist,
Hearing is a fantastic platform for interpreting dynamic, continuous, broad spectrum data. Unlike glasses, which simply bring the world into focus, digital hearing aids strive to recreate the soundscape, amplifying useful sound and suppressing noise. As this changes by the second, sorting one from the other requires a lot of programming.
Hearing is the medium for translating data into humanly perceptible form because it’s the best input mechanism we have for the kind of substance that data materially manifests as. Contemporary science understands hearing as a “problem of signal processing” (Mills 332). Because more than a century of research into hearing (which was, as Jonathan Sterne and Mara Mills have shown so elegantly, completely tied to technology R&D in the communications industry) has led us to understand hearing itself as dynamic, continuous, broad spectrum signal processing, what better medium for representing data could there be?
The practice of translating between electrical signals and human sense perception is rooted in practices and technologies of hearing. As Mills writes, “Electroacoustics has been at the forefront of signal engineering and signal processing since ‘the transducing 1870s,’ when the development of the telephone marked the first successful conversion of a sensuous phenomenon (sound) into electrical form and back again” (321). Our concepts of and techniques for translating between electrical signals and embodied human perception (or mostly human perception; cats apparently once played a huge role in electroacoustic research) are significantly shaped by our understanding of sound and hearing. Thus, as Sterne writes, “the institutional and technical protocols of telephony also helped frame…the basic idea of information that subtends the whole swath of ‘algorithmic culture.’”
So, at one level, it’s pretty obvious why hearing is the best and easiest way to perceive data: well over a century of scientific research, both in audition and in signal processing technology, has constructed and cemented an extremely close relationship between hearing and electronic signal processing.
The whole point is that this is a very particular concept of hearing: a culturally, historically, and materially/technologically local idea of what sound is, how “normal” bodies work, and how we interpret information.
There are some assumptions about listening and musicality embedded in Swain and Jones’s own understanding of their project…and thus also in Phantom Terrains itself.
Phantom Terrains relies on listeners’ acculturation to/literacy in very specific musical conventions. Or: its sonification makes sense and is legible to listeners because it follows some of the same basic formal or structural/organizational conventions that most Western music does (pre-20th century art music, blues-based pop and folk, etc.). For example, “the strength of the signal, direction, name and security level on these [WiFi signals] are translated into an audio stream,” and musical elements like pitch, rhythm, timbre, and melody are the terms used to translate digital signal parameters (strength, direction, name, security level, etc.) into audio signal parameters. Phantom Terrains relies on listeners’ already-developed capacities to listen for and interpret sounds in terms of pitch, rhythm, timbre, etc. For example, it builds on the convention of treating melodic material as the sonic foreground and percussive rhythm as sonic background. Swain describes the stream as “made up of a foreground and background layer: distant signals click and pop like hits on a Geiger counter, while the strongest bleat their network ID in a looped melody.” This separation of percussive rhythm (clicks and pops) and pitched, concatenated melody should be easily legible to people accustomed to listening to blues/jazz/rock music, or European classical music (the separation of a less active background and a more melodically active foreground builds on 19th century ideas of foreground and background in European art music). In other words, Phantom Terrains organizes its sonic material in ways that most Western music organizes its sonic material.
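To make that mapping concrete, here is a minimal sketch of how such a translation might work. This is not Swain and Jones’s actual code; the field names (ssid, rssi_dbm, security), the thresholds, and the musical choices are all my assumptions, meant only to illustrate the pitch/rhythm/timbre conventions described above.

```python
# A sketch of a Phantom Terrains-style mapping, NOT the project's actual code.
# Field names, thresholds, and musical choices here are assumptions.

C_MINOR_PENTATONIC = [0, 3, 5, 7, 10]  # semitone offsets used as a melodic palette

def ssid_to_melody(ssid, base_midi=60, length=4):
    """Derive a short looped melody from a network name, so the same
    network always 'sings' the same recognizable phrase."""
    return [base_midi + C_MINOR_PENTATONIC[ord(ch) % len(C_MINOR_PENTATONIC)]
            for ch in ssid[:length]]

def sonify(network):
    """Map one WiFi network to a foreground melody or background clicks,
    following the strong-signal-as-foreground convention."""
    strength = network["rssi_dbm"]           # e.g. -30 (strong) .. -90 (faint)
    if strength > -60:                       # strong: pitched, melodic foreground
        return {"layer": "foreground",
                "melody_midi": ssid_to_melody(network["ssid"]),
                "loop_rate_hz": 1.0,
                "timbre": "open" if network["security"] == "none" else "muted"}
    else:                                    # faint: percussive background layer,
        return {"layer": "background",       # Geiger-counter-like clicks
                "click_rate_hz": max(0.2, (strength + 90) / 10.0),
                "timbre": "geiger_click"}

if __name__ == "__main__":
    networks = [{"ssid": "CoffeeShopWiFi", "rssi_dbm": -42, "security": "wpa2"},
                {"ssid": "xfinitywifi",    "rssi_dbm": -81, "security": "none"}]
    for net in networks:
        print(net["ssid"], "->", sonify(net))
```

Note the design choice the sketch makes explicit: deriving a fixed, repeatable phrase from the network name is what would make “familiar networks…recognizable by their auditory representations,” since the same SSID always yields the same loop.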
The device also builds on established concepts and practices of listening. In the New Scientist piece, Swain describes listening in two different ways: as being “attuned for discordant melodies,” and as a kind of forced exposure to noise that one must learn to “tolerate.” “Most people,” he writes, “would balk at the idea of being forced to listen to the hum and crackle of invisible fields all day. How long I will tolerate the additional noise in my soundscape remains to be seen.” I’ll get to the attunement description shortly; right now I want to focus on listening as noise filtering or noise tolerance. It seems to me that noise filtering and tolerance are the basic, fundamental conditions of listening in contemporary US society (and have been for 100+ years…just think about Russolo’s The Art of Noises, published in 1913). There’s SO MUCH NOISE: vehicles, animals, the weather, other people, ubiquitous music, appliances and electronic devices, machines (fridge, HVAC, etc.)…In order to hear any one thing, any one signal–someone’s voice, a recording or broadcast–we have to filter out all the surrounding noise. And buildings are built, nowadays, to help us do this. Architects incorporate white noise into their designs so that it covers over disruptive noises: HVAC sounds can mute the conversation in the office next to you, making it easier to focus on your own work; restaurant muzak masks nearby conversations so it’s easier to home in on the people at your table; there are a bazillion “8 hours of fan noise” videos on YouTube to mask the night’s bumps and help you sleep. Noise doesn’t necessarily distract us from signal; it can help us home in on the culturally and situationally most salient ones. All this is to say: I don’t think the “extra” layer of sound Phantom Terrains adds to “normal” human bodily experience will ultimately be that distracting. As with all other parts of our sonic environments, we’ll figure out how and when to tune in, and how and where to let it fade into the unremarked-on and not entirely conscious background. We just need to develop those practices, and internalize them as preconscious, habitual behaviors. Our senses process large quantities of information in real time: they’re wetware signal/noise filters. To ‘hear’ data, we’d just have to re-tune our bodies–which will take time, and a lot of negotiation of different practices till we settle on some conventions, but it could happen.
Swain also describes listening as a practice of picking dynamically emergent patterns out of swarms of information: “we could one day listen to the churning mass of numbers in real time, our ears attuned for discordant melodies.” Let’s unpack this: what are we listening to, and how do we home in on that? We’re listening to “a churning mass of numbers”–so, we’re not really listening to sounds, but to masses of dynamic data. We focus our hearing by “attun[ing] to discordant melodies”–by paying special attention to the not-quite-harmonic patterns that emerge from all that noisy data. “Discordant melodies” aren’t noise–they’re patterned/rational enough to be recognizable as a so-called melody; but they’re not smooth signal, either–they’re “dis-cordant.” They are, it seems, comparable to harmonics or partials, the sounds that dynamically emerge from the interaction of sound waves (harmonics are frequencies that are whole-number multiples of the fundamental tone; inharmonic partials are not whole-number multiples of the fundamental tone). To be more precise, these “discordant melodies” seem to most closely resemble inharmonic partials: because they are not resolvable into whole-number multiples of the fundamental frequency, they beat slightly against the fundamental and its harmonics, and thus produce a slight sense of dissonance. Swain’s framing of listening treats data as something that behaves like sound–dynamic flows of data work like we think sound works, so it makes sense that they ought to be translatable into actual, audible sounds. Phantom Terrains treats data as just another part of the frequency spectrum that lies just outside the terrain of “normal” human hearing.
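Since the harmonic/partial distinction is doing real work in that paragraph, a quick numeric sketch may help. This is my own illustration, not anything from the Phantom Terrains project; the frequencies are arbitrary.

```python
# Harmonics vs. inharmonic partials, as a quick numeric illustration.
# (My example, not from the Phantom Terrains project.)

FUNDAMENTAL = 220.0  # A3, in Hz

# Harmonics: whole-number multiples of the fundamental.
harmonics = [n * FUNDAMENTAL for n in range(1, 5)]   # 220, 440, 660, 880 Hz

# An inharmonic partial: close to, but not exactly, a whole-number multiple.
partial = 2.03 * FUNDAMENTAL                          # 446.6 Hz

# Sounded against the second harmonic (440 Hz), the partial produces audible
# beating at the difference frequency -- the "slight sense of dissonance"
# described above.
beat_rate = abs(partial - harmonics[1])               # ~6.6 Hz

print("harmonics:", harmonics)
print("inharmonic partial:", partial, "Hz; beats at", round(beat_rate, 1), "Hz")
```

The point of the sketch is just that an inharmonic partial is recognizably pattern-like (it is almost the second harmonic) while still rubbing against the spectrum around it–a reasonable model for a “discordant melody” emerging from noisy data.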
The kind of listening that Phantom Terrains performs is really, really similar to the kind of listening or surveillance big data/big social media makes possible. I’ve written about that here and here.
Phantom Terrains is just more evidence that we (and who exactly this ‘we’ is bears more scrutiny) intuitively think data is something to be heard–that data behaves, more or less, like sound, and that we can physically (rather than just cognitively) interface with data by adjusting our ears a bit.
But why the physical interface? I think part of it is this: more conventional ways of interacting with data are propositional: they’re equations, statistics, ratios, charts, graphs, and so on. To say that they’re propositional means that they are coded in the form of words, concepts, and/or symbols. They require explicit, intentional, consciously thematized thought for interpretation. Physical interfaces don’t necessarily require words or concepts: you physically interface with the bicycle you’re riding. So, though you are ‘thinking’ about riding that bike, you’re not doing so in terms of words and concepts, but in more kinesthetic and haptic terms. When I’m riding a bike, I don’t think “whoops, off balance, better adjust”–I just intuitively notice the balance problem and adjust, seemingly automatically. I can also ride a bike (or drive, or walk, or make coffee, or fold laundry, or do any number of things) while doing something else that requires more focused, explicit intention, like talking or listening to music. So, these kinesthetic knowledges can themselves be running in (what seems like) the background while our more (seemingly) foreground-focused cognitive processes take care of other business.
Phantom Terrains not only assimilates concepts and practices of data production and consumption to already pretty normalized concepts and practices of human embodiment; its physical interface also naturalizes specific ways of relating to and interfacing with data, sedimenting contested and historically/culturally local knowledges in our bodies as though they were natural, commonsense, instinctual capacities. In a way, the physical interface makes our relationship with data manifest in the same way that our relationships with white supremacy and patriarchy manifest–written in, on, and through our bodies, as a mode of embodiment and embodied knowledge. As Mills argues, “all technical scripts are ‘ability scripts,’ and as such they exclude or obstruct other capabilities” (323). So, the question we ought to ask is: what abilities are activated by physically interfacing with data in the form of sound, and what capabilities are excluded and obstructed?