{"id":19516,"date":"2014-12-19T15:52:03","date_gmt":"2014-12-19T19:52:03","guid":{"rendered":"http:\/\/thesocietypages.org\/cyborgology\/?p=19516"},"modified":"2014-12-19T15:52:03","modified_gmt":"2014-12-19T19:52:03","slug":"listening-to-data-on-phantom-terrains","status":"publish","type":"post","link":"https:\/\/thesocietypages.org\/cyborgology\/2014\/12\/19\/listening-to-data-on-phantom-terrains\/","title":{"rendered":"Listening to Data: on Phantom Terrains"},"content":{"rendered":"<p><img loading=\"lazy\" decoding=\"async\" class=\"alignleft\" src=\"http:\/\/blog.opendns.com\/wp-content\/uploads\/2013\/06\/blog-wifi-webcast1.png\" alt=\"\" width=\"440\" height=\"286\" \/><\/p>\n<p>This is a cross-post from <a href=\"http:\/\/www.its-her-factory.com\/?p=462\">Its Her Factory<\/a>.<\/p>\n<p>Frank Swain has a hearing aid that sonifies ambient WiFi signals. A Bluetooth-enabled digital hearing aid paired with a specially programmed iPhone (and its WiFi detector), the device, named\u00a0<a href=\"http:\/\/phantomterrains.com\/\">Phantom Terrains<\/a>, \u201ctranslate[s] the characteristics of wireless networks into sound\u2026.Network identifiers, data rates and encryption modes are translated into sonic parameters, with familiar networks becoming recognizable by their auditory representations.\u201d The effect, Swain\u00a0<a href=\"http:\/\/www.newscientist.com\/article\/mg22429952.300-the-man-who-can-hear-wifi-wherever-he-walks.html?full=true#.VImcp2TF_xG\">says<\/a>, is \u201csomething similar to Google Glass \u2013 an always-on, networked tool that can seamlessly stream data and audio into your world.\u201d (I\u2019ll leave the accuracy of this comparison to people who have thought more about Glass than I have.)<\/p>\n<p><i>Why<\/i>\u00a0would anyone want to do this? What\u2019s the point of being able to sense, to detect and interpret, the flows of data that are transmitted in your environment? 
For Swain and his collaborator Daniel Jones, data transmissions are just as much a part of the material, engineered, designed, and planned environment as roads, pipes, and buildings are. We exist in a \u201cdigital landscape,&#8221; and just like all landscapes, this one has a social meaning and a politics. \u201cJust as the architecture of nearby buildings gives insight to their origin and purpose, we can begin to understand the social world by examining the network landscape.&#8221;<\/p>\n<p><!--more--><\/p>\n<p>But why hearing? Why is audition the (best? easiest?) medium for modulating the WiFi\/data spectrum into part of the spectrum more or less \u201cnormal\u201d human bodies can interface with? Why is \u201chearing\u201d the \u201cplatform for augmented reality that can immerse us in continuous, dynamic streams of data\u201d? [Aside: Does the platform metaphor frame hearing as an interface for data? And what conceptual work does this metaphor do? Is it a sort of mise-en-place that sets up the argument, makes it easier and faster to put together?]<\/p>\n<p>As Swain writes in the New Scientist,<\/p>\n<blockquote><p>Hearing is a fantastic platform for\u00a0<i>interpreting dynamic, continuous, broad spectrum data<\/i>. Unlike glasses, which simply bring the world into focus, digital hearing aids strive to recreate the soundscape,\u00a0<i>amplifying useful sound and suppressing noise<\/i>. As this changes by the second, sorting one from the other requires a lot of programming.<\/p><\/blockquote>\n<p>Hearing is the medium for translating data into humanly perceptible form because it\u2019s the best input mechanism we have for the kind of substance that data materially manifests as. Contemporary science understands hearing as a \u201cproblem of signal processing\u201d (Mills 332). 
Because more than a century of research into hearing (which was, as\u00a0<a href=\"https:\/\/www.dukeupress.edu\/MP3\/\">Jonathan Sterne<\/a>\u00a0and\u00a0<a href=\"http:\/\/www.nyu.edu\/projects\/nissenbaum\/papers\/Mills,%20Mara%20-%20Do%20Signals%20Have%20Politics.pdf\">Mara Mills<\/a>\u00a0have shown so elegantly, completely tied to technology R&amp;D in the communications industry) has led us to understand hearing itself as dynamic, continuous, broad spectrum signal processing, what better medium for representing data could there be?<\/p>\n<p>The practice of translating between electrical signals and human sense perception is rooted in practices and technologies of hearing. As Mills writes, \u201cElectroacoustics has been at the forefront of signal engineering and signal processing since \u2018the transducing 1870s,\u2019 when the development of the telephone marked the first successful conversion of a sensuous phenomenon (sound) into electrical form and back again\u201d (321). Our concepts of and techniques for translating between electrical signals and embodied human (or, mostly human&#8211;cats apparently once played a huge role in electroacoustic research) perception are significantly shaped by our understanding of sound and hearing. 
Thus, as Sterne writes, \u201cthe institutional and technical protocols of telephony also helped frame&#8230;the basic idea of information that subtends the whole swath of \u2018algorithmic culture.\u2019\u201d<\/p>\n<p>So, at one level, it\u2019s pretty obvious why hearing is the best and easiest way to perceive data: a couple centuries of scientific research, both in audition and in signal processing technology, have constructed and cemented an extremely close relationship between hearing and electronic signal processing.<\/p>\n<p>The whole point is that this is a very particular concept of hearing, a culturally, historically, and materially\/technologically local idea of what sound is, how \u201cnormal\u201d bodies work, and how we interpret information.<\/p>\n<p>There are some assumptions about listening and musicality embedded in Swain and Jones\u2019s own understanding of their project&#8230;and thus also in Phantom Terrains itself.<\/p>\n<p>Phantom Terrains relies on listeners\u2019 acculturation to\/literacy in very specific musical conventions. Or, its sonification makes sense and is legible to listeners because it follows some of the same basic formal or structural\/organizational conventions that most Western music does (pre-20th century art music, blues-based pop and folk, etc.). For example, \u201cthe strength of the signal, direction, name and security level on these [WiFi signals] are translated into an audio stream,\u201d and musical elements like pitch, rhythm, timbre, and melody are the terms used to translate digital signal parameters (strength, direction, name, security level, etc.) into audio signal parameters. Phantom Terrains relies on listeners\u2019 already-developed capacities to listen for and interpret sounds in terms of pitch, rhythm, timbre, etc. For example, it builds on the convention of treating melodic material as the sonic foreground and percussive rhythm as sonic background. 
Swain describes the stream as \u201cmade up of a foreground and background layer: distant signals click and pop like hits on a Geiger counter, while the strongest bleat their network ID in a looped melody.\u201d This separation of rhythm (percussive clicks and pops) and pitched, concatenated melody should be easily legible to people accustomed to listening to blues\/jazz\/rock music, or European classical music (the separation of a less active background and more melodically active foreground builds on 19th century ideas of foreground and background in European art music). In other words, Phantom Terrains organizes its sonic material in ways that most Western music organizes its sonic material.<\/p>\n<p>The device also builds on established concepts and practices of listening. In the New Scientist piece, Swain describes listening in two different ways: as being \u201cattuned for discordant melodies,\u201d and as a kind of forced exposure to noise that one must learn to \u201ctolerate.\u201d \u201cMost people,\u201d he writes, \u201cwould balk at the idea of being forced to listen to the hum and crackle of invisible fields all day. How long I will tolerate the additional noise in my soundscape remains to be seen.\u201d I\u2019ll get to the attunement description shortly; right now I want to focus on listening as noise filtering or noise tolerance. It seems to me that noise filtering and tolerance is the basic, fundamental condition of listening in contemporary US society (and has been for 100+ years&#8230;just think about Russolo\u2019s\u00a0<a href=\"http:\/\/www.artype.de\/Sammlung\/pdf\/russolo_noise.pdf\">The Art of Noises<\/a>, published in 1913). 
There\u2019s SO MUCH NOISE: vehicles, animals, the weather, other people, ubiquitous music, appliances and electronic devices, machines (fridge, HVAC, etc.)&#8230;In order to hear any one thing, any one signal&#8211;someone\u2019s voice, a recording or broadcast&#8211;we have to filter out all the surrounding noise. And buildings are built, nowadays, to help us do this. Architects incorporate white noise into their design so that it covers over disruptive noises: HVAC sounds can mute the conversation in the office next to you, making it easier to focus on your own work; restaurant muzak masks nearby conversations so it\u2019s easier to hone in on the people at your table; there\u2019s a bazillion \u201c<a href=\"http:\/\/www.youtube.com\/watch?v=WedvnI3L8dg\">8 hours of fan noise<\/a>\u201d videos on YouTube to mask the night\u2019s bumps and help you sleep. Noise doesn\u2019t necessarily distract us from signal; it can help us hone in on the culturally and situationally most salient ones. All this is to say: I don\u2019t think the \u201cextra\u201d layer of sound Phantom Terrains adds to \u201cnormal\u201d human bodily experience will ultimately be that distracting. As with all other parts of our sonic environments, we\u2019ll figure out how and when to tune in, and how and where to let it fade into the unremarked-on and not entirely conscious background. We just need to develop those practices, and internalize them as preconscious, habitual behaviors. Our senses process large quantities of information in real-time: they\u2019re wetware signal\/noise filters. 
To \u2018hear\u2019 data, we\u2019d just have to re-tune our bodies&#8211;which would take time, and a lot of negotiation of different practices till we settle on some conventions, but it could happen.<\/p>\n<p>Swain also describes listening as a practice of picking dynamically emergent patterns out of swarms of information: \u201cwe could one day\u00a0<i>listen to the churning mass of numbers in real time, our ears attuned for discordant melodies<\/i>.\u201d Let\u2019s unpack this: what are we listening to, and how do we hone in on that? We\u2019re listening to \u201ca churning mass of numbers\u201d&#8211;so, we\u2019re not really listening to sounds, but to masses of dynamic data. We focus our hearing by \u201cattun[ing] to discordant melodies\u201d&#8211;by paying special attention to the out-of-phase patterns (harmonics) that emerge from all that noisy data. \u201cDiscordant melodies\u201d aren\u2019t noise&#8211;they\u2019re patterned\/rational enough to be recognizable as a so-called melody; but they\u2019re not smooth signal, either&#8211;they\u2019re \u201cdis-cordant.\u201d They are, it seems, comparable to\u00a0<a href=\"http:\/\/en.wikipedia.org\/wiki\/Overtone\">harmonics or partials<\/a>, which are the sounds that dynamically emerge from the interaction of sound waves (harmonics are frequencies that are whole-number multiples of the fundamental tone; partials are not whole-number multiples of the fundamental tone). To be more precise, these \u201cdiscordant melodies\u201d seem to most closely resemble partials: because they are not whole-number multiples of the fundamental frequency, they vibrate slightly out of phase with the fundamental, and thus produce a slight sense of dissonance. Swain\u2019s framing of listening treats data as something that behaves like sound&#8211;dynamic flows of data work like we think sound works, so it makes sense that they ought to be translatable into actual, audible sounds. 
Phantom Terrains treats data as just another part of the frequency spectrum that lies just outside the terrain of \u201cnormal\u201d human hearing.<\/p>\n<p>The kind of listening that Phantom Terrains performs is really, really similar to the kind of listening or surveillance big data\/big social media makes possible. I\u2019ve written about that\u00a0<a href=\"http:\/\/www.its-her-factory.com\/2013\/06\/on-prism-or-listening-neoliberally\/\">here<\/a>\u00a0and\u00a0<a href=\"http:\/\/soundstudiesblog.com\/2014\/10\/20\/the-acousmatic-era-of-surveillance\/\">here<\/a>.<\/p>\n<p>Phantom Terrains is just more evidence that we (and who exactly this \u2018we\u2019 is bears more scrutiny) intuitively think data is something to be heard&#8211;that data behaves, more or less, like sound, and that we can physically (rather than just cognitively) interface with data by adjusting our ears a bit.<\/p>\n<p>But why the physical interface? I think part of it is this: More conventional ways of interacting with data are propositional: they\u2019re equations, statistics, ratios, charts, graphs, and so on. To say that they\u2019re propositional means that they are coded in the form of words, concepts, and\/or symbols. They require explicit, intentional, consciously thematized thought for interpretation. Physical interfaces don\u2019t necessarily require words or concepts: you physically interface with the bicycle you\u2019re riding. So, though you are \u2018thinking\u2019 about riding that bike, you\u2019re not doing so in terms of words and concepts, but in more kinesthetic and haptic terms. When I\u2019m riding a bike, I don\u2019t think \u201cwhoops, off balance, better adjust\u201d&#8211;I just intuitively notice the balance problem and adjust, seemingly automatically. 
I can also ride a bike (or drive, or walk, or make coffee, or fold laundry, or do any number of things) while doing something else that requires more focused, explicit intention, like talking or listening to music. So, these kinesthetic knowledges can themselves be running in (what seems like) the background while our more (seemingly) foreground-focused cognitive processes take care of other business.<\/p>\n<p>Phantom Terrains not only assimilates concepts and practices of data production and consumption to already pretty normalized concepts and practices of human embodiment; its physical interface also naturalizes specific ways of relating to and interfacing with data, sedimenting contested and historically\/culturally local knowledges in our bodies as though they are natural, commonsense, instinctual capacities. In a way, the physical interface makes our relationship with data manifest in the same way that our relationship with white supremacy and patriarchy manifests&#8211;written in, on, and through our bodies, as a mode of embodiment and embodied knowledge. As Mills argues, \u201call technical scripts are \u2018ability scripts,\u2019 and as such they exclude or obstruct other capabilities\u201d (323). So, the question we ought to ask is: what abilities are activated by physically interfacing with data in the form of sound, and what capabilities are excluded and obstructed?<\/p>\n","protected":false},"excerpt":{"rendered":"<p>This is a cross-post from Its Her Factory. Frank Swain has a hearing aid that sonifies ambient WiFi signals. 
A Bluetooth-enabled digital hearing aid paired with a specially programmed iPhone (and its WiFi detector), the device, named\u00a0Phantom Terrains, \u201ctranslate[s] the characteristics of wireless networks into sound\u2026.Network identifiers, data rates and encryption modes are translated into [&hellip;]<\/p>\n","protected":false},"author":1929,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[9967],"tags":[],"class_list":["post-19516","post","type-post","status-publish","format-standard","hentry","category-commentary"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/thesocietypages.org\/cyborgology\/wp-json\/wp\/v2\/posts\/19516","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/thesocietypages.org\/cyborgology\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/thesocietypages.org\/cyborgology\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/thesocietypages.org\/cyborgology\/wp-json\/wp\/v2\/users\/1929"}],"replies":[{"embeddable":true,"href":"https:\/\/thesocietypages.org\/cyborgology\/wp-json\/wp\/v2\/comments?post=19516"}],"version-history":[{"count":1,"href":"https:\/\/thesocietypages.org\/cyborgology\/wp-json\/wp\/v2\/posts\/19516\/revisions"}],"predecessor-version":[{"id":19517,"href":"https:\/\/thesocietypages.org\/cyborgology\/wp-json\/wp\/v2\/posts\/19516\/revisions\/19517"}],"wp:attachment":[{"href":"https:\/\/thesocietypages.org\/cyborgology\/wp-json\/wp\/v2\/media?parent=19516"}],"wp:term":[{"taxonomy":"category","embe
ddable":true,"href":"https:\/\/thesocietypages.org\/cyborgology\/wp-json\/wp\/v2\/categories?post=19516"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/thesocietypages.org\/cyborgology\/wp-json\/wp\/v2\/tags?post=19516"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}