Mute

A favorite pastime of mine is listening to podcasts or stories while playing mindless mobile games. For me, it’s the perfect blend of engagement and passivity—just enough informational input to be stimulating while still being relaxing, and put-downable enough that I can pause to do other things like prepare a meal or… get back to work. The combination of aural and visual stimulus hits just the right balance. I’m always looking for other good combinations of aural and visual input for various tasks: a TV show I can watch while organizing my PDFs, the right music for reading, another for writing, and so on. The proliferation of media texts, and their increasing availability, has made much of my life a mixing and matching of sensory inputs.

The degree of this mixing and matching is in part afforded by digital media. Of course, other media epochs had their own soundscapes and visual texts, like listening to a record while reading or watching television while sewing. But the multitude of options for sensory input in digital media is of a different quality. And as platforms alter their interfaces and affordances, the character of our sensory experience of media changes as well.

Autoplay is one of the many blights upon the internet. I have a personal policy of not linking to any website that has an audible autoplay. In my early years on the internet, nearly every website had some kind of autoplay advertising, and I remember clicking and scrolling everywhere trying to find the pause button so my friends and I could watch our Destiny’s Child music video in peace. Then, for a long while, autoplay seemed to go away, or at least lessen. Ad blockers played an important role here. But now, autoplay videos are once again everywhere, with news sites being the worst offenders.

Can autoplay be done right? I think it can, and I hate to admit it, but Facebook seems to have the right idea. Their integration of autoplay in the newsfeed has been relatively unobtrusive and quite functional for a website whose updates so often disappoint users. Videos only play when they are on screen, audio only plays when clicked, and you can scroll back up to a video and it will continue playing right where you left off.

But I never click for sound. To be more accurate, 99.9% of the time, I don’t click for sound. An informal survey of my friends suggests that a lot of people don’t. It’s likely because so often the sound is… disappointing. How many people have ruined a perfectly good video of cats chasing lasers with some high-pitched, annoying keyboard music? Even when the music is done well, as is often the case with Buzzfeed’s Tasty videos, it just isn’t worth it. For me, the inclusion of sound in a 42-second video about mini biscuit pepperoni pizza balls isn’t worth the click. And then, I have to exit out of the video before it automatically starts playing another. What a hardship!

It’s difficult for those of us raised in the era of popular mass media to understand the viewing experience of silent film. But there seems to be a resurgence of silent visual media that, rather than developing under technological constraints, develops thanks to the affordances of digital media.

Take Reddit for example. If someone posts a 1:30 long gif or HTML5 video on r/mildlyinteresting, inevitably someone will ask why on earth they didn’t just post the video. And, inevitably, someone will respond with why they prefer the silent version: I’m at work, or it loads faster, or I listen to music when I browse Reddit and why does the sound on this video of glass blowing matter anyway? I’m much more likely to open a link to a silent gif than a video; even if it’s the same content and the same length, it simply feels like less of an investment.

Walter Ong wrote a great deal about the ways media developments change our entire sensory experience and, subsequently, our cognition. Father Ong divided these media developments into three major categories: primary orality, literacy, and secondary orality. Primary orality describes cultures that do not have written language. Ong lays out several features of these cultures: memorization strategies such as proverbs and alliteration, circular storytelling devices, and the interiority of thought. Literacy introduced very different language practices, such as linear thinking and abstraction.

Secondary orality came with the onset of electronic media; while Ong did not theorize this last concept in great detail, he was interested in the ways auditory media reintroduced some of the characteristics of primary orality. His argument was not that these characteristics had disappeared with literacy, but rather that human experience existed on a continuum of orality and literacy, and that electronic media such as the telephone and television had the potential to introduce a more hybrid phenomenon.

But what of digital media? Because digital media produces such a complex and variable mediascape, it is much more difficult to develop the kind of straightforward and containable categories that made Ong’s work so influential. Oren Soffer (academic text with paywall) introduced the concept “silent orality” as it occurs in SMS texts. This work analyzed the oral features of written, conversational text, troubling the idea of secondary orality and the division between primary orality and literacy.

Ong characterized electronic media as oral because of, naturally, its aural quality. Fitting silent visual media into this schema is considerably more difficult because Ong did not make room in his continuum of orality and literacy for visual communication. Once visual communication becomes silent, as in the case of click-for-sound videos and gifs, its effect on the sensory experience becomes more difficult to describe. Perhaps a new category in Ong’s schema is needed—visuality may be a good start. With this category we may distinguish between aural visuality and silent visuality, and begin to theorize why silent videos are so popular across digital media, from texting to social media.

Britney is on Twitter.