
Who decided the Minority Report computer was the goal of 21st century interfaces? Why does anyone think it's a good idea? Could you imagine doing a spreadsheet on that thing? And why the hell are they using physical media to transport information? What is so alluring, exactly, about this gigantic computer that requires two (two!) Nintendo Power Gloves to operate and can only receive data (apparently) through physical media drives the size of VHS tapes? The resolution looks awful and, since the screens are transparent, I can only assume your computer always has to be up against a blank wall. But none of those things is nearly as important as the human element it ignores. The computer has no soul. It's a sterile interface meant to catch murderers (or frame people as such), not share family photos. The obsession with the Minority Report computer is a betrayal of everything that is human about computers.

I would rather sit in a La-Z-Boy with a keyboard and a scroll ball than squint at a hazy display while wearing raver hands.

Frankly, I'd rather have Kevin Smith's setup in Live Free or Die Hard. It looks comfortable: a place where I wouldn't mind sitting down and reading Reddit in between emails. I cannot imagine blogging on the Minority Report computer, but with Kevin Smith's basement command center, I would write volumes. Speaking of Reddit, consider /r/battlestations, the community of avid computer users who share pictures of their desk setups. A few weeks back I submitted my somewhat modest "battlestation" and was surprised by the enthusiastic response:

My own “battlestation.” It got 55 upvotes!

The photos and comments in /r/battlestations are not just about computer specs and tips for cable management (although there is plenty of that): the images are treated with admiration, respect, excitement, and even love. The genre sits somewhere in between photos of your souped-up car and baby pictures. They are machines, but they are also intimate spaces [porn joke here] that have been tediously modified and perfected. They are self portraits of cyborgs.

A submission by user “aTriumphForMan” titled “His & Hers” to /r/macsetups, a subreddit that is very similar to /r/battlestations.
This one was titled "My Work-In-Progress 'She Cave' Battlestation." Submitted by GeoRhi.

Most submissions are called "battlestations," but the more creative titles are very telling. The "she cave" above is one example. Others include "Hermit Crablestation," "My Sanctuary," and "This is my haven." The easy conclusion to make here is that the love these people have for their computers borders on the pathological. These are inanimate objects that are no more deserving of love and affection than a toaster oven or a socket wrench. But unlike a toaster oven or socket wrench, these battlestations are places where friends and family keep in touch, where victories are won and lives are saved. They are sites of creativity and much-needed escapism. These spaces aren't always small or modest, but they are all personal.

I suppose the Minority Report computer could be someone's future sanctuary. Samsung, at this year's CES, demoed a "Transparent Smart Window" (an opaque window would be dumb, wouldn't it?) that is meant to sit in a kitchen window and act as a 42-inch media center. I suppose this, connected to a hacked Kinect controller, would be a big step toward the Minority Report computer. I can certainly see the benefit of using gestures to control a computer while I knead bread dough or wash the dishes, but never as part of my hermit crablestation sanctuary haven. It's too far away, too sterile. I want something with the contours of a typewriter, not a television.

When I say the Minority Report computer betrays everything human about computers, I'm talking about that tacit relationship you have with your workstation. That special angle you tilt your screen, the perfect wear of the keyboard, the improvised iPad stand. This sort of material relationship is still possible with the Tom Cruise 5000, but the machine certainly does not seem to invite it. You're not supposed to touch the computer (unless you are inserting the over-sized physical media into it) and it has to be enormous. I say this only because I can't imagine making those huge hand gestures on an iPad-sized screen. I don't hate the Minority Report computer; I just hate what it makes designers value: the big, the detached, and the sterile. I want a future with computers that make havens and sanctuaries, not movie theaters.

The case I'm making here is obviously a polemical one. Not just because it is controversial, but because it's based solely on my personal opinion. I think technologies that you touch and feel are better than ones that you gesture and wave at. I think the space between a person and a typewriter is better than the space between a television and its viewer. The intimate desk spaces I see in /r/battlestations contradict the very title of the community. These don't feel like battlestations; they feel like home.

Stacks of Kente and cotton cloth sit in piles, waiting to be stamped with Adinkra patterns. Note the “pixelated” patterns in the center stack.

In part 1 I opened with a rundown of the different kinds of "digital divides" that dominate the public debate about low-income access to technology. Digital divide rhetoric relies on a deficit model of connectivity. Everyone is compared against the norm of the richest Western users, and anything else is a hindrance. If you access Twitter via text message or rely on an internet cafe for regular internet access, your access is not considered different, unique, or efficient. Instead, these connections are marked as deficient and wanting. The influence of capitalist consumption might drive individuals to want nicer devices and faster connections, but who is to say faster, always-on connections are the best connections? We should be looking for the benefits of accessing the net in public, or celebrating the creativity necessitated by brevity. In short, what kinds of digital connectivity are Western writers totally blind to seeing? The digital divide has more to do with our definitions of the digital than actual divides in access. What we recognize as digital informs our critiques of technology and extends beyond access concerns and into the realms of aesthetics, literature, and society.

I think it is safe to say that most readers of this blog think they know better: fetishizing the real is for suckers. The New Aesthetic, a nascent artistic network, is all about crossing the border between the offline and the online. Pixelated paint jobs confuse computer scanners and malfunctioning label makers print code on Levis. The future isn't rocket-powered, it's pixelated. Just as the rocket-fueled future of the 50s was painstakingly crafted by cold warriors, the New Aesthetic of today is the product of a very particular worldview. The New Aesthetic needs to be situated within its global context and reconsidered as the product of just one kind of future.

I should start by saying that Bridle thinks "The New Aesthetic" is "a rubbish name." Regardless of whether or not it's a satisfying term, he must have originally chosen it for a reason, and it's probably the same reason that everyone else has embraced it as the moniker for all things de-resed, pixelated, and time-shifted. It's a straightforward term that lets the reader know that it is not a movement, a school of thought, or an -ism. There's no gatekeeper that says what is or is not the New Aesthetic. The New Aesthetic is a consequence of a digitally augmented environment. You know the New Aesthetic when you see it. As Bruce Sterling says,

[T]he New Aesthetic is culturally agnostic. Most anybody with a net connection ought to be able to see the New Aesthetic transpiring in real time. It is British in origin (more specifically, it’s part and parcel of a region of London seething with creative atelier “tech houses”). However, it exists wherever there is satellite surveillance, locative mapping, smartphone photos, wifi coverage and Photoshop.

The New Aesthetic is comprehensible. It’s easier to perceive than, for instance, the “surrealism” of a fur-covered teacup. Your Mom could get it. It’s funny. It’s pop. It’s transgressive and punk. Parts of it are cute.

Perceiving an object as “The New Aesthetic” and making a piece of art in the style of The New Aesthetic are two totally separate things, but the differences are usually inconsequential. For example, I might look at a mannequin and see abstract cubes, not pixels:

Photo credit: James Bridle, All Rights Reserved http://www.flickr.com/photos/stml/6203921904/

Now, look at the weaving pattern of this traditional cloth, stamped with symbols from the Ashanti region of Ghana:

Weaving and stamping done under the supervision of Adinkra master Gabriel Boakye; stamping by Samaria Mitchell. Photo credit: David Banks

Without getting too far into the artist's intent (and for my purposes, I don't think I need to), let us consider the ambiguity of perception and intention. More specifically, why are we ready to see the pixels in the mannequin's head, but not in the cloth? The answer lies in something similar to a digital divide: not an actual deficit of access, but a deficit in Western perceptions of access.

The way we get online matters. It determines what "The Internet" means and what it looks like. At this year's Theorizing the Web conference, I was on a panel with Dr. Katy Pearce, who made a very convincing case for the "device divide." (Slide show here.) Her work in Armenia showed that even when you control for age, socioeconomic status, and geography, the device you use to connect to the internet has a powerful effect on what you do on the internet. The mobile internet may let you visit dating websites and play games, but it's much more difficult to work or create content through a mobile device. Ghanaians rely heavily on their mobile devices to get online, but they also rely on internet cafés. Ghanaians have internet access, but it doesn't look like American internet access. Jenna Burrell describes Ghanaian internet cafe use as,

…something akin to an arcade where they could chat with girls online (or offline) or watch American hip-hop and rap music videos. Most Internet users were using chat clients, especially Yahoo chat, or they were reading and writing e-mail in web-based applications like Hotmail. In interviews with users recruited from these Internet cafés, I was frequently told that they used the Internet to find foreign pen pals. The use of search engines and general web surfing activities were extremely uncommon. These online pursuits were often much more than a pleasant diversion and centered on improving their life circumstances by gaining powerful allies in foreign lands. (Burrell, 2010)

A rabbit using a computer, made out of a spark plug and scrap metal. I bought this in a touristy market in Accra. It sat alongside other spark plug rabbits that played golf, rode horses, and carried suitcases.

This sounds a lot like Pearce's conclusions, although Armenians seem less interested in foreign pen pals. It is important to know what The Internet looks like to different people in different parts of the world, because an American or British New Aesthetic is going to look different than a Ghanaian or an Armenian one. This information is necessary, but not always sufficient, which is why a global artistic network must include as many people, from as many cultures, as possible. Back in May, I (ironically) offered a few critical questions about the New Aesthetic (now numbered for ease of reference):

  1. What parts/aspects/facets of the digital are offered up as an ironic twist on outmoded technologies? In other words: Low res to whom?
  2. Where do we find the New Aesthetic? Where is the New Aesthetic conspicuously absent?
  3. What is its perspective? I see a lot of top-down.
  4. What technologies bring the New Aesthetic into existence? I see lots of military technology, big science, corporate logos, and agribusiness. What happens when we appropriate these artifacts and perspectives? What happens when we consume them? What happens when we prosume them?
  5. What parts of the digitally augmented world are left out, over-simplified, or left unquestioned? I see very little code.
  6. Is the pixel the sine qua non of the computer screen or does something else pre-date it?
  7. What is involved in the process of making things that embody the New Aesthetic?
  8. What is not the New Aesthetic?
This entire essay is an exploration of question 8. Questions 7 and 6 speak to the TechnoCulture as Corruption or Achievement rhetoric that I introduced last week. Number 5, I am willing to admit, is too blunt to be useful. There is plenty of code to be seen, and the concept of "simplicity" is far too fungible to be of use in this context. What is left out are all the things we recognize when answering question 8. Questions 4 and 3 are deeply linked to 6. The artifacts that make up the majority of what we call the New Aesthetic are just one (albeit very prominent) form, one that relies on well-known brands and large technical systems. We are blind to other forms because they do not belong to our lived, cybernetic experience. I hope to implicitly reply to questions 1 and 2, although a more thorough treatment can and should be done at a later time.
The research team from RPI and KNUST pose with local Adinkra printers from Ntonso, Ashanti Region.

Contemporary anthropology and allied fields of study have spent the past thirty years challenging the technology/culture dichotomy, replacing it with a more fluid, dynamic, and adaptive depiction of both. From Haraway's Cyborg Manifesto to Jenna Burrell's work on Ghanaian internet cafés, anthropologists are bridging the rift between the cultural and the technological. Our work in Ghana, under the direction of Dr. Ron Eglash (a student of Haraway and, for full disclosure, my academic advisor), sits firmly in this tradition. Our research group works on many different things throughout the month and, for the purposes of this essay, I will ignore my own work and make some brief comments on another group's research on Adinkra stamping and Kente cloth weaving. They are developing browser-based math and computer science teaching tools based on weaving and stamping techniques. You can try the kente computing and adinkra grapher (and the rest of our culturally situated design tools) at csdt.rpi.edu.

Science, technology, engineering, and mathematics (STEM) education is usually taught ahistorically and outside of anything typically identified as "culture." Two plus two equals four no matter where you were born, what god you believe in, or what kind of food you eat. The CSDTs are meant to introduce middle-school-age students to math and science using real-world examples from various indigenous cultures. This goes beyond changing names in word problems (e.g., Jose and Maria trade apples instead of Bob and Mary) by actually demonstrating the implicit math and science that goes into these practices. For example, the trapezoidal Kente pattern below follows a very particular algorithm that produces predictable results:

With each new line the weaver must move a fixed number of strands toward the center. The vertical lines also follow a very specific pattern and width ratio. All of these patterns can be reproduced and expressed as functions. Our teaching tools allow students to program traditional and original Kente weaving patterns using a drag-and-drop user interface.
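To make that concrete, here is a minimal sketch of the row-by-row logic in Python. It is my own toy illustration, not the actual CSDT tool, and the symbols and numbers are invented for the example:

```python
# A toy version of the trapezoidal Kente logic: each new row moves a fixed
# number of strands ("step") in from both edges, so the whole shape is a
# predictable function of the row index.

def trapezoid_rows(width, rows, step, fill="#", ground="."):
    """Build a trapezoidal block the way a weaver does: line by line."""
    pattern = []
    for row in range(rows):
        inset = row * step                 # fixed move toward the center
        woven = max(width - 2 * inset, 0)  # strands remaining in this row
        pattern.append(ground * inset + fill * woven + ground * inset)
    return pattern

for line in trapezoid_rows(width=21, rows=6, step=2):
    print(line)
```

Each printed row is one pass of the loom: the same fixed rule, applied to the previous row's state, yields the finished trapezoid.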

Certain Adinkra stamps follow very particular geometric curves. For example, the Gye Nyame (which represents the supremacy of God and depicts a fist holding knives) has logarithmic spirals coming off the center "fist." Adinkra symbols also rely heavily on the principles of transformational geometry: reflection, dilation, rotation, and translation.
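For readers who want the geometry spelled out, here is a small Python sketch (again my own illustration, not part of the CSDT software) of a logarithmic spiral, r = a·e^(bθ), together with the four transformations just named:

```python
import math

def log_spiral(a, b, turns, samples=200):
    """Points on a logarithmic spiral r = a * e^(b * theta)."""
    points = []
    for i in range(samples):
        theta = (i / samples) * turns * 2 * math.pi
        r = a * math.exp(b * theta)
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# The four transformations named above, applied to (x, y) points:
def reflect(p):
    return (-p[0], p[1])                # reflection across the y-axis

def dilate(p, k):
    return (k * p[0], k * p[1])         # dilation (scaling) from the origin

def rotate(p, t):
    c, s = math.cos(t), math.sin(t)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1])  # rotation by angle t

def translate(p, dx, dy):
    return (p[0] + dx, p[1] + dy)       # translation by (dx, dy)

# One spiral "arm" and its mirror image, like twin curls off a central fist:
arm = log_spiral(a=1.0, b=0.2, turns=2)
mirrored = [reflect(p) for p in arm]
```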

A screen-printed image of Barack Obama is framed by the Gye Nyame (left) and Sankofa (right). The literal translations of Gye Nyame and Sankofa are "Except for God" and "Return and Get It," respectively. Contemporary meanings are "Supremacy of God" and "Learn from the past." All in all, an excellent campaign banner for an American incumbent president.

The underlying math in these designs is important for the New Aesthetic because it forces us to reconsider our definitions of the digital. The Kente cloth is made line by line, just as your graphics card draws images on the monitor or your printer puts images to paper: line by line, dictated by a set algorithm. If/then statements, RGB values, and algorithmic patterns are all there, set in cloth by analog computers called looms. If this is not the New Aesthetic, images made tangible from the binary world, then we need a more precise definition of the New Aesthetic. Consider each cloth and pattern akin to the computer punch cards of the last century. These outputs produce images that also tell a story. It's practically hypertext. One image calls up an entire proverb, and an entire cloth can tell a story. Does it have to come from networked computers? Does it have to mimic or resemble something a 1990s computer user from America would recognize? Do we have to assume that the artist has regular access to our kind of online experience? Let us explore one final example.

Back in 2007, Eglash gave a TED talk about African fractals. My favorite part (and the topic I will be discussing presently) starts at 10:50.


As Eglash describes in the video, Bamana sand divination relies on a pseudo-random number generator to produce a narrative about the future. This divination system was picked up by European alchemists and caught the interest of the German mathematician Gottfried Leibniz. Leibniz used the base-2 counting sequence to develop what we now call binary code. Every computer on earth is, in essence, a collection of mystics divining the future in the sand. It is precisely this historical lineage that makes the "New Aesthetic" a very old concept indeed. As we draw our own lines in the sand, defining what is and is not the New Aesthetic, we should consider our source material as the products of only one particular TechnoCulture. One that is pretty late to the game, if you are willing to count these 11th-century geomancers as early forefathers of the computing age. If the New Aesthetic is truly about the intersectionality of the online and the offline, the digital made analog and back again, then we must decide what makes pixelated camouflage the New Aesthetic and not Kente weaving, Adinkra stamping, or Bamana sand divination. If, while studying these borderlands, we find that there is no discernible moment where the virtual was not substantiated in the sand, on cloth, or on display in a SoHo art gallery, what becomes of the "New" of the New Aesthetic? If one concludes that these indigenous patterns are mathematical, even programmable, but not the New Aesthetic, then we need a new definition that makes a more explicit case.
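As a rough toy model (my own simplification, not a faithful account of Bamana practice), the generative move Eglash describes is parity: make an unplanned number of marks in the sand, pair them off, and keep only whether one is left over. Stack four of those parities and you have, in effect, a base-2 numeral:

```python
import random

def parity_stroke():
    """One 'stroke': an unplanned number of marks, paired off; keep only
    whether a single mark is left over (odd = 1) or not (even = 0)."""
    marks = random.randint(1, 16)   # stands in for the diviner's rapid dashes
    return marks % 2

def divination_figure(rows=4):
    """Stack four parities into a figure: effectively a 4-bit binary number."""
    bits = [parity_stroke() for _ in range(rows)]
    value = int("".join(str(b) for b in bits), 2)
    return bits, value

bits, value = divination_figure()
print(bits, "->", value)   # e.g. [1, 0, 1, 1] -> 11
```

The full system, as Eglash describes it, is also recursive, feeding earlier outputs back in as inputs; this sketch shows only the parity-to-binary step.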

The pixelated future is being sold to us by Google and Facebook just like the rocket age was sold to the American public by a nation run by cold warriors. In its current form, the New Aesthetic is anything but "culturally agnostic." The future might go in the direction American entrepreneurs want it to, but let us make sure everyone can participate on their own terms. Let us make sure that everyone is included and given their due. If the future's aesthetic is purely a product of the London art scene and Silicon Valley, I see little hope for the various progressive, metropolitan projects of the left. Instead, I see another imposed vision dictated by a select, elite few. The New Aesthetic has its roots in a global, multicultural effort that stretches from the sand of Egypt to the lofts of Austin, Texas. Without acknowledging either the privilege of the future's aesthetic, or including the African and Middle Eastern roots (source code?) of our augmented world, we are blindly following another pied piper of mythical progress. The New Aesthetic is crowd-sourced and accessible to anyone, but let's situate our Western New Aesthetic within its global context. What kinds of New Aesthetic are we blind to, or do not recognize as such? Let us make sure that the aesthetic of the future includes everyone.

An Ashanti enstooling ceremony, recorded (and presumably shared) through cell phone cameras (marked).

The "digital divide" is a surprisingly durable concept. It has evolved through the years to describe a myriad of economic, social, and technical disparities at various scales across different socioeconomic demographics. Originally it described how people of lower socioeconomic status were unable to access digital networks as readily or easily as more privileged groups. This may have been true a decade ago, but that gap has gotten much smaller. Now authors are cooking up a "new digital divide" based on usage patterns. Forming and maintaining social networks and informal ties, an essential practice for those of limited means, is described as nothing more than shallow entertainment and a waste of time. The third kind of digital divide operates at a global scale: industrialized or "developed" nations have all the cool gadgets, and the global south is devoid of all digital infrastructures (both social and technological). The artifacts of digital technology are not only absent (so the myth goes), but the expertise necessary for fully utilizing these technologies is also nonexistent.

Attempts at solving all three kinds of digital divides (especially the third one) usually take a deficit model approach. The deficit model assumes that there are "haves" and "have nots" of technology and expertise. The solution lies in directing more resources to the have nots, thereby remediating the digital disparity. While this is partially grounded in fact, and most attempts are very well-intended, the deficit model is largely wrong. Mobile phones (which are becoming more and more like mobile computers) have put the internet in the hands of millions of people who do not have access to a "full sized" computer. More importantly, computer science, new media literacy, and even the new aesthetic can be found throughout the world in contexts and arrangements that transcend or predate their Western counterparts. Ghana is an excellent case study for challenging the common assumptions of technology's relationship to culture (part 1) and problematizing the historical origins of computer science and the digital aesthetic (part 2).

One of two surveillance cameras that sit just outside the palace doors.

Last Wednesday, our team attended the "enstoolment" (a ceremony similar to a coronation) of a local chief in the Ashanti region. Tons of people from all around dressed in their finest clothes and converged on the palace grounds to share in the festivities. Some wore traditional toga-like robes, while others dressed in polo shirts and slacks. The court sat on traditional stools, while most of the audience sat on plastic lawn chairs adorned with various Adinkra symbols. Black is the traditional color of formal events, which meant that when audience members took out their smartphones and tablets their lit screens shone like stars. Every few minutes, a new device emerged from underneath handwoven pieces of black cloth. The whole event challenges popular ideas about humans' relationship to technology and what it does to our existing social habits. Just because there are cameras and internet connections does not mean people are disinclined to show up in person. An enstoolment is meant to happen in a particular place, with a bodily co-present audience. People still find the embodied ceremony meaningful, so they continue to show up and enjoy it. It is the same reason we still go to concerts even though we have plenty of recorded music: people like being around other people.

I am not trying to idealize or simplify Ashanti culture, or the role of technology in our lives. Of course, not everyone has a cell phone or an iPad. There are economic disparities in Ghana just like everywhere else in the world, and many people cannot afford a device that records video and puts it on the internet. Similarly, not every person at the enstooling ceremony was enraptured in the ecstatic wonder of the ceremony. People made time for the enstoolment like you make time for a wedding or a city council meeting. Some people are more into it than others, some are tired, some were dragged there by a friend who is really into it, and a few felt obliged because their relative was a part of the ceremony. When we start thinking of "culture" as a distinct object, we begin to essentialize it: we turn it into a caricature of itself. An Ashanti businessman filming a royal ceremony with an iPad while dressed in traditional cloth might seem strange, novel, or even contradictory, but such reactions are based in a reductionist logic. Culture is much more resilient and dynamic than something that can be dismantled by the very existence of a smartphone. As Clifford Geertz once wrote, "Culture is simply the ensemble of stories we tell ourselves about ourselves." Those stories can be told in person, and they can be told via text message. The stories will change, and the culture will evolve, but it is up to those who self-identify as Ashanti (or whatever the culture may be) to decide what inventions are compatible with their culture. We heard that modernizing the facilities with indoor plumbing and asphalt roads was a major debate. I think it is safe to assume that there are strong opinions about smartphones as well. We might be tempted to assume that the royalty are the Luddites, the keepers of the Old Ways, but one cannot ignore the modern sound system and surveillance cameras that were also installed and utilized.

The skyline of the Kumasi Central Market is dominated by cell phone and radio towers. In the market below, dozens of vendors sell first and second hand cell phones, electronics, and computer parts.

Even if they have an introductory knowledge of anthropology, most people think of culture as a static object. There are common references in the media to national culture, sports culture, geek culture, online culture, pop culture, and urban culture. Mainstream media outlets and popular nonfiction authors play fast and loose with the term, using it as a shorthand for any kind of shared practice, while also confining it to some kind of exoticized other. Culture is something you outgrow (youth culture) or something you visit (Their customs are so unique!). As my previous Geertz quote shows, anthropologists like to point out that our actions are always embedded in culture. From your birthday to how often you do your laundry, culture (perhaps more accurately: cultures) informs what you say and do. Raymond Williams describes culture as simultaneously significant and mundane:

“We use the word culture in these two senses: to mean a whole way of life–the common meanings; to mean the arts and learning–the special processes of discovery and creative effort. Some writers reserve the word for one or other of these senses; I insist on both, and on the significance of their conjunction. The questions I ask about our culture are questions about deep personal meanings. Culture is ordinary, in every society and in every mind.”

A row of stores that specialize in computer parts, electronic components, and machine repair. The narrow corridor acts as checkout line, sidewalk, and do-it-yourself repair shop.

Somehow, open definitions like Williams' and Geertz's have led many authors (claiming authority on the subject) to consider information technology as somehow apart from, or even against, culture. Culture, as the myth goes, is something established in the past that pushes against a constant barrage of modernizing and secularizing forces. We often think of culture as a paradox: on the one hand it is timeless, transcendent, and forms the bedrock of society. On the other it is fragile, easily sullied, and embattled. The former is utilized as a source of national pride, whereas the latter is often used to marginalize others. Technology is seen as the achievement of the strong culture (Technology as Achievement) and the corrupting influence on the weak (Technology as Corruption). Fascistic movements always appeal to the superiority of the national culture. It lasts because it is strong, and it is strong precisely because it can achieve great feats of technoscience. It trains athletes, fosters scientific pursuits, and wins great military victories. Eugenics campaigns were buoyed by the logic of Technology as Achievement. The movie "The Gods Must Be Crazy" is an excellent example of Technology as Corruption. Produced and funded by South African apartheid supporters, the "Gods Must Be Crazy" movies were made to convince audiences that racial segregation was meant to protect native populations from the corrupting influence of the modern nation state. South African tribes were depicted as child-like imps that lived in a world of natural mysticism. The most minor of intrusions (the existence of a Coke bottle) would be enough to destroy their entire culture. Technology as Achievement and Technology as Corruption do not necessarily lead to fascism and apartheid but, taken to their logical extremes and popularly upheld, can justify the worst of atrocities.

Bottom left: a woman scans the audience with her iPad’s video camera.

Societies have always dealt with the problem of allowing certain kinds of technologies and prohibiting others. From keeping kosher to outlawing semi-automatic rifles, groups of people must make collective decisions about what tasks can acceptably be delegated to machines and what sorts of human behavior can be augmented with machinery. We shouldn't treat culture as a static entity: something formed in the past that other people do. When we do treat it as such, we run the risk of falling into the narratives of Technology as Corruption and Technology as Achievement. Anthropologists maintain a broad definition of culture for a reason: it lets us discuss technology as part of culture, not outside of it.

The research for this essay was supported by and was originally posted in RPI’s 3Helix Program funded by the National Science Foundation.

You can follow David on twitter: @da_banks

Long-time Cyborgology readers might remember that last year I went to Kumasi, Ghana to install an automated SMS system to help Ghanaians find condoms. This year, we are going to install ten vending machines across the city in hopes that people are more comfortable anonymously buying condoms from machines than from crowded pharmacies. Since street names and building addresses are rare, giving directions means relying on landmarks to navigate the urban environment. When I asked people to draw a map that would help someone get to a hospital, I usually got something that looked more like a subway map than a bird's-eye view of the area. This is interesting because 1) it calls into question our definition of what a map might look like and how it would function, and 2) mental maps of cities are not only spatial, they can be relational and contingent. In other words, the most important thing about a landmark might not be its specific location in relationship to the rest of the city, but where it sits in a given set of instructions. This is the kind of urban navigation that we must work with when installing our condom vending machines.

We have ten machines. Five are rigged to send text alerts to the owners of the machines when they are running low on condoms. The other five will not have this feature. All ten will be locatable using the SMS finder system. Through our partners at the Suntreso Government Hospital we've been contacting club managers and gas station owners as potential sites for the vending machines. I'll be in Ghana from July 1st through the 15th working on this next phase. If you'd like to follow the work, please visit our 3Helix site (a rough sketch of the alert logic follows the link):

http://www.3helix.rpi.edu/?cat=13
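For concreteness, here is a minimal sketch of the low-stock alert idea in Python. Every name and number in it is hypothetical; the actual machines' hardware and SMS gateway are not described in this post:

```python
# Hypothetical sketch of the low-stock alert: the threshold, identifiers, and
# SMS gateway are invented for illustration.

LOW_STOCK_THRESHOLD = 10  # assumed value; the real threshold isn't given

def check_stock(machine_id, condoms_remaining, owner_phone, send_sms):
    """Text the machine's owner once stock falls below the threshold."""
    if condoms_remaining < LOW_STOCK_THRESHOLD:
        send_sms(owner_phone,
                 "Machine %s is low: %d condoms left."
                 % (machine_id, condoms_remaining))

# Example with a stand-in for the SMS gateway:
check_stock("club-01", 7, "+233200000000", lambda to, msg: print(to, msg))
```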

Latour’s famous essay “The Sociology of a Door Closer” describes the process of replacing human action with technology:

It is at this point that you have this relatively new choice: either to discipline the people or to substitute for the unreliable humans a delegated nonhuman character whose only function is to open and close the door. This is called a door-closer or a "groom." The advantage is that you now have to discipline only one nonhuman and may safely leave the others (bell-boys included) to their erratic behavior. No matter who they are and where they come from (polite or rude, quick or slow, friends or foes), the nonhuman groom will always take care of the door in any weather and at any time of the day. A nonhuman (hinges) plus another nonhuman (groom) have solved the hole-wall dilemma.

We are doing something similar. Lots of people report harassment when purchasing condoms in crowded pharmacies. We are delegating the process of selling condoms to a machine to avoid the possibility of intimidating behavior. Reducing the prevalence of shaming and sanctioning on the part of onlookers and pharmacy owners could (and should) be done through education programs, but that takes much longer than the installation of vending machines. Hopefully, when paired with an effective education campaign (which the government is currently administering), this can help increase condom usage. The important part, and the maps described above remind us of this, is that information technology is deeply social: you have to get the social part right, or the whole system won't work. Hope you'll follow along on this journey.

Pretty sure this is 'shopped. Original artist, as far as I can tell, can be found here.

Recently, I started following a new podcast from Slate.com called Lexicon Valley. The half-hour weekly podcast by Bob Garfield and Mike Vuolo covers a variety of topics, but all of the episodes center on changes in language and the power of words. Their June 4th episode was devoted to Abraham Lincoln and his Gettysburg Address, and I highly recommend giving it a listen. While Vuolo and Garfield conclude that the Gettysburg Address closely follows the structure of an ancient Greek funeral oration, they also note that the brevity of the address was both rhetorically deft and politically pragmatic. The address was reprinted verbatim on the front page of most major newspapers and was easily reproducible in every format imaginable, from pamphlets to marble plaques. Today, we can share huge amounts of information with little to no effort, yet the art of keeping it brief seems to hold sway. What are some of the unique properties of brevity that make it so alluring, and what can we expect to achieve with it?

The entire Gettysburg address carved into the Lincoln Memorial. Image c/o Wikimedia Commons

There's a reason the Gettysburg Address is called an "address" and not a "speech": it was only 246 words long and took a little over two minutes to deliver. Brevity has its benefits; more people are willing to engage with something that requires a short time commitment or is easy to grasp. Social movements, from the Protestant Reformation to the American Revolution to the Arab Spring, have relied on pamphleteering to spread an idea and change minds. Uncritical futurists might say that the pamphlet has been replaced with social networking, but paper can still be found in the hands of protestors from Cairo to New York City. Pamphlets are usually written in plain language and take a few minutes to read. Memes, short videos, and hashtags also serve this purpose (among many other things). New technologies, whether the printing press, the telegraph, or Twitter, can help us say a lot with very little.

Brevity has always had its place alongside longer and more nuanced work. Luther and other Protestant reformers wrote lots of pamphlets, but they also translated bibles and wrote long treatises about one's relationship to God. Just as the printing press was good for both long and short-form media, the Internet lets us create and distribute a wide range of ideas in various formats. Full documentaries can be uploaded to YouTube and Vimeo, and full books can be published through ePub formats. Now, following the lead of services like Instapaper, everyone from Apple to Amazon has begun offering services and tools that make reading longer pieces more comfortable. (On a personal note, I've used most of these services and I find Readability to be my favorite. Its "send to Kindle" feature has actually made the reader useful to me again.)

Turn of the 20th Century poster of the Gettysburg Address. Image c/o Wikimedia Commons

Last week I wrote a (brief!) note on emoticons, in which I noticed that even when children aren't texting they choose to use emoticons to express a certain tone. Also last week, Nathan Jurgenson wrote about learning the value of missing out: "…the value that users provide on social media happens precisely when they do slow down, when they log off, when they concentrate and when they 'miss out' on some things in order to more fully engage with others." Taking in long-form media and slowing down to fully enjoy it is absolutely necessary for a well-balanced personal life as well as a well-informed public. But short-form media and communication should not be discounted as superficial or (as the New York Times would say) a total waste. Brevity should not be confused with the simple. Brief and concise language can do as much as, if not more than, verbose prose. Marx didn't expect everyone to read all the volumes of Das Kapital, but he thought his Communist Manifesto would get the point across. Edward Everett gave a two-hour speech before Lincoln's two-minute address. Everett wasn't necessarily a better orator than Lincoln; the two men were doing totally different, yet complementary, work.

A brilliant piece of work does very little if no one reads it. All other things being equal, more people are going to read a tweet than a novel. Some might argue that we are less inclined to read a novel than ever before, but the novel as presently conceived is only a few hundred years old. The anthropological record is full of much shorter media such as songs, poems, and fables. Short-form media asks very little of its audience and its medium. It is memorized and shared with all sorts of people. I would even go so far as to say that short media is a more social kind of media. You can share it quickly (whether it's a pamphlet or a tweet) and easily (shorter often requires fewer resources). If done right, short-form media can take up residence in the public imagination; it gets picked up and adopted by individuals and incorporated into all sorts of seemingly unrelated media.

This all seems relatively straightforward, but the current literature on the topic says otherwise. Lots of authors mistake sharing short-form media for shallow, simple, or meaningless engagement. For example, Turkle writes in Alone Together, "When interchanges are reformatted for the small screen and reduced to the emotional shorthand of emoticons, there are necessary simplifications." If these were just simplifications, I would suspect that we would use more "complex" language when it was made available to us. But as I've already shown, that is not the case. Emoticons are not a simplification of complex thoughts. They bring complexity to brevity. Do facial expressions and body language communicate more than emoticons? Probably. But the short form does not supplant the long form; it sits beside it and serves a different function. The short moves quickly and can be completely understood in a short period of time. Perfect for sending notes in class so the teacher doesn't notice. I don't see kids dumbing down their language; I see them coming up with exceedingly creative and dense forms of communication that can share a great deal in just a few characters.

Back in 2007 Nicholas Carr wrote,

The great paradox of “social networking” is that it uses narcissism as the glue for “community.” Being online means being alone, and being in an online community means being alone together. The community is purely symbolic, a pixellated simulation conjured up by software to feed the modern self’s bottomless hunger. Hunger for what? For verification of its existence? No, not even that. For verification that it has a role to play.

Four years later, in Alone Together Sherry Turkle wrote:

The narrative of Alone Together describes an arc: we expect more from technology and less from each other. This puts us at the still center of a perfect storm. Overwhelmed, we have been drawn to connections that seem low risk and always at hand: Facebook friends, avatars, IRC chat partners. If convenience and control continue to be our priorities, we shall be tempted by sociable robots, where, like gamblers at their slot machines, we are promised excitement programmed in, just enough to keep us in the game. (p. 295).

The Gettysburg Address reprinted in the New York Times on November 20, 1863. Image c/o Wikimedia Commons

This argument, that we are isolated individuals with only superficial ties to make us feel engaged, sounds an awful lot like an accusation of false consciousness. "You aren't really having evocative or emotionally fulfilling conversations online, you're just entertaining yourself with safe caricatures of your friends." Turkle is fond of saying that we are "connecting but we are not communicating." If people were communicating solely through online mediums, I might be inclined to partially agree. But online interaction is quite often an extension and an augmentation of offline interaction. The alone together argument hinges on the premise that these communication tools lure us in by taking advantage of deep-seated vulnerabilities and desires, then let us do what we always wanted: lie to everyone and present a clean and perfectly curated image of what we want the world to see. It's a very cynical way of thinking about the world and our relationships to each other: that at the first chance we get, we deceive and manipulate each other until we are wrapped in a cocoon of sweet-sounding lies. We take advantage of the emotionally blunt instruments of emoticons and Facebook profiles and hide amongst the small, the brief, and the simple. On the Internet, everyone thinks you're perfect.

I wrote in a LiveJournal account when I was 17 and I know that it did not present my best features. I was brutally honest every day, and I wish I could remember my password (or the password for the associated email address) so I could go back and delete the embarrassingly candid thoughts I shared with my equally awkward friends. That journal encouraged me to write and inspired me to be a writer, which eventually morphed into my current profession. That's why I think Carr and Turkle have it backwards. We don't simplify our feelings to fit the blunt communication tools we're given. We push the limits of these tools and try to express as much as we can with as few resources as possible. We can be brief and complex at the same time. We may not all be Abraham Lincoln, and it is good to sit back, disconnect, and deeply engage in long-form expression. But the mere fact that we are communicating with brief (and constantly evolving) language is not altogether bad. In fact, if I were better at it, you would have finished this post five minutes ago.

Emoticon, stitched with DMC rayon embroidery floss on 14-count light green Aida by Deviant Art user ~crafty-manx

As part of my research I spend a lot of time preparing and conducting science lessons with 8th graders. Today they got to learn how to make moss graffiti. After a quick botany lesson they were allowed to paint whatever they wanted onto a large canvas drop cloth. What surprised me the most was the students’ overwhelming desire to simply write their names. If they didn’t write their names, they usually wrote a short phrase. Out of about 80 students, there were only a handful of drawings. Almost every student decided to write text. Some of that text, strangely enough, took the form of emoticons. Why would anyone choose to draw an emoticon?

This is interesting for two reasons. First, it flies in the face of the popular belief that kids are allergic to writing of any kind. Texting, Facebook, and Twitter may not demand proper grammar, but they do get kids (even at very young ages) to write. Schools could benefit from capturing this desire to write, if they can effectively route it through language arts curricula. Schools should install browser plugins and other software that, instead of the usual spell check, run kids through grammar and spelling lessons before the text is released to Facebook.
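No such plugin is named here, and I am not aware of one that works this way, but a toy sketch of the idea might look like the following (the misspelling list and function names are my own invention):

```python
# A toy "check before you post" gate; the misspelling list and names are
# invented for illustration.

COMMON_MISSPELLINGS = {"definately": "definitely", "alot": "a lot"}

def lessons_before_posting(draft):
    """Return (found, suggestion) pairs to quiz the student on before posting."""
    lessons = []
    for word in draft.lower().split():
        token = word.strip(".,!?")
        if token in COMMON_MISSPELLINGS:
            lessons.append((token, COMMON_MISSPELLINGS[token]))
    return lessons

print(lessons_before_posting("I definately post alot."))
# [('definately', 'definitely'), ('alot', 'a lot')]
```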

While learning old grammar and spelling rules is important, educators should also be on the lookout for new forms of expression. Emoticons are the product of linear typing styles. There is, seemingly, no reason to hand-write an emoticon. And yet…

Drawing emoticons is also interesting because I simply do not know how to read them out loud. While I didn't have the chance to ask, I do not think the whole message is meant to be read, "Imperfection Is Beauty broken heart". Rather, the emoticon implies an inflection and tone meant to be read, not spoken. Is this a new literary device, or is there a pre-internet analog? I welcome your suggestions in the comments.

This post is adapted from an original post at the RPI 3Helix site.

You can follow David Banks on twitter: @da_banks


The original video I posted was taken down. Alexander calls cricket a “gay game” 5 minutes in.

In an interview with Craig Ferguson last week, Jason Alexander called the game of cricket "a gay game." It was clear (and you can see for yourself in the video above, starting at the 9-minute mark) that Alexander was equating "gay" with "effeminate" and juxtaposing words like "gay" and "queer" with notions of masculinity and being "manly." After the show aired, the tweets started pouring in. This tweet by @spaffrath was pretty typical:

At first Alexander defended his comments, saying the usual things embattled comics say:

It could have very easily ended at that- just another public figure saying heteronormative things that alienate a sizable minority of the world’s population. But a few days later, he tweeted a link to a very personal and honest apology:

I feel as though public apologies have become more prevalent in our networked society. When I Googled the titular question, two things became abundantly clear. First, lots of people seem to agree that public apologies are more frequent than ever. Second, the public apology has become reified. In other words, we know a public apology when we see it. It’s an identifiable thing that has definite attributes that can be discussed and critiqued.

What seems debatable, however, is whether or not all these apologies are necessary. David Mitchell, a columnist at The Guardian, is sick and tired of all the demands for public apologies made by politicians and other public figures. He writes:

Those who wish to silence others have noticed that expressions of offence and demands that people say sorry are the best way of doing it. Once you’ve demanded an apology, you can logically continue to demand it until you receive it.

Mitchell is not annoyed that there are more public apologies per se; rather, he is tired of the "taking umbrage" political trope that has created a "thriving apology extortion racket in public life." He uses Newt Gingrich as his main example: Gingrich takes non-issues, according to Mitchell, and feigns offense until his target caves to the ensuing media attention and gives a half-hearted "I'm sorry if I offended anyone" press conference. The popularity (and effectiveness) of this tactic is worth noting, but it isn't exactly what I'm looking for. I'm much more interested in popular demands for a single person to apologize for their actions.

The Christian Science Monitor has the most recent reporting on public apologies, concluding that we are apologizing more than ever, but the apologies are getting worse. The reason?

“We are in a pandemic of bad behavior,” says Dr. Aaron Lazare, chancellor and dean emeritus of the University of Massachusetts Medical School in Worcester and author of the 2004 book “On Apology.”

He has studied the frequency of apologies in published news reports from 1900 to the present day and says since the 1980s, “the number of apologies has tripled.” But, he adds, the effectiveness and sincerity of those apologies has plummeted.

“Most of these people simply want to have their cake and eat it too,” he says, noting that the key to a genuine apology is humility and restoration of dignity for the offended party.

Moreover, the internet (Bum Bum Bum!) is helping spur these apologies:

The 24/7 media culture is partly responsible for the explosion of apologies, says Ari Kohen, an associate professor of political science at the University of Nebraska in Lincoln.

A hyper-connected online culture means more and more opportunities to say or do something offensive, he notes. This also means that “more and more people are watching, listening, and most importantly ‘sharing’ the offensive thing that someone has said or done,” he says via e-mail, adding, “so we’re seeing the offensive statement or action more than perhaps we would have, which yields more calls for apology, which in turn yields more and more terrible apologies.”

Dr. Aaron Lazare, author of “On Apology”

I have not read "On Apology" and I could not find any published academic articles by Dr. Kohen that deal with public apologies or public affairs mediated through information technology. I have read some of Lazare's essays and guest posts where he discusses aspects of his book, and I generally agree with his assessment of what makes for an effective public apology and the psychology of feeling that you are owed an apology. (His explicit recognition of apologies as a reconciliation of debt goes much deeper than he might think, as seen in Graeber's most recent work "Debt: The First 5,000 Years.") But when he starts talking about the "global village" we all share, the analysis becomes vague: more interaction means more apologies will be necessary, and each apology will become increasingly important as our lives become increasingly intertwined. Is that necessarily true? I could see more war, more attentiveness to not offending people in the first place, and more apologizing. But the mix of these three possibilities (and countless others) is unpredictable. Do apologies go up in absolute numbers? As a percentage of interactions? It might be both, and a deeper conversation about debt (something I'm not prepared to do here) might hold the answers. But at this point, I am more convinced that the public apology has been reified, and it is that reification that allows us to easily demand it, and to recognize a good one when we see it. Just as diagnosis rates go up as diagnostic tools get better, public apology rates go up as we get better at recognizing when one is necessary and what form it should take.

I agree with Kohen when he says that the internet (and specifically social networking) makes it much easier to share an offensive statement made by a public official. The statement may be made in an "old media" venue, but it gains new life on YouTube and circulates on Facebook timelines and Twitter. A few examples include Tracy Morgan's homophobic rant, Rush Limbaugh's "slut" comments about Sandra Fluke, and just about everything Pat Buchanan has ever said on TV. Without the ability to upload, share, and publicly comment on pieces of "old media," it would be much more difficult to drum up popular outrage and gather support for a public apology.

Self-styled experts of public and professional apology characterize YouTube as an instigator and magnifier of misunderstandings and arguments. Its ability to keep powerful people accountable is acknowledged, but more as a warning to the powerful than an optimistic observation for the masses. A perfect example is John Kador’s “Effective Apology: Mending Fences, Building Bridges, and Restoring Trust”. In the chapter “The Age of Apology” he writes:

…digital technology is contributing to the increase of apologies. Technologies such as camera cell phones and the video sharing service YouTube have invaded formerly private spaces, resulting in tectonic shifts in communications, accountability, and privilege. Apologies that once could be transacted discreetly between parties in private (and later denied, if necessary) are increasingly broadcast for all to scrutinize. Technology dramatizes offenses for all to see, so starkly that even the most recalcitrant must express their remorse. (P.23)

Jason Alexander, on the Craig Ferguson Show

What Kador, Kohen, and Lazare all seem to be neglecting is the very idea that there is this one recognizable thing called "The Apology": that there is an exact event or moment at which an individual performs a speech act and others evaluate its effectiveness. Every one of these authors talks about the "anatomy of an apology" or "the dimensions of an effective apology," all reinforcing and reifying the very idea of the generic apology. The very idea that there are dimensions to an effective apology, or that an apology has an anatomy, says something very profound about our augmented society. Once public apologies become identifiable by a certain set of characteristics, they become comparable and collectable. We can curate them and put them side by side. Kador actually comes out and says that the apology has become a commodity, but does not notice that the very existence of his book makes that a possibility:

Much of our culture is still organized to reinforce transactional behavior. This part of the culture still regards people and relationships as a means to an end, as assets to be used to bring something about, rather than as ends in themselves.

It's no accident that apology has evolved to be another instrument to be bargained or transacted. And in rare cases, it is appropriate to use apology in the same way we use currency or any other medium of value. But if the only way we can have a relationship with apology is by trading it like a commodity, we will miss out on an opportunity to increase our intimacy, transparency, flexibility, and security. (P.239)

We tend to recognize insincere aspects of our society once they take a digital form. The superficial changes get mistaken for a new institution, when what we are really doing is confronting long-standing aspects of our society that we find undesirable. As we identify, collect, curate, and evaluate public apologies, we start to affect how public apologies are crafted and the etiquette surrounding them. I think talking about why Jason Alexander's public apology was so effective (acknowledging the problematic nature of his language, understanding why someone would be offended by it) helps us all practice talking to one another. Dissecting and evaluating the apologies of others can act as a teaching tool for more private interactions. While the public apology is, by definition, meant for many different people, the best ones are also intensely personal. They say something about our shared human foibles and our struggles to understand others who hold different values or traditions. We run the risk of overdoing it: pulling a Newt Gingrich and demanding apologies for our own benefit. This might also encourage more insincere, boilerplate apologies that affirm Kador's fears, but I think the benefits outweigh the risks. We now have a recognizable, singular event called the Public Apology, and we should start putting it to good use.

Windows 8's Metro Interface is a radical departure from previous Windows releases.

My first PC was a Frankenstein machine running Windows 3.1. I played Sim City and argued with people in AOL chat rooms. My first Mac was a Bondi Blue iMac that ran OS 9, more AOL, and an awful Star Trek: Voyager-themed first-person shooter. I was 13. In the intervening years, I've had several Macs and PCs, all of which have seen their fair share of upgrades and OS updates. Even my current computer, which is less than a year old, has seen a full OS upgrade. I am one of those people who like radical changes to graphic user interfaces (GUIs). These changes are a guilty pleasure of mine. Some people watch trashy television; I sign up for a Facebook developer account so I can get timeline before my friends. I know I'm fetishizing the new: it goes against my politics and my professional decorum. I have considered switching to Linux for no other reason than the limitless possibilities of tweaking the GUI. It is no surprise, then, that I have already downloaded the Windows 8 release candidate and am installing it on a virtual machine as I write this paragraph. While I practically revel in a new icon set, others are dragged into the future kicking and screaming. What is it about GUIs that arouses such strong feelings?

I think the easiest answer lies in the “desktop” metaphor. We arrange the world around us to suit our needs. If someone were to come into your home and rearrange your office furniture, you would feel confused but also violated. When someone takes control over something we hold as intimate, we feel infringed upon. Something that should be under our personal control has been altered without our explicit consent, and that makes us feel vulnerable. But Facebook and Windows are corporate properties, not our physical homes. Why would we develop similar feelings for a Facebook page and a room in our house? If you’re a regular reader of the blog, you know part of the answer: our online and offline lives are enmeshed in an augmented reality. Digital dualism, the a priori assumption that offline interaction is inherently better or more “real” than online social activity, would have us search for some kind of antisocial proclivities: we are developing emotional ties with inanimate objects that should be reserved for fellow humans; we are so wrapped up in superficial and meaningless relationships with gadgets and machines that we care more about iOS 6 than the human rights atrocities in Syria. These are both straw man arguments that have little bearing on how humans relate to each other and to nonhuman objects. When we get outraged over a new interface (or, in my case, desperately await its arrival) we may have aesthetic objections (like this person and this person) but these are always couched in a functionalist argument. Just about every complaint about a major GUI redesign is a variation of the following:

The new design makes it difficult to do some things and makes less useful tasks easier to accomplish. The old design allowed me to do X very easily; in the new design X is removed, hidden, or unrecognizable, and I no longer know how I am going to use this product.

There’s also a lot of speculation surrounding the designers’ intellectual capacity and a fair amount of cursing.

Returning to the question I posed above: why should you develop strong feelings about an arrangement of pixels that you don’t even own? The answer has two parts. First, ownership is a fickle concept. Social media accounts do not square with the property ownership regime in which we find ourselves. You can opt to download the data of your Google account or Facebook profile, but that’s not the same. Your profile is more than the sum of its data. The EULA might say everything belongs to the company, but you are your profile. Changing your desktop experience or your profile layout is tantamount to some stranger running up to you and giving you a haircut against your will. Similarly, cloud computing and desktop emulation have been around for quite a while, and yet, all things being equal, I would choose to work on my small laptop over a spacious public-access desktop any day of the week. You will also notice this at conferences. Given the option, most people want to use as much of their own stuff as possible for their presentations. I say “stuff” because, while it’s usually a preference for particular hardware and software, I have been to conferences where someone whipped out overhead transparencies and asked for a projector. This sort of behavior reminds us of a second definition of ownership: a synonym for responsibility. You want your presentation to go well, you’re responsible for your own performance, and so ownership (the first kind) of the presentation hardware might make you feel like you’re in control. The familiar is safe, and we like to feel safe when we put ourselves on display, whether at a conference or on Facebook.

This brings us to the second part of the answer: the desktop metaphor is becoming less and less accurate. We still use our computers to do “desk-like” things –writing, editing, and calculating– but we also talk to people, compose music, watch movies, and purchase goods and services. The computer has become an entire city, complete with movie theaters, music venues, markets, and wide-open public spaces. We give each other product reviews, share family photos, and tell jokes. When the interfaces that mediate these social activities change, it is a jarring experience. Imagine if your hometown were totally rearranged tomorrow. You knew the old design and had all of the shortcuts memorized; now the back alleys and keystrokes are different, and you have to memorize new ones all over again. This actually happens in the physical world, but usually only in the wake of a major disaster.

This is a very social way of looking at information technology. The designers, as far as I can tell, still see interface design as a personal and individual experience. Sam Moreau, Microsoft’s User Experience Director, gave an interview to Gizmodo back in February. I found this part particularly telling:

Giz: What has the negative feedback been like? What do people not like?

SM: To be honest, you know what I think it is? When you change something—this is my own personal observation—a lot of us know how the PC works, become the help desk for all of our friends and family. Inherent in that is a sense that I know. I’ve got this expertise now, I’ve got this power. We’ve changed something now, and leveled the playing field for all those personal help desks, so they’re no longer the guy. It’s human nature—I had invested in this, I knew this, and some degree of my self was aligned to the fact that I know how this stuff works. I do think that’s an aspect of what’s going on.

Giz: Do your friends come to you and tell you, this is what I want Windows to look like? What’s the craziest request?

SM: Last night at dinner. A friend of mine said, “Look it’s all great and everything. But I need you to make the fonts a little bit bigger. My eyes are getting older, so just make them a little bit bigger for me please.”

Giz: But the fonts are huge.

SM: You’re not the only one who uses it, it turns out. That type of stuff. I’ll get that from my uncle who’ll call me. “It’s pretty hard to read. Could you make the button a little bigger?” All the time.

JLG: Steven does this talk, mostly about Office, about how it’s like ordering pizza for a billion people. Some people are lactose intolerant. Some people don’t like mushrooms. Now make everyone happy.

SM: There’s a billion people, and pizza’s your only option. That’s what it’s like designing Windows.

I have been a help desk in formal and informal capacities for most of my life, and I know lots of people who serve this function as well. I do not know a single person who dreads the day their dad will stop asking them computer questions. Maybe that is “an aspect of what’s going on,” but there is something much bigger, much more fundamental, going on.

YouTube Preview Image

The pizza metaphor is an interesting one, but designing for everyone is something designers do all the time. From buildings to coffee makers, one of the biggest challenges of design is to make something that lots of different people can use. Granted, operating systems are incredibly complex and must accomplish a wide range of tasks with relative ease, but that is exactly why pizza is a terrible metaphor. A pizza is simply consumed, and consumption does not begin to describe what is happening when we use a computer. I agree with David Graeber when he says,

One thing I think we can certainly assert. Insofar as social life is and always has been mainly about the mutual creation of human beings, the ideology of consumption has been endlessly effective in helping us forget this. Most of all it does so by suggesting that (a) human desire is essentially a matter not of relations between people but of relations between individuals and phantasms; (b) our primary relation with other individuals is an endless struggle to establish our sovereignty, or autonomy, by incorporating and destroying aspects of the world around them; (c) for the reason in c, any genuine relation with other people is problematic (the problem of “the Other”); and (d) society can thus be seen as a gigantic engine of production and destruction in which the only significant human activity is either manufacturing things or engaging in acts of ceremonial destruction so as to make way for more, a vision that in fact sidelines most things that real people actually do and insofar as it is translated into actual economic behavior is obviously unsustainable. (“Consumption,” Current Anthropology, Volume 52, Number 4, August 2011)

Mac OS 9.2, the OS of my childhood. Image c/o Amit Singh

In other words, the fact that we purchase (consume) computers is only the beginning of our interaction with them, and it isn’t even the most interesting part. We socialize with and through them. The fact that there are many people doing similar things on computers gives those computers a new meaning and function. (The internet is useless if there’s only one person on it.) What would an explicitly socially designed computer look and act like? I offer this only as a provocation; I have no speculative answers at the moment. I only suspect that drastic GUI changes would be less likely, and here’s why:

Drastically changing an interface makes us uncomfortable because we are experiencing a simultaneous phenomenological change in both our environment and our digital selves. This hints at the possibility that one of the few things truly different between online and offline social activity is that the environment and the self are entangled to the point of being almost indistinguishable. Our profiles and desktop environments are extensions of ourselves, but the sum of these individual parts constitutes a digital public space. This is another reason why the “end of privacy” debates are so ridiculous. Privacy is not disappearing; rather, the relationship between the public and the private is being redefined along entirely new boundaries. It is also why studying social activity in digital worlds is so important: fundamental assumptions about the embodied nature of social activity must be called into question. To write off social media as unimportant, or as anything less than society-formation (for better or worse), betrays a significant underestimation of what humans are capable of achieving. We are not just playing with digital toys; we are investigating new potentialities of what it means to be human.

I suppose it is no coincidence that I also enjoy looking at construction sites, own an eclectic wardrobe, have a couple of tattoos, reorganize my office furniture at least once a year, and get dramatically different haircuts. I appreciate the process of change to both the environment and the self. Interfaces are another aspect of this preference. With each new forced iteration of Facebook, I find myself enjoying the process of planned obsolescence. Some prefer keeping things the same. There are politically normative critiques of both positions, neither of which I am prepared to discuss at any length in this post. Instead I will simply say that I am enjoying my Windows 8 experience and contemplating getting a haircut.

David eagerly awaits changes to Twitter, Tumblr, and Facebook. He regularly changes the entire style layout of davidabanks.org.

Or, Tosh.0 is Racist, Classist, Homophobic, Sexist, and Just Plain Gross

I’m not really sure where to begin here. Tosh.0, the Comedy Central hit show hosted by Daniel Tosh, is so rife with sophomoric dick jokes (I prefer the classy kind) and heteronormative swill that I contemplated not even writing this post. Unlike Ellen or even It’s Always Sunny in Philadelphia, Tosh.0 is meant to be (as far as I can tell) the refined distillation of a 14-year-old white boy’s id. The show is half sketch comedy, half sitting with your younger brother while he guzzles an energy drink and laughs at YouTube videos of bums fighting. Jezebel has already written about his “lightly touching women’s stomachs while they’re lying down” campaign and his fat-shaming caption contest. Both posts deserve your attention, the former for its righteous anger, the latter for its history of the image used in the contest. I went through several pages of videos, looking for good examples of the “-ists” I listed above, but each one was so jam-packed with privilege and hate that I couldn’t pick just one. But if you have never seen the show and need some mental flagellation, here’s a sexist one about MMA fighting; something called “fat girl gymnastics” (fat shaming with bonus racism); a video that’s actually titled “Racist Moments Montage“; and an even more racist one called “stereotypes are not always true.” I understand that Daniel Tosh is a comedian, and to argue with one usually means you have already lost the fight, but I think there is a fruitful discussion to be had about how public figures engage with their audiences and the sorts of behavior they encourage.

Daniel Tosh forces people like me to reevaluate some of the claims I make about interactive media. For example, back in March, when Ze Frank was about to start his new web series, I wrote this:

Ze Frank’s shows are recursive. The Show’s very existence creates a viewership community, which then creates content for more shows, which then makes for a more dedicated community. This sort of recursive, prosumer entertainment relies on the same self-organizing social forces that make Facebook or Performative Internet memes possible. The cohesion and dedication of these communities are constituted and constantly maintained through the development of various organizing logics. These include in-jokes, arcane vocabulary, and symbols (usually on t-shirts) just to name a few.

Ze Frank asks his viewers to send in little audio and video clips. He strings them together or hands them over to an artist, and really cool stuff happens. There are songs, cartoons, and other missions that fill the show with moments that are intensely personal but broadly entertaining. By infusing the show with user-generated content, each new episode gives viewers something to organize around, while at the same time making the show a unique product of the viewership’s collective identity. There are inside jokes, recurring characters, and “friends of the show” that give viewers the chance to build something together, rather than passively absorb ready-made content. Someone who watches a random episode of “The Show” or “A Show” will probably be entertained, but they might not understand all of the references or jokes. Ideas and physical things that exemplify group membership are called “boundary objects” by most social scientists. Ze Frank’s work is filled with the kinds of boundary objects that form a very creative, light-hearted community. But boundary objects can draw lines around hateful and destructive things as well. Asking your (presumably male) viewers to ostensibly molest women and record it on video organizes viewers to perform a certain act, but the jokes, vocabulary, and symbols of male domination are not arcane or obscure. In fact, they are so pervasive that Tosh’s jokes come off as lazy and driven by tired racial stereotypes. The boundaries are drawn in such a way that anyone who is not a white male heterosexual has to be shamed or ridiculed at one point or another.

Nice Peter is another great example of interactive comedy: the Epic Rap Battles of History and Picture Songs are created with user input but don’t rely on racial stereotypes or sexism to be funny. Epic Rap Battles of History pits two voter-submitted fictional or historical figures against one another. The videos are well produced, smart, and hilarious. Picture Songs weave user-submitted pictures into a song composed by Peter.

YouTube Preview Image YouTube Preview Image

By comparing Tosh.0 to these other two comedy shows, we can add a useful corollary to the post I quoted earlier: the communities that form around participatory media are still deeply rooted in societies that contain racism, sexism, classism, homophobia, and many other forms of bigotry. It is the responsibility of the performer, as de facto leader of the community, to structure participation in ways that discourage or explicitly denounce hateful behavior. This is harder than it sounds. An easy step is simply to avoid asking your audience to harm others: asking people to dress up their vacuum cleaners, instead of molesting the women around them, is a good start. Keeping the show inclusive gets more difficult as the activity becomes more open-ended. Lack of structure allows existing social structures (e.g., fat shaming) to take over. Caption contests not only give individuals the opportunity to say something sexist, they also make it very easy to support the sexist claim by quickly voting it up. These might even become the “most popular” captions, because hegemonic discourses have a way of winning voting contests, whether those contests are the craptions on Cracked or elections to the federal government.

Tosh.0 is a bunch of hate-filled schlock that is barely worthy of note on this blog. I am considering it only because it forces us to revisit our positive and normative claims about the benefits of participation. It makes me, personally, think critically about what is happening on Ze Frank’s show that is not happening on Tosh.0. I can only conclude that user-driven content is about much more than structuring action and curating the resulting submissions. Good internet comedy requires a sociological imagination of sorts. Entertainers need to know what kind of activity will bring out the best in individuals –morally as well as creatively– while acknowledging the structural makeup of society. You have to think about individuals and what they like and find meaningful, but you also have to consider the larger picture. What happens when those meaningful ideas and images are invoked? Who laughs, and who is marginalized? Perhaps this comes down to the simple fact that Daniel Tosh is on corporate basic cable while Ze and Peter work almost exclusively through the Internet to reach their audiences. If that’s the case, Tosh might want to consider pulling a Dave Chappelle and saving his soul.

YouTube Preview Image

Just a quick post this Saturday about Twitter partnering with NASCAR to cover the Pocono 400. Via Mashable:

The Pocono 400 partnership will revolve around the #NASCAR hashtag, according to a joint press conference Twitter and NASCAR held Friday.

“During the race, we’ll curate accounts from the NASCAR universe and surface the best Tweets and photos from the drivers, their families, commentators, celebrities and other fans when you search #NASCAR on Twitter.com,” reads a post to the official company blog.

Full disclosure: I know next to nothing about NASCAR. The most idiosyncratic thing I know about NASCAR is that the headlights are painted on, not real. Other than that, someone could tell me that you get extra points (fewer points?) if the car crash looks really cool, and I would believe them. But let’s black-box the sport for a moment and take a look at the role Twitter plays in public events.

Social networking sites have been co-hosting election coverage since 2006. This makes sense, since Americans have been going to the Internet for their news for a while now. But why has it taken this long for a social media company to recognize sports? Sure, you can “Like” the Super Bowl on Facebook or follow North Carolina Chapel Hill’s women’s soccer team on Twitter, but there’s no Pinterest-sponsored Indy car team, and the Brazilian football league isn’t covered in Orkut banners. The domain host Go Daddy has objectified Danica Patrick for some time now (double sports points for doing it during the Super Bowl), but they aren’t a social media company.

Here’s my theory on social media’s mum stance on sports: most sports teams are geographically based, and you do not want to pledge allegiance to one team and lose the loyalty of other areas. These are global public spaces, and aligning yourself with something even the size of a country means preferring one geographically bound community over another. Why doesn’t this apply to other companies? Well, it does, but selling trucks, bandwidth, and Doritos is a different beast than social media. Social media is public space, and public space is meant to facilitate and enable communication, not take sides in it. (Unless, of course, the ability of your customers to engage in communication is up for debate.) Twitter’s move into NASCAR keeps with my theory: NASCAR teams are not geographically based, and Twitter is covering the whole race, not a single team. Just as with a presidential debate, Twitter will be taking on the role of curator, not partisan.

Secondly, consider sports franchises as the first social media: companies that look to draw in large crowds and sell their eyeballs to advertisers. They do not need these young bucks running around claiming that information wants to be free: the NBA, MLB, and NFL all have proprietary paywall systems and are doing just fine. I’ll leave the last word to a good friend of mine who works at ESPN. He had this much to say on the matter:

 

You can follow David on Twitter, where he will almost never talk about sports: @da_banks