Cyborgology readers, I need your help. I’ve put the post I was writing for today on hold because I’m short a key piece of terminology, and I’m hoping one of you can either a) point me to a good preexisting term, or b) help me to assemble a term that’s a bit more graceful than the ones I can come up with on my own.
The phenomenon I’m trying to describe is one that I’ve encountered a number of times over the past week, and is a theme I identify fairly often in conversations about newer technologies. I describe it below, first generally and then with a couple of recent examples.
To set up my description, remember that ‘the physical’ and ‘the digital’ aren’t separate worlds, and that human behavior ‘online’ has a whole lot in common with human behavior ‘offline.’ Note that I’m specifically avoiding saying that behavior online “mirrors” behavior offline here, because that would imply that online and offline expressions of a given behavior are actually two separate behaviors that closely resemble each other; after all, your reflection closely resembles you, but you and your reflection are not the same thing. I’m starting from the assumption that the various online and offline expressions of a behavior (sharing, bullying, etc) are, at the most fundamental level, the same behavior.
Now that we’ve established that, here’s what I’ve observed: a new technology (or a change to an existing technology) enters the scene, and makes more explicitly visible to us some facet or aspect of human social behavior that a) is usually more latent, subtle, or obscured, and that b) makes us feel anxious, uncomfortable, or even repulsed. The behavioral facet we see on display through the new technology isn’t new, it’s just newly visible (or more visible than it was before); it is also not unique to behavior connected to the new technology, even if the affordances of that technology seem to encourage the specific behavior.
When we try to identify and explain our unpleasant feelings, however, sometimes we don’t correctly identify the source of our discomfort as having been forced to confront a distasteful aspect of how our society works that we would rather have kept ignoring. Instead, we blame the new technology—and we blame it not for being a too-effective lens, but rather for “causing” or even “being” the unpleasant aspect of our society itself.
To help illustrate what I’m talking about, here are a couple of recent examples:
1) Klout. We love to hate Klout—or at least, I love to hate Klout; as I’m so fond of repeating, Klout “encourage[s] nothing good”—but let’s face it: “social ranking” doesn’t happen only through Klout. Social ranking existed well before Klout (else, why would anyone have bothered to build Klout? The concept would have made no sense), and it had the power to affect who got jobs and preferential treatment before Klout, too. At the most basic level, Klout isn’t creating any new kinds of human behavior; Klout is just making more explicit and blatantly visible something that’s usually easier to hide or ignore. Does that something (social ranking) make us uncomfortable? Yes it does. And is Klout trying to smack a glossy veneer of Science™ onto social ranking? Yes it is (and that’s what really gets me). But in the end, what we’re doing when we hate Klout is resenting it for forcing us to acknowledge something about our society we’d rather ignore. Pretending that Klout is the cause rather than a symptom is just an attempt to re-obscure what’s too disquieting to have in direct view.
2) Facebook’s recent announcement that it will give users the option of paying to promote their posts on the site, so that more of their ‘friends’ see them. There’s a lot tied up in here to dislike (where’s that “dislike” button when you need it?): the idea that money talks, the idea that we have to buy our friends’ attention (we don’t like to think about friendship and money at the same time), the idea that our care and attention—two important aspects of friendship itself—can be purchased, the idea that people should act like corporations (first corporations get to be people, now this?), and the idea that your personal identity has become a brand identity, to name just a few. But again, promoted status updates are a symptom, not the cause; Facebook wouldn’t be rolling out this option if it didn’t think people would actually use it. We can defriend people who promote status updates all we want, but again, this is just an effort to re-obscure; the problem (problems, really) isn’t the promoted updates themselves.
There are other recent examples related to self-tracking and decision-making apps that I’ll be talking about next week, but for now, I’m looking for some new words:
What do we call what it is that we’re really reacting to when we lash out against technologies like Klout and promoted status updates—the fact that something threatening, distasteful, and inescapable is now too visible, too explicit, too overt, too blatant for comfort; that it is displayed in too-stark relief, distilled down to a too-bitter concentrate that’s nearly impossible to swallow? “Explicitization,” “salientization,” and “deobscuration” start to get at the point, but I have to admit: they’re pretty awful as words.
Similarly, what do we call our reactions, our misplaced resentment? What do we call the attempt to re-obscure that which we don’t want to confront by trying to turn the occasion for visibility into the phenomenon itself, by treating the setting of a behavior’s display as its root cause?
Please leave your ideas and suggestions in the comments section; I’m looking forward to your responses!
Blame the phone image from http://www.redorbit.com/news/technology/1112522497/when-carriers-fight-we%E2%80%99re-the-ones-to-blame/
xkcd Klout comic from http://www.xkcd.com/1057/
Strange creature can’t look photo from http://1funny.com/cant-look/