When my phone rings, it’s almost always my mom, or her mom, or my partner’s mom. It’s always somebody’s mom. For everyone else, the notification is a buzz, a ding, a quick vibration. For all of the not-moms in my life, we communicate via text message, Facebook, Twitter, email, chat, or Skype. We connect regularly, but rarely through voice calls. When I do pick up the phone, I last about 30 minutes max. Then my ear feels hot, my shoulders tense, and I refuse to ask “were you talking to me, or to Dad?” one more time.
This is indicative of a wider trend. The telephone, as a medium of voice-talk, is in massive decline, at least among the texting public. A widely cited 2012 CDC study shows that over half of all American homes rely predominantly on mobile devices, with almost 40% living in landline-free homes. And, as we all know, the cellphone is far better at just about everything other than voice-to-voice communication. With smartphones, the talk function seems almost like an afterthought, available in case of emergency. (more…)
The plot of Scream is impossible without cordless phones.
In Children of Men, Clive Owen’s character Theo is trying to secure “transfer papers” from his cousin Nigel, who seems to be one of the few rich people left in the no-one-can-make-babies-anymore dystopia. The two older men are sitting at a dining table with a younger boy, presumably Nigel’s son, who seems to be afflicted in some way. He’s pale and stares vacantly at a point just past his left hand, which is eerily still between the twitches of fingers adorned with delicate wires. He doesn’t respond to others in the room and isn’t eating the food in front of him. After Nigel yells at him to take his pill, we notice that the boy isn’t really sick or particularly disturbed; he’s playing a game attached to his hand. (more…)
On New Year’s Eve the biggest fireworks display ever was launched from the tallest tower in the world. Dubai’s fireworks show was, in terms less vulgar than the display itself, an undulating orgasm of global capital. The 500,000 fireworks mounted to the Burj Khalifa and the surrounding skyscrapers were reportedly viewed live by over a million people on the ground and livestreamed to millions more around the world. I can’t find a price tag for the display (too gauche?), but given that your typical municipal fireworks display for proles can easily top six figures, let’s just assume that you could measure the cost of this display in national GDPs. It was profane in the way Donald Trump’s continued existence is profane. The fireworks display was so huge, such an utterly perfect metaphor for capitalism itself, that no single person standing on the ground could witness the entire thing. It was a spectacle meant for camera lenses. (more…)
Earlier this week, the New York Times ran yet another hilariously digital dualist piece on a new surveillance system that lets retailers follow customers’ every move. The systems, mainly cameras tied into motion-capture software, can detect how long you stared at a pair of jeans, or even the grossed-out face you made at this year’s crop of creepy, hyper-sexualized Halloween costumes. The New York Times describes this as an attempt by brick-and-mortar stores to compete with data-wealthy “e-commerce sites.” (Who says “e-commerce” anymore? Seriously, change your style guide.) Putting aside the fact that most major retailers are also major online retailers, which makes the article’s implicit distinction almost meaningless, the piece completely misses the most important (and disturbing) part of the story: our built environment will be tuned to never-before-seen degrees of precision. We have absolutely no idea what such meticulously built spaces will do to our psyches. (more…)
(This is not the dive bar in question)
I’ve been thinking a lot over recent weeks about digital media, smartphones, and absence-vs.-presence, all of which was compounded by an interesting experience I had last weekend. On one particular night, 1:00 AM found me in a Lower East Side dive bar playing pinball with a friend from Brooklyn and a friend from D.C.; I was also chatting with a third friend (who was in D.C.) via text message and Snapchat between my pinball turns, and relaying parts of that conversation to our two mutual friends there with me in the bar. More people joined us shortly thereafter, madcap shenanigans ensued and, sometime around stupid o’clock in the morning, I started the drive back to where I was staying.
As I was getting up the next day, I recalled various scenes from the night before. One such scene was from earlier in the evening at the dive bar: getting to hang out with three people I don’t see often was a nice surprise, and how neat was it that we’d all gotten to hang out together? A few seconds later, however, it hit me that my mental picture of that moment didn’t match my memory of it. What I remembered was being in the dive bar spending time with three friends, but I could only picture two friends lit by the flashing lights of so many pinball machines. I realized that Friend #3 had been so present to me through our digital conversation that my memory had spliced him into the dive bar scene as if he’d been physically co-present, even though he’d been more than 200 miles away.
I wasn’t entirely sure what to make of this. On the one hand, yay: My subconscious isn’t digital dualist? (more…)
What will happen if more apps start to play more important roles in more of our lives?
Last week I wrote about a pattern I’ve been seeing, one for which I wanted to create a new term. I’m still working on the terminology issue, but the pattern is basically this:
1) A new technology highlights something about our society (or ourselves) that makes us uncomfortable.
2) We don’t like seeing this Uncomfortable Thing, and would prefer not to confront it.
3) We blame the new technology for causing the Uncomfortable Thing rather than simply making it more visible, because doing so allows us to pretend that the Uncomfortable Thing is unique to practices surrounding the new technology and is not in fact out in the rest of the world (where it absolutely is, just in a less visible way).
The examples I sketched out last week were Klout and Facebook’s new “sponsored” status updates (which Jenny Davis has since explored in greater depth); this week, I’m going to take a look at “helpful” devices and smartphone apps. (more…)
This post combines part 1 and part 2 of “Technocultures”. These posts are observations made during recent field work in the Ashanti region of Ghana, mostly in the city of Kumasi.
Part 1: Technology as Achievement and Corruption
An Ashanti enstooling ceremony, recorded (and presumably shared) through cell phone cameras (marked).
The “digital divide” is a surprisingly durable concept. It has evolved through the years to describe a myriad of economic, social, and technical disparities at various scales across different socioeconomic demographics. Originally it described how people of lower socioeconomic status were unable to access digital networks as readily or easily as more privileged groups. This may have been true a decade ago, but that gap has gotten much smaller. Now authors are cooking up a “new digital divide” based on usage patterns. Forming and maintaining social networks and informal ties, an essential practice for those of limited means, is described as nothing more than shallow entertainment and a waste of time. The third kind of digital divide operates at a global scale; industrialized or “developed” nations have all the cool gadgets, and the global south is devoid of all digital infrastructures (both social and technological). The artifacts of digital technology are not only absent (so the myth goes), but the expertise necessary for fully utilizing these technologies is also nonexistent.

Attempts at solving all three kinds of digital divides (especially the third one) usually take a deficit model approach. The deficit model assumes that there are “haves” and “have nots” of technology and expertise; the solution lies in directing more resources to the have nots, thereby remediating the digital disparity. While this is partially grounded in fact, and most attempts are very well-intended, the deficit model is largely wrong. Mobile phones (which are becoming more and more like mobile computers) have put the internet in the hands of millions of people who do not have access to a “full sized” computer. More importantly, computer science, new media literacy, and even the new aesthetic can be found throughout the world in contexts and arrangements that transcend or predate their western counterparts. Ghana is an excellent case study for challenging the common assumptions of technology’s relationship to culture (part 1) and problematizing the historical origins of computer science and the digital aesthetic (part 2). (more…)
Reason #15,926 I love the Internet: it allows us to bypass our insane leaders israelovesiran.com
— allisonkilkenny (@allisonkilkenny) April 22, 2012
Sherry Turkle, author of Alone Together and a New York Times opinion piece on our unhealthy relationship to technology.
Sherry Turkle published an op-ed in the Opinion Pages of the New York Times’ Sunday Review that decries our collective move from “conversation” to “connection.” It’s the same argument she made in her latest book, Alone Together, and it has roots in her previous books Life on the Screen and The Second Self. Her argument is straightforward and can be summarized in a few bullet points:
- Our world has more “technology” in it than ever before and it is taking up more and more hours of our day.
- We use this technology to structure/control/select the kinds of conversations we have with certain people.
- These communication technologies compete with “the world around us” in a zero-sum game for our attention.
- We are substituting “real conversations” with shallower, “dumbed-down” connections that give us a false sense of security. Similarly, we are capable of presenting ourselves in a very particular way that hides our faults and exaggerates our better qualities.
Turkle is probably the longest-standing, most outspoken proponent of what we at Cyborgology call digital dualism. The separation of physical and virtual selves and the privileging of one over the other is not only theoretically contradictory, but also empirically unsubstantiated. (more…)
The tech world and consumers at large have been buzzing amid recent reports and leaks indicating that Google will, in the next year, come out with smartphone-esque glasses. Apparently, these devices, often dubbed “Terminator” glasses after the cyborg technology portrayed in the classic 1980s film of the same name, will overlay the physical world with digital data, augmenting our practices of looking. (more…)
Everybody knows the story: Computers—which, a half century ago, were expensive, room-hogging behemoths—have developed into a broad range of portable devices that we now rely on constantly throughout the day. Futurist Ray Kurzweil famously observed:
progress in information technology is exponential, not linear. My cell phone is a billion times more powerful per dollar than the computer we all shared when I was an undergrad at MIT. And we will do it again in 25 years. What used to take up a building now fits in my pocket, and what now fits in my pocket will fit inside a blood cell in 25 years.
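To make the scale of that claim concrete, here’s a quick back-of-the-envelope sketch. The dates are my assumptions, not figures from the quote itself: Kurzweil was an MIT undergraduate around 1970, so call it roughly 40 years to the time of the statement. A billion-fold improvement works out to about 30 doublings, and the arithmetic below shows the doubling times that implies.

```python
import math

# A rough sketch of what "a billion times more powerful per dollar" implies
# about doubling time. The ~40-year span (roughly 1970 to 2010) is an
# assumption for illustration, not a figure from the quote itself.

def doubling_time_months(improvement_factor: float, years: float) -> float:
    """Months per doubling implied by a given improvement over a span of years."""
    doublings = math.log2(improvement_factor)  # 10^9 is about 2^29.9
    return years * 12 / doublings

# Past: a billion-fold over ~40 years -> one doubling every ~16 months
print(f"{doubling_time_months(1e9, 40):.1f} months per doubling (past ~40 years)")

# Kurzweil's prediction, "we will do it again in 25 years"
# -> one doubling every ~10 months
print(f"{doubling_time_months(1e9, 25):.1f} months per doubling (next 25 years)")
```

Under those assumed dates, the past span implies a doubling roughly every 16 months, in the same neighborhood as the classic Moore’s law cadence, while his prediction compresses it to about 10 months.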
Beyond advances in miniaturization and processing, computers have become more versatile and, most importantly, more accessible. In the early days of computing, mainframes were owned and controlled by various public and private institutions (e.g., the US Census Bureau drove the development of punch card readers from the 1890s onward). When universities began to develop and house mainframes, users had to submit proposals to justify their access to the machine. They were given a short period in which to complete their task; then the machine was turned over to the next person. In short, computers were scarce, so access was limited. (more…)