IBM's SAGE, a large semi-automated air defense system from the Cold War era. C/o Wikimedia Commons

I just left my department's colloquium lecture series, where Dr. Virginia Eubanks from SUNY-Albany was giving an excellent talk on the computer systems that administer and control (to varying degrees) earned benefits programs like Social Security, Medicaid, and Medicare. The talk was fascinating, and a question from Dr. Abby Kinchy during the Q&A really stuck with me: How do we study different (and often long-outdated) versions of software? Particularly, how do we chart the design of software that runs on huge closed networks owned by large companies or governments? Are these decisions lost to history, or are there methods for ferreting out Ross Perot's old wares?

In open source projects and free software there are plenty of ways of charting version histories and forks. The code is out in the open and is typically posted to GitHub or a server running Subversion. There is a small but growing collection of ethnographies of open source and free software communities that outline the design decisions and politics of horizontally organized development projects. What is missing (and I don't know how to go about filling this gap) are thick descriptions of IBM enterprise suites and server-side management software.

Corporate software is a "hard case" because there are dozens of institutional, legal, and technological barriers to documenting software revisions. Hard cases are topics that typically "resist" ethnographic research, usually because they are so normal, objective, or elite that they are considered "beyond" or outside of cultural critique and ethnographic investigation. Other examples of hard cases include mathematics, corporate boardrooms, and the federal government. There is some great literature on all of these topics, but it is minuscule in comparison to "softer" subjects like kinship or social movements.

Another way to describe this research is "studying up," that is, studying the elite and powerful. No one except the decision makers and their superiors has physical, virtual, and/or legal access to documents, code, or any other definitive text. Even if a researcher were to gain access to the documents, there is no guarantee that they would make sense to an outside observer. Proprietary systems are closely tied to the expertise that makes and administers them. In other words, you cannot understand the technical system without convincing someone who programmed the thing to talk to you. Access to these systems and people is made difficult by an array of barriers (guard gates and busy schedules, to name two common ones) that shield elites from unwanted attention. Studying up is extremely difficult but well worth the effort. As Laura Nader writes: "…our findings have often served to help manipulate rather than aid those we study." She goes on to write: "We cannot, as responsible scientists, educate 'managers' without educating those 'being managed.'" In order to bring about an egalitarian society, social scientists must help reveal the inner workings of elite institutions.

I wouldn’t go so far as to say these systems control everyone’s lives, but they do control some, and influence many others’ day-to-day lives. This kind of research is important because knowing what affordances and priorities are built into these systems tells us a lot about our technologically augmented society: Who should control what? What should be more efficiently administered? What does efficiency look like? What should be delegated to software and what is better accomplished by humans? Even if you’re not on welfare or food stamps you’re still going to the DMV, using a customer loyalty card, walking through TSA checkpoints, and relying on the myriad of networks run by insurance companies, credit agencies, and any employer with more than a few dozen employees.

What tools do social scientists need to study enterprise software? Dr. Ron Eglash, in that same colloquium Q&A, suggested that interviewing retired engineers was extremely useful in his research on the widely used master/slave engineering metaphor [PDF]. Indeed, countless journal articles could be written from the contents of dusty attics and office basements. That might be a start, but it doesn't let us see current iterations of software, nor does it give us the kind of fine-grained detail that we get from free software projects.

Consider this a Call for Methods. What can we, as social scientists interested in science and technology, do to illuminate this dark and unknown world of corporate software? How do we get at the hard cases that control or influence everyday life? What would it take to get our hands on the design decisions and product development for child protective services? How many other government systems use open source components like the Veterans Administration's VistA system? Do widely used open source components provide a starting point for analysis, or are the more interesting cases the ones with no open source components at all? I can't wait to hear what you all come up with.

"The primacy of contemplation over activity rests on the conviction that no work of human hands can equal in beauty and truth the physical kosmos, which swings in itself in changeless eternity without any interference or assistance from outside, from man or god." –Hannah Arendt in The Human Condition

Praxis Exploding, Image c/o Paramount Pictures

I've been thinking a lot about methods lately. I want to spend a few paragraphs considering the current state of affairs for social scientists interested in science and technology as their objects of analysis. What kind of work is impossible in our current universities? What kinds of new institutions are necessary for breaking new ground in method as well as theory? Think of this post as an exercise in McLuhan-style probing of institutions of higher learning. I'm going to play with a lot of "what-ifs" and "for instances." None of this is particularly actionable, nor am I even interested in proposing anything that would be recognized as "realistic" or even "pragmatic." Mainly, I'm interested in stepping back, considering the state of our technosociety, and asking what kinds of questions need asking and what kinds of science are being systematically left undone.

In some ways, I'm asking the same kind of question your high school guidance counselor asked: "If money weren't an issue, what would you do with your life?" Aristotle asked this question (towards much different ends), and Hannah Arendt reconsidered the question of the vita activa in her book The Human Condition. It's also similar to the life Karl Marx hoped we would all live in a communist society:

“In communist society, where nobody has one exclusive sphere of activity but each can become accomplished in any branch he [sic] wishes, society regulates the general production and thus makes it possible for me to do one thing today and another tomorrow, to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticise after dinner, just as I have a mind, without ever becoming hunter, fisherman [sic], herdsman or critic.”[1]

For academics, I always imagine the answers to these questions involve long stays in the field punctuated with lengthy sabbaticals where you can hole up in a comfortable personal library and read and write in big comfy chairs reminiscent of a Barnes & Noble circa 2003. No publish-or-perish pressures and no unreasonable teaching requirements.[2] Then again, if we live in a world of such abundance, perhaps many of us would be out of a job. What kind of utopia would let an academic take a 3-year all-expenses-paid sabbatical to study world hunger? What kind of academic would take such a Faustian bargain? Perhaps it is the sabbaticals and lengthy field trips that produce a world without hunger, but I suspect that's not the best way to solve that problem. Many would say there's no problem to solve at all, only concentrations of power that result in unfair resource allocation. So let's say that a philosopher king comes to power and declares that all institutionally credentialed social scientists have the full resources of the world (after all, philosophy does not preclude nepotism) to solve the grand challenges facing global society.[3] What would we do?

I love the idea of praxis, and I think it’s a good place to start. Unlike the two other “ways of life,” theoria (contemplation) and poesis (making), praxis means you go out into the world and start doing things. It’s probably the most social way of being. You can make and contemplate in total isolation but doing will eventually require that you run into someone whether you like it or not. You might not have a full lay of the land, and there are probably some unavoidable unknown unknowns, but rarely does anyone ever act with exactly the right amount of information anyway. By going out and tinkering, poking, prodding, and collaborating on shared projects, social scientists can learn about society through the act of making something useful. Praxis is learning by doing and I think if it’s done at the proper scale, it could produce great results.

Ultimately we would serve ourselves well to heed Bruno Latour's dictum "We Have Never Been Modern." This slogan of a book title contends that the separation between nature and society is an arbitrary construction that blinds us from seeing what is truly there: a mix of society and natural environments. The two are inextricably intertwined and, as the social ecologists would remind us, solving problems that we'd typically define as "environmental" usually requires solving social problems as well. When there is widespread equality and justice, you can achieve a much deeper level of sustainability. A truly sustainable sustainability.

Obviously we don't want physicists (to say nothing of sociologists) screwing around with nuclear reactors just to "see what happens," but there probably isn't much harm in tinkering with Arduino microcontrollers or trying to make a vegan alternative to cheese. This may sound like some kind of concession to the aforementioned unknown unknowns: some things are just too risky and complex, so let's go play with toys. That is almost the exact opposite of what I am proposing. What I am proposing is the great progress myth's equivalent of "going back" to an old save point in a video game: a time when the scientific community made a collective decision (to put it simply) about how it was going to go about conducting science. I'm suggesting we try out the other path.

I will do science to it!
Image from Dresden Codak

In his Mangle of Practice, Andrew Pickering offers an analysis of scientific action in what he calls the "performative idiom." This means that instead of thinking about science as a collection of truths or a search for knowledge, we can think of it as a thing that people do: science as a verb, instead of a noun. His case study is a history of early particle physics and the beginning of large-scale industrial science. Here he quotes the recently deceased physicist Dr. Donald A. Glaser, who had no interest in what he called "big science." Glaser knew that in order to find the mysterious subatomic particles that make up the universe (or, just as importantly, to find them first) he would have to build a much larger machine than any small research group could operate. He wanted to find what could best be described as a more "humane" way of doing physics:

"There was a psychological side to this. I knew that large accelerators were going to be built and they were going to make gobs of strange particles. But I decided that if I were clever enough I could invent something that could extract the information from cosmic rays and you could work in a nice peaceful environment rather than in the factory environment of big machines… I wanted to save cosmic ray physics."[4]

Social scientists find themselves in a similar predicament. The influential positions and most of the resources are tied up in Big Social Science. It means writing Big Textbooks, editing Big Journals, and constantly applying for Big Grants, all of which are becoming scarce and (in my opinion, anyway) more unwieldy. The answer isn't exactly something called Little Social Science, but we do need to change our course. We need to move away from seeking the big grants (they won't be around for much longer anyway) and start looking at how to do our work in a way that builds social capital, not political or financial capital. Otherwise, we're letting the creeping influence of neoliberal capitalism affect (infect?) our work.

The anthropologist David Graeber has said about as much in his Fragments of an Anarchist Anthropology. Here he calls on anthropologists to focus less on High Theory and consider,

…what might be called Low Theory: a way of grappling with those real, immediate questions that emerge from a transformative project. Mainstream social science actually isn’t much help here, because normally in mainstream social science this sort of thing is generally classified as “policy issues,” and no self-respecting anarchist would have anything to do with these.

Political and financial capital, in the Bourdieuian sense, means holding sway in large bureaucracies and producing documents and theories that are meant to help governments and large non-state actors do their work. I am more interested in the local, although that does not necessarily have to mean the analytical "micro." It means finding new resources and gathering data in new ways. It means radically rethinking ethics in a way that is better and more critical than the standard IRB process. This new institution of social science research would probably be small, project-based, and possibly crowdsourced. It would have regular, long-standing, community-based projects. It might teach standard classes, but there would be a lot of hands-on work: learn Weber's theories of bureaucracy by volunteering at a big NGO, and take a lesson in material agency by trying to get a garden's irrigation system to work.

A lesson I learned relatively early on in my own research is that long-term sustainable projects require multiple iterations. Not just because getting it right the first time is near impossible, but also because one cannot take into account all of the little nuances of The Human Condition that make or break a project. Projects that are environmentally, as well as socially sustainable must be sensitive to the unexpected and contingent realities of a complex world. Current institutions are capable of it, but are not uniquely adapted to thrive in it. I contend that a new kind of institution, one formed around the idea of praxis and the vita activa, is absolutely necessary to break new ground in the social sciences. This institution would be equal parts community center and research lab. It would demand the best and the brightest, but acknowledge that sometimes the answer to a nagging question must come from a novice or layperson. This is a proposal for getting undone science done. [PDF]

Follow David on Twitter and Tumblr.


[1] From The German Ideology (1845–46)

[2] I think it's safe to say that Blackboard would not be in the Academic's Utopia.

[3] The linked document is specifically for engineers, which Joseph R. Herkert and I have critiqued here, but we fail to note that these kinds of documents are (at least in theory) a great idea. Why don't social scientists and philosophers even think to consider cataloging and outlining what they see as the grand challenges for all of humanity?

[4] Quote from page 43 of Pickering, Andrew. 1995. The Mangle of Practice: Time, Agency, and Science. Chicago: University of Chicago Press.

 

Note to readers: This article and its corresponding links discuss rape, victim blaming, “slut” shaming, and rape culture generally.

 

Image from Youth Vector

The disturbing events in Steubenville, Ohio have spurred some insightful reporting and analysis (collected by Lisa Wade at Sociological Images) that, one would hope, will raise awareness about rape culture. As a social scientist who studies social media, I am particularly interested in how privacy and connectivity have been framed within the context of the case. I cannot help but notice the sloppiness with which many reporters write about the "dangerous mix of alcohol, sex and social media that many teens navigate nowadays." Studying the role of social media in everyday life may appear trivial or superficial: something fun or novel to study. But Steubenville shows us exactly why writers and scholars need to understand social media better.

First I want to acknowledge the wider discursive field of rape culture that Steubenville sits in and is a part of. Many of the comments have been unthinking and callous examples of the pervasiveness of victim blaming and "slut" shaming in the United States. Rape culture is like some kind of subterranean fungus: it becomes visible in small blooms of sad and disturbing victim blaming, but the majority of the organism lies unseen, underground and in the dark. Vast networks of misogyny and power fantasies connect seemingly unrelated utterances like "Those athletes had such a bright future!" and "Well, that's what you get for getting so drunk." Rape culture goes far beyond the act of non-consensual sex. Men and women alike participate in this culture for a variety of reasons that are not immediately and presently about any particular action. As Amanda Marcotte writes:

Claiming that it’s the victim’s fault for tempting men with her drinking/sexual activity/mini-skirt means telling yourself that as long as you aren’t as “slutty” as the victim, you’ll be OK. Most importantly, in communities like Steubenville where the tide is against the victim, playing along and hating on the victim is a demonstration of loyalty to the sexist culture. It can make a woman more popular, which in turn can also make her feel more protected from rapists.

The whole case is shot through with evidence of our augmented society. Pictures were posted to Instagram, videos were uploaded to YouTube, and the victim reportedly pieced together the night through Twitter, Facebook, and text messaging. Anonymous drew national attention to the case and still remains a big part of the news story. When the story turns to the role of social media and the Internet in general, news outlets go from shallow analysis to deeply vexing misattributions of blame and power. The media defaults to what Nathan Fisk (@nwfisk) has called "the frame of 'inescapable' technologies." On top of the victim blaming (but also independent of it) is a basic assumption that youth online are incapable of controlling their exposure, the boundaries of their privacy, or the subsequent social action that takes place offline. While Steubenville goes far beyond "bullying," Nathan's insights are still germane:

In the Internet safety arena, digital dualist frames do not simply draw distinctions between online and offline social life – they are used to blame existing social problems on the social technologies that make them visible in new ways. Bullying, predation and exposure to “inappropriate content” have been seen as problems long before the widespread adoption of the Internet and information technologies by kids, and yet all of these problems appear as “new” or, at best, made worse by information technologies.

This is really one of the biggest dangers of digital dualism and of misunderstanding digitally mediated social action: national conversations about rape culture, or bullying, or just about anything else are hobbled if not totally halted by inane or misguided attempts at understanding this seemingly new cyber-whatever. It's the same old rape culture as it ever was, and yet the conversation begins anew. At best, we see some careful reporting about social media being a "double-edged sword," but there is never an acknowledgment that very similar underlying social dynamics have been around for a long time. At its worst, digital dualism makes us lose sight of a problem because it appears to us as something new and unknown. Blaming the mini skirt turns into blaming some amorphous "culture of over-sharing." Blame is shifted away from the responsible parties and distributed across various human and nonhuman actors to the point that the perpetrator loses agency and, thus, guilt.

 Follow David on Twitter: @Da_Banks


It's as if a TED conference smashed headfirst into a hackathon and then fell into an NGO strategy summit. CEOs sit next to non-profit employees and eat boxed lunches as a dominatrix (@MClarissa) presents a slide on teledildonics, followed by a garage hacker-turned-million-dollar project director quoting Alexis de Tocqueville. It is a supremely uncanny experience that all happens within the confines of a movie theater (and, later, a sushi bar). This is what one can expect when they attend the Freedom to Connect conference (#f2c) held in Silver Spring, Maryland. The conference is meant to bring "under-represented people and issues into the Washington, DC based federal policy discussion…" I left the conference feeling generally good that there are people out there working to preserve and protect open infrastructures. I just wish that team were more diverse.

The full stated goals of #f2c are as follows:

F2C: Freedom to Connect brings under-represented people and issues into the Washington, DC based federal policy discussion to promote Internet freedom, to preserve Internet values such as public protocols and universal connectivity, and to promote the use of the Internet for people-oriented purposes.

After the wonderful experience that was Theorizing the Web 2013 (#TtW13), I rode my beloved Amtrak to the DC Metro area just as the second panel was starting. #f2c was a memorable experience in part because of its proximity to #TtW13. Whereas #TtW13 presenters make subtle and extended critiques of surveillance in both medieval and modern cultures, #f2c presenters are more interested in governments' new capacity and interest in surveilling the Internet. I got the sense that, left to its own devices, the Internet is inherently a hierarchy-free, democratizing force. It can deliver us from tyrannical governments and greedy corporations so long as all connections are treated equally and the market provides those connections at cheaper prices and faster speeds: we are only as equal and free as the packets that carry our ideas.

This optimism is undergirded by what PJ Rey (@pjrey) has called "The Myth of Cyberspace": the idea that the Internet is a great equalizer, a place where all people are free to connect and transcend old hierarchies and arbitrary barriers. The myth is derived from the metaphors used to describe the early Internet as a faraway place, separate and distinct from the physical world that contains gender, race, class, and embodied meaning. The Internet is presented as a tabula rasa on which totally new social relations can be formed from whole cloth. This is simply not true. As PJ Rey writes, "The Web reproduces existing social norms and geopolitical divisions more than it defies them; it is characterized by redundancy, not transcendence." Nowhere was this more evident than at #f2c itself. The conference claims to bring together "under-represented people," but I soon realized that my working definition of what constitutes "under-represented" was much different from that of others in the room.

While the problem of inclusivity goes well beyond a simple headcount, the numbers are worth mentioning. Of the 38 or so speakers, there were only four people of color and nine women. Half of the people of color who presented were mistakenly identified as another person after their presentations. I was mistaken for Ben Huh (@benhuh), the CEO of the cheezburger.com network (I am white), and I later witnessed a middle-aged white man mistake a black man in the reception area for the attending blues artist Lester Chambers. When the mistake was pointed out, the white man said, "Well, you two look a lot alike." He looked nothing like Lester Chambers. Microaggressions such as these are nothing new, nor are they uncommon, and that is exactly the point. A conference that claims to bring in unheard voices would serve itself well to actively diversify its attendees.

Which brings me to Derek Khanna (@derekkhanna). As far as I can tell, Derek cares passionately about two things: 1) Americans being able to unlock their cell phones and 2) eliminating affirmative action in college. He goes out of his way to brag about his triumphs in the latter arena in his speaker bio:

Derek has a long history of trying to reform politics. While at his alma mater he led a campaign that successfully overturned an affirmative action system that he argued violated the Constitution…

Derek's very presence at #f2c has a chilling effect on the purportedly democratic and inclusive nature of the conference. I understand that many people have extremely nuanced opinions that bring them to the conclusion that affirmative action is bad or even counterproductive to its stated goals, but one must recognize the unintended consequences of proudly proclaiming such views when they are not at all germane to the topic of the conference. Getting yourself fired for penning a controversial memo on copyright law does not make you a freedom fighter or a champion of free speech. It just means you chose the wrong employer. To hold up the removal of affirmative action as an example of "reforming politics" in the name of the Constitution does not create a safe space for the unheard or under-represented.

I do not want to make it seem as though #f2c is just one Donald Trump toupee away from looking like CPAC. #f2c is an excellent conference, and most of the attendees are doing excellent work on municipal broadband, online activism, and copyright reform. The decisions made and ideas exchanged at past #f2c conferences have probably had more influence on your access to the Internet, and mine, than I will ever know. I know that. My criticisms, ultimately, come from my own dedication, passion, and academic interest in the same issues that #f2c fights for. We have never been totally "free to connect," and that holds true for some more than others. Fighting for the liberty of the workers who build our devices does more "to promote the use of the Internet for people-oriented purposes" than a thousand jailbroken iPhones.

"Public protocols and universal connectivity" can refer to social as well as technical systems. TCP/IP and mutual aid serve similar ends: to make a decentralized, robust, and scalable network of peers. Universal connectivity can just as easily refer to breaking the glass ceiling as to net neutrality. I saw an acknowledgement of this in the wonderful talk by Sascha Meinrath (@saschameinrath) (he's the "garage hacker-turned-million-dollar project director" mentioned earlier) on the genesis of Commotion Wireless. The full video is below, but I particularly liked what he said about the social and economic agendas of the past 50 years. He states,

“…civil society realized that the most efficacious route to better our country was through civil disobedience. And today who we hold up as champions of that era, from Rosa Parks to Malcolm X we believe that they did the right thing by being a part of a sophisticated, nationwide, and purposeful intervention. And today, a half-century later, I would say we have a similar responsibility to push that envelope. To demand access to our public airwaves, to engage in electromagnetic jaywalking writ-large.”

Meinrath isn't equating SOPA with Jim Crow; rather, he is making a link between the tactics of both movements. And while hints of mild digital dualism can be sensed, he later acknowledges that his young daughter, and many like her, will not distinguish between the online and offline components of their lives, and that making "purposeful interventions" in augmented reality will require tactics that are updated and tuned for the current technosocial environment.

Ultimately I blame my own discipline for this failure of inclusivity. Science and Technology Studies (STS) not only has its own demographic problems, it also does a bad job of getting its findings implemented or noticed by practitioners. The recent advances in critical making and appropriating technology are excellent starts, but they are not actively working to reform or revolutionize industries in the way that #f2c aims to do. Collaboration between these realms could produce very interesting results.

I still believe in the old saying "another world is possible," but I think it requires a lot more praxis than either technologists or social theorists have been willing to try in the past. We won't always agree, and there will be times when one makes difficult demands on the other. (For example, I would ask Alexis Ohanian if he thinks both Violentacrez and Columbia Records are oppressors of a free society.[1]) If social theorists knew a little bit more about the technical and legal workings of the Internet, and technologists knew just a little bit of social theory, we could improve both professions immensely.

I'll end with a small request: that the organizers and sponsors of Freedom To Connect consider "Social Justice and Liberty" as next year's theme, and that they invite speakers who understand that widespread bigotry and rape culture are barriers to a free and open Internet just as big as, if not bigger than, over-zealous copyright laws and bandwidth caps. I understand that only so much can be covered in any single conference, but the kind of exchange of knowledge that I am proposing can only make for more informed and more socially just decision-making. Let's build an Internet for everyone.

 David A. Banks is on Twitter and Tumblr.


[1] Ohanian was there to introduce Lester and Dylan Chambers. Lester Chambers, as part of the band The Chambers Brothers, had never received payment from Columbia Records despite their album going gold.

enlightenment cave

In the beginning, there was nature. And in spite of the obvious lack of humans to give names to the animals and to categorize the trees, it all basically looked and felt like it does now: leaves were green and rocks were heavy. Over time, humans (those natural tool makers!) developed a plethora of explanatory concepts and ways of knowing that gave their universe a discernible order. At different times and in different regions of the world, the universe took on vastly different shapes and personalities. There were the four humors, animism, feng shui, and, by the mid-1660s, something some white guys had developed called experimental philosophy. Today we just call it the scientific method. One of those white guys, Robert Boyle, was particularly vocal about the benefits of the scientific method and objective observation.[1] He believed deeply that if enough men[2] of reputable repute watched something happen, you could call it true. No monarch or bishop required. Thomas Hobbes was skeptical, not because he believed truth had to come from an authority figure, but because he was, among other concerns, suspicious that by observing effects one could derive the underlying physical causes. While both men had strong and informed opinions about society and the natural world, today we remember Hobbes as a political philosopher and Boyle as one of the first modern scientists. The separation of society and nature didn't have to look the way it does, but historical and social circumstances encourage us to separate these two realms.

I just think that’s an interesting story.

It shows that fundamental orienting concepts are tenuous and contingent. Had Hobbes convinced the right people, Western society would look very different. We might arrive at truth through a completely different process, and the material artifacts that we produce would be regarded in a totally different light. When you get down to it, the only important rubric for a Theory of Everything is that it is reliable: that the conclusions it draws can do meaningful and/or productive work. A professor of mine had a saying: "You can use witchcraft and Western science to get to the moon, but only one of them is likely to work." Disagreements over how to talk about digital technologies and social action have a similar valence. Sometimes it looks like a semantic argument, but there are key differences that can make an analysis more accurate more of the time.

Mr. Nicholas Carr has decided to level a flatteringly thorough rebuttal at the theory of augmented reality (you know, that thing we've been tinkering with in "fits and starts") on the eve of Cyborgology's 3rd annual Theorizing the Web conference. Naturally, all of us are busy preparing for the conference and are therefore unable to mount a complete and total rebuttal of Mr. Carr's recent blog post. I'll attempt to tackle some of his more egregious misunderstandings and errors today. Most of this is written from memory of the various documents in question, so I offer my advance apologies for any incongruities, misspellings, or over-simplifications. More comments and posts can be expected, and I'll happily append this post if someone catches an error.

In his post, Digital Dualism Denialism, Mr. Carr asserts that ignoring the separateness of the online and the offline is "wrongheaded," and that to act as though the distinction is totally meaningless is to ignore key facets of what digital technologies afford and allow. I agree with that, and I'd say most of my fellow writers do too. It sounds as though Mr. Carr is debating William J. Mitchell more than he is arguing with Nathan Jurgenson, myself, or the rest of Cyborgology.[3] Digital dualism is not just about the online and offline being "completely isolated worlds," and we never said as much. What we have said is that there is no Second Self, or "cyberspace"; there's just you and the environment. We are just as augmented as we ever were. Mr. Carr gets it wrong in the very first paragraph of his piece when he says,

As Net access has expanded, to the point that, for many people, it is coterminous with existence itself, the line between online and offline has become so blurred that the terms have become useless or, worse, misleading. When we talk about being online or being offline these days, we’re deluding ourselves.

The spatial metaphor of "cyberspace" is and always has been misleading. It is worth noting that the first instance of the phrase "online" was used to denote whether or not a city or town had a train station. You were considered "online" if a train stopped in your town, regardless of whether or not you used the train or the train station was on the other side of town from where you lived. We should probably think of ourselves in a similar fashion. If you can conceivably access the Internet, you are online. It doesn't matter if you're signing onto a BBS in the '80s or Facebook today.

To wit, there was never a new and totally untouched cyberspace waiting to be populated. Digital networks were born into and are constitutive of the same social and cultural structures and patterns that exist outside of them. When we, as theorists and popular authors, ignore this fact, we tend to become determinist in our thinking. Technology has its affordances, and sometimes those new affordances enable new social practices or make existing ones more prevalent, but we should be extremely careful when making claims about technology’s ability to (just to pick a random example) make us all stupid.

Mr. Carr has accused me in particular of treating people as dopes: that I am ignoring what people feel and instead trying to apply an overwrought, "torturous" argument about sexual deviance. This is not even close to accurate. I am simply stating that the Internet (and indeed many new technologies) appears as an addition to an already completed whole, and that this perception can make it seem superfluous or unnecessary. This perception also has the tendency to make dismissals of related social action easier.

To think as much is to deny not only the real and tangible benefits of the technology, but also the wide and varied experiences that people have with that technology. Yes, some people feel that the digital erodes “the real” and some even see it as an intrusion, but that is just one of literally millions of experiences. Without a hint of sarcasm, authors will describe how their vacations to Cape Cod and Madrid have been ruined by what they view as the anti-social properties of iPhones, while at the same time excoriating youth and the poor for “wasting time” on entertainment and social media. Cyborgology’s contributors should be the last people you accuse of dismissing people’s concerns.

I am not in the business of speaking for others in this arena. So rarely do people in that line of work retire that openings are few and far between.

It is important to be explicit and clear about whom you're talking when you say "we" or "us" or "our." These are implicit (and sometimes explicit) declarations of in-group status. There cannot be a "them over there" without an "us over here." So when you say, "We sense a threat in the hegemony of the online because there's something in the offline that we're not eager to sacrifice," I wonder two things: 1) whether you intended to use hegemony incorrectly, and 2) whether what you are really referring to is a change in the distribution of power and the ability to speak, not a change in technological affordances.

Perhaps this all comes down to a fundamental misunderstanding of our intentions or motivations. I won't speak for my fellow Cyborgologists, but I am fairly certain that none of us are techno-utopians. Nor do I think any of us would conflate technological innovations with social progress [pdf]. I wouldn't even characterize myself as a technophile. Rather, I am motivated by the possibility that certain technological affordances and access to resources can induce human flourishing and liberatory potentials. So when I hear that Cape Cod is a little less comfortable or that sitting down to read War and Peace is a little more difficult, I don't really care. When computer networks allow the government to make very specific choices about what food stamp recipients can buy, I care. But I am critical of the sociotechnical apparatus and underlying ideology, not just the computer networks. When I see my students citing Wikipedia or constructing sloppy sentences, I seriously reconsider the pace of my assignments and the quality of my pedagogical decisions. I do not jump to the conclusion that they are lazier or less disciplined than my peers or I. Perhaps I'm being hypocritical for churning out a response within 24 hours, but I know if I don't get this out today, I never will.

I have not even addressed the fact that Mr. Carr does not appear to understand the use of ideal types (he calls them strawmen), or that to build a typology of digital dualism and augmented reality bookended by such theoretical constructs is not a concession but a clarification. That’s all I’ll say about that.

In the end, I'm happy that this discussion is happening out in the open and not in obscure academic journals. Derrida would probably say something about the inherent contradiction of complaining about the Internet on the Internet, but I'll let that stand on its own. Mr. Carr ends by posing two short questions: What's lost? What's gained? It is important to note that, like energy, most things are not totally lost or created from nothing. They are shifted, reassigned, reallocated, and reconstructed in new and unfamiliar ways. Perhaps what Mr. Carr sees as a loss is a gain to others? Perhaps some of us are, in fact, alone together or are suffering from Google-induced stupidity. But there are also under-socialized and shy kids finding communities that understand them. Doctors in Ghana are staying in contact with patients and giving medical advice to the immobile. I'm not interested in a cost-benefit analysis of every technology, but I do need a theoretical perspective with enough resolution and nuance to account for the multitudinous possibilities that lie before us. It might be fun to play Thoreau and sit in judgment of what the Internet does to people, but I'd rather be a cyborg than a romantic.

David A Banks is on Twitter.


[1] If he were alive today, Boyle would have a million views on his TED talk.

[2] The paintings of the time, like the ones done by Joseph Wright of Derby, portray this very well. Women are in the paintings, but they are often depicted as terrified or grief-stricken as they watch animals suffocate during air pump demonstrations.

[3] It should be noted that Mr. Carr never considers the excellent piece that P.J. Rey published in The New Inquiry which could almost be mistaken for a rebuttal if one did not notice the April 13, 2012 publication date.

rousseau spiderman

Just about every one of our contributing authors has written a piece that challenges or refutes the claims made by tech journalists, industry pundits, or fellow academics. Part of the problem is technological determinism, the notion that technology has a unidirectional impact on society (e.g., Google makes us stupid, cell phones make us lonely). Popular discussions of digital technologies take on a very particular flavor of technological determinism, wherein the author makes the claim that social activity on/in/through Friendster/New MySpace/Google+/Snapchat/Bing is inherently separate from the physical world. Nathan Jurgenson has given a name to this fallacy: digital dualism. Ever since Nathan posted Digital dualism versus augmented reality, I have been preoccupied with a singular question: where did this thinking come from? It's too pervasive and readily accepted as truth to be a trendy idea or even a generational divide. Every one of Cyborgology's regular contributors (and some of our guest authors) hears digital dualist rhetoric coming from their students. The so-called "digital natives" lament their peers' neglect of "the real world." Digital dualism's roots run deep and can be found at the very core of modern thought. Indeed, digital dualism seems to predate the very technologies that it inaccurately portrays.

What evidence do we have that the beginnings of digital dualism have been with us for centuries? Obviously any evidence would not mention yet-to-be-invented artifacts, but it would mention relatively new technology. (I'll defend this conflation of new and digital technology later.) Let's start with a quote from Plato's Phaedrus, a Socratic dialogue that ends with a discussion on the merits of writing:

“In fact, it [writing] will introduce forgetfulness into the soul of those who learn it: they will not practice using their memory because they will put their trust in writing, which is external and depends on signs that belong to others…” [1]

Along the same lines are some historical quotes that WIRED magazine collected in 2006, just as Hillary Clinton was crusading against violent video games in the United States Senate. [The full list, titled "Culture Wars," can be found here.]:

“The free access which many young people have to romances, novels, and plays has poisoned the mind and corrupted the morals of many a promising youth; and prevented others from improving their minds in useful knowledge. Parents take care to feed their children with wholesome diet; and yet how unconcerned about the provision for the mind, whether they are furnished with salutary food, or with trash, chaff, or poison?”
– Reverend Enos Hitchcock, Memoirs of the Bloomsgrove Family, 1790

“Does the telephone make men more active or more lazy? Does [it] break up home life and the old practice of visiting friends?”
– Survey conducted by the Knights of Columbus Adult Education Committee, San Francisco Bay Area, 1926

These two quotes, along with others about motion pictures, comic books, and video games, give us some perspective. Reverend Hitchcock bears a striking resemblance to the New York Times' article about the New Digital Divide, and the Knights of Columbus sound a lot like Sherry Turkle. Both quotes, Turkle, and the New York Times all share a common set of prerequisite assumptions: 1) there exists a qualitative and categorical difference between the technology identified and past ways of doing things; 2) this change is determinist in nature; and 3) the difference is the result of adding something unnecessary or superfluous to a system that was, while far from perfect, stable or more "natural."

The popular movie series produced by South African apartheid supporters, "The Gods Must Be Crazy," is an excellent example of the technological determinist fallacy. Such thinking can easily be used to justify oppression or the withholding of resources as an effort to "save them from themselves."

These three basic assumptions, exemplified by the quotes above, appear to be rhetorically similar and cognitively related. They're not identical arguments by any means, but I suspect that they all arise from what the German philosopher Ludwig Klages called "logocentrism," the privileging of the spoken word over other forms of communication. Klages was putting a name to what dozens of writers before him had already thought. Jean-Jacques Rousseau (18th century) and de Saussure (19th century) had already described the written word as exterior to, or a representation of, speech. It is not until the 1960s, when the French philosopher Jacques Derrida challenges logocentrism directly, that we begin to see speech and written language on even footing. Both forms of communication, according to Derrida, are deferrals from arriving at true meaning: both speech and text point toward a third, pure idea that cannot be expressed. For example, the written word "computer" and the utterance computer are not computers, nor can they fully express the entire concept of everything encompassed by the author's idea of a computer.

But what does all of this have to do with digital dualism? Plato's fear that writing reduces one's memory and Carr's infamous provocation that Google is making us stupid sound the same, but are they related phenomena? I believe that logocentrism and digital dualism are closely related, and my reason has everything to do with masturbation. Or, more specifically, Rousseau's opinions on masturbation. Rousseau claims at different points in his Confessions that masturbation is a supplement to nature: something constructed or virtual that competes with an existing real or natural phenomenon. Derrida, in his Of Grammatology, asserts that erotic thoughts not only precede sexual action (you think about what you do before you do it) but that there is no basis for finding sex any more "real" than auto-affective fantasies. This "logic of the supplement" mistakes something that was "always already" there for an unneeded addition. Derrida writes,

“Auto-affection is a universal structure of experience. All living things are capable of auto-affection. And only a being capable of symbolizing, that is to say of auto-affecting, may let itself be affected by the other in general.”[2]

Just as speech was privileged over the written word in ancient Greece, we tend to privilege the physical over the digital. A hardbound book is the real thing, while the ebook is something ephemeral or unnecessary. As our own Sarah Wanenchak describes it, "This feeling is instinctive, gut-level; it can drive us without us being explicitly aware of it." My own print book collection and skimpy Kindle library are a testament to my own digital dualism. The feeling is so hard to shake, it seems, because the logic of the supplement is so pervasive. I don't want to get too caught up in the different technological affordances of digital and physical copies in this post. Instead, I want to focus on the differences between technologies: what is readily considered a supplement, and what is considered natural or part of the completed whole. My argument is a relatively simple one: I simply want to extend the boundaries of logocentrism to explicitly include digital media. We should treat Turkle, Carr, and The New York Times the same way Derrida treats Plato and Rousseau. Derrida is like the friend who cannot help but point out internal inconsistencies within movies. Your friend might point out that there's no way Princess Leia can "remember images and feelings" of Padmé because she died in childbirth, while Derrida is more interested in how late Enlightenment scholars can hate on writing but still produce so much of it. Derrida calls this focus on internal paradoxes and contradictions a "double reading," and it can be a useful tool for ferreting out digital dualism.[3]

Digital dualism is pretty easy to spot once you know about it, because the distinction is so glaring. It's like noticing a chip on your iPhone's screen. It's a commonly held fallacy because centuries of Western thought force us to look at new technologies as an unnecessary addition to some kind of completed whole. Derrida characterized the classics' fear of writing as a fear of the dead. The text lies there, unchanged by its audience or the discovery of new facts. It is horrifying in its lifelessness. If the written word is dead, then perhaps our fear of hypertext comes from its uncanny ability to mimic life. It is a modern-day Prometheus: animated dead text seeking a willing audience. Zombified letters and images projected onto the faces of the sorts of people we deem too dim-witted to know any better: the poor, the young, the other. The entirety of the post-modern and post-structuralist projects in social theory has been about questioning, displacing, and ultimately dismantling old boundaries. This is not because the boundaries are bad (although some are) but because dismantling them lets us see things in a brand new light.

The theory of augmented reality, the idea that digital technologies are not separate but are enmeshed in previously existing social structures, is not inherently uncritical of the technophile or of new digital tools. Far from it. Augmented reality is simply an application of the last 40 years of philosophy and social theory to our increasingly networked lives. It eschews the outmoded frameworks that lead to uncritical thought and privileged conclusions, that is, digital dualism.

This is a shortened version of a full paper accepted to Theorizing the Web 2013. Full paper forthcoming. Thanks go to Britney Summit-Gil, and the Cyborgology editors and fellow authors for confirming some initial assumptions about students.

David A. Banks is on Twitter and Tumblr


[1] Quoted from page 79 of Plato, Phaedrus. Indianapolis: Hackett, 1995.

[2] From Derrida, Jacques. Of Grammatology. Translated by Gayatri Chakravorty Spivak. Baltimore, MD: Johns Hopkins University Press, 1998.

[3] I would like to take this opportunity to note that I read Alone Together on a Kindle in a crowded car on the way to a conference. I spoke to no one.

Photo by: The Fayj

The concept of "risk" comes up a lot in the classes I TA. Usually, it comes up as part of a conversation about acceptable levels of risk for consumer products: How safe should a car be? How much money should we spend on fire safety in homes? If you're utilizing a cost-benefit analysis, that also means calculating the price of a human life. How much is your life worth? These questions are familiar to safety regulators, inspectors, CEOs, and government officials, but as private citizens and consumers, we like to think that such questions are sufficiently settled. Cars are as safe as we can make them because human life is incalculably valuable. After all, these sorts of questions sound macabre when we invert the function: How many cars should explode every year? How many jars of peanut butter should have salmonella in them? These questions are largely considered necessary evils in today's risk-based society, but what kind of society does that create?

The German sociologist Ulrich Beck wrote a very influential book in 1992 called Risk Society: Towards a New Modernity. The very first sentence of the first chapter summarizes the general thesis: “In advanced modernity the social production of wealth is systematically accompanied by the social production of risks.” (Emphasis in the original.) He goes on to say, “Accordingly, the problems and conflicts relating to distribution in a society of scarcity overlap with the problems and conflicts that arise from the production, definition and distribution of techno-scientifically produced risks.” In essence, we as individuals spend our days trying to mitigate our own personal risk. We buy insurance, wear seatbelts, wash our fruit, and bring an umbrella in case it rains. Large institutions like governments and corporations do something similar, but the implications of their actions are not only bigger, they are also more complex. We all learned that the hard way in the 2008 financial collapse.

Economists and political scientists love to play around with risk. They make models and games (not the fun kind) that explain how we mitigate risks and why we make certain risky decisions. Just when models start to settle out and things become somewhat predictable, a new technology will disrupt the calculus. The invention of Yelp reviews reduces your risk of going to a terrible restaurant, to give one mundane example. A more dramatic one is the invention and subsequent widespread use of drones in combat situations. All sorts of military incursions and surveillance missions become less risky when a drone is brought into the equation. Drones provide a certain level of security for combat troops, but they also make it easier to kill people who are hard to get to. Just as rich people are at less risk of dying from treatable illness (because they can afford healthcare), enemy combatants are much more likely to die in battle than American troops. We can clearly see this risk analysis playing out in the recently uncovered US Justice Department memos that provide the rationale for drone attacks. Drones are used when capturing the target is deemed too difficult or the threat is too imminent. You may disagree with what qualifies as "too difficult" or "imminent," but the logic remains: a wealthy country can produce sophisticated robots to mitigate its own risk and dramatically increase the risk of others.

It’s important to note that we aren’t just talking about the risk of being killed in combat. Collateral damage, the death of civilians, is also important to note here. Not just for moral and ethical reasons, but because it helps us understand the transfer of risk. Relatively rich American civilians are made safer through transferring the risk of combat death to relatively poorer people in the Middle East.

Death from a drone attack is an extreme example. Most risk analysis isn't about the likelihood of civilians being in the blast radius; it's about unintended consequences and externalities. Where does pollution go? Who gets exposed to what most often? There are millions of examples of this, but it's easier to speak in hypotheticals than any specific case. (For a book-length treatment of this sort of risk exposure I'd recommend Michael Mascarenhas' (2012) Where the Waters Divide.) Imagine a new coal power plant is proposed for a major metropolitan region. Where the plant goes is still up for debate, but there is a landowner just outside the city who is willing to sell. Neighbors catch wind of the possible plant and start yelling, "Not In My Back Yard!" They call their representatives, picket outside the coal company's headquarters, and maybe a few college students chain themselves to a tree on the proposed construction site. They cause enough of a ruckus that the company decides not to build on the site and to go elsewhere. Where is elsewhere?

For the coal company, the biggest concern at the moment is more bad press and the extra costs associated with picking a new site each time. They have to look for a place to build that won't elicit public outcry. Through a mix of market-based land prices, racism, and classism, environmental dangers tend to move to poor neighborhoods. Risk moves until it settles next to people that no one listens to. The poor rarely have time to call their congressional representative, nor do they have the political or social capital to collectively organize and persuade others to act. Nuclear power plants, water treatment facilities, phosphorous mines, and many other dangerous things are necessary for modern society, but the risks associated with having them are disproportionately experienced by people of color and the poor.

Photo by Bob August

There are other ways of going about risk mitigation. Most European countries exercise something called the “precautionary principle,” which requires actors (like coal companies) to demonstrate, usually in front of a government or citizens’ panel, that their project is safer and more beneficial than other feasible options. This has problems of its own, but it might be a start. More generally, deep and systematic change comes from individuals in industry making smarter and more informed decisions about what their inventions will do to society. It means taking unintended consequences more seriously and spending more time thinking critically about what a technology will encourage. It is not enough for roboticists to tell themselves that drones will save the lives of soldiers while never considering how their invention makes killing easier. It will take more than every engineer reading The Whale and the Reactor and eschewing the idea that technologies are only as moral as their users. Perhaps it means holding engineers responsible for “malpractice” the way we hold doctors and lawyers: a faulty car brake means the lead engineer loses their license.

Ultimately, the problem is that we haven’t even begun to consider alternative social arrangements that could improve innovation while reducing risk in an egalitarian fashion. The dearth of actionable alternatives doesn’t mean the status quo is fine; it just means the source of the problem has not been properly identified. More regulation is not the answer, and calling for smarter regulation just sounds hackneyed. What we need is a deep structural shift in the entire process. From engineering pedagogy to product labeling, the whole system is stumbling over itself, and too many innocents are dying. Modern technological society is outgrowing its capacity to gauge risk effectively, and the consequences could be disastrous.

Follow David on Twitter with little-to-no risk of bodily harm: @da_banks

Facebook just enabled its new Graph Search for my profile and I wanted to share some initial reactions (beyond the 140-character variety). Facebook’s new search function allows users to mine their Facebook accounts for things like “Friends that like eggs” or “Photos of me and my friends who live near Chuck E. Cheese’s.” The suggested-search function is pretty prominent, which serves the double role of telling you what is searchable and how to phrase your search. More than anything else, Graph Search is a stark reminder of how much information you and your friends have given to Facebook. More importantly, however, it marks a significant change in how Facebook users see each other and themselves in relation to their data. You no longer see information through people; you start to see people as affiliated with certain topics or artifacts. Graph Search is like looking at your augmented life from some floating point above the Earth.

Graph Search makes Facebook feel “bigger” and more intimate at the same time. I say bigger because I am no longer an individual who likes The Talking Heads or has too many photos of food. Instead, Graph Search makes me feel as though my friends and I are single instances of Instagrammed food within an enormous database of lonely eaters. You just see more of the social landscape, and connections come into high relief. Seeing those connections (some of which I saw for the first time) provides an instant “in group” feel. If you didn’t come up in my search for “friends who like cats” you are, by default, a dog person. You are different and you don’t understand my cat tree.

Not only does my perspective change, but the way I order my friends and our mutual interests changes as well. Facebook has outgrown its namesake and now resembles something closer to a social search engine. (Something Zuckerberg has wanted for a long time.) This shift, from a book of faces to an indexed catalogue of various living and non-living objects, means Facebook can update me on things as well as people. Whereas before Graph Search you could search for people and groups of people (including bands and companies), now you can search for just about anything and see how it relates to your friends. It’s a total deconstruction of my augmented life. Obscure trends suddenly become completely obvious. For example, when I searched “Music my friends like” the top hit (60 friend likes) was “Ingtzi,” my friend’s DJing name. A Facebook PR person would call your attention to the fact that Ingtzi isn’t the most popular musician among my friends, but the most distinctive result within my particular friend distribution. (I have no doubt “The Beatles” or something equally generic outnumbers Ingtzi.)

In order to do work on lots of different people, you have to slice and dice them into many different kinds of subjects. Graph Search could not exist without all of the standardized “likes” and relationship-status fields that turn humans into usable data sets. This process of standardizing and grouping is not new: governments have been turning people into pieces of information for a long time. Towards the end of his career, Michel Foucault became intensely interested in what he called governmentality, or how people become governable populations and civic-oriented citizens. He wanted to know how individual people become compatible with government bureaucracies and sovereign entities. After all, you can’t have a Department of Health if you don’t have a body of knowledge and a set of practices that let you treat individuals like a bunch of medical patients or biological entities that contract and transmit disease. Graph Search works in the same way. Standardized “like” buttons and constant requests to “tag this photo” or “add your hometown” not only encourage you to enter more data, they also make you compatible with search algorithms.
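To make that concrete, here is a minimal, purely illustrative sketch (toy data and made-up numbers, not Facebook’s actual code or ranking method) of how standardized “likes” turn a friend list into a queryable data set, and of one way a niche page like Ingtzi could beat The Beatles in a “music my friends like” search: by being over-represented in my network relative to its global popularity.

```python
from collections import Counter

# Toy data: each friend reduced to standardized, machine-readable fields.
# (Hypothetical names and likes, for illustration only.)
friends = [
    {"name": "Ana",   "likes": ["Ingtzi", "The Beatles", "cats"]},
    {"name": "Ben",   "likes": ["Ingtzi", "The Beatles", "dogs"]},
    {"name": "Chris", "likes": ["Ingtzi", "Waffle House", "cats"]},
]

# Made-up global like counts for each page.
global_likes = {"The Beatles": 40_000_000, "Ingtzi": 60, "Waffle House": 2_000_000}

def friends_who_like(topic):
    """A 'friends who like X' query is just a filter over standardized fields."""
    return [f["name"] for f in friends if topic in f["likes"]]

def rank_by_distinctiveness(pages):
    """Rank pages by how over-represented they are among my friends
    relative to their global popularity (a crude, TF-IDF-like score)."""
    counts = Counter(like for f in friends for like in f["likes"])
    return sorted(pages,
                  key=lambda page: counts[page] / (1 + global_likes.get(page, 0)),
                  reverse=True)

print(friends_who_like("cats"))                            # ['Ana', 'Chris']
print(rank_by_distinctiveness(["The Beatles", "Ingtzi"]))  # ['Ingtzi', 'The Beatles']
```

None of this is possible with free-text answers like “I’m really into music”; it only works because the “like” button produces identical, countable tokens across millions of profiles.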

This process is mutually shaping. Just as Facebook makes you compatible with search, your constant searching starts making you think like the algorithm. You might base playlists for parties on Graph Searches, or plan outings based on search results. Your online searches will have consequences offline. Intriguingly, Graph Search seems to work better when I ask very specific things.  When I search for “Photos of Food” the top three photos are 1) a picture of the Waffle House Museum sign, 2) a picture of my cousin with his wife, and 3) a sketchbook drawing of a friend. The next four pictures look like stock photos of pizza. I also got this strange (and outdated) promotional sign:

Perhaps image searches are still best left up to Google. For now, I’ll choose my search engine based on two kinds of questions. Questions that start with “what” are meant for Google and questions that start with “who” are Graph Search questions.  (Apparently “who” questions that Facebook can’t answer will go to Bing. Good for Bing.) The commentariat seems to believe that Facebook is looking to compete with Google in search, and such questions are best left to business analysts. For me, it will be interesting to see how using both of these search engines will change personal decision-making. I also can’t wait for the inevitable Google is Making Us Stupid corollary: Graph Search is making us Anti-Social.


I really love putting things in order: around my house you’ll find tiny, neat stacks of paper, alphabetized sub-folders, PDFs renamed via algorithm, and spices arranged to optimize usage patterns. I don’t call it life hacking or You+; it’s just the way I live. Material and digital objects need to stand in reserve for me so that I may function on a daily basis. I’m a forgetful and absent-minded character and need to externalize my memory, so I typically augment my organizational skills with digital tools. My personal library is organized the same way Occupy Wall Street organized theirs, with a lifetime subscription to LibraryThing. I use Spotify for no other reason than that I don’t want to dedicate the time required to organize an MP3 library the way I know it needs to be organized. (Although, if you find yourself empathizing with me right now, I suggest you try TuneUp.) My tendency toward digitally augmented organization has also made me a bit of a connoisseur of citation management software. I find little joy in putting together reference lists and bibliographies, mainly because they can never reach the metaphysical perfection I demand. Citation management software, however, gets me close enough. When I got to grad school, I realized my old standby, ProQuest’s RefWorks, wasn’t available, and my old copy of EndNote X1 ran too slowly on my new computer. So there I was, in my first year of graduate school and jonesing heavily for some citation management. I had dozens of papers to write and no citation software. That’s when I fell into the waiting arms of Mendeley.

Like any piece of software that runs on OS X and contains a database, Mendeley described its interface as “iTunes-like.” And while the interface was pretty polished, that wasn’t what sold me. Mendeley was an organizer’s dream. It renamed and organized all of my PDFs just the way I wanted them. It had a burgeoning social function as well, which was interesting, but the userbase was still too small to be useful. For me, Mendeley was a well-designed piece of software that did exactly what I needed it to do, without memory leaks or an obtuse user interface. I had an impeccably organized PDF library and I was happy. Citing papers was almost an afterthought. Then, late last week, I got some really bad news on Twitter from @anneohirsch: Elsevier was reportedly moving to acquire Mendeley.

Elsevier isn’t the worst company in the world. They’re not dumping millions of gallons of oil into coastal ecosystems, nor are they a massive mercenary army that kills for top dollar. They are a publishing house, and they make money by controlling the distribution of the knowledge that I and my fellow academics produce. You may have never heard of the company, but you know their “products”: The Lancet, Gray’s Anatomy (the reference book, not the TV show), and ScienceDirect are all Elsevier properties. A private organization that maintains such important tools must also shoulder a great deal of responsibility. To own The Lancet is to own a voice of scientific authority. In other words, if it is written in The Lancet then it is at the forefront of modern medicine. But Elsevier does not always respect the trust that many have bestowed upon them. From 2000 to 2005 Elsevier published six periodicals that looked like peer-reviewed medical journals but were actually nothing more than paid advertisements for pharmaceutical companies. They have lobbied against open-access publishing via the Research Works Act, and they have sued their own customers on ambiguous legal grounds. They even sued The Vandals over their parody of the Variety magazine logo. They are despicable enough to warrant a dedicated, popular campaign calling on academics to boycott their journals; The Cost of Knowledge campaign has amassed over thirteen thousand signatories in a little over a year. One of my most-used tools, something I rely on to do my work almost every day, will most likely be bought by this very large company. My methods courses did not prepare me for this.

My obsession with organizing has also made me a magpie for note-taking tips and recording-device tricks. I like to see how other people organize things, see what rings true to my idiosyncrasies, and find what helps me overcome my failings. In the social sciences, we typically push all of these little tricks of the trade under a large canopy called “methods.” We take and teach methods courses, we hold brownbags on methods, and we write books dedicated to this amorphous meta-discussion of how we do what we do. It is trendy, both in the classroom and in print, to comment at length on how our very methods intimately and directly shape our knowledge production. But in all of the methods courses I have taken, and in all the books I have read, no one has tackled the issue of digital methods. There is no shortage of suggestions about note cards, pocketable spiral-bound notebooks, and tape recorders. There are whole journal articles and book chapters devoted to the proper posture and demeanor for an interview with powerful interlocutors. These are all important skills to have, but I have yet to see a single methods text that weighs the pros and cons of Evernote, the built-in citation manager in Word, or the finer points of OCR’ed PDFs. Should I rely on my phone’s voice recorder, or should I get a dedicated device? What am I supposed to do when my digital tools are bought by a large corporation that I hate? Do I add my citation management software to the list of things in this world that I rely on but don’t condone? Or do I run to the opposite extreme and disavow all digital tools in my work?

I don’t want to stop using citation management software. It saves me time and makes for a much more polished final product. I also don’t want to put myself at a disadvantage when it comes to producing publishable material. The job market has gotten so fierce that grad students in most disciplines are expected to have several peer-reviewed publications under their belt before they get their Ph.D. The cost of opting out, in my opinion, is simply too high. I know some grad students who happily do their bibliographies by hand, and that’s fine for them. But that is not where my strengths lie. I need the help and want to benefit from the tools available.

At this point, you might be asking, “What is Elsevier going to do with Mendeley that warrants uninstalling it from your computer?” When I first heard about Mendeley’s possible acquisition I posted the story to the Academic Publishing subreddit. One of the commenters made a really good point that gets at the heart of the matter. Here’s the full comment:

“Ah-HAH! See??? I told you alllllllll!!!

Publishers love snapping up reference managers, because they know that an uncontrolled reference manager product will encourage storing and sharing libraries, taking them out of the loop after the first download. They want these software packages to be enforcers of their copyright claims, instead of tools for researchers.”

It’s an astute observation and one I want to dwell on for the remainder of this essay. In some ways, this is nothing new. Daniel Kleinman, in his essay “Untangling Context: Understanding a University Laboratory in the Commercial World,” demonstrates that commercial interests have been embedded in scientific methods for a long time. In his study of the Handelsman lab at the University of Wisconsin-Madison, he concludes that any lab that wishes to produce a patentable product will be “subjected, in many senses, to the ‘rules’ that govern the world of commerce.” Experiments that use ready-made instruments are easier (and cheaper) to do than ones that require custom or special equipment. Is Mendeley’s acquisition just another instance of “the world of commerce” influencing science, or is this a new and unique relationship between capital and knowledge production? Intellectual property control has long been at the heart of scientific work, but it has never been quite this fine-tuned. An experiment or a stretch of fieldwork might have to conform to the realities of present economic conditions (not everyone can do their fieldwork in Mali, and not everyone can use the Large Hadron Collider), but when you sit down to write up your conclusions, you shouldn’t have to worry about whether the PDF you got from a colleague will get you sued. Note cards and locally saved bibliography documents will rarely rat you out to JSTOR.


My personal solution to the Mendeley dilemma is to find software that embodies my politics. I want my software to have all the affordances and features of a society that cherishes open dialogue. It should be a tool that, through its use, reaffirms and helps establish the politics I hold. For me, that means adopting free or open source software. Free software is not a silver bullet, but it is an excellent start. Anthropologist Chris Kelty noted in his ethnography of free software developers that these communities act as a recursive public. A recursive public…

“is vitally concerned with the material and practical maintenance and modification of the technical, legal, practical, and conceptual means of its own existence as a public; it is a collective independent of other forms of constituted power and is capable of speaking to existing forms of power through the production of actually existing alternatives.”

Now that Zotero has a stand-alone client, I will be learning how to use it. Zotero is an open source project funded by nonprofit organizations, and it provides an actually existing alternative to the corporate interests of Elsevier and other publishing companies. It has no interest in the intellectual property status of my journal articles and is built by people who actively want such an alternative. By using Zotero I can play a small but active role in establishing the kind of political reality I want to experience. I can play a bigger role by contributing to the Zotero project: writing code, editing or translating instructional material, or helping other users in the forums.

Robert K. Merton, one of the first sociologists to study the production of scientific knowledge, observed that science follows a set of norms. One of those norms is communalism: the free exchange and communal ownership of ideas. Without communalism, according to Merton, scientists could not build on each other’s work. Merton’s descriptions were admittedly idealistic (the scientists working on the atomic bomb weren’t openly sharing their progress), but he was not wrong. Science is a social enterprise. When our accounts of reality are owned by profit-seeking organizations, and those organizations control the very tools that help us exchange those accounts, we are in danger of losing something fundamental to the institution of science. Ideas should not end up behind prohibitively expensive paywalls, especially when so little of that money goes towards new scientific discovery. I will miss Mendeley’s automatic PDF filing, but perhaps I can work with the Zotero community to get that back into my life while also helping others.
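In the meantime, the automatic filing itself is the easy part to recreate. Here is a minimal sketch of that kind of rename-and-file step; the filenames, folder layout, and metadata dictionary below are hypothetical stand-ins for whatever a reference manager’s database would actually provide, not Mendeley’s or Zotero’s code.

```python
import shutil
from pathlib import Path

# Hypothetical metadata keyed by the messy filenames in my downloads folder;
# in practice this would come from a citation manager's database or an API.
metadata = {
    "1234567.pdf": {"author": "Kleinman", "year": 1998, "title": "Untangling Context"},
    "sdarticle(3).pdf": {"author": "Kelty", "year": 2008, "title": "Two Bits"},
}

def file_pdfs(inbox: Path, library: Path) -> None:
    """Rename each known PDF to 'Author - Year - Title.pdf' and move it
    into a per-author folder inside the library."""
    for pdf in inbox.glob("*.pdf"):
        info = metadata.get(pdf.name)
        if info is None:
            continue  # leave files we know nothing about alone
        dest_dir = library / info["author"]
        dest_dir.mkdir(parents=True, exist_ok=True)
        new_name = f"{info['author']} - {info['year']} - {info['title']}.pdf"
        shutil.move(str(pdf), str(dest_dir / new_name))

# Example (hypothetical paths):
# file_pdfs(Path("~/Downloads").expanduser(), Path("~/Papers").expanduser())
```

A small script like this is no substitute for a full reference manager, but it is exactly the kind of maintenance work a recursive public does for itself.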

David is trying to make #dropmendeley happen on Twitter. Help him, won’t you? @da_banks

Mentioned Titles:
Kelty, Christopher. Two Bits: The Cultural Significance of Free Software. Durham: Duke University Press, 2008. http://www.worldcat.org/title/two-bits-the-cultural-significance-of-free-software/oclc/183914703&referer=brief_results.
Kleinman, Daniel L. “Untangling Context: Understanding a University Laboratory in the Commercial World.” Science, Technology & Human Values 23, no. 3 (July 1998): 285–314. doi:10.1177/016224399802300302.

Please excuse the Atlantic Magazine-worthy counterintuitive article title, but it’s true. The Consumer Electronics Show, more commonly referred to as CES, is cheesy, expensive, and outdated. I used to really love CES coverage. It was a guilty pleasure of mine: an unrequited week of fetishistic gadget worship. I savored it all: the cringe-worthy pep of the keynote addresses, the garbled and blurry product videos taken by tech blog contributors, the over-hyped promises that never come true. But this year, after watching the entire 90-minute, Waiting for Godot-style keynote, I don’t see the point anymore. All the coolest stuff was made by indie developers, and they introduced their products months ago through awkward in-house YouTube videos. CES might be convenient for gigantic multinational corporations, but what’s in it for the Kickstarter-fueled entrepreneurs? Is a Las Vegas trade show the best medium for showing off your iPhone-controlled light bulb or e-ink wristwatch? Why does half of Maroon 5 need to half-heartedly churn out three songs at the end of an hour-long product description? The industry has matured, and CES is no longer sufficient.

Cyborgology has been in a confessionary mood as of late, so I’ll do the same: I watch every Apple keynote. When it was Steve, I marveled at his signature Reality Distortion Field. Small, incremental changes to software I would never use were absolutely riveting. (Cover Flow on my iPod Touch? Incredible!) The same signature polish and fascistic attention to detail that we love in their products was there on the stage. When Maroon 5 played an Apple event, they made sure to get the whole band, and Adam Levine did not get to talk. Even the worst moments in Apple keynote history were met with total confidence. Steve might say, “Well, it’s pretty awesome when it works,” and then move on. In contrast, almost everything demoed at the CES keynote worked perfectly, but the whole production had the finesse of a middle school dance.


CES is, in many respects, an exercise in imagining a world without Apple (or Amazon, or Facebook, or Google). Until this year, the keynote was always given by Microsoft’s CEO. (Now, Microsoft isn’t even exhibiting.) Bill Gates and, later, Steve Ballmer would give a sort of State of the Union address: Intel processors were still conforming to (and confirming) Moore’s Law, AMD was keeping prices below monopoly rates, and Windows XP was about to get a new service pack. The rest of the show was a demonstration of the vast ecosystem of peripherals and software that could play within the walled garden of Wintel. The walls saw their first major cracks in 2001 when the iPod was introduced: suddenly one of the most popular devices to be plugged into a computer was nowhere to be seen on the CES floor. (Apple stopped going to CES in 1992.) Then came the iPhone, the iPad, the rising adoption of Google products, and finally the thousands of products made by small design outfits that garnered far more attention than the latest additions to Windows 8.

Image from The Verge. Images Courtesy and Copyright the CEA. Reproduced for educational purposes only.

Microsoft used to be an obligatory point of passage, but now they have been reduced to another contender in highly competitive markets. They’re not has-beens, but in every major category (with the big, but debatable, exception of gaming) they are perceived as also-rans. Windows 8 is off to “a slow start,” no one willingly decides to “Bing” something, and the Zune is all but a distant, unpleasant memory. More important than the fall of Microsoft, however, is who rose to take their place. The keynote for the Consumer Electronics Show did not go to a consumer electronics company. The dubious honor went to Qualcomm’s CEO, Dr. Paul Jacobs. Qualcomm makes 3G mobile chipsets. You, as a consumer, don’t go out and buy a Qualcomm phone or a Qualcomm e-reader; companies choose to incorporate Qualcomm technology in their products. It’s as if the Detroit Auto Show were headlined by the company that makes your timing belt. What does it mean when CES is run by a chipset maker? Matt Buchanan from BuzzFeed goes so far as to say CES is dead because hardware is dead: “Hardware […] has become increasingly commoditized into blank vessels that do little more than hold Facebook and Twitter and the App Store and Android and iOS.” I don’t think hardware is dead, not by a long shot. If anything, the “Internet of Things” (Jacobs insisted on calling it the Internet of Everything) signals a renaissance of hardware. But hardware will look more like a notebook covered in stickers, or an Arduino, than a desktop computer. Qualcomm’s ascendancy to the keynote demonstrates a willingness (or concession) on the part of CES organizers to let big companies act as providers of infrastructure and core technologies, and to let smaller startups sell the final consumable product. If that’s the case, then CES is going to look very different in years to come.

The Verge’s Adrianne Jeffries observes that a friendly confederation is forming among the crowdfunded companies at CES. The “smartwatch” category is almost entirely filled with Kickstarter projects, and they all made efforts to meet each other. A meritocratic hierarchy is already forming among these entrepreneurs: newbies seek out experienced veterans to talk shop and get fundraising campaign tips. Scott Wilson, maker of the very successful LunaTik touchscreen watch kit, describes a steady stream of fellow Kickstarter-funded entrepreneurs who want nothing more than to swap campaign stories: “Overall, there is this overarching inspiration for doing your own thing, and that is partly driven by Kickstarter.” Jeffries also quotes Pebble CEO Eric Migicovsky as saying, “CES probably has the largest amount of Kickstarter backers in the world concentrated in one place.” Every conversation on the show floor seems to loop back around to the phrase, “we/they did a Kickstarter and raised X.”

Do Kickstarter firms gain anything from CES? Successful Kickstarter projects that also had a presence at CES are usually described in the media as “graduating” or “moving on” to CES, as if inclusion in the show automatically denotes arrival in “the big leagues.” It’s too early to draw any hard conclusions, but beyond networking I do not see how these companies benefit from flying themselves (and all of their equipment) to Las Vegas. Anecdotal evidence suggests that, if anything, big companies wish they could get in on Kickstarter. The buzz for your product starts months in advance, and if you reach your funding goal before CES, you have a pretty compelling business plan. Conferencing is probably better for getting your gadget onto brick-and-mortar store shelves, but that seems more like a service Kickstarter doesn’t offer yet than a strength of CES.

I want to return now, and conclude, with a few more words on the concept of a trade show. Trade shows are always equal parts pomp and propaganda, but the propaganda sucks and the pomp is expensive. No adult with even the smallest shimmering spark of creativity would latch on to terrible, contrived terms like “Mobile Generation” or “born mobile.” Nor are they going to take your stilted, vaudevillian, overwrought delivery as honest or even authoritative. What good does it do your company when your CEO stands up on a stage and talks like a 10-year-old kid with a new Pokémon card? (And this chip does this, and if you bundle it with this it can do all of these things, but not as well as this one, which is better.) Why should your otherwise proud employee be dressed up like a “birdketeer” in a fake kitchen? The social net is too personal, too intimate, to be captured by hokey business jingles. There’s too much of myself in it (sensitive, identifying information as well as emotionally evocative content) to sit through overt salesmanship. This isn’t toothpaste brand loyalty; this is my phone you’re talking about. Start taking this stuff seriously. If you insist on presenting your work like it’s a Slap Chop, go hire Ryan Seacrest. CES is always described as a fortune-telling device: to gaze upon the showroom floor is like looking at a Best Buy 10 years in the future. But CES has to be more than a big box store at the other end of a temporal rift. It needs to demonstrate what my life is like after I leave the store. What will my daily life be like? Perhaps it’s time for the CES organizers to consider putting together a World’s Fair?