I just left my department’s colloquium lecture series, where Dr. Virginia Eubanks from SUNY-Albany gave an excellent talk on the computer systems that administer and control (to varying degrees) earned benefits programs like Social Security, Medicaid, and Medicare. The talk was fascinating, and a question from Dr. Abby Kinchy during the Q&A stuck with me: How do we study different (and often long-outdated) versions of software? In particular, how do we chart the design of software that runs on huge closed networks owned by large companies or governments? Are these decisions lost to history, or are there methods for ferreting out Ross Perot’s old wares?
In open source and free software projects there are plenty of ways to chart version histories and forks. The code is out in the open and is typically posted to GitHub or a server running Subversion. There is a small but growing collection of ethnographies of open source and free software communities that outlines the design decisions and politics of horizontally organized development projects. What is missing (and I don’t know how to go about filling this gap) are thick descriptions of IBM enterprise suites and server-side management software.
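To make the contrast concrete: in an open repository, the entire decision trail is queryable with ordinary tooling. Here is a minimal sketch of that kind of version-history mining, using a throwaway local repository as a stand-in for any public project (the file name, author, and commit messages are invented; a real study would clone a project of interest instead):

```shell
set -e
# Create a throwaway repository standing in for a public project.
dir=$(mktemp -d)
cd "$dir"
git init -q demo && cd demo
git config user.email "alice@example.com" && git config user.name "Alice"

# Two commits simulating a policy-relevant design change.
echo "v1" > policy.txt
git add policy.txt && git commit -qm "initial eligibility rules"
echo "v2" > policy.txt
git commit -qam "tighten income threshold"

# The kinds of questions an open history can answer:
git log --oneline -- policy.txt   # who changed this file, when, and why
git shortlog -sn HEAD             # contributions per author
git log --graph --oneline --all   # branch and fork structure
```

Closed, proprietary systems offer no equivalent of this audit trail to outside researchers, which is precisely what makes them a hard case.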
Corporate software is a “hard case” because there are dozens of institutional, legal, and technological barriers to documenting software revisions. Hard cases are topics that “resist” ethnographic research, typically because they are so normal, objective, or elite that they are considered “beyond” or outside of cultural critique or ethnographic investigation. Other examples of hard cases include mathematics, corporate boardrooms, and the federal government. There is some great literature on all of these topics, but it is minuscule in comparison to the literature on “softer” subjects like kinship or social movements.
Another way to describe this research is “studying up”: studying the elite and powerful. No one except the decision makers and their superiors has physical, virtual, or legal access to documents, code, or any other definitive text. Even if a researcher were to gain access to the documents, there is no guarantee that they would make sense to an outside observer. Proprietary systems are closely tied to the expertise that makes and administers them. In other words, you cannot understand the technical system without convincing someone who programmed the thing to talk to you. Access to these systems and people is made difficult by an array of barriers (guard gates and busy schedules, to name two common ones) that shield elites from unwanted attention. Studying up is extremely difficult but well worth the effort. As Laura Nader writes: “…our findings have often served to help manipulate rather than aid those we study.” She goes on: “We cannot, as responsible scientists, educate ‘managers’ without educating those ‘being managed.’” In order to bring about an egalitarian society, social scientists must help reveal the inner workings of elite institutions.
I wouldn’t go so far as to say these systems control everyone’s lives, but they do control some, and they influence many others’ day-to-day lives. This kind of research is important because knowing what affordances and priorities are built into these systems tells us a lot about our technologically augmented society: Who should control what? What should be administered more efficiently? What does efficiency look like? What should be delegated to software, and what is better accomplished by humans? Even if you’re not on welfare or food stamps, you’re still going to the DMV, using a customer loyalty card, walking through TSA checkpoints, and relying on the myriad networks run by insurance companies, credit agencies, and any employer with more than a few dozen employees.
What tools do social scientists need to study enterprise software? Dr. Ron Eglash, in that same colloquium Q&A, suggested that interviewing retired engineers was extremely useful in his research on the widely used master/slave engineering metaphor [PDF]. Indeed, countless journal articles could be written from the contents of dusty attics and office basements. That might be a start, but it doesn’t let us see current iterations of software, nor does it give us the kind of fine-grained detail that we get from free software projects.
Consider this a Call for Methods. What can we, as social scientists interested in science and technology, do to illuminate this dark and unknown world of corporate software? How do we get at the hard cases that control or influence everyday life? What would it take to get our hands on the design decisions and product development documents for child protective services? How many other government systems use open source components, as the Veterans Administration’s VistA system does? Do widely used open source components provide a starting point for analysis, or are the more interesting cases the ones with no open source components at all? I can’t wait to hear what you all come up with.