I just left my department’s colloquium lecture series, where Dr. Virginia Eubanks from SUNY-Albany gave an excellent talk on the computer systems that administer and control (to varying degrees) earned benefits programs like Social Security, Medicaid, and Medicare. The talk was fascinating, and a question from Dr. Abby Kinchy during the Q&A stuck with me: How do we study different (and often long-outdated) versions of software? In particular, how do we chart the design of software that runs on huge closed networks owned by large companies or governments? Are these decisions lost to history, or are there methods for ferreting out Ross Perot’s old wares?
In open source projects and free software there are plenty of ways of charting version histories and forks. The code is out in the open and is typically posted to GitHub or a server running Subversion. There is a small but growing collection of ethnographies of open source and free software communities that outline the design decisions and politics of horizontally organized development projects. What is missing (and I don’t know how to go about filling this gap) is thick description of IBM enterprise suites and server-side management software.
Corporate software is a “hard case” because there are dozens of institutional, legal, and technological barriers to documenting software revisions. Hard cases are topics that typically “resist” ethnographic research, usually because they are so normal, objective, or elite that they are considered “beyond” or outside of cultural critique and ethnographic investigation. Other examples of hard cases include mathematics, corporate boardrooms, and the federal government. There is some great literature on all of these topics, but it is minuscule in comparison to the literature on “softer” subjects like kinship or social movements.
Another way to describe this research is “studying up”: that is, studying the elite and powerful. No one except the decision makers and their superiors has physical, virtual, and/or legal access to documents, code, or any other definitive text. Even if a researcher were to gain access to the documents, there is no guarantee that they would make sense to an outside observer. Proprietary systems are closely tied to the expertise that makes and administers them. In other words, you cannot understand the technical system without convincing someone who programmed the thing to talk to you. Access to these systems and people is made difficult by an array of barriers (guard gates and busy schedules, to name two common ones) that shield elites from unwanted attention. Studying up is extremely difficult but well worth the effort. As Laura Nader writes: “…our findings have often served to help manipulate rather than aid those we study.” She goes on to write: “We cannot, as responsible scientists, educate ‘managers’ without educating those ‘being managed.’” In order to bring about an egalitarian society, social scientists must help reveal the inner workings of elite institutions.
I wouldn’t go so far as to say these systems control everyone’s lives, but they do control some, and they influence many others’ day-to-day lives. This kind of research is important because knowing what affordances and priorities are built into these systems tells us a lot about our technologically augmented society: Who should control what? What should be more efficiently administered? What does efficiency look like? What should be delegated to software, and what is better accomplished by humans? Even if you’re not on welfare or food stamps, you’re still going to the DMV, using a customer loyalty card, walking through TSA checkpoints, and relying on the myriad networks run by insurance companies, credit agencies, and any employer with more than a few dozen employees.
What tools do social scientists need to study enterprise software? Dr. Ron Eglash, in that same colloquium Q&A, suggested that interviewing retired engineers was extremely useful in his research on the widely used master/slave engineering metaphor [PDF]. Indeed, countless journal articles could be written from the contents of dusty attics and office basements. That might be a start, but it doesn’t let us see current iterations of software, nor does it give us the kind of fine-grained detail that we get from free software projects.
Consider this a Call for Methods. What can we, as social scientists interested in science and technology, do to illuminate this dark and unknown world of corporate software? How do we get at the hard cases that control or influence everyday life? What would it take to get our hands on the design decisions and product development history for child protective services? How many other government systems use open source components, as the Veterans Administration’s VistA system does? Do widely used open source components provide a starting point for analysis, or are the more interesting cases the ones with no open source components at all? I can’t wait to hear what you all come up with.
5 Comments
Jordan Peacock — April 4, 2013
One major difficulty with proprietary software is that there is much less need for transparency; large open source projects have large meta-projects and documentation preserved for many of the major decision points along the way.
Large proprietary projects *may* have this, but often that information is itself split up into various incompatible silos or, if old enough, simply gathering dust on abandoned systems and backups. Within teams, a huge amount of the knowledge is tacit (metis, working knowledge). It’s when teams have to work across organizational boundaries, or have to report specific metrics, that you get any kind of visibility into the larger processes.
This was one reason for Amazon’s huge API push: every group had to have a published, functioning API, so that other groups didn’t have to acquaint themselves with any of the inner workings of other groups but could simply trust the published API. This allowed the code beneath any given API to morph significantly without destroying work built upon the API. Most organizations aren’t that advanced.
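The principle Peacock describes can be sketched in a few lines of Python. All the names here are hypothetical, invented for illustration; the point is only that consumers code against a published contract, so the implementation behind it can be rewritten without breaking them:

```python
# Sketch of the "published API" principle (hypothetical names):
# other teams depend only on the contract, never on the internals.

class OrderService:
    """The published contract that other teams program against."""
    def order_total(self, order_id: str) -> float:
        raise NotImplementedError

class LegacyOrderService(OrderService):
    """Original implementation: dollar amounts in a plain dict."""
    def __init__(self) -> None:
        self._orders = {"A-100": 19.99}
    def order_total(self, order_id: str) -> float:
        return self._orders[order_id]

class RewrittenOrderService(OrderService):
    """A later rewrite: stores integer cents internally.
    Different internals, same published contract."""
    def __init__(self) -> None:
        self._cents = {"A-100": 1999}
    def order_total(self, order_id: str) -> float:
        return self._cents[order_id] / 100

def report(service: OrderService, order_id: str) -> str:
    # A consuming team sees only the interface, so it works
    # unchanged against either implementation.
    return f"Order {order_id}: ${service.order_total(order_id):.2f}"

# Swapping the implementation does not change observable behavior:
assert report(LegacyOrderService(), "A-100") == report(RewrittenOrderService(), "A-100")
```

For a researcher, this is a double-edged sword: the published interface is the one durable, documented artifact such organizations produce, but it deliberately hides the internal churn that would be the most interesting thing to study.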
For actual examples to the contrary, I recommend The Daily WTF, or simply scanning questions on Stack Overflow.
Nathan Ensmenger — April 5, 2013
There is a small but growing literature on the history of software that might be useful to you. As you suggest, the usual problems that historians have in writing about corporations (which business historians and historians of technology have long had to grapple with) are compounded by the uniquely intangible nature of software. What do you need to have access to in order to "see" the software: the design specifications? the source code? a specific instantiation of the software in a particular machine architecture? the total socio-technical system in which that software realizes its ultimate function? These are difficult methodological challenges.
This is an emerging literature. Martin Campbell-Kelly’s *From Airline Reservations to Sonic the Hedgehog: A History of the Software Industry* would be a great place to start, as would my work on the history of computer programming (*The Computer Boys Take Over: Computers, Programmers, and the Politics of Technical Expertise*). I have published a number of shorter historiographical pieces on the history of software in the Annals of the History of Computing; “Software as History Embodied” might be useful to you. http://goo.gl/kUUvM The late Michael Mahoney also wrote extensively on this topic. http://www.princeton.edu/~hos/Mahoney/
There is also an upcoming conference at NYU called "Governing Algorithms: A conference on computation, automation, and control" that you might be interested in. Some of the top scholars in software history and software studies will be there. http://governingalgorithms.org