For those of us eagerly awaiting the Winter Olympics this February, we got an Olympics of a different kind to tide us over. Last weekend, the “Robot Olympics” took place at the Homestead-Miami Speedway in south Florida. Schaft Inc., a Google-owned Japanese company, took first place at The Games, officially called the DARPA Robotics Challenge (DRC). The events, and the criteria by which competitors are evaluated, nicely illustrate Ernst Schraube’s technology as materialized action approach, and present an opportunity to push the theory further.
Robots in the DRC complete a series of tasks relevant in disaster situations. They have to climb ladders, cross uneven surfaces, cut drywall, unwrap hoses, turn knobs, and drive cars through an obstacle course. Points are awarded based on how well the robots approximate human movement and dexterity, and deducted for human intervention. The top 8 teams move on to the DRC finals and receive a million dollars from DARPA for further development.
Schraube’s materialized action approach combines Actor Network Theory with Critical Psychology. From the latter, Schraube uses the idea of objectification which argues that technology is always imbued with human intention. From the former, he takes the idea that technologies always act back upon humans. In short, the materialized action approach says that technologies and humans have a mutually constitutive relationship, but this relationship is lopsided. Although both humans and technologies each act upon the other, humans take the primary position. Humans construct technologies in response to human problems. They build into these technologies cultural values and intentions. Technology is the material form of human action, but one without definitive consequences.
This kind of human primacy is clear in the rules and incentive structure of the DRC. These robots are built to engage in human tasks, judged on the closeness of their human approximation, and penalized when they fall short—requiring human intervention. Following the materialized action approach, these technologies are constructed to solve human problems, in human ways. But let’s push this a bit. Let’s ask which human problems these technologies solve, and in whose interests the technological solutions operate.
Human problems—both personal and collective—are infinite. Technological solutions are not. I argue that we should always ask: which problems get solved, who gets to decide, and who benefits? These questions are, of course, deeply interlinked.
Let’s take the case of the DRC. The human problems it addresses are those of disaster relief and/or military combat. This outcome is guided by DARPA—the Defense Advanced Research Projects Agency—as it provides financial support for developers. Who benefits? The U.S. military, and those willing to work for its betterment. Here is DARPA’s mission statement:
DARPA’s mission is to maintain the technological superiority of the U.S. military and prevent technological surprise from harming our national security by sponsoring revolutionary, high-payoff research bridging the gap between fundamental discoveries and their military use…
Expanding this example outward, it seems that the problems which get addressed, how this is done, and who benefits are largely an effect of who can pay the developmental tab. Technology costs, and those who can pay will always have a stronger voice in shaping which technologies develop, and how.
Each decision is always a decision not to do something else, just as each bit of money and energy towards one technological development is energy and money not funneled into something else. I am not arguing for or against the value of DARPA or the DRC robots. Rather, I suggest we take these questions of source and benefit as an organizing frame with which to understand technological developments, where they come from, and where they might lead.
Jenny is a weekly contributor for Cyborgology. Follow Jenny on Twitter @Jenny_L_Davis