Yager’s Spec Ops: The Line

Between the emotion
And the response – T.S. Eliot

Mistah Kurtz– he dead. – Joseph Conrad

A version of this essay was delivered at the military sociology miniconference at the annual meeting of the Eastern Sociological Society, 2011.

War is fundamentally a cultural phenomenon. It is profoundly entangled with shared meanings and understandings, stories both old and new, and the evolution of the same. These stories and meanings concern how war is defined, what it means to be at war, how enemies are to be identified and treated, how war itself is waged, and how one can know when war is finished – if it ever is. The shared meanings and narratives through which the culture of war is constructed are diverse: oral stories told and retold, myths and legends, historical accounts, and modern journalistic reports – and it’s important to note how the nature of that last category has changed as our understanding of what qualifies as “journalism” has changed.

Video games are worth considering in this context, not only because of their pervasiveness but because of their narrative power. They share much in common with film: interaction with them is mediated by a monitor, and they almost always feature a narrative of some kind that drives the action on the screen. However, video games are also different from other forms of media in that they are simulations – they go beyond audio-visual narrative and into at least an attempt to approximate a particular kind of experience. Further, unlike movies and TV, a feature of the experience they offer is active participation. This isn’t to say that movies and TV are passive; they’ve been too often dismissed as such, when viewing those forms of media in fact often involves complex patterns of interpretation and meaning-making. However, the difference is still worth some attention.

I want to argue that this difference has particular implications for how we as a largely civilian population understand war and reproduce the meanings we attach to it. Further, I think that how our games tell stories about war reveals some powerful things about how storytelling in war has changed over time – along with war itself – and how we can understand our own collective psychological reactions to those changes. Finally, I argue that our relationship to war in the context of games highlights some of the ways in which war is digitally augmented – not only on the battlefield and among military populations but here among civilians.

Given that we’re talking about games in the larger context of media and warfare, I’ll begin by outlining some of the ways in which media narratives – especially film – have historically contributed to the cultural construction of meanings of war.

As I said above, meaning-making and the construction of understandings of war are traditionally narrative in nature, or at least narratives of various kinds make up a great deal of what there is. Myths and legends are some of the oldest forms of this, and help to construct and maintain cultural ideas regarding what war is like, how it is to be fought and how a warrior should conduct themselves, and what is at stake in war (territory, property, beliefs, etc.). In addition to myths and legends – and sometimes inseparable from them – are historical accounts of war, which relate details about the wars that a nation or a people have been involved in over their history. Such accounts are therefore pivotal in a people’s understanding of how they arrived at their present condition, who their enemies were and are, who they themselves are in opposition to others, and what they hold valuable and worth fighting for – as well as what they might fight for in the future, and whom they might fight.

This is an important thing to make an additional note of: images and stories of war tell not only about the wars that have been fought, but about what wars might be fought in the future; they contain information regarding what is both possible and appropriate in terms of war-making. But I want to focus more narrowly on the recent past, so rather than the older forms of war narrative, I’ll focus on propaganda, journalism, and film/television.

Wartime propaganda reached a new level of pervasiveness and complexity in the twentieth century, due in large part to emerging media which provided new venues for its spread to the public. Posters were naturally widely used, but film provided the most powerful new medium for propagandists to work in, and movie theaters were increasingly sites for the proliferation of government-sponsored information regarding how wars were being fought, what they were being fought for, and the nature of the enemy. Some of the government sponsorship was direct; some less so – it’s important to note that at this point, the lines between news, entertainment, and overt propaganda were often indistinct at best. World War II was framed as a struggle of good against evil, with the Axis powers presented as fundamentally alien and Other in comparison to the virtuous Allies. These narratives were engaged in both constructing and reproducing an understanding of the war as a struggle against a barbaric enemy that could not be reasoned with and which bore no resemblance to the “good” side.

One example of this kind of meaning-making can be found in the Why We Fight film series, commissioned by the US Army shortly after the beginning of World War II and directed by the famed Frank Capra. These films, which were required viewing for American soldiers, presented the Axis as a vicious and barbarous marauding power, entirely bent on subjugating the world. Particularly important in the creation of the films were heavily cut and edited sections of captured Axis propaganda – Capra engaged in the kind of reframing via remixing that we see more commonly today in reference to a wide range of media and cultural sources.

It’s worth noting at this point that dehumanization of the enemy not only implies what is at stake but also suggests how the enemy is to be treated. An inhuman enemy that is fundamentally evil in the way that the propaganda on both sides depicted can only be eliminated. Killing is constructed as the only possible or reasonable action to take.

War film has a long history, especially in the United States, and different wars are dealt with differently in film, depending on both the war and the era in which the film is made. As the realities of how war is understood change, its depictions undergo a corresponding change in media intended for mass consumption. We can understand this as a response to cultural changes that precede the depictions—but the depictions also help to construct and reproduce the meanings emerging from the changes.

Many of the films depicting World War II were “romantic” in nature, featuring heroic sacrifice in which American determination and courage led to victory. The films both emphasize the conception of the Allied – and specifically American – forces as good people engaged in a righteous cause, and make powerful suggestions about the way in which war can be won. The emphasis on the sacrifice of the body and the meaning of injury is significant: death in war is not entirely a negative, for one can have confidence that the sacrifice is undertaken on behalf of ethical leaders and a good cause, and the injury and death of a nationalized body take on a justifying function within conflict.

During the Cold War era, we see everything change, especially in the period immediately following the Vietnam War. Many of these films break with the tradition of honorable and necessary sacrifice by presenting Chinese and American soldiers as pawns without agency, led by people who place no value on their suffering or sacrifice. Films that deal directly with the conflict in Vietnam follow a similar formula, presenting death in war as fundamentally devoid of ideological significance, and criticizing the leaders whose decisions put men in the position to die in battle. Sacrifice is even presented as possessing no deeper significance at all, and the soldiers in war as little more than animals being slaughtered in a conflict of which they have no real understanding. War films made during this period therefore present a trend characterized by deep ambivalence toward the meaning of war, toward how it is fought and against whom, and toward the trustworthiness of the political and military leaders of the nation.

We see this again more recently regarding both the first and the second Gulf Wars, with films depicting war as surreally pointless. However, some (like 2008’s The Hurt Locker) take a more analytical, documentarian bent. I think the latter especially is a significant development, and it can be explained at least in part by the increased prevalence of documentary journalistic accounts in the general public’s exposure to current and recent conflicts. But documentation doesn’t equal a lack of mediation; it’s a form of meaning-making in and of itself, and it makes certain kinds of interpretation possible while precluding others.

The first Gulf War occurred at the dawn of the era of satellite TV and 24-hour news networks. It was arguably the beginning of war-as-spectacle: packaged for mass consumption, more immediate and more real—and yet more removed and more surreal. Despite the amount of news coverage, the image of war with which the American people were presented was bizarrely constructed, with, as Elaine Scarry noted, a marked lack of injured and dead bodies in the discourse around the war. There was no need to present sacrifice as honorable or righteous, since there was no sacrifice. With no concrete depiction of enemy casualties, the enemy remained an undefined, nebulous idea. Jean Baudrillard famously claimed that as a war, it “did not take place” at all, that the media event that was packaged and sold to the American public was a simulation of war that was too bodiless and too asymmetrical to be called a war at all.

Most recently, war film is increasingly technology-focused – and increasingly uncertain and paranoid in its depictions of the experience of war. As in the first Gulf War, the enemy is not clearly defined but is instead heavily abstract, though represented in conflict with Othered individuals: Terror is the enemy, not any specific persons or group of people. Additionally, following the phenomenon of ambient documentation on both an individual and institutional level, the military is shown filming itself, from high-altitude surveillance to video taken by soldiers on the ground and photos captured on cell phones. There is an essential lack of any heroic narrative in most films about the second Gulf War, and though much of the film footage is ostensibly meant to be realistic, it in turn reflects an unreal reality – simulated in nature, atemporal, and presented in a confusing multiplicity of narrative forms.

Finally, it’s worth noting that with the proliferation of image-altering apps like Instagram, images of war can be used in an attempt to recapture a kind of authenticity that’s both comforting and simplifying (a return to the Manichean worldview of WWII) – that “faux-vintage” images of war are a reaction to war that’s becoming increasingly “unknowable” and removed from the perception of many, if not most.

War games themselves are extremely old, with one of the earliest known being chess. In his book Wargame Design, James Dunnigan notes that in terms of its practical design, chess bears a close resemblance to how wars were actually fought at the time: on flat terrain, in slow incremental movements, with lower-class and less powerful front-line soldiers defending a king who is less powerful militarily but immensely powerful as a figurehead and symbol.

Dunnigan goes on to explain that most wargames were designed and played by civilians with little military experience, but that by the 19th century, wargame play and design began to shift into the realm of the military itself. An important point to note here is that wargames were not only used by members of the military as a hobby and pastime, but in training and battle-planning. This meant that the games themselves, which had suffered in terms of realism from the limitations in knowledge on the part of their civilian designers, now put a premium on being as realistic a simulation of warfare as possible. They required that their designers and players have a detailed understanding of strategy and tactics, as well as military organization and maneuvers.

Use of wargames on the part of the military began in Prussia and then spread to other European states once it was proven to be an effective technique. Wargaming was also taken up in the United States, but its use there was generally confined to planning for specific battles. After the end of WWII and as the Cold War began in earnest, this changed, with military wargames taking on a wider scope in both time and space, and a greater consideration of political structure.

At this point it’s important to note the technological context of war itself, and how it was changing during the mid-20th century. It’s a well-known idea within military history and sociology-of-war circles that WWII introduced a technological component to the fighting of wars that had hitherto been minimal or absent: the idea that wars in general and killing in particular could be refined to a science, calculated and controlled, increasingly mechanized, with a significant degree of physical and emotional distance between killers and killed. Zygmunt Bauman famously tied the Nazis’ factory-style genocide to major elements of modernity. Joanna Bourke has identified the significance of technological discourse in the ease of killing and the reduction of death in war to numbers and statistics. Emotional pain and guilt on the part of soldiers engaged in bombing missions can literally be measured in terms of altitude from the target, and the aerial bombing of civilian targets became an acceptable method of warfare during both World Wars.

Probably the most important element in the changing landscape of warfare was, as Jeremy Antley points out in his comment on my last post, the existence of nuclear weapons. The line between combatant and noncombatant was already blurry; the spectre of nuclear warfare essentially erased it. Moreover, of all the techniques and weapons of war developed up to that point, nuclear war was the most explicitly scientific in nature, the province of physicists as much as generals.

All of this is to say that as the Cold War kicked into high gear, war and technology – and particularly death and technology – were arguably more inextricably enmeshed than they had ever been before. War and technology have always had a close relationship, and the development of new weapons and fighting techniques has always been a primary concern, but now killing was made explicitly calculable – and, by extension, controllable. War was something that could be planned for and explored through gaming and simulation – a set of variables that could be altered to construct a vast range of different scenarios. As wargames shifted from the tabletop to the computer, the variables and scenarios increased in both number and complexity, and simulations could be run at high speeds. The Cold War itself was primarily about strategy, both in the short and long term, about anticipating the movements of the other player in the game.
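To make that concrete, here is a minimal sketch – my own toy illustration in Python, not any actual military system – of what it means to treat war as a set of alterable variables: a simple Lanchester-style attrition model (a classic operations-research formula) swept across a grid of scenarios, the kind of exploration that takes hours on a tabletop and milliseconds on a computer.

```python
# A toy scenario sweep, not a real military simulation: a Lanchester
# square-law attrition model run across a grid of force sizes and
# effectiveness rates. Each cell of the grid is one "what if" that a
# tabletop wargame would need a full session to play out.

def simulate(blue, red, blue_rate, red_rate, dt=0.1):
    """Attrit both forces until one is eliminated; return the winner."""
    while blue > 0 and red > 0:
        blue, red = blue - red_rate * red * dt, red - blue_rate * blue * dt
    return "blue" if blue > 0 else "red"

# Sweep the variables: blue's force size against red's per-soldier effectiveness.
for blue_force in range(500, 1501, 250):
    for red_rate in (0.01, 0.02, 0.04):
        winner = simulate(blue_force, red=1000, blue_rate=0.02, red_rate=red_rate)
        print(f"blue={blue_force:5d}  red_rate={red_rate:.2f}  ->  {winner}")
```

The numbers here are arbitrary; the point is the form of the exercise – killing rendered as a calculable, repeatable experiment.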

This reached a significant apex during the first Gulf War, which was extensively planned beforehand with the use of wargaming, specifically a manual game called “Gulf Strike”. The first Gulf War was singular in the history of American warfare, not only in its prominent use of wargames, but in its reliance on and use of digital technology. As a military operation, it was designed to showcase the United States as possessing the technologically dominant military of the future, precise and calculated and efficient, with high results and low casualties. As I mentioned in my previous post, it was this new form of warfare – referred to by the military itself as “full spectrum dominance” – that led Jean Baudrillard to claim that what was presented to the American public was not a war at all but a simulation of the same, bloodless and clean, and also entirely asymmetrical. It was iconic war-as-game, and many of the soldiers who were part of the operation remarked on it as such, saying that it felt more like a video game than what they had been taught to think of as war.

Most recently this trend has arguably continued with the increasing prevalence of drone “warfare” and unmanned aerial vehicles, where warriors are no longer even physically present on the battlefields where they “fight”. It could be argued that in many ways this kind of technological war actually brings the experience of fighting closer to the soldiers controlling the drones, but the point again is the degree to which the fighting of war is now augmented – war by physical and digital means now inseparable.

Computer simulation and wargaming continue to perform a major function in the US military’s preparation for war. DARPA developed a vehicle simulator called SIMNET in the early 1980s, and in the 1990s SIMNET was integrated into STOW, the armed forces’ “Synthetic Theater of War”, which provides a significant digital component for military exercises.

Wargames and simulations have not only been used by the military in training and strategic planning, but more recently in recruiting. “America’s Army”, a first-person shooter released in 2002, was funded and released by the Army itself, and was explicitly designed to present the modern US Army to civilian youth. The game was created for the specific purpose of depicting an officially sanctioned image of the Army to the public – an exercise in meaning-making in terms of the public’s understanding of the experience of being a member of the armed forces. The fact that the Army chose a video game as a medium is also important; essentially, “America’s Army” functions as a meaning-making training simulator, not only conveying a particular understanding of the armed forces, but preparing potential recruits for the real-life training they might soon experience.

Above, I noted the power of stories in meaning-making and the production and reproduction of cultural understandings. I also noted that games are usually driven, implicitly or explicitly, by narratives. At this point I want to go a step further and argue that games play a role not only in learned meanings but in learned behaviors, and in how we contextualize those behaviors. The fact that simulation is used as an integral part of military training (and psychotherapy) is a strong indicator that it can be an effective tool in terms of shaping behavior and understandings of behavior, as well as the context in which that behavior occurs.

In First Person: New Media as Story, Performance, and Game, Simon Penny explains (parentheses mine):

When soldiers shoot at targets shaped like people, this trains them to shoot real people. When pilots work in flight simulators, the skills they develop transfer to the real world. When children play “first-person shooters”, they develop skills of marksmanship. So we must accept that there is something that qualitatively separates a work like the one discussed above (of an art installation where the viewer can physically abuse the projected image of a woman) from a static image of a misogynistic beating, or even a movie of the same subject. That something is the potential to build behaviors that can exist without or separate from, and possibly contrary to, rational argument or ideology.

This is not by any means to argue that playing wargames will always make someone want to fight wars, any more than it is to argue that playing a FPS will in and of itself make a teenager want to take a gun to school (I think we can all agree that’s a fairly tired argument at this point). It’s merely to point out that simulations have power – power to shape meaning, our perceptions of ourselves and others, and our understandings of our own behaviors, as well as what behaviors are appropriate and reasonable in specific contexts.

So, to make a long post short: there is something particular going on with regard to simulations and wargames in the context of technological warfare in the 20th and early 21st centuries, and especially wargames and simulations that are digital in nature. This something has a tremendous amount to do with the construction of meaning, with behavior, and specifically with our understanding of what wars mean and what it means to fight them.

I want to also note at this point that I’m not out to make any kind of conclusively causal argument. I’m not interested in claiming that video games definitely change how we think of war, or that how we think of war definitely shapes what kinds of games we play, or that either have anything definite to do with how we actually fight the wars we choose to fight. I think any or all of those are interesting arguments, but I also don’t think I have the data to support strong statements about them either way. Instead, what I want to suggest – and what I’ll be employing specific examples in support of – is simply that there’s Something Going On at the intersection of simulation/technology, culture, storytelling, and war, and that this Something is probably worth further thought and investigation.

A primary indicator of a story’s power to shape meaning is its prevalence – how many people are telling it how many times and in what context. It makes sense, therefore, that if we’re going to look closely at what might be happening at the point of intersection I refer to above, we should focus at least in part on war-themed games that have enjoyed considerable popularity.

(I should offer another caveat here: this is by no means and in no way a representative sample. If you wanted to write a truly thorough examination of war-themed video games and American culture, it would take the space of a book to do so. But I think these are revealing cases, albeit a small n.)

Within the genre of war-themed games, very few titles in the last decade have enjoyed the kind of success that the Call of Duty games can claim. The initial entries into the CoD series were set in WWII – as have been many other games – and to refer back to my first post in this series, I think a lot of that can be attributed to how we think of that war. Comparisons can be drawn between WWII games and films, with the themes of noble sacrifice and heroism, and a Manichean standoff between good and evil discussed above. World War II has been described as “the last good war”; this makes it a culturally comfortable reference point for imagining what armed conflict is like.

But CoD‘s most popular titles have been more contemporary in terms of setting – CoD 4: Modern Warfare and its two sequels have been immensely popular, with the first MW selling in excess of 13 million units worldwide. For the purposes of this piece, I’ll be focusing on the first two installments in the series.

Both MW and MW2 put the player in the point of view of different soldiers at different points in the story, both US Special Forces and British SAS. Both games also emphasize concepts that are vaguely consistent with what has been called the “New War” theoretical approach – the idea that in the wars of the latter half of the 20th century and the early 21st, states are less important, and that the borders between state and state, between soldier and civilian, and between state actor and non-state actor are increasingly porous. In both games, the player’s enemy ranges from an insane Middle Eastern warlord who has masterminded a coup, to the leader of a Russian ultranationalist paramilitary group; in each case it’s understood that the enemy is not a state as such. Indeed, the movements of states often feel abstract and distant from the action in which the player finds themselves involved, however important they might be to the overall plot (at one point in MW2 the player takes part in a battle between the US and an invading Russian force; however, Russia is invading in retaliation for a terrorist attack perpetrated by Russian ultranationalists and blamed on the United States). The player engages in small military operations, fighting against other small groups of soldiers, often to retrieve vital pieces of intelligence or personnel.

Despite the blurriness of combat identities and the de-emphasizing of state-level actors, the actions of the various soldiers that the player personifies are framed by other characters – such as commanding officers – as important, even crucial to the stability of the world. Some of this can be explained through simple narrative convention: in order to engage the player in the story, the player needs to feel that the action in which they are taking part is supremely significant in some way. But some of it also resonates with contemporary understandings of war, with decisive action taking place on a micro rather than a macro level, in small-scale conflicts rather than on massive battlefields between entire armies.

Another thing that makes MW and MW2 so noteworthy is what they suggest about what is permissible in war. MW2 in particular takes a brutally casual approach to the torture of detainees in one scene, where, after you have captured one of the associates of the game’s main villain, you leave him with your fellow team members. As the camera pans away, you see your captive tied to a chair with a soldier looming over him, holding clamps that spark with electricity. While your character does not carry out the torture, the torture is implied to have taken place, and it is implicitly presented as both necessary and reasonable. The game narrative encourages acceptance of this rather than questioning of it, partly because the action proceeds so rapidly from that point that meditation on what has occurred is not really possible. It’s worth noting at this point that game narrative, action, pacing, and design are often difficult to distinguish without doing violence to their meaning; in this case they’re one and the same. A particular narrative interpretation of a particular event is emphasized at least in part because of the mechanics of cutscenes and gameplay.

To turn back to the issue of nobility and sacrifice in war films, it is interesting to compare some of what I discussed in part I of this series with the first two installments of Modern Warfare. Sacrifice in older American war films – particularly death in battle, particularly for the sake of one’s comrades – is regarded as noble, sacred, and not to be questioned. It’s regarded as worthy in itself, and a worthy act by worthy soldiers fighting for a worthy cause at the command of worthy generals and political authorities.

MW and MW2 fall somewhere between war as depicted in heroic war films and in tragic war films. Soldiers, especially the soldiers whose roles the player takes on, are presented as competent, well-trained, courageous, and effective, as well as possessing at least a surface-level knowledge of the significance their actions have. When SAS Sergeant John “Soap” MacTavish attacks Russian ultranationalists, he knows that it is because they are seeking to acquire a nuclear missile with which to threaten the US – he further understands that the situation is exacerbated by post-Cold War instability as well as a military coup in an unnamed Middle Eastern nation. When, in the same game (MW), USMC Sergeant Paul Jackson is sent to the Middle East to unseat the author of the coup, he understands that the same nuclear threat is at stake, and that he and his comrades are literally defending the safety of the world. When, in MW2, you take on the role of Sergeant Gary “Roach” Sanderson and seek to capture a Russian criminal by the name of Vladimir Makarov, you understand that he has instigated an invasion of the continental US by Russia – while, back in Washington DC, Private James Ramirez literally defends the capital from foreign troops. None of this is in question, and the characters are both informed of their mission’s importance and committed to seeing it carried out. Suffering and sacrifice for the sake of that mission are depicted as noble (though of course, if the player character is killed – with several exceptions that I will address – the game is over). Traditional patriotic values are not questioned in either game.

In MW, when Paul Jackson and his comrades attempt to push through an advancing wave of enemy soldiers to rescue a downed Apache pilot, the player is clearly meant to be moved by their courage and their loyalty to each other. In MW2 the scenes of an invaded Washington DC – complete with a burning Capitol and White House – are clearly meant to be horrific by virtue of showing national symbols injured and destroyed by an enemy force, while the soldiers fighting to push back the invaders are brave defenders of the homeland.

However, where the games both approach tragedy is in their depiction of the essential senselessness in war, and of the betrayal of soldiers by their commanders. In Modern Warfare, Sergeant Paul Jackson succeeds in rescuing the downed pilot, only to have his own helicopter brought down by the shockwave and ensuing firestorm when the warlord in charge of the country detonates a nuclear bomb. In one of the most haunting – and unusual – sequences of the game, the player crawls from the wreckage of the helicopter, struggles to his feet, and looks around at a burning nuclear wasteland before dying of his injuries. There is no way to survive the level, no weapons to wield and no one to kill, and not even really anything to do but to exist in that moment and bear witness to horror before expiring. The way the narrative frames the sequence is fairly clear: Paul Jackson is a heroic man engaged in a worthy fight, and yet the tragedy of his death is that it’s fundamentally senseless.

Likewise, at the end of his last successful mission to one of Makarov’s safehouses, Sergeant Gary “Roach” Sanderson is brutally murdered by his commander, Lieutenant General Shepherd, who is revealed to have been using Roach and his comrades for his own ends. Again, there is no way to survive the episode; the betrayal of a soldier by his commander is part of the story and cannot be avoided. The plot element differs from much of what one finds in tragic war films in that both the soldiers on the ground and the ideals for which they fight are depicted as essentially honorable; it’s the men in power who should be questioned and held up for suspicion.

This last is especially telling, given a number of the dominant narratives that have emerged out of the Iraq War – that of American troops acting in good faith but betrayed by negligent and/or greedy politicians and commanders. In that sense, Modern Warfare is folding one narrative into another in a kind of shorthand; by now, this is a story with which we’re all familiar on some level, which makes it available for the game’s writers to use in a play for the player’s emotions. Deeper characterization isn’t necessary; you never really get a clear sense of who any of these people are. Deeper understanding of the geopolitics behind what’s happening is likewise de-emphasized; though the actions of the player’s character might be of immense importance and the character himself seems to understand why he’s been given the orders he has, all the player really needs to know is that they’re important, not why they’re important. The cultural shorthand of “betrayed soldier” is all that really matters, and once it’s been employed, the game can and does move on.

It’s important at this point to examine the question of choice, and how choice is understood to function within video games, particularly games about war. Choice in gaming has become something of a selling point; witness the proliferation of sandbox games, as well as games that at least attempt to present the player with some kind of narratively meaningful moral decision-making (see Bioshock and Bioshock 2). But what’s interesting about games like Modern Warfare is what they suggest about the choices that a player doesn’t have.

Someone who plays a video game is interacting with a simulated world, and the rules of that world dictate what forms of interaction are possible with which aspects of the game world. Game code determines how a player moves, what they can touch and pick up, what they can eat or use as tools—and what they can injure or kill or destroy. This is additionally significant because when a certain action is permitted on a certain object, code—the rules of the game—often dictates that it is the only action that can meaningfully be performed on that object. An object in the game world that can be consumed can frequently only be consumed; there is no other possible use for it. Likewise, a character in the game world that can be killed can often only be killed; with exceptions, they cannot be talked to, reasoned with, or negotiated with, and inaction on the part of the player usually leads to the player’s demise and the end of the game. Especially in combat-themed first person shooters, the rules are often quite literally as simple as “kill or be killed”, and regardless of danger to one’s character, destruction of some kind is commonly necessary to advance the game action. War as depicted in video games is therefore war without real agency: fighting and killing an opponent is the only rational or reasonable course of action, if not the only one even possible. Game code is not neutral; it tells a story, it sets constraints on how that story can be interpreted, and it determines what forms of action are appropriate or intelligible.
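To put the point in miniature: here is a sketch – my own illustration in Python, not code from any actual game engine – of how a game’s rules can bind each object in the world to a single meaningful verb.

```python
# An illustrative sketch, not any actual engine's code: each entity carries
# the only verbs the world permits on it, and anything else the player
# attempts simply fails to register as an action at all.

class Entity:
    def __init__(self, name, allowed_verbs):
        self.name = name
        self.allowed_verbs = allowed_verbs  # the entire space of possible action

    def interact(self, verb):
        if verb not in self.allowed_verbs:
            return "Nothing happens."  # the code silently forecloses the attempt
        return f"You {verb} the {self.name}."

ration = Entity("ration pack", {"consume"})
enemy = Entity("enemy soldier", {"shoot"})

print(enemy.interact("shoot"))      # You shoot the enemy soldier.
print(enemy.interact("negotiate"))  # Nothing happens.
print(ration.interact("consume"))   # You consume the ration pack.
```

The sketch’s point is that “negotiate” fails not because it is difficult or dangerous but because it was never written into the world in the first place.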

The thing for which Modern Warfare 2 is probably best known is the infamous “No Russian” level. In this level, the player has infiltrated a group of Russian nationalist terrorists who plan to open fire on civilians in an airport. They then do just that – and the player cannot stop it. They can take part in the slaughter or they can stand by and do nothing, but they can’t save anyone, and they can’t fire on the terrorists. The player is therefore being explicitly put in a position where agency is poignantly lacking (recall the nuclear wasteland level in MW), where civilians scream, beg for mercy, and attempt to crawl away, all while the player can do nothing narratively meaningful.

But the player can still shoot. The fact that they’re holding a weapon and can make use of it is significant in itself – there is agency, just agency of a particularly horrible kind. Mohammad Alavi, the game designer who worked on No Russian, explains it this way:

I’ve read a few reviews that said we should have just shown the massacre in a movie or cast you in the role of a civilian running for his life. Although I completely respect anyone’s opinion that it didn’t sit well with them, I think either one of those other options would have been a cop out… [W]atching the airport massacre wouldn’t have had the same impact as participating (or not participating) in it. Being a civilian doesn’t offer you a choice or make you feel anything other than the fear of dying in a video game, which is so normal it’s not even a feeling gamers feel anymore…In the sea of endless bullets you fire off at countless enemies without a moment’s hesitation or afterthought, the fact that I got the player to hesitate even for a split second and actually consider his actions before he pulled that trigger– that makes me feel very accomplished.

Essentially, the player can’t prevent horrible things from occurring. The most they can reasonably expect is to choose to participate in those things – or not. But when player agency is whittled down to that level, player responsibility also erodes; why should there be any sense of responsibility, given that one is playing in a world where the parameters have been narrowly set by someone else?
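Those parameters can be stated almost literally. Here is a hedged reconstruction – my own sketch in Python of the rule structure the level implies, not Infinity Ward’s actual code – of just how narrow the option space in “No Russian” is.

```python
# A hedged reconstruction of the rules "No Russian" implies, not the game's
# actual code: shooting civilians or doing nothing both let the level proceed,
# while firing on the perpetrators is the single act that ends the mission.

def handle_player_shot(target):
    if target == "terrorist":
        return "MISSION FAILED"    # the one forbidden act ends the level
    if target == "civilian":
        return "civilian killed"   # participation is permitted, even scripted
    return "no effect"             # abstention is the only other option

for target in ("civilian", "terrorist", "air"):
    print(f"shoot {target}: {handle_player_shot(target)}")
```

Everything morally significant about the scene is decided in those three branches, and none of them was authored by the player.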

A more recent game that has some serious points to make regarding player agency is Spec Ops: The Line, a contemporary rehashing of Apocalypse Now (which is itself, of course, a retelling of Joseph Conrad’s Heart of Darkness). Gaming writers have noted that it’s a game that seems to have active contempt for its players, drawing one in to thinking it’s a fast, flashy, Modern Warfare-style shooter before pulling the rug out and revealing to the player that every action they’ve taken since the game’s beginning was in fact utterly reprehensible. It’s a game that purports to be a giant comment on its own genre, and its success in doing so has apparently been mixed.

But one primary thing on which Spec Ops seems to comment is the question of what choices a player actually has in a war game – and, by extension, what choices a person has in a scenario like the one the game depicts. The player, the game seems to be saying at multiple points, has no choice; by choosing to play the game at all – by choosing to enter the scenario – they’ve locked themselves into a situation where not only is there no real win-state, but there is no inherent significance to any of their actions. The game – and the world – is not working with you but against you. Walt Williams, Spec Ops‘s lead writer, puts it this way:

There’s a certain aspect to player agency that I don’t really agree with, which is the player should be able to do whatever the player wants and the world should adapt itself to the player’s desire. That’s not the way that the world works, and with Spec Ops, since we were attempting to do something that was a bit more emotionally real for the player…That’s what we were looking to do, particularly in the white phosphorous scene [where a group of civilians is mistakenly killed], is give direct proof that this is not a world that you are in control of, this world is directly in opposition to you as a game and a gamer.

Video game blogger and developer Matthew Burns takes issue with this, pointing out that the idea of the removal of choice leading to a massacre of civilians has intensely troubling implications, not only for games themselves but for how we as a society understand the question of choice in extremis:

I present a counter-argument: in the real world, there is always a choice. The claim that a massacre of human beings is the result of anyone– a player character in a video game or a real person– because “they had no choice” is the ultimate abdication of responsibility (and, if you believe certain philosophers, a repudiation of the very basis for a moral society). It is unclear to me how actually being presented with no choice is more “emotionally real,” because while it guarantees the player can only make the singular choice, it is also more manipulative.

This, then, is what I think is the central question around video games, simulation, storytelling, and war: How do we understand the very meaning of action? How true is it that we always have a choice, and if we do – or don’t – what does that mean for responsibility, on the level of both individual and society? I argue that when we play games, even if we’re not paying very close attention to the story – even if there isn’t really very much of a story to speak of anyway – we still internalize that story on some level. We interpret its meanings and its logic; we have to, in order to move within its world. We are participants on some level, even if the extent of our participation is the experience of the gameworld. And when we’re participants in a story, that lends the story greater weight for us than if we’re merely passive observers – to the extent that one can ever be passive within the space of a story.

Like other forms of media, war-themed video games are arenas for reproductions of certain kinds of meanings and narratives about our culture and our wars. But more than that – and perhaps more than other forms of media – they are spaces for conversation and debate over how we process those meanings, and what it truly means to participate in something. In these games we talk about patriotism, honor, and sacrifice. But we also question ethics, choice, and the significance of death – and we as participants are brought uncomfortably close to those questions, not only in terms of the questions themselves, but in terms of what it means to ask them at all, and in the ways that we do. And as Spec Ops and No Russian reveal, many of the answers we find are intensely troubling. As Matthew Burns writes:

I played through No Russian multiple times because I wanted direct knowledge of the consequences of my choices. The first time through I had done what came to me naturally, which was to try to stop the event, but firing on the perpetrators ends the mission immediately. The next time I stood by and watched. It is not an easy scene to stomach, and I tried to distance myself emotionally from what was going on.
The third time, I decided that I would participate. I could have chosen not to; I could have simply moved on then, or even shut off the system and never played again. But a certain curiosity won out– that kind of cold-blooded curiosity that craves the new and the forbidden. I pulled the trigger and fired.


Also highly recommended:

Yager’s Spec Ops: The Line

Between the emotion     
And the response  – T.S. Eliot

Mistah Kurtz– he dead.  – Joseph Conrad

I’ve spent the last two posts in this series building up a background set of claims regarding a) how the stories we tell about war have changed over time, and b) how the relationship between technology and war has changed in the last century, particularly as regards different forms of simulation. These are important points to make, but they’ve also been leading up to what I want to talk about this week: specific examples of war-themed video games and the stories they’re telling, and what difference it all makes.

I want to also note at this point that I’m not out to make any kind of conclusively causal argument. I’m not interested in claiming that video games definitely change how we think of war, or that how we think of war definitely shapes what kinds of games we play, or that either have anything definite to do with how we actually fight the wars we choose to fight. I think any or all of those are interesting arguments, but I also don’t think I have the data to support strong statements about them either way. Instead, what I want to suggest – and what I’ll be employing specific examples in support of – is simply that there’s Something Going On at the intersection of simulation/technology, culture, storytelling, and war, and that this Something is probably worth further thought and investigation.

A primary indicator of a story’s power to shape meaning is its prevalence – how many people are telling it how many times and in what context. It makes sense, therefore, that if we’re going to look closely at what might be happening at the point of intersection I refer to above, we should focus at least in part on war-themed games that have enjoyed considerable popularity.

(I should offer another caveat here: this is by no means and in no way a representative sample. If you wanted to write a truly thorough examination of war-themed video games and American culture, it would take the space of a book to do so. But I think these are revealing cases, albeit a small n.)

Within the genre of war-themed games, very few titles in the last decade have enjoyed the kind of success that the Call of Duty games can claim. The initial entries into the CoD series were set in WWII – as have been many other games – and to refer back to my first post in this series, I think a lot of that can be attributed to how we think of that war. Comparisons can be drawn between WWII games and films, with the themes of noble sacrifice and heroism, and a Manichean standoff between good and evil discussed above. World War II has been described as “the last good war”; this makes it a culturally comfortable reference point for imagining what armed conflict is like.

But CoD‘s most popular titles have been more contemporary in terms of setting – CoD 4: Modern Warfare and its two sequels have been immensely popular, with the first MW selling in excess of 13 million units worldwide. For the purposes of this piece, I’ll be focusing on the first two installments in the series.

Both MW and MW2 put the player in the point-of-view of different soldiers at different points in the story, both US Special Forces and British SAS. Both games also emphasize concepts that are vaguely consistent with what has been calledthe “New War” theoretical approach – the idea that regarding war in the latter half of the 20th and the early 21st centuries, states are less important, and that the borders between state and state, between soldier and civilian, and between state actor and non-state actor are increasingly porous. In both games, the player’s enemy ranges from an insane Middle-Eastern warlord who has masterminded a coup, to the leader of a Russian ultranationalist paramilitary group; in each case it’s understood that the enemy is not a state as such. Indeed, the movements of states often feel abstract and distant from the action in which the player finds themselves involved, however important they might be to the overall plot (indeed, at one point in MW2 the player takes part in a battle between the US and an invading Russian force; however, Russia is invading in retaliation for a terrorist attack perpetrated by Russian ultranationalists and framed on the United States). The player engages in small military operations, fighting against other small groups of soldiers, often to retrieve vital pieces of intelligence or personnel.

Despite the blurriness of combat identities and the de-emphasizing of state-level actors, the actions of the various soldiers that the player personifies are framed by other characters – such as commanding officers – as important, even crucial to the stability of the world. Some of this can be explained through simple narrative convention: in order to engage the player in the story, the player needs to feel that the action in which they are taking part is supremely significant in some way. But some of it also resonates with contemporary understandings of war, with decisive action taking place on a micro rather than a macro level, in small-scale conflicts rather than on massive battlefields between entire armies.

Another thing that makes MW and MW2 so noteworthy is what they suggest about what is permissible in war. MW2 in particular takes a brutally casual approach to the torture of detainees in one scene, where, after you have captured one of the associates of the game’s main villain, you leave him with your fellow team members. As the camera pans away, you see your captive tied to a chair with a soldier looming over him, holding clamps that spark with electricity. While your character does not carry out the torture, the torture is implied to have both taken place, and is implicitly presented as both necessary and reasonable. The game narrative encourages acceptance of this rather than questioning of it, partly because the action proceeds so rapidly from that point that meditation on what has occurred is not really possible.  It’s worth noting at this point that game narrative, action, pacing, and design are often difficult to distinguish without doing violence to their meaning; in this case they’re one and the same. A particular narrative interpretation of a particular event is emphasized at least in part because of the mechanics of cutscenes and gameplay.

To turn back to the issue of nobility and sacrifice in war films, it is interesting to compare some of what I discussed in part I of this series with the first two installments of Modern Warfare. Sacrifice in older American war films – particularly death in battle, particularly for the sake of one’s comrades – is regarded as noble, sacred, and not to be questioned. It’s regarded as worthy in itself, and a worthy act by worthy soldiers fighting for a worthy cause at the command of worthy generals and political authorities. 

MW and MW2 fall somewhere between war as depicted in heroic war films and in tragic war films. Soldiers, especially the soldiers the player takes on the role of, are presented as competent, well-trained, courageous, and effective, as well as possessing at least a surface-level knowledge of the significance their actions have. When SAS Sergeant John “Soap” MacTavish attacks Russian ultranationalists, he knows that it is because they are seeking to acquire a nuclear missile with which to threaten the US – he further understands that the situation is exacerbated by post-Cold War instability as well as a military coup in an unnamed Middle Eastern nation. When, in the same game (MW), USMC Sergeant Paul Jackson is sent to the Middle East to unseat the author of the coup, he understands that nuclear proliferation is the same threat, and that he and his comrades are literally defending the safety of the world. When, in MW2, you take on the role of Sergeant Gary “Roach” Sanderson and seek to capture a Russian criminal by the name of Vladimir Makarov, you understand that he has authored the instigation of an invasion of the continental US by Russia—while, back in Washington DC, Private James Ramirez literally defends the capital from foreign troops. None of this is in question, and the characters are both informed of their mission’s importance and committed to seeing it carried out. Suffering and sacrifice for the sake of that mission are depicted as noble (though of course, if the player character is killed – with several exceptions that I will address – the game is over). Traditional patriotic values are not questioned in either game.

In MW,when Paul Jackson and his comrades attempt to push through an advancing wave of enemy soldiers to rescue a downed Apache pilot, the player is clearly meant to be moved by their courage and their loyalty to each other. In MW2 the scenes of an invaded Washington DC – complete with a burning Capitol and White House – are clearly meant to be horrific by virtue of being nationalistic symbols that have been injured and destroyed by an enemy force, while the soldiers fighting to push back the invaders are brave defenders of the homeland.

However, where the games both approach tragedy is in their depiction of the essential senselessness in war, and of the betrayal of soldiers by their commanders. In Modern Warfare, Sergeant Paul Jackson succeeds in rescuing the downed pilot, only to have his own helicopter brought down by the shockwave and ensuing firestorm when the warlord in charge of the country detonates a nuclear bomb. In one of the most haunting – and unusual – sequences of the game, the player crawls from the wreckage of the helicopter, struggles to his feet, and looks around at a burning nuclear wasteland before dying of his injuries. There is no way to survive the level, no weapons to wield and no one to kill, and not even really anything to do but to exist in that moment and bear witness to horror before expiring. The way the narrative frames the sequence is fairly clear: Paul Jackson is a heroic man engaged in a worthy fight, and yet the tragedy of his death is that it’s fundamentally senseless.

Likewise, at the end of his last successful mission to one of Makarov’s safehouses, Sergeant Gary “Roach” Sanderson is brutally murdered by his commander, Lieutenant General Shepherd, who is revealed to have been using Roach and his comrades for his own ends. Again, there is no way to survive the episode; the betrayal of a soldier by his commander is part of the story and cannot be avoided. The plot element differs from much of what one finds in tragic war films in that both the soldiers on the ground and the ideals for which they fight are depicted as essentially honorable; it’s the men in power who should be questioned and held up for suspicion.

This last is especially telling, given a number of the dominant narratives that have emerged out of the Iraq War – that of American troops acting in good faith but betrayed by negligent and/or greedy politicians and commanders. In that sense, Modern Warfare is folding one narrative into another in a kind of shorthand; by now, this is a story with which we’re all familiar on some level, which makes it available for the game’s writers to use in a play for the player’s emotions. Deeper characterization isn’t necessary; you never really get a clear sense of who any of these people are. Deeper understanding of the geopolitics behind what’s happening is likewise de-emphasized; though the actions of the player’s character might be of immense importance and the character himself seems to understand why he’s been given the orders he has, all the player really needs to know is that they’re important, not why they’re important. The cultural shorthand of “betrayed soldier” is all that really matters, and once it’s been employed, the game can and does move on.

It’s important at this point to examine the question of choice, and how choice is understood to function within video games, particularly games about war. Choice in gaming has become something of a selling point; witness the proliferation of sandbox games, as well as games that at least attempt to present the player with some kind of narratively meaningful moral decision-making (see Bioshock and Bioshock 2). But what’s interesting about games like Modern Warfare is what they suggest about the choices that a player doesn’t have.

Someone who plays a video game is interacting with a simulated world, and the rules of that world dictate what forms of interaction are possible with which aspects of the game world. Game code determines how a player moves, what they can touch and pick up, what they can eat or use as tools—and what they can injure or kill or destroy. This is additionally significant because when a certain action is permitted on a certain object, code—the rules of the game—often dictates that it is the only action that can meaningfully be performed on that object. An object in the game world that can be consumed can frequently only be consumed; there is no other possible use for it. Likewise, a character in the game world that can be killed can often only be killed; with exceptions, they cannot be talked to, reasoned with, or negotiated with, and inaction on the part of the player usually leads to the player’s demise and the end of the game. Especially in combat-themed first person shooters, the rules are often quite literally as simple as “kill or be killed”, and regardless of danger to one’s character, destruction of some kind is commonly necessary to advance the game action. War as depicted in video games is therefore war without real agency: fighting and killing an opponent is the only rational or reasonable course of action, if not the only one even possible. Game code is not neutral; it tells a story, it sets constraints on how that story can be interpreted, and it determines what forms of action are appropriate or intelligible.

The thing for which Modern Warfare 2 is probably best known is the infamous “No Russian” level. In this level, the player has infiltrated a group of Russian nationalist terrorists who plan to open fire on civilians in an airport. They then do just that – and the player cannot stop it. They can take part in the slaughter or they can stand by and do nothing, but they can’t save anyone, and they can’t fire on the terrorists. The player is therefore being explicitly put in a position where agency is poignantly lacking (recall the nuclear wasteland level in MW), where civilians scream, beg for mercy, and attempt to crawl away, all while the player can do nothing narratively meaningful.

But the player can still shoot. The fact that they’re holding a weapon and can make use of it is significant in itself – there is agency, just agency of a particularly horrible kind. Mohammad Alavi, the game designer who worked on No Russian, explains it this way:

I’ve read a few reviews that said we should have just shown the massacre in a movie or cast you in the role of a civilian running for his life. Although I completely respect anyone’s opinion that it didn’t sit well with them, I think either one of those other options would have been a cop out… [W]atching the airport massacre wouldn’t have had the same impact as participating (or not participating) in it. Being a civilian doesn’t offer you a choice or make you feel anything other than the fear of dying in a video game, which is so normal it’s not even a feeling gamers feel anymore…In the sea of endless bullets you fire off at countless enemies without a moment’s hesitation or afterthought, the fact that I got the player to hesitate even for a split second and actually consider his actions before he pulled that trigger– that makes me feel very accomplished.

Essentially, the player can’t prevent horrible things from occurring. The most they can reasonably expect is to choose to participate in those things – or not. But when player agency is whittled down to that level, player responsibility also erodes; why should there be any sense of responsibility, given that one is playing in a world where the parameters have been narrowly set by someone else?

A more recent game that has some serious points to make regarding player agency is Spec Ops: The Line, a contemporary rehashing of Apocalypse Now (which is itself, of course, a retelling of Joseph Conrad’s Heart of Darkness). Gaming writers have noted that it’s a game that seems to have active contempt for its players, drawing them in with the promise of a fast, flashy, Modern Warfare-style shooter before pulling the rug out and revealing that every action they’ve taken since the game’s beginning was in fact utterly reprehensible. It’s a game that purports to be a giant comment on its own genre, and its success in doing so has apparently been mixed.

But one primary thing on which Spec Ops seems to comment is the question of what choices a player actually has in a war game – and, by extension, what choices a person has in a scenario like the one the game depicts. The player, the game seems to be saying at multiple points, has no choice; by choosing to play the game at all – by choosing to enter the scenario – they’ve locked themselves into a situation where not only is there no real win-state, but there is no inherent significance to any of their actions. The game – and the world – is not working with you but against you. Walt Williams, Spec Ops’s lead writer, puts it this way:

There’s a certain aspect to player agency that I don’t really agree with, which is the player should be able to do whatever the player wants and the world should adapt itself to the player’s desire. That’s not the way that the world works, and with Spec Ops, since we were attempting to do something that was a bit more emotionally real for the player…That’s what we were looking to do, particularly in the white phosphorous scene [where a group of civilians is mistakenly killed], is give direct proof that this is not a world that you are in control of, this world is directly in opposition to you as a game and a gamer.

Video game blogger and developer Matthew Burns takes issue with this, pointing out that the idea of the removal of choice leading to a massacre of civilians has intensely troubling implications, not only for games themselves but for how we as a society understand the question of choice in extremis:

I present a counter-argument: in the real world, there is always a choice. The claim that a massacre of human beings is the result of anyone– a player character in a video game or a real person– because “they had no choice” is the ultimate abdication of responsibility (and, if you believe certain philosophers, a repudiation of the very basis for a moral society). It is unclear to me how actually being presented with no choice is more “emotionally real,” because while it guarantees the player can only make the singular choice, it is also more manipulative.

This, then, is what I think is the central question around video games, simulation, storytelling, and war: How do we understand the very meaning of action? How true is it that we always have a choice, and if we do – or don’t – what does that mean for responsibility, on the level of both individual and society? I argue that when we play games, even if we’re not paying very close attention to the story – even if there isn’t really very much of a story to speak of anyway – we still internalize that story on some level. We interpret its meanings and its logic; we have to, in order to move within its world. We are participants on some level, even if the extent of our participation is the experience of the gameworld. And when we’re participants in a story, that lends the story greater weight for us than if we’re merely passive observers – to the extent that one can ever be passive within the space of a story.

Like other forms of media, war-themed video games are arenas for reproductions of certain kinds of meanings and narratives about our culture and our wars. But more than that – and perhaps more than other forms of media – they are spaces for conversation and debate over how we process those meanings, and what it truly means to participate in something. In these games we talk about patriotism, honor, and sacrifice. But we also question ethics, choice, and the significance of death – and we as participants are brought uncomfortably close to those questions, not only in terms of the questions themselves, but in terms of what it means to ask them at all, and in the ways that we do. And as Spec Ops and No Russian reveal, many of the answers we find are intensely troubling. As Matthew Burns writes:

I played through No Russian multiple times because I wanted direct knowledge of the consequences of my choices. The first time through I had done what came to me naturally, which was to try to stop the event, but firing on the perpetrators ends the mission immediately. The next time I stood by and watched. It is not an easy scene to stomach, and I tried to distance myself emotionally from what was going on.
The third time, I decided that I would participate. I could have chosen not to; I could have simply moved on then, or even shut off the system and never played again. But a certain curiosity won out– that kind of cold-blooded curiosity that craves the new and the forbidden. I pulled the trigger and fired.

 

Guy Debord’s Game of War. Image by Richard Barbrook

In last week’s installment of this essay, I detailed the history of some of the kinds of stories that have been told about war in the 20th century, specifically in American culture and as part of American warfare. This week I want to focus on simulation itself and a little of the place it’s had and has in contemporary warfare, as well as how it sits in the context of larger trends in the way wars are fought and understood.

War games themselves are extremely old, with one of the earliest known being chess. In his book Wargame Design, James Dunnigan notes that in terms of its practical design, chess bears a close resemblance to how wars were actually fought at the time: on flat terrain, in slow incremental movements, with lower-class and less powerful front-line soldiers defending a king who is militarily weak but immensely powerful as a figurehead and symbol.

Dunnigan goes on to explain that most wargames were designed and played by civilians with little military experience, but that by the 19th century, wargame play and design began to shift into the realm of the military itself. An important point to note here is that wargames were not only used by members of the military as a hobby and pastime, but in training and battle-planning. This meant that the games themselves, which had suffered in terms of realism from the limitations in knowledge on the part of their civilian designers, now put a premium on being as realistic a simulation of warfare as possible. They required that their designers and players have a detailed understanding of strategy and tactics, as well as military organization and maneuvers.

Use of wargames on the part of the military began in Prussia and then spread to other European states once it had proven to be an effective technique. Wargaming was also taken up in the United States, though its use there was generally confined to specific battles. After the end of WWII, as the Cold War began in earnest, this changed: military wargames took on a wider scope in both time and space, and a greater consideration of political structure.

At this point it’s important to note the technological context of war itself, and how it was changing during the mid-20th century. It’s a well-known idea within military history and sociology-of-war circles that WWII introduced a technological component to the fighting of wars that had hitherto been minimal or absent: the idea that wars in general and killing in particular could be refined to a science, calculated and controlled, increasingly mechanized, with a significant degree of physical and emotional distance between killers and killed. Zygmunt Bauman famously tied the Nazis’ factory-style genocide to major elements of modernity. Joanna Bourke has identified the significance of technological discourse in the ease of killing and the reduction of death in war to numbers and statistics. Emotional pain and guilt on the part of soldiers engaged in bombing missions could almost literally be measured in terms of altitude above the target, and the aerial bombing of civilian targets became an acceptable method of warfare during both World Wars.

Probably the most important element in the changing landscape of warfare was, as Jeremy Antley points out in his comment on my last post, the existence of nuclear weapons. The line between combatant and noncombatant was already blurry; the spectre of nuclear warfare essentially erased it. Moreover, of all the techniques and weapons of war developed up to that point, nuclear war was the most explicitly scientific in nature, the province of physicists as much as generals.

All of this is to say that as the Cold War kicked into high gear, war and technology – and particularly death and technology – were arguably more deeply enmeshed than ever before. War and technology have always had a close relationship, and the development of new weapons and fighting techniques has always been central to it, but now killing was made explicitly calculable – and, by extension, controllable. War was something that could be planned for and explored through gaming and simulation – a set of variables that could be altered to construct a vast range of different scenarios. As wargames shifted from the tabletop to the computer, the variables and scenarios increased in both number and complexity, and simulations could be run at high speeds. The Cold War itself was primarily about strategy, in both the short and long term – about anticipating the movements of the other player in the game.
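As a toy illustration of war rendered as a set of variables – emphatically not any real military model – consider how little code it takes to “fight” thousands of hypothetical engagements:

```python
# A toy scenario sweep in the spirit of computational wargaming.
# All parameters and the combat model are invented for illustration.
import itertools
import random

def run_engagement(attacker, defender, terrain_penalty, trials=10_000):
    """Estimate the attacker's win rate for one parameterization."""
    wins = 0
    for _ in range(trials):
        attack_roll = attacker * random.random() * (1 - terrain_penalty)
        defend_roll = defender * random.random()
        if attack_roll > defend_roll:
            wins += 1
    return wins / trials

# Every combination of variables is a distinct "scenario" -- thousands of
# hypothetical wars, fought and refought in seconds.
for atk, dfn, ter in itertools.product([50, 75, 100], [50, 75, 100], [0.0, 0.2]):
    print(f"attack={atk} defense={dfn} terrain={ter}: "
          f"win_rate={run_engagement(atk, dfn, ter):.2f}")
```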

This reached a significant apex during the first Gulf War, which was extensively planned beforehand with the use of wargaming, specifically a manual game called “Gulf Strike”. The first Gulf War occupies a particular place in the history of American warfare, not only for its prominent use of wargames, but for its reliance on and use of digital technology. As a military operation, it was designed to showcase the United States as possessing the technologically dominant military of the future – precise, calculated, and efficient, with high results and low casualties. As I mentioned in my previous post, it was this new form of warfare – referred to by the military itself as “full spectrum dominance” – that led Jean Baudrillard to claim that what was presented to the American public was not a war at all but a simulation of one, bloodless and clean, and also entirely asymmetrical. It was iconic war-as-game, and many of the soldiers who were part of the operation remarked on it as such: it felt more like a video game than what they had been taught to think of as war.

Most recently this trend has arguably continued with the increasing prevalence of drone “warfare” and unmanned aerial vehicles, where warriors are no longer even physically present on the battlefields where they “fight”. It could be argued that in many ways this kind of technological war actually brings the experience of fighting closer to the soldiers controlling the drones, but the point again is the degree to which the fighting of war is now augmented – war by physical and digital means now inseparable.

Computer simulation and wargaming continue to perform a major function in the US military’s preparation for war. DARPA began developing a networked vehicle simulator called SIMNET in the 1980s, and SIMNET was later integrated into STOW, the armed forces’ “Synthetic Theater of War”, which provides a significant digital component for military exercises.

Wargames and simulations have been used by the military not only in training and strategic planning, but more recently in recruiting. “America’s Army”, a first-person shooter released in 2002, was funded and released by the Army itself, and was explicitly designed to present the modern US Army to civilian youth. The game was created for the specific purpose of depicting an officially sanctioned image of the Army to the public – an exercise in meaning-making in terms of the public’s understanding of the experience of being a member of the armed forces. The fact that the Army chose a video game as its medium is also important; essentially, “America’s Army” functions as a training simulator, not only conveying a particular understanding of the armed forces but preparing potential recruits for the real-life training they might soon experience.

In my previous post, I noted the power of stories in meaning-making and the production and reproduction of cultural understandings. I also noted that games are usually driven, implicitly or explicitly, by narratives. At this point I want to go a step further and argue that games play a role not only in learned meanings but in learned behaviors, and in how we contextualize those behaviors. The fact that simulation is used as an integral part of military training (and psychotherapy) is a strong indicator that it can be an effective tool in terms of shaping behavior and understandings of behavior, as well as the context in which that behavior occurs.

In First Person: New Media as Story, Performance, and Game, Simon Penny explains (parentheses mine):

When soldiers shoot at targets shaped like people, this trains them to shoot real people. When pilots work in flight simulators, the skills they develop transfer to the real world. When children play “first-person shooters”, they develop skills of marksmanship. So we must accept that there is something that qualitatively separates a work like the one discussed above (of an art installation where the viewer can physically abuse the projected image of a woman) from a static image of a misogynistic beating, or even a movie of the same subject. That something is the potential to build behaviors that can exist without or separate from, and possibly contrary to, rational argument or ideology.

This is not by any means to argue that playing wargames will always make someone want to fight wars, any more than it is to argue that playing a FPS will in and of itself make a teenager want to take a gun to school (I think we can all agree that’s a fairly tired argument at this point). It’s merely to point out that simulations have power –  power to shape meaning, our perceptions of ourselves and others, and our understandings of our own behaviors, as well as what behaviors are appropriate and reasonable in specific contexts.

So, to make a long post short: there is something particular going on in regards to simulations and wargames in the context of technological warfare in the 20th and early 21st centuries, and especially wargames and simulations that are digital in nature. This something has a tremendous amount to do with the construction of meaning, with behavior, and specifically with our understanding of what wars mean and what it means to fight them.

Next week I’ll be talking more about specific games and what I think they suggest about how we understand and conceive of war in the present, and how we imagine it in the future.

This is an important thing to make an additional note of: images and stories of war tell not only about the wars that have been fought, but about what wars might be fought in the future; they contain information regarding what is both possible and appropriate in terms of war-making. But I want to focus more narrowly on the recent past, so rather than the older forms of war narrative, I’ll focus on propaganda, journalism, and film/television.

Wartime propaganda reached a new level of pervasiveness and complexity in the twentieth century, due in large part to emerging media which provided new venues for its spread to the public. Posters were naturally widely used, but film provided the most powerful new medium for propagandists to work in, and movie theaters were increasingly sites for the proliferation of government-sponsored information regarding how wars were being fought, what they were being fought for, and the nature of the enemy. Some of the government sponsorship was direct, some less so – it’s important to note that at this point, the lines between news, entertainment, and overt propaganda were often indistinct at best. World War II was framed as a struggle of good against evil, with the Axis powers presented as fundamentally alien and Other in comparison to the virtuous Allies. These narratives were engaged in both constructing and reproducing an understanding of the war as a struggle against a barbaric enemy that could not be reasoned with and which bore no resemblance to the “good” side.

One example of this kind of meaning-making can be found in the Why We Fight film series, commissioned by the US Army shortly after the beginning of World War II and directed by the famed Frank Capra. These films, which were required viewing for American soldiers, presented the Axis as a vicious and barbarous marauding power, entirely bent on subjugating the world. Particularly important in the creation of the films were heavily cut and edited sections of captured Axis propaganda – Capra engaged in the kind of reframing via remixing that we see more commonly today in reference to a wide range of media and cultural sources.

It’s worth noting at this point that dehumanization of the enemy not only implies what is at stake but also suggests how the enemy is to be treated. An inhuman enemy that is fundamentally evil in the way that the propaganda on both sides depicted can only be eliminated. Killing is constructed as the only possible or reasonable action to take.

War film has a long history, especially in the United States, and different wars are dealt with differently in film, depending on both the war and the era in which the film is made. As the realities of how war is understood change, its depictions undergo a corresponding change in media intended for mass consumption. We can understand this as a response to cultural changes that precede the depictions—but the depictions also help to construct and reproduce the meanings emerging from the changes.

Many of the films depicting World War II were “romantic” in nature, featuring heroic sacrifice in which American determination and courage led to victory. The films both emphasize the conception of the Allied – and specifically American – forces as good people engaged in a righteous cause, and make powerful suggestions about the way in which war can be won. The emphasis on the sacrifice of the body and the meaning of injury is significant: death in war is not presented as an unambiguous negative; one can have confidence that the sacrifice is undertaken on behalf of ethical leaders and a good cause, and the injury and death of a nationalized body take on a justifying function within conflict.

During the Cold War era, we see everything change, especially in the period immediately following the Vietnam War. Many of these films break with the tradition of honorable and necessary sacrifice by presenting Chinese and American soldiers alike as pawns without agency, led by people who place no value on their suffering or sacrifice. Films that deal directly with the conflict in Vietnam follow a similar formula, presenting death in war as fundamentally devoid of ideological significance, and criticizing the leaders whose decisions put men in the position to die in battle. Sacrifice is even presented as possessing no deeper significance at all, and the soldiers in war as little more than animals being slaughtered in a conflict of which they have no real understanding. War films made during this period therefore present a trend characterized by deep ambivalence toward the meaning of war, toward how it is fought and against whom, and toward the trustworthiness of the political and military leaders of the nation.

We see this again more recently with both the first and the second Gulf Wars, with films depicting war as surreally pointless. However, some (like 2008’s The Hurt Locker) take a more analytical, documentarian bent. I think the latter especially is a significant development, one that can be explained at least in part by the increased prevalence of documentary journalistic accounts in the general public’s exposure to current and recent conflicts. But documentation doesn’t equal a lack of mediation; it’s a form of meaning-making in and of itself, and it makes certain kinds of interpretation possible while precluding others.

The first Gulf War occurred at the dawn of the era of satellite TV and 24-hour news networks. It was arguably the beginning of war-as-spectacle: packaged for mass consumption, more immediate and more real – and yet more removed and more surreal. Despite the amount of news coverage, the image of war with which the American people were presented was bizarrely constructed, with, as Elaine Scarry noted, a marked lack of injured and dead bodies in the discourse around the war. There was no need to present sacrifice as honorable or righteous, since there was no sacrifice. With no concrete depiction of enemy casualties, the enemy remained an undefined, nebulous idea. Jean Baudrillard famously claimed that the Gulf War “did not take place”: the media event that was packaged and sold to the American public was a simulation, too bodiless and too asymmetrical to be called a war at all.

Most recently, war film is increasingly technology-focused – and increasingly uncertain and paranoid in its depictions of the experience of war. As in the first Gulf War, the enemy is not clearly defined but is instead heavily abstracted, even as it is represented through conflict with Othered individuals: Terror is the enemy, not any specific person or group of people. Additionally, following the phenomenon of ambient documentation on both an individual and institutional level, the military is shown filming itself, from high-altitude surveillance to video taken by soldiers on the ground and photos captured on cell phones. There is an essential lack of any heroic narrative in most films about the second Gulf War, and though much of the film footage is ostensibly meant to be realistic, it is in turn reflecting an unreal reality that’s simulated in nature, atemporal, and presented in a confusing multiplicity of narrative forms.

Finally, it’s worth noting that with the proliferation of image-altering apps like Instagram, images of war can be used in an attempt to recapture a kind of authenticity that’s both comforting and simplifying (a return to the Manichean worldview of WWII) – that “faux-vintage” images of war are a reaction to a war that’s becoming increasingly “unknowable” and removed from the perception of many, if not most.

Next week I’ll be focusing on the place of institutionally sponsored simulation in war and what that does to actual experiences of warfare, as a way to introduce a further discussion of how we experience and understand wars that exist entirely within games and how that affects our storytelling about war in general.

Welcome! To the Myspace of tomorrow!

Whitney Erin Boesel’s most recent post addressing the potential “regentrification” of Myspace helps to illuminate a further point that we should bear in mind whenever we’re considering the implications of a site – or an interface – redesign/reboot: it’s not a sweeping, instantaneous change that’s rooted entirely in the present, and its users almost never perceive it that way. This can help to explain the frequent emotional intensity with which users often respond to redesigns.

“Reboots” are, in many cases, last-ditch attempts to revive something’s usefulness and vitality. Reboots of movie franchises, comic series – and reboots of websites. The latter is especially interesting given what it involves: every reboot is a risk in that it might lose fans and might not gain enough new ones to replace them, but given how personal a lot of us feel our social media spaces to be, a website reboot can feel like someone’s come in and redecorated your house without asking for your input.

It shouldn’t be surprising to us that people have such powerfully knee-jerk emotional reactions to redesigns of their digital space. As David Banks has pointed out, we have profound emotional attachments to our interfaces with our digital spaces, whether those interfaces are websites or operating systems. While our feelings of ownership when it comes to digital corporate property may be problematic, and we may prefer to rationalize our essentially emotion-based feelings by couching our objections in terms of overall functionality, what’s really going on is far more complex, and has to do with the entwined nature of our perception of digital  and physical space. As David writes:

When someone takes control over something we hold as intimate, we feel infringed upon. Something that should be under our personal control has been altered without our explicit consent and that makes us feel vulnerable…The EULA might say everything belongs to the company, but you are your profile. Changing your desktop experience or your profile layout is tantamount to some stranger running up to you and giving you a haircut against your will.

It’s been noted by some in web design circles that when it comes to site redesigns, a significant percentage of users always seem to respond with backlash when they find their digital space altered – and this has to do not only with a recognition of the state of the present, but with a suddenly looming and potentially out-of-control future. In the link above, Cennydd Bowles observes that (emphasis mine):

A favourite site has an emotional connection for us: we like it, it likes us, and we can depend each other. We fear the disruption of that equilibrium: a redesign raises the question of whether the site will grow in a direction we don’t want to follow.

This is somewhat self-evident, but I think it’s also worth being very specific about. Digital spaces possess dimensions of a linear temporal existence, though in many respects they are atemporal: they exist in our present and extend their potential existence backward into the past and forward into the future. This gives their redesign temporal weight. For a user, the implications of this are profound. Not only has the user’s space changed, but now any number of other changes seem possible. Imagination of the future has the potential to become something fraught with discomfort and even fear.

I used to be a Livejournal user, and I recall when Livejournal made the move toward enabling its users to link their LJ accounts to their Twitter and Facebook accounts. This, in conjunction with some other site changes, created a strong user backlash that centered around the feeling that LJ’s overlords were attempting to make the site “more like Facebook”. The resounding response of the angry userbase was that they already had Facebook accounts and didn’t want their two separate spaces to become more like each other. They didn’t like the perceived direction in which Livejournal was heading; this was actually what seemed to make people more upset than the site changes in isolation from anything else. The other factor that seemed to be primary in their anger was a sense of a loss of control. Some LJ users admitted that the changes themselves didn’t bother them, but the way in which the changes were rolled out – with no user input – was profoundly troubling to them.

In short: User angst regarding interface redesigns is about anxiety about the future as much as it is dissatisfaction with the present.

This brings me back around to the idea of gentrification. Again, it’s perhaps fairly self-evident, but still worth being explicit about: Gentrification of physical space is a process, not a snap change from one state to another. It’s a process in which people exist, and their perceptions regarding the direction of that process may or may not affect the direction in which it actually proceeds. Further, their temporally laden perception of their changing space may be the source of – or in some cases the alleviation of – their anxiety. When we examine how a userbase responds to a change in their digital space, we need to be sensitive to what specifically they’re responding to – the sum total of the changes themselves, or something more?

So far, response to Myspace’s redesign has been pretty positive – but I note that many of these positive responses seem to come from tech blogs and other sources that feel more external to Myspace’s actual userbase. A cursory scan through Twitter turned up a number of posts about the redesign, but most frequently from old users who said they were considering returning if the reboot took off (which is clearly one of Myspace’s primary goals). It’ll be interesting to see how the existing userbase responds – and to what exactly they’ll respond.


It’s already been well-established by other posts on this blog that there’s something particular going on with regards to ICTs – especially social media technologies – and storytelling. My post last week dealt with how the atemporal effects of social media may be changing our own narratives and how those narratives are understood and expressed. This week I want to focus on some of the ways that social media technologies are making our narratives more communal in nature.

First, it’s worth noting that communal narratives – which we can understand as narratives that are constructed and related by multiple cooperating participants, sometimes in a hierarchical fashion and sometimes not  – are by no means new. Narratives have always been communal to some degree, simply by virtue of the fact that no story, fictional or factual, exists in cultural isolation. Every story is embedded within a matrix of cultural values, assumptions, norms, etc. Fiction often draws upon influences of other fiction, sometimes merely in the form of homage and sometimes in adaptation. By the same token, when you tell a story to your friends about how you spent a weekend, that story exists within the context of shared culture, relationships, history on both a personal and a social level, and many other things besides.

Narratives can also be more literally communal. Many cultures feature storytelling traditions wherein the story is told through forms of call-and-response, with the audience just as much a participant as the official storyteller. Plays are arguably communal narratives; a play may issue from a single written source, but every director and actor involved brings their own interpretation to the performance of the story, making each performance subtly – or, in some cases, dramatically – different. Gary Alan Fine has written extensively on how tabletop role-playing games can be understood as communally constructed narratives of a particularly formal type. And in that same story about your weekend, your friends may interject with commentary or requests for more detail about certain elements.

So communal narratives are not new, in and of themselves. What is new, I want to argue, are the ways in which communal narratives are now being constructed and the spheres in which we find them.

 

Fiction

When we talk about what ICTs have done to our narratives, I think we often neglect what we classically consider “stories” – fictional narratives. But this kind of narrative is equally important to consider, especially given the ways in which our augmented fictional narratives are connected to the fictional storytelling of the past.

One kind of augmented narrative with which I think most of us are familiar is, again, narrative constructed through digital role-playing. A lot has been written about Second Life and World of Warcraft. Both of these examples are somewhat tired by this point but still worth mentioning given that they present very different kinds of role-playing – Second Life is essentially goalless, with the emphasis on creativity, environment construction, and socializing. It could be argued that World of Warcraft is also goalless in the long run, as there is no singular “win-state” at which the game is completed; nevertheless, players are driven by the powerful immediate goals of leveling up and accumulating the best possible arms and armor.

Narrative also works differently in these games: in Second Life, the player has a tremendous amount of agency in the construction of their character’s story, or freedom to actively construct no story to speak of (though simply by being in the game and interacting with others, a narrative still unfolds). In World of Warcraft, the game’s larger narrative can easily be ignored in favor of stat grinding and item accumulation, but it’s still there, and it subtly directs the background flow and logic of the game. Players still work within a narrative, even if they don’t make it the center of their attention.

And there are other games where the construction of narrative is actually the primary focus of the game. “Pan-fandom” roleplaying games on the sites Livejournal and Dreamwidth allow players to create journals for characters from various media and to “thread” those characters interacting with each other and working collectively to construct a larger storyline. Some of these interactions are plotted out before they are played, while some are constructed on the spot. But always the games are intensely narrative-focused and deeply communal.

It’s also interesting to note that the actual structure of these websites affects the structure and logic of the interactions – the idea of the interactions being centered around turn-based threads within larger posts is entirely by virtue of how sites like Dreamwidth and Livejournal work. Because one might have multiple threads with multiple different characters taking place in a character’s post, it’s implicitly understood that all these threads are occurring concurrently, something that would be difficult to impossible to depict in a traditional singular-streamed fictional narrative.
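A rough sketch of that structure – with entirely made-up names and characters – might look like this: one post, multiple threads, each thread a sequence of character turns, all of them understood to be happening at once:

```python
# An illustrative model of the post/thread/turn structure described above.
# All names are hypothetical; this mirrors no site's actual schema.
from dataclasses import dataclass, field

@dataclass
class Turn:
    character: str
    text: str

@dataclass
class Thread:
    participants: list[str]
    turns: list[Turn] = field(default_factory=list)

@dataclass
class Post:
    scene: str
    threads: list[Thread] = field(default_factory=list)  # read as concurrent

post = Post(scene="A crowded marketplace, midday")

# One thread: two characters trading turns, building the story together.
t1 = Thread(participants=["Ada", "Kurtz"])
t1.turns.append(Turn("Ada", "notices smoke rising on the horizon"))
t1.turns.append(Turn("Kurtz", "says nothing, watching her reaction"))
post.threads.append(t1)

# A second thread in the same scene, implicitly simultaneous with the
# first -- something a single linear prose narrative can't easily show.
t2 = Thread(participants=["Ada", "Marlow"])
t2.turns.append(Turn("Marlow", "haggles loudly over a map"))
post.threads.append(t2)

print(f"{len(post.threads)} concurrent threads in one post")
```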

This is actually a very important point: Storytelling is shaped, limited, and facilitated by the medium through which it is told, and digital media allow for – and force – particular kinds of stories to be constructed and told in particular ways. This is also not necessarily new; we can see it in older broadcast media, print media, and even board games. What’s important to attend to is how newer forms of technology affect how this happens. In Christopher Franklin’s review of Spec Ops: The Line, he notes that the actual structure of FPS gameplay encourages the narratives driving those games to adopt a black and white Manichean morality, where any action that allows the player to progress through the game is understood as unequivocally good, and anything that stands in the player’s way is  unequivocally bad.

In terms of a transition from older kinds of fictional narratives to newer forms, it’s also worth tipping a hat to fanfiction. Fanfiction is often derided as silly, ridiculous, sex-obsessed, and of significantly lesser literary value than the sources from which it draws. Some of these things are true some of the time, but what the derision obscures are vital creative communities engaged in an ongoing process of complex interpretation, deconstruction, construction, and dialogue with elements of popular culture. These are stories that are created in an intensely communal process, often referring back to specific interpretations of the source material (called “fanon”, in reference to “canon”).

Fanfiction communities also aren’t the only communities that engage in this kind of storytelling. Websites like Wattpad enable writers to construct stories serially, developing them in dialogue with reader feedback. As Olivia Rosane notes in the article linked above, this goes directly against how most published stories are now written and delivered to the public, with all the messy creative and editing and marketing bits hidden behind a screen of polished packaging.

These kinds of narratives would probably exist without digital technology in some form; again, we tend to construct narratives communally anyway. But digital technology facilitates their construction, and affects what form they take.

 

Nonfiction

It goes pretty much without saying that things like the Facebook timeline have a tremendous amount to do with how we construct – and display – our self-narrative. At the end of that post, Jenny Davis notes that “Through links and tags multiple narratives weave together to co-construct each others’ stories and digitize an analog past.” I want to build on that point, because I think it’s important that we understand personal narratives mediated by digital technology to have a fundamentally performative nature.

This doesn’t mean that those narratives are always either entirely public – or entirely private. It simply means that when we construct our self-narratives through digital media, we are engaged in an ongoing process of revealing and concealing, of showing some things to some people and hiding other things from others in a kind of digital Goffmanian dramaturgy. What kind of narrative we want to construct and display and how that’s done is the product of interaction with others in different spaces; you may direct your self-narrative, but you don’t construct it in isolation from others. Reality curation, as Jenny Davis has explained, works in both directions. Further, as people comment on your posts and status updates, share links, and tag you in photos, they participate in the construction of your stories.
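As a loose illustration of that dramaturgy – hypothetical audience groupings and posts, not any platform’s actual model – the same set of posts can yield a different “self” for every audience:

```python
# A loose sketch of digital dramaturgy: one life, several performances.
# Group names and posts are hypothetical.
POSTS = [
    {"text": "Got the promotion!",          "audiences": {"public", "family", "coworkers"}},
    {"text": "Planning my exit interview.", "audiences": {"close_friends"}},
    {"text": "New tattoo, photos soon",     "audiences": {"close_friends", "family"}},
]

def timeline_for(viewer_group):
    """Each audience sees a different curated performance of the same self."""
    return [p["text"] for p in POSTS if viewer_group in p["audiences"]]

print(timeline_for("coworkers"))      # ['Got the promotion!']
print(timeline_for("close_friends"))  # ['Planning my exit interview.', 'New tattoo, photos soon']
```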

Additionally, as Whitney Erin Boesel described in her post yesterday, we construct deeper forms of meaning and self-knowledge through technology as part of our narratives – and these forms of knowledge may be shared with certain people and not with others, subtly affecting our own understanding and interpretation of that meaning.

Essentially, all narratives constructed through and mediated by technology are either implicitly or explicitly communal in nature. Again, this is true of narratives in general, but it’s still important to pay attention to the how of that communal construction.

 

The end of narratological dualism?

I want to close by suggesting that something interesting may be increasingly possible – and necessary – concerning our augmented stories: the end not only of digital dualist thinking but of a kind of narratological dualism that draws sharp distinctions between fiction and nonfiction, and which privileges the latter as more legitimate and more meaningful. The Baudrillardian concept that it’s now difficult to impossible to pin down an exact, objective, and original reality is, of course, not a new one, and I think that this “fuzziness” when it comes to the truth of meanings and events suggests some powerful things regarding how we understand fiction to be different from nonfiction. But what I think new kinds of storytelling also highlight is how deeply meaningful all forms of stories are to us. Fiction moves us just as powerfully – if not more powerfully – than many forms of nonfiction. Fictionalizing in the interest of eliciting emotion is an old technique: David Simon, creator of The Wire, has admitted that relating factual accounts of Baltimore within a fictional frame very possibly makes people care more about issues of poverty, racism, and violence than would a strictly documentary approach.

Further, our imaginations are real spaces, just as the physical world is. We couldn’t interpret anything that happens to us in the physical world without imagination on some level; it would be unnavigable. The “reality” of imagination is just as meaningful as the “reality” of the world that comes at us through our eyes and ears and skin, though that meaning might be of a different kind. In order to understand our stories and how they’re changing, we need to understand that fiction and nonfiction are enmeshed, just as are the digital and the physical. And while we need to be sensitive to differences between the two, we can’t privilege one over the other. To do so does a disservice to the richness and complexity of our stories.


Cory Doctorow’s recent talk on “The Coming Civil War Over General Purpose Computing” illuminates an interesting tension that, I would argue, is an emerging result of a human society that is increasingly augmented: not only are the boundaries between atoms and bits increasingly blurry and meaningless, but we are also caught in a similar process regarding categories of ownership and usership of technology.

Understanding the tension between owners and users – and the regulatory bodies, both civil and corporate, who would like greater degrees of control over both – necessarily involves a consideration of the distribution of power in augmented human experience. If the categories of user and owner are increasingly difficult to differentiate clearly, it follows that we need to examine how power moves and where it’s located as the arrangements shift. I don’t mean just the question of whether users or owners have more power, but what kind of power they have, as well as who is losing what kind and who is correspondingly making gains.

Doctorow’s initial point – and it’s an important point from which to start – is that not only is human life increasingly augmented, but it’s augmented by a collection of technologies that are at once more and less diverse than they used to be:

We used to have separate categories of device: washing machines, VCRs, phones, cars, but now we just have computers in different cases. For example, modern cars are computers we put our bodies in and Boeing 747s are flying Solaris boxes, whereas hearing aids and pacemakers are computers we put in our body.

If we understand these devices as “general purpose”, as Doctorow does, then power within that context takes on a very specific meaning: who controls what programs can run on these devices, and how does that end up affecting how the devices are used? Owners? Users? Regulatory bodies? Corporations?

Traditionally we’ve understood an owner of something to have pretty much complete control over its use, within reason; this is fundamental to a lot of how we culturally conceive of private property rights. When we buy something, when we spend money on it and consider it ours, it’s been tacitly understood that we then control how it’s used, at least within the boundaries of the law. If you buy a car, you can have it repainted, switch out the parts for other parts, enhance and augment it largely to your heart’s content. If you buy a house, you can knock down walls and build extensions. I would argue that we tend to instinctively think of technology the same way: we – or, to paraphrase William Gibson, “the street” – find our own uses for things, and those uses aren’t subject to much constraint.

But increasingly, we can’t assume that.

When it comes to general purpose computing, both corporations and corporate-esque bodies with regulatory interests are exercising ever-greater degrees of control over what programs can and can’t run on our devices – in other words, how our “owned” devices can and can’t be used. As Doctorow points out:

We don’t know how to make a computer that can run all the programs we can compile except for whichever one pisses off a regulator, or disrupts a business model, or abets a criminal. The closest approximation we have for such a device is a computer with spyware on it— a computer that, if you do the wrong thing, can intercede and say, “I can’t let you do that, Dave.”

Such a computer runs programs designed to be hidden from the owner of the device, and which the owner can’t override or kill. In other words: DRM. Digital Rights Management.
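A bare-bones sketch of that gatekeeping logic might look something like this – purely illustrative, since no real DRM scheme is this simple, but the shape is the same: a vendor-controlled list that the owner can neither inspect nor override:

```python
# Illustrative only: no real DRM scheme is this simple, but the shape holds.
VENDOR_ALLOWLIST = {"approved_player", "vendor_store"}  # updated remotely, not by you
REVOKED = {"third_party_os_loader"}                     # revocable after purchase

def launch(program):
    """The device, not its owner, gets the final say."""
    if program in REVOKED:
        return f"I can't let you do that: '{program}' has been revoked."
    if program not in VENDOR_ALLOWLIST:
        return f"'{program}' is not vendor-approved; refusing to run."
    return f"Running '{program}'."

print(launch("third_party_os_loader"))
print(launch("approved_player"))
```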

Things like DRM are clearly problematic because they erode our very idea of what it means to be an owner of something; we can use it, install and run programs on it, and customize it to a degree – but only to a certain degree. Other entities can stop us from doing something with our devices that they don’t like, often through coercive means both subtle and not-so-subtle. And that line between okay and not-okay is subject to change, sometimes without much notice. Owners – people whose devices would traditionally be understood as their property – increasingly resemble users: people who can use and sometimes even alter or customize a device, but who don’t actually own it and whose power vis-à-vis the use of that device is necessarily limited. And, as Doctorow goes on to note, we are increasingly users of devices that we don’t even arguably own (such as workplace computers).

PJ Rey wrote an excellent piece in this vein a while back on Apple –  probably one of the more egregious offenders here. Apple, PJ notes, makes use of an aura of intuitive, attractive, user-focused design to suggest to its customers that it is empowering them – but this sense of empowerment is ultimately an illusion. Apple doesn’t want owners, it wants largely passive users – people who pay for the privilege of using the device but who will submit to the nature of that usage being severely curtailed:

[B]y burying the inner-workings of its devices in non-openable cases and non-modifiable interfaces, Apple diminishes user agency—instead, fostering naïveté and passive acceptance.

Even when a company is less overt about its desire to control the devices it’s selling, the presence of a net connection coupled with firmware updates can reveal how little control “owners” of a device have over what programs actually run on it and how it can be used. I own a Playstation 3, and periodically I’m required to download a firmware update. I essentially have no choice in whether or not I download it – I’m required to signal my agreement, but declining would deny me access to a number of features that make it possible to use the PS3 for the very things I bought it to do. I wouldn’t be able to access PSN (Playstation’s online store and software update network), which would mean that many of my games would be unplayable; they require regular software updates to run at all.

But by accepting one of these system firmware updates, I removed the ability of my PS3 to run a Linux-based OS – something that many users have found preferable to and more flexible than the PS3’s default OS. The device I own is now less functional; I traded greater non-functionality for lesser non-functionality. Either way, I was reminded once again that I don’t necessarily “own” the device that is arguably my private property.
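The coupled trade-off can be sketched in a few lines – hypothetical feature flags, emphatically not Sony’s actual update logic – and either branch leaves the “owner” with less:

```python
# Hypothetical feature flags, not Sony's actual update logic.
def ps3_features(accepted_firmware_update: bool) -> set:
    features = {"play_local_discs"}
    if accepted_firmware_update:
        # PSN access preserved, so networked games keep working...
        features |= {"psn_access", "game_patches"}
        # ...but the new firmware silently drops the Linux-capable OtherOS.
    else:
        # Keep Linux, lose the network -- and with it, many games entirely.
        features |= {"other_os_linux"}
    return features

print(sorted(ps3_features(True)))
print(sorted(ps3_features(False)))
```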

So power is in flux. It’s subject to a particular kind of contention here, and I’d argue that the form of that contention – or at least some of its elements – is new.

This picture is further complicated when we consider programs themselves. I’m old enough to remember a time when you bought software and it was basically yours in the traditional sense: you could install it on as many devices as you wanted, and an internet connection wasn’t necessary for constant confirmation that you had actually paid for it. Where software is concerned, licensing is arguably supplanting traditional ideas of ownership – you are essentially paying for the privilege of installing a program on a severely limited number of devices, and you’re required to go through verification processes that frequently seem designed to make you feel like some kind of digital shoplifter.

Finally, Doctorow points out how this is all still further complicated by the ways in which people’s bodies are physically augmented and are likely to be so in the future (here he contrasts issues specific to owners with issues specific to users):

Most of the tech world understands why you, as the owner of your cochlear implants, should be legally allowed to choose the firmware for them. After all, when you own a device that is surgically implanted in your skull, it makes a lot of sense that you have the freedom to change software vendors. Maybe the company that made your implant has the very best signal processing algorithm right now, but if a competitor patents a superior algorithm next year, should you be doomed to inferior hearing for the rest of your life?…

[But] consider some of the following scenarios:

• You are a minor child and your deeply religious parents pay for your cochlear implants, and ask for the software that makes it impossible for you to hear blasphemy.

• You are broke, and a commercial company wants to sell you ad-supported implants that listen in on your conversations and insert “discussions about the brands you love”.

• Your government is willing to install cochlear implants, but they will archive everything you hear and review it without your knowledge or consent.

The point at which physical bodies are physically augmented by technology is a crucial crossroads here, one that Doctorow discusses but where I also think he could go further: the question of human rights vs. property rights. Doctorow is undoubtedly correct when he notes that users and owners don’t necessarily have the same interests – indeed, sometimes their interests conflict. But I think it’s also important to emphasize once again that the delineation between the two concepts isn’t always clear anymore – if it ever really was – and is likely to become less so. And along with the uncertainty about the boundaries between these two groups comes uncertainty regarding whether we can still meaningfully differentiate between property rights and human rights, when we not only own but are our technology.

On this blog we’re very used to the idea of categories collapsing, and once we accept that these categories are collapsing, we have to ask ourselves what that actually means – or might end up meaning in the long run. What we have now are questions – about where the power is, about where it’s going, and about what degree of agent-driven technology use can survive the coercive control of corporate and government regulation of those technologies – especially when human life and experience and our very physical nature are so deeply augmented.

One final theoretical element that I think is useful here – and to which Doctorow makes no direct reference, though I think there’s a lot of room for it in his talk as well as a lot of indirect links already made – is Foucault’s concept of biopower – of power exercised by state institutions by and through and within physical bodies. The idea is an old one now – but within the context of the above, I think it’s changing in some significant ways. When technology is subject to institutional control, it’s deeply meaningful when that technology is literally part of our bodies – or so deeply enmeshed with our daily lived experience and our perceptions of the world around us that it might as well be. And when the lines between government institutional control and corporate institutional control become blurry in their turn, the traditional meaning of biopolitics is additionally up for grabs.

One of the more famous phrasings of the recent spate of technology-critical writing is Jaron Lanier’s You Are Not a Gadget. But more and more, that’s exactly what we are – we are our technology and our technology is us. Given that, we now need to understand how to defend our rights – property and humanity, users and owners, digital and physical, and all the enmeshings in between.

We have always been visual storytellers.

Last week I wrote a piece on the increasing prevalence of the Bokeh blur effect – and other filmic effects – in forms of visual media where we once saw them rarely if at all. Nathan Jurgenson then followed up with a response that articulated some interesting and important questions that this trend raises – which I want to consider further here, though please don’t mistake this as anything other than additional groping toward something that still needs a lot of working-through.

I tend to tie most things I blog about back to storytelling in some way – it’s just how I roll – and in my original piece I did this fairly explicitly. How we document things is ultimately about how we tell stories – and the important thing to bear in mind here is that any story told by someone else is necessarily going to be mediated by that person, regardless of whether it’s factual or fictional. Every account is filtered through someone else’s perceptions, assumptions, and understandings. When events are documented on video, that documentation is still subject to what the person filming chose to focus on and what they ignored, what they saw and what they weren’t present for, and – in the case of Bokeh – the technology they used.

What sets effects like Bokeh apart is, as I said, the visual-textual cues they constitute for us, which are some of the ways we know – or used to know – that we’re watching a documentary or news footage as opposed to a dramatic film. But with Bokeh becoming more common in footage that we’re used to viewing and interpreting as “factual” – with footage of supposedly real events looking more and more filmic – we have to consider the possibility that this encourages us to understand current events as other than current, and even other than “events” in the sense that we’ve used the term. Nathan suggests that filmic effects impose a sense of order and coherence on an otherwise confusing present: by framing messy, uncomfortable stories within more traditional narratives we can impose on them the comforting good/evil/right/wrong trappings of those narratives. In effect, the implication is that technology may be allowing us to mythologize our present.

This is actually an echo of other discussions we’ve had here before, perhaps most famously in Nathan’s essay on the faux-vintage photo, but continued in an examination of what Instagram means when applied to photos taken in present-day war zones, as well as of HDR effects applied to photographs of ruined spaces. What’s common to all of these is an apparent desire – not caused by technology but certainly enabled by it – to view our messy present as a gauzy, comforting, over-romanticized past. Anything for which we can feel nostalgia must necessarily be not only more pleasant but also more meaningful; applying these effects to visual media therefore allows us to imbue with meaning and authenticity the very stuff that we fear is neither meaningful nor authentic.

What happens when we make a document of current reality into a glossy fiction? Instagram is making a claim on age, albeit an overtly artificial one – but what about Bokeh? In order to answer that, I think it’s useful to understand something about what stories are, at all times and in all places: in a subtle way, we understand all stories to be about events that have already taken place, regardless of the tense in which the story is told. And often those stories are one step removed, told by a faceless and often omniscient narrator who may or may not be at all involved in the action. It’s no accident that the traditional baseline for fiction – at least in English – is third-person past tense.

Telling stories has been used as a technique to rouse rabbles, to teach new generations, and to make it clear who we are and where we come from. Storytelling is unquestionably powerful. But by simplifying and subtly removing us from events with which we aren’t necessarily involved and over which we have no control, it also has the potential to let us off the hook. Done right, it requires less from us than something that we perceive in all its messy reality. See: the distress that civil rights activists express at the de-radicalization of Martin Luther King, Jr.

Making our present or very-near-past into a fictionalized distant past also has another effect that’s been covered here: it atemporalizes both our memory and our lived experience. By experiencing the present as a fictionalized past – or by, in Nathan’s words, “view[ing] our present as always a potential documented past” – we collapse the categories of past and present. By viewing a romanticized, glossy picture of a ruined space and extending our imagining of ruin and death into the present and the future, we collapse those categories as well. Atemporality isn’t just about technology – it’s about what our technology allows us to do to our stories.

Atemporality is also – if one uses Bruce Sterling’s characterization, and I do – about multiplicity, about an abundance of different nonlinear avenues to information that is equally illuminating and confusing. There’s been a tremendous amount of writing on how the internet helps more voices to be heard (though often with ominous caveats about the signal-to-noise ratio). Digital technology – how we share, record, process, mediate, and store information – helps us to tell and to be told more and more stories from more and more people.

They were all always stories. We have all always been storytellers. But now we are louder, faster, more polished, both sharper and more pleasingly blurry, at once bursting with meaning and uncomfortably meaningless. The idea that everything is fundamentally a story has been around for a long time – and the boundaries between fact and fiction have always been porous – but at least the different tellers were easier to parse. Now we’re all at once poets and scribes – and we’re also time-travelers, not moving along a stationary linear timeline but pulling all of our experience of time into ourselves like temporal black holes.

Perhaps most ironic is what – according to Nathan – all of this mythologizing, glossy-present nostalgia, and grasping for the present is actually a reaction to: our increasingly messy and confusing Now, with its lack of certainties, knowabilities, and clear distinctions between categories.

We want everything to have a Once Upon a Time. We want to make the world a fairy tale because we all understand fairy tales; we know that witches and dragons are always wicked and princes and princesses are always brave and beautiful and good; we know that even if the princess pricks her finger on the spinning wheel’s spindle, she isn’t really dead and the prince will awaken her. We know that good will win out and evil will be clearly identifiable. If we know how the story begins, we immediately know how it ends, and it always ends Happily Ever After.

But there isn’t just one story. There isn’t just one form of documentary technology, one kind of mediation, one set of immediately identifiable textual cues – and the cues we do have keep shifting their meanings, as news begins to look more filmic and fantasy films begin to look more like “reality”. Our technologically-mediated storytelling is every bit as world-destroying as it is world-creating; if it appears to clarify some categories, it collapses many others. The very thing we look to for a little simplicity may only make everything endlessly more confusing in the end.

Giles knows what’s up.


Image by brokenchopstick.

A couple of nights ago I noticed that Paul Mason’s piece on the BBC’s website, “In Praise of Bokeh”, had earned a number of “the medium is the message”-style comments on the community blog Metafilter. One could characterize these comments as cliche, and so they arguably are – but things have a way of becoming cliche precisely because they happen to be extremely useful frameworks for approaching the world. And I think Bokeh as an effect is worth approaching in this way, because it suggests some powerful things regarding how we tell stories with visual media and how that storytelling is changing in conjunction with technology.

First, some explanation. Bokeh is a visual style, and since Mason puts it very well I’ll just quote him:

Bokeh is a Japanese term used by photographers to describe that pleasing effect where the background of a photo is defocused, often into blobs or hexagons, while the subject is razor sharp. It’s what you need a real lens for, and it’s produced by the effect of the little blades that open and close the aperture, letting the light onto the sensor.

Why does Bokeh matter? First of all because there’s more of it than there used to be, in places where it was once rarely if ever seen. As Mason points out, you once tended to see Bokeh only in particular kinds of film and video – pieces more focused on creating a mood, or on telling a story in a way that emphasizes the style of the telling as much as the content of the story itself. Those of us who have grown up and live in a world immersed in visual media have learned to pick up on visual cues as a means of interpreting the context and meaning of what we’re seeing; in narratological terms, they are the discourse of the story. When we see Bokeh, we tend to understand instinctively that we’re watching, for example, an art film as opposed to cable news. So if the film is a text, Bokeh – and other visual styles – are the means by which we understand what kind of story we’re reading. They are the “once upon a time” that sets up our expectations of what we’re going to see.

Bokeh, then, has traditionally not been something one would find in news reports – until recently, when journalists began to make use of digital HD video in the field because of its portability and cheapness. “Normal TV cameras, costing maybe five times as much as a Canon 5D MkII, don’t really do Bokeh,” Mason writes. “They’re designed to keep more of the scene in focus, and to maximize clarity over moodiness.” How video documentation works has done a complete 180 in this instance: what was once reserved for more stylistic pieces of film is now increasingly prevalent in some of the rawest forms of documentary footage possible – footage of war zones and protests – because of that same cheapness and ease of use.

When something like this happens, it calls attention once again to how important style is to how we consume visual media. Nathan Jurgenson has already covered this extremely well in his essay on digital photography and the faux-vintage photo – how apps that allow one to artificially age one’s digital photos lend a sense of importance and authenticity to the otherwise mundane, creating a kind of “nostalgia for the present” and encouraging us “to view our present as always a potential documented past”. What these apps allow us to do is change how we use visual documentation to tell stories about ourselves, to ourselves and others. They take straightforward images and give them a “once upon a time” feel; they change how we instinctively understand the story they tell. All memory is fundamentally about storytelling; recording and documentation are no different.

So when news reports and amateur footage of political protest begin to look more like art films or slick commercials, the result can be disconcerting; the stylistic visual cues we involuntarily read no longer mean what they once did, and there’s a mismatch between what we expect the visual text to be and what it actually is. Charlie Brooker of The Guardian, on first noticing this effect:

Around 2005 things start making the transition to HD – and then we get to today, and a weird new trend is emerging. I first noticed it some time around the Egyptian revolution, when I was suddenly struck by a Sky News report from Cairo that looked almost precisely like a movie. Not in terms of action (although that helped – there were people rioting on camelback), but in terms of picture quality. It seemed to be shot using fancy lenses. The depth of field was different to standard news reports, which traditionally tend to have everything in focus at once, and it appeared to be running at a filmic 24 frames per second. The end result was that it resembled a sleek advert framing the Arab Spring as a lifestyle choice. I kept expecting it to cut to a Pepsi Max pack shot.

This mismatch works both ways, too. Brooker notes that some early screenings of footage from Peter Jackson’s The Hobbit drew negative reactions from test audiences because the film simply didn’t look like what they expected:

The Hobbit is shot at 48 frames per second – twice as many frames as standard films. The studio claims this gives it an unparalleled fluidity. The viewers complained it was too smooth – like raw video. Some said it looked like daytime TV. What they meant, I guess, is that it seemed too “real”, and therefore inherently underwhelming. The traditional cinematic frame rate lends everything a comforting, unreal and faintly velvety feel, whereas the crisper motion of video seems closer to reality, and therefore intrinsically more harsh and pedestrian.

Tolkien’s Middle Earth is epic fantasy, and typically epic fantasy films have a stylistic look with which most of us are familiar. We expect things to look glossy and otherworldly – deep and rich but ever so slightly unreal, which serves to remind us that we’re watching a fantasy film while never shaking us out of the story itself. When that doesn’t happen – when Middle Earth looks like documentary footage – we no longer know how to read the story. The “once upon a time” is gone, hobbits and dwarves notwithstanding. The effect is more than confusing; it can be emotionally disturbing, preventing us from engaging with the visuals in a way that makes sense to us. Without visual cues that we understand, we no longer know how to respond.

Stories are at once spells, hypnosis, and telepathy – and in order to work, storyteller and audience must speak the same language and agree on narrative frameworks that render the story both meaningful and comprehensible to everyone involved. But these frameworks are so implicit that we tend not to notice them at all unless they break down.

What digital technology is doing for news, as Mason points out, is both upsetting the existing frameworks and creating new ones that have to be made sense of. Rather than one or two kinds of documentary technology, we are now faced with a multiplicity of technologies and corresponding visual effects:

– the legacy cameras from the tape era which will always beat an SLR for long-range clarity but not rich colour, or bokeh

– iPhone footage with its unchallengeable “truth”

– SLR-shot footage, with or without a slower frame rate to make it look filmic

– live, static versions of the old TV cameras, with lighting etc in a studio or live satellite position.

– You could add, for increasingly cash strapped programmes that can’t afford satellite feeds, Skype interviews.

– And while I think about it, there is also the GoPro, an ultra-wide, ultra-sharp mini camera people use to film themselves, ski-ing, surfing and taking their dog in their kayak etc.

For some viewers, this spells confusion and trouble. But Mason holds that most viewers are pretty much fine with it, and I’m inclined to agree. Changes in visual technology aren’t new, after all; as Brooker points out, “Our perception of eras seems chiefly dependent on the limitations of the technology that records them. The 20s are speeded up in our heads because the cameras were cranked by hand, creating an unnaturally hasty frame-rate.” People adjust; they learn how to “read” all over again. But whenever adjustments like these need to be made, it’s a useful reminder that, as Susan Sontag famously noted, visual documentation makes us poets even as it makes us scribes; as we remember, we also create. If the medium is the message, it’s also the story, and how stories are told is just as important as – and indeed, indivisible from – the stories themselves. And our stories are every bit as augmented as we are.

A recent marketing campaign from outdoor tool manufacturer Stihl is a classic – and pretty obvious, for regular readers of this blog – example of digital dualism. It’s right there in the tagline: the campaign presents “outside” as more essentially real by contrasting it with elements of online life. It not only draws a distinction between online and offline, it clearly privileges the physical over the digital. And through the presentation of what “outside” is and means, it makes reference to one of the most common tropes of digital dualist discourse: the idea that use of digital technology is inherently solitary, disconnected, and interior, rather than something communal that people carry around with them wherever they go, augmenting their daily lived experience.

But there’s more going on here, and it’s worth paying attention to.

First, Stihl isn’t only privileging the physical over the digital – the ads make reference not only to Twitter and wifi but to PlayStations and flatscreen TVs. In so doing, Stihl is actually drawing a much deeper distinction than the one between “online” and “offline”; it is not-so-subtly suggesting that contemporary consumer technology itself is somehow less real. Stihl is questioning the legitimacy and worth of the whole of our relationship with our gadgets.

Pretty gutsy for a chainsaw company.

And I’d argue that there’s something else beyond even that going on here. These ads don’t merely draw a distinction; they issue an imperative. They order the viewer to alter their behavior, to leave their technology behind and embrace a more “real” existence “outside”. This is not only fetishization of “IRL”, but fetishization of the pastoral – the assumption that there is something more authentic, real, and legitimate to be found in the natural world than in human constructions.

This is actually a very old idea, and it can be found in art and literature going back to the dawn of the Industrial Revolution, especially in America (though I should note that Stihl’s ad campaign is Australian). America’s relationship with what has been culturally constructed as “wilderness” is a fundamental part of our historical national identity, and the industrial boom of the 19th century and the subsequent erosion of that wilderness made a mark on the American psyche. As Michael Sacasas writes in his essay on “The Smartphone in the Garden”:

America’s quasi-mythic self-understanding, then, included a vision of idyllic beauty and fecundity. But this vision would be imperiled by the appearance of the industrial machine, and the very moment of its first appearance would be a recurring trope in American literature. It would seem, in fact, that “Where were you when you first heard a train whistle?” was something akin to “Where were you when Kennedy was shot?” The former question was never articulated in the same manner, but the event was recorded over and over again.

Sacasas goes on to note that in art and literature that deal with pastoral themes, the encroachment of industrial technology is often presented with a certain degree of gentle threat or melancholy; it’s a kind of memento mori for rural American life, and a recognition that the world is changing in some fundamental and irreversible ways. Inherent in this is a nostalgia for the American pastoral, a sense that life lived within that context was somehow better and more real, as well as more communal. The pastoral was fetishized in much the same way that “IRL” is now, and its perceived loss was mourned in ways that should be familiar to anyone who’s read Sherry Turkle.

There’s an additional facet of this kind of thinking that’s worth pointing out: the idea that not only is human technology somehow separate from, in opposition to, and less real than the natural world, but that humans themselves are somehow divorced from the natural world. Lack of technology brings us closer to some vague construction of what we imagine as our true nature; use of technology takes us further away. In this understanding of us, we can exist in the natural world but we are not of it, and we can leave it. We aren’t animals. We aren’t inherently natural.

That’s about as dualist as you can get, I think.

What’s especially interesting about Stihl’s ads in this context is that Stihl is an outdoor tool company – it exists to make instruments through which a human being alters and remakes their environment. If we make use of dualist thinking for a moment and consider “real” as a spectrum, with the completely unconstructed and untouched-by-humanity at one end and the entirely technological and digital at the other, then a suburban backyard is hardly “real”. It’s a landscape that has been constructed entirely for human use, and ecologically speaking, it’s pretty much a desert (actually not even that, given the ecological richness of deserts).

Stihl is therefore building its ad campaign on some arguably flawed assumptions. But they’re powerful assumptions, and they draw on powerful elements in a lot of Western thinking about the relationship between humanity and technology. This suggests part of why digital dualist thinking is so pervasive: it’s much older than the internet or the smartphone. In order to truly understand what it is and where it comes from, we need to be sensitive not only to humanity’s relationship with technology, but also to how we’ve historically understood humanity’s relationship with what we perceive as the natural world.

Thanks, Tumblr.