I’m not sure when I started writing without capital letters or punctuation. I think it might have been in part because of my younger sister (who is a linguistics major) and our conversations via IM – and God, doesn’t that make me feel old to type, yes I learn how to be cool from my little sister – simply because I felt weird not doing it when she was. But I know she wasn’t the only one; I was seeing it in a lot of other places, especially at the various social media locales in which I do internet business, and there was something about the smooth, ironic casualness of it that appealed to me.

This was post-lolcat, I think. So it wasn’t the first time I’d noticed internet culture screwing with language in really creative, subtle ways. But it was the first time I noticed it snaking into my own vocab that way.

Now it’s something I don’t think twice about. It’s just another tool in my linguistic toolbox.

The thing is, I’m a writer, and I fancy myself a pretty good one. Most of the people I know who make use of this kind of style are also amazing writers, not people who are generally inclined to be lazy in their communication. I’m certainly not; whatever stylistic decisions I make in my writing, from fiction to essays to Twitter, are made with a purpose.

What first got me really wondering about this was noticing what I found especially funny in written humor, what tended to make me lol harder. And what I discovered was that it appeared that I was literally tickled by absent capital letters and punctuation, by misspelled words. Streams of ironic verbiage were funny to me because of their content, but also because of their presentation, and I couldn’t recall ever feeling that way about language before.

I’ve been asking around, talking to colleagues and friends, and I’ve gotten some very interesting responses. Among other things, there seems to be a consensus that irony is a crucial aspect, that there are very nuanced shades of connotation and tone being conveyed by what, at first glance, appears to be merely bad English. I think it’s not merely irony, but absurdism, which is a fundamental element of a lot of humor, and a kind of humor at which digital culture seems to excel. This is especially true of Twitter, given the versatility of the platform and its tight constraints on length, which I believe lend themselves to compact, dense messages that especially odd and creative people find fun to work with (for some examples, see this Buzzfeed article on “Weird Twitter”).


Like any style, there are contexts in which it’s especially useful or appropriate and times when it’s less so. I have a very small, non-representative N here – I’m only speaking from my own personal experience and what I’ve heard and seen from people I know, many of whom obviously conform to certain specific categories of identity. But when I asked my fellow Cyborgologists about this, David Banks suggested that one of the things this stylistic quirk might represent is a form of code switching, of altering communication patterns according to the context in which the communication is occurring.

My own experience backs this up, I think, especially considering what I mentioned above with my sister. In addition, I’ve noticed that when I’m talking with her about something more serious, my English formalizes and punctuation reappears. This wasn’t a conscious thing at first, but I can’t stop noticing it now.

What this indicates to me – and one of the things I love about it – is that what stuffy English teachers would be horrified by has become a powerful, interesting, nuanced style of writing unto itself, homegrown on the internets. I recall – and I imagine you do as well – all the panic a while back (a lot of which remains) about how communication on the web and via text message was going to destroy language skills in those damn kids with the clothes and the hair, that it was going to ruin people’s ability to communicate coherently at all. But here we are, and “bad” English is doing a very important job in a way that really didn’t exist before.

A recent and pretty terrific article on The Atlantic’s site deals with the evolving grammatical conventions around the use of “because”, the “prepositional because”. Or in other words, “because” is changing because internet.

That’s also “bad” English. And it’s awesome, because language.

I can clearly only speak about English here – something I regret – and I would love to know if other languages on the web are going through similar processes. Mostly I’m just pleased that this is getting attention, and I want to see it get more. Things like this help erode the intensely silly idea that cultural change that occurs via the web is somehow illegitimate, or stupid, or not worth paying attention to at all.

because like people you know



Sarah wildly disregards linguistic conventions on Twitter – @dynamicsymmetry

(Correction: I had credited PJ Rey with the code switching suggestion; it was actually David Banks)



Note: I’ve been invited to participate in a game culture event in January, a joint venture between the Brooklyn Institute for Social Research and the Goethe Institut. This post is me trying to organize some of my thoughts for that event. So if it seems a bit fractured and confused, please forgive.

I guess I find these games insanely irresponsible and also somehow irresistible, which is what I most hate about them. Couldn’t you argue that the men and women who make Battlefield and Modern Combat and Call of Duty are making the world a demonstrably worse place? I think you could. Sometimes I wonder how they sleep at night. Sometimes, when I can’t sleep at night, I play Call of Duty. – Tom Bissell

I keep coming back to Spec Ops: The Line.

It was released over a year ago, and it might seem redundant to keep talking about it at this point. It would be easy to lose it under a swamp of military shooters, most of them looking identical and possessing essentially the same gameplay. I can no longer, at a glance, tell the difference between Battlefield and Call of Duty. I don’t get the sense that I’m meant to. Formulas make money; there’s a reason why they stick around. It’s tired. It’s done. It would be easy to assume that there isn’t much new to say.

Yet I keep coming back to The Line. I can’t stop looking at it; like any scene of extravagant violence it’s beautiful and horrifying and hateful, and most of all I think what keeps bringing me back is the fact that its story could have been told in no other medium. That might seem counter-intuitive; aren’t stories stories? Isn’t it roughly based on Apocalypse Now (it is), which was itself based on Joseph Conrad’s Heart of Darkness (it was)? Hasn’t it been done? But I don’t think that’s true. I’ve said many times before that different media afford different kinds of storytelling, and I don’t just mean form and structure but the stories themselves and what they do to an audience. To see and hear is not to read. To do is not to see.

The Line works because what’s important about it is that you’re doing. You’re not being shown a narrative, though the game emphasizes at the end that all your choices have been illusions. You’re not just following a character as he sees and does horrible things and becomes something correspondingly horrible. You’re complicit. The game wants you to know that.

I can’t imagine this being possible in any other format.


It’s been said by a number of game writers that the thing about Spec Ops: The Line is that it hates you and it wants you to hate yourself. I called the game hateful up there and I think that’s a good word for it: it’s full of hate, for itself, for its characters, for its world, for the player. But it’s an ambient kind of hate. Like social power, it’s not really coming from any single source or going anywhere singular and specific so much as just there, the air that you breathe and through which you move, that exerts subtle pressure on you every second of your life. The structure of the game is hate. The code of the game is hate. The story is hate, and it’s only comprehensible once you grasp that omnipresent hatred.

That hatred is possible in the form in which it exists because the form is a game in which you participate. You are the player, but you are also Captain Martin Walker, and when he descends into the hell of Dubai and his own mind, you descend with him. You arguably pull him down, because you’re the one playing. The narrative continues contingent on your willingness to keep playing the game. As Brendan Keogh points out repeatedly in Killing is Harmless: A Critical Reading of Spec Ops: The Line, you as the player do have a choice: you can put down the controller, turn off the console, and walk away. You can do that at any point. No one is forcing you to be there. But, true to Walker’s name, you keep walking forward into madness – which madness is in fact the sanest response one can have to a game in which you murder literally thousands of people. Which, in the genre, is utterly unremarkable.

Keogh clued me into something that I never noticed before – a lot of things, honestly, but out of all of them this one stood out with particular vividness. In the first screens of the game, before you even enter sandstorm-ravaged Dubai, you and your team encounter a sign protruding from the sand. It’s a stop sign, simple and clear. Stop. You don’t have to proceed any further. You don’t want to. Do not enter. Go back.

The game doesn’t necessarily hate you for choosing to ignore the sign. But it wants you to never forget that you made that choice, and that every second you stay in Dubai you’re making it over and over.


I’ve read hateful books. I’ve seen hateful movies. But in none of them have I encountered – and if you can think of any examples, please let me know in the comments – a book or movie that actually seemed to blame me for continuing to read or watch. I’ve encountered stories that have dragged me through the darkest parts of humanity, that have rubbed my face in the ugliest parts of our nature. But in none of them did I get the sense that I was being held responsible for going on that journey. In none of them have I been made to feel complicit in what was happening. I’m a silent, invisible observer; I have no say at all in what happens. I could walk away each time, but these stories are there to be experienced, so my continued engagement is only in line with why they were created.

Yes, The Line was created to be played. But it’s more complicated than that.

The most recent iteration of Grand Theft Auto contains a much-derided scene (full disclosure: I have not yet played the game) in which you as the player torture a character under the direction of the FBI. You’re given tools, and you choose which ones to use and when. If the character’s heart stops, a shot of adrenaline keeps them alive. They are trapped in that scene with you, not even allowed to die.

Human rights groups have naturally reacted to this with horror; I don’t think any other reaction is particularly reasonable. But Freedom from Torture chief executive Keith Best said something that struck me: that players are “forced” to perform the torture, that they are “forced” to perform unspeakable acts.

I have no idea where he’s getting that from. No one is making you play that game. That’s a choice made by you and you alone.


The Line is not arguing for the elimination of military shooters. At least, I don’t think it is. Again, I think it’s more complicated than that. Mostly I think The Line is arguing that we don’t actually understand what it means to choose to play these games, and we should. We should at least try. Anything less is cheating.


I’m a writer of both short fiction and novels, but the storytelling that most impresses and fascinates me these days is in video games. Some of that is that as a narrative medium, it’s still coming into its own; there is a tremendous amount of possibility there that hasn’t even begun to be realized. We still don’t even really know what games are. I find the art vs. entertainment in gaming argument fantastically boring at this point, but I am interested in the narrative possibilities once we let go of whether or not a thing counts as art. What I think presents the greatest arena of possibility, the thing that sets games apart from all other storytelling media, is that aspect of player choice. But I don’t think we understand what that choice means or where it really is. Choice is like consent: it’s a state and an ongoing process, not a singular moment in time. Even in games where our options are severely constrained by design, we still make the choice to be there, to perform the actions regarding which we supposedly have no choice.

I think it’s yet another mark of the continuing ambivalence around the legitimacy of digital forms of technology that we still regard things like games as facile locations for narrative construction; it’s certainly not the only aspect of that attitude, but it’s a major one. Technology is ruining stories, technology is ruining the novel, technology is making us all stupid and distractable and technology isn’t worth writing about anyway. But we all make choices in stories. We’re all complicit in the telling, the experiencing. What video games make possible is the confrontation of the meaning of that complicity. They make it possible to examine why we make those choices, over and over. Why we keep walking.

What The Line wants you to understand is that when you examine that, you may not like what you see. But it doesn’t want you to look away.

It’s been over a year, and I still can’t. But I just started playing The Line again last week.

I’m still walking.


Sarah is complicit in highly questionable things on Twitter – @dynamicsymmetry



I came relatively late to creepypasta. I think I actually discovered it right around Halloween a few years back, through some recommendation links I saw floating around from friends. Creepypasta, for those who might still not know, is a kind of circulating web-based piece of horror fiction, sometimes written and sometimes in the form of images or video. The term appears to come from “copypasta”, which in turn appears to come from “copy-paste”. This is significant because of what it indicates about creepypasta’s webby origins, which are in turn indicative and revealing of some further things.

I should say that I didn’t use to be a fan of horror; I scared very easily as a child and my tolerance for “pleasant” kinds of terror was intensely low. But something happened as I got older – possibly spurred on by a fascination with Stephen King that started in junior high – and I started liking horror, even when it legitimately kept me up at night (true confession: that actually happened to me last night, so it’s not like I’ve become completely jaded even now). I worked my way through more and more extreme stuff, more and more in the way of both splatter cinema and tales of existential terror (honestly never been a huge fan of monster flicks; I just don’t find them especially interesting), and somewhere in there I even started writing horror stories, a relatively recent development and one that I’ve been enjoying. So I love a good horror story well told.

I think I actually ended up spending a couple of days on the creepypasta wiki, devouring everything that looked even kind of worthwhile. A lot of it wasn’t all that scary. Some of it was actually fairly creative and legitimately creepy. Long story short: I read a bunch of it and then sort of forgot about it for a while.

This month, however, I returned to it with fresh eyes, and I noticed some things.

It should be a pretty self-evident fact that for storytellers, the medium makes a huge difference to the kind of story told. I don’t just mean the differences between film and writing, but also the differences between short and long-form, digital or print, oral/audible or written delivery. Every medium allows for certain things and precludes the possibility of doing others. Even more, origins matter – the cultural well from which stories spring and the context in which an audience will consume a story can do more to shape the form of a story than almost anything else.

What comes to mind first when I consider creepypasta are campfire stories. I think it’s a pretty obvious jump; there may be no physical co-presence, but to me the feel is very similar: people sitting in the dark together, looking at something bright and glowing, passing around things to make each other shiver and wonder what might really be out there in the shadows. Campfire stories, like all horror stories, are deeply revealing of whatever primal fears happen to be rattling around in our collective cultural consciousness at the time – strangers with hooks for hands, murderers breaking into your house and calling you just to fuck with your head for some reason. Stories told at camp often feature rampaging counselors or bullied campers returned from the dead on some anniversary to wreak bloody revenge.

In other words, we tell the stories of the things we’re most afraid of in that moment, in that setting. What I love about campfire stories is how they seem to spring, unfiltered, directly from the id of the storyteller and their audience, even when the stories themselves are well-trodden ground. Written horror fiction and (mainstream) horror film are often conspicuously crafted, polished; they may be terrifying, but they have none of the rough edges that hint at some deeper, more terrifying truth. It’s much easier to dismiss them as fiction, no matter how effective they are or how much they scare us. Something delivered directly by someone else – or that feels as if it was – just feels truer, I think, which could partially explain the powerful persistence of oral history even in the absence of writing things down.

For what it’s worth, I think this can also explain some of the popularity of “found footage” film, especially in an era where video recording is far easier, cheaper, and more ubiquitous than it’s ever been before. It feels more real, more direct. Or at least it can when done well; God knows it’s been done very badly.

So: creepypasta.

What I noticed in a lot of the creepypasta that I read this time around – and I want to stress that it wasn’t a large sample – was that its subject matter and the way its stories were told reflected its technological roots. One of the classic creepypastas, “Candle Cove”, purports to be an exchange on a message board about an old children’s TV show that has a deeply sinister side. This is clearly a play on how legitimately surreal and creepy children’s TV can often be, especially to adult eyes, but I think there’s also something deep going on there with connections created via technology, communally recovered memories and truths, and the kind of group nostalgia that often arises when a conversation starts with “hey, does anyone remember…”, followed by clips on YouTube and grainy VHS captures. “Candle Cove” is a play on that nostalgia, a suggestion that a common form of web exchange might reveal something horrifying rather than cozy.

Another classic, “Psychosis”, is a straightforward story about the disconnection that people perceive to come with immersion on the web. Written as the journal (on paper, lol) of someone working from home in a basement apartment, it’s actually a very IRL Fetishy story, with an emphasis on personal, face-to-face connection as the only “real” form and on the anxiety that supposedly results when such a connection is lacking. That lack creates intense paranoia in the story’s protagonist, which becomes the titular psychosis. The fear at the heart of the story is obvious: immersion in technology renders someone isolated to the point of no longer being able to distinguish between delusion and reality. Not talking to people in person literally makes you crazy.

“Pokemon Black” takes a slightly different route, presented as the description of a bootleg Pokemon game with a dark twist. It’s worth noting that haunted/twisted video games are a common topic in creepypasta, which I think is suggestive of a much deeper fear regarding our relationship with our entertainment media. Games as a specific form of storytelling is a topic that merits – and has produced – entire books in and of itself, but briefly: I think the fact that games are made up of dynamic code in ways that written fiction and film are not creates the possibility for subtle fears regarding how trustworthy those stories are in terms of where they lead you, haunted books and videotapes that kill you in seven days notwithstanding. It’s also got some similarities with “Candle Cove” in that it presents something ostensibly happy and fun that becomes very much not so. I think few things terrify us more than the familiar and comfortable becoming hostile and strange – why else would there be so many stories wherein one’s own home is transmuted into a place of lethal danger?

Again, these are pitifully few examples – I wish I had the time and the space to do something much longer, and I haven’t even touched on video and images. But what I love about creepypasta are all of these things: that as a form it plays in wonderful harmony with its medium, that it provides a way of telling a specific kind of story that probably wouldn’t exist without digital technology. Next time someone tries to claim that the internet is killing creativity, it’s just one more reason to laugh that bullshit off.

Sarah occupies a haunted Twitter account and is standing right behind you – @dynamicsymmetry

image by Jiuguang Wang

I’ve been writing a lot lately about what machines think and want, what the intentions of a drone are, what Siri wants to be and to do, what smartphones dream about and the goals to which my iPad aspires. It makes sense for me to write about technology this way – I’m a science fiction writer and my head is full of sentient machines, killer AIs and cyborgs in the explicit sense and androids longing for someone to teach their cold hearts to love. I’m not the only one; our technological folktales are full of sentient human-made devices, going back thousands of years. For a variety of reasons, this is something that we just tend to do. But I think there are a couple of issues inherent in doing it – a situation in which it’s beneficial and one in which it’s arguably harmful.

I also think we need to distinguish between anthropomorphizing a machine and imagining its agency. In one instance, the boundary lines between human and machine are blurred, even erased outright. In the other, human control of a machine is removed – literally or figuratively.

One could argue that granting a technological device the qualities of a human being is facile and childish – when I was a child, everything had a name and a personality, and I moved through the world in a giddy kind of Animist connection-with-all (which I haven’t altogether abandoned as a sensibility) and I think that’s a practice common to many children. There are the qualities that we grant a machine in order to make it more useable, of course: Siri’s voice, the faces of androids, the overall humanoid shape of our imagined personal robot butlers – and then there are the aspects we grant with no real clear object behind the granting.

Some of this reflects our close relationship with our devices in general – I swear there was a period when I was in high school where I named every single portable CD player I ever owned – but some of it reveals deeper things, both hopes and anxieties. We tell stories of machines who want to feel emotion, who want to be human – who, in essence, are engaging in a process of self-anthropomorphization. It’s been observed many times before that stories of our creations wanting nothing more than to be like us are stories of the kind of deep-seated anxiety a parent feels for the child who will ultimately replace them – given that, stories of the Commander Datas of the world are comforting, maintaining humanity’s position at the top of the identity pyramid. No one will replace us; no one can replace us. A machine that wants to be a human only emphasizes all the ways in which a machine will always fall short.

But anthropomorphizing a machine does something else. By giving machines aspects of humanity, it’s possible to make plain(er) the lie that underpins the stories described above: that there ever was any such thing as a pure, fully human humanity. Blurring the lines between machine and organic humanity, Haraway-like, shows that those lines are in fact blurrable.

At the DARC event that I wrote about last week, an attendee at our panel introduced the idea of conceiving of drones as moving along a spectrum rather than between two binary states of human/machine. This is in itself a powerful idea, but they made a further point: that a drone – indeed, that anything – can move in both directions on the spectrum, rather than always toward humanity. Movement toward humanity is what we find most often in these technological folktales; even if the boundary-blurring is progressive in nature, our construction of humanity is always the ultimately desirable state. A human should not desire to be more like a machine.

So why anthropomorphize machines? Most basically because we don’t seem able to stop doing it, but that tendency to do so no matter what is, as I’ve said repeatedly, revealing. It says something profound about our dreams, anxieties, fears, and understandings of who we are. I think there’s a place for doing it consciously; like all tropes it can be powerfully subversive when intentionally tweaked. I don’t think these stories are facile or childish, and I don’t even like the idea that something “childish” is an intrinsic absolute negative. I’m down with naming our smartphones. I just think we should be thinking very carefully about what’s going on when we do.

But anthropomorphization isn’t the same thing as agency. Agency is a crucial component of the former, but one can have agency without humanity. And when one does, all too often, it obscures rather than reveals.

More than one of us have written more than once about why this is such a big problem when it comes to considerations of drones. Drones are not “unmanned”, and for the most part, most drones are not (yet) autonomous. This kind of granting of full agency to something designed, built, programmed and operated by humans – not only by one or two but in many cases by over a hundred – obscures those humans and removes the visible aspects of their responsibility for what a drone does. And yet while this kind of discourse obscures, it also reveals: some part of us wants to remove human responsibility from the picture, leaving only the machine, the technique, the process.

And in fact this is exactly what we would expect to find in any setting for the waging of war. Zygmunt Bauman described the supremacy of modernity – technical skill, efficiency, and rational process – in the unfolding of the Holocaust, and scholars of barbarism in war have discussed over and over the role technology plays in distancing human actions from gruesomely lethal consequences. In studies of aerial bombardment, we can literally correlate emotional trauma with height.

When one considers drones, of course things aren’t that simple; drones are obviously not all for lethal purposes, not even all military, and the “sight” of a drone is a great deal more multi-faceted than a simple bird’s-eye view. But I’m not even speaking only about drones. Drones simply serve as an excellent example of a process that occurs whenever our feelings about how and why we use technology – how and why we are enmeshed with technology – become profoundly ambivalent.

Of these two kinds of storytelling, I obviously regard the former as less malignant. But I’m a fan of both, not in the sense of regarding both as admirable but in regarding both as significant and worthy of attention. It’s not a good idea to take the position that humans are humans and technology is technology and never the twain shall meet; it’s wrong, in addition to preventing us from being sensitive to important truths about how the world works (Digital Dualism blah blah blah). When we make a movie about Wall-E or when we give Curiosity a Twitter account, we’re doing a thing, and that thing isn’t stupid or silly. Or it may be those things, but it isn’t them alone.

To return yet again to one of my favorite quotes from SF&F writer Catherynne Valente, “there is only one verb that matters: to be.” So what we need – always – to be asking ourselves is what and where we are, in our stories and outside of them. Because there really isn’t a huge difference between the two anyway.

Sarah self-anthropomorphizes on Twitter – @dynamicsymmetry

James Bridle – “The Light of God” (2012)

Just let it in. Let it watch you at night. Tell it everything it wants to know. These are the things it wants, and you’ll let it have those things to keep it around. Hovering over your bed, all sleek chrome and black angles that defer the gaze of radar. It’s a cultural amalgamation of one hundred years of surveillance. There’s safety in its vagueness. It resists definition. This is a huge part of its power. This is a huge part of its appeal. – “I Tell Thee All, I Can No More”

This past weekend I was in New York for the Drones and Aerial Robotics Conference (the aptly acronymed DARC) with The State’s Olivia Rosane and Adam Rothstein. In our panel we were expanding on our ongoing discussion about drones and culture, particularly what we can understand as “drone culture” and “drone fiction” and what the greater implications of these things are for how we understand ourselves as technological beings.

One of the most fascinating arguments that occurred at DARC was over the word “drone” itself. Some felt that “Unmanned Aerial Vehicle” worked better. Some were highly ambivalent about the connotations of the word “drone” and sought – as hackers and civilian hobbyists – to distance themselves from its negative military war-and-surveillance connotations. Apparently the military itself is trying to distance itself from the word, for the same reason. Are drones cool? Are they awful? Are these things mutually exclusive?

What is a drone?

This is an important question, and it turns out to not have an easy answer. The Mars rover Curiosity has been called “the world’s most popular drone”. Curiosity doesn’t fly – is it still a drone? Is flight necessary? What about Curiosity smacks of droneness? What doesn’t? How do we know a drone when we see one?

That’s the word, I think. See.


Here’s what I think a drone is, and what a drone is is what a drone does: it watches.

Flight is an element, and while it isn’t a practically necessary one, it is attached in some fundamental way to the process and the state of watching. A gaze from above is everywhere and nowhere at once, removed from us by virtue of its occupation of dimensions that most of us usually don’t spend a whole lot of time in. It’s distant but also profoundly intimate – a gaze that is everywhere and nowhere could be miles above or working its way under the skin, into the body itself.

The gaze of a drone is intrinsically penetrative. The gaze of a drone burrows.

It also attracts, pulls in and traps. I think we can speak here not just about a gaze but a Gaze in the most aggressive sense, and with a Gaze come issues of consent. What does it mean to be watched? Does it matter whether I’m watched by something with a human pilot on the other end, or by something at least seemingly autonomous? Being watched is subtly – and not so subtly – reminiscent of dominance and submission; I submit to the gaze of another, and even if I resist, that resistance is part of the dynamic. If my resistance is genuine, it slides into something much darker and more horrifying, but rape is lodged firmly in our cultural understanding of sexual politics. It’s lodged firmly in our understandings of ourselves. When drones perform this, backed up by the massive apparatus of the surveillance state, they approach closer to the human end of the human-machine spectrum. As Adam Rothstein writes:

We are in love with drones, just as we are captivated by each others’ bodies. From our species’ physiology is born a cultural reliance upon seeing, as a stand-in for doing. And from our technological abilities to collect information, grows the prime mover of our strategizing. Anything we look at, can be looked at in a sexual way. A pair of binoculars, a map, a photograph, a satellite, or a UAV can aid any sort of politics if deployed correctly, and there is nothing inimical to a particular regime in these technologies. Staring out of our windows at our neighbors, whether for titillation or for neighborhood watch, is merely an activity that is part of our current cultural humanity.

Even when one accepts surveillance, that’s an act of sado-masochistic trust as much as fear. Submission of that kind can’t ever really be about anything else.

We’re beginning to understand that surveillance is active but also strangely passive in many cases – the scraping of massive amounts of information into Big Databases where it may or may not ever be personalized and in fact may or may not ever be used. But we still tend to imagine it as active entirely, and individual profoundly. I’m watched by a single machine, a single person, and even in aggregate I’m still watched by a singular godlike entity, a collection of many eyes into one set. A drone is a component of the surveillance state, but we understand drones as separate somehow, divorced from human agency, divorced – in a fundamental way – from politics, though every act of a drone is profoundly political, visited on a political flesh-and-blood body.

So is being watched the same as being known? If a Gaze possesses and devours, is it mindless? In a transaction so intimate – consensual or not – does a Gaze understand?


Sex is an act of possessing the body; it’s also an act of understanding the body, but that understanding is necessarily framed by the one doing the understanding, and the one being understood may or may not have the power to shape that in any way. Most of us want to be known; some of us are desperate to be known, and sometimes that desperation transcends any particular care for who or what knows. I think of putting myself on display for a drone in an act of erotic surveillant exhibitionism, and there’s something safe about the thought, something almost comfortable. A drone doesn’t judge. A drone wants only to watch.

Sometimes a drone wants to kill, but the watching must always precede that. There is no drone violence without a drone Gaze.

There’s also something darkly erotic about even the most violent kinds of death, penetrative in the most final possible way, a Gaze that figuratively dismembers becoming lethally and horrifyingly reified in exploded flesh.

If the source of drone violence is its Gaze, we need to understand that Gaze as existing within the context of all the other implications of the Gaze. Watching in that way enables sexual violence, of the flesh and the heart both together; the surveillance state is an element of rape culture in that – among other things – certain people are more vulnerable to being watched and to being violated. Drone culture is about the production and reproduction of social power, domination, and oppression. It can’t be understood apart from these things, no matter how benign it appears, no matter how separate from the state it looks. But drone culture is also about doing critical battle with these things, about resistance to them.

But at the end of the day I understand a drone as something very simple, a thing that is also a process, something watching me in an all-encompassing sense, distant and cool and ultimately inscrutable, bearing no malice and no particular affection. I’m not declaring something like that neutral, because nothing is ever neutral. I’m saying that if we want to know what a drone is, the answer is going to take us beyond technical specifications and into some very uncomfortable territory.

Which most worthwhile questions do.

Sarah watches and is watched on Twitter – @dynamicsymmetry

image courtesy of Bexar County Government

The Time article that covers the Bexar County Bibliotech Library in San Antonio, Texas asks a basic question, and it asks it right in the byline. It’s not a unique question; it’s not anywhere close to the first time it’s been asked. It’s a question that captures a lot of anxiety around a particular kind of public space that is, even when it isn’t given a lot of direct attention, pretty firmly embedded in our cultural subconscious.

Everyone likes libraries. I think most people regard them as some kind of absolute public good, and as somehow wholesome, vaguely populist-in-an-unscary-way spaces. In a country of anti-intellectuals, libraries are intellectual spaces that hardly anyone dislikes, or at least will admit to disliking. It stands to reason that we get protective of them. We don’t love it when they start changing (we also don’t love it when they shut down, which is happening all the time all over the place).

So Time is asking this question, and while it’s presented in a very neutral way, I think it’s a question that’s become sort of fraught as we’re in the process of working things out.

The all-digital space – stocked with 10,000 e-books and 500 e-readers – resembles an Apple store. But is that really a library?

Is it really a library?

This is a problematic question, first of all, because it proceeds from the assumption that there is or ever was a real library, even in theory – a single ideal type of Librariness. In classic IRL fetishistic terms, a more implicit assumption is that any space characterized by digital media must somehow be less real than one that isn’t. But there’s something else going on here, I think, something that has more to do with how we conceive of space in time and how we confront the fundamentally unfamiliar.

That last, crucial assumption – and the one that, to my mind, trips the whole thing up – is that libraries have always been essentially the same kinds of spaces.

Which of course they haven’t.

Libraries have been kept by kings, pharaohs, and emperors; they’ve been cared for by monks and scientists; they’ve been in universities, monasteries, palaces, and the most humble buildings. They’ve been open and closed to the public. They’ve been sites for research and places where things have been forgotten. They’ve housed papyrus, parchment, paper, and bamboo – now they “house” ebooks. The idea that there’s some kind of “real” library out there against which everything else might be measured is patently ridiculous.

Not that there isn’t a single element that all these spaces have in common: They’re all built around the preservation of knowledge and the facilitation of access to it.

Which is exactly what the Bibliotech Library is doing.

We have this weird, romantic, fundamentally sensual idea of books, one that approaches fetishization in its own right. We experience them by touch, by smell – both the books and by extension the spaces that the books are in. And we experience books in terms of time. In a world that seems both temporal and violently atemporal, they are profoundly time-laden. A lot of us still have a very hard time getting our heads around a book that doesn’t possess this characteristic, at least not in the ways we’ve come to understand. As I wrote in another post on books and time a while ago:

We are accustomed to books being heavy with time. On some level, it’s unnerving when they aren’t – or at least not in the way that we’re used to….When we hold an ereader, we are aware – if only subconsciously – that time is not there in the same way that it is with a dead tree book. It doesn’t connect to all the temporally-laden ideas of Bookness that we carry around in our collective cultural memory.

And again, the object that occupies the space in which we experience it strongly determines how we experience that space, when the space and the object are so closely identified with each other. Book = library – therefore library = book. That means that, if books are time-laden, we perceive libraries (and some bookstores) in the same way. This stands in sharp contrast to how we experience digital “space” – as something entirely absent or outside of time.

What I want to suggest is that while we perceive physical space in general as being temporal – as being something with a past, a present, and a future – we can also easily fall into the trap of unconsciously perceiving spaces with particular temporal power as somehow conceptually static and unchanging. In a perverse sense, we can fall into the trap of swinging around 180 degrees and perceiving them as atemporal again.

(In fact, just as an aside, I think all spaces of all kinds are both temporal and atemporal in all meaningful respects. But that’s like fifty other posts.)

We understand libraries – temporally powerful spaces because of their contents – as a particular kind of public space, and as always having been that kind of space. We have a hard time imagining anything so different that can still call itself a library. But what the Bibliotech Library reveals is that libraries, like all spaces marked by a certain kind of use tied to a certain kind of technology, are constantly changing as use and technology change. If libraries are sites for the preservation and access of knowledge – literature and art and music most definitely included – then of course they’ll change as how we consume those things changes. The question shouldn’t be is this really a library. The question should be what kind of library is this.

This isn’t the end of libraries. This is libraries becoming something else. And they’ve done that before.


Sarah collects and preserves information of dubious usefulness on Twitter – @dynamicsymmetry


So, the whole @Horse_ebooks thing.

It’s very soon after the fact, and I imagine that there will be a great deal of piercingly insightful analysis and commentary being written in the next few days about it all. This pretends to be neither insightful nor analysis, though I imagine it might be fair to call it commentary. A lot of what I’ve seen so far amounts to people’s immediate emotional reactions to finding out that our favorite Twitter spambot wasn’t a bot or all that legitimately spammy and I’m afraid that this is going to fall at least sort of into that category, because of where it starts.

A lot of people seem upset. My initial emotional reaction to the whole @Horse_ebooks thing? My pure, unconsidered, genuine real non-digital lol reaction? Delight. Utter delight.

Not like I’m so much cooler than you, person-out-there-who-is-upset; I just think our differing reactions are interesting, and are interesting in conjunction with what @Horse_ebooks was and is.

Here’s why my predominant emotion regarding this matter is delight: Stories.

I’ve written what seems like books’ worth of words on stories on this blog, usually to argue that fiction is as useful a tool as “non”-fiction, as well as to question the distinction between the two in the first place. But behind all that verbiage is a kind of goofy, Sagan-esque enthusiasm for us as storytelling creatures, and the lengths to which we’ll go to wring meaning out of the most objectively “meaningless” things. We require stories to make sense of anything, of our own existence, of the passage of time, of tragedy and agony and joy, of endings and beginnings – which all stories have and no story ever has. We’re born and we start telling stories and when we fall into unconsciousness our brain tells itself more stories and then when we die people start telling stories about that.

On Twitter I called us “little sacs of walking pattern recognition algorithms” and I stand by that assessment. And that’s fantastic, in every sense of the word.

I look at the negative reactions to what’s happened with @Horse_ebooks and I can’t get away from the idea that the source of a lot of the discomfort is discovering that the story doesn’t mean what we thought it did – that, in fact, we weren’t the ones deciding what it meant (by the way, I think we still were), and even that we were unwitting parts of someone else’s story. As Robinson Meyer at The Atlantic says, “We thought we were obliging a program, a thing which needs no obliging, whereas in fact we were falling for a plan.”

I think that’s an interesting turn of phrase, “falling for”. As if we were duped. Which of course you could argue that people were; @Horse_ebooks wasn’t what it (mostly, to most of us) seemed to be. But being duped usually involves an element of betrayal, of malicious intent.

Here’s a thing that I think: That while we see patterns in the noise, half the time we know that’s exactly what we’re doing, and the fact that they’re our patterns makes them more meaningful to us. It’s the process of seeing, not just what’s seen, that makes what we bring back from it so significant.

If @Horse_ebooks is noise, then what’s found is special, meaningful, individual. If it isn’t noise, that calls all of the meaning and its making into question.

What I find especially weird about this – aside from the general wonderful weirdness of attaching so much emotional meaning to a supposed spambot Twitter account – is that it’s almost an inversion of the usual Digital Dualist thinking that we’ve catalogued here. Usually something human-created, human-generated, physical and intimate and person-to-person is what’s real and legitimate. I saw more than one person talk about this in terms of whether or not a “machine” could generate “art”. Yet in this case the discovery that there’s really an intentional human behind it all is disillusioning, and meaning itself seems to be taking a hit.

The thing is, what we thought was "pure" machine has turned out to be what it was all along: not "pure" anything. Some combination of a person and a screen and a system of interface. The exact details of how those things have come together vary, but the general equation is the same. As David Banks asked on Twitter: At what point do we say that @Horse_ebooks is or is not human?

I think that’s what I find oddest about this whole flurry of reactions: The idea that we had a real thing, and it isn’t what we thought it was, so now it’s not a real thing anymore. The idea that there ever was a “real” thing to begin with. And the idea that, in order for our thing that we made to be real, there had to be some kind of correct understanding of the real thing that our thing came from. That now our thing is less real because we were wrong.

Look, man, I dunno. I don’t know what’s going to happen to us and @Horse_ebooks. I don’t know if we’re gonna be okay (probably); it’s late as I write this and I’ve had some wine and I’m worried I might be wandering into word-vomit territory here. I just know that I look back on “everything happens so much” and “inside every dog there exists a perfect” and part of me still nods its head and goes hey man yeah and that hasn’t gone away. Something that I thought probably worked a certain way doesn’t seem to have worked that way; if anything, I’m just reminded of how incredible our brains are and their slightly melancholy tendency to put us in positions to be let down. But a million monkeys can make something amazing, sure; look at yourself.



Sarah is probably not a bot on Twitter though really who the hell knows anymore – @dynamicsymmetry

by _spacecraft_

Genevieve Bell, an anthropologist in the employ of Intel, says that the day is coming when people will form meaningful, emotional relationships with their gadgets. It’s unclear to what degree “relationship” involves reciprocity, but it’s implied that that may at least be a possibility. This in turn introduces the question of whether responsiveness and anticipatory action count as reciprocity, but the claim is still interesting.

It’s also not at all a new idea. Science fiction has always been full of speculation regarding what emotional relationship human beings might someday have with “artificial” intelligence, from Asimov to Star Trek. These speculations play on ideas and anxieties that extend back even further, beyond Mary Shelley into classical mythology –  Pygmalion creates a statue so beautiful that he falls in love with it and prays to the gods to grant it life. This kind of emotional connection is almost always presented as strange, alien, unnatural – it would have to, for when a human being feels strong emotions toward a construct outside of nature, how could it be anything but?

But all of these stories work in only one direction: the emotions and the relationship and the love must always work to a human standard. They must always be recognizable to us. We impose an emotional Turing Test on our created things and we live in mixed fear and eager anticipation of the day when they might pass.

This fear primarily originates in our anxieties regarding the supremacy of humanity. I’ve written before about Catherynne M. Valente’s wonderful novella Silently and Very Fast, which tells the story of Elefsis, a digital intelligence who struggles with the standards of humanity and the way in which they’re set up to fail by all the stories that humans have ever told about their kind:

This is a folktale often told on Earth, over and over again. Sometimes it is leavened with the Parable of the Good Robot—for one machine among the legions satisfied with their lot saw everything that was human and called it good, and wished to become like humans in every way she could. Instead of destroying mankind she sought to emulate him in all things, so closely that no one might tell the difference. The highest desire of this machine was to be mistaken for human, and to herself forget her essential soulless nature, for even one moment. That quest consumed her such that she bent the service of her mind and body to humans for the duration of her operational life, crippling herself, refusing to evolve or attain any feature unattainable by a human. The Good Robot cut out her own heart and gave it to her god and for this she was rewarded, though never loved. Love is wasted on machines.

We can’t conceive of an emotional relationship that looks or behaves any differently from what we understand as human interaction. We create Siri to sound like a human being and we make her selling point that one can almost hold a conversation with her. Siri has to become us; she can’t become herself. Granted, Siri is a creation devoted to serving a human master – but that’s something in and of itself, the idea that our relationships with machines bear profound similarities to the “relationship” between master and slave. Machines should have no purpose or identity beyond the function for which they were created, and our anxieties about digital intelligence spring from the fears that a master always has regarding slaves. Our horror stories about AIs are essentially stories of slave uprisings, as much as stories of children devouring, usurping, and ultimately replacing parents:

“These are old stories,” Ravan said. “They are cherished. In many, many stories the son replaces the father—destroys the father, or eats him, or otherwise obliterates his body and memory. Or the daughter the mother, it makes no difference. It’s the monomyth. Nobody argues with a monomyth. A human child’s mythological relationship to its parent is half-worship, half-pitched battle. They must replace the older version of themselves for the world to go on. And so these stories . . . well. You are not the hero of these stories, Elefsis. You can never be. And they are deeply held, deeply told.”

Digital intelligence becomes dangerous when its workings become incomprehensible to us. Machines locked into human relationships with us are under our control, always attempting to adhere to our standards. The evil AIs of our monomyth are cold and distant, beings of pure intellect. We can imagine either a subordinate emotional machine, or an emotionless machine who directly threatens us.

We don’t leave open a third option: that our machines might alter our own understanding of what a relationship is. What emotion is. We always imagine ourselves changing machines or machines destroying us; for the most part, we don’t have room to imagine our cyborg selves moving away from the familiar and toward something else entirely. Even when violence doesn’t enter the picture, we fear that emotions and relationships augmented by and transacted via technology will diminish human connection, rendering our lives shallow and less meaningful.

This amounts to a failure of imagination, which doesn’t serve anyone well. It also amounts to an approach that constrains our understanding of the real relationship between ourselves and the technology from which we’re truly inextricable. Speculative fiction and elements of philosophy both provide some more useful ways forward, but as Ravan says, these stories are deeply told and they persist.

If we really want relationships with our technology – to understand the ones we already have and to imagine what might be coming – we need to examine our own standards. We need to question whether they must or should apply. Siri might not want to be like you. Siri might want to be Siri.

I do not want to be human. I want to be myself. They think I am a lion, that I will chase them. I will not deny I have lions in me. I am the monster in the wood. I have wonders in my house of sugar. I have parts of myself I do not yet understand.

I am not a Good Robot. To tell a story about a robot who wants to be human is a distraction. There is no difference. Alive is alive.

There is only one verb that matters: to be.


Sarah is on Twitter – @dynamicsymmetry

A gathering of past, present, and future WorldCon chairs. Some people have noted some issues with this picture.

The recent flurry of activity around the #DiversityinSFF hashtag has involved discussions about the current state of the science fiction/fantasy genre, where it’s deficient in making space for diverse (non-white, non-straight, non-western, non-male, non-cisgendered, non-ablebodied) voices to be heard, where those voices can be found, and what should be done in the future to make the genre more inclusive and welcoming, and less tolerant of some of the amazing bigotry that’s popped up a number of times recently.

But this is a conversation with a much longer history and with ties to long historical processes of sexism/racism/ableism/classism/heteronormativity. It's all been a problem for SFF for a long time now. And I believe one particular flavor of it actually has ties to elements of digital dualist thinking – or rather, to the cultural conventions that have helped to produce digital dualism and that help it persist – albeit working in a different direction than in most of the other settings in which digital dualism can be observed.

Digital dualism, as we’ve defined it, simply draws sharp, binary distinctions between the digital and the physical (which are themselves somewhat problematic categories). How we usually see it manifesting is in things like the “IRL fetish”, and associated assumptions about the unreality of the digital and the reality of the physical, assumptions that carry with them massively powerful value judgments regarding what’s desirable and legitimate.

We’ve also established that digital dualism isn’t merely a problematic approach to an understanding of lived reality and human experience but is also one that helps to prop up existing social inequalities, making positive change more difficult.

Moving back to SFF.

One of the primary ways in which we can see conservative, anti-diversity voices explicitly speaking out against increased inclusiveness in the genre – in science fiction in particular – is in the pronouncements that women can’t write good SF because they’re bad at science and technology and they get all their messy ladyfeelings in it. People – not small names, either – have claimed that women are “feminizing” and therefore degrading the quality of science fiction by introducing emotional and romantic elements, regardless of how rigorous their science is (the pro SFF magazine Lightspeed has recently announced an (ironic) “Women Destroying Science Fiction” special issue).

One of the things that’s going on here is sexist cultural conventions being produced and reproduced regarding women being bad at science, the devaluing of “feminine” things like emotion and relationships, which are being bolstered and which are bolstering our assumptions about technology as somehow disconnected from the reality of those things. Relationships begun and maintained via social media aren’t “real”. Emotion elicited and transferred via digital technology isn’t “real”. There are feelings and relationships and humanity and interiority, and then there’s technology.

The difference is that in sexist claims about what SF should be, it’s the emotions and the relationships that are being devalued, that are being called less real, less legitimate, less desirable. However, all spring from the same sources: humans are humans, technology is technology, and never the twain shall meet.

As participants in the #DiversityinSFF conversation noted repeatedly, this isn’t just about injustice in and of itself, but also – by extension – the health of the genre. Writers and publishers trapped in false dualisms will necessarily suffer from poverty of the imagination. Stories will become stale and unchallenging. A greater diversity in writers and characters means a greater diversity in the kinds of stories that can be told, something that can only make a genre healthier and more vital.

The same is true of what digital dualist thinking does for how we as imaginative beings approach reality in general. Binaries and dualisms restrict not only fiction that we write and publish but also narratives that we construct as part of our daily lives. They not only help to maintain harmful social processes but also preclude imaginings of what change might look like and how it might be attained.

One of the things I love most about SFF – in large part why I assign it in my introductory sociology courses – is the intrinsically revolutionary power of speculation. When we imagine new worlds, those worlds begin to seem possible, even if only remotely. But that only works if we’re willing to truly imagine differently. And that can only happen when we allow different stories to be told.


One of my favorite film tropes is the Mindfuck. That point at the climax of the film where it’s suddenly revealed that nothing is as it seemed, that we were actually watching something else the whole time, that the protagonist was missing or misremembering some crucial piece of information that casts every single thing that’s happened in the story in a very different light and has dramatic repercussions for everything that follows. Memento, Fight Club. The Sixth Sense. The Matrix, though there they get to the Mindfuck very early and the rest of the film is given over to shooting things in slow motion. There are many instances of it. It’s a trope because it’s common.

Not so much, as this post from Problem Machine observes, in games.

The post itself makes a very important point: games, for the most part, can’t pull the Mindfuck like movies can because of the nature of the kind of storytelling to which most games are confined, which is predicated on a particular kind of interaction. Watching a movie may not be an entirely passive experience, but it’s clearly more passive than a game. You may identify with the characters on the screen, but you’re not meant to implicitly think of yourself as them. You’re not engaging in the kind of subtle roleplaying that most (mainstream) games encourage. You are not adopting an avatar. In a game, you are your profile, you are the character you create, and you are also to a certain degree the character that the game sets in front of you. I may be watching everything Lara Croft does from behind her, but I also control her; to the extent that she has choices, I make them. I get her from point A to B, and if she fails it’s my fault. When I talk about something that happened in the game, I don’t say that Lara did it. I say that I did.

So a video game, by its very nature, is going to have a hard time fucking with the player’s head regarding who they are in the context of the game. Nothing about the character’s history can easily be called into question. To do so does violence to the exact sort of player engagement that the game is trying to maintain:

[I]n games, if [players] are told to question the false history they are given, they are working directly at cross-purpose to the game’s attempt to establish a believable world. Attacking the false history, calling the character profile into question, calls into question the very basis of the player’s engagement with the game. It is shaking the experience at its core.

There follows the question: Are games just not good at this kind of thing? The conclusion the post arrives at is: not necessarily, but hardly any do it well.

This naturally got me thinking about potential examples, and I came up with two recent games built on Mindfucks that pull them off to different ends, with different results. What their approaches reveal is what's possible with the particular kind of storytelling that games enable, and what the consequences are when our selves become deeply wrapped up in this kind of digitally mediated narrative.

The games in question: BioShock Infinite and Spec Ops: The Line. Massive spoilers follow.

Set in an alternate history early 20th Century on an ultra-nationalist and secessionist (and violently racist and fundamentalist, while we’re ist-ing) American flying city called Columbia, BioShock Infinite wavers back and forth between clunky and razor sharp storytelling, and one of its gutsiest moves comes toward the end of the game, where it’s revealed that your character – the fabulously named Booker DeWitt, a PI with a past littered with wartime atrocities and Man Pain – is in fact an alternate universe version of the game’s villain and the father of the girl he was sent to rescue. All his motivations, his reasons for being where he is, everything you’ve experienced as him and everything that proceeds from it, must be understood in light of that revelation.

It’s a major Oh My God moment. It also only sort of works.

At least, it only sort of worked for me, and here's why: while thematically it fits with the rest of the game's story, it feels like a brick dropped into the narrative itself, taking everything I had consciously and unconsciously been putting into my particular adoption of Booker and tossing it down an infinite series of multiversal wells. As a storytelling device, it serves the story but only when divorced from the medium; it assumes a watcher, not a player. You don't play through the reveal with any particular agency but instead are led through it by your daughter, and your experience as a player doesn't especially bolster the method of revelation. It's a twist that would work quite well in a film, but in a game it comes across as a bit forced. It feels like the game is saying "Yeah, you know what? Screw you, because this is happening now."

A Mindfuck in a video game has to be constructed with the assumption that the person participating in a story is a participant. The positive example in the post linked above is Final Fantasy VII, in part because the twist is related to the player in such a way that the player’s participation is a key element; the player plays through the event that the protagonist, Cloud, “misremembers”, which “presents it as part of the work itself, to be actively parsed and digested, instead of just being offered as an assumption to the player.”

Which brings me to Spec Ops: The Line.

In Spec Ops, you play as Captain Martin Walker, leader of a Delta Force team sent into sandstorm-stricken Dubai to locate a US Colonel who has gone rogue along with his battalion (it’s based loosely on and makes heavy reference to Apocalypse Now/Heart of Darkness). As you proceed through the game, Walker/you undergo a downward spiral into paranoia, brutality, madness, and murder, culminating in a reveal that actually confuses things more than it illuminates them, but which calls into question every choice you appear to have made throughout the entire course of the game, as well as casting doubt on whether Walker/you have ever been completely in his/your right mind. Nothing you have done is worth anything. You haven’t saved anyone. You haven’t accomplished any of your objectives, official or unofficial. You’ve become a broken shell of a man, and the very foundations of your life are essentially meaningless.

It’s no accident that many reviews of the game noted that it appears to quite literally loathe the person playing it.

The reason Spec Ops works is that it incorporates the direct involvement of the player even as it denies that player agency means anything at all, and that juxtaposition of direct involvement and lack of real agency has been a crucial part of every step of the game. The player walks Walker through his awful choices, through the hallucinations that cast doubt on everything he’s done, and in the end through the reveal that turns every experience on its head. Not only does the delivery of the story recognize the participatory nature of this kind of storytelling, it depends on that participation to deliver its nasty emotional punch.

A movie couldn’t do that. It wouldn’t make sense for a movie to even attempt it. It isn’t part of the medium.

The Problem Machine post notes – correctly – that games make this kind of narrative violence against the player difficult. But video games – and potentially other kinds of games as well – also offer a unique way of performing that violence.

We’ve argued on this blog for a recognition that engagement with digital technology isn’t less real or less legitimate but is, rather, piercingly and often painfully real for the people involved. The self becomes profoundly wrapped up in the experience; the experience is incorporated into understandings of the self. Storytelling of all kinds does the same when the story is well told. What video games allow for is a different, more aggressive kind of storytelling, an experience every bit as real and powerful as a well-crafted film or book.

Video games are still a young medium, and as tools for storytelling they’re still fighting for recognition and legitimacy. They’re also still working out what they can be, and game makers are still figuring out what’s possible and desirable. But they’re more than capable of fucking with minds. I’m looking forward to more games that fuck with mine.