
A comment from a friend of mine on my recent post about the XBox One reveal got me thinking about games and bodies and how exactly it is that we talk about both.

The thing about video/computer games – and in fact this is true of a huge amount of digital technology – is that we have this mythology built up around them that implies that they render bodies unimportant, that they divorce identity and movement and interaction from the physical embodied state in which most of us have historically experienced all of those things. This mythology is a classic example of Strong Digital Dualism: the digital and the physical profoundly disconnected, and passage from one to the other amounting to an entry into a wholly different world where none of the rules are the same.

Which is clearly not the case. And games are a fantastic arena in which to see how untrue this is.

Video games are often presented as bodiless experiences, especially when the controls are extremely simplified; in The Elder Scrolls: Skyrim I run, jump, cast fireballs and swing battleaxes, all from the comfort of my office chair. When I do these things, my body fades into the periphery of my conscious awareness, except for when my wrist or back (or butt) start to ache. And because there’s no force feedback in my game setup, the sensory input is all vision and sound; nothing is tactile, but because the tactile doesn’t directly matter, I tend not to notice that it’s not there. Likewise, when I play a game on my PS3, I only use my fingers and my thumbs, and I feel the vibrating feedback in the controller, but otherwise it’s all in my ears and eyes. I think it’s this experience of perceptual disregard of the physical that allows us to imagine that the digital “space” of a game grants us liberation from the limitations of our bodies; I can’t run for miles up mountains, but my high elf in Skyrim can. I sure as hell can’t shoot fire out of my hands, but my high elf in Skyrim roasts frostbite spiders without breaking a sweat.

But all of those things that I don’t notice anymore still matter.

I can move easily through Skyrim without any significant game setup modification because I can see (with glasses) and hear just fine. I can play for hours and not be bothered by my body because I have no chronic pain conditions that make sitting or remaining in one position for long periods difficult (though I know it probably isn’t good for me to do so). I can follow complex storylines and read lengthy text without any trouble because I have no significant cognitive disabilities. In short, I can temporarily disregard aspects of my physical experience and focus more fully on interaction with a form of digital media because of my able-bodied privilege. The game was designed for people like me – able-bodied, primarily neurotypical people – so my experience is seamless and mostly effortless.

The point is that the kind of Strong Digital Dualism that creates the above mythology is intrinsically ableist. It assumes that people are able to stop caring about their bodies. But this is patently impossible, as well as unfair. If I were blind or deaf, I would need elements of the game to be modified in order to be able to play at all, or I might not be able to play, period. If I had difficulty controlling my hands, a keyboard-and-mouse interface would obviously not work well for me.

Games are therefore a fantastic example of ways in which digital technology is profoundly embodied and usually designed with the able-bodied default in mind – and when you don’t possess the “default” body, this fact becomes (often literally) painfully obvious.

Which brings us to the Kinect.

When the Kinect was released, a lot of people assumed it would remain a niche form of controller, and for the most part this has been the case. But with the announcement of the XBox One, it looks like the console will be working to incorporate the Kinect more fully into more and more of its games – and into more and more of its functioning in general. This is significant because the Kinect, rather than allowing for temporary disregard of aspects of physical experience, makes bodies central to the experience of a game. Bodies are how you use a Kinect; you move in order to make things happen.

Which clearly has some consequences regarding who can and can’t use a Kinect. What’s interesting about this is that some people with disabilities are finding that a Kinect allows them to play games that were impossible to interact with before (for example, people who have difficulty with the fine motor control required for traditional hand-held controllers), while others with mobility issues (such as needing to remain seated) find using many features of the Kinect difficult or impossible.

Microsoft, to its credit, appears to have made at least some moves in the right direction, recognizing the disparities of use in their Kinect FAQ and meeting with the AbleGamers Foundation for a usability/accessibility roundtable. But as Microsoft correctly points out, the accessibility of a game is only partially to do with the controller; a huge amount depends on the design of the game itself. So a truly accessible game depends on many people at many points in the development process incorporating considerations of bodies and neurological arrangements other than the go-to able default. In other words, it depends on designers not approaching design from a Digital Dualist perspective but instead recognizing that the digital and physical together make up someone’s experience of technology, and that not everyone’s “physical” is the same.

It’s worth noting, once again, that those of us on this blog who’ve written against Digital Dualism don’t do so just because we feel it’s not a useful conceptual framework with which to approach the world, but because we believe it helps to perpetuate existing forms of inequality. Technology isn’t neutral; neither is theory. Nor should it be.