In 2006, the body of Joyce Carol Vincent was found in her apartment. The TV was still on and she was surrounded by wrapped Christmas presents.

She had been dead for three years. No one had noticed.

This might seem like odd subject matter for a game, but in fact a game was planned around it, to coincide with the release of a documentary about Vincent entitled Dreams of a Life. I finally watched it last night and then, as I often do when I watch movies that affect me strongly on an emotional level, I went looking for more information. What I found was a Kotaku article from last year that tells the story of the development of the game, a story that ultimately ends in (partial) failure. What interests me, aside from how astonishing it is to me that someone would even try to make a game about Vincent’s life and strange death, is why the game failed in the end.

Most obviously, of course, it’s heavy subject matter that touches on some of the social facts that generate tremendous anxiety and fear for a great many of us – for the same reason that Vincent’s story struck a chord for so many people when it became known. Who are my friends? How close are we, really? Will they remember me when I’m gone? Will they even notice? If I was hurt or in trouble, how many of them would help me? Just how expendable am I in their eyes? Will I someday be completely alone?

But the primary reason why the game failed is actually much simpler and more fundamental: Games aren’t (currently) structured in a way that allows for an effective story to be told about something like this, and that structure has as much to do with the assumptions that we bring to the medium as with the objective structure of the medium itself. More specifically, the kind of storytelling that the subject matter seemed to call for would fail in its intended effect the second that the player started thinking of it as an actual game.

The idea behind the game was that players would be brought face to face with some of the questions listed above and would be offered the chance to connect something of theirs to a person in their past – which, it quickly became clear, just wouldn’t work. Basically, as designer Margaret Robertson explained:

We were confident that posing those questions would get people to think about things they’re not usually thinking about. The problem was that, the minute we enclosed them in a game structure, we tainted their answers. Even if this is a game that isn’t about winning or losing or dying or enemies or anything like that, the minute you understand that your progress is being impeded and that your inputs and choices are going to free that progress, you want to free that progress. We can’t not want that. So the minute you say: ‘Who do you want to give the ring to?’ I’m thinking, ‘Well, shit, what does the game want me to say here?’

This reveals something significant about the logic behind games, and, more generally, how we interact with most forms of technology – and how that menu of interactions is limited to what we can imagine. We understand games as fundamentally puzzles, albeit puzzles with potential narrative significance. Puzzles need solving. Solving them allows for progress through the game; we unlock more content by successfully completing certain tasks. When we’ve solved everything and progressed as far as we can, we’ve won the game.

In other words: Games, by definition, have winstates. And we expect them to. The instant we’re engaging with a game, we’re instinctively trying to discern what the winstate is and how we can reach it.

This is true even of games that are intended to be experiences of a created world as much as they are puzzles – games like Myst, and more recently thatgamecompany’s Flower and Journey. We’re still trying to do whatever it is we need to do in order to progress through the world. Journey is probably one of the most emotional games I’ve ever played, yet it’s still made up of a series of puzzles.

So what we’re dealing with here is actually one of the limits inherent in games: The point at which winstates stop being the goal and start becoming distractions. Because it’s still very hard for most of us to shed the fundamental assumption that they are the goal.

This becomes especially problematic when what we’re doing to achieve the winstate is objectively kind of horrible. I’ve heard it observed that when you kill a character in a shooter, you’re not really killing someone in your head – even someone fictional, and as I’ve argued before, fiction is a significant component of reality – so much as you are solving a puzzle. Killing someone is just what you need to do in order to progress through the game; in Call of Duty, you kill a bunch of dudes to get to the next area so you can kill a bunch more dudes, and so on ad nauseam. If you want to, say, tell a story about exactly what it means to commit murder on this scale, the emotional and ethical weight of those murders is still going to be diminished by what it actually means to kill in the context of a game. By what we assume it means.

Some games have commented on this pretty effectively by questioning precisely what it means to kill in this way with this level of significance. Spec Ops: The Line, for example, gives the player the standard kill-dudes-next-area-kill-more-dudes shooter experience and then turns around and heaps abuse on the player for doing exactly what the game constrains them to do, as well as on the very assumptions with which we all engage with shooters. It’s a game that actively hates itself for what it is and hates the player for playing it. It knows how its own structure and its relationship with the player constrain the kinds of stories that can be told and the kinds of actions that can be performed. It’s not trying to break out of those limitations; it’s questioning whether a breakout is even possible.

Our actions are naturally constrained by what we perceive as not only appropriate but possible. We can’t do certain things with certain technologically mediated forms of storytelling because there are limits to what users can imagine within the context of those media. What I want to emphasize here is that this is a very real problem for anyone trying to do anything innovative with design; too innovative, too unfamiliar, and the user won’t possess the baseline assumptions, imaginings, and understandings necessary to experience the medium in the way the designer intended. This is a particular problem with operating systems, as the backlash to the rollout of Windows 8 reveals. Even if people can figure out a new thing, they might not find it a comfortable space to be in if that space doesn’t conform to their expectations of what that space can and should be like. Having to expand the bounds of what one expects is not always – if ever – a pleasant experience. And sometimes it simply can’t be done.

But I want to return to storytelling in particular, and especially about what it means to tell stories with emerging forms of technology, with things that are still arguably in flux. Computer and video games are still very much a new medium; we’re still figuring out what they can even do, and there’s been a lot of debate around what’s really possible. Our perceptions of what’s possible tend to be persistent; with visual media, there are certain assumptions coded within different forms, and when there’s a mismatch between those assumptions and what the artist is using them to do, the effect can be jarring.

Our assumptions about how to engage with different technologies will almost certainly expand along with how we use them (and in many ways changing assumptions will probably be the driving force behind new kinds of use). We’ll probably see a day when games aren’t defined by winstates. In the meantime, however, death can only mean so much.

 

Sarah confronts the narrative limitations of 140 characters on Twitter (@dynamicsymmetry)

image source: US Air Force

A great many words – though a lot of people would probably say not nearly enough – have been spent on the United States’s drone war, on what it means, on who dies, on what it suggests about what war will look like in the future, though of course we appear to remain generally unconcerned about what it looks like to civilians on the ground watching their villages explode. But a recent piece by Adam Rothstein in The State makes a powerful and provocative claim: That when we write and think and talk about “drones”, we’re really writing and thinking and talking about a thing that needs to be understood as distinct from the actual specific varieties of UAVs themselves. In fact, Rothstein argues, when we engage with the concept of a “drone” we have stepped from the realm of nonfiction into the realm of fiction:

Drones are not real–they are a cultural characterization of many different things, compiled into a single concept. One writes non-fiction about the RQ-4 Global Hawk, the RQ-14 Dragon Eye, or the iParrot Quadrocopter. These are all unmanned aerial vehicles, or UAVs, of which there are so many sizes, types, and ranges of purpose, as to make them impossible to conflate in a non-fiction manner. A iParrot quadrocopter has more to do with a model train than it does with a Global Hawk, and yet when we write about “drones” we are always referencing both of these together, and therefore, we are already out of the domain of non-fiction, even if we still surround ourselves in facts.

There are a number of points here that I want to address. First and foremost, the implications of what Rothstein is describing don’t merely tell us a lot about how we think about drones and drone warfare; they also have a lot to tell us about how we experience and imagine reality itself. This is very heavy stuff already, but I think it’s even heavier than it initially might appear.

In dealing with this first point, I actually need to proceed to the second one, which also amounts to a mild disagreement with/desire to expand on the characterization and terms of Rothstein’s argument. I think Rothstein is exactly correct in pointing out that when we engage with different aspects of our world from different angles and with different elements of specificity and connotation, we often aren’t engaging with them in ways that we would recognize as “nonfictional”. That’s all fine and good and true. The quibble I have – and it’s at once minor and kind of important – is that Rothstein is still writing about fiction and nonfiction as if they were clearly distinct categories of understanding, even if they overlap somewhat.

And I don’t think they are. At least not so distinct as all that.

I’ve touched on this idea at a couple points here before, and now I want to expand it somewhat.

Rothstein describes nonfiction as, among other things, a “historical project.” In fairness, he’s mostly using the term in order to point out the ways in which nonfiction – to his thinking – isn’t confined to “restricting itself to the face of a cultural characterization” in the way drones are. But the mention of history is significant whenever we end up talking about fiction and nonfiction.

Historiography is marked by a long and ongoing debate about the degree to which historians can speak with any objective accuracy about basically anything, or whether any historical project is necessarily going to be bent and biased by the historian’s own assumptions, cultural and temporal context, mode of writing, narrative conventions, and a host of other problematizing things. That argument is a little beside the point for my purposes; what I want to use it to highlight is the fact that fiction and nonfiction aren’t dichotomous binary categories but names for a porous and often nebulous reality of story and narrative and memory through which all of us move, and which all of us experience differently. This doesn’t mean that nothing is knowable – not necessarily – but rather that it’s just not that simple. Fiction is characterized by invention born in imagination, but every time we open our mouths to talk about anything we’re more or less embedded within that process.

There are elements of the created and elements of the “objectively true” in everything we talk about. In this sense, I think it’s fair to draw a comparison between this kind of (what I’ll call) narratological dualism and the concept of digital dualism. Rather than distinct categories that don’t intersect – you can be in one but not the other at any given time – I want to argue that we need to understand them as categories with different natures, uses, and intents that nonetheless constitute the same “reality”, the same lived experience.

But also: discussions of fiction and nonfiction are not only marked by this kind of dualism but tend to privilege one over the other as more legitimate and real and – often – good. Fiction is regarded as wonderful by those who love it, but I think there’s a general sense in our culture that as nice as it can be, it’s just escapism in the end (especially what literary gatekeepers snootily refer to as genre fiction, best said with the nose uplifted and a faintly condescending smile) and ultimately kind of silly in comparison to the grounded and “real” work of nonfiction. The argument about fiction in historiography first really began when a bunch of historians in the nineteenth century started complaining that historical fiction – which was quite popular at the time – was muddying the waters of the discipline and degrading its truth-telling mission. What this argument really comes down to is whether or not fiction – or, in my characterization, fictional elements of understanding – can allow us to meaningfully engage with the truth of the past. Australian writer David Malouf argued that it could, and that in fact it was uniquely well-suited to do so:

Our only way of grasping our history—and by history I really mean what has happened to us, and what determines what we are now and where we are now—the only way of really coming to terms with that is by people’s entering into it in their imagination, not by the world of facts, but by being there. And the only thing really which puts you there in that kind of way is fiction. Poetry may do so, drama may do so, but it’s mostly going to be fiction. It’s when you have actually been there and become a character again in that world.

This brings me to my final point: that fictive writing doesn’t just allow us a deeper understanding of our past but a richer window into our present and a more vital imagining of our future. As I’ll argue extensively to anyone who has the misfortune to raise the topic with me (I am so much fun at parties), far from being merely escapism, fiction – especially speculative fiction – is a fantastically useful arena in which to do social theory, yet it’s one that most social scientists roundly ignore. Rothstein points out that science fiction is uniquely well-suited to allow us to engage with what we really understand by “drone” and what it can tell us about our general experience and construction of specific forms of technology:

This is why we turn to science fiction to hear about drones–because this writing corresponds to our imaginary world, and the characterization we have formed around drones. We pull UAVs into our fantasies of the future and technology. To allow us a separate dimension of speculative investigation drawing upon the world of facts is science fiction’s purpose, at which it excels.

Speculative fiction, among other genres, allows us to explore the full implications of our relationship with technology, of the arrangement of society, of who we are as human beings and who we might become as more-than-human creatures. It’s useful not because it’s expected to rigidly adhere to the plausible but because it’s liberated from doing exactly that: it’s free to take what-if as far as it can go. This differentiates it from futurism, which is bound far more to trying to Get It Right and therefore so often fails to do exactly that. William Gibson didn’t set out to imagine right now, but he was able to get far closer to it than a lot of futurists precisely because he wasn’t subjected to the pressure to do so. I think it was far more chance than any temporally piercing insight, but when we can imaginatively go anywhere, we usually get somewhere.

And then we can look back on what we imagined before, and it can tell us a great deal about how we got to where we are now and where we might go in the future – and where we need to go. We can’t do pure nonfictive work on “drones”, but to the extent that the work we do is fictive, and to the extent that we recognize this, it tells us so much in ways that other things can’t and don’t:

The problem, is that in other less speculative forms of fiction that are more related to our present day emotions–like, to take one example, the novel–we are completely unwilling to engage with drones. We read and write in a world divorced from the spectacle of drones, and even more so, beyond reach of the fact of UAVs. The problem with fiction like Zero Dark Thirty is not simply that it is historically inaccurate. It is that it is alone in the field. War movies, terrorism TV series, and major news outlets have a monopoly on the characters of drones…There is barely any art and literature that attempts work with the more surreal aspects of our understanding of drones, let alone in a way that might connect our attention back to the facts of UAVs.

Fiction is part of what constitutes The Real. It’s an investigative tool as useful as any other. We need to use it. But we can’t do that until we understand it for what it is.

For a novelist, the route to publication is frequently strange and even more frequently frustrating. For me, one of those frustrations has been especially acute because I get the distinct sense that I shouldn’t even feel that way.

The publisher I’m with (Samhain Publishing) is primarily an ebook publisher, though they do send every book over a certain wordcount to print. Mine falls into this category, which is fun, but what I’ve been finding less fun is the fact that the print edition of a book follows the digital edition by several months. So I have to wait to hold my book. And it’s driving me crazy. I was thinking about this the other day, about why it’s driving me crazy, and I realized something kind of disturbing: My own book will not feel real to me until I can hold it in my hands.

This is digital dualism; more, I’m privileging the physical over the digital; more, this is regarding my own work, something I created. I can’t seem to talk myself out of feeling this way, even though I know I should know better.

I get the sense that a lot of writers feel this way, even the ones who publish primarily through digital means. One of the appeals that scam vanity publishers like PublishAmerica milk for all they’re worth is the idea of having your work in print, something that you can give people and sign for them, isn’t that exciting, wouldn’t you pay us thousands of dollars for that. For many of us, an ebook just doesn’t feel quite as real as a trade paperback, though most of us couldn’t begin to articulate why this is so. A lot of it is probably related to traditional understandings of what it means to be published at all; for the longest time, getting your book into print wasn’t only the preferred way to be published but the only way to be published, and I think that idea is still highly persistent, even with big New York publishers in trouble and Barnes & Noble stores closing up shop everywhere. Ebooks are still regarded in a lot of circles as the oddball newbie, sort of interesting in their way and undoubtedly profitable at times but still not really worth taking seriously (though that’s changing). Print is legitimate. Print is For Real Published.

Some of it is also probably the idea of having passed through stricter gatekeepers. There are at least as many ebook publishers now as print publishers – though many, like Samhain, also put books into print – and the impression among some is that it’s much easier to get a contract from ebook publishers than from big print publishers. To a degree, this is true, and a great deal of that likely has to do with the low production and distribution costs of ebooks; a publisher can quite simply release a lot more of them, and so some publishers might be inclined to be less selective regarding the manuscripts they acquire. Being in print is more prestigious; it carries the implication that your book is somehow better for having successfully negotiated the transition from digital to physical.

But these are mostly rational reasons, and I don’t think that they account for the entirety of what I and a lot of other writers feel. This feeling is instinctive, gut-level; it can drive us without us being explicitly aware of it. It can even drive us away from things that might be hugely beneficial to us. When I was looking for a publisher, I deliberately discounted an ebook-only imprint of a much larger and very well established publishing house. They probably would have been a good bet, financially. They probably would have done right by me and my co-author. But I wanted print. Goddammit, ebooks aren’t as real.

I think this highlights something interesting about not only the deep cultural assumptions we still make about the nature of reality and about the relationship of the digital and physical, but also about how we as artists understand our creations to accumulate value. When I write a manuscript, it should be real to me; I brought it into being, shaped and manipulated it until I was happy with it, put it into the words that I chose. And yet it’s not really real to me until someone has paid me money to publish it, and it’s still not as real as it could be until it’s in physical, tactile form. A lot of this, again, is about external validation, but most of it is how I personally navigate what I perceive as different orders of real. Not necessarily physical/digital and real/unreal, but rather a spectrum along which this thing that I made moves.

But most of all, I think this highlights the ways in which digital dualist assumptions and understandings run deep, and don’t just confine themselves to intellect and rational discussion. Digital dualism is something we feel, and as such it can persist in some surprising ways, even when – intellectually – it’s an idea that I don’t buy.

I don’t want to want my book in print this badly. But I do. It shouldn’t matter this much, but it does.

Either way, I’m still looking forward to holding the damn thing.

 

Sarah yells shamelessly about their book a lot more on Twitter – @dynamicsymmetry

This Christmas, we got my father-in-law an iPad. He’s basically never used a computer before now.

I knew that watching him start to get acquainted with it would highlight some interesting stuff. What I didn’t expect was exactly what stuff that would be. He’s been struggling to get the hang of it, of course – though he’s doing much better than he thinks he is – but one of the things that my husband and I have struggled with as we (mostly he) play periodic tech support is getting my father-in-law to understand that he should learn by trying, that the device itself really is pretty much impossible for him to break, short of dropping it. That he shouldn’t be afraid of experimenting.

It was at this point that I realized the nature of some of what I was seeing: it’s not that Apple’s interface isn’t intuitive or simple to use for someone who’s not especially tech-literate. It is. The trouble is that my father-in-law doesn’t understand that it is.

In days of yore, dealing with computers was a fractious and frequently antagonistic affair. It could easily descend into a fight just to get the machine to do what one wanted. The relationship between the average user and the average piece of computer technology could probably be fairly described as hostile. Apple came onto the scene thirty years ago intending to change that, and they were a huge part of why it changed. Computers for everyone is now the core of the culture; we’re so used to the idea that our highly complex technology should be highly usable that, for the most part, I don’t think we’re aware of what a remarkable shift that was.

My father-in-law is still approaching his iPad as if it were hostile, breakable, and pretty inaccessible in general. On both sides, there’s a mismatch of assumptions about the other party. The iPad wants to be used. Our consumer technology wants to be used.

But is it really that simple? When we look closer at the ways in which Apple – and others – appear to have made their interfaces as accessible as possible, what do we really find?

“Technology wants to be used” is no more a normative statement than “information wants to be free”, though the latter often gets misused that way. Both statements are purely descriptive in nature, and what they describe is the tendency toward appearances or uses in the practical world. The phrase “information wants to be free” was coined by Stewart Brand, founder of the Whole Earth Catalog, in conversation with Apple’s Steve Wozniak:

On the one hand information wants to be expensive, because it’s so valuable. The right information in the right place just changes your life. On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time. So you have these two fighting against each other.

Brand is actually describing two opposing forces rather than a single tendency – a kind of natural law of information. What Brand doesn’t directly acknowledge – though he’d probably agree – is that one of the things that makes information valuable is that it’s powerful, and one of the things that makes information powerful is that it’s dangerous. That danger is, of course, neither simple nor uniform; dangerous (and powerful) information in the possession of one group could look like a fantastic opportunity for another group (as an example, think highly classified state documents). But in order to exploit the power of this kind of information, one usually has to know how – one needs a degree of technical skill or expertise. And the less powerful usually don’t have this, let alone a reasonable chance of being able to gain access to it.

Therefore, the point of conflict – or one major point of conflict – in this battle between cost and value comes down to accessibility and practical knowledge. In other words, one may be technically capable of making use of powerful information but still run up against restrictions on access to that information.

So what does this have to do with design?

Apple has long been lauded as a company that gets design right, both in terms of their devices themselves and in terms of their OSs. But of course, it’s not that simple; tech enthusiasts have also long noted that Apple’s design is accessible but restrictive; Apple technology wants to be used, but only in the prescribed way that Apple itself determines. Sit down and shut up; Apple is controlling this pretty ride – and as a result the ride you get is very stable and safe, but only if you accede to Apple and stay on its tracks.

I should note at this point that for the most part, Apple gets this balance pretty much right for most people, and most people are happy. But after a new Apple release, there’s always the race to see who can jailbreak the device first and most successfully. In other words, Apple’s highly accessible but highly restrictive design runs into direct conflict with a user when that user possesses a particular level of technical knowledge and skill and a desire to control their technology in a way that Apple doesn’t want or doesn’t allow for.

Technology wants to be used, but technology also wants to be restricted in order to protect itself from the user.

So it’s useful to think about design – insofar as it’s practiced by Apple and companies like Apple – as a kind of power struggle at its core, between the desires of users and the restrictions on that use. And this struggle is intimately connected to a much larger one, that of the increasingly problematic distinction between users and owners, between the freedom and associated risks assumed by someone who truly owns a thing, and the safer but more restricted use of someone who merely pays for the right to use a thing.

And companies like Apple make the ride so pretty that many of us are content to simply look out the windows and admire the view, and thank God this whole business isn’t a wrestling match anymore.

 

Sarah uses Twitter – or possibly allows Twitter to use them, they aren’t sure – @dynamicsymmetry

The concept of the Uncanny Valley is pretty well known by now – the idea, introduced by robotics professor Masahiro Mori, that as non-human objects approach a human likeness they start, for lack of a better phrase, to really creep us the hell out. It’s a fairly simple idea, and one that usually seems reasonable to us on both a visceral and an intellectual level, at least partly because it’s something that most of us have experienced aspects of at one time or another. But there are parts of this concept that often go unexplored, which is what I want to deal with here.

In his essay on what he terms “anthropomorphobia”, Koert van Mensvoort notes that products are increasingly made to appear humanesque or to possess human-like qualities. Our coffeemakers talk to us, robots clean our floors, vending machines are programmed to be friendly to us as they sell us things. Van Mensvoort argues that one of the things that potentially makes us uneasy about this is not only the blurring of the human/non-human boundary but also the blurring of the boundary between people and products. In other words, what makes this particular valley so uncanny is not just that it suggests that we might have to rethink what “human” actually means, but that we are forced to confront the degree to which our own bodies are increasingly managed, produced, and marketed to us as products:

It is becoming less and less taboo to consider the body as a medium, something that must be shaped, upgraded and produced. Photoshopped models in lifestyle magazines show us how successful people are supposed to look. Performance-enhancing drugs help to make us just that little bit more alert than others. Some of our fellow human beings are even going so far in their self-cultivation that others are questioning whether they are still actually human − think, for example, of the uneasiness provoked by excessive plastic surgery.

I would actually extend this point to argue that to the degree that products – and objects/machines in general – take on increasingly human-like qualities, we’re actually pretty much okay with this, provided that the objects fail in an extremely evident fashion. I would even argue that we like objects to become slightly human-like, precisely because their failure to truly become human both solidifies our self-concept as an ideal creature and strengthens our perception of our constructed categories. We like a Good Robot that wants to be human but never truly can be. It places our essential humanness firmly out of reach; it makes it exclusive to us alone. It makes us feel unique and special.

So where we dip into the Uncanny Valley is in the threat that these objects might actually succeed. Suddenly our humanness is not so exclusive. Suddenly we’re not so special anymore.

As Jenny Davis observed in her post on Lance Armstrong, one of the things that makes us most uncomfortable about something – human or non-human – that threatens to transgress the human/non-human divide is the degree to which it calls to our attention how blurry and porous that divide actually is. We don’t really want to confront our truly cyborg nature, because our existing ideas of what makes up a “natural” human being are intensely important to our understanding of ourselves. We need boundaries and binaries; we desperately want to be one thing or the other, because that Other is so often the standard against which we measure ourselves. As long as the non-human remains an Other, we’re safe.

But again, as Jenny so rightly points out, there’s another side to this. One of the things that often goes unexplored when we consider the Uncanny Valley is whether there’s actually an Uncanny Valley on the other side of the category of the fully “natural” human ideal. In other words, whence transhumanism?

Jamais Cascio has an interesting theory about this, one that essentially amounts to: Not only are evident augmentations/enhancements in humans likely to provoke a negative gut-level response, but it’s actually the near-human augmentations that will provoke that response the most intensely. In other words, there is indeed another side to the typical Uncanny Valley graph, and it’s a mirror image. As human augmentations and enhancements extend further and further from our conventional ideas of what it is to be human and toward the truly posthuman, our negative response will decrease in intensity.

This may or may not be so – it’s difficult to be sure, in what are arguably still early days of this particular kind of human augmentation, but again, I would take this a step further: that, as both Jenny and I have argued, what makes us the most uneasy right now about human augmentation is the idea that it might make people – especially people with disabilities – better than abled humans. We can usually stomach humans with close relationships to objects and machines, provided they don’t begin to transgress the boundary that not only delineates a category but defines that category as an ideal.

By the same token, there are few things that frighten us more than the idea that a machine might not only seek to be fully human – and succeed – but that it might desire to be better than human. And to replace us, in the end.

In short, we need to understand the Uncanny Valley not only as an instinctive reaction to something unexpected – often described as neurological in nature – but as a profound disruption of our constructed categories of identity. It’s the threatened removal of the line between self and Other; it threatens our definitions of who we are. This threat works both ways, in both directions; we’re not only afraid of what might become us, but of what we might become.

Sarah is creepily near-human on Twitter – @dynamicsymmetry

The thing about identity is that the stories we tell ourselves about our own are already forcibly consistent. We don’t need Facebook to make this so.

Last week there were several great posts on this subject. Nathan kicked things off with his claim both that the social pressure to have a consistent identity is subtly reinforced by how Facebook encourages – and constrains – the way we prosume identities, and that Facebook potentially enables us to confront the fact that this consistency is an impossible and unreasonable standard to meet; identity is perhaps much more fluid than we’re comfortable thinking. Rob Horning was in agreement with at least part of this, suggesting that Facebook decontextualizes identity, making it seem less rooted in personal experience and more in data. Whitney went on to make a fascinating claim: that when our process of identity formation is recorded and made immediate in this way, it collapses the past into the present and removes the temporal distance that ordinarily alleviates the pain that’s all too often wrapped up in who we used to be.

This is the point I want to latch onto. Because this is significant. This is really about our stories. And this is really about not only remembering a more immediate past, but inhabiting that past. We are faced with who we used to be and we become that person – “to read the words that came from that person is to be that person again”. We occupy our old self’s cognitive and emotional space.

This hurts. There are a couple of reasons why.

First, as Whitney already pointed out, there’s often a tremendous amount of pain wrapped up in our past selves, especially when those selves are adolescent. Growing is a painful process; we stumble, we make horrible mistakes, we do things that we can’t really believe we were ever stupid enough to do. This isn’t just about the fluidity of identity, but also sheer embarrassment. We can understand this almost in a Goffmanian sense: we use frames to present ourselves, to construct stories about ourselves, to maintain personae, and then the past crashes through those frames and creates breakages in our engagement with both the world and ourselves. These breakages are true – and the truth is profoundly uncomfortable.

It’s worth emphasizing that we don’t need social media for this. Memory does it for us. Memory loves to do this for us. I don’t even know how many times I’ve been doing something entirely innocuous and then suddenly my brain decides that it’s a fabulous idea to bring up that time in middle school when I accidentally dressed like I was an extra from Grease. But social media makes it easier, clearer, less mercifully blurry. The feelings are so much more raw.

But the jarring effect is also about narrative.

We expect narratives to be internally consistent. Consistency is one of the ways in which we identify narratives. Stories are always fundamentally about our desire to make sense of things; through them we impose order on the world. Creation myths select elements of what little we know about ourselves and our surroundings, create new elements to tie everything together, and present us with the comforting illusion that everything makes sense, that there is a broader plot that unites people and events into coherency. Our myths of origin don’t stop at the creation of the world; these are myths we construct around ourselves. We make ourselves protagonists and build a world in which to live and act.

Again, this obviously predates social media. We’ve always done this; we have always been storytelling creatures. What social media does is introduce a new – or at least an intensified – wrinkle into this process, by disrupting the coherence and consistency of our narratives.

We want very badly to believe that our lives have been logical progressions from points A to B to C and beyond. Sometimes this desire is subconscious, and the construction of the story is subconscious as well. But I’d argue that it’s always there. We don’t see our pasts as the fragmented, chaotic things they often are. We don’t want to confront how many times we’ve stumbled, how much was left entirely up to chance, how completely out of control we happen to be.

Stories make us feel powerful. We’re not.

Social media – any technology that records the details of our past – pulls back that curtain. The more we try to maintain the illusion of a single consistent identity, the way Mark Zuckerberg would like us to, the more obvious it becomes that such a thing is impossible, was never possible, will never be possible.

It’s possible for narratives to be inconsistent and fragmentary, non-linear things and still be narratives that we can embrace. But the stories that do this are notable exceptions to understood rules, and they’re still difficult for most of us. They play havoc with our entire process of sense-making. They are disorienting.

This is exactly what happens when I screw up the courage – or become masochistic enough – to look back through my Livejournal posts from high school. I’ve locked them, so for the most part I don’t need to worry about them disrupting my more public self-presentation. But they almost make me dizzy. Who was this person? How distant are they, really? Who am I? The more comforting myths I’ve built around the transition from who I was to who I am are destroyed by the sheer weight and solidity of that person’s words. Once again I’m behind their eyes looking out at the world, I’m typing that post up about how much I don’t like my parents, I’m squealing about that band I was stupid for liking, I have the most ridiculous hair, and I’m doing all of those things right now.

I can hide from these entries; I can just not read them. But I still know that they’re there, disproving me.

So why don’t I delete them, if they make me so uncomfortable? I really don’t know. When I left Facebook, I didn’t delete my account outright, but the idea that it might someday be gone doesn’t trouble me very much. Perhaps, with sites like Livejournal, it’s because they’re so uncomfortable, so fraught with pain and embarrassment. They’re a visceral record of who I was and am; they may render my self-narrative incoherent but they’re also the truth of me, wrung out of me at a time when I was becoming me. My digital self feels just as real as my physical self. It hurts to be made to remember, but I don’t want to forget.

That, too, is a kind of sense.

Sarah maintains an extensive record of highly questionable decisions on Twitter – @dynamicsymmetry

Image by Monch_18

She passes through a sheet of bloody glass. On the other side, she is being born. – Catherynne M. Valente

 

1.

My self began with words, which were stories.

It’s always important to understand that words do not belong to the digital. Nor do they belong to the physical. Words belong to people. People are in both. Nevertheless: my first overt experience of the digital was in words. Words have always been my playthings; I was always a storytelling child. They have always been a means of performance but more for the benefit of myself than anyone else. I have always engaged in a dialogue. Who am I? What do I want to be today? We create mythologies with extraordinary explanatory power. We cannot separate ourselves from our stories.

I have words. In the end they are all I have.

I don’t remember my first moment online. That part isn’t saved, isn’t crystallized, isn’t retrievable data. This is not true of everything that came after, and these are things I remember. My first connections with others in this space began with the stories that we told each other, about who we were, what we wanted; I now know that much of this was not true, but you must decide for yourself how important true really is.

All of our stories are, to a greater or lesser extent, perfectly true.

From ourselves we created others, or we pulled others from stories that were not ours but which became ours. We created a complex weaving of fiction and roleplaying. We created journals for these characters, for ourselves. We created story-as-interaction; turn by turn we threaded out what we wanted the world to be. Sometimes things slipped out of our control, because our stories are also never truly ours once they come into the world. Often this slippage was embarrassing. Overly emotional, too intensely attached. I lost friends when our words meant too much to us, but for so many of us, smashing head-first into adolescence and freshly lonely in new places and new bodies, it was all we had.

Words are fluid. They are slippery. They do not behave. I remember that at some point, my words began to feel genderless. The part of me that consists of bits simply was. This was social construction before I knew what social construction meant. Everything I am now comes from the point at which the digital and the physical and I and me and we all together collided in the stupidity that is high school and I had to decide what to do with the pieces.

I have a record of all of this, that I do not read. Sometimes I have to forget that it exists at all. But that never works very well; I can’t forget who I am.

 

2.

We need to believe that fiction and nonfiction are not the same. We need to believe that digital and physical are not the same. We need to believe that online and offline are not the same. We need to believe that past and present and future are not the same. Some of the most heated arguments I had with my parents as a teenager were over whether or not words on a screen actually meant anything. Over whether or not I had a right to them.

When we tell ourselves certain stories over and over, it’s important to ask why. It’s also the question that no one really wants to hear. People have been excommunicated for less.

 

3.

My handwriting has always been terrible. It’s also painful. Two minutes of it and it’s too uncomfortable to continue. I don’t hold the pen the right way; perhaps I simply never learned how. The words I produce are often illegible even to me. They don’t come fast enough. My brain races ahead and what doesn’t hurt is endlessly frustrating. It’s like being gagged.

Keyboards gave me my words.

When I was small I would type out nonsense sentences on my father’s Kaypro and print them and make him read them aloud, and I would laugh hysterically, because I had the power to create nonsense, which felt like real power indeed. I could disorder the world. Later I ordered myself through a digital conversation that has been going on for the majority of my life.

We had this conversation before the internet, before Livejournal and Blogspot and Facebook and Twitter and Tumblr. The what doesn’t ever really change. It’s the how to which we need to pay very close attention. It’s the how that determines who can converse and who is abandoned to older silences.

 

4.

I ran away from Facebook. I’m not proud of this. But some of the reason, I think, was that it felt as though it was forcing the river of me into narrow concrete channels. It wasn’t letting me play.

I am not comfortable with that kind of collapsing of worlds. I contain multitudes. So do we all, but please understand how loud mine are, how I found them, how they grew. I gave them names that had no legal reality. There is nothing more political than the power to self-identify.

I could have found a way to work within it, I think. It wasn’t really as narrow as it felt. But sometimes we run because we have to. Sometimes we never find a way to go back.

 

5.

A cyborg created many masks to amuse themselves, and behind each mask a face came into being, for when we create spaces for things those things are always filled in the end. The masks had power, and through each the cyborg was able to tell a different truth, and all truths were equally true. The cyborg was happy. Also confused. Also contradictory. These were things that had always been so, but now they had solidity and reality; they could be framed in a mirror and seen beyond abstractions.

As the cyborg’s faces grew in number, some found this unsettling. No, they said. No, you should have only one. We should only be able to see one. Cast off your other masks and give the one that remains a name and wear it always for us so that we will always know where to look.

Of course the cyborg refused. It couldn’t do anything else.

This is not a surprise ending. You already know this story.

 

6.

I don’t read my old Livejournal entries. They are there, I can’t delete them, because it feels like dishonesty. But when I summon the courage to look back at any of them, it feels like being skinned alive. I was so raw and foolish; to read the words that came from that person is to be that person again. But of course that person is already me. If we create boundaries where no boundaries were, we lie to ourselves. But please understand that lies are stories which are also real enough to matter.

In those old stories are too many voices. They drown me out. I have locked them away; no one can see them but me and again, I hardly ever look. Like the face of an Old Testament god, I can only see them in fragments before they start to burn out my eyes. It is also true that I live, a little, in fear of discovery.

I also have a paper diary. I never read that either.

 

7.

The characters I roleplay online have always been men. They have always been the same kinds of men, hurting, looking for someone to fill all the gaps, needing to care and terrified to care in equal parts. When I was very small and lost in worlds of let’s-pretend, this was also true. Let’s pretend. Let’s pretend that we can be this and we are this and when I talk to this thing that is sometimes you I become more myself. I am faceless but I have avatars that are also me. I build masks and place people behind them. You believe because you want to but also because it’s all true.

I will never be entirely sure whether I was already a transgressor of the gender binary and that’s why I told myself the stories I did, or whether the stories I told myself pushed me into transgression. I will never be entirely sure if it makes any difference.

 

8.

Poetry is not only dream and vision; it is the skeleton architecture of our lives. It lays the foundations for a future of change, a bridge across our fears of what has never been before. – Audre Lorde

Cyborg writing is about the power to survive, not on the basis of original innocence, but on the basis of seizing the tools to mark the world that marked them as other. – Donna Haraway

 

9. 

I still roleplay. I still inhabit characters that were not born as mine but I make them mine when I wear their masks. One cannot do this with bodies. One has to forget bodies, for a little while. One is still in a body, but the body is the interface. The body must disappear.

I am still telling myself stories about myself, about who I was and about who I will be. I can’t separate any of this from itself and still make any sense of it at all. I am not internally consistent. I am not sure why I am supposed to be. My body is disciplined but I want to fight this; can words on a screen help me fight this? Where are the master’s tools? Did I seize the words or was I given them? How do I move freely within the code when I know the code is not neutral? The code is never neutral. The code has never been neutral. Someone else sets the rules. We can only do so much to break them.

10.

The cyborg made gods of their masks and tore those gods down and put them to the fire. The cyborg collapsed their many gods into one god and gifted it a single name, but while others fell down and worshiped this one god, the cyborg could not, because they knew too much to believe.

Every story requires a suspension of disbelief. The cyborg is monotheist, polytheist, atheist. The cyborg recognizes that everything contains a spirit, the cyborg invokes the ghost in the machine and makes of it a digital animism, the cyborg understands that all of this is superstition. The cyborg knows that the world is haunted by many demons. The cyborg is haunted by themselves.

 

11.

All of this is a frame through which I meet others, myself, the world. Screens, touchpads, numbers, bytes and bits. My eyes; my glasses restrict my field of vision but years ago I stopped being aware of this except at certain times and in certain places. It is the world, now.

I mean frame in the social theory sense and I mean frame in the sense of: here is something around a picture and the picture is always changing, and: here is a looking glass hung on a wall and everything you see through it is running backwards forever and ever.

But you can, if you wish, step through. Unless you are already on the other side and looking back at me.

Interfaces are the point and they are also distractions. The best interface is the one that disappears.

 

12.

The cyborg turned at last and faced themselves, their many-faced self. They opened their arms and embraced each one. Each one was immediate, ever-present, flesh made words and unignorable. They were sharp and as the cyborg danced with them they cut the cyborg’s feet to ribbons.

You are all me, the cyborg said. There is so much pain but I love you.

 

13.

I want to believe that the existence of all of these memories and all of that pain and all of that hopeless teenage awkwardness is a net good. That these are stories about myself told to myself that will tell me something about the future. I want to believe in the essential collapse of the temporal. I want to believe in self-integration, and I want to believe that self-integration is never necessary. I want to believe that no one will blame me for any of this. I want to believe that later I won’t have to regret anything. I want to believe that all of my digital masks are equally me and that all of my digital ghosts mean me no harm, but that, like all ghosts, they simply have business that remains unfinished.

I don’t know if I know too much for this.

I am always turning toward a painful past that becomes a painful present. I am always stepping through the looking glass, I am breaking through the frame. On the other side is always me. This is always true.

I have words. That is enough.

 

Sarah is endlessly pretentious on Twitter – @dynamicsymmetry.

The right to petition the government for redress of grievances is a significant part of the Constitution of the United States. Not only is it yet another way in which citizens can shape the form that governance takes, but it specifies a particular relationship between those citizens and that government, one that – along with several other specified rights – legally establishes a public sphere as an arena within which issues of politics can be debated and potentially translated into changes in policy. But petitions didn’t start with the Constitution. Nor are they abstract components of an abstract set of rights. Rather, they’re grounded in the history of technology – specifically communications technology – and are arguably an intrinsic part of the development of what we understand as an augmented society. Moreover, the birth of petitions as an intrinsic part of the construction of the public sphere has implications for how we understand contemporary augmented social movements (Occupy being a particular example).

Probably the most in-depth examination of petitions-as-technology is a 1996 article (later a book) by David Zaret, and it’s worth summarizing before I move on to my primary point. Zaret’s argument is essentially that technologies of mass printing and accompanying shifts in the political structure of 17th century England brought the practice of petitioning out of its normative secrecy and into the hands of the public at large. The practical implications of this were A) the establishment of a specific open arena for political debate, propaganda, and lobbying for policy change; B) an early form of the more modern democratic public sphere; and C) the dawn of authority derived from public opinion rather than the preferences of an elite few.

This last is probably the most important for this piece. What’s dramatic about that development is that it anticipates most of the appeals to authority used tactically by recent and contemporary social movements, from the push for women’s enfranchisement to the Civil Rights era to ACT-UP to Occupy Wall Street. The legitimacy of democracy is – in theory, at least – founded on the idea of government by the people; governance shaped by the power and, more importantly, the legitimacy of public opinion is a relatively recent idea.

Moreover, this development most likely wouldn’t have been possible without changes in printing technology. Prior to mass printing it was too expensive and time-consuming to circulate petitions, leaflets, or books to the general public, even if people were inclined to do so. Once printing on a large scale was cheap and relatively easy, wide circulation of printed political material became not only possible but necessary. Public opinion was no longer at worst a minor annoyance and at best something that could be safely ignored. Instead it was a valuable tactical tool for anyone who wanted to effect political change. It became a tool with moral weight, carrying the idea that paying heed to the people resulted in more just government. In essence, therefore, technology not only forged a new relationship between people and government but altered the nature of that relationship and the source of the legitimacy of government itself.

This is all to say that contemporary augmented social movements/revolutions are functioning on the basis of some very old tactical assumptions – that their legitimacy is not only derived from ethical considerations in general but specifically from the moral authority of numbers. And just as petitions provided an arena for those numbers to make themselves explicit, the internet and digital technology provide a powerful locus for the expression of the same authority of numbers. There are a lot of us, Occupy argued. Therefore you should pay attention. Therefore our claims are legitimate. Philip Howard’s work on ICTs and Middle Eastern political cultures argues the same: that technology provides for a kind of public sphere that would otherwise probably not be possible, at least not in the same way, even in the context of very repressive regimes.

It’s worth noting that visibility is a vital part of these technologically augmented claims to the authority of numbers. Petitions made numbers explicit through the signing (and occasionally the forging) of names; there are a lot of us was no longer the only argument, but instead became you can count us and see for yourself. With television – and more recently technologies that encourage constant and ambient documentation – visibility takes on a new, and often vaguer, weight. Specific numbers are less important, but the visual spectacle of physical presence is more powerful. Witness images of mass protest during the Civil Rights movement and the encampments of Occupy; analog/digital documentation of the occupation of physical space provides new forms of moral appeal to numbers.

We can see elements of the same process in the recent election. Appeals to numbers aren’t just tactics for effecting political change; they’re also indicators of political trends – backed up when those numbers move beyond the realm of claimsmaking and into real, countable votes. The Right made certain appeals to moral authority on a number of social issues; the response by the majority of the public was okay, but no. And this response was arguably a moral counter-claim in itself.

To focus more explicitly on digital technology, appeals to numbers are even implicit in the prevalence of politically inspired memes. As Nathan Jurgenson notes, memes may often be silly and facile, but they are also often critiques of the political content of the election itself; they are discursive responses to discourse. And a meme’s power is derived at least in part from its digital visibility; the more one sees it, the more powerful an appeal to public opinion it makes. When Romney’s “women in binders” was translated into memetic terms and went viral, the general takeaway was that a significant percentage of the public found his argument not only unconvincing but patently ridiculous – worthy of public ridicule.

But again, my point – as it often is when I talk about these things – is that the basic tactics, and their claims to legitimacy, are not actually all that new. Even when the forms change the old roots remain, and we should bear those roots in mind whenever we consider the intersections of technology and political change. As we on this blog have often noted, technology is not design-neutral; the forms it takes determine the forms that other things take, and, in particular, who is empowered and privileged in any given situation. These kinds of technology help to place at least the appearance of political focus on public opinion. That’s not all they do. That’s not even what they actually do in a lot of cases. And that’s not to say that they’ll always continue to do so in the future.

Image by Robert S. Donovan

Last week Jenny Davis posted a great critique of the position taken by Jessica Helfand, that on-demand TV is corrosive for both the attention of viewers and the quality of the product. Aside from some nebulous concerns about viewers losing the sense of viewing community that comes from being tied to a regular episodic schedule – and I add my voice to those who can say from personal experience that no community is lost as a result of this – Helfand holds that once viewers/consumers can pick and choose what elements of a narrative they want to consume, violence is done to the narrative integrity of shows. Additionally, she worries that the focus of narrative production is shifting to the technology through which stories are told – to “the box, or the screen” – rather than the stories themselves.

Jenny efficiently punctures these arguments, pointing out that waiting for an entire TV series to be completed and released as a collection requires patience and arguably provides a more immersive narrative experience (again, I can attest to this, given the month I spent this past summer utterly lost in the worlds of Buffy the Vampire Slayer and Supernatural; please do not mock my taste). Additionally, Jenny notes, when viewers can focus only on the aspects of shows that they enjoy the most, creators and writers can get a more explicit idea of what they’re doing right and what they aren’t.

I want to take this argument a step further, however, and argue that on-demand TV – and, by extension, narratives constructed within and mediated by new forms of technology – has the potential not only to result in richer versions of existing narrative forms but to expand the means available for telling different kinds of stories.

Discourse – the means by which a narrative is related – and story – the content of the narrative itself – are, like the digital and the physical, enmeshed and intertwined. They constitute a single reality – the story someone tells – but they have different properties and play different roles in the constitution of that reality. But because they are inextricably linked, the discourse – the form of the narrative – plays a huge role in determining not only how the story is told but what content the story can contain. We can see a very simple example of this in novels, where the point of view in which a novel is written determines what information can reasonably be related to the reader, thereby affecting how the reader will understand and interpret the events of the story. A story told in the first person is tightly bounded in terms of what the reader can know; the reader can know no more than the narrator does.

However, just as first-person confines what a story can do, it also allows for things that wouldn’t otherwise be possible: an intimate look at a central character, the feeling of getting to know that person on a profound level as the reader experiences the story through their eyes. First-person also more easily allows for stories to be told in stream-of-consciousness, making them flowing and almost dreamlike. It’s a tool like any other in the narrative toolbox, and we largely have the existence of the novel-as-discursive-form – and the technologies of cheap printing – to thank for it.

As far as episodic stories go, technology has arguably been the source of their success since advances in printing technology in the 19th century led to the explosion of the serial novel. Serial novels gave way to serial shows in the age of radio, and then to episodic TV shows as television became one of the predominant forms of entertainment and information technology. Being able to tell stories in this way was significant; it not only generated intense interest on the part of consumers but allowed for unusual depth and length in mainstream fiction, as well as – even more unusual for a narrative – stories that don’t really have an end at all (ongoing daytime soap operas are probably the best example of this).

So technology enables new discursive narrative forms, which lead to expansions not only in how narratives are consumed but in how they can be told in the first place. What does on-demand TV do? I think it’s honestly too early to say what the long-term effect on narrative might be, but if I can speculate for a moment: if viewers wait until a show is complete and then watch the entire thing in a very short span of time, TV writers might be able to construct narratives that are less a sequence of discrete, largely self-contained half-hour or hour-long plots and more stories that flow seamlessly into each other – like scenes of a film or chapters in a book – rather than what we’re used to thinking of as episodes.

A few other expansions of narrative form that technology has allowed for: the hypertext novel, which makes use of links to enable greater freedom in the linearity – or nonlinearity – of a story; machinima, which uses video games to produce cinematic narratives (a particularly successful example is the series Red vs. Blue: The Blood Gulch Chronicles, made with Halo: Combat Evolved); and the increasing number of good amateur short films on sites like YouTube (there are a number of excellently produced fan films based on the game series Half-Life, as well as one of my personal favorites, Eagles are Turning People into Horses).

In short, technologically driven changes in how we consume narratives have the potential not only to improve the quality of existing common narrative forms but to expand how we can tell stories in general. And I can’t help but think, given how things have worked in the past, that this is good – on balance – for stories and storytelling across the board.

But as Jenny concludes, the problem is still that existing measures for the success of shows – and, by extension, those narratives – are based on financial models that favor advertising over what viewers actually become invested in watching. I hold that changes in how we consume narratives are good for narratives. What will be even better is if we can figure out how to alter those models, and change how good storytelling and consumer investment in those stories are valued.

The library in the Kirby Hall of Civil Rights building at Lafayette College. Photo by Benjamin D. Esham.

Most of us still think of books as physical things by default. This is in the process of changing, as anyone who’s taken a look at recent sales and consumption statistics for ebooks will know very well, but I think it still holds true most of the time. We think of “books” as things on shelves, possibly dusty, often dog-eared – or perhaps in carefully kept condition: hardback first editions, family heirlooms, or books that are simply old and kept mainly for the simple fact of possession more than the act of reading. “Book” to us does not yet mean – or necessarily even include – “ebook”. The fact that we linguistically differentiate between the former and the latter is significant. The physical, dead tree “book” is the default; the “ebook” is the upstart Other, defined as much by what it isn’t as by what it is.

The tension between those who stubbornly prefer the sensory experience of physical print books and those who rush to embrace the ease and convenience (and frequently the relative cheapness) of ebooks is ongoing and unlikely to wane anytime soon. That tension is correlated with less antagonistic tensions regarding the practicalities of storing, consuming, and lending books in digital form. But I want to highlight the former kind of tension, because it illuminates something important regarding the discontents inherent in navigating an augmented world, and the point at which those tensions make the transition from the individual to the social.

In order to discuss this, I’ll need to go back to a much earlier post that I wrote on how we experience time differently in abandoned physical and digital spaces. Recall that my primary point in making that distinction was that both kinds of space are at once temporally-laden and atemporal, but that we mark abandoned physical spaces by what time has done to them – by the process of ruin – while we mark abandoned digital spaces by what time hasn’t done – by their staticity.

The implicit point there is that for us, physical spaces in all states of maintenance are by necessity temporal spaces; we orient ourselves within them and understand our relationship to them by virtue of at least a recognition that time is present and important, even if we don’t know a space’s exact age or history. Time is a background-level context that we assume is there. It makes sense to assume this, and indeed, a world where that couldn’t be assumed – where a physical space had no time – would be unintelligible to us. It would be too far removed from our experience of the world to make sense. We can – with a stretch – imagine an atemporal world: a world where past, present, and future are in a state of constant implosion. Indeed, it could be argued that this is exactly the world in which we live now. But we can’t imagine a world where there is no time at all. Trying to do so is uncomfortable, to put it lightly.

Additionally, inasmuch as we perceive every physical space in which we exist to be implicitly temporal, there are some spaces – and indeed, some objects – that we perceive as more temporally-laden than others, regardless of whether or not those spaces and objects are in a state of ruin. This has to do both with ideas of relative permanence – or at least longevity – and with ideas of the tremendous amount of time that permanent or extremely long-lived things accumulate. Buildings, I would argue, tend to be among the things we think about in this way. Old buildings, of course, are assumed to have accumulated a tremendous amount of time; that’s how we can call them “old”. But even new buildings, especially large or grand buildings, immediately take on the assumed weight of temporality in our minds. We see them as built, literally, to “stand the test of time”; they will last for a long while, and we can imagine a future in which we may be gone but the buildings themselves are still present (a corollary to an “imagined future ruin”, only without the ruin part).

Books are another object that we tend to perceive as temporally-laden, more so than other things – and I think it makes sense to talk about physical books along with physical space, given that we can also be said to exist within the “space” of a physical book when we hold it and read it. We share the occupation of physical space with it when we hold it in our hands; we share the occupation of cognitive space with it when we read its words and consume its ideas. So books have space. But even more, books have time, and the reasons for this have a tremendous amount to do with our cultural history of books and what books are.

We tend to think of books as physical by default. I want to argue that we also tend to think of books as old, even when they aren’t necessarily. “Libraries” are frequently depicted not as they’re experienced by most of us, but as a kind of Victorian (or older) Oxfordesque archetype: dimness, dark wood paneling, those really tall shelves that you need ladders on wheels to access, overstuffed armchairs, green desk lamps, fireplaces (I think it shouldn’t escape our attention that there’s also something implicitly masculine about this kind of library). Books exist within these spaces; books are also of these spaces. Contemporary mass-market paperbacks aside, the default quintessential Book is old, hard-bound, possibly large and heavy, frequently dusty.

This is before we even speak about content, and there, too, our cultural understandings of what a Book is and does play a huge role. The earliest books in post-Roman Europe – where a lot of Western ideas of what books and literature are come from – were religious texts, ancient stories and histories meticulously copied and ornately illuminated. It took a lot of time to make books, and books themselves contained a lot of time within them as part of their content. Though none of the books we read now are produced in that way, the past of books still works to shape our present imagining of them.

We are accustomed to books being heavy with time. On some level, it’s unnerving when they aren’t – or at least not in the way that we’re used to. This is not to say that all people find ebooks unnerving, but simply to account for part of why some people expressly prefer dead tree books on a visceral level. When we hold an ereader, we are aware – if only subconsciously – that time is not there in the same way that it is with a dead tree book. It doesn’t connect to all the temporally-laden ideas of Bookness that we carry around in our collective cultural memory.

But there are other tensions inherent in the mass transfer of literature into digital space. As Stephen Marche points out in a piece for the Los Angeles Review of Books, literature itself is “terminally incomplete”:

You can record every baseball statistic. You can record every trade over the course of a year. You can work out the trillions of permutations and combinations available on a chessboard. You can even establish a complete database for all of the legislation and case law in the world. But you cannot know even most of literature, even English literature. Huge swaths of the tradition are absent or in ruins … Among the first Anglo-Saxon poems, from the eighth century, is “The Ruin,” a powerful testament to the brokenness inherent in civilization. Its opening lines:

The masonry is wondrous; fates broke it
The courtyard pavements were smashed; the work of giants is decaying.

The poem comes from the Exeter Book of Anglo-Saxon poetry and several key lines have been destroyed by damp. So, one of the original poems in the English lyric tradition contains, in its very physical existence, a comment on the fragility of the codex as a mode of transmission. The original poem about a ruin is itself a ruin.

Literature is in a state of ruin. It is even explicitly identified with ruin. It’s rich, vibrant, unquestionably alive to the people who love it, but it is also crumbling and fragmentary. We can see the mark of time on it, just as we can with physical ruins. If the Book is temporally-laden, literature as a whole is even more so.

Digital information storage is also fragile. But again, that fragility is different and experienced differently; time in digital space is marked by the absence of change, not by its spectre. When the experience of books – and of literature – is mediated by digital technology, our experience of it alters, and in ways that are both subtle and, for some, profoundly discomfiting. The digital and the physical constitute the same reality, but are, as Nathan pointed out recently, still in possession of differing properties. In short, those differing properties can create tension when they don’t quite line up for us, and much of that tension has to do with how we understand and experience time.

For many people – especially younger generations – the experience of reading ebooks is becoming the primary way in which they experience books at all. As this number grows, perhaps our cultural ideas of what a Book is and does will change, and this particular tension between the digital and the physical will dissipate. Time, as usual, will tell.