January 29th, 2016 @ 19:11:28

So it happened that, after about a year of unemployment and almost nothing but writing and editing books, I returned to video games.

I used to both play them a lot and write about them a lot, and I missed them. I genuinely think my mental health took a hit when I (largely) stopped. Video games engage a part of my brain that really nothing else does, and that brain-part gets engaged actively. Game critic Erik Kain wrote that killing in video games is essentially puzzle-solving, and I agree (though I don’t believe that’s all it is), because that’s exactly how it feels.

I prefer games where you shoot things, including games that I objectively recognize are not very good. I’ve played and loved a bunch of the entries in the Call of Duty franchise. I confine myself to the single-player campaigns and steer clear of multiplayer, because I’m not a cisgender man and am not enough of a masochist to find multiplayer bearable as a result, and also because I play video games in significant part to get away from people (I’ve been informed multiple times that this is not the correct way to play CoD; the thing about that is that I don’t care).

But I also stick to the single-player campaigns because I like my games to have stories, even stories of the flimsiest and most ridiculous kind (something else I love? Just Cause 2. So). It’s like getting to play through a silly schlocky action movie. It’s a gaming fast food hamburger. Not everything has to be or should be Art. Yet the story really is important for me, and the gameplay within and alongside the story.

Just Cause 2: this is fine

And this brings me to Deus Ex: Human Revolution.

Released in 2011, Deus Ex: HR is a relatively old game by now, but I honestly hadn’t played it before, because of time. Which is a poor excuse, because it was basically grown in a lab specifically for me. I love Deus Ex. I effing love. It. I’m in the middle of my second play-through, which I began about ten minutes after completing my first (I’m attempting to do a 95% non-lethal run; 95% because in the version I have, you have no choice but to kill your way through the profoundly irritating boss fights). It looks and feels unapologetically like Blade Runner. The protagonist is a cyborg with a Christian Bale-as-Batman voice and a barrel of man-pain. He has swords in his arms. He can leap off buildings and fall in a ball of golden lightning and land unharmed in a fist-to-the-ground Iron Man pose (not a single bystander ever seems to find this in the least bit startling, so I guess in 2027 it’s just a Thing). You can roll through the game as an angel of death or you can be nice and just punch guys in the head, combat-heavy or stealth-focused, exploring cyberpunk Detroit and Shanghai and hacking everything in sight.

Did I ever get tired of leaping off buildings? No. No, I did not.

Even more to the point, it’s story-heavy. The story is interesting, if not the most original thing ever, and the characters are fairly well-realized and engaging. I’m playing the game over again not just because I enjoy the gameplay but because I really love the storyworld and I wasn’t ready to leave just yet.

And lest this descend into a thousand words of me gushing about how much I love a slightly doofy cyberpunk game released half a decade ago, let me talk about the difficulty levels.

Difficulty levels in video games might seem like one of the simpler and more basic elements of game design, but they’re actually very complex and something of a matter of contention. There’s the difficulty in designing them, creating a selection of gameplay experiences that are capable of satisfying a range of players, but there’s also the matter of how they’re defined, how one conveys the meaning of things like “easy” and “hard” to a player and, perhaps even more, how those meanings are guided and constrained by what has traditionally been understood as gaming culture. Specifically, difficulty levels often function as forms of identity-policing/gatekeeping via the value judgment attached to the disparate levels of difficulty and, by extension, the judgment of the player. In other words, serious gamers don’t play on easy, and the degree to which you’re “serious” about the game you’re playing is the degree to which the game itself deems you worthy to be playing it at all.

Put most simply: difficulty levels are the means by which a game judges you as a person.

(I think it’s kind of an unnecessarily jerk-ass thing to do.)

Something I noticed right away is that Deus Ex: HR at least makes an attempt to avoid this kind of judgmentalism through how it defines its easy level. Easy is called “tell me a story”, and the language is refreshingly positive about what this means:


But it begins to slip back into the old unpleasant tropes when you hit the “normal” and hard levels:


Yeah, see? It’s not coming out and explicitly calling the player a wuss for taking the first option, but the subtle condescension is arguably still there.

Okay, so far I’m not saying anything that isn’t pretty obvious. But here’s the thing about how this game breaks its difficulty levels down in discursive terms: I think it reveals an interesting and uncomfortable tension between what games are increasingly capable of doing – and designed to do – in terms of the ambition and complexity of their storytelling, and how combat-oriented gameplay has traditionally worked and continues to work. Again, sure, Deus Ex: HR was released in 2011. But I don’t think that tension has disappeared. I think it’s just as present now, and it’s just as tense.

The debate over difficulty levels themselves is ongoing – how they should be defined and whether they should even exist at all (at least in their presently recognized form). In an essay for Gamasutra last month, Mark Venturelli breaks down what he views as the inherent problems of how difficulty levels function, and suggests a variety of new approaches suited to a variety of game types. It’s the issue of game types I want to focus on here, and moreover, why exactly a player is playing a specific game to begin with.

“To have fun.” Yeah, that doesn’t really tell me much. There are a hundred thousand ways to have fun, and everyone’s fun is going to be different, often from mood to mood and situation to situation. Sometimes I want to solve a puzzle (Portal). Sometimes I want to shoot rendered human figures in the face (Call of Duty). In both of those situations, as I said before, the presence of a story does matter to me, however silly and thin, and in fact I’ve suffered through somewhat mediocre gameplay multiple times because I loved the story and the characters so much (Enslaved: Odyssey to the West).


The games that have meant the most to me, the ones I return to over and over, are the ones with the powerful and interesting stories. The Last of Us (possibly one of the most stunningly good pieces of media of any kind that I’ve ever encountered). Seasons one and two of The Walking Dead (see The Last of Us). The Uncharted franchise. The Mass Effect trilogy. All three entries in the Bioshock series. Spec Ops: The Line. Portal and Portal 2. And now, Deus Ex.

What makes this such an exciting time for someone who loves and even requires stories in their video games is that smaller studios without huge backers are producing games that are easily 70% – 80% story, games where the entire point is the story. Dear Esther, one of my favorite games of all time, consists of the player wandering a linear path around a deserted Hebridean island while listening to voiceovers of a letter written by a man to his dead wife (that’s it, that’s the game). Both Amnesia games – The Dark Descent and A Machine for Pigs – are slightly more conventional in terms of the presence of gameplay, but the story remains the focus. The Stanley Parable is a game about games, and indeed about stories themselves, a genius work of meta. There’s Gone Home, a release from a few years back that received wildly positive critical response and functioned as a reimagining of old point-and-click adventure games, and now there’s what I feel is in many ways Gone Home’s spiritual successor, this year’s Firewatch.

mission accomplished

And now this piece has become a rec list. I’m sorry. My eventual point is this: game designers are coming to understand on a very deep level that players like stories, and that many players like involved, ambitious stories. That many players, in fact, like stories more than just about anything else, even if they also want more conventional combat-oriented gameplay and they want that gameplay to be good.

Older attempts on the part of designers to deliver on this have created some problems, which Clint Hocking noted in his piece “Ludonarrative Dissonance in Bioshock”. Exploring the conflict between Bioshock’s story and its gameplay, he writes:

To cut straight to the heart of it, Bioshock seems to suffer from a powerful dissonance between what it is about as a game, and what it is about as a story. By throwing the narrative and ludic elements of the work into opposition, the game seems to openly mock the player for having believed in the fiction of the game at all. The leveraging of the game’s narrative structure against its ludic structure all but destroys the player’s ability to feel connected to either, forcing the player to either abandon the game in protest (which I almost did) or simply accept that the game cannot be enjoyed as both a game and a story, and to then finish it for the mere sake of finishing it.

And arguably this problem exists in significant part because the makers of Bioshock really really wanted to tell a story. Not only that, but they were trying to enmesh the story with the gameplay in a way that made sense and augmented both. They failed – really badly, if you agree with Hocking – but without actually asking any of them, I believe the sincere desire to do so was there.

Game designers are still in the process of figuring out how gameplay and story can work together, how games can tell stories in a way that manages to be immersive while granting players the kind of agency a player tends to want. Whether that last is even possible remains something of an open question. But what designers are wrestling with, I think, isn’t just a matter of writing and practicality.

The tension Deus Ex: Human Revolution reveals in its difficulty settings is in the question of what value its own story even has. Open up that menu and you have a front row seat to an identity crisis.

The makers of Deus Ex: HR wanted to tell a story. They wanted to tell a fun story (I think they mostly succeeded). Even more to the point, they wanted to express pride in that story, and not toss judgment at the people who were in the game for that story more than to stab dudes in the neck with their arm swords. They refer to it as an “experience”, and I interpret that language as legitimizing. Yet then you move up to the next two levels and it’s the same old deal. If you’re serious about the game, you’re not playing for the story.

The game fundamentally doesn’t know how it feels about itself, and about you as a player.

Once again, yes, I recognize that this is a study of a case that happens to be half a decade old. And once again, I think this identity crisis is still being worked through, at least on the part of AAA games coming from large studios. Deus Ex is just one of the more explicit examples of it that I can recall seeing.

I’ve seen a number of game critics over the last year or so say that they’re abandoning AAA games altogether, that there’s no longer anything meaningful or interesting being done there. I disagree with that. A lot. And not just because big budget AAA games continue to make my lists of faves.

Because these kinds of games are inherently conservative by nature, because change within them tends to be ponderous and plodding, because they tend to be conceptually clumsy and unable to quickly adapt to change, watching ambitious people working within that system and trying to push as much change as they can within the restrictions they’re dealing with is fascinating to me. I don’t think you see these kinds of crises of identity in what people tend to define as indie games. At least in my experience, indie games are more likely to know who and what they are, what they’re doing and what their player is playing them for. They’re lean and nimble. There’s a sharpness to the best ones, a kind of clarity of self. I like that.

But a lot of the big story-heavy games I play – and love – seem to be a little confused about themselves. And I like that too. I trust that they’ll figure it out eventually. I have faith in them.

Now if you’ll excuse me, I have to go play with my arm swords.


image credit: Fourandsixty

This July 4th, PBS viewers in the DC metro area were outraged to be reminded of the fact that they were watching television.

It’s actually not quite that simple, though it’s fun to phrase it that way. Here’s what happened: this past Monday was an extremely muggy and cloudy one in our neck of the woods; in other words, not at all the ideal climatic conditions for a fireworks display. PBS, in something of a bind over how to maximize the spectacle of its live broadcast of the Independence Day celebration in front of the White House, elected to splice archival footage of past fireworks displays into its broadcast of the fireworks actually happening.

People were displeased.

Specifically, people who took issue with the decision claimed that it was an act of fakery, that it was a cheap move and made the broadcast less “authentic”. That it was almost somehow a lie. PBS responded on Twitter with: “We showed a combination of the best fireworks from this year and previous years. It was the patriotic thing to do.”

I’m going to leave aside the interesting fact that PBS is characterizing this move as “patriotic” and instead focus on the two other things that I find interesting.

First, these viewers forgot the Baudrillardian truth about TV and indeed about all media and really kind of everything ever: it’s basically “fake” by definition. What a viewer is shown is almost always carefully edited and packaged, or at the very least presented with a specific intent in mind. Even when it comes to live TV, there is no such thing as a fully objective and solely factual depiction of what’s actually going on. This frequently has little to do with a political agenda (the vitally important patriotism of this particular decision aside) and far more to do with spectacle, and PBS’s intent in showing the fireworks was to provide exactly that.

I think the people who were offended by what PBS did believed that PBS’s primary goal was to show something as it truly was. And I’m not saying that PBS didn’t have that goal at all. But I doubt that – consciously or unconsciously – it was at the top of their list.

People like to believe that in cases like this, what they’re seeing is “real” and “true”. It’s jarring and even disturbing to be reminded that they can’t be sure of that, and that indeed they should assume that what they’re seeing is never real or true. That by definition it basically can’t be. People want to buy into an illusion – an illusion that PBS is selling, which it is obviously not alone in – and they become uncomfortable and upset when the illusion is destroyed.

Which leads me to the second thing that strikes me about this: the fact that it was PBS.

As a long-time PBS viewer, I get the distinct sense that someone watching the channel might expect the exact opposite of a focus on spectacle and selling a constructed package of imagery, and far more of a focus on soberly presenting things with a commitment to authenticity. Given the culture of PBS and the probable cultural affiliation of many PBS viewers, I think it’s reasonable to believe that whether or not PBS intends it, there’s a kind of implicit contract between the network and the people watching it, as well as a desire for authenticity in particular on the part of those people.

So I think what we have here is in some respects a twofold destruction of illusion and a perceived twofold betrayal: what people prefer to believe they’re seeing in this kind of broadcast, and what a specific cultural product promises the people consuming it. When I found out about this, I almost immediately wondered how someone watching CNN, MSNBC, or Fox News (oh my lord Fox News) would be likely to feel if the same editorial decision was made.

And I’d still like to ask PBS what exactly they meant by “the patriotic thing to do”. Maybe thick, low clouds are un-American.

Yeah, that’s probably it.


Rose Eveleth’s piece for Fusion on gender and bodyhacking was something I didn’t know I needed in my life until it was there. You know how you’ve always known something or felt something, but it isn’t until someone else articulates it for you that you truly understand it, can explain it to yourself, think you might be able to explain it to others – or, even better, shove the articulation at them and be all THAT RIGHT THERE, THAT’S WHAT I’M TALKING ABOUT. You know that kind of thing?

Yeah, that.

Eveleth’s overall thesis is that “bodyhacking” isn’t new at all, that it’s been around forever in how women – to get oversimplified and gender-essentialist in a way I try to avoid, so caveat there – alter and control and manage their bodies (not always to positive or uncoercive ends), but that it’s not recognized as such because we still gender the concept of “technology” as profoundly masculine:

Men invent Soylent, and it’s considered technology. Women have been drinking SlimFast and Ensure for decades but it was just considered a weight loss aid. Quantified self is an exciting technology sector that led tech giants such as Apple to make health tracking a part of the iPhone. But though women have been keeping records of their menstrual cycles for thousands of years, Apple belatedly added period tracking to its Health Kit. Women have been dieting for centuries, but when men do it and call it “intermittent fasting,” it gets news coverage as a tech trend. Men alter their bodies with implants and it’s considered extreme bodyhacking, and cutting edge technology. Women bound their feet for thousands of years, wore corsets that altered their rib cages, got breast implants, and that was all considered shallow narcissism.

As a central personal example, Eveleth uses her IUD, and this is what especially resonated with me, because I also have one. I’ve had one for about seven years. I love it. And getting it was moderately life-changing, not just because of its practical benefits but because it altered how I think about me.

The insertion process was not comfortable (not to scare off anyone thinking of getting one, TRUST ME IT IS GREAT TO HAVE) and more than a little anxiety-inducing ahead of time, but I walked out of the doctor’s office feeling kind of cool. I had an implant. I had a piece of technology in my uterus, that was enabling me to control my reproductive process. I don’t want children – at least not right now – and my reproductive organs have never been significantly important to me as far as my gender identity goes (probably not least because I don’t identify as a woman), but managing my bits and what they do and how they do it has naturally been a part of my life since I became sexually active.

And what matters for this conversation is that the constant task of managing them isn’t something I chose. Trying to find a method that worked best for me and (mildly) stressing about how well it was working was a part of my identity inasmuch as it took up space in my brain, and I wasn’t thrilled about that. I didn’t want it to be part of my identity – though I didn’t want to go as far as permanently foreclosing on the possibility of pregnancy – and it irked me that it had to be.

Then it didn’t have to be anymore.

And it wasn’t just about a little copper implant being cool on a pure nerd level. I felt cool because the power dynamic between my self and my body had changed. My relationship between me and this set of organs had become voluntary in a way entirely new to me.

I feel like I might not be explaining this very well.

Here: Over thirty years ago, Donna Haraway presented an image of a new form of self and its creation – not creation, in fact, but construction. Something pieced together with intentionality, the result of choices – something “encoded”. She offered a criticism of the woman-as-Earth-Mother vision that then-contemporary feminists were making use of, and pointed the way forward toward something far stranger and more wonderfully monstrous.

The power of an enmeshing between the organic and the technological lies not only in what it allows one to do but in what it allows one to be – and often there’s no real distinction to be made between the two. We can talk about identity in terms of smartphones, but when we come to things like technologies of reproductive control, I think the conversation often slips into the purely utilitarian – if these things are recognized as technologies at all.

Eveleth notes that “technology is a thing men do”, and I think the dismissal of female bodyhacking goes beyond dismissal of the utilitarian aspects of these technologies. It’s also the dismissal of many of the things that make it possible to construct a cyborg self, to weave a powerful connection to the body that’s about the emotional and psychological just as much as the physical.

I walked out of that doctor’s office with my little copper implant, and the fact that I no longer had to angst about accidental pregnancy was in many respects a minor component of what I was feeling. I was a little less of a goddess, and a little more of a cyborg.

And lingering cramps aside, it felt pretty damn good.


I only heard the term “blockchain technology” for the first time this past autumn, but in the last few months, I’ve become pretty absorbed in the blockchain world. Initially I was intimidated by its descriptions, which struck me as needlessly abstruse – though, in a perfect chicken-and-egg scenario, I couldn’t be sure, since the descriptions didn’t offer an easy understanding of how it worked. What compelled me to press on in my blockchain research was the terminology surrounding it. I’m a long-standing advocate for open source, and blockchain’s default descriptors are “distributed” (as in “distributed ledger”), “decentralized” (as in “decentralized platform,” a tagline for at least one major blockchain development platform [1]), and “peer-to-peer” (the crux of all things Bitcoin and blockchain). These words all spoke to my f/oss-loving heart, leading me to click on article after jargon-heavy article in an effort to wrap my head around the ‘chain. As I learned more about it, I came to understand why it’s begun to garner huge amounts of attention. I don’t like to get too starry-eyed about a new technology, but I too became a blockchain believer.
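For readers who, like me, found the early descriptions abstruse: the core idea is simpler than the jargon suggests. Each block carries a cryptographic hash of the block before it, so altering any past record breaks every later link. Here’s a deliberately minimal toy sketch in Python (my own illustration, nothing like a real implementation – real blockchains add consensus, networking, and much more):

```python
import hashlib
import json

def make_block(data, prev_hash):
    """A toy 'block': some data plus the hash of the previous block."""
    payload = json.dumps({"data": data, "prev_hash": prev_hash}, sort_keys=True)
    return {
        "data": data,
        "prev_hash": prev_hash,
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    }

def verify(chain):
    """Check that every block's prev_hash matches the prior block's hash."""
    return all(
        curr["prev_hash"] == prev["hash"]
        for prev, curr in zip(chain, chain[1:])
    )

chain = [make_block("genesis", "0")]
chain.append(make_block("alice pays bob", chain[-1]["hash"]))
chain.append(make_block("bob pays carol", chain[-1]["hash"]))
assert verify(chain)

# Tamper with history: rewrite an old block (and recompute its hash)...
chain[1] = make_block("alice pays mallory", chain[1]["prev_hash"])
assert not verify(chain)  # ...and the later links no longer match
```

That link-breaking property is what lets a ledger be “distributed” among parties who don’t trust each other: everyone can independently verify that no one has quietly rewritten the past.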

I’m in growing company. Even though the technical structure has been around since at least 2008 [2], when Bitcoin (which blockchain was originally developed for) was introduced to the public, blockchain-without-Bitcoin was infrequently discussed until the past year. In January 2016, the World Economic Forum listed it as one of the foundational technologies of the Fourth Industrial Revolution [https://www.weforum.org/agenda/archive/fourth-industrial-revolution]; in March, the Chamber of Digital Commerce held the first-ever DC Blockchain Summit, which addressed issues of policy and regulation at the federal level. Since 2015, the number of blockchain conferences and major news stories has been snowballing. Blockchain’s status in the world of tech media has become formidable, and general-interest outlets have wasted no time in spreading the digital gospel. It’s arguably gotten to the point where the blockchain mythos now overshadows its reality. It strikes me as irresponsible to write about it without first giving a few words to its image – the hype has become a fact unto itself, and any accurate reporting must deal with it as such.

In my Theorizing The Web talk “Block Party People: Off The Bitcoin Chain,” I argue that blockchain offers tech media a unique opportunity to benefit from years of hindsight. Internet technologies in their earliest stages have historically been written about in terms designed to appeal to their developers and early adopters – in other words, to those with the professional power, money, and intellectual access to take part in shaping the future of technology. This is by its very nature a narrow group, particularly where early adoption is concerned: it entails financial and cultural privilege that’s unavailable to most people.

Of course, there are plenty of reasons to target specific audiences when writing about emerging technologies. One is sheer comprehensibility: when a tool is in its earliest stages of development, layman’s terms and easily-understandable use cases have yet to materialize. As a general rule, the appropriate metaphors only emerge after a certain amount of time. But those interested in learning about new technologies in non-layman’s terms, the ones who want to pore over dense, jargon-filled texts and abstraction-heavy descriptions, aren’t always professionals, and they’re not necessarily in the financial situation to become early adopters. They also don’t always fit the stereotypical image of an early adopter: sometimes they’re female, sometimes they haven’t gone to college, sometimes they live far away from a major city. Though the mainstream media has fallen in love with its (moneyed, masculine) image, the Silicon Valley techie is a very particular flavor of geek.

As one would imagine for a tool designed specifically for Bitcoin, blockchain is uniquely well-suited to streamlining digital financial transactions. While it can impact virtually anything that relies on Internet protocols, its applications within finance are much more apparent than for any other business sector (at least for now). In line with this, those most heavily invested in blockchain aren’t exactly the Occupy Wall Street crowd. One major blockchain initiative is the Hyperledger Project [3], an open source, cross-industry effort to develop an open standard for blockchain. The Hyperledger Project is spearheaded by the Linux Foundation and IBM; partners include J.P. Morgan, Wells Fargo, Hitachi and Intel, along with a number of other large companies and V.C.-backed startups. Though it’s not the only blockchain research and development initiative, the Hyperledger Project is emblematic of the general scope of interest in blockchain. It’s fair to say that the financial industry and the corporate world are very well-represented here.

I don’t want to suggest that this group should divest its interest in blockchain. Far from it: we need that type of power to develop broad-scale research. However, I do believe it’s critical that groups more representative of the average citizen – the person who’s not in a position of power at a global financial institution, large tech company or well-funded startup – become a part of the blockchain conversation. Those individuals may have different ideas about the technical protocols that become standard for blockchain over the coming decades. We’re a more tech-savvy society than ever before, and opening up the discussion to as many people as possible now, when the technology is still in its infancy, can work to ensure that it helps as many people as possible in the future. A big part of starting that conversation lies in how blockchain is presented to the public.

In simple terms, my work on blockchain has been guided by a desire to include more diverse audiences in the subject. As I was developing my research, though, I began to get cold feet. In the midst of fleshing out my clarion call for blockchain reporting to value inclusivity, I realized that I’d be treading in all sorts of dangerous territory. On the one hand, there’s a lot of antagonism in the Bitcoin community about the use of blockchain without Bitcoin. Suggesting uses of blockchain not only outside of cryptocurrency, but for non-finance-related, socially equitable causes is a far cry from the freewheeling anarcho-capitalist ends championed by certain Bitcoin enthusiasts. I have no interest in inciting the wrath of the cryptocurrency community, but my perspective on this is undeniably at odds with large parts of it. On the other hand, I’m not a blockchain developer, and despite spending months reading about it, writing about it (including reporting on the DC Summit for a major Bitcoin news source [4]) and generally immersing myself in all things blockchain, I still doubted whether I was qualified to offer a real opinion on it.

I’m aware of Impostor Syndrome [5], but I still couldn’t help but wonder if I’d ventured too far into forbidden territory with my blockchain investigations. I worried that I’d be called out as having a naive understanding of technology and business, and would walk away from the whole project having done damage to my credit as a researcher. In fact, throughout the course of the work, I frequently thought that I should just give this project up. The irony of this isn’t lost on me: as I was trying to offer encouragement for those who may not think of themselves as having power and influence in the tech world, I was losing confidence in myself. I became the very person I was trying to write for.

Of course, I didn’t quit – if I had, you wouldn’t be reading this. Part of my motivation to keep going was in realizing that by giving up, I’d be turning my back on the very ideals that motivated the work. I’ve made some peace with the reality that I may not fully understand what I’m talking about. The fact of the matter is that such a risk is always there, no matter how far you advance in a research-based career. That’s true even for those writing the code. Fear of appearing naive or ill-suited to offer a perspective on technology is toxic, and I would argue that it’s no small part of why the community isn’t more diverse.

The title of the panel I’m on is “Hack The Planet,” which I thought was odd at first, since my talk doesn’t directly relate to hacking. In a way, though, what’s kept me going throughout my work on blockchain are what I think of as hacker values: curiosity, playfulness, and a tolerance for risk. So it’s in the spirit of hacking that I’m doing this work, and that I encourage others to take an open mind about it. It’s not always easy, but I think that in the long run, it’s for the best.


Emma Stamm is a writer, web developer and PhD candidate at Virginia Tech. You can find her online at www.o-culus.com and @turing_tests.


[1] This is Ethereum: https://www.ethereum.org/

[2] This is based on the 2008 publication of Satoshi Nakamoto’s white paper describing Bitcoin, which is generally recognized as the beginning of Bitcoin/blockchain. www.bitcoin.org/bitcoin.pdf

[3] https://www.hyperledger.org

[4] https://www.coindesk.com/dc-blockchain-summit-2016/

[5] https://en.wikipedia.org/wiki/Impostor_syndrome


Stephen Hull, editor of Huffington Post UK, created a bit of a stir a week ago when he admitted that the site does not pay its writers.

That statement alone would have raised eyebrows high enough. What made a lot of eyebrows especially frowny and angry is the way in which he then proudly defended this practice as something admirable, something the site’s unpaid writers should not only accept but be pleased about:

…we don’t pay them, but you know if I was paying someone to write something because I wanted it to get advertising pay, that’s not a real authentic way of presenting copy. So when somebody writes something for us, we know it’s real. We know they want to write it. It’s not been forced or paid for. I think that’s something to be proud of.

Let’s unpack that language. Let’s call particular attention to the words authentic and real. Authentic is a kind of ideal, an unquestioned Goodness; real accompanies it as a matter of course. At the conceptual opposite end is fake, which is unquestionably Bad. So writing that’s been paid for – even worse, that’s been produced with the expectation of payment – is neither authentic nor real. It’s fake, and therefore unreal, undesirable, and bad.

(Apparently paying someone is tantamount to “forcing” them, which I can’t even.)

(Actually, no, I can. The implication there – I think, it’s not entirely clear to me – is that writing produced for pay has somehow been pried out with a crowbar rather than created with a magical flourish of heartfelt inspiration. So again: fake and bad.)

(I’ve written most of my professional fiction with crowbar firmly in hand.)

I don’t think this can be emphasized enough: Stephen Hull is essentially saying that if you accept payment for your writing, your writing is bad and you should feel bad. He would probably disagree that he’s going that far, but he would be wrong.

He would also disagree with the claim that because writers aren’t paid, they aren’t compensated, because writers who have their work published by the Huffington Post get something far more valuable than fake bad money – which is, of course, exposure. Which, as Wil Wheaton says, does not enable you to pay your rent.

There are a lot of things that are fairly horrible about this, and so far I haven’t said anything that other people haven’t already articulated better. Aside from the issues above, there’s the fact that the Huffington Post is profitable to the tune of millions of dollars and can completely afford to pay its writers (just as an aside, the money I generally take for my fake bad short stories starts at the Science Fiction and Fantasy Writers of America’s minimum rate, which is six cents a word – between $200 and $300 per story – and which is paid to me by relatively unprofitable fiction magazines who nevertheless somehow manage to scrape together the resources to do so, maybe by digging between the couch cushions or something), so we’re dealing with a pretty cut-and-dried case of exploitation.

But beyond that, as Chuck Wendig points out, the even more problematic assertion here is that writing should not actually be considered labor at all:

The lie is this: writing is not work, it is not fundamental, it is a freedom in which you would partake anyway, and here some chucklefuck would say, haw haw haw, you blog at your blog and nobody pays you, you post updates on Twitter and nobody pays you, you speak words into the mighty air and you do it for free, free, free. And Huffington Post floats overhead in their bloated dirigible and they yell down at you, WE BROADCAST TO MILLIONS and DON’T YOU WANT TO REACH MILLIONS WITH YOUR MEAGER VOICE and THIS IS AN OPPORTUNITY FOR YOU.

The background for this is an even larger and more pervasive problem, and one that Millennials arguably face to an unprecedented degree: that the most important thing is to “do what you love”, and that anything not done for love is less legitimate (and I would add that in some cases the argument is that if you’re fortunate enough to do that, the love should compensate for low or even absent pay; see also unpaid internships). It’s the same kind of exploitation wrapped up in ostensibly noble ideology, and it’s one that emphasizes the gap between those who are privileged enough to survive just fine on Doing What They Love, and those who have to make a living however they can:

One consequence…is the division that DWYL creates among workers, largely along class lines. Work becomes divided into two opposing classes: that which is lovable (creative, intellectual, socially prestigious) and that which is not (repetitive, unintellectual, undistinguished). Those in the lovable-work camp are vastly more privileged in terms of wealth, social status, education, society’s racial biases, and political clout, while comprising a small minority of the workforce.

For those forced into unlovable work, it’s a different story. Under the DWYL credo, labor that is done out of motives or needs other than love—which is, in fact, most labor—is erased.

The credo of DWYL is a primary part of what allows the Huffington Post to get away with this offensive nonsense. Or at least to believe that it can and ought to.

Again, I’m not really saying anything new here. But what I don’t think I’ve seen addressed specifically enough is the fact that the Huffington Post is assuming and encouraging the assumption that a writer shouldn’t draw distinctions between the various kinds of writing they do. That if sometimes you write for passion alone, all your writing should be for passion alone. If you’re a real authentic writer, all the writing you do is either imbued with this real authenticity – or it isn’t.

This is insidiously, romantically appealing. It’s also utterly ludicrous. My professional fiction writing is not my fanfiction writing is not my essay writing is not my academic writing is not the writing I’m doing right this minute. These are all different realms and they’re different kinds of work, despite obvious similarities. Leave aside for the moment the extremely pertinent question of whether someone other than you is materially profiting from the writing you do for free, and consider that while I count all of those forms of writing labor in their own way, I personally determine whether I should be materially compensated for that labor. I do this by drawing distinctions not only between those different categories of writing, but by drawing distinctions regarding what I get out of doing this labor and who my audience is and what my relationship with them happens to be.

When I write professional fiction, I’m writing for a professional community that simultaneously takes writing seriously as an art form and considers it something worth set amounts of money. When I write fanfiction, I’m writing for a community that operates on the basis of a gift economy, where not only am I happy to not be paid but would in fact prefer that capitalism never get involved at all. When I write blog posts, I’m doing something similar in that I’m engaging in a conversation with a community and I’m doing so on my own time. Those distinctions are meaningful and legitimate and important, but by throwing words like authentic around, the Huffington Post is arguing that they aren’t. The only meaningful distinction is whether or not the writing is real.

Real writing isn’t worthy of compensation. In fact, it’s too good for compensation. It’s not work. It’s passion. It’s art. And something cannot be all three of those things simultaneously.

So while this is bad in and of itself, it’s part of something worse. And the lie isn’t only that passion and payment are mutually exclusive but that all writing is basically the same at the molecular level, and it exists as one option in a binary set. Which needs to be fought, and hard. As John Scalzi wrote back in 2012:

If you try to mumble something at me about writing for free on this site, I might feed you to wild dogs. When I write here, it’s me in my free time. When I write somewhere else, it’s me on the clock. Here’s a handy tip to find out whether I will write for you for free: Are you me?

And what I think every writer should adopt as a motto (emphasis mine):

If the answer is “no,” then fuck you, pay me.


This past year, I sort of disappeared from Twitter. Not completely – I’d poke my head in now and then – but for a number of reasons I stopped checking it at all regularly.

One of the things that ended up keeping me away for longer than I might otherwise have been was how it felt, those times when I poked my head back in. It was intimidating in a way it hadn’t been before. It was like I had been missing a long series of conversations that added up to one enormous conversation, and I no longer had any idea what was going on. Friends and colleagues and friend-colleagues with whom I used to be in nearly constant touch were suddenly discussing things I didn’t know anything about, and the prospect of trying to catch up was overwhelming. I felt like I had nothing to contribute to the conversation I left behind months ago. It was like a party I would wander into, circulate in kind of a distant and awkward fashion, and leave again. Because I had nothing to say.

I like people, but I’m very bad at feeling like I belong anywhere. It’s my default to feel like a fraud in any crowd I’m a part of, and awkwardness has a way of turning into a withdrawal spiral. This began in physical space, but physical space doesn’t have a corner on making me feel that way. Not at all.

I still don’t check Twitter very regularly.


One of the things that exacerbates this, in the SFF writer community, is cons. People talk a lot about cons. Who went, what the panels were like, what happened, who said what, what horribly embarrassing things occurred shortly before sunrise after the consumption of large quantities of strong beverages. People tweet during the cons, about the cons. People tweet after the cons, about the cons. People tweet prior to the cons, about the cons.

Please note the extraordinary self-control I’m exercising here by not making a conversation pun. There’s already enough of that in the names of the cons themselves and I don’t think I should add to it.

So great. If you go to the cons, it’s wonderful. But cons are expensive. Some of them are very expensive. Some of them have registration fees well in excess of $100, and that’s often the smallest expense.

Cons are important. Cons are where you meet people. Cons are where you make friends. Cons are where you make connections. Cons are where you get your work known, yourself known. If you want to make a career of this, you really need to go to cons. Or man, it sure does help.

And then when you get back you have something to talk about on Twitter, with the people you now know.

Unless you’re bad at Twitter, and you can’t afford to get to a con.

I’ve heard many people say they can only afford to go to one con a year. They have to pick carefully. This is their social circle. This is their career. If they don’t go, there are consequences.

…Okay, I legitimately didn’t mean to make that one. Sorry.


Here’s another wrinkle: at least in SFF writerdom, there is really no meaningful distinction between friends and colleagues. Which, sure, is true of a lot of fields. But these relationships are particularly close, and the professional utility of these friendships can be very high. There are costs to missing out, to not being at the right place at the right time to meet the right person. Missed connections are a real thing. Because here’s another wrinkle: it’s not just about being talented. It’s about being noticed.

Yeah, generally you get noticed when you’re very talented. But not always. Sometimes people just… don’t get noticed. You can write the best book ever but people can’t buy it if they don’t know it’s there. There are thousands of short stories published every year, and many of them are fabulous. Not all of the fabulous ones get seen by the right people at the right time. There are cracks, and people and work together fall through.

My sense is that this isn’t a truth people are very comfortable with, because its implications aren’t comfortable ones. But I do think it’s true.


For me personally, this becomes especially poignant around about awards season. People are talking about the work that caught their attention, that excited them, that they think is especially worthy of note. People are making lists. People are posting all their eligible stuff and inviting examination.

I don’t know of a single person who will cop to enjoying that, the Here Is My Stuff thing. I hate it. It makes my skin crawl. But you sort of have to. Because there’s so much stuff out there, and it’s easy for your voice to get lost.

It’s easy, if you’ve been away for a while, to come back and feel lost. It’s easy to be silent about your own stuff.

So it’s easy to be forgotten. Or God, it really feels that way.


It’s an old and very bitter myth, this idea that being successful in writing is “all about who you know.” And yeah, it’s not all about that. But it is about that. It’s about which conversations you can be part of, with who, regarding what. It’s about who’s keeping an eye on you and what you produce – which attention you earn, but even so. It’s about the room parties and the panels and BarCon, HallwayCon, FloorCon, all the places people congregate and talk shop and talk shit. It’s about making friends and it’s about self-promotion, and again, I think that when you’re a writer the line between those things is practically nonexistent.

There are all kinds of reasons why someone might be bad at social media, having to do with both the body and the mind – because engagement with social media is embodied, and no mental illness or emotional problem exists in isolation from someone’s physical experience of life. Social anxiety isn’t just about the difficulties of walking into an actual room full of actual people. Depression isn’t just about not going outside.

There are all kinds of reasons why someone might not be able to go to cons – money and health being the two big ones, though there are lots of others.

So big surprise: the things that make life and work difficult in terms of everything – the things that make it easier for certain people to be marginalized and unheard and rendered invisible – are at work here too.

This is especially ironic when we’re talking about writing, which is by nature such a solitary thing. The actual business is done alone and internally. The business side of the business is the exact opposite, and I don’t think it comes easily to many of us. For some of us it’s nearly impossible. A lot of us are not exactly rolling in cash. I’m probably not going to Worldcon this year, and World Fantasy Con is a big question mark. But I’m scraping together money and courage and medication, and going to whatever cons I reasonably can, because I’m lucky enough to be in a position where I can go to any, and because I basically have to.

Because I know there are consequences for not doing so.

And I guess I’m also hoping that when I get back, I’ll hop on Twitter and have some things to talk about.

There are two primary things that background this, that are probably necessary to know.

The first is that this past year has been extraordinarily hard for me. The second is that it’s been very difficult to talk openly about.

I’ve always tried to be honest online – about what I’m going through, about what I’m wrestling with, and especially about mental illness, which I think is much less of a forbidden topic of conversation than it used to be but which I also think can still stand to be discussed more than it is, and especially in what we would probably call professional settings.

I’ve done this because I value vulnerability – or I want to. I feel like it’s something to aspire to, in no small part because I absolutely suck at pretending that everything is fine if I have to do so for more than five minutes at a stretch. It’s going to be awkward and uncomfortable no matter what I do, so generally I go with what I regard as the lesser of two evils. When I think I can.

And there’s also that I hope vulnerability might eventually help me.

But it turns out that I’m even worse at everything – pretending and talking openly about my shit – when things genuinely get rough.

So this past year things genuinely got rough, and for the most part, in most places, I clammed up. Because I didn’t know what else to do. I didn’t actually hide the fact that things weren’t going so great, but I didn’t do a whole lot of talking in a public way about the specifics and the uglier parts of what I was feeling regarding everything. I just didn’t want to go into it, in significant part because I was terrified of what people might think.

Things are better now. Sort of. And part of what I’m doing as an effort to make them better is to un-clam, to break myself open from the inside out and be – literally – painfully honest about stuff. At least a lot of stuff. Or to try. It remains incredibly difficult.

Getting going on this post, for example, was much harder than once it would have been.

I made a post a few days ago on my author blog. Just clenched everything and threw it all out there, and left it for people to do whatever they wanted with it. I don’t know that I felt better after doing so, but I certainly didn’t feel worse. A few people on Twitter and Facebook told me that they were really feeling what I was talking about. That was nice. Then I sort of moved on and left it alone.

Then a day or so ago it ended up in a WordPress recommendation Twitter account, and my inbox hasn’t stopped exploding since.

I don’t even know how many people have commented to say, essentially, me too and I really needed to read this. I haven’t tried to keep count. I haven’t honestly looked at the page. I think I’m a little afraid to and I’m not completely sure why. I do know that it’s a lot. I’ve been getting message after message that amounts to what I was talking about in the post itself: people in pain looking for connection. I knew they were out there; part of why I wrote the post in the first place was to state my belief that a fair number of us aren’t doing so hot and don’t know how to talk about it to anyone. But I didn’t expect to hear from so many of them.

It wasn’t until that happened that I realized something strange (though I don’t think it’s surprising): I wrote about looking for connection in vulnerability and the sharing of pain, and I didn’t expect to connect. Not like this.

Which got me mulling over vulnerability itself, and this kind of writing and the context in which we shoot it out into the world.

There isn’t only one kind of vulnerability, is the thing. There’s intimate person-to-person vulnerability, direct communication with a particular someone or someones about what’s going on in your head and heart. By no means does this have to be taking place face-to-face; what really matters – as far as I’m concerned and as far as my own experience goes – is that you’re speaking to someone, and that person is actively listening to you, and both of you know it.

In other words, you’re having something at least vaguely resembling a conversation.

Then there’s the kind of vulnerability I engaged in when I wrote the post. Which was directionless, openly broadcasted vulnerability. There were specific people I had in mind when I wrote it, sure. Some of them talked to me about it. But I wasn’t writing to them. I was writing to everyone and everything, writing to an undifferentiated public, some of whom were people I knew but the vast majority of whom are not. I wrote it, left it there, walked away, and on some level I think I never expected anything else to happen.

What happened is that the latter form of vulnerability began to slide into the former, and I didn’t know it was coming. And it was jarring. It was a little disturbing.

It’s a lot overwhelming. I’m still working up to responding to most of it.

I don’t think I’m saying anything that isn’t pretty self-evident. I don’t think there’s anything piercingly insightful here, or new or surprising. Yet I was surprised. It didn’t occur to me that one of these things could become the other; it didn’t even occur to me that there was a difference. I wasn’t thinking about it at all.

I believe it’s worth thinking about. Because I was writing about loneliness and connection, making myself available for it, and people reached out. Strangers, but also not. Because none of us are okay.

And that’s a deep thing to be united by.

Here’s the point (maybe, assuming I have one): The conversation about disconnection and loneliness regarding digital technology is old and tired and boring and I don’t think any of us want to have it ever again. But disconnection and loneliness can be more piercingly, viscerally felt in these digital spaces, and they can be confronted in an immediate, nuanced, and difficult way that I don’t think any other arena allows for. The ways in which we’re lonely and why. What exactly we’re afraid of. What hurts. How we want to get well. How we want to reach out and hope that there might be someone reaching back. And how we might not expect that when it happens, because private and public are after all not binary categories and connection means a hopeless number of different things.

Like I said, no piercing insights, and I’m not coming away from this with any answers of any kind. What I’m coming away with is the knowledge that a complicated thing is even more complicated than I thought, and there’s a lot more to be afraid of.

But I think there’s also a lot more to reach out for.


“Public sociology”, for me, has always meant teaching. I obviously don’t mean that teaching is the only legitimate kind at all times and in all places, but to the extent that I’m still a sociologist, and a public one, teaching is how I do that. It’s what I feel comfortable with. It’s what I know I can do well, and it gives me real and observable and frequently immediate results, when I get results at all. I convey all this information about an entire discipline, an entire approach to the business of everything in a single semester, I make it as coherent as I can to a bunch of – usually – total beginners, and I hope for the best.

And every semester there’s at least one student who comes up to me and says this is so weird and so cool, I never looked at anything like this before, I didn’t know you could, this is my favorite class now.

(That has already happened in both of the sections of Intro I’m teaching this semester.)

But more and more frequently, students are coming up to me – or, alternately, talking to me about it via email and writing assignments – and saying that what they love about the class is how it’s either giving them a vocabulary they can use to articulate stuff they already knew, or augmenting a vocabulary they already possessed.

More and more of my students are coming into these courses already knowing a lot of the concepts I’m teaching them. I used to get some balking when we got to privilege, no small degree of resistance when we started discussions of race and racism. But now I introduce privilege and I see nods. Okay. Yeah. They know this already. It’s not so scary for most of them. They get it. It’s a feature of how they perceive reality. Privilege. Absolutely.

This was especially noticeable to me, the first couple weeks of this semester, because it’s been a year since I taught anything.

I did a bit of mulling before I arrived at what is frankly a pretty obvious conclusion: a significant portion of the work I was there to do was already being done elsewhere.

And going by the number of hands that always go up when I ask how many of them have never taken a sociology course, it’s being done somewhere that isn’t a classroom.


I stopped teaching a year ago because my graduate assistantship concluded – and I saw this as an opportunity to find another job for a while, which didn’t happen, so here I am again – but also because I was feeling increasingly disillusioned by what I perceived as the place of teaching in an R1 state school sociology department like the one to which I’m attached. I should be clear: it’s not that we have bad instructors. We have some amazing instructors. We do very good work. We do have people who value teaching as much as it deserves – in my opinion – to be valued.

But I couldn’t escape the feeling that a lot of the rest of it was lip service. That undergraduate teaching was, for many people, an afterthought. It was something you slung at the graduate students because it was a distraction from what the faculty was really there to do.

Look: I fully believe that intro courses are some of – if not the – most important courses any sociology department ever provides. I think they’re everything. I think they’re our one big chance to engage fresh generations of students, many of whom are extremely bright, regardless of whether or not we get sociology majors out of them. I honestly don’t want everyone to walk out of my class as sociology majors. I would prefer that the vast majority don’t. I want people to walk out of my class and go off to work in government, in medicine, in law, in business, in advertising. I want this way of thinking to go everywhere that’s not a sociology department. I don’t think this work can be overvalued. I don’t think it’s possible to do that.

Fight me.

I couldn’t escape the feeling that I was… Well, clearly I wasn’t alone. I know I’m not.

But I felt alone.

So it felt, at that point, like maybe it was time to say goodbye.


Here’s what I did on my last class day of the 2014 spring semester: I sat down on a table in front of the class and I just talked. I talked for nearly an hour, with no notes and almost no plan, and I talked without pausing. I told them everything: I told them about the state of a lot of academia, about the probable state of a lot of the departments through which they were moving, about the damage that encroaching capitalism was causing, about the corrosive power of money. I told them about how I felt, about how I had seen some of my brightest fellow students beaten down by what a lot of this whole thing has become, about how this system fails people. About how it fails them. About how a great deal of higher education in this country is increasingly a form of fraud, about the place of students in a university like the one in which I teach. About how I felt, about what I saw when I looked back at the last five years of my graduate career, about how sad I was and how angry and how scared. For myself, and for them.

Clearly my opinion is biased, clearly it shouldn’t be taken at face value, but I swear, looking at them, no one in my position had ever talked to them like that before. A bunch of them stayed after and talked to me for another hour, essentially an extended conversational version of everything I had been saying.

What I closed that speech with – and what I said over and over in the conversation that followed – was what Nathan Jurgenson said to me in a conversation a couple of years back, about this very thing.

I told them that the work that a lot of academia used to do – the work it’s very good at but is being prevented from doing by its own damn self – will still be done. It’ll just increasingly be done elsewhere. It will find a way. It’s the kind of work that can’t really be stopped, even when the framework built to support it collapses.

I told them they could make the spaces in which that work would be done, if they wanted. I told them they didn’t need systems that didn’t work for them. Or they could, with a lot of time and a lot of effort, find ways to separate themselves. I told them they made me hopeful. It was misty-eyed, yeah. It was profoundly sentimental. That’s who I am and I make no apologies for it.

So then I said goodbye.


And here I am again, after a year away. Wasn’t sure what it would feel like. Had deeply mixed feelings, as a matter of fact. Yes, I love teaching – and I genuinely believe I’m very good at it – but I said goodbye and I was ready for that goodbye to last a long time. It’s been jarring to return this soon. I’ve honestly been feeling a little resentment at being forced back by practical considerations.

Then I started the semester… And little by little it began to dawn on me that maybe I was right when I told them what I told them. Maybe Nathan was right.

No; no maybe. We were right. Not all of these kids are coming in already knowing a lot of what I want them to know. But many are. They pay attention – which is at the core of being alive in the world. There’s a lot they don’t know, and I have a lot to teach them. But there’s so much they do know, and I look at them and I see that the work – in terms of how they think, how they approach what they see around them – is being done. It’s being done, and it’s not necessarily being done in universities, and it’s not necessarily being done by formally trained people with degrees.

People disparage Tumblr and its ability to teach people the theory that underpins social justice, and there’s all that ridiculous ew SJW bullshit flying around. And I actually don’t disagree with some of it. But Tumblr is where a lot of that work is being done.

Again, if you disagree, I invite you to engage with me in combat.

Not just Tumblr, either. I think everywhere. I think these conversations – about what a just society looks like, about lives that matter and forces that attack those lives, about why inequality persists and what can be done, about the deepest elements of institutions and culture that produce and reproduce oppression, about looking at individuals and seeing the larger contexts within which they exist – they’re happening. They’re happening all over. They’re finding ways to happen. People want them to happen.

I still think intro courses matter. I think they might matter more than any other course we teach.

But I’d like to think that maybe, one year later, in some very specific ways… they matter a little less.


Okay, so. Apple’s iOS8 Health app is an issue, at least potentially.

To recap, it’s an issue in significant part – and for the purposes of this – in terms of its effect on people who experience disordered eating and/or obsessive-compulsive behaviors and thoughts. Health trackers in general have the potential to do this, and in fact to be quite harmful. This is primarily because health trackers are highly quantitative in nature and extremely oriented toward the monitoring of details, and obsessive-compulsive tracking is one of the primary symptoms of an eating disorder – and the Health app is a focal point for this kind of monitoring. Though it allows for the entry of data, its primary purpose is to better curate data from other health apps. But it still exists. In fact, it not only exists, but it can’t be removed. It can be hidden, but you – the user – still know it’s there. It will be difficult to ignore even if it can’t be seen. It gnaws. Trust me, things like that do.

It’s additionally an issue because these kinds of thoughts and behaviors aren’t something that people can just choose to stop doing. That’s why it’s a disorder, and it’s one of the most distressing things about this kind of disorder: if you’re presented with a relatively easy way to manifest symptoms, often you will even if you desperately don’t want to:

One of the nastiest things about OCD symptoms – and one of the most difficult to understand for people who haven’t experienced them – is the fact that a brain with this kind of chemical imbalance can and will make you do things you don’t want to do. That’s what “compulsive” means. Things you know you shouldn’t do, that will hurt you. When it’s at its worst it’s almost impossible to fight, and it’s painful and frightening.

Even if you don’t do anything, you’re still thinking about it. Over and over, obsessively. Thoughts are harmful, often physically. Thoughts themselves can trigger a relapse in someone in recovery from these kinds of disorders.

So now there’s the Apple watch, and some things have been added that are even more problematic.

Specifically, there are some shiny new apps. There’s a workout app, which naturally allows one to input goals and plans for physical exercise and track their progress, but the real kicker here is the activity app, which tracks almost every important aspect of the user’s regular physical activity through the day: the number of calories burned and the amount of time spent in motion, along with a reminder to move when one has been stationary for a certain amount of time.

So what? So: the app is active all day. Or it possesses that capability. If we conceive of this kind of tracking as invasive for people with particular disorders – and remember that with these kinds of disorders something can be invasive simply by being there – then tracking that functions all day, tracking everything one does, is invasive to the nth degree.

It’s important to note here that it’s not only thoughts that matter in this context, for someone using this specific device. Concepts also matter. Even if someone else might not find the very prospect of this kind of app upsetting, someone who deals with the world this way might very well be upset by it. Upset is probably not a strong enough word. People are often skeptical about “triggers”, about people who are triggered by things – especially things generally seen as innocuous – but they’re real and they’re legitimate.

This is a health tracking app that’s locked into a device and is capable of constantly measuring just about everything meaningful you do, in terms of Apple’s standard of “health”. Yeah, that’s a problem.

Okay, so just don’t buy an Apple Watch. But the problem there is that – as with the original Health app – Apple wasn’t thinking about this. The idea that someone might want to buy an Apple Watch but feel unable to do so because of disordered thoughts and behaviors simply wasn’t on the designers’ radar. They made these apps for a generalized default person with a generalized default attitude toward a generalized default idea of what “health” means. At the very least, design that essentially erases an entire category of potential users is probably worth some consideration. As Selena Larson points out in the Kernel article linked above:

Fitness apps and health trackers aren’t inherently bad or good. They’re tools that can be used in different ways and come with their own built-in blind spots and biases. Apple’s decision to force Health onto iOS 8 devices could endanger those who have a compulsion to track themselves already. But for the great majority of people, monitoring their health should pose no harm.

Apple sees the world a certain way and designs their products accordingly. No, I’m not faulting them for that. But I do want to call attention to it, because – like all ableist design – it stands out as part of a much larger set of cultural and social marginalizing processes. Apple’s focus is on the “majority”. We need to ask whether that’s all we should expect, or whether better and more accessible might be possible, and what greater effects that better design might have.

But there’s another interesting question here, and it’s the degree to which an Apple Watch truly differs from an iPhone, in terms of how one might use it. Tom Greene argues:

For the Apple Watch to take off, it will need to carve out a distinct value proposition that a smartphone alone cannot deliver. After all, we all pretty much “wear” our smartphones everywhere we go. The combination of Apple’s iPhone 6 technology, coupled with my Withings products seem to make the health-related aspects of Apple Watch unnecessary.

What’s the difference, in practical design terms, between a watch on your wrist and something you carry around? This raises even bigger questions about physical relationships with physical devices, in terms of proximity and how – in an embodied sense – we experience what are essentially cases for apps.

What’s the difference between a watch case and a smartphone case? Does the packaging itself matter? How?

I think these are questions for another post. But I wanted to leave them here, because I don’t think we can consider the design of an app without – at least to some degree – considering the physical thing the app rides around in.

Sarah is on Twitter – @dynamicsymmetry

photo by Aaron Thompson

For last year’s iteration of Theorizing the Web, we took a new step in our development as a conference and produced an anti-harassment statement.

We felt it was important, for a number of reasons. It’s not a matter of feeling; it is important. It seems like – fortunately – this is an issue to which more and more conferences and conventions are paying attention. There’s more in the way of an ongoing discussion than there once was. There’s a growing recognition that these kinds of stands need to be taken and these kinds of explicit guidelines need to be established in order to make spaces safe for all attendees – or at least to try to make those spaces as safe as possible.

Because here’s the thing: these kinds of policies/statements are always going to be works in progress. They’re never going to be finished. They’re going to be subject to the forces and pressures of real-world application, and as such they’re going to be tossed into situations of the general kind they were written for but which they frequently weren’t specifically designed to address. There are always things you don’t anticipate. Especially if you’re coming from a place of privilege, which – among other things – stunts the growth of the imagination. It just does. It hurts one’s ability to prepare.

So last year we had an anti-harassment statement. We put it online before the conference and tried to call people’s attention to it. We solicited and gratefully listened to a number of extraordinarily helpful comments and criticisms. We needed that help. We couldn’t do that alone.

We still need that help, because we still can’t do this alone.

Last year the statement was tested. I’m not going to go into the details now, but I invite you to read that post so you have some sense of what the process was like. This year it might very well be tested again. So we want to make sure it’s as clear and strong a statement as it can possibly be.

Here it is:

anti-harassment statement

In light of recent public conversations spurred by incidents at other conferences, and in the spirit of being both proactive and inclusive, it is important that we communicate the Organizing Committee’s commitment to providing a harassment-free space for participants of all races, gender and trans statuses, sexual orientations, physical abilities, physical appearances, body sizes, and beliefs. Harassment includes, but is not limited to: deliberate intimidation; stalking; unwanted photography or recording; sustained disruption of talks or other events; inappropriate physical contact; and unwelcome sexual attention. We ask you, as participants, to be mindful of how you interact with others—and to remember that harassment isn’t about what you intend, but about how your words or actions are received.

In keeping with a central theme of Theorizing the Web, we also want to remind you that what is said online is just as “real” as what is said verbally.

By attending Theorizing the Web, you agree to maintain and support our conference as a harassment-free space. If you feel that someone has harassed you or otherwise treated you inappropriately, or you feel you have witnessed inappropriate or harassing behavior, please alert any member of the Organizing Committee (identifiable by our badges; you can also find photos of all Committee members on our “Participants” page). If an attendee engages in harassing behavior, the conference organizers may take any lawful action we deem appropriate, including but not limited to warning the offender or asking the offender to leave the conference. We welcome your feedback, and we thank you for working with us to make this a safe, enjoyable, and welcoming experience for everyone who participates.

If you have any comments, concerns, or suggestions for ways in which we can make this thing better, please get in touch with us here or in the comments below.

We’re going to do all we can to make this year’s Theorizing the Web a safe space for everyone. Thanks so much.