Photo credit: Fourandsixty

This July 4th, PBS viewers in the DC metro area were outraged to be reminded that they were watching television.

It’s actually not quite that simple, though it’s fun to phrase it that way. Here’s what happened: this past Monday was an extremely muggy and cloudy one in our neck of the woods; in other words, not at all the ideal climatic conditions for a fireworks display. PBS, in something of a bind over how to maximize the spectacle for its live broadcast of the Independence Day celebration in front of the White House, elected to blend archival footage of past fireworks displays into its live broadcast of the fireworks actually underway.

People were displeased.

Specifically, people who took issue with the decision claimed that it was an act of fakery, that it was a cheap move and made the broadcast less “authentic”. That it was almost somehow a lie. PBS responded on Twitter with: “We showed a combination of the best fireworks from this year and previous years. It was the patriotic thing to do.”

I’m going to leave aside the interesting fact that PBS is characterizing this move as “patriotic” and instead focus on the two other things that I find interesting.

First, these viewers forgot the Baudrillardian truth about TV and indeed about all media and really kind of everything ever: it’s basically “fake” by definition. What a viewer is shown is almost always carefully edited and packaged, or at the very least presented with a specific intent in mind. Even when it comes to live TV, there is no such thing as a fully objective and solely factual depiction of what’s actually going on. This frequently has little to do with a political agenda (the vitally important patriotism of this particular decision aside) and far more to do with spectacle, and PBS’s intent in showing the fireworks was to provide exactly that.

I think the people who were offended by what PBS did believed that PBS’s primary goal was to show something as it truly was. And I’m not saying that PBS didn’t have that goal at all. But I doubt that – consciously or unconsciously – it was at the top of their list.

People like to believe that in cases like this, what they’re seeing is “real” and “true”. It’s jarring and even disturbing to be reminded that they can’t be sure of that, and that indeed they should assume that what they’re seeing is never real or true. That by definition it basically can’t be. People want to buy into an illusion – an illusion that PBS is selling, which it is obviously not alone in – and they become uncomfortable and upset when the illusion is destroyed.

Which leads me to the second thing that strikes me about this: the fact that it was PBS.

As a long-time PBS viewer, I get the distinct sense that someone watching the channel might expect the exact opposite of a focus on spectacle and selling a constructed package of imagery, and far more of a focus on soberly presenting things with a commitment to authenticity. Given the culture of PBS and the probable cultural affiliation of many PBS viewers, I think it’s reasonable to believe that whether or not PBS intends it, there’s a kind of implicit contract between the network and the people watching it, as well as a desire for authenticity in particular on the part of those people.

So I think what we have here is in some respects a twofold destruction of illusion and a perceived twofold betrayal: what people prefer to believe they’re seeing in this kind of broadcast, and what a specific cultural product promises the people consuming it. When I found out about this, I almost immediately wondered how someone watching CNN, MSNBC, or Fox News (oh my lord Fox News) would be likely to feel if the same editorial decision had been made.

And I’d still like to ask PBS what exactly they meant by “the patriotic thing to do”. Maybe thick, low clouds are un-American.

Yeah, that’s probably it.


Rose Eveleth’s piece for Fusion on gender and bodyhacking was something I didn’t know I needed in my life until it was there. You know how you’ve always known something or felt something, but it isn’t until someone else articulates it for you that you truly understand it, can explain it to yourself, think you might be able to explain it to others – or, even better, shove the articulation at them and be all THAT RIGHT THERE, THAT’S WHAT I’M TALKING ABOUT. You know that kind of thing?

Yeah, that.

Eveleth’s overall thesis is that “bodyhacking” isn’t new at all, that it’s been around forever in how women – to get oversimplified and gender-essentialist in a way I try to avoid, so caveat there – alter and control and manage their bodies (not always to positive or uncoercive ends), but that it’s not recognized as such because we still gender the concept of “technology” as profoundly masculine:

Men invent Soylent, and it’s considered technology. Women have been drinking SlimFast and Ensure for decades but it was just considered a weight loss aid. Quantified self is an exciting technology sector that led tech giants such as Apple to make health tracking a part of the iPhone. But though women have been keeping records of their menstrual cycles for thousands of years, Apple belatedly added period tracking to its Health Kit. Women have been dieting for centuries, but when men do it and call it “intermittent fasting,” it gets news coverage as a tech trend. Men alter their bodies with implants and it’s considered extreme bodyhacking, and cutting edge technology. Women bound their feet for thousands of years, wore corsets that altered their rib cages, got breast implants, and that was all considered shallow narcissism.

As a central personal example, Eveleth uses her IUD, and this is what especially resonated with me, because I also have one. I’ve had one for about seven years. I love it. And getting it was moderately life-changing, not just because of its practical benefits but because it altered how I think about me.

The insertion process was not comfortable (not to scare off anyone thinking of getting one, TRUST ME IT IS GREAT TO HAVE) and more than a little anxiety-inducing ahead of time, but I walked out of the doctor’s office feeling kind of cool. I had an implant. I had a piece of technology in my uterus that was enabling me to control my reproductive process. I don’t want children – at least not right now – and my reproductive organs have never been significantly important to me as far as my gender identity goes (probably not least because I don’t identify as a woman), but managing my bits and what they do and how they do it has naturally been a part of my life since I became sexually active.

And what matters for this conversation is that the constant task of managing them isn’t something I chose. Trying to find a method that worked best for me and (mildly) stressing about how well it was working was a part of my identity inasmuch as it took up space in my brain, and I wasn’t thrilled about that. I didn’t want it to be part of my identity – though I didn’t want to go as far as permanently foreclosing on the possibility of pregnancy – and it irked me that it had to be.

Then it didn’t have to be anymore.

And it wasn’t just about a little copper implant being cool on a pure nerd level. I felt cool because the power dynamic between my self and my body had changed. The relationship between me and this set of organs had become voluntary in a way entirely new to me.

I feel like I might not be explaining this very well.

Here: Over thirty years ago, Donna Haraway presented an image of a new form of self and its creation – not creation, in fact, but construction. Something pieced together with intentionality, the result of choices – something “encoded”. She offered a criticism of the woman-as-Earth-Mother vision that then-contemporary feminists were making use of, and pointed the way forward toward something far stranger and more wonderfully monstrous.

The power of an enmeshing between the organic and the technological lies not only in what it allows one to do but in what it allows one to be – and often there’s no real distinction to be made between the two. We can talk about identity in terms of smartphones, but when we come to things like technologies of reproductive control, I think the conversation often slips into the purely utilitarian – if these things are recognized as technologies at all.

Eveleth notes that “technology is a thing men do”, and I think the dismissal of female bodyhacking goes beyond dismissal of the utilitarian aspects of these technologies. It’s also the dismissal of many of the things that make it possible to construct a cyborg self, to weave a powerful connection to the body that’s about the emotional and psychological just as much as the physical.

I walked out of that doctor’s office with my little copper implant, and the fact that I no longer had to angst about accidental pregnancy was in many respects a minor component of what I was feeling. I was a little less of a goddess, and a little more of a cyborg.

And lingering cramps aside, it felt pretty damn good.


I only heard the term “blockchain technology” for the first time this past autumn, but in the last few months I’ve become pretty absorbed in the blockchain world. Initially I was intimidated by its descriptions, which struck me as needlessly abstruse — though, in a perfect chicken-and-egg scenario, I couldn’t be sure, since the descriptions didn’t offer an easy understanding of how it worked. What compelled me to press on in my blockchain research was the terminology surrounding it. I’m a long-standing advocate for open source, and blockchain’s default descriptors are “distributed” (as in “distributed ledger”), “decentralized” (as in “decentralized platform,” a tagline for at least one major blockchain development platform [1: https://www.ethereum.org/]), and “peer-to-peer” (the crux of all things Bitcoin and blockchain). These words all spoke to my f/oss-loving heart, leading me to click on article after jargon-heavy article in an effort to wrap my head around the ‘chain. As I learned more about it, I came to understand why it’s begun to garner huge amounts of attention. I don’t like to get too starry-eyed about a new technology, but I too became a blockchain believer.
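For readers who, like me at the start, want the gist beneath the jargon: a blockchain is, at bottom, a list of records in which each new entry is cryptographically bound to the entry before it, which makes the shared history tamper-evident. Here is a deliberately minimal Python sketch of that core idea; it illustrates only the hash-linked “ledger” structure, and everything in it (the function names, the toy transactions) is my own invention for illustration. Real systems layer peer-to-peer networking and consensus rules on top.

```python
import hashlib
import json
import time

def make_block(data, prev_hash):
    """Build a block whose hash covers its contents and its predecessor's hash."""
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

# Each block commits to the previous block's hash, so rewriting any old
# entry invalidates every block after it -- the "chain" in blockchain.
chain = [make_block("genesis", prev_hash="0" * 64)]
chain.append(make_block("alice pays bob 5", chain[-1]["hash"]))
chain.append(make_block("bob pays carol 2", chain[-1]["hash"]))

# A reader (or a peer on the network) can verify the links independently.
for prev, block in zip(chain, chain[1:]):
    assert block["prev_hash"] == prev["hash"]
print("chain intact:", len(chain), "blocks")
```

That hash-linking is what the “distributed ledger” language gestures at: every peer can hold a copy of the list and independently verify that nobody has quietly rewritten the past.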

I’m in growing company. Though the technical structure has been around since at least 2008 [2: www.bitcoin.org/bitcoin.pdf], when Bitcoin (which blockchain was originally developed for) was introduced to the public, blockchain-without-Bitcoin was infrequently discussed until the past year. In January 2016, the World Economic Forum listed it as one of the foundational technologies of the Fourth Industrial Revolution [https://www.weforum.org/agenda/archive/fourth-industrial-revolution]; in March, the Chamber of Digital Commerce held the first-ever DC Blockchain Summit, which addressed issues of policy and regulation at the federal level. Since 2015, the number of blockchain conferences and major news stories has been snowballing. Blockchain’s status in the world of tech media has become formidable, and general-interest outlets have wasted no time in spreading the digital gospel. It’s arguably gotten to the point where the blockchain mythos now overshadows its reality. It strikes me as irresponsible to write about blockchain without first giving a few words to its image: the hype has become a fact unto itself, and any accurate reporting must deal with it as such.

In my Theorizing The Web talk “Block Party People: Off The Bitcoin Chain,” I suggest that blockchain offers tech media a unique opportunity to benefit from years of hindsight. Internet technologies in their earliest stages have historically been written about in terms designed to appeal to their developers and early adopters; in other words, to those with the professional power, money, and intellectual access to take part in shaping the future of technology. This is by its very nature a narrow group, particularly where early adoption is concerned: it entails financial and cultural privilege that’s unavailable to most people.

Of course, there are plenty of reasons to target specific audiences when writing about emerging technologies. One is sheer comprehensibility: when a tool is in its earliest stages of development, layman’s terms and easily understandable use cases have yet to materialize. As a general rule, the appropriate metaphors only emerge after a certain amount of time. But those interested in learning about new technologies in non-layman’s terms, the ones who want to pore over dense, jargon-filled texts and abstraction-heavy descriptions, aren’t always professionals, and they’re not necessarily in the financial situation to become early adopters. They also don’t always fit the stereotypical image of an early adopter: sometimes they’re female, sometimes they haven’t gone to college, sometimes they live far away from a major city. Though the mainstream media has fallen in love with its (moneyed, masculine) image, the Silicon Valley techie is a very particular flavor of geek.

As one would imagine for a tool designed specifically for Bitcoin, blockchain is uniquely well-suited to streamlining digital financial transactions. Though it can impact virtually anything that relies on Internet protocol, its applications within finance are much more apparent than for any other business sector (at least for now). In line with this, those most heavily invested in blockchain aren’t exactly the Occupy Wall Street crowd. One major blockchain initiative is The Hyperledger Project [3: https://www.hyperledger.org], an open source, cross-industry effort to develop an open standard for blockchain. The Hyperledger Project is spearheaded by the Linux Foundation and IBM; partners include J.P. Morgan, Wells Fargo, Hitachi and Intel, along with a number of other large companies and V.C.-backed startups. Though it’s not the only blockchain research and development initiative, the Hyperledger Project is emblematic of the general scope of interest in blockchain. It’s fair to say that the financial industry and the corporate world are very well represented in this space.

I don’t want to suggest that this group should divest its interest in blockchain. Far from it: we need that kind of power to develop broad-scale research. However, I do believe it’s critical that groups more representative of the average citizen — the person who’s not in a position of power at a global financial institution, large tech company or well-funded startup — become a part of the blockchain conversation. Those individuals may have different ideas about the technical protocols that become standard for blockchain over the coming decades. We’re a more tech-savvy society than ever before, and opening up the discussion to as many people as possible now, when the technology is still in its infancy, can work to ensure that it helps as many people as possible in the future. A big part of starting that conversation rests on how blockchain is presented to the public.

In simple terms, my work on blockchain has been guided by a desire to include more diverse audiences in the subject. As I was developing my research, though, I began to get cold feet. In the midst of fleshing out my clarion call for blockchain reporting to value inclusivity, I realized that I’d be treading in all sorts of dangerous territory. On the one hand, there’s a lot of antagonism in the Bitcoin community about the use of blockchain without Bitcoin. Suggesting uses of blockchain not only outside of cryptocurrency, but for non-finance-related, socially equitable causes, is a far cry from the freewheeling anarcho-capitalist ends championed by certain Bitcoin enthusiasts. I have no interest in inciting the wrath of the cryptocurrency community, but my perspective on this is undeniably at odds with large parts of it. On the other hand, I’m not a blockchain developer, and despite spending months reading about it, writing about it (including reporting on the DC Summit for a major Bitcoin news source [4: https://www.coindesk.com/dc-blockchain-summit-2016/]) and generally immersing myself in all things blockchain, I still doubted whether I was qualified to offer a real opinion on it.

I’m aware of Impostor Syndrome [5: https://en.wikipedia.org/wiki/Impostor_syndrome], but I still couldn’t help wondering if I’d ventured too far into forbidden territory with my blockchain investigations. I worried that I’d be called out as having a naive understanding of technology and business, and that I’d walk away from the whole project having damaged my credibility as a researcher. In fact, throughout the course of the work, I frequently thought that I should just give the project up. The irony of this isn’t lost on me: as I was trying to offer encouragement to those who may not think of themselves as having power and influence in the tech world, I was losing confidence in myself. I became the very person I was trying to write for.

Of course, I didn’t quit — if I had, you wouldn’t be reading this. Part of my motivation to keep going was realizing that by giving up, I’d be turning my back on the very ideals I was writing about. I’ve made some peace with the reality that I may not fully understand what I’m talking about. The fact of the matter is that such a risk is always there, no matter how far you advance in a research-based career; that’s true even for those writing the code. Fear of appearing naive or ill-suited to offer a perspective on technology is toxic, and I would argue that it’s a small part of why the community isn’t more diverse.

The title of the panel I’m on is “Hack The Planet,” which I thought was odd at first, since my talk doesn’t directly relate to hacking. In a way, though, what’s kept me going throughout my work on blockchain is a set of what I think of as hacker values: curiosity, playfulness, and a tolerance for risk. So it’s in the spirit of hacking that I’m doing this work, and that I encourage others to keep an open mind about it. It’s not always easy, but I think that in the long run, it’s for the best.


Emma Stamm is a writer, web developer and PhD candidate at Virginia Tech. You can find her online at www.o-culus.com and @turing_tests.

Citations

[1] This is Ethereum: https://www.ethereum.org/

[2] This is based on the 2008 publication of Satoshi Nakamoto’s white paper describing Bitcoin, which is generally recognized as the beginning of Bitcoin/blockchain. www.bitcoin.org/bitcoin.pdf

[3] https://www.hyperledger.org

[4] https://www.coindesk.com/dc-blockchain-summit-2016/

[5] https://en.wikipedia.org/wiki/Impostor_syndrome


Stephen Hull, editor of Huffington Post UK, created a bit of a stir a week ago when he admitted that the site does not pay its writers.

That statement alone would have raised eyebrows high enough. What made a lot of eyebrows especially frowny and angry is the way in which he then proudly defended this practice as something admirable, something the site’s unpaid writers should not only accept but be pleased about:

…we don’t pay them, but you know if I was paying someone to write something because I wanted it to get advertising pay, that’s not a real authentic way of presenting copy. So when somebody writes something for us, we know it’s real. We know they want to write it. It’s not been forced or paid for. I think that’s something to be proud of.

Let’s unpack that language. Let’s call particular attention to the words authentic and real. Authentic is a kind of ideal, an unquestioned Goodness; real accompanies it as a matter of course. At the conceptual opposite end is fake, which is unquestionably Bad. So writing that’s been paid for – even worse, that’s been produced with the expectation of payment – is neither authentic nor real. It’s fake, and therefore unreal, undesirable, and bad.

(Apparently paying someone is tantamount to “forcing” them, which I can’t even.)

(Actually, no, I can. The implication there – I think, it’s not entirely clear to me – is that writing produced for pay has somehow been pried out with a crowbar rather than created with a magical flourish of heartfelt inspiration. So again: fake and bad.)

(I’ve written most of my professional fiction with crowbar firmly in hand.)

I don’t think this can be emphasized enough: Stephen Hull is essentially saying that if you accept payment for your writing, your writing is bad and you should feel bad. He would probably disagree that he’s going that far, but he would be wrong.

He would also disagree with the claim that because writers aren’t paid, they aren’t compensated, because writers who have their work published by the Huffington Post get something far more valuable than fake bad money – which is, of course, exposure. Which, as Wil Wheaton says, does not enable you to pay your rent.

There are a lot of things that are fairly horrible about this, and so far I haven’t said anything that other people haven’t already articulated better. Aside from the issues above, there’s the fact that the Huffington Post is profitable to the tune of millions of dollars and can completely afford to pay its writers (just as an aside, the money I generally take for my fake bad short stories starts at the Science Fiction and Fantasy Writers of America’s minimum rate, which is six cents a word – between $200 and $300 per story – and which is paid to me by relatively unprofitable fiction magazines that nevertheless somehow manage to scrape together the resources to do so, maybe by digging between the couch cushions or something), so we’re dealing with a pretty cut-and-dried case of exploitation.

But beyond that, as Chuck Wendig points out, the even more problematic assertion here is that writing should not actually be considered labor at all:

The lie is this: writing is not work, it is not fundamental, it is a freedom in which you would partake anyway, and here some chucklefuck would say, haw haw haw, you blog at your blog and nobody pays you, you post updates on Twitter and nobody pays you, you speak words into the mighty air and you do it for free, free, free. And Huffington Post floats overhead in their bloated dirigible and they yell down at you, WE BROADCAST TO MILLIONS and DON’T YOU WANT TO REACH MILLIONS WITH YOUR MEAGER VOICE and THIS IS AN OPPORTUNITY FOR YOU.

The background for this is an even larger and more pervasive problem, and one that Millennials arguably face to an unprecedented degree: that the most important thing is to “do what you love”, and that anything not done for love is less legitimate (and I would add that in some cases the argument is that if you’re fortunate enough to do that, the love should compensate for low or even absent pay; see also unpaid internships). It’s the same kind of exploitation wrapped up in ostensibly noble ideology, and it’s one that emphasizes the gap between those who are privileged enough to survive just fine on Doing What They Love, and those who have to make a living however they can:

One consequence…is the division that DWYL creates among workers, largely along class lines. Work becomes divided into two opposing classes: that which is lovable (creative, intellectual, socially prestigious) and that which is not (repetitive, unintellectual, undistinguished). Those in the lovable-work camp are vastly more privileged in terms of wealth, social status, education, society’s racial biases, and political clout, while comprising a small minority of the workforce.

For those forced into unlovable work, it’s a different story. Under the DWYL credo, labor that is done out of motives or needs other than love—which is, in fact, most labor—is erased.

The credo of DWYL is a primary part of what allows the Huffington Post to get away with this offensive nonsense. Or at least to believe that it can and ought to.

Again, I’m not really saying anything new here. But what I don’t think I’ve seen addressed specifically enough is the fact that the Huffington Post is assuming and encouraging the assumption that a writer shouldn’t draw distinctions between the various kinds of writing they do. That if sometimes you write for passion alone, all your writing should be for passion alone. If you’re a real authentic writer, all the writing you do is either imbued with this real authenticity – or it isn’t.

This is insidiously, romantically appealing. It’s also utterly ludicrous. My professional fiction writing is not my fanfiction writing is not my essay writing is not my academic writing is not the writing I’m doing right this minute. These are all different realms and they’re different kinds of work, despite obvious similarities. Leave aside for the moment the extremely pertinent question of whether someone other than you is materially profiting from the writing you do for free, and consider that while I count all of those forms of writing as labor in their own way, I personally determine whether I should be materially compensated for that labor. I do this by drawing distinctions not only between those different categories of writing, but regarding what I get out of doing this labor, who my audience is, and what my relationship with them happens to be.

When I write professional fiction, I’m writing for a professional community that simultaneously takes writing seriously as an art form and considers it something worth set amounts of money. When I write fanfiction, I’m writing for a community that operates on the basis of a gift economy, where not only am I happy to not be paid but would in fact prefer that capitalism never get involved at all. When I write blog posts, I’m doing something similar in that I’m engaging in a conversation with a community and I’m doing so on my own time. Those distinctions are meaningful and legitimate and important, but by throwing words like authentic around, the Huffington Post is arguing that they aren’t. The only meaningful distinction is whether or not the writing is real.

Real writing isn’t worthy of compensation. In fact, it’s too good for compensation. It’s not work. It’s passion. It’s art. And something cannot be all three of those things simultaneously.

So while this is bad in and of itself, it’s part of something worse. And the lie isn’t only that passion and payment are mutually exclusive but that all writing is basically the same at the molecular level, and it exists as one option in a binary set. Which needs to be fought, and hard. As John Scalzi wrote back in 2012:

If you try to mumble something at me about writing for free on this site, I might feed you to wild dogs. When I write here, it’s me in my free time. When I write somewhere else, it’s me on the clock. Here’s a handy tip to find out whether I will write for you for free: Are you me?

And what I think every writer should adopt as a motto (emphasis mine):

If the answer is “no,” then fuck you, pay me.


This past year, I sort of disappeared from Twitter. Not completely – I’d poke my head in now and then – but for a number of reasons I stopped checking it at all regularly.

One of the things that ended up keeping me away for longer than I might otherwise have been was how it felt, those times when I poked my head back in. It was intimidating in a way it hadn’t been before. It was like I had been missing a long series of conversations that added up to one enormous conversation, and I no longer had any idea what was going on. Friends and colleagues and friend-colleagues with whom I used to be in nearly constant touch were suddenly discussing things I didn’t know anything about, and the prospect of trying to catch up was overwhelming. I felt like I had nothing to contribute to the conversation I left behind months ago. It was like a party I would wander into, circulate in kind of a distant and awkward fashion, and leave again. Because I had nothing to say.

I like people, but I’m very bad at feeling like I belong anywhere. It’s my default to feel like a fraud in any crowd I’m a part of, and awkwardness has a way of turning into a withdrawal spiral. This began in physical space, but physical space doesn’t have a corner on making me feel that way. Not at all.

I still don’t check Twitter very regularly.

~

One of the things that exacerbates this, in the SFF writer community, is cons. People talk a lot about cons. Who went, what the panels were like, what happened, who said what, what horribly embarrassing things occurred shortly before sunrise after the consumption of large quantities of strong beverages. People tweet during the cons, about the cons. People tweet after the cons, about the cons. People tweet prior to the cons, about the cons.

Please note the extraordinary self-control I’m exercising here by not making a conversation pun. There’s already enough of that in the names of the cons themselves and I don’t think I should add to it.

So great. If you go to the cons, it’s wonderful. But cons are expensive. Some of them are very expensive. Some of them have registration fees well in excess of $100, and that’s often the smallest expense.

Cons are important. Cons are where you meet people. Cons are where you make friends. Cons are where you make connections. Cons are where you get your work known, yourself known. If you want to make a career of this, you really need to go to cons. Or man, it sure does help.

And then when you get back you have something to talk about on Twitter, with the people you now know.

Unless you’re bad at Twitter, and you can’t afford to get to a con.

I’ve heard many people say they can only afford to go to one con a year. They have to pick carefully. This is their social circle. This is their career. If they don’t go, there are consequences.

…Okay, I legitimately didn’t mean to make that one. Sorry.

~

Here’s another wrinkle: at least in SFF writerdom, there is really no meaningful distinction between friends and colleagues. Which, sure, is true of a lot of fields. But these relationships are particularly close, and the professional utility of these friendships can be very high. There are costs to missing out, to not being at the right place at the right time to meet the right person. Missed connections are a real thing. Because here’s another wrinkle: it’s not just about being talented. It’s about being noticed.

Yeah, generally you get noticed when you’re very talented. But not always. Sometimes people just… don’t get noticed. You can write the best book ever but people can’t buy it if they don’t know it’s there. There are thousands of short stories published every year, and many of them are fabulous. Not all of the fabulous ones get seen by the right people at the right time. There are cracks, and people and work together fall through.

My sense is that this isn’t a truth people are very comfortable with, because its implications aren’t comfortable ones. But I do think it’s true.

~

For me personally, this becomes especially poignant around awards season. People are talking about the work that caught their attention, that excited them, that they think is especially worthy of note. People are making lists. People are posting all their eligible stuff and inviting examination.

I don’t know of a single person who will cop to enjoying that, the Here Is My Stuff thing. I hate it. It makes my skin crawl. But you sort of have to. Because there’s so much stuff out there, and it’s easy for your voice to get lost.

It’s easy, if you’ve been away for a while, to come back and feel lost. It’s easy to be silent about your own stuff.

So it’s easy to be forgotten. Or God, it really feels that way.

~

It’s an old and very bitter myth, this idea that being successful in writing is “all about who you know.” And yeah, it’s not all about that. But it is about that. It’s about which conversations you can be part of, with who, regarding what. It’s about who’s keeping an eye on you and what you produce – which attention you earn, but even so. It’s about the room parties and the panels and BarCon, HallwayCon, FloorCon, all the places people congregate and talk shop and talk shit. It’s about making friends and it’s about self-promotion, and again, I think that when you’re a writer the line between those things is practically nonexistent.

There are all kinds of reasons why someone might be bad at social media, having to do with both the body and the mind – because engagement with social media is embodied, and no mental illness or emotional problem exists in isolation from someone’s physical experience of life. Social anxiety isn’t just about the difficulties of walking into an actual room full of actual people. Depression isn’t just about not going outside.

There are all kinds of reasons why someone might not be able to go to cons – money and health being the two big ones, though there are lots of others.

So big surprise: the things that make life and work difficult in terms of everything – the things that make it easier for certain people to be marginalized and unheard and rendered invisible – are at work here too.

This is especially ironic when we’re talking about writing, which is by nature such a solitary thing. The actual business is done alone and internally. The business side of the business is the exact opposite, and I don’t think it comes easily to many of us. For some of us it’s nearly impossible. A lot of us are not exactly rolling in cash. I’m probably not going to Worldcon this year, and World Fantasy Con is a big question mark. But I’m scraping together money and courage and medication, and going to whatever cons I reasonably can, because I’m lucky enough to be in a position where I can go to any, and because I basically have to.

Because I know there are consequences for not doing so.

And I guess I’m also hoping that when I get back, I’ll hop on Twitter and have some things to talk about.

There are two primary things that background this, that are probably necessary to know.

The first is that this past year has been extraordinarily hard for me. The second is that it’s been very difficult to talk openly about.

I’ve always tried to be honest online – about what I’m going through, about what I’m wrestling with, and especially about mental illness, which I think is much less of a forbidden topic of conversation than it used to be but which I also think can still stand to be discussed more than it is, and especially in what we would probably call professional settings.

I’ve done this because I value vulnerability – or I want to. I feel like it’s something to aspire to, in no small part because I absolutely suck at pretending that everything is fine if I have to do so for more than five minutes at a stretch. It’s going to be awkward and uncomfortable no matter what I do, so generally I go with what I regard as the lesser of two evils. When I think I can.

And there’s also the fact that I hope vulnerability might eventually help me.

But it turns out that I’m even worse at everything – pretending and talking openly about my shit – when things genuinely get rough.

So this past year things genuinely got rough, and for the most part, in most places, I clammed up. Because I didn’t know what else to do. I didn’t actually hide the fact that things weren’t going so great, but I didn’t do a whole lot of talking in a public way about the specifics and the uglier parts of what I was feeling regarding everything. I just didn’t want to go into it, in significant part because I was terrified of what people might think.

Things are better now. Sort of. And part of what I’m doing as an effort to make them better is to un-clam, to break myself open from the inside out and be – literally – painfully honest about stuff. At least a lot of stuff. Or to try. It remains incredibly difficult.

Getting going on this post, for example, was much harder than it once would have been.

I made a post a few days ago on my author blog. Just clenched everything and threw it all out there, and left it for people to do whatever they wanted with it. I don’t know that I felt better after doing so, but I certainly didn’t feel worse. A few people on Twitter and Facebook told me that they were really feeling what I was talking about. That was nice. Then I sort of moved on and left it alone.

Then a day or so ago it ended up in a WordPress recommendation Twitter account, and my inbox hasn’t stopped exploding since.

I don’t even know how many people have commented to say, essentially, me too and I really needed to read this. I haven’t tried to keep count. I haven’t honestly looked at the page. I think I’m a little afraid to and I’m not completely sure why. I do know that it’s a lot. I’ve been getting message after message that amounts to what I was talking about in the post itself: people in pain looking for connection. I knew they were out there; part of why I wrote the post in the first place was to state my belief that a fair number of us aren’t doing so hot and don’t know how to talk about it to anyone. But I didn’t expect to hear from so many of them.

It wasn’t until that happened that I realized something strange (though I don’t think it’s surprising): I wrote about looking for connection in vulnerability and the sharing of pain, and I didn’t expect to connect. Not like this.

Which got me mulling over vulnerability itself, and this kind of writing and the context in which we shoot it out into the world.

There isn’t only one kind of vulnerability, is the thing. There’s intimate person-to-person vulnerability, direct communication with a particular someone or someones about what’s going on in your head and heart. By no means does this have to be taking place face-to-face; what really matters – as far as I’m concerned and as far as my own experience goes – is that you’re speaking to someone, and that person is actively listening to you, and both of you know it.

In other words, you’re having something at least vaguely resembling a conversation.

Then there’s the kind of vulnerability I engaged in when I wrote the post. Which was directionless, openly broadcast vulnerability. There were specific people I had in mind when I wrote it, sure. Some of them talked to me about it. But I wasn’t writing to them. I was writing to everyone and everything, writing to an undifferentiated public, some of whom were people I knew but the vast majority of whom were not. I wrote it, left it there, walked away, and on some level I think I never expected anything else to happen.

What happened is that the latter form of vulnerability began to slide into the former, and I didn’t know it was coming. And it was jarring. It was a little disturbing.

It’s a lot overwhelming. I’m still working up to responding to most of it.

I don’t think I’m saying anything that isn’t pretty self-evident. I don’t think there’s anything piercingly insightful here, or new or surprising. Yet I was surprised. It didn’t occur to me that one of these things could become the other; it didn’t even occur to me that there was a difference. I wasn’t thinking about it at all.

I believe it’s worth thinking about. Because I was writing about loneliness and connection, making myself available for it, and people reached out. Strangers, but also not. Because none of us are okay.

And that’s a deep thing to be united by.

Here’s the point (maybe, assuming I have one): The conversation about disconnection and loneliness regarding digital technology is old and tired and boring and I don’t think any of us want to have it ever again. But disconnection and loneliness can be more piercingly, viscerally felt in these digital spaces, and they can be confronted in an immediate, nuanced, and difficult way that I don’t think any other arena allows for. The ways in which we’re lonely and why. What exactly we’re afraid of. What hurts. How we want to get well. How we want to reach out and hope that there might be someone reaching back. And how we might not expect that when it happens, because private and public are after all not binary categories and connection means a hopeless number of different things.

Like I said, no piercing insights, and I’m not coming away from this with any answers of any kind. What I’m coming away with is the knowledge that a complicated thing is even more complicated than I thought, and there’s a lot more to be afraid of.

But I think there’s also a lot more to reach out for.


“Public sociology”, for me, has always meant teaching. I obviously don’t mean that teaching is the only legitimate kind at all times and in all places, but to the extent that I’m still a sociologist, and a public one, teaching is how I do that. It’s what I feel comfortable with. It’s what I know I can do well, and it gives me real and observable and frequently immediate results, when I get results at all. I convey all this information about an entire discipline, an entire approach to the business of everything in a single semester, I make it as coherent as I can to a bunch of – usually – total beginners, and I hope for the best.

And every semester there’s at least one student who comes up to me and says this is so weird and so cool, I never looked at anything like this before, I didn’t know you could, this is my favorite class now.

(That has already happened in both of the sections of Intro I’m teaching this semester.)

But more and more frequently, students are coming up to me – or, alternatively, talking to me about it via email and in writing assignments – and saying that what they love about the class is how it’s either giving them a vocabulary they can use to articulate stuff they already knew, or augmenting a vocabulary they already possessed.

More and more of my students are coming into these courses already knowing a lot of the concepts I’m teaching them. I used to get some balking when we got to privilege, no small degree of resistance when we started discussions of race and racism. But now I introduce privilege and I see nods. Okay. Yeah. They know this already. It’s not so scary for most of them. They get it. It’s a feature of how they perceive reality. Privilege. Absolutely.

This was especially noticeable to me, the first couple weeks of this semester, because it’s been a year since I taught anything.

I did a bit of mulling before I arrived at what is frankly a pretty obvious conclusion: a significant portion of the work I was there to do was already being done elsewhere.

And going by the number of hands that always go up when I ask how many of them have never taken a sociology course, it’s being done somewhere that isn’t a classroom.

~

I stopped teaching a year ago because my graduate assistantship concluded – and I saw this as an opportunity to find another job for a while, which didn’t happen, so here I am again – but also because I was feeling increasingly disillusioned by what I perceived as the place of teaching in an R1 state school sociology department like the one to which I’m attached. I should be clear: it’s not that we have bad instructors. We have some amazing instructors. We do very good work. We do have people who value teaching as much as it deserves – in my opinion – to be valued.

But I couldn’t escape the feeling that a lot of the rest of it was lip service. That undergraduate teaching was, for many people, an afterthought. It was something you slung at the graduate students because it was a distraction from what the faculty was really there to do.

Look: I fully believe that intro courses are some of the most important courses – if not the most important courses – any sociology department ever provides. I think they’re everything. I think they’re our one big chance to engage fresh generations of students, many of whom are extremely bright, regardless of whether or not we get sociology majors out of them. I honestly don’t want everyone to walk out of my class as a sociology major. I would prefer that the vast majority don’t. I want people to walk out of my class and go off to work in government, in medicine, in law, in business, in advertising. I want this way of thinking to go everywhere that’s not a sociology department. I don’t think this work can be overvalued. I don’t think it’s possible to do that.

Fight me.

I couldn’t escape the feeling that I was… Well, clearly I wasn’t alone. I know I’m not.

But I felt alone.

So it felt, at that point, like maybe it was time to say goodbye.

~

Here’s what I did on my last class day of the 2014 spring semester: I sat down on a table in front of the class and I just talked. I talked for nearly an hour, with no notes and almost no plan, and I talked without pausing. I told them everything: I told them about the state of a lot of academia, about the probable state of a lot of the departments through which they were moving, about the damage that encroaching capitalism was causing, about the corrosive power of money. I told them about how I felt, about how I had seen some of my brightest fellow students beaten down by what a lot of this whole thing has become, about how this system fails people. About how it fails them. About how a great deal of higher education in this country is increasingly a form of fraud, about the place of students in a university like the one in which I teach. About how I felt, about what I saw when I looked back at the last five years of my graduate career, about how sad I was and how angry and how scared. For myself, and for them.

Clearly my opinion is biased, clearly it shouldn’t be taken at face value, but I swear, looking at them, no one in my position had ever talked to them like that before. A bunch of them stayed after and talked to me for another hour, essentially an extended conversational version of everything I had been saying.

What I closed that speech with – and what I said over and over in the conversation that followed – was what Nathan Jurgenson said to me in a conversation a couple of years back, about this very thing.

I told them that the work that a lot of academia used to do – the work it’s very good at but is being prevented from doing by its own damn self – will still be done. It’ll just increasingly be done elsewhere. It will find a way. It’s the kind of work that can’t really be stopped, even when the framework built to support it collapses.

I told them they could make the spaces in which that work would be done, if they wanted. I told them they didn’t need systems that didn’t work for them. Or they could, with a lot of time and a lot of effort, find ways to separate themselves. I told them they made me hopeful. It was misty-eyed, yeah. It was profoundly sentimental. That’s who I am and I make no apologies for it.

So then I said goodbye.

~

And here I am again, after a year away. Wasn’t sure what it would feel like. Had deeply mixed feelings, as a matter of fact. Yes, I love teaching – and I genuinely believe I’m very good at it – but I said goodbye and I was ready for that goodbye to last a long time. It’s been jarring to return this soon. I’ve honestly been feeling a little resentment at being forced back by practical considerations.

Then I started the semester… And little by little it began to dawn on me that maybe I was right when I told them what I told them. Maybe Nathan was right.

No; no maybe. We were right. Not all of these kids are coming in already knowing a lot of what I want them to know. But many are. They pay attention – which is at the core of being alive in the world. There’s a lot they don’t know, and I have a lot to teach them. But there’s so much they do know, and I look at them and I see that the work – in terms of how they think, how they approach what they see around them – is being done. It’s being done, and it’s not necessarily being done in universities, and it’s not necessarily being done by formally trained people with degrees.

People disparage Tumblr and its ability to teach people the theory that underpins social justice, and there’s all that ridiculous ew SJW bullshit flying around. And I actually don’t disagree with some of it. But Tumblr is where a lot of that work is being done.

Again, if you disagree, I invite you to engage with me in combat.

Not just Tumblr, either. I think everywhere. I think these conversations – about what a just society looks like, about lives that matter and forces that attack those lives, about why inequality persists and what can be done, about the deepest elements of institutions and culture that produce and reproduce oppression, about looking at individuals and seeing the larger contexts within which they exist – they’re happening. They’re happening all over. They’re finding ways to happen. People want them to happen.

I still think intro courses matter. I think they might matter more than any other course we teach.

But I’d like to think that maybe, one year later, in some very specific ways… they matter a little less.


Okay, so. Apple’s iOS 8 Health app is an issue, at least potentially.

To recap, it’s an issue in significant part – and for the purposes of this – in terms of its effect on people who experience disordered eating and/or obsessive-compulsive behaviors and thoughts. Health trackers in general have the potential to feed these behaviors, and in fact to be quite harmful. This is primarily because health trackers are highly quantitative in nature and extremely oriented toward the monitoring of details, and obsessive-compulsive tracking is one of the primary symptoms of an eating disorder – and the Health app is a focal point for this kind of monitoring. The app allows for the direct entry of data, though its primary purpose is to better curate data from other health apps; either way, it exists. In fact, it not only exists, but it can’t be removed. It can be hidden, but you – the user – still know it’s there. It will be difficult to ignore even if it can’t be seen. It gnaws. Trust me, things like that do.

It’s additionally an issue because these kinds of thoughts and behaviors aren’t something that people can just choose to stop doing. That’s why it’s a disorder, and it’s one of the most distressing things about this kind of disorder: if you’re presented with a relatively easy way to manifest symptoms, often you will even if you desperately don’t want to:

One of the nastiest things about OCD symptoms – and one of the most difficult to understand for people who haven’t experienced them – is the fact that a brain with this kind of chemical imbalance can and will make you do things you don’t want to do. That’s what “compulsive” means. Things you know you shouldn’t do, that will hurt you. When it’s at its worst it’s almost impossible to fight, and it’s painful and frightening.

Even if you don’t do anything, you’re still thinking about it. Over and over, obsessively. Thoughts are harmful, often physically. Thoughts themselves can trigger a relapse in someone in recovery from these kinds of disorders.

So now there’s the Apple watch, and some things have been added that are even more problematic.

Specifically, there are some shiny new apps. There’s a workout app, which naturally allows one to input goals and plans for physical exercise and track one’s progress, but the real kicker here is the activity app, which tracks almost every important aspect of the user’s regular physical activity through the day: the number of calories burned, the amount of time spent in motion, and how long one has been stationary, with a reminder to move after a certain amount of time.

So what? So: the app is active all day. Or it possesses that capability. If we conceive of this kind of tracking as invasive for people with particular disorders – and remember that with these kinds of disorders something can be invasive simply by being there – then tracking that functions all day, tracking everything one does, is invasive to the nth degree.
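To make concrete what “active all day” means, here is a hypothetical sketch of the kind of loop such a tracker runs. To be clear, this is not Apple’s code or API; the class, its names, and the fifty-minute threshold are all invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class ActivityTracker:
    """Hypothetical always-on activity monitor (illustration only)."""
    calories_burned: float = 0.0
    minutes_in_motion: int = 0
    minutes_stationary: int = 0
    move_reminder_after: int = 50  # minutes; arbitrary invented threshold

    def record_minute(self, in_motion: bool, calories: float) -> None:
        """Called once a minute, every minute, all day long."""
        self.calories_burned += calories
        if in_motion:
            self.minutes_in_motion += 1
            self.minutes_stationary = 0
        else:
            self.minutes_stationary += 1
            if self.minutes_stationary >= self.move_reminder_after:
                self.notify("Time to move!")  # the tap on the wrist
                self.minutes_stationary = 0

    def notify(self, message: str) -> None:
        print(message)  # stand-in for a wearable notification

# Simulate a sedentary hour: the reminder fires whether or not
# the wearer wants to be measured at all.
tracker = ActivityTracker()
for _ in range(60):
    tracker.record_minute(in_motion=False, calories=1.2)
```

The point of the sketch is the cadence: the loop has no natural stopping point, and that relentlessness is precisely the property that can feel inescapable to someone with a compulsion to track.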

It’s important to note here that it’s not only thoughts that matter in this context, for someone using this specific device. Concepts also matter. Even if someone else might not find the very prospect of this kind of app upsetting, someone who deals with the world this way might very well be upset by it. Upset is probably not a strong enough word. People are often skeptical about “triggers”, about people who are triggered by things – especially things generally seen as innocuous – but they’re real and they’re legitimate.

This is a health tracking app that’s locked into a device and is capable of constantly measuring just about everything meaningful you do, in terms of Apple’s standard of “health”. Yeah, that’s a problem.

Okay, so just don’t buy an Apple Watch. But the problem there is that – as with the original Health app – Apple wasn’t thinking about this. The idea that someone might want to buy an Apple Watch but feel unable to do so because of disordered thoughts and behaviors simply wasn’t on the designers’ radar. They made these apps for a generalized default person with a generalized default attitude toward a generalized default idea of what “health” means. At the very least, design that essentially erases an entire category of potential users is probably worth some consideration. As Selena Larson points out in the Kernel article linked above:

Fitness apps and health trackers aren’t inherently bad or good. They’re tools that can be used in different ways and come with their own built-in blind spots and biases. Apple’s decision to force Health onto iOS 8 devices could endanger those who have a compulsion to track themselves already. But for the great majority of people, monitoring their health should pose no harm.

Apple sees the world a certain way and designs its products accordingly. No, I’m not faulting them for that. But I do want to call attention to it, because – like all ableist design – it stands out as part of a much larger set of cultural and social marginalizing processes. Apple’s focus is on the “majority”. We need to ask whether that’s all we should expect, or whether better and more accessible design might be possible, and what greater effects that better design might have.

But there’s another interesting question here, and it’s the degree to which an Apple Watch truly differs from an iPhone, in terms of how one might use it. Tom Greene argues:

For the Apple Watch to take off, it will need to carve out a distinct value proposition that a smartphone alone cannot deliver. After all, we all pretty much “wear” our smartphones everywhere we go. The combination of Apple’s iPhone 6 technology, coupled with my Withings products seem to make the health-related aspects of Apple Watch unnecessary.

What’s the difference, in practical design terms, between a watch on your wrist and something you carry around? This raises even bigger questions about physical relationships with physical devices, in terms of proximity and how – in an embodied sense – we experience what are essentially cases for apps.

What’s the difference between a watch case and a smartphone case? Does the packaging itself matter? How?

I think these are questions for another post. But I wanted to leave them here, because I don’t think we can consider the design of an app without – at least to some degree – considering the physical thing the app rides around in.

Sarah is on Twitter – @dynamicsymmetry

Photo by Aaron Thompson

For last year’s iteration of Theorizing the Web, we took a new step in our development as a conference and produced an anti-harassment statement.

We felt it was important, for a number of reasons. It’s not a matter of feeling; it is important. It seems like – fortunately – this is an issue to which more and more conferences and conventions are paying attention. There’s more in the way of an ongoing discussion than there once was. There’s a growing recognition that these kinds of stands need to be taken and these kinds of explicit guidelines need to be established in order to make spaces safe for all attendees – or at least to try to make those spaces as safe as possible.

Because here’s the thing: these kinds of policies/statements are always going to be works in progress. They’re never going to be finished. They’re going to be subject to the forces and pressures of real-world application, and as such they’re going to be tossed into situations for which they were written but for which they frequently weren’t specifically designed. There are always things you don’t anticipate. Especially if you’re coming from a place of privilege, which – among other things – stunts the growth of the imagination. It just does. It hurts one’s ability to prepare.

So last year we had an anti-harassment statement. We put it online before the conference and tried to call people’s attention to it. We solicited and gratefully listened to a number of extraordinarily helpful comments and criticisms. We needed that help. We couldn’t do that alone.

We still need that help, because we still can’t do this alone.

Last year the statement was tested. I’m not going to go into the details now, but I invite you to read that post so you have some sense of what the process was like. This year it might very well be tested again. So we want to make sure it’s as clear and strong a statement as it can possibly be.

Here it is:

anti-harassment statement

In light of recent public conversations spurred by incidents at other conferences, and in the spirit of being both proactive and inclusive, it is important that we communicate the Organizing Committee’s commitment to providing a harassment-free space for participants of all races, gender and trans statuses, sexual orientations, physical abilities, physical appearances, body sizes, and beliefs. Harassment includes, but is not limited to: deliberate intimidation; stalking; unwanted photography or recording; sustained disruption of talks or other events; inappropriate physical contact; and unwelcome sexual attention. We ask you, as participants, to be mindful of how you interact with others—and to remember that harassment isn’t about what you intend, but about how your words or actions are received.

In keeping with a central theme of Theorizing the Web, we also want to remind you that what is said online is just as “real” as what is said verbally.

By attending Theorizing the Web, you agree to maintain and support our conference as a harassment-free space. If you feel that someone has harassed you or otherwise treated you inappropriately, or you feel you have witnessed inappropriate or harassing behavior, please alert any member of the Organizing Committee (identifiable by our badges; you can also find photos of all Committee members on our “Participants” page). If an attendee engages in harassing behavior, the conference organizers may take any lawful action we deem appropriate, including but not limited to warning the offender or asking the offender to leave the conference. We welcome your feedback, and we thank you for working with us to make this a safe, enjoyable, and welcoming experience for everyone who participates.

If you have any comments, concerns, or suggestions for ways in which we can make this statement better, please get in touch with us here or in the comments below.

We’re going to do all we can to make this year’s Theorizing the Web a safe space for everyone. Thanks so much.

aaaaah so fun tho

One of the more frankly disturbing things I’ve read about video games recently wasn’t about sexism/misogyny but was instead about the NPCs (non-player characters) inserted into a game for a player to murder.

The piece in question was on the game Battlefield Hardline, and it contained quotes from the game’s makers about the thought that went into the presence, the creation, and – in particular – the dialogue of enemy NPCs in the game. As games have become more complex and voice acting has become a bigger focus during development, the question naturally arises of what these people are actually going to say. This leads to additional questions: Is the dialogue going to be more informative than anything else? Will there be any actual characterization of these people who are, after all, there largely to be killed by the player and whose lives will therefore be cut (tragically) short? Are these mustache-twirling villains, or are they just people?

And what do those decisions end up meaning for player experience?

This last question is what makes the whole set so complicated, because of what it suggests about how the player feels about the NPCs they kill, and about the emotional weight of that killing. Think about it for a second: players in these kinds of games are frequently – essentially – mass murderers who proceed through the game by slaughtering hundreds upon hundreds, if not thousands upon thousands, of NPCs. Not all of these killings are, strictly speaking, necessary. When a game has a significant stealth component – allowing a player to sneak by an NPC or merely render them unconscious – killing is often not needed at all in order to proceed through the game.

But games often make killing fun.

When I played Dishonored, I played it through to the end more than once, not just because more than one ending was possible but because different approaches were possible and each was its own kind of fun. There was a lot of strategy and skill and awareness of environment and careful planning involved in the stealth approach – do I shoot that guard with a tranquilizer dart or chokehold him into unconsciousness? What route through this building allows me to avoid the maximum number of NPCs? How can I most effectively hide myself? How can I make use of the timing of NPC movements to my best advantage?

And then when I played it via the combat/killing-heavy approach, I got to knife dudes in the neck and summon swarms of rats to eat them alive.

That was rad.

I was killing NPCs – people, frankly, even if not fully fledged or fully realized – in absolutely horrific ways, and it was so damn fun. It was fun because it was designed to be, and I didn’t think about it or feel a single shred of remorse, because the game didn’t encourage me to.

Part of why this is worth thinking about – beyond primitive, hand-wringing “won’t someone think of the children” ethical concerns – is what killing actually is in video games. Critic Erik Kain noted that “killing people in video games is actually just solving moving puzzles”. It’s something you need to do in order to progress, which is how you play what a lot of people are still likely to think of as a “video game” (leaving aside all the games which aren’t about that at all, such as Flower, The Stanley Parable, Amnesia: A Machine for Pigs, Gone Home, and Dear Esther, to name a few of my personal favorites). As such, a lot of the time it doesn’t even really feel like killing. When I play Call of Duty it doesn’t feel like violence to me in any real, visceral sense (I think a lot of this may also be because the Call of Duty series is excellently put together and really not very good).

But killing is also frequently intended and designed to be fun. It’s about creative, innovative ways of destroying humanoid bodies. I’m not a hand-wringer – I really enjoy killing people with rats, for Christ’s sake – but I don’t think that can be ignored.

Underpinning this is the commonly held idea that games aren’t fun if they make a player pause and stop ignoring all of this, if they make a player consider the emotional and ethical weight of what they’re actually doing. Because if you did that, wouldn’t you feel bad? Wouldn’t you stop enjoying the game as much?

Rob Auten, a writer for Battlefield Hardline, was pretty blunt about what happens when an NPC is made too fully human:

Part of the cops and robbers fantasy is moving among the bad guys and being in the same room. So you have an opportunity to hear more from them. In some cases we made them too charming and people felt bad about shooting them or wanted to hang out with them instead of fighting them and that is no good.

(Personal aside: As a self-identified “gamer”, I think this is a gross idea far too commonly held. Sometimes I do just want to kill people with rats, but God forbid you emotionally engage with your thing.)

One of the games which has taken this idea directly to task – and one of my favorite games of all time and a game which I’ve written about a lot – is Spec Ops: The Line, which not only makes the people you’re killing other American soldiers – albeit soldiers who, as far as you know, have gone rogue – but allows you to listen in on conversations that approach the heartbreaking… and then gives you no choice about whether or not you will kill these people.

At one point in the game I crouched in cover and listened to two of these guys talk about how peaceful things were at that moment, and how, though things got ugly a lot of the time, that peace reminded them of what they were really fighting for.

Then I shot them in the head.

I had to. There was no stealth option there, and I needed to kill them to proceed to the next point in the game.

Reviewer Ben “Yahtzee” Croshaw observed in a column:

[Call of Duty:] Modern Warfare got into the habit of making a shocking moment that illustrated the ruthlessness of the enemy and the resources at their disposal. It’s supposed to make you hate and fear them…The Spec Ops shocking moment [dropping white phosphorus on civilians], contrarily, is designed to make you hate yourself, and fear the things that you are capable of.

That is not “fun.”

But I also think it’s really good. And I enjoyed it, in terms of the intensity of what it made me feel.

The thing is that, at least with most AAA mainstream games, if the primary concern is this particular kind of “fun”, we’re going to continue to see exactly the convention that The Line was trying to subvert.

So I think we need to rethink the idea of “fun”.

If “fun” simply means enjoyment, there’s a lot of stuff in other media that we enjoy but that doesn’t fall in line with this idea of “fun”. A lot of the stuff I like is not “fun”. I really enjoy Lars von Trier films. Those are not “fun”. I really enjoy books wherein everything terrible happens and my heart gets ripped out and eaten in front of me. Not “fun”. The Wire is not “fun”, at least not most of the time. The National is not “fun” music. Most of the fiction I write is not “fun”.

Okay, that’s me, I’m weird. But those things wouldn’t exist if there weren’t a lot of other people like me.

I want to suggest that this is a lot of why video games are still generally seen as juvenile by a lot of people: the attitudes toward emotion that underpin most of the big ones haven’t outgrown this narrow idea of “fun” and begun to experiment with what fun can actually mean in terms of what we enjoy consuming.

The other thing is that big-budget AAA games, while still often getting the most attention, aren’t the full picture, and a lot of stuff outside that bubble is doing exactly that kind of experimenting. The games I mentioned above, which I really like? Flower is fun, and it also makes me cry every time I play it. Amnesia is a giant exercise in NOPE, and tremendously fun. The Stanley Parable is fun and ridiculously funny, but it’s also a bit of a mind-fuck and gently emotionally abusive at times. Gone Home is softly beautiful and sad. Dear Esther is one of those things that does the whole heart-ripping-and-eating business.

So this stuff is out there. More and more of it all the time. But that idea of “fun” persists, and I would like it to please stop being so unquestioningly accepted.

I still really like killing people with rats, though.

Sarah promises to not kill you with rats on Twitter – @dynamicsymmetry