“Public sociology”, for me, has always meant teaching. I obviously don’t mean that teaching is the only legitimate kind of public sociology at all times and in all places, but to the extent that I’m still a sociologist, and a public one, teaching is how I do it. It’s what I feel comfortable with. It’s what I know I can do well, and it gives me real and observable and frequently immediate results, when I get results at all. I convey all this information about an entire discipline, an entire approach to the business of everything, in a single semester; I make it as coherent as I can to a bunch of – usually – total beginners, and I hope for the best.

And every semester there’s at least one student who comes up to me and says this is so weird and so cool, I never looked at anything like this before, I didn’t know you could, this is my favorite class now.

(That has already happened in both of the sections of Intro I’m teaching this semester.)

But more and more frequently, students are coming up to me – or, alternately, talking to me about it via email and writing assignments – and saying that what they love about the class is how it’s either giving them a vocabulary they can use to articulate stuff they already knew, or augmenting a vocabulary they already possessed.

More and more of my students are coming into these courses already knowing a lot of the concepts I’m teaching them. I used to get some balking when we got to privilege, no small degree of resistance when we started discussions of race and racism. But now I introduce privilege and I see nods. Okay. Yeah. They know this already. It’s not so scary for most of them. They get it. It’s a feature of how they perceive reality. Privilege. Absolutely.

This was especially noticeable to me, the first couple weeks of this semester, because it’s been a year since I taught anything.

I did a bit of mulling before I arrived at what is frankly a pretty obvious conclusion: a significant portion of the work I was there to do was already being done elsewhere.

And going by the number of hands that always go up when I ask how many of them have never taken a sociology course, it’s being done somewhere that isn’t a classroom.

~

I stopped teaching a year ago because my graduate assistantship concluded – and I saw this as an opportunity to find another job for a while, which didn’t happen, so here I am again – but also because I was feeling increasingly disillusioned by what I perceived as the place of teaching in an R1 state school sociology department like the one to which I’m attached. I should be clear: it’s not that we have bad instructors. We have some amazing instructors. We do very good work. We do have people who value teaching as much as it deserves – in my opinion – to be valued.

But I couldn’t escape the feeling that a lot of the rest of it was lip service. That undergraduate teaching was, for many people, an afterthought. It was something you slung at the graduate students because it was a distraction from what the faculty was really there to do.

Look: I fully believe that intro courses are some of – if not the – most important courses any sociology department ever provides. I think they’re everything. I think they’re our one big chance to engage fresh generations of students, many of whom are extremely bright, regardless of whether or not we get sociology majors out of them. I honestly don’t want everyone to walk out of my class as a sociology major. I would prefer that the vast majority don’t. I want people to walk out of my class and go off to work in government, in medicine, in law, in business, in advertising. I want this way of thinking to go everywhere that’s not a sociology department. I don’t think this work can be overvalued. I don’t think it’s possible to do that.

Fight me.

I couldn’t escape the feeling that I was… Well, clearly I wasn’t alone. I know I’m not.

But I felt alone.

So it felt, at that point, like maybe it was time to say goodbye.

~

Here’s what I did on my last class day of the 2014 spring semester: I sat down on a table in front of the class and I just talked. I talked for nearly an hour, with no notes and almost no plan, and I talked without pausing. I told them everything: I told them about the state of a lot of academia, about the probable state of a lot of the departments through which they were moving, about the damage that encroaching capitalism was causing, about the corrosive power of money. I told them about how I felt, about how I had seen some of my brightest fellow students beaten down by what a lot of this whole thing has become, about how this system fails people. About how it fails them. About how a great deal of higher education in this country is increasingly a form of fraud, about the place of students in a university like the one in which I teach. About how I felt, about what I saw when I looked back at the last five years of my graduate career, about how sad I was and how angry and how scared. For myself, and for them.

Clearly my opinion is biased, clearly it shouldn’t be taken at face value, but I swear, looking at them, no one in my position had ever talked to them like that before. A bunch of them stayed after and talked to me for another hour, essentially an extended conversational version of everything I had been saying.

What I closed that speech with – and what I said over and over in the conversation that followed – was what Nathan Jurgenson said to me in a conversation a couple of years back, about this very thing.

I told them that the work that a lot of academia used to do – the work it’s very good at but is being prevented from doing by its own damn self – will still be done. It’ll just increasingly be done elsewhere. It will find a way. It’s the kind of work that can’t really be stopped, even when the framework built to support it collapses.

I told them they could make the spaces in which that work would be done, if they wanted. I told them they didn’t need systems that didn’t work for them. Or they could, with a lot of time and a lot of effort, find ways to separate themselves. I told them they made me hopeful. It was misty-eyed, yeah. It was profoundly sentimental. That’s who I am and I make no apologies for it.

So then I said goodbye.

~

And here I am again, after a year away. Wasn’t sure what it would feel like. Had deeply mixed feelings, as a matter of fact. Yes, I love teaching – and I genuinely believe I’m very good at it – but I said goodbye and I was ready for that goodbye to last a long time. It’s been jarring to return this soon. I’ve honestly been feeling a little resentment at being forced back by practical considerations.

Then I started the semester… And little by little it began to dawn on me that maybe I was right when I told them what I told them. Maybe Nathan was right.

No; no maybe. We were right. Not all of these kids are coming in already knowing a lot of what I want them to know. But many are. They pay attention – which is at the core of being alive in the world. There’s a lot they don’t know, and I have a lot to teach them. But there’s so much they do know, and I look at them and I see that the work – in terms of how they think, how they approach what they see around them – is being done. It’s being done, and it’s not necessarily being done in universities, and it’s not necessarily being done by formally trained people with degrees.

People disparage Tumblr and its ability to teach people the theory that underpins social justice, and there’s all that ridiculous ew SJW bullshit flying around. And I actually don’t disagree with some of it. But Tumblr is where a lot of that work is being done.

Again, if you disagree, I invite you to engage with me in combat.

Not just Tumblr, either. I think everywhere. I think these conversations – about what a just society looks like, about lives that matter and forces that attack those lives, about why inequality persists and what can be done, about the deepest elements of institutions and culture that produce and reproduce oppression, about looking at individuals and seeing the larger contexts within which they exist – they’re happening. They’re happening all over. They’re finding ways to happen. People want them to happen.

I still think intro courses matter. I think they might matter more than any other course we teach.

But I’d like to think that maybe, one year later, in some very specific ways… they matter a little less.

Okay, so. Apple’s iOS 8 Health app is an issue, at least potentially.

To recap: it’s an issue in significant part – and for the purposes of this post – because of its effect on people who experience disordered eating and/or obsessive-compulsive behaviors and thoughts. Health trackers in general have the potential to do real harm here, primarily because they’re highly quantitative in nature and extremely oriented toward the monitoring of details – and obsessive-compulsive tracking is one of the primary symptoms of an eating disorder. The Health app is a focal point for this kind of monitoring: though it allows for the entry of data, its primary purpose is to allow better curation of data from other health apps. And it not only exists, but it can’t be removed. It can be hidden, but you – the user – still know it’s there. It will be difficult to ignore even if it can’t be seen. It gnaws. Trust me, things like that do.

It’s additionally an issue because these kinds of thoughts and behaviors aren’t something that people can just choose to stop doing. That’s why it’s a disorder, and it’s one of the most distressing things about this kind of disorder: if you’re presented with a relatively easy way to manifest symptoms, often you will even if you desperately don’t want to:

One of the nastiest things about OCD symptoms – and one of the most difficult to understand for people who haven’t experienced them – is the fact that a brain with this kind of chemical imbalance can and will make you do things you don’t want to do. That’s what “compulsive” means. Things you know you shouldn’t do, that will hurt you. When it’s at its worst it’s almost impossible to fight, and it’s painful and frightening.

Even if you don’t do anything, you’re still thinking about it. Over and over, obsessively. Thoughts are harmful, often physically. Thoughts themselves can trigger a relapse in someone in recovery from these kinds of disorders.

So now there’s the Apple Watch, and some things have been added that are even more problematic.

Specifically, there are some shiny new apps. There’s a workout app, which naturally allows one to input goals and plans for physical exercise and track one’s progress, but the real kicker here is the activity app, which tracks almost every important aspect of the user’s regular physical activity through the day – the number of calories burned, the amount of time spent in motion – and delivers a reminder to move when one has been stationary for a certain amount of time.

So what? So: the app is active all day. Or it possesses that capability. If we conceive of this kind of tracking as invasive for people with particular disorders – and remember that with these kinds of disorders something can be invasive simply by being there – then tracking that functions all day, tracking everything one does, is invasive to the nth degree.

It’s important to note here that it’s not only thoughts that matter in this context, for someone using this specific device. Concepts also matter. Even if someone else might not find the very prospect of this kind of app upsetting, someone who deals with the world this way might very well be upset by it. Upset is probably not a strong enough word. People are often skeptical about “triggers”, about people who are triggered by things – especially things generally seen as innocuous – but they’re real and they’re legitimate.

This is a health tracking app that’s locked into a device and is capable of constantly measuring just about everything meaningful you do, in terms of Apple’s standard of “health”. Yeah, that’s a problem.

Okay, just don’t buy an Apple Watch. But the problem there is that – as with the original Health app – Apple wasn’t thinking about this. The idea that someone might want to buy an Apple Watch but feel unable to do so because of disordered thoughts and behaviors simply wasn’t on the designers’ radar. They made these apps for a generalized default person with a generalized default attitude toward a generalized default idea of what “health” means. At the very least, design that essentially erases an entire category of potential users is probably worth some consideration. As Selena Larson points out in the Kernel article linked above:

Fitness apps and health trackers aren’t inherently bad or good. They’re tools that can be used in different ways and come with their own built-in blind spots and biases. Apple’s decision to force Health onto iOS 8 devices could endanger those who have a compulsion to track themselves already. But for the great majority of people, monitoring their health should pose no harm.

Apple sees the world a certain way and designs their products accordingly. No, I’m not faulting them for that. But I do want to call attention to it, because – like all ableist design – it stands out as part of a much larger set of cultural and social marginalizing processes. Apple’s focus is on the “majority”. We need to ask whether that’s all we should expect, or whether better and more accessible design might be possible, and what greater effects that better design might have.

But there’s another interesting question here, and it’s the degree to which an Apple Watch truly differs from an iPhone, in terms of how one might use it. Tom Greene argues:

For the Apple Watch to take off, it will need to carve out a distinct value proposition that a smartphone alone cannot deliver. After all, we all pretty much “wear” our smartphones everywhere we go. The combination of Apple’s iPhone 6 technology, coupled with my Withings products seem to make the health-related aspects of Apple Watch unnecessary.

What’s the difference, in practical design terms, between a watch on your wrist and something you carry around? This raises even bigger questions about physical relationships with physical devices, in terms of proximity and how – in an embodied sense – we experience what are essentially cases for apps.

What’s the difference between a watch case and a smartphone case? Does the packaging itself matter? How?

I think these are questions for another post. But I wanted to leave them here, because I don’t think we can consider the design of an app without – at least to some degree – considering the physical thing the app rides around in.

Sarah is on Twitter – @dynamicsymmetry

photo by Aaron Thompson

For last year’s iteration of Theorizing the Web, we took a new step in our development as a conference and produced an anti-harassment statement.

We felt it was important, for a number of reasons. It’s not a matter of feeling; it is important. It seems like – fortunately – this is an issue to which more and more conferences and conventions are paying attention. There’s more in the way of an ongoing discussion than there once was. There’s a growing recognition that these kinds of stands need to be taken and these kinds of explicit guidelines need to be established in order to make spaces safe for all attendees – or at least to try to make those spaces as safe as possible.

Because here’s the thing: these kinds of policies/statements are always going to be works in progress. They’re never going to be finished. They’re going to be subject to the forces and pressures of real-world application, and as such they’re going to be tossed into situations for which they were written but for which they frequently weren’t specifically designed. There are always things you don’t anticipate. Especially if you’re coming from a place of privilege, which – among other things – stunts the growth of the imagination. It just does. It hurts one’s ability to prepare.

So last year we had an anti-harassment statement. We put it online before the conference and tried to call people’s attention to it. We solicited and gratefully listened to a number of extraordinarily helpful comments and criticisms. We needed that help. We couldn’t do that alone.

We still need that help, because we still can’t do this alone.

Last year the statement was tested. I’m not going to go into the details now, but I invite you to read that post so you have some sense of what the process was like. This year it might very well be tested again. So we want to make sure it’s as clear and strong a statement as it can possibly be.

Here it is:

anti-harassment statement

In light of recent public conversations spurred by incidents at other conferences, and in the spirit of being both proactive and inclusive, it is important that we communicate the Organizing Committee’s commitment to providing a harassment-free space for participants of all races, gender and trans statuses, sexual orientations, physical abilities, physical appearances, body sizes, and beliefs. Harassment includes, but is not limited to: deliberate intimidation; stalking; unwanted photography or recording; sustained disruption of talks or other events; inappropriate physical contact; and unwelcome sexual attention. We ask you, as participants, to be mindful of how you interact with others—and to remember that harassment isn’t about what you intend, but about how your words or actions are received.

In keeping with a central theme of Theorizing the Web, we also want to remind you that what is said online is just as “real” as what is said verbally.

By attending Theorizing the Web, you agree to maintain and support our conference as a harassment-free space. If you feel that someone has harassed you or otherwise treated you inappropriately, or you feel you have witnessed inappropriate or harassing behavior, please alert any member of the Organizing Committee (identifiable by our badges; you can also find photos of all Committee members on our “Participants” page). If an attendee engages in harassing behavior, the conference organizers may take any lawful action we deem appropriate, including but not limited to warning the offender or asking the offender to leave the conference. We welcome your feedback, and we thank you for working with us to make this a safe, enjoyable, and welcoming experience for everyone who participates.

If you have any comments, concerns, any suggestions for ways in which we can make this thing better, please get in touch with us here or in the comments below.

We’re going to do all we can to make this year’s Theorizing the Web a safe space for everyone. Thanks so much.

aaaaah so fun tho

One of the more frankly disturbing things I’ve read about video games recently wasn’t about sexism/misogyny but was instead about the NPCs (non-player characters) inserted into a game for a player to murder.

The piece in question was on the game Battlefield Hardline, and it contained quotes from the game’s makers regarding the thought that went into the presence and creation and – in particular – the dialogue of enemy NPCs in the game. As games have become more complex and voice acting has become more of a thing on which some focus is placed when a game is in development, there naturally arises the question of what these people are actually going to be saying. This leads to additional questions: Is the dialogue going to be more informative than anything else? Will there be any actual characterization of these people who are, after all, there largely to be killed by the player and whose lives will therefore be cut (tragically) short? Are these mustache-twirling villains, or are they just people?

And what do those decisions end up meaning for player experience?

That last question is what makes this a pretty complicated set of questions, because of what it suggests about how the player feels about the NPCs they kill, and about the emotional weight of that killing. Think about this for a second: players in these kinds of games are frequently – essentially – mass murderers who proceed through the game by slaughtering hundreds upon hundreds if not thousands upon thousands of NPCs. Not all of these killings are even, strictly speaking, necessary. When a game has a significant stealth component – allowing a player to sneak by an NPC or merely render them unconscious – killing is no longer needed in order to proceed through the game.

But games often make killing fun.

When I played Dishonored, I played it through to the end more than once, not just because more than one ending was possible but because different approaches were possible and each was its own kind of fun. There was a lot of strategy and skill and awareness of environment and careful planning involved in the stealth approach – do I shoot that guard with a tranquilizer dart or chokehold him into unconsciousness? What route through this building allows me to avoid the maximum number of NPCs? How can I most effectively hide myself? How can I make use of the timing of NPC movements to my best advantage?

And then when I played it via the combat/killing-heavy approach, I got to knife dudes in the neck and summon swarms of rats to eat them alive.

That was rad.

I was killing NPCs – people, frankly, even if not fully-fledged and realized at all – in absolutely horrific ways and it was so damn fun. It was fun because it was designed to be, and I didn’t think about it or feel a single shred of remorse because the game didn’t encourage me to do so.

Part of why this is worth thinking about – beyond primitive hand-wringing won’t someone think of the children ethical concerns – is because of what killing actually is in video games. Critic Erik Kain noted that “killing people in video games is actually just solving moving puzzles”. It’s something you need to do in order to progress, which is how you play what a lot of people are still likely to think of as a “video game” (leaving aside all the games which aren’t about that at all, such as Flower, The Stanley Parable, Amnesia: A Machine for Pigs, Gone Home and Dear Esther, to name a few of my personal favorites). As such, a lot of the time it doesn’t even really feel like killing. When I play Call of Duty it doesn’t feel like violence to me in any real, visceral sense (I think a lot of this may also be that the Call of Duty series is excellently put together and really not very good).

But killing is also frequently intended and designed to be fun. It’s about creative, innovative ways of destroying humanoid bodies. I’m not a hand-wringer – I really enjoy killing people with rats, for Christ’s sake – but I don’t think that can be ignored.

Underpinning this is the commonly-held idea that games aren’t fun if they make a player pause and stop ignoring this. If they make a player consider the emotional and ethical weight of what they’re actually doing. Because if you did that, wouldn’t you feel bad? Wouldn’t you stop enjoying the game as much?

Rob Auten, writer for Battlefield Hardline, was pretty blunt about what happens when an NPC is made too fully human:

Part of the cops and robbers fantasy is moving among the bad guys and being in the same room. So you have an opportunity to hear more from them. In some cases we made them too charming and people felt bad about shooting them or wanted to hang out with them instead of fighting them and that is no good.

(Personal aside: As a self-identified “gamer”, I think this is a gross idea far too commonly held. Sometimes I do just want to kill people with rats, but God forbid you emotionally engage with your thing.)

One of the games which has taken this idea directly to task – and one of my favorite games of all time and a game which I’ve written about a lot – is Spec Ops: The Line, which not only makes the people you’re killing other American soldiers – albeit soldiers who, as far as you know, have gone rogue – but allows you to listen in on conversations that approach the heartbreaking… and then gives you no choice about whether or not you will kill these people.

At one point in the game I crouched in cover and listened to two of these guys talk about how peaceful things were at that moment, and how, though things got ugly a lot of the time, that peace reminded them of what they were really fighting for.

Then I shot them in the head.

I had to. There was no stealth option there, and I needed to kill them to proceed to the next point in the game.

Reviewer Ben “Yahtzee” Croshaw observed in a column:

[Call of Duty:] Modern Warfare got into the habit of making a shocking moment that illustrated the ruthlessness of the enemy and the resources at their disposal. It’s supposed to make you hate and fear them…The Spec Ops shocking moment [dropping white phosphorus on civilians], contrarily, is designed to make you hate yourself, and fear the things that you are capable of.

That is not “fun.”

But I also think it’s really good. And I enjoyed it, in terms of the intensity of what it made me feel.

The thing is that, at least with most AAA mainstream games, if the primary concern is this particular kind of “fun”, we’re going to continue to see exactly the convention that The Line was trying to subvert.

So I think we need to rethink the idea of “fun”.

If “fun” is enjoyment, I think we can think of a lot of other stuff in other mediums that we enjoy that doesn’t fall in line with this idea of “fun”. A lot of the stuff I like is not “fun”. I really enjoy Lars von Trier films. Those are not “fun”. I really enjoy books wherein everything terrible happens and my heart gets ripped out and eaten in front of me. Not “fun”. The Wire is not “fun”, at least not most of the time. The National is not “fun” music. Most of the fiction I write is not “fun”.

Okay, that’s me, I’m weird. But those things wouldn’t exist if there weren’t a lot of other people like me.

I want to suggest that this is a lot of why video games are generally – still – seen as juvenile by a lot of people: the attitudes toward emotion that underpin most of the big ones haven’t outgrown this idea of “fun” and begun to experiment with what fun can actually mean in terms of what we enjoy consuming.

The other thing is that big-budget AAA games – while still what often gets the most attention – aren’t the full picture, and a lot of stuff outside that bubble is doing exactly that kind of experimenting. The games I mentioned above, which I really like? Flower is fun, and it also makes me cry every time I play it. Amnesia is a giant exercise in NOPE, and tremendously fun. The Stanley Parable is fun and ridiculously funny, but it’s also a bit of a mind-fuck and gently emotionally abusive at times. Gone Home is softly beautiful and sad. Dear Esther is one of those things that does the whole heart-ripping-eating business.

So this stuff is out there. More and more of it all the time. But that idea of “fun” persists, and I would like it to please stop being so unquestioningly accepted.

I still really like killing people with rats, though.

Sarah promises to not kill you with rats on Twitter – @dynamicsymmetry

WHAT THE HELL IS THIS

So I’m basically destroying my gamer cred here – to the extent that I had any, which is probably precisely not at all – by admitting that until this week I hadn’t yet played Destiny.

Look, I just hadn’t, okay? Leave me alone.

(Don’t worry, it gets a lot worse.)

Anyway, I had some free time so I dove into the demo. Many of the more critical (in the more academic sense, not the “this sucks” sense) reviews I had barely skimmed said it was both beautiful and ultimately pretty soulless, which I found – at least from the demo – to be true. But I can get behind a soulless game. I can even get behind a “walking simulator with stuff”. Sometimes I want to Not Think About Things in a fairly aggressive fashion.

So I was having fun. I was running around shooting things from cover and knifing people in the neck and hanging out with a floating metal eyeball with the voice of Peter Dinklage. I was playing by myself, because almost without exception I play video games alone in single-player mode, because I don’t particularly like people (and among other things, in all seriousness, online co-op gaming is not a safe space for me and I shouldn’t have to go into why).

And then I got to The City (the last safe human city because evil aliens blah blah destroy everything for Reasons blah blah look just don’t think about it too hard) to gear up and head back out for more shooting and knifing, and…

Ugh. There were people there.

Quipping aside, it really was rather jarring. Other players were present in the space with me, their screen names visible above their heads. I think it was the fact that it was unexpected that was the most jarring, because if I had actually gone in knowing a thing or two about Destiny aside from the fact that it was supposed to be pretty and soulless I would have seen it coming (like… significant parts of it are massively-multiplayer environments and that’s sort of the point of the game; I somehow managed to go in knowing effing nothing about this game, it’s ridiculous and I have no idea how it happened and I am so goddamn ashamed of myself). But I’m interested in why else it was jarring, and I think it has to do with how I as a player interact with the gamespace in both an emotional and a physical way.

For me, the significant thing was that I wasn’t forced to interact with any of those players, at least not there. I was free to ignore them, and I did. But they annoyed me. I was annoyed at them for being there at all. The space no longer felt like my space because I was sharing it with people, and I didn’t feel as though I had consented to doing so (HOW DID I NOT KNOW THIS GOING IN, HOW). The players themselves had avatars the same as my own, and if not for the floating screen names I wouldn’t have known they were other players at all. In terms of my formal interaction with the game at that point, nothing was affected and nothing changed. My active play wasn’t altered. I acted as I would have done if I had been there “alone”.

But I felt so differently about it. I felt disconnected. I felt thrown out of the world in which I had been slowly immersing myself. Simply by virtue of knowing those other people were there.

Among other things, I think this is evidence that – jumping off a debate about “formalism” in game studies that I’ve been reading about recently, though in certain elements it’s a fairly old argument – when we’re examining something like play in a game, we can’t merely adhere to what a lot of people would call a classic formalist approach to gaming: the idea that what matters most in terms of the analysis of a game is what you do in it. On that view, when you’re not doing something, you’re not playing the game. Simply being stationary in the environment, listening to it and looking at it, isn’t actually engaging with the game at all.

So clearly a problem here is how we define doing.

If the only thing that mattered in Destiny was the logistics of how I ran around and shot things from cover and knifed people in the neck, the presence of other players with whom I was not obligated to interact in any way shouldn’t have bothered me at all.

But I interact with these spaces emotionally. I want them to be mine. I don’t like to share. To arrive in one and have that not be the case made it harder for me – for whatever bizarre psychological reason – to immerse myself.

I feel sorta ridiculous even admitting that any of this happened, but it did and it’s… Yeah, it’s a thing.

I knew this, of course. I know I interact in an active way with video games even when I’m playing Journey and I’m just standing there looking at sand and literally crying because the score is so beautiful. I’m present in that space, I’m experiencing it as a space, and in fact one of the reasons why I sometimes stop “playing” to look at and listen to things in the environment is because my interaction with the game has become so intensely visceral.

So I’m not really even saying anything new here. A number of game studies scholars have pretty much said all of this. It just hit me again, freshly: how we interact with video games is so messy and complicated, because we’re messy and complicated and often resist easy analysis. I came at the mechanics of this specific game (KNOWING NOTHING, HOW DID THAT EVEN HAPPEN) with my own specific way of being in and feeling in the space of a game, and my experience of that space was complex and particular to me. That’s suggestive about how we need to think about the ways in which players experience games… in general.

Ugh. People.

 

Sarah doesn’t like you on Twitter – @dynamicsymmetry

image courtesy of Eduardo Mueses

A couple of days ago I finished writing a short story and burst into tears.

Anyone who knows me knows I have a lot of emotions. The point of this story is the story.

It started out as a story about a mysterious plague of suicides documented and shared via social media, which I seized on just because it resonated for a bunch of reasons, and I felt like writing something profoundly troubling. What it became was a story about me, about what the last year has been like, about what the last six years have been like – in a graduate program regarding which I seem to be moving from feelings of ambivalence to outright anger and resentment – and really what it’s been like since we first started using these technologies to connect with each other.

It’s about being a Millennial and what being a Millennial is like right now, all the clickbait headlines and ridiculous thinkpieces aside. It’s about fear and anger and loneliness, hopelessness.

It’s also about courage, and about the networks and technologies that allow us to take care of each other, especially when no one else is really able to.

Here’s the thing about things like Facebook and Twitter and Tumblr – especially the latter two, for me, in part because they aren’t subject to the same kind of emotional algorithmic filtering. They allow us to share information and organize, sure. They help facilitate political action. They’re exploited by the powerful and the marginalized alike. They alter how we move through the world, how we understand ourselves and each other, how we understand the past and the present and the future. Okay, sure; all of those things.

But I look back on everything that’s happened to me, my life with these things, and what I think I see more clearly than anything else is that they’ve allowed us to take care of each other.

It’s not perfect. It’s not ideal. It is sure as hell not evenly distributed. It’s not the same for everyone, because no one is exactly the same. But it’s something. It’s an important thing. Sometimes it’s the only thing.

It’s very difficult to explain that to people who haven’t experienced it. Those people tend to be the same people devaluing these things, trying to draw distinctions between them and real connection. Those people also tend to be the people who set a lot of the popular discourse around this stuff. Maybe our generation won’t do that; probably we’ll find something else with which to do the same thing. It sort of seems to be what we do.

But in the meantime we do this. We create these spaces, and while we build them with code written by other people, and what we do is therefore constrained by that code, we still have the power to make and to make use. We construct our own languages and our own customs, our own mythologies. We beat paths to each other; we pave roads. It’s not always peaceful or simple; it’s as messy and hurtful and complex as the “real” world.

That’s why it’s real.

But we take care of each other.

So I was sitting there on the couch at one in the morning, looking at this story and weeping and trying to understand exactly why, and all I could take from it was the feeling that I was looking at something I understood, that many people I know understand, which means something and is important but which is a little like a child’s secret country that fades when we grow up. That can’t last. Neither contacts nor code are endlessly sustainable. Networks decay and nodes fade into obscurity, wink out of existence one by one.

This is not about age, but it is about time.

What I think is that when we write about these things, when we write about these technologies and these spaces and what it means to live there, we sometimes have a tendency to oversimplify in the service of analysis. Which is fine; analysis is a necessary thing and it does a necessary job. But when I’m sitting there crying over a goddamn story I just wrote, what I think is that this is all more complicated than I have the most remote possible prayer of ever being able to explain in a journal article or a conference presentation. What I can say about it with absolute certainty is that it is.

We live here until we don’t anymore.

All we ever have is each other.

I loved those people. I loved every one of them. The people I never met. The people whose names and faces I never knew until I was watching them kill themselves. The people who mourned for them and invited me to mourn with them. We said we loved each other. We all said it. Over and over. Like hands across a chasm, groping in the dark. Trying to hold on. Knowing that, in the end, we probably couldn’t save anyone. All we could do was be there until they were gone, and be with whoever was left.

I remember how it was. I remember it. I remember it so well. I’m drowning in remembering.

Not very shareable.

I love you. I love you.

I love you.

image courtesy of matsuyuki

I was doing a post on writing for my author blog, and I wanted an image for it, so off to the Flickr Creative Commons search I went. I searched the “writing” keyword. Almost all of what I got back was some version of the above. Almost all of the rest of it was just random stuff. There were a few shots of laptops or computers, but they nearly always also prominently included notebooks and pens/pencils. Do a Google image search for “writing” and you get the same damn thing. All very attractive photos of pens and hands and often lovely, swooping script.

I do not write that way.

I can’t write that way.

I have a disability called dysgraphia, which manifests – among other ways – as severe difficulty in writing on paper. It involves both impaired motor function/coordination and a form of dyslexia in terms of the production of words. It’s impossible for me to hold a writing implement “correctly”. I get horrible hand cramps. My handwriting itself is illegible. I often get letters or words in the wrong order. Consistent use of capital letters? Hahaha no.

When I write on a keyboard all of that goes away and everything flows wonderfully. I couldn’t write without a keyboard. Without a keyboard, I am probably not a writer at all.

Why does this matter? It matters because we aestheticize the visual process and tools of writing as a part of the process of romanticizing it (which is sorta bullshit anyway). In so doing, we legitimize certain kinds of writing while at the same time delegitimizing others and even rendering them invisible. Most of the time I don’t think we intend to do that, just like we don’t intend to do most things like that. We just have a fixed idea of what Writing is and everything we attach to that idea reinforces it.

Okay, but why does it matter? Well, to start with, it’s at least vaguely ableist simply because it ignores the existence of people like me, and others who for one reason or another can’t depend on physical handwriting to produce words, and that’s already a toxic cultural process. Not a fan of anything which contributes to it.

But it also matters, I think, because it’s yet another symptom of our general tendency to (still) privilege the non-digital over the digital. There’s something about words produced on paper (preferably attractive paper with an attractive fountain pen, or even a quill for God’s sake and I’m not really kidding about that last) which is more real because of where it is and how it’s being done. My kind of writing? Unreal, and not just because of the aesthetic. And in fact, the aesthetic is part of what reinforces the idea of what’s real in this case. It’s also associated with the ways in which a tremendous number of people still seem to feel that paper books are more real and more legitimate than ebooks, despite ebooks being enormously popular.

And I think a huge number of us now write on keyboards.

Is this really harmful to me? Immediately, no. More than anything it’s annoying. But looking at that stream of images, it was difficult to miss, and it was also difficult to miss what it meant.

 

Sarah writes on Twitter – @dynamicsymmetry

image credit: KeurigHack. also, wow, even the machine talks to you like a jerk

A quick update regarding Keurig instituting the coffee maker form of Digital Rights Management in their products: Not so great for Keurig.

“It is a HUGE SHAME that the company decided to remove the ability to use your own coffee grounds in the home brew k-cup. …They should have just said we made these changes so our products would sell more so we could make a bigger profit,” reads a typical review. “They took a potentially killer machine and added horrible DRM – a rights management system, in the greedy attempt to get all other coffee pod manufacturers to pay them so their pods work,” reads another of the hundreds of one-star reviews. Many lamented the ability to give no stars. If you Google “Keurig 2.0,” the first thing that autocompletes is “hack.”

There’s not a tremendous amount that’s surprising about this, and I frankly don’t have much to add that I didn’t say back then, or that anyone else hasn’t already said about this. The Verge article is especially perceptive in terms of pointing out that the fact that this is about coffee is in itself significant; the kind of people who would probably buy a Keurig are people who probably have a particular relationship with this kind of coffee and are looking for a particular kind of coffee experience. Easy, convenient – which also indicates a versatility which DRM of any kind destroys.

DRM constrains use, which is the enemy of the versatile. It takes products and technologies and devices that might be fabulously nimble, highly adaptable, and renders them useful only within a very narrow range of functionality. My husband recently jailbroke his iPhone; I knew about a lot of the features that jailbreaking allowed one to institute, and I understood the general culture behind tearing down the walls around Apple’s pristine garden, but it was still remarkable to see what that process turned the iPhone into: something far more useful and far more powerful than Apple was permitting it to be. Far more friendly.

The kind of DRM Keurig was putting in place is a natural extension of the control pretty much all companies who produce technology of any kind try to maintain over how those devices are used. It’s not unusual and there should be nothing surprising about it. I’ve written somewhat depressingly about what I’ve perceived as the inevitability of this kind of control, especially as it becomes both less visible and more normalized, but I’ve somewhat revised that view. Not just because of the kind of loud annoyance that seems to arise every time something like this happens and the messages that kind of loud annoyance sends, but also because of the sheer – almost joyful – creativity involved in the idea of hacking your coffeemaker. No one likes having to do something like that, and the necessity itself is ridiculous, but there’s something about that kind of resistance I find very satisfying.

Again: nothing particularly new about the observation that there are opportunities for both the exertion of control and the act of resistance in situations like this – political and commercial and whatever, everything in between. But yeah, I kind of see some connections to be drawn between resisting an asshole coffeemaker company and resisting an oppressive political regime. If you tilt your head and squint. A bit.

And I’m pleased that Keurig paid for being stupid about this. That’s always satisfying too.

 

Sarah is on Twitter – @dynamicsymmetry

 

Quiet, please, for Art.

ACTUAL BEST
by Sunny Moraine

Fabulously awesome, congratulations
That wouldn’t be weird, right?
Put up with for a lot of reasons

Me. It also still has no title.
I may be able to help you out
Gorgeous. Sorta choked up a little.

Might fly better if it has a name.

Dudes are just not even human.

When I decided to try to throw something together about Poetweet it went without saying that I’d have to see what it scraped together out of me (note the “me”; I used that word without thinking about it and we’ll return to that shortly). And of course, looking at it, I’m making instinctive sense of it. I recognize those as my words, and arranged in that fashion they do indeed seem to make a kind of sense. Further, it’s a pleasurable kind of sense – doofy, a little ridiculous, a little nonsensical in spite of itself, but I read and I (granted the bias) am all like hey, I kinda like that person.

Which is actually somewhat remarkable considering how difficult it can be to like oneself.

For those who don’t know – and I’ve had my head deep in writing and editing so I’ll admit I hadn’t heard of it either until Nathan Jurgenson brought it to my attention this morning – Poetweet is a bot which scrapes your Twitter account for stuff and arranges it into one of three different poetic styles (I went with Indriso because “new free and literary theory” sounded deep and vaguely edgy and sonnets are so old and done at this point and who the hell wants a French love poem I ask you). The response to this appears to range from delighted amusement to a general sense that there really is a kind of coherence to be found in it, that out of the noise a signal emerges. There are a few things here which I feel are worth some brief attention.
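(For the curious: Poetweet hasn’t published exactly how its selection works, so what follows is a toy sketch of the general idea rather than its actual algorithm – pull a pile of tweets, estimate syllables, slot fragments into a fixed pattern. Every name, list, and number in it is invented for illustration.)

```python
# Hypothetical sketch of a tweet-poem assembler. NOT Poetweet's real
# algorithm (which isn't public); it only illustrates the general idea:
# take already-coherent fragments and fit them to a syllabic pattern.
import random
import re

def syllables(line: str) -> int:
    """Crude syllable estimate: count runs of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", line.lower())))

def build_poem(tweets: list[str], pattern: list[int]) -> str:
    """Pick one tweet fragment per target syllable count in `pattern`."""
    pool = [t.strip() for t in tweets if t.strip()]
    lines = []
    for target in pattern:
        # From a random handful, keep the fragment closest to the target.
        handful = random.sample(pool, min(8, len(pool)))
        lines.append(min(handful, key=lambda t: abs(syllables(t) - target)))
    return "\n".join(lines)

# Invented example input, standing in for a scraped timeline.
tweets = [
    "That wouldn't be weird, right?",
    "Gorgeous. Sorta choked up a little.",
    "I may be able to help you out",
    "Might fly better if it has a name.",
]
print(build_poem(tweets, pattern=[7, 9, 8]))
```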

First, this is a kind of bot-output that is fundamentally self-centered: you enter data about yourself, you get back a thing about yourself. This isn’t new – I was doing quizzes to find out which character from The X-Files I was back in my Livejournal days – and in fact that’s sort of the point; the web by its very nature is extremely well suited to this kind of thing. The thing about this kind of output is that it is at once emotionally positive in nature – or it aims to be – and it functions as a way to see yourself from a slightly different perspective, to see yourself from the point of view of something which is not quite you, and which supposedly has no biases or agenda regarding what it tells you. This latter acts to enhance the pleasure you’re already deriving from the content of the output itself. It even (possibly a bit tongue-in-cheek) promises to give you a poem which possesses significant meaning in and of itself, with phrases like “analyzing your deepest feelings” and “tracking the data of your inspiration”.

(Data as inspiration is a post in itself, but it is not this one.)

In short, this practice is persistent because on a very deep level we like it. It’s a kind of positive reinforcement of pre-existing narcissism (I don’t think that’s all it is, but that’s what I’m focusing on).

But what about the idea of getting signal out of noise? What is it about doing that which might be pleasurable?

A number of people have already drawn a comparison between Poetweet and @Horse_ebooks. As was observed extensively around the time that account was outed as not the result of randomness but something which operated according to the conscious intentions of human beings, we liked the idea that @Horse_ebooks was nothing more than a “mindless” algorithm. We loved the idea that out of noise and chaos could come this:

And what was most important about that – at least for a lot of people – was the idea that we could pull coherence out of total incoherence. We find recognizable patterns deeply comforting, and I think we particularly find them comforting when we see them where we don’t expect to. A significant component of delight is surprise.

So I think the same pleasurable emotional process is going on, with some important differences. First, most people’s tweets do make sense, and they are temporally coherent in aggregate, because they are one way in which someone’s own narrative – and self-narrative – emerges, is produced, and is reinforced. So what we have here is not noise but simply an unmanageable amount of coherence.

And what Poetweet does is take that enormous flood of coherence and cut it down to something neat and small, and we get to see what sense comes out of it. Of course there would be sense. There was sense from the beginning.

But the other significant difference is the orientation of the thing. @Horse_ebooks was almost entirely external; we looked at it from the inside out, and while we took what we found and internalized it, it was not a funhouse mirror image of us, at least not in the same way. @Horse_ebooks was about finding a meaningful signal in a universe full of meaningless noise. Poetweet is about finding a single, clear, emotionally positive and aesthetically pleasing signal in a much larger and messier collection of coherent information.

If anything, the removal of content from its context makes it less coherent. Hence the comparison to a funhouse mirror.

So what’s the point? Simply that it’s interesting what people will do to find meaning in themselves and in the world around them, where they’ll go for that meaning, and what they’ll make of that meaning when they find it. It is, for better or worse, something we deeply need. And we very much need to like what we find.

Play us off, Poetweet.

Even human
by Sunny Moraine

That wouldn’t be weird, right?
A Today Totally Didn’t Suck feast
In part of “We Have Always Fought”

A HUNDRED THOUSAND PERCENT BETTER
Seeing this story so off it goes
I’m gonna buy some body glitter

Tw for so much general grossness

To the Prevent Cancer Foundation

Sarah is possessed of deep and profound meaning on – where else – Twitter: @dynamicsymmetry

image courtesy of Elya

The problem with Je suis Charlie is that I’m not.

Going back for a second.

The hashtag/slogan that started in the wake of the massacre at the offices of the French satire magazine Charlie Hebdo has proven to possess an undeniable power – not because it’s meant in any literal sense (obviously) but because of what it means in every way that isn’t literal. It rose out of intense horror, outrage, and the things that intense horror and outrage do – it prompted correspondingly powerful feelings of solidarity. What happened was abhorrent, obscene. Of course this is how we respond when people are killed for what they say, what they write, for the art they create. We know what kind of world that kind of violence leads to, and that’s not a world in which people who value the right of free speech want to live. Of course we’ll fight to protect that right, however we can.

But there is a problem with Je suis Charlie, and it is that I’m not.

I’m a writer. I’ve written stuff that a lot of people would call offensive, that they would probably call obscene. I’m the last person to argue against the ideal of free speech. But here’s the thing: Especially as Americans, in the course of placing huge amounts of value on the right to free speech we (using we because I am and most of the people I know are as well, so it’s most of my social circle) tend to massively oversimplify what that right means and the context within which it exists. Some of us tend to use it as an excuse for utterly terrible behavior and to cry censorship when people call them on it.

And others – many others – throw the ideal of it around without regard for the complications it creates. This is especially true at this moment in history, with a great deal of our discourse bound up in the vaguely libertarian ideals we see – a lot of the time – in chaotic and loosely affiliated groups like Reddit, 4chan, and Anonymous (yes, I know those are not all the same things).

I may not like what you have to say but I will defend to the death your right to say it works fabulously well when it’s put to practice in the context of a society organized around a level playing field, where groups of people aren’t marginalized, oppressed, silenced, and murdered through systems and structures bolstered by culture and discourse, where for many what’s at stake is not I may not like what you have to say but rather What you have to say is part of what is killing me. In other words, it works fabulously well in the context of a society that does not and probably never will exist.

I feel like I’m not doing a very good job of articulating what I think about this, so let me refer to something said by fellow author Sofia Samatar on Twitter:

Thin. Not nonexistent, but thin.

This is the world in which we live. It sucks, and no one likes complications, and solidarity in the cause of freedom of expression is a great and powerful and righteous thing and I want to stress that I believe that last to the very core of my writerly human being. But the problem with Je suis Charlie is that I’m not, and to use that slogan – and to go no further with the conversation – obscures at least some of the extremely problematic and troubling things that accompany any ideals of free speech in a world in which some people are simply not free, and in which the speech of others produces and reproduces the cultures that keep them that way.

And this is what hashtag activism does. It’s what most protest slogans do. Anyone remember Kony 2012? The problem with that whole business wasn’t that it was incorrect – Joseph Kony is a horrendous human being, a psychopathic war criminal without whom the world would be undeniably better off, and no one disputed that. The problem with Kony 2012 was the white Western context from which it emerged and the fact that it oversimplified the issue to the point of obscuring and even erasing some very significant problems.

The problem with Je suis Charlie is that it kind of does the same thing, in the process of creating and encouraging a kind of solidarity that we need in the most desperate way. The problem with Je suis Charlie is that not everyone can be Charlie. The problem with Je suis Charlie is that it erases a huge amount of the conversation we should maybe be having about what free speech really is and does, and what it costs certain people to defend it completely and without any question or consideration in every single circumstance.

The problem with Je suis Charlie is that I’m not.

 

Sarah is on Twitter – @dynamicsymmetry