On Friday night, VP-elect Mike Pence went to see Hamilton. He was loudly booed. The cast delivered a respectful message asking him to “work on behalf of all of us”. President-elect and noted internet troll Donald Trump accused the cast of harassment, because the truth is whatever he says it is. By Saturday morning, it was – going by my feed – most of what Twitter was talking about.
So far I’m sure I’m not delivering any information that anyone reading this doesn’t already know. What was especially noteworthy about what happened – aside from the fact that it happened at all – was the timing. Specifically, it happened almost immediately after Trump settled a fraud lawsuit for $25 million, having tried to delay it until after the Inauguration. People had been talking about that, but suddenly it was largely submerged under Hamilton tweets. Which didn’t go unnoticed.
The point is impossible to argue with. That’s exactly what happened. And it’s definitely disturbing; Pence getting booed at Hamilton is arguably both a more spectacular event and an event with significantly less legal and ethical importance. In fact, since then – and I’m sure I’m not alone in considering this – I’ve wondered if it wasn’t calculated. It seems like something tailored specifically for social media: take a despised political figure (also a virulent misogynist and dangerous homophobe, among many other things), put him in the audience of a massively popular musical where 90% of the audience plus the cast is pretty much guaranteed to dislike him, add smartphones, and hit purée.
Meanwhile, the President-elect’s generally unambiguous criminal activity – and his avoidance of a court appearance – slides under the radar.
Regardless of exactly how this happened, like I said, I’m disturbed by it and by its implications. Because if there’s one thing these people do understand, it’s how to manage a news cycle. I think we need to be watching for exactly this kind of thing, because I don’t think we’ve seen the last of it. This is a President-elect who genuinely appears to be under the impression that this is in fact all a reality show (is he even wrong? experts disagree), and he’s being advised by at least one man who knows how to leverage spectacle and manipulate an audience. This is a tactic that is likely to be effective as a distraction measure. It’s dangerous.
Does that mean we shouldn’t have been talking about it?
No. We should. What happened on Friday night should be talked about – only not because it’s fun to be clever on the internet (it is) or because these men are easy to mock (they are). It deserves our attention because it’s yet another way to emphasize and underline the fact that this is not normal.
Those of us who want to fight what’s happening, those of us who want to dissent, we need to do it with every single tool we can get our hands on, and the normalization of this entire situation is one of the biggest dangers right now. Normalization paves the way for much, much worse. A normal VP-elect goes to a Broadway show, and maybe a few people heckle, but hey, that’s normal. What happened on Friday night was not normal, and everyone saw it, or heard about it, and everyone knew.
Regardless of how much Trump wants to lie about how it really went down.
Letting events like this be a distraction from things like a multimillion dollar fraud settlement is dangerous, and it’s what they want. But taking events like this and using them as a way to broadcast dissent, to normalize it… That is not what they want. That’s one of the last things they want. A society that has normalized dissent is a society that is much less likely to normalize them.
This is going to be one of the major sites of contention over the course of the next four years, and it’s not new but it’s going to be of unprecedented importance: not the details of what happens, but how they’re used. Not that a spectacle occurs, but what it ends up meaning. Because that’s one of the fights we’re looking at in a post-truth environment. Not what happened, but instead: What does this mean?
It’s abstract. It’s also not. It’s about as meaningful as you can get.
This is the machine, in a courtroom (or not) and in the audience of a Broadway musical. It’s not here or there; it’s everywhere. We can’t afford to focus too much of our attention in any one place, on any one front; we also can’t afford to ignore any of the tools we can use.
This is the machine. Grab every wrench you can, and throw.
It’s probably appropriate that amidst a torrent of harassment and abuse directed at marginalized people following the election of noted internet troll Donald Trump, Twitter would roll out a new feature that purports to allow users to protect themselves against harassment and abuse and general unwanted interaction and content. Essentially it functions as an extension of the “mute” feature, with broader and more powerful applications. It allows users to block specific keywords from appearing in their notifications, as well as muting conversation threads they’re @ed in, effectively removing themselves.
In a lot of ways, this seems like a good feature and a useful tool. Among other things, it addresses problems with Twitter’s abuse reporting system, where people reporting abusive tweets are told that the tweets in question don’t violate Twitter’s anti-abuse policy. As Del Harvey, Twitter’s head of “trust and safety”, explains it:
We really tried to look at why — why did we not catch this? And maybe the person who did that front-line review didn’t have the cultural or historical context for why this was a threat or why this was abuse.
In that same Bloomberg piece, it’s noted that there’s also a new option to report “hateful conduct”, and that abuse team members are being retrained in things like “cultural issues”. Also good. Especially right now, when – despite Melania Trump’s charmingly quixotic stated mission to protect everyone from her husband on Twitter – there’s likely to be a significant upswing in this kind of profound ugliness, probably for a long time.
Here’s the problem, though. And it’s more of a quibble, but it’s worth the quibbling.
The primary thrust of Twitter’s new initiative is oriented toward the target. By which I mean, what looks like putting power in the hands of a user actually has the potential to put responsibility on them for their own safety. Which a lot of people would probably think is perfectly reasonable, and I agree – to a point.
The issue is that it’s very easy to do something like this – toss something into someone’s lap for them to use – and adopt the assumption that this is the best strategy for dealing with the deeper problem. Which isn’t that abusers are able to reach their targets. It’s that the abusers are there at all.
Here’s where someone says hey, that’s the internet, what do you expect? And yeah, I know. Believe me, I know. But what I expect? Is more than putting responsibility on a user in the guise – even if it’s not entirely a guise – of giving them power. I understand that it’s very difficult to kick these people out and keep them out. I understand that it’s just about impossible. I appreciate that Twitter does seem to be doing work in that direction. But it’s not enough. What I expect is that we’ll create spaces where we don’t have to worry about muting these people because they never start talking in the first place.
There’s also the issue of how, when you successfully ignore something while not removing it, you can actually enable its presence. Which is not to say that users shouldn’t take full advantage of this feature, but instead to say that Twitter should remember that just because you can’t hear it, that doesn’t mean it isn’t there.
It’s been a real struggle for me to talk about Donald Trump.
No, not because he’s an extremely unpleasant subject. I mean, that, sure. Though to be honest I’ve been talking about him a lot in various places. I wish I could ignore him – and the whole damn election – entirely, but that’s not how I cope. Or rather, my coping mechanism of choice isn’t an altogether healthy one: it’s to become totally and utterly obsessed.
Don’t ask me what my curated news feed largely consists of. Don’t ask me how many political podcasts I currently follow. Don’t ask me how frequently I check FiveThirtyEight, and how much emotional weight I attach to numbers which are, after all, not objective but instead mediated through and interpreted by human beings. The point is that I’m obsessed, which means that I’m immersed in the way you and I and we all talk about Donald J. Trump.
‘scuse me a sec.
Something I’ve noticed we do a great deal is talk about his mental health. This has been done in a serious, concerted way – attempts to “diagnose” him, usually though not always on the part of people who have received no mental health training in their lives, not that that’s the only thing that matters – but more often in a casual, offhand way – Trump is “insane”. “Nuts”. “Delusional”. “Crazy”. So are his adherents. We’re at a loss to explain the phenomenon that is Donald Trump, at least in any rational way, so we turn to the discourse of mental illness. In order to account for his existence and its nature, we medicalize him.
This is a problem, and the problem is twofold.
Firstly – and this is actually what I intended the sole focus of this piece to be – it’s ableist as hell. Taking someone like Trump, with his cruelty, his arrogance, his racism and xenophobia and misogyny, and making use of mental illness to explain it, connects mental illness with all of those things, which isn’t merely wrong and bad but dangerous. It’s part of a larger discourse that works to demonize people with mental illness, to present them as potentially dangerous. Because Trump is dangerous, and is frequently and explicitly referred to as such (and I wouldn’t for a moment disagree). Recall the ways in which we tend to explain rampage shooters with mental illness rather than things like toxic masculinity. It also constructs people with mental illness as fundamentally irrational to a hopeless extent; these people can’t be reasoned with, can’t be reached.
Talking about someone like that dehumanizes them in a way we reeeeeeally don’t want to do. Because when someone can’t be reasoned with, a central element of their humanity is denied. It’s not a tremendous number of steps from that to some very ugly things.
This is especially ironic, because this way of speaking about mental illness is supposed to be kinder and more humane. But I’d argue that it ultimately has the opposite effect. With only a few exceptions, I haven’t seen this way of framing Trump elicit much sympathy for or desire to help him. It hasn’t humanized him. It’s served to remove him from those of us who are describing him in these terms, to draw hard lines between us and him. He isn’t like us. We’re better. We’re more rational. We’re sane. By extension, we’re better than everyone who likes him and/or is prepared to vote for him.
(I’m not sane, by the bye. Another thing you should not ask me about is all my medication. I’m on a lot of medication. No, it’s frankly not helping much with this.)
We’re also allegedly smarter. Intelligence is a thing. Trump is an idiot. He’s stupid. He’s a moron. I thankfully haven’t seen anyone call him “retarded” but in spite of my obsession I have largely remained in my little safe space with my safe people, and I know it’s being done. His people are the same. They’re not just crazy, they’re dumb.
Bringing someone’s intelligence into the conversation and using it to dismiss and dehumanize them is just as ableist as calling them crazy. We do it all the time, without thinking – and that’s a huge part of the problem.
I do it. I really try not to, but it’s deeply ingrained, so it happens anyway. Plus, yeah, it feels good. In a nasty way, but it does. It feels good to be superior – or to think you are.
Donald Trump frightens us. He confuses us. We don’t know what to do with him. So we try to explain him in medicalized, positivist terms that make us more comfortable, and we try to elevate ourselves above him and his Trumpians in order to feel a little better about everything.
But it’s not just that it’s ableist. It also doesn’t work. It isn’t sufficient or accurate, and we need to recognize that.
Using mental illness and/or intelligence to explain someone like Trump vastly oversimplifies the situation. It reduces it to those safe, comfortable terms. It requires no stretch on our part to understand the deeper complexities, because in spite of how many words people have spent on this, ultimately it’s dismissive – as I said above – and in dismissing someone or something, you absolve yourself of any greater responsibility to understand how they and the whole thing happened.
Again, it’s like writing off a rampage shooter as a “nutcase”. It means we don’t have to think about where that person actually came from and why they became who they became. We don’t have to think about the hideous effect of toxic masculinity on cisgender men who are raised in a fundamentally misogynist culture, and about how violence fits into the picture. That’s harder. It’s uncomfortable. Not least because it implicates us.
When we use mental illness to explain Donald Trump, among other things we don’t have to think about ideology. Mental illness discourse doesn’t allow us to think about ideology. But that’s only one thing among many.
When I was considering this the other day, it occurred to me that another form of discourse exists that does some of what mental illness discourse doesn’t. Once we explained (and a lot of us still do) things like this in terms of sin and evil. We used moral and ethical concepts that were grounded in spirituality, and the dominant forms of discourse largely abandoned this when we made the switch from one to the other, from believing that people with schizophrenia were possessed by demons to identifying them as suffering from an illness that could be scientifically treated as such.
Calling someone evil has the exact same dehumanizing effect I described above, only a lot more intense and a lot more direct. An evil person isn’t really a person anymore, at least not in the way that “good” people are. I personally think evil is a useful idea in some contexts, but even if that’s true, in this specific context its hazards are significant and whatever it does isn’t nearly sufficient to make up for mental illness discourse’s many shortcomings.
It’s also much too simple.
So how do we talk about Donald Trump, if so little of what we currently use is useful – and much of it is in fact harmful? From where do we get a different kind of discourse to describe Trumpishness? I honestly don’t know. I’m honestly not sure it can even be done. But I think we need to try, because we should strive not to harm people, and because as long as we’re failing in our attempts to articulate who Trump is and the social context that created the event that is his presidential campaign, and our place in all of it, we’re very poorly situated to do anything about it when it happens again.
And regardless of what goes down on Tuesday, you know it will happen again.
So it happened that, after about a year of unemployment and almost nothing but writing and editing books, I returned to video games.
I used to both play them a lot and write about them a lot, and I missed them. I genuinely think my mental health took a hit when I (largely) stopped. Video games engage a part of my brain that really nothing else does, and that brain-part gets engaged actively. Game critic Erik Kain wrote that killing in video games is essentially puzzle-solving, and I agree (though I don’t believe that’s all it is), because that’s exactly how it feels.
I prefer games where you shoot things, including games that I objectively recognize are not very good. I’ve played and loved a bunch of the entries in the Call of Duty franchise. I confine myself to the single-player campaigns and steer clear of multiplayer, because I’m not a cisgender man and am not enough of a masochist to make multiplayer bearable in spite of that, and also because I play video games in significant part to get away from people (I’ve been informed multiple times that this is not the correct way to play CoD; the thing about that is that I don’t care).
But I also stick to the single-player campaigns because I like my games to have stories, even stories of the flimsiest and most ridiculous kind (something else I love? Just Cause 2. So). It’s like getting to play through a silly schlocky action movie. It’s a gaming fast food hamburger. Not everything has to be or should be Art. Yet the story really is important for me, and the gameplay within and alongside the story.
And this brings me to Deus Ex: Human Revolution.
Released in 2011, Deus Ex: HR is a relatively old game by now, but I honestly hadn’t played it before, because of time. Which is a poor excuse, because it was basically grown in a lab specifically for me. I love Deus Ex. I effing love. It. I’m in the middle of my second play-through, which I began about ten minutes after completing my first (I’m attempting to do a 95% non-lethal run; 95% because in the version I have, you have no choice but to kill your way through the profoundly irritating boss fights). It looks and feels unapologetically like Blade Runner. The protagonist is a cyborg with a Christian Bale-as-Batman voice and a barrel of man-pain. He has swords in his arms. He can leap off buildings and fall in a ball of golden lightning and land unharmed in a fist-to-the-ground Iron Man pose (not a single bystander ever seems to find this in the least bit startling, so I guess in 2027 it’s just a Thing). You can roll through the game as an angel of death or you can be nice and just punch guys in the head, combat-heavy or stealth-focused, exploring cyberpunk Detroit and Shanghai and hacking everything in sight.
Even more to the point, it’s story-heavy. The story is interesting, if not the most original thing ever, and the characters are fairly well-realized and engaging. I’m playing the game over again not just because I enjoy the gameplay but because I really love the storyworld and I wasn’t ready to leave just yet.
And lest this descend into a thousand words of me gushing about how much I love a slightly doofy cyberpunk game released half a decade ago, let me talk about the difficulty levels.
Difficulty levels in video games might seem like one of the simpler and more basic elements of game design, but they’re actually very complex and somewhat a matter of contention. There’s the difficulty in designing them, creating a selection of gameplay experiences that are capable of satisfying a range of players, but there’s also the matter of how they’re defined, how one conveys the meaning of things like “easy” and “hard” to a player and, perhaps even more, how those meanings are guided and constrained by what has traditionally been understood as gaming culture. Specifically, difficulty levels often function as forms of identity-policing/gatekeeping via value judgment attached to the disparate levels of difficulty and, by extension, judgment of the player. In other words, serious gamers don’t play on easy, and the degree to which you’re “serious” about the game you’re playing is the degree to which the game itself deems you worthy to be playing it at all.
Put most simply: difficulty levels are the means by which a game judges you as a person.
(I think it’s kind of an unnecessarily jerk-ass thing to do.)
Something I noticed right away is that Deus Ex: HR at least makes an attempt to avoid this kind of judgmentalism through how it defines its easy level. Easy is called “tell me a story”, and the language is refreshingly positive about what this means:
But it begins to slip back into the old unpleasant tropes when you hit the “normal” and hard levels:
Yeah, see? It’s not coming out and explicitly calling the player a wuss for taking the first option, but the subtle condescension is arguably still there.
Okay, so far I’m not saying anything that isn’t pretty obvious. But here’s the thing about how this game breaks its difficulty levels down in discursive terms: I think it reveals an interesting and uncomfortable tension between what games are increasingly capable of doing – and designed to do – in terms of the ambition and complexity of their storytelling, and how combat-oriented gameplay has traditionally worked and continues to work. Again, sure, Deus Ex: HR was released in 2011. But I don’t think that tension has disappeared. I think it’s just as present now, and it’s just as tense.
The debate over difficulty levels themselves is ongoing – how they should be defined and whether they should even exist at all (at least in their presently recognized form). In an essay for Gamasutra last month, Mark Venturelli breaks down what he views as the inherent problems of how difficulty levels function, and suggests a variety of new approaches suited to a variety of game types. It’s the issue of game types I want to focus on here, and moreover, why exactly a player is playing a specific game to begin with.
“To have fun.” Yeah, that doesn’t really tell me much. There are a hundred thousand ways to have fun, and everyone’s fun is going to be different, often from mood to mood and situation to situation. Sometimes I want to solve a puzzle (Portal). Sometimes I want to shoot rendered human figures in the face (Call of Duty). In both of those situations, as I said before, the presence of a story does matter to me, however silly and thin, and in fact I’ve suffered through somewhat mediocre gameplay multiple times because I loved the story and the characters so much (Enslaved: Odyssey to the West).
The games that have meant the most to me, the ones I return to over and over, are the ones with the powerful and interesting stories. The Last of Us (possibly one of the most stunningly good pieces of media of any kind that I’ve ever encountered). Seasons one and two of The Walking Dead (see The Last of Us). The Uncharted franchise. The Mass Effect trilogy. All three entries in the Bioshock series. Spec Ops: The Line. Portal and Portal 2. And now, Deus Ex.
What makes this such an exciting time for someone who loves and even requires stories in their video games is that smaller studios without huge backers are producing games that are easily 70–80% story, games where the entire point is the story. Dear Esther, one of my favorite games of all time, consists of the player wandering a linear path around a deserted Hebridean island while listening to voiceovers of a letter written by a man to his dead wife (that’s it, that’s the game). Both Amnesia games – The Dark Descent and A Machine for Pigs – are slightly more conventional in terms of the presence of gameplay, but the story remains the focus. The Stanley Parable is a game about games, and indeed about stories themselves, a genius work of meta. There’s Gone Home, a release from a few years back that received wildly positive critical response and functioned as a reimagining of old point-and-click adventure games, and now there’s what I feel is in many ways Gone Home’s spiritual successor, this year’s Firewatch.
And now this piece has become a rec list. I’m sorry. My eventual point is this: game designers are coming to understand on a very deep level that players like stories, and that many players like involved, ambitious stories. That many players, in fact, like stories more than just about anything else, even if they also want more conventional combat-oriented gameplay and they want that gameplay to be good.
Older attempts on the part of designers to deliver on this have created some problems, which Clint Hocking noted in his piece “Ludonarrative Dissonance in Bioshock”. Exploring the conflict between Bioshock’s story and its gameplay, he writes:
To cut straight to the heart of it, Bioshock seems to suffer from a powerful dissonance between what it is about as a game, and what it is about as a story. By throwing the narrative and ludic elements of the work into opposition, the game seems to openly mock the player for having believed in the fiction of the game at all. The leveraging of the game’s narrative structure against its ludic structure all but destroys the player’s ability to feel connected to either, forcing the player to either abandon the game in protest (which I almost did) or simply accept that the game cannot be enjoyed as both a game and a story, and to then finish it for the mere sake of finishing it.
And arguably this problem exists in significant part because the makers of Bioshock really really wanted to tell a story. Not only that, but they were trying to enmesh the story with the gameplay in a way that made sense and augmented both. They failed – really badly, if you agree with Hocking – but without actually asking any of them, I believe the sincere desire to do so was there.
Game designers are still in the process of figuring out how gameplay and story can work together, how games can tell stories in a way that manages to be immersive while granting players the kind of agency a player tends to want. Whether that last is even possible remains something of an open question. But what designers are wrestling with, I think, isn’t just a matter of writing and practicality.
The tension Deus Ex: Human Revolution reveals in its difficulty settings is in the question of what value its own story even has. Open up that menu and you have a front row seat to an identity crisis.
The makers of Deus Ex: HR wanted to tell a story. They wanted to tell a fun story (I think they mostly succeeded). Even more to the point, they wanted to express pride in that story, and not toss judgment at the people who were in the game for that story more than to stab dudes in the neck with their arm swords. They refer to it as an “experience”, and I interpret that language as legitimizing. Yet then you move up to the next two levels and it’s the same old deal. If you’re serious about the game, you’re not playing for the story.
The game fundamentally doesn’t know how it feels about itself, and about you as a player.
Once again, yes, I recognize that this is a study of a case that happens to be half a decade old. And once again, I think this identity crisis is still being worked through, at least on the part of AAA games coming from large studios. Deus Ex is just one of the more explicit examples of it that I can recall seeing.
I’ve seen a number of game critics over the last year or so say that they’re abandoning AAA games altogether, that there’s no longer anything meaningful or interesting being done there. I disagree with that. A lot. And not just because big budget AAA games continue to make my lists of faves.
Because these kinds of games are inherently conservative, because change within them tends to be ponderous and plodding, because they tend to be conceptually clumsy and slow to adapt, watching ambitious people working within that system and trying to push as much change as they can within the restrictions they’re dealing with is fascinating to me. I don’t think you see these kinds of crises of identity in what people tend to define as indie games. At least in my experience, indie games are more likely to know who and what they are, what they’re doing and what their player is playing them for. They’re lean and nimble. There’s a sharpness to the best ones, a kind of clarity of self. I like that.
But a lot of the big story-heavy games I play – and love – seem to be a little confused about themselves. And I like that too. I trust that they’ll figure it out eventually. I have faith in them.
Now if you’ll excuse me, I have to go play with my arm swords.
This July 4th, PBS viewers in the DC metro area were outraged to be reminded of the fact that they were watching television.
It’s actually not quite that simple, though it’s fun to phrase it that way. Here’s what happened: this past Monday was an extremely muggy and cloudy one in our neck of the woods; in other words, not at all the ideal climatic conditions for a fireworks display. PBS, in something of a bind regarding how to maximize the spectacle for its live broadcast of the Independence Day celebration in front of the White House, elected to intercut archival footage of past fireworks displays with its live broadcast of the currently-happening fireworks.
People were displeased.
Specifically, people who took issue with the decision claimed that it was an act of fakery, that it was a cheap move and made the broadcast less “authentic”. That it was almost somehow a lie. PBS responded on Twitter with: “We showed a combination of the best fireworks from this year and previous years. It was the patriotic thing to do.”
I’m going to leave aside the interesting fact that PBS is characterizing this move as “patriotic” and instead focus on the two other things that I find interesting.
First, these viewers forgot the Baudrillardian truth about TV and indeed about all media and really kind of everything ever: it’s basically “fake” by definition. What a viewer is shown is almost always carefully edited and packaged, or at the very least presented with a specific intent in mind. Even when it comes to live TV, there is no such thing as a fully objective and solely factual depiction of what’s actually going on. This frequently has little to do with a political agenda (the vitally important patriotism of this particular decision aside) and far more to do with spectacle, and PBS’s intent in showing the fireworks was to provide exactly that.
I think the people who were offended by what PBS did believed that PBS’s primary goal was to show something as it truly was. And I’m not saying that PBS didn’t have that goal at all. But I doubt that – consciously or unconsciously – it was at the top of their list.
People like to believe that in cases like this, what they’re seeing is “real” and “true”. It’s jarring and even disturbing to be reminded that they can’t be sure of that, and that indeed they should assume that what they’re seeing is never real or true. That by definition it basically can’t be. People want to buy into an illusion – an illusion that PBS is selling, which it is obviously not alone in – and they become uncomfortable and upset when the illusion is destroyed.
Which leads me to the second thing that strikes me about this: the fact that it was PBS.
As a long-time PBS viewer, I get the distinct sense that someone watching the channel might expect the exact opposite of a focus on spectacle and selling a constructed package of imagery, and far more of a focus on soberly presenting things with a commitment to authenticity. Given the culture of PBS and the probable cultural affiliation of many PBS viewers, I think it’s reasonable to believe that whether or not PBS intends it, there’s a kind of implicit contract between the network and the people watching it, as well as a desire for authenticity in particular on the part of those people.
So I think what we have here is in some respects a twofold destruction of illusion and a perceived twofold betrayal: what people prefer to believe they’re seeing in this kind of broadcast, and what a specific cultural product promises the people consuming it. When I found out about this, I almost immediately wondered how someone watching CNN, MSNBC, or Fox News (oh my lord Fox News) would be likely to feel if the same editorial decision were made.
And I’d still like to ask PBS what exactly they meant by “the patriotic thing to do”. Maybe thick, low clouds are un-American.
Rose Eveleth’s piece for Fusion on gender and bodyhacking was something I didn’t know I needed in my life until it was there. You know how you’ve always known something or felt something, but it isn’t until someone else articulates it for you that you truly understand it, can explain it to yourself, think you might be able to explain it to others – or, even better, shove the articulation at them and be all THAT RIGHT THERE, THAT’S WHAT I’M TALKING ABOUT. You know that kind of thing?
Eveleth’s overall thesis is that “bodyhacking” isn’t new at all, that it’s been around forever in how women – to get oversimplified and gender-essentialist in a way I try to avoid, so caveat there – alter and control and manage their bodies (not always to positive or uncoercive ends), but that it’s not recognized as such because we still gender the concept of “technology” as profoundly masculine:
As a central personal example, Eveleth uses her IUD, and this is what especially resonated with me, because I also have one. I’ve had one for about seven years. I love it. And getting it was moderately life-changing, not just because of its practical benefits but because it altered how I think about me.
The insertion process was not comfortable (not to scare off anyone thinking of getting one, TRUST ME IT IS GREAT TO HAVE) and more than a little anxiety-inducing ahead of time, but I walked out of the doctor’s office feeling kind of cool. I had an implant. I had a piece of technology in my uterus that was enabling me to control my reproductive process. I don’t want children – at least not right now – and my reproductive organs have never been significantly important to me as far as my gender identity goes (probably not least because I don’t identify as a woman), but managing my bits and what they do and how they do it has naturally been a part of my life since I became sexually active.
And what matters for this conversation is that the constant task of managing them isn’t something I chose. Trying to find a method that worked best for me and (mildly) stressing about how well it was working was a part of my identity inasmuch as it took up space in my brain, and I wasn’t thrilled about that. I didn’t want it to be part of my identity – though I didn’t want to go as far as permanently foreclosing on the possibility of pregnancy – and it irked me that it had to be.
Then it didn’t have to be anymore.
And it wasn’t just about a little copper implant being cool on a pure nerd level. I felt cool because the power dynamic between my self and my body had changed. The relationship between me and this set of organs had become voluntary in a way entirely new to me.
I feel like I might not be explaining this very well.
Here: Over thirty years ago, Donna Haraway presented an image of a new form of self and its creation – not creation, in fact, but construction. Something pieced together with intentionality, the result of choices – something “encoded”. She offered a criticism of the woman-as-Earth-Mother vision that then-contemporary feminists were making use of, and pointed the way forward toward something far stranger and more wonderfully monstrous.
The power of an enmeshing between the organic and the technological lies not only in what it allows one to do but in what it allows one to be – and often there’s no real distinction to be made between the two. We can talk about identity in terms of smartphones, but when we come to things like technologies of reproductive control, I think the conversation often slips into the purely utilitarian – if these things are recognized as technologies at all.
Eveleth notes that “technology is a thing men do”, and I think the dismissal of female bodyhacking goes beyond dismissal of the utilitarian aspects of these technologies. It’s also the dismissal of many of the things that make it possible to construct a cyborg self, to weave a powerful connection to the body that’s about the emotional and psychological just as much as the physical.
I walked out of that doctor’s office with my little copper implant, and the fact that I no longer had to angst about accidental pregnancy was in many respects a minor component of what I was feeling. I was a little less of a goddess, and a little more of a cyborg.
And lingering cramps aside, it felt pretty damn good.
I only heard the term “blockchain technology” for the first time this past autumn, but in the last few months, I’ve become pretty absorbed in the blockchain world. Initially I was intimidated by its descriptions, which struck me as needlessly abstruse — though, in a perfect chicken-and-egg scenario, I couldn’t be sure, since the descriptions didn’t offer an easy understanding of how it worked. What compelled me to press on in my blockchain research was the terminology surrounding it. I’m a long-standing advocate for open source, and blockchain’s default descriptors are “distributed” (as in “distributed ledger”), “decentralized” (as in “decentralized platform,” a tagline for at least one major blockchain development platform [1: https://www.ethereum.org/]), and “peer-to-peer” (the crux of all things Bitcoin and blockchain). These words all spoke to my f/oss-loving heart, leading me to click on article after jargon-heavy article in an effort to wrap my head around the ‘chain. As I learned more about it, I came to understand why it’s begun to garner huge amounts of attention. I don’t like to get too starry-eyed about a new technology, but I too became a blockchain believer.
I’m in growing company. Though the technical structure has been around since at least 2008 [2: www.bitcoin.org/bitcoin.pdf], when Bitcoin (which blockchain was originally developed for) was introduced to the public, blockchain-without-Bitcoin was infrequently discussed until the past year. In January 2016, the World Economic Forum listed it as one of the foundational technologies of the Fourth Industrial Revolution [https://www.weforum.org/agenda/archive/fourth-industrial-revolution]; in March, the Chamber of Digital Commerce held the first-ever DC Blockchain Summit, which addressed issues of policy and regulation at the federal level. Since 2015, the number of blockchain conferences and major news stories has been snowballing. Blockchain’s status in the world of tech media has become formidable, and general-interest outlets have wasted no time in spreading the digital gospel. It’s arguably gotten to the point where blockchain mythos now overshadows its reality. It strikes me as irresponsible to write about it without first giving a few words to its image — the hype has become a fact unto itself, and any accurate reporting about it must deal with it as such.
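For readers put off by the abstruse descriptions I mentioned, the core “distributed ledger” idea is simpler than the jargon suggests: each block commits to the one before it via a cryptographic hash, so tampering with history is detectable. Here’s a toy sketch of that mechanism (illustrative only — the block fields and transaction strings are invented for the example and don’t match any real protocol’s format):

```python
import hashlib
import json

def block_hash(block):
    """Deterministically hash a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(data, prev_hash):
    """Create a block that commits to its predecessor via prev_hash."""
    return {"data": data, "prev_hash": prev_hash}

# Build a tiny three-block chain starting from a genesis block.
genesis = make_block("genesis", prev_hash="0" * 64)
chain = [genesis]
for payload in ["alice pays bob 5", "bob pays carol 2"]:
    chain.append(make_block(payload, prev_hash=block_hash(chain[-1])))

def chain_is_valid(chain):
    """Check every block's prev_hash against the actual hash of its predecessor."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

print(chain_is_valid(chain))   # True
chain[1]["data"] = "alice pays bob 5000"   # rewrite history...
print(chain_is_valid(chain))   # False: the tampering is detected
```

The “distributed” and “peer-to-peer” parts come from many independent nodes each holding a copy of such a chain and agreeing on which valid extension counts, which is where the real engineering (and the real politics) lives.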
In my Theorizing The Web talk “Block Party People: Off The Bitcoin Chain” I argue that blockchain offers tech media a unique opportunity to benefit from years of hindsight. Internet technologies in their earliest stages have historically been written about in terms designed to appeal to their developers and early adopters — in other words, to those with the professional power, money, and intellectual access to take part in shaping the future of technology. This is by its very nature a narrow group, particularly where early adoption is concerned. It entails financial and cultural privilege that’s unavailable to most people.
Of course, there are plenty of reasons to target specific audiences when writing about emerging technologies. One is sheer comprehensibility: when a tool is in its earliest stages of development, layman’s terms and easily understandable use cases have yet to materialize. As a general rule, the appropriate metaphors only emerge after a certain amount of time. But those interested in learning about new technologies in non-layman’s terms, the ones who want to pore over dense, jargon-filled texts and abstraction-heavy descriptions, aren’t always professionals, and they’re not necessarily in the financial situation to become early adopters. They also don’t always fit the stereotypical image of an early adopter: sometimes they’re female, sometimes they haven’t gone to college, sometimes they live far away from a major city. Though the mainstream media has fallen in love with its (moneyed, masculine) image, the Silicon Valley techie is a very particular flavor of geek.
As one would imagine for a tool designed specifically for Bitcoin, blockchain is uniquely well-suited to streamline digital financial transactions. While it can impact virtually anything that relies on Internet protocol, its applications within finance are much more apparent than for any other business sector (at least for now). In line with this, those most heavily invested in blockchain aren’t exactly the Occupy Wall Street crowd. One major blockchain initiative is called The Hyperledger Project [3: https://www.hyperledger.org]. It’s an open source, cross-industry effort to develop an open standard for blockchain. The Hyperledger Project is spearheaded by the Linux Foundation and IBM; partners include J.P. Morgan, Wells Fargo, Hitachi and Intel, along with a number of other large companies and V.C.-backed startups. Though it’s not the only blockchain research and development initiative, the Hyperledger Project is emblematic of the general scope of interest in blockchain. It’s fair to say that the financial industry and the corporate world are very well-represented in this space.
I don’t want to suggest that this group should divest its interest in blockchain. Far from it: we need that type of power to develop broad-scale research. However, I do believe it’s critical that groups more representative of the average citizen — the person who’s not in a position of power at a global financial institution, large tech company or well-funded startup — become a part of the blockchain conversation. Those individuals may have different ideas about the technical protocols that become standard for blockchain over the coming decades. We’re a more tech-savvy society than ever before, and opening up the discussion to as many people as possible now, when the technology is still in its infancy, can work to ensure that it helps as many people as possible in the future. A big part of starting that conversation lies in how blockchain is being presented to the public.
In simple terms, my work on blockchain has been guided by a desire to include more diverse audiences in the subject. As I was developing my research, though, I began to get cold feet. In the midst of fleshing out my clarion call for blockchain reporting to value inclusivity, I realized that I’d be treading in all sorts of dangerous territory. On one hand, there’s a lot of antagonism in the Bitcoin community about the use of blockchain without Bitcoin. Suggesting uses of blockchain not only outside of cryptocurrency, but for non-finance-related, socially equitable causes is a far cry from the freewheeling anarcho-capitalist ends championed by certain Bitcoin enthusiasts. I have no interest in inciting the wrath of the cryptocurrency community, but my perspective on this is undeniably at odds with large parts of it. On the other hand, I’m not a blockchain developer, and despite spending months reading about it, writing about it (including reporting on the DC Summit for a major Bitcoin news source [4: https://www.coindesk.com/dc-blockchain-summit-2016/]) and generally immersing myself in all things blockchain, I still doubted whether I was qualified to offer a real opinion on it.
I’m aware of Impostor Syndrome [5: https://en.wikipedia.org/wiki/Impostor_syndrome], but I still couldn’t help but wonder if I’d ventured too far into forbidden territory with my blockchain investigations. I worried that I’d be called out as having a naive understanding of technology and business, and would walk away from the whole project having done damage to my credibility as a researcher. In fact, throughout the course of the work, I frequently thought that I should just give the project up. The irony of this isn’t lost on me: as I was trying to offer encouragement for those who may not think of themselves as having power and influence in the tech world, I was losing confidence in myself. I became the very person I was trying to write for.
Of course, I didn’t quit — if I had, you wouldn’t be reading this. Part of my motivation to keep going was in realizing that by giving up, I’d be turning my back on the very ideals I was advocating. I’ve made some peace with the reality that I may not fully understand what I’m talking about. The fact of the matter is that such a risk is always there, no matter how far you advance in a research-based career. That’s true even for those writing the code. Fear of appearing naive or ill-suited to offer a perspective on technology is toxic, and I would argue that it’s a small part of why the community isn’t more diverse.
The title of the panel I’m on is “Hack The Planet,” which I thought was odd at first, since my talk doesn’t directly relate to hacking. In a way, though, what’s kept me going throughout my work on blockchain are what I think of as hacker values: curiosity, playfulness, and a tolerance for risk. So it’s in the spirit of hacking that I’m doing this work, and that I encourage others to keep an open mind about it. It’s not always easy, but I think that in the long run, it’s for the best.
Emma Stamm is a writer, web developer and PhD candidate at Virginia Tech. You can find her online at www.o-culus.com and @turing_tests.
[1] This is Ethereum: https://www.ethereum.org/
[2] This is based on the 2008 publication of Satoshi Nakamoto’s white paper describing Bitcoin, which is generally recognized as the beginning of Bitcoin/blockchain. www.bitcoin.org/bitcoin.pdf
That statement alone would have raised eyebrows high enough. What made a lot of eyebrows especially frowny and angry is the way in which he then proudly defended this practice as something admirable, something the site’s unpaid writers should not only accept but be pleased about:
…we don’t pay them, but you know if I was paying someone to write something because I wanted it to get advertising pay, that’s not a real authentic way of presenting copy. So when somebody writes something for us, we know it’s real. We know they want to write it. It’s not been forced or paid for. I think that’s something to be proud of.
Let’s unpack that language. Let’s call particular attention to the words authentic and real. Authentic is a kind of ideal, an unquestioned Goodness; real accompanies it as a matter of course. At the conceptual opposite end is fake, which is unquestionably Bad. So writing that’s been paid for – even worse, that’s been produced with the expectation of payment – is neither authentic nor real. It’s fake, and therefore unreal, undesirable, and bad.
(Apparently paying someone is tantamount to “forcing” them, which I can’t even.)
(Actually, no, I can. The implication there – I think, it’s not entirely clear to me – is that writing produced for pay has somehow been pried out with a crowbar rather than created with a magical flourish of heartfelt inspiration. So again: fake and bad.)
(I’ve written most of my professional fiction with crowbar firmly in hand.)
I don’t think this can be emphasized enough: Stephen Hull is essentially saying that if you accept payment for your writing, your writing is bad and you should feel bad. He would probably disagree that he’s going that far, but he would be wrong.
He would also disagree with the claim that because writers aren’t paid, they aren’t compensated, because writers who have their work published by the Huffington Post get something far more valuable than fake bad money – which is, of course, exposure. Which, as Wil Wheaton says, does not enable you to pay your rent.
There are a lot of things that are fairly horrible about this, and so far I haven’t said anything that other people haven’t already articulated better. Aside from the issues above, there’s the fact that the Huffington Post is profitable to the tune of millions of dollars and can completely afford to pay their writers (just as an aside, the money I generally take for my fake bad short stories starts at the Science Fiction and Fantasy Writers of America’s minimum rate, which is six cents a word – between $200 and $300 per story – and which is paid to me by relatively unprofitable fiction magazines who nevertheless somehow manage to scrape together the resources to do so, maybe by digging between the couch cushions or something), so we’re dealing with a pretty cut-and-dried case of exploitation.
But beyond that, as Chuck Wendig points out, the even more problematic assertion here is that writing should not actually be considered labor at all:
The lie is this: writing is not work, it is not fundamental, it is a freedom in which you would partake anyway, and here some chucklefuck would say, haw haw haw, you blog at your blog and nobody pays you, you post updates on Twitter and nobody pays you, you speak words into the mighty air and you do it for free, free, free. And Huffington Post floats overhead in their bloated dirigible and they yell down at you, WE BROADCAST TO MILLIONS and DON’T YOU WANT TO REACH MILLIONS WITH YOUR MEAGER VOICE and THIS IS AN OPPORTUNITY FOR YOU.
The background for this is an even larger and more pervasive problem, and one that Millennials arguably face to an unprecedented degree: that the most important thing is to “do what you love”, and that anything not done for love is less legitimate (and I would add that in some cases the argument is that if you’re fortunate enough to do that, the love should compensate for low or even absent pay; see also unpaid internships). It’s the same kind of exploitation wrapped up in ostensibly noble ideology, and it’s one that emphasizes the gap between those who are privileged enough to survive just fine on Doing What They Love, and those who have to make a living however they can:
One consequence…is the division that DWYL creates among workers, largely along class lines. Work becomes divided into two opposing classes: that which is lovable (creative, intellectual, socially prestigious) and that which is not (repetitive, unintellectual, undistinguished). Those in the lovable-work camp are vastly more privileged in terms of wealth, social status, education, society’s racial biases, and political clout, while comprising a small minority of the workforce.
For those forced into unlovable work, it’s a different story. Under the DWYL credo, labor that is done out of motives or needs other than love—which is, in fact, most labor—is erased.
The credo of DWYL is a primary part of what allows the Huffington Post to get away with this offensive nonsense. Or at least to believe that it can and ought to.
Again, I’m not really saying anything new here. But what I don’t think I’ve seen addressed specifically enough is the fact that the Huffington Post is assuming and encouraging the assumption that a writer shouldn’t draw distinctions between the various kinds of writing they do. That if sometimes you write for passion alone, all your writing should be for passion alone. If you’re a real authentic writer, all the writing you do is either imbued with this real authenticity – or it isn’t.
This is insidiously, romantically appealing. It’s also utterly ludicrous. My professional fiction writing is not my fanfiction writing is not my essay writing is not my academic writing is not the writing I’m doing right this minute. These are all different realms and they’re different kinds of work, despite obvious similarities. Leave aside for the moment the extremely pertinent question of whether someone other than you is materially profiting from the writing you do for free, and consider that while I count all of those forms of writing labor in their own way, I personally determine whether I should be materially compensated for that labor. I do this by drawing distinctions not only between those different categories of writing, but by drawing distinctions regarding what I get out of doing this labor and who my audience is and what my relationship with them happens to be.
When I write professional fiction, I’m writing for a professional community that simultaneously takes writing seriously as an art form and considers it something worth set amounts of money. When I write fanfiction, I’m writing for a community that operates on the basis of a gift economy, where not only am I happy to not be paid but would in fact prefer that capitalism never get involved at all. When I write blog posts, I’m doing something similar in that I’m engaging in a conversation with a community and I’m doing so on my own time. Those distinctions are meaningful and legitimate and important, but by throwing words like authentic around, the Huffington Post is arguing that they aren’t. The only meaningful distinction is whether or not the writing is real.
Real writing isn’t worthy of compensation. In fact, it’s too good for compensation. It’s not work. It’s passion. It’s art. And something cannot be all three of those things simultaneously.
So while this is bad in and of itself, it’s part of something worse. And the lie isn’t only that passion and payment are mutually exclusive but that all writing is basically the same at the molecular level, and it exists as one option in a binary set. Which needs to be fought, and hard. As John Scalzi wrote back in 2012:
If you try to mumble something at me about writing for free on this site, I might feed you to wild dogs. When I write here, it’s me in my free time. When I write somewhere else, it’s me on the clock. Here’s a handy tip to find out whether I will write for you for free: Are you me?
And what I think every writer should adopt as a motto (emphasis mine):
This past year, I sort of disappeared from Twitter. Not completely – I’d poke my head in now and then – but for a number of reasons I stopped checking it at all regularly.
One of the things that ended up keeping me away for longer than I might otherwise have been was how it felt, those times when I poked my head back in. It was intimidating in a way it hadn’t been before. It was like I had been missing a long series of conversations that added up to one enormous conversation, and I no longer had any idea what was going on. Friends and colleagues and friend-colleagues with whom I used to be in nearly constant touch were suddenly discussing things I didn’t know anything about, and the prospect of trying to catch up was overwhelming. I felt like I had nothing to contribute to the conversation I left behind months ago. It was like a party I would wander into, circulate in kind of a distant and awkward fashion, and leave again. Because I had nothing to say.
I like people, but I’m very bad at feeling like I belong anywhere. It’s my default to feel like a fraud in any crowd I’m a part of, and awkwardness has a way of turning into a withdrawal spiral. This began in physical space, but physical space doesn’t have a corner on making me feel that way. Not at all.
I still don’t check Twitter very regularly.
One of the things that exacerbates this, in the SFF writer community, is cons. People talk a lot about cons. Who went, what the panels were like, what happened, who said what, what horribly embarrassing things occurred shortly before sunrise after the consumption of large quantities of strong beverages. People tweet during the cons, about the cons. People tweet after the cons, about the cons. People tweet prior to the cons, about the cons.
Please note the extraordinary self-control I’m exercising here by not making a conversation pun. There’s already enough of that in the names of the cons themselves and I don’t think I should add to it.
So great. If you go to the cons, it’s wonderful. But cons are expensive. Some of them are very expensive. Some of them have registration fees well in excess of $100, and that’s often the smallest expense.
Cons are important. Cons are where you meet people. Cons are where you make friends. Cons are where you make connections. Cons are where you get your work known, yourself known. If you want to make a career of this, you really need to go to cons. Or man, it sure does help.
And then when you get back you have something to talk about on Twitter, with the people you now know.
Unless you’re bad at Twitter, and you can’t afford to get to a con.
I’ve heard many people say they can only afford to go to one con a year. They have to pick carefully. This is their social circle. This is their career. If they don’t go, there are consequences.
…Okay, I legitimately didn’t mean to make that one. Sorry.
Here’s another wrinkle: at least in SFF writerdom, there is really no meaningful distinction between friends and colleagues. Which, sure, is true of a lot of fields. But these relationships are particularly close, and the professional utility of these friendships can be very high. There are costs to missing out, to not being at the right place at the right time to meet the right person. Missed connections are a real thing. Because here’s another wrinkle: it’s not just about being talented. It’s about being noticed.
Yeah, generally you get noticed when you’re very talented. But not always. Sometimes people just… don’t get noticed. You can write the best book ever but people can’t buy it if they don’t know it’s there. There are thousands of short stories published every year, and many of them are fabulous. Not all of the fabulous ones get seen by the right people at the right time. There are cracks, and people and work together fall through.
My sense is that this isn’t a truth people are very comfortable with, because its implications aren’t comfortable ones. But I do think it’s true.
For me personally, this becomes especially poignant around about awards season. People are talking about the work that caught their attention, that excited them, that they think is especially worthy of note. People are making lists. People are posting all their eligible stuff and inviting examination.
I don’t know of a single person who will cop to enjoying that, the Here Is My Stuff thing. I hate it. It makes my skin crawl. But you sort of have to. Because there’s so much stuff out there, and it’s easy for your voice to get lost.
It’s easy, if you’ve been away for a while, to come back and feel lost. It’s easy to be silent about your own stuff.
So it’s easy to be forgotten. Or God, it really feels that way.
It’s an old and very bitter myth, this idea that being successful in writing is “all about who you know.” And yeah, it’s not all about that. But it is about that. It’s about which conversations you can be part of, with who, regarding what. It’s about who’s keeping an eye on you and what you produce – which attention you earn, but even so. It’s about the room parties and the panels and BarCon, HallwayCon, FloorCon, all the places people congregate and talk shop and talk shit. It’s about making friends and it’s about self-promotion, and again, I think that when you’re a writer the line between those things is practically nonexistent.
There are all kinds of reasons why someone might be bad at social media, having to do with both the body and the mind – because engagement with social media is embodied, and no mental illness or emotional problem exists in isolation from someone’s physical experience of life. Social anxiety isn’t just about the difficulties of walking into an actual room full of actual people. Depression isn’t just about not going outside.
There are all kinds of reasons why someone might not be able to go to cons – money and health being the two big ones, though there are lots of others.
So big surprise: the things that make life and work difficult in terms of everything – the things that make it easier for certain people to be marginalized and unheard and rendered invisible – are at work here too.
This is especially ironic when we’re talking about writing, which is by nature such a solitary thing. The actual business is done alone and internally. The business side of the business is the exact opposite, and I don’t think it comes easily to many of us. For some of us it’s nearly impossible. A lot of us are not exactly rolling in cash. I’m probably not going to Worldcon this year, and World Fantasy Con is a big question mark. But I’m scraping together money and courage and medication, and going to whatever cons I reasonably can, because I’m lucky enough to be in a position where I can go to any, and because I basically have to.
Because I know there are consequences for not doing so.
And I guess I’m also hoping that when I get back, I’ll hop on Twitter and have some things to talk about.
There are two primary things that background this, that are probably necessary to know.
The first is that this past year has been extraordinarily hard for me. The second is that it’s been very difficult to talk openly about.
I’ve always tried to be honest online – about what I’m going through, about what I’m wrestling with, and especially about mental illness, which I think is much less of a forbidden topic of conversation than it used to be but which I also think can still stand to be discussed more than it is, and especially in what we would probably call professional settings.
I’ve done this because I value vulnerability – or I want to. I feel like it’s something to aspire to, in no small part because I absolutely suck at pretending that everything is fine if I have to do so for more than five minutes at a stretch. It’s going to be awkward and uncomfortable no matter what I do, so generally I go with what I regard as the lesser of two evils. When I think I can.
And there’s also that I hope vulnerability might eventually help me.
But it turns out that I’m even worse at everything – pretending and talking openly about my shit – when things genuinely get rough.
So this past year things genuinely got rough, and for the most part, in most places, I clammed up. Because I didn’t know what else to do. I didn’t actually hide the fact that things weren’t going so great, but I didn’t do a whole lot of talking in a public way about the specifics and the uglier parts of what I was feeling regarding everything. I just didn’t want to go into it, in significant part because I was terrified of what people might think.
Things are better now. Sort of. And part of what I’m doing as an effort to make them better is to un-clam, to break myself open from the inside out and be – literally – painfully honest about stuff. At least a lot of stuff. Or to try. It remains incredibly difficult.
Getting going on this post, for example, was much harder than once it would have been.
I made a post a few days ago on my author blog. Just clenched everything and threw it all out there, and left it for people to do whatever they wanted with it. I don’t know that I felt better after doing so, but I certainly didn’t feel worse. A few people on Twitter and Facebook told me that they were really feeling what I was talking about. That was nice. Then I sort of moved on and left it alone.
Then a day or so ago it ended up in a WordPress recommendation Twitter account, and my inbox hasn’t stopped exploding since.
I don’t even know how many people have commented to say, essentially, me too and I really needed to read this. I haven’t tried to keep count. I haven’t honestly looked at the page. I think I’m a little afraid to and I’m not completely sure why. I do know that it’s a lot. I’ve been getting message after message that amounts to what I was talking about in the post itself: people in pain looking for connection. I knew they were out there; part of why I wrote the post in the first place was to state my belief that a fair number of us aren’t doing so hot and don’t know how to talk about it to anyone. But I didn’t expect to hear from so many of them.
It wasn’t until that happened that I realized something strange (though I don’t think it’s surprising): I wrote about looking for connection in vulnerability and the sharing of pain, and I didn’t expect to connect. Not like this.
Which got me mulling over vulnerability itself, and this kind of writing and the context in which we shoot it out into the world.
There isn’t only one kind of vulnerability, is the thing. There’s intimate person-to-person vulnerability, direct communication with a particular someone or someones about what’s going on in your head and heart. By no means does this have to be taking place face-to-face; what really matters – as far as I’m concerned and as far as my own experience goes – is that you’re speaking to someone, and that person is actively listening to you, and both of you know it.
In other words, you’re having something at least vaguely resembling a conversation.
Then there’s the kind of vulnerability I engaged in when I wrote the post. Which was directionless, openly broadcast vulnerability. There were specific people I had in mind when I wrote it, sure. Some of them talked to me about it. But I wasn’t writing to them. I was writing to everyone and everything, writing to an undifferentiated public, some of whom were people I knew but the vast majority of whom were not. I wrote it, left it there, walked away, and on some level I think I never expected anything else to happen.
What happened is that the latter form of vulnerability began to slide into the former, and I didn’t know it was coming. And it was jarring. It was a little disturbing.
It’s a lot overwhelming. I’m still working up to responding to most of it.
I don’t think I’m saying anything that isn’t pretty self-evident. I don’t think there’s anything piercingly insightful here, or new or surprising. Yet I was surprised. It didn’t occur to me that one of these things could become the other; it didn’t even occur to me that there was a difference. I wasn’t thinking about it at all.
I believe it’s worth thinking about. Because I was writing about loneliness and connection, making myself available for it, and people reached out. Strangers, but also not. Because none of us are okay.
And that’s a deep thing to be united by.
Here’s the point (maybe, assuming I have one): The conversation about disconnection and loneliness regarding digital technology is old and tired and boring and I don’t think any of us want to have it ever again. But disconnection and loneliness can be more piercingly, viscerally felt in these digital spaces, and they can be confronted in an immediate, nuanced, and difficult way that I don’t think any other arena allows for. The ways in which we’re lonely and why. What exactly we’re afraid of. What hurts. How we want to get well. How we want to reach out and hope that there might be someone reaching back. And how we might not expect that when it happens, because private and public are after all not binary categories and connection means a hopeless number of different things.
Like I said, no piercing insights, and I’m not coming away from this with any answers of any kind. What I’m coming away with is the knowledge that a complicated thing is even more complicated than I thought, and there’s a lot more to be afraid of.
But I think there’s also a lot more to reach out for.