I’ve been writing a lot lately about what machines think and want – what the intentions of a drone are, what Siri wants to be and to do, what smartphones dream about and the goals to which my iPad aspires. It makes sense for me to write about technology this way: I’m a science fiction writer, and my head is full of sentient machines, killer AIs, cyborgs in the explicit sense, and androids longing for someone to teach their cold hearts to love. I’m not the only one; our technological folktales have been full of sentient human-made devices for thousands of years. For a variety of reasons, this is something we just tend to do. But I think there are two distinct things going on when we do it – a situation in which it’s beneficial and one in which it’s arguably harmful.
I also think we need to distinguish between anthropomorphizing a machine and imagining its agency. In one instance, the boundary lines between human and machine are blurred, even erased outright. In the other, human control of a machine is removed – literally or figuratively.
One could argue that granting a technological device the qualities of a human being is facile and childish. When I was a child, everything had a name and a personality; I moved through the world in a giddy kind of animist connection-with-all (a sensibility I haven’t altogether abandoned), and I think that’s a practice common to many children. There are the qualities we grant a machine in order to make it more usable, of course – Siri’s voice, the faces of androids, the overall humanoid shape of our imagined personal robot butlers – and then there are the aspects we grant with no clear purpose behind the granting.
Some of this reflects our close relationship with our devices in general – I swear there was a period in high school when I named every single portable CD player I owned – but some of it reveals deeper things, both hopes and anxieties. We tell stories of machines who want to feel emotion, who want to be human – who are, in essence, engaging in a process of self-anthropomorphization. It’s been observed many times before that stories of our creations wanting nothing more than to be like us express the deep-seated anxiety a parent feels toward the child who will ultimately replace them. Given that, stories of the Commander Datas of the world are comforting; they maintain humanity’s position at the top of the identity pyramid. No one will replace us; no one can replace us. A machine that wants to be human only emphasizes all the ways in which a machine will always fall short.
But anthropomorphizing a machine does something else. By giving machines aspects of humanity, it’s possible to make plain(er) the lie that underpins the stories described above: that there ever was any such thing as a pure, fully human humanity. Blurring the lines between machine and organic humanity, Haraway-like, shows that those lines are in fact blurrable.
At the DARC event that I wrote about last week, an attendee at our panel introduced the idea of conceiving of drones as moving along a spectrum rather than between the two binary states of human/machine. This is a powerful idea in itself, but they made a further point: that a drone – indeed, anything – can move in both directions on that spectrum, rather than always toward humanity. Movement toward humanity is what we find most often in these technological folktales; even when the boundary-blurring is progressive in nature, our construction of humanity is always the ultimately desirable state. A human should not desire to be more like a machine.
So why anthropomorphize machines? Most basically, because we don’t seem able to stop doing it – but that very persistence is, as I’ve said repeatedly, revealing. It says something profound about our dreams, anxieties, fears, and understandings of who we are. I think there’s a place for doing it consciously; like all tropes, it can be powerfully subversive when intentionally tweaked. I don’t think these stories are facile or childish, and I don’t even like the idea that “childish” is an intrinsically negative quality. I’m down with naming our smartphones. I just think we should be thinking very carefully about what’s going on when we do.
But anthropomorphization isn’t the same thing as agency. Agency is a crucial component of anthropomorphization, but one can have agency without humanity. And when one does, all too often, granting that agency obscures rather than reveals.
More than one of us has written more than once about why this is such a big problem when it comes to drones. Drones are not “unmanned”, and most drones are not (yet) autonomous. Granting full agency to something designed, built, programmed, and operated by humans – not by one or two people but in many cases by over a hundred – obscures those humans and removes the visible aspects of their responsibility for what a drone does. And yet while this kind of discourse obscures, it also reveals: some part of us wants to remove human responsibility from the picture, leaving only the machine, the technique, the process.
And in fact this is exactly what we would expect to find in any setting for the waging of war. Zygmunt Bauman described the role of modernity’s hallmarks – technical skill, efficiency, and rational process – in the unfolding of the Holocaust, and scholars of barbarism in war have discussed again and again the role technology plays in distancing human actors from the gruesomely lethal consequences of their actions. In studies of aerial bombardment, emotional trauma can literally be correlated with altitude.
When one considers drones, of course, things aren’t that simple: drones are obviously not all put to lethal purposes, or even all military, and the “sight” of a drone is a great deal more multifaceted than a simple bird’s-eye view. But I’m not speaking only about drones. Drones simply serve as an excellent example of a process that occurs whenever our feelings about how and why we use technology – how and why we are enmeshed with technology – become profoundly ambivalent.
Of these two kinds of storytelling, I obviously regard the former as less malignant. But I’m a fan of both – not in the sense of regarding both as admirable, but of regarding both as significant and worthy of attention. It’s not a good idea to take the position that humans are humans and technology is technology and never the twain shall meet; that position is wrong, and it prevents us from being sensitive to important truths about how the world works (Digital Dualism, blah blah blah). When we make a movie about WALL-E or give Curiosity a Twitter account, we’re doing a thing, and that thing isn’t stupid or silly. Or it may be those things, but it isn’t them alone.
To return yet again to one of my favorite quotes from SF&F writer Catherynne Valente, “there is only one verb that matters: to be.” So what we need – always – to be asking ourselves is what and where we are, in our stories and outside of them. Because there really isn’t a huge difference between the two anyway.
Sarah self-anthropomorphizes on Twitter – @dynamicsymmetry
Comments
ArtSmart Consult — October 26, 2013
Maybe we anthropomorphize machines to make the interface more comfortable. The metaphors bring familiarity. It's a mental cushion or shock absorber between the warm fuzzy person and the cold rigid tool. Machines are meant to be an extension of humans, not a replacement.