Cory Doctorow’s recent talk on “The Coming Civil War Over General Purpose Computing” illuminates an interesting tension that, I would argue, is an emerging result of a human society that is increasingly augmented: not only are the boundaries between atoms and bits increasingly blurry and meaningless, but we are also caught in a similar process regarding categories of ownership and usership of technology.

Understanding the tension between owners and users – and the regulatory bodies, both civil and corporate, that would like greater degrees of control over both – is necessarily part of any consideration of the distribution of power in augmented human experience. If the categories of user and owner are increasingly difficult to differentiate clearly, it follows that we need to examine how power moves and where it’s located as these arrangements shift. I don’t mean just the question of whether users or owners have more power, but what kind of power each has – who is losing which kind of power, and who is correspondingly making gains.

Doctorow’s initial point – and it’s an important point from which to start – is that not only is human life increasingly augmented, but it’s augmented by a collection of technologies that are at once more and less diverse than they used to be:

We used to have separate categories of device: washing machines, VCRs, phones, cars, but now we just have computers in different cases. For example, modern cars are computers we put our bodies in and Boeing 747s are flying Solaris boxes, whereas hearing aids and pacemakers are computers we put in our body.

If we understand these devices as “general purpose”, as Doctorow does, then power within that context takes on a very specific meaning: who controls what programs can run on these devices, and how does that end up affecting how the devices are used? Owners? Users? Regulatory bodies? Corporations?

Traditionally we’ve understood an owner of something to have pretty much complete control over its use, within reason; this is fundamental to a lot of how we culturally conceive of private property rights. When we buy something, when we spend money on it and consider it ours, it’s been tacitly understood that we then control how it’s used, at least within the boundaries of the law. If you buy a car, you can have it repainted, switch out the parts for other parts, enhance and augment it largely to your heart’s content. If you buy a house, you can knock down walls and build extensions. I would argue that we tend to instinctively think of technology the same way: we – or, to paraphrase William Gibson, “the street” – find our own uses for things, and those uses aren’t subject to much constraint.

But increasingly, we can’t assume that.

When it comes to general purpose computing, both corporations and corporate-esque bodies with regulatory interests are exercising ever-greater degrees of control over what programs can and can’t run on our devices – in other words, how our “owned” devices can and can’t be used. As Doctorow points out:

We don’t know how to make a computer that can run all the programs we can compile except for whichever one pisses off a regulator, or disrupts a business model, or abets a criminal. The closest approximation we have for such a device is a computer with spyware on it— a computer that, if you do the wrong thing, can intercede and say, “I can’t let you do that, Dave.”

Such a computer runs programs designed to be hidden from the owner of the device, programs that the owner can’t override or kill. In other words: DRM. Digital Rights Management.

Things like DRM are clearly problematic because they erode our very idea of what it means to own something; we can use a device, install and run programs on it, and customize it – but only to a certain degree. Other entities can stop us from doing things with our devices that they don’t like, often through coercive means both subtle and not-so-subtle. And the line between okay and not-okay is subject to change, sometimes without much notice. Owners – people whose devices would traditionally be understood as their property – increasingly resemble users: people who can use and sometimes even alter or customize a device, but who don’t actually own it, and whose power vis-à-vis the use of that device is necessarily limited. And, as Doctorow goes on to note, we are increasingly users of devices that we don’t even arguably own (such as workplace computers).

PJ Rey wrote an excellent piece in this vein a while back on Apple – probably one of the more egregious offenders here. Apple, PJ notes, makes use of an aura of intuitive, attractive, user-focused design to suggest to its customers that it is empowering them – but this sense of empowerment is ultimately an illusion. Apple doesn’t want owners; it wants largely passive users – people who pay for the privilege of using the device but who will submit to the nature of that usage being severely curtailed:

[B]y burying the inner-workings of its devices in non-openable cases and non-modifiable interfaces, Apple diminishes user agency—instead, fostering naïveté and passive acceptance.

Even when a company is less overt about its desire to control the devices it’s selling, the presence of a net connection coupled with firmware updates can reveal just how little control “owners” of a device have over what programs actually run on that device and how it can be used. I own a PlayStation 3, and periodically I’m required to download a firmware update. I essentially have no choice in whether I download this update – I’m required to signal my agreement, but declining would deny me access to a number of features that make it possible to use the PS3 for the very things I bought it to do. I wouldn’t be able to access PSN (the PlayStation Network, Sony’s online store and software update service), which would mean that many of my games would be unplayable; they require regular software updates to run at all.

But by accepting one of these system firmware updates, I removed my PS3’s ability to run a Linux-based OS – something many users found preferable to, and more flexible than, the PS3’s default OS. The device I own is now less functional; I traded one kind of non-functionality for a lesser kind. Either way, I was reminded once again that I don’t necessarily “own” the device that is arguably my private property.

So power is in flux. It’s subject to a particular kind of contention here, and I’d argue that the form of that contention – or at least some of its elements – is new.

This picture is further complicated when we consider programs themselves. I’m old enough to remember a time when you bought software and it was basically yours in the traditional sense: you could install it on as many devices as you wanted, and an internet connection wasn’t necessary for constant confirmation that you had actually paid for it. Where software is concerned, licensing is arguably supplanting traditional ideas of ownership – you are essentially paying for the privilege of installing it on a severely limited number of devices, and you’re required to go through verification processes that frequently make you feel like some kind of digital shoplifter.

Finally, Doctorow points out how this is all still further complicated by the ways in which people’s bodies are physically augmented and are likely to be so in the future (here he contrasts issues specific to owners with issues specific to users):

Most of the tech world understands why you, as the owner of your cochlear implants, should be legally allowed to choose the firmware for them. After all, when you own a device that is surgically implanted in your skull, it makes a lot of sense that you have the freedom to change software vendors. Maybe the company that made your implant has the very best signal processing algorithm right now, but if a competitor patents a superior algorithm next year, should you be doomed to inferior hearing for the rest of your life?…

[But] consider some of the following scenarios:

• You are a minor child and your deeply religious parents pay for your cochlear implants, and ask for the software that makes it impossible for you to hear blasphemy.

• You are broke, and a commercial company wants to sell you ad-supported implants that listen in on your conversations and insert “discussions about the brands you love”.

• Your government is willing to install cochlear implants, but they will archive everything you hear and review it without your knowledge or consent.

The point at which bodies are physically augmented by technology is a crucial crossroads here, one that Doctorow discusses but where I also think he could go further: the question of human rights vs. property rights. Doctorow is undoubtedly correct when he notes that users and owners don’t necessarily have the same interests – indeed, sometimes their interests conflict. But I think it’s also important to emphasize once again that the delineation between the two concepts isn’t always clear anymore – if it ever really was – and is likely to become less so. And along with the uncertainty about the boundaries between these two groups comes uncertainty regarding whether we can still meaningfully differentiate between property rights and human rights, when we not only own but are our technology.

On this blog we’re very used to the idea of categories collapsing, and once we accept that these categories are collapsing, we have to ask ourselves what exactly that means – or might end up meaning in the long run. What we have now are questions – about where the power is, about where it’s going, and about the degree to which agent-driven technology use can survive the coercive control of corporate and government regulation of those technologies – especially when human life and experience and our very physical nature are so deeply augmented.

One final theoretical element that I think is useful here – and to which Doctorow makes no direct reference, though I think there’s a lot of room for it in his talk as well as a lot of indirect links already made – is Foucault’s concept of biopower – of power exercised by state institutions by and through and within physical bodies. The idea is an old one now – but within the context of the above, I think it’s changing in some significant ways. When technology is subject to institutional control, it’s deeply meaningful when that technology is literally part of our bodies – or so deeply enmeshed with our daily lived experience and our perceptions of the world around us that it might as well be. And when the lines between government institutional control and corporate institutional control become blurry in their turn, the traditional meaning of biopolitics is additionally up for grabs.

One of the more famous phrases in the recent spate of technology-critical writing is the title of Jaron Lanier’s You Are Not a Gadget. But more and more, that’s exactly what we are – we are our technology and our technology is us. Given that, we now need to understand how to defend our rights – property and human, users and owners, digital and physical, and all the enmeshings in between.