
I learn a lot on Tumblr. I follow a lot of really great people that post links, infographics, GIF sets, and comics covering everything from Star Trek trivia to trans* identity. I like that when I look at my dashboard, or do a cursory search of a tag, I will experience a mix of future tattoo ideas and links to PDFs of social theory. Invariably, within this eclectic mix that I’ve curated for myself, I will come across a post with notes that show multiple people claiming that the post taught them something, and so they feel obligated to reblog it so others may also know this crucial information. If you’re a regular Tumblr user you’re probably familiar with this specific kind of emphatic sharing. Sometimes it is implied by one word in all caps: “THIS!” In other instances the author is ashamed or frustrated that they didn’t know something sooner. For example, I recently reblogged a post about America’s Japanese internment camps that contained a note from another user who was angry that they were 24 when they first learned of their existence. I want to give this phenomenon a name and, in the tradition of fellow regular contributor Robin James’ recent “thinking out-loud” posts, throw a few questions out there to see if anyone has more insights on this.

Yesterday evening I asked Twitter if anyone had a name for this performance of learning that is so intentionally public. Most of the responses ranged from widely interpretable descriptors like “activist” or “advocate” to phrases that couldn’t be “repeated in polite company.” Several people used the term “social justice warrior” and noted that it was “typically not meant as a compliment.” Frequent guest contributor Jeremy Antley made up the term “water-cooler crier” (as in town crier), which I like but which doesn’t quite get at the persuasive aspect of what I keep seeing in Tumblr notes. A town crier doesn’t explicitly underscore the importance of their message the way a Tumblr user wants you to know about the significance of Rose and The Tenth Doctor’s relationship. Admittedly, I don’t think I was as clear as I should have been in my original request, and the reference to “consciousness raising” might have primed people to think more politically than was necessary. So I think, for now at least, I want to call this affective condition simply “notorious learning.”

Notorious learning is the conspicuous consumption of information. It requires admitting ignorance of an important fact so that the act of learning/consuming may be celebrated. It is always emphatic but can range from righteous anger to child-like glee. The individual instances of notorious learning can take many forms: a notorious learner can be grateful that her mind was blown by the semiotic insights of some anonymous Breaking Bad GIF set maker, or she can give a “signal boost” to an egregiously under-reported story of police brutality. In both instances the notorious learner wants not just to share the information, but to share something about the perceived scarcity of that information.

Of course with scarcity comes a kind of value. The thing learned is important, but it’s precious because the idea is so rare. Herein lies a kind of paradox: reblogging reduces the scarcity and, in theory, could bring the value down to mundanity, but you continue to share because the information is important. In this way, the notorious learner is self-annihilating. They are always reducing the theoretical number of things to be surprised about; always seeking unearthed truths and marveling at how anyone went so long without knowing that The X-Files re-created scenes of The Dick Van Dyke Show for that episode where Mulder and Scully go undercover as Rob and Laura Petrie. Seriously, did you know that?

I should stop here, before I get to my Bigger Questions, and acknowledge that I’m lumping really serious and important stuff (e.g. trans* identity, police brutality) together with literal trivia. This isn’t because I think they’re of equal importance or magnitude, nor do I think notorious learners are treating all information with the same level of gravitas or importance. I acknowledge that as a straight cis white male there is a lot more information out there that I have the privilege of not knowing or voluntarily learning. I think that’s a big part of the notorious learner’s MO, although I’d want more empirical data before I said anything more. Which brings me to my first big question…

What is so satisfying about notorious learning? Why, on such a truly astounding array of topics, do people feel so compelled to admit ignorance so that they may underscore the importance of the information? Early twentieth-century sociologist and economist Thorstein Veblen coined the term conspicuous consumption to describe how the nouveau riche flaunted their cash. In Veblen’s time it meant ostentatious furs and cars; today Mark Zuckerberg slaughters his own grass-fed meat. Perhaps, in an attention economy, we not only have to be very selective about what we spend time on but also flaunt just how much information we’re accumulating. If attention is precious and scarce, then aren’t we flaunting a kind of wealth when we show just how many things we paid attention to?

Regardless of whether one buys the econometric metaphors (I’m actually pretty suspicious of their neoliberal implications), I think we can all agree that this thing I’m calling notorious learning is performative. That is, people are deliberately drawing attention to the process of their learning, not just the importance/interestingness of the fact itself. I did a little bit of my own notorious learning when #ttw14’s anti-harassment statement was critiqued and made better immediately after we posted it to our site. I wanted to express my gratitude and excitement that the community was making the statement better, while at the same time (again, my subject position is important here) not doing that back-patting thing that straight, cis white people love to do when they learn a thing. Is notorious learning mostly a platform for acting the white savior, or is it a site for critiquing it? I suspect it has to be both, or at least the former has to happen so that the latter may occur in the open as well.

I’d appreciate some notorious learning about notorious learning here in the comments. What do you think? Is this a new phenomenon, or just more visible? Are Tumblr users being conspicuous in their data consumption or are they just acting the enlightened individual?

 David is on Twitter and Tumblr.

#AmtrakResidency

Full Name*: David A. Banks

City*: Troy

State*: New York

email*: david.adam.banks at gmail dot com

twitter*: @da_banks

Facebook URL: https://www.facebook.com/DABanks

Instagram handle: thoriumdirigible

Why do you want an #AmtrakResidency?* [In 1,000 characters or less (including spaces)]

I want to be a part of the #AmtrakResidency insomuch as this is one of only a handful of options left to me as an author. It’s good to see that someone is willing to give away a thousand-dollar ticket for a couple of tweets and a blog post. I want my workspace to be funded by a tax structure written by corporations combined with ticket sales from working stiffs going back and forth on the Northeast Corridor. Food is included on this trip, right?

This isn’t to say that the Rust Belt house I live in right now isn’t implicated in similar forms of oppression. I just like how easy it is, while onboard a massive machine, to draw such a clear and straight line between capitalist exploitation and my own creative flourishing. Here I am, on a diesel-powered train thinking deep thoughts while thousands die in wars for foreign oil. Too on the nose? Let’s just stick to the fact that I will be helping you make Amtrak the Carnival Cruise Line for hipsters.

How would this residency benefit your writing?* [In 1,000 characters or less (including spaces)]

This is an important question so I’m glad you asked it. However, I’m unsure whether you’re asking this because you honestly do not know how this could benefit writers, or because you’re screening for applicants that, in the words of your Social Media Director Julia Quinn, understand that this arrangement should be “mutually beneficial.” I think it’s great that you’re offering yet another opportunity for millennials to do what we’ve been doing since 2008: run a social media campaign for free in hopes that you’ll see fit to pay for the exact same work a year from now. I am really looking forward to advertising your services to the audience that I’ve meticulously and lovingly cultivated for four years. This is definitely how I want to spend my social capital. That being said, I think it’s important to me, as a writer, to embed my practice within intersecting structures of power, so I like the idea of writing about gentrification while I gentrify an entire continental transportation system.

Upload a sample of your work* [Uploaded]

Supporting link: http://www.davidabanks.org/2014/03/11/a-sample-of-my-work/

Official Terms* [Oh, I read them. https://twitter.com/sarahkendzior/status/442678025352536067]

Opt in to emails to receive sale and product information [Opt in]

*Required fields

Today’s post is a reply to Robin James’ post, which raises questions stemming from the observations made in Jodi Dean’s recent post on “What Comes After Real Subsumption?”

 

Image c/o Aldor

This might be a tad “incompatible” with the existing discussion because, while the discussion so far has focused mainly on a Marxist approach to a series of philosophical questions, I want to take an anarchist approach to an anthropological re-reading of the initial question: “what comes after real subsumption?” That is, I think some of the subsequent questions might be more answerable if we interrogate their anthropological facets. Particularly, I want to focus on what is considered feedstock for production and what is identified as the act of consumption which, by definition, must yield a waste that capitalists sort through in an effort to extract more surplus value. Pigs in shit, as it were.

It’s worth noting at the outset, as a sort of reflexive recursion of my own hypothesis, that this post performs what it will try to describe. All the way from the W.W. Norton Company tangentially benefiting from my (gosh, now over three years of) writing for Cyborgology to the fact that as a cis white male I took it upon myself to make my own post instead of contributing to a comment thread.

With that being said, here’s my hypothesis: at all the points at which we say something to the effect of “x produces y” we are also saying that there is a ready and willing consumer base willing to pay the exchange rate for the produced y. The bringing about of real, and perhaps later absolute, subsumption itself as a historical category depends on us seeing ourselves as primarily a particular kind of producer and consumer. Capitalists must produce consumers as well as consumables. Whether it be pearl-clutching Nation authors looking to get a cover story on the backs of black women using Twitter, or men’s rights activists looking for a readily available (perhaps even, in the Heideggerian sense, standing-reserve) victim served up in a mainstream news source as the not-quite-innocent-enough woman who can be blamed for her own assault; there needs to be someone ready to ingest new levels of toxicity.

To borrow neoliberal economists’ terminology, I think we need to pay as much attention to the demand side as we’re paying to the supply side. In doing so, I think we’re forced to deal with the heart of capitalism’s resiliency: its ability to refer to itself to find new arenas of expansion and as a justification for its existence. In other words, capitalism produces instances in which its most destructive capacities are seen as natural and thus insurmountable, and so we feel as though the only way to make real and sustaining change is to act “within” the system and wield these natural forces. Why else would so many people looking to make the world more “sustainable” seek out jobs where they actually increase the efficiency of systems so as to increase the pace of resource extraction? Why else would this process of reclaiming the refuse of previous destructive capitalist endeavors be seen as sustainable at all? Why else would big data seem worth it in the first place?

This is a leap, but hear me out on this: absolute subsumption, then, is subject to a kind of Gramscian hegemony contract. That is, we agree to call ourselves consumers and producers in just the right instances, because it benefits us in the moment. For example, agreeing that something is natural and thus inevitable can also give us a pass when we don’t want to take responsibility for something we have done. This is the exact move that is made to dismiss rape culture and blame the victim. We agree to call ourselves consumers when we buy a computer or even when we use social media, even though we are producing more things than we are consuming. We are making new ideas (blog posts!) and making new connections that do actual productive work. They also impose a kind of control over other people. The contract I make with the hegemonic discourse harms other people while benefiting me. The contract is highly contingent, but the cumulative effect is the making compatible of toxic assets with yet-unidentified consumers. Let us also not forget that the act of consumption is both a destruction (that produces wastes) and an absorption. We become a little toxic in the act of consuming. I suspect that is where resiliency, in part, comes from. It’s a desensitizing as much as it is an inoculation.

I suspect that the answer to James’ second question (“What about human capital that’s ‘such a waste that it’s no longer worth trying to do anything with it’? How are people produced as ‘the non-capitalizable remainder that lacks potential’? Is this remainder not the proletariat, but blackness?”) sits somewhere in the complicity with hegemony and the willingness to call ourselves consumers when what we are primarily doing is producing. We are producing the non-capitalizable remainder. We are producing toxics that, ultimately, are nothing more than an escrow account for future capitalist production.

Now, as I (admittedly, perhaps too often) like to do, I’ll quote David Graeber[1] as a conclusion and a kind of prescription:

If we wish to continue applying terms borrowed from political economy … [i.e. consumption] it might be more enlightening to start looking at what we have been calling the “consumption” sphere rather as the sphere of the production of human beings, not just as labor power but as persons, internalized nexes of meaningful social relations, because after all, this is what social life is actually about, the production of people (of which the production of things is simply a subordinate moment), and it is only the very unusual organization of capitalism that makes it even possible for us to imagine otherwise?

 


[1] Graeber, David. 2011. “Consumption.” Current Anthropology 52 (4): 489–511.

 

An entire train full of crude oil slides and tumbles 11 miles downhill. Image from NBCNews

One morning, in the seventh grade, my math class was told to prepare for a surprise standardized writing test. A writing test with no warning in math class wasn’t the weirdest thing we had been asked to do. Jeb Bush was our governor and Florida was a proving ground for what would later be called “No Child Left Behind.” Tests were common, and trying out new kinds of tests was even more common. You never knew if the test you were taking would change your life or never be seen again. This one was a little bit of both. The prompt was really strange, although I don’t remember what it was. As a life-long test taker (my first standardized test was in the 4th grade) you become a sort of connoisseur of writing prompts. This one didn’t seem to test my expository or creative writing skills. It just felt like a demand to write, and so we did. We wrote for about half an hour.

Almost as soon as our teacher told us to put our pencils down, an assistant principal came into the room with a stack of tests from other classrooms. She looked hurried and the security guard behind her with the metal detector wand looked impatient. As she collected the tests from our teacher, the assistant principal told us to stand up and wait to be wanded by the security officer. One by one, with arms outstretched, cold plastic and colder eyes brushed our eleven-year-old bodies. When the security guard came to me I raised my arms, looked at the wand and said earnestly: “I didn’t know we had one of those!” He scowled, passed the squawking device up and down the length of my body, and told me to sit down. By the end of the period we were told that the morning’s handwriting samples had positively identified the student who claimed to have a bomb. There was no bomb, but that probably didn’t save that kid from juvie.

I am still surprised that they were able to go through the hundreds of essays that fast. Homeland Security hadn’t been invented yet, so perhaps the FBI had helped. Who knows? Before college I had gone through my fair share of bomb threat drills and memorized the color-coded alert systems printed on the back of the teachers’ IDs. You never wanted a black alert. Yellow was nice though; it usually meant you got to watch Remember the Titans until the lockdown was over.

Last week, Nona Willis Aronowitz wrote a piece and did several interviews about the rise of “active shooter drills” in suburban schools. These drills are meant to help law enforcement and school administrators prepare for the kind of disaster that was once unthinkable but now seems more like an eventuality. Aronowitz’s quotes are chilling not because they demonstrate just how gory a school shooting (even a simulated one) can be, but because student participation in these drills fits so nicely into the long list of activities good students are expected to take part in:

Kiera Loveless, 17, who has done eight drills before, “thought it would be fun at first. Now I wouldn’t say fun exactly—it’s scary. But a good experience.”

Loveless signed up because she thought it would look good on college applications. The first time she participated, she was “terrified.” She’d only heard gunshots on television. “I didn’t even really have to pretend. I kept having to remind myself ‘this isn’t real, this isn’t real.’”

Co-hosts Molly and John Knefel discussed Aronowitz’s reporting on last Wednesday’s episode of Radio Dispatch.  They rightly pointed out that these drills contribute to a normalization of school shootings, and could do more harm than good. Molly makes the excellent point that while “schools that are already militarized” will probably have to bear even more shooter drills and increased militarization, suburban schools will begin to treat mass shootings like a tornado or some other unstoppable and unpredictable weather event. John agrees: “It’s a very depressing signal of what schools are going to continue to look like.”

Treating human action like the weather—naturalizing it so as to negate, obscure or excuse individuals’ very conscious actions—is nothing new. Karl Marx noted that the assignment of an exchange value to goods and the ebb and flow of commodities markets relied on a belief that these were natural phenomena. The belief that the price of a diamond is just as natural and indisputable as the crystal-forming properties of carbon is essential to capitalism. That is why faith in markets and in the future of this thing we call an economy is so important. If enough people agree that something isn’t worth the asking price, that price will fall. We like to think of that as the “natural” function of markets: something that will happen unless something like the government actively intervenes to “artificially” set prices. The truth of the matter is that all prices are a function of governments’ enforcement of contracts and the active and sustained collusion of corporations with one another and other planetary governing bodies.

 

I bring up Marx because, as John Knefel says, school security is probably “a great business to be in right now.” And as Molly notes, “you can never find more money to invest in school lunch, or raising the eligibility for free and reduced lunch, we have to make sacrifices … but there’s always money to run an active shooting drill.”

Indeed, there are very concrete ways government agencies can ensure that there will always be money to arm a guard but not feed a child. As much as we like to say “you can’t put a price on human life,” corporations and governments do it all the time. It’s actually essential to the way the government regulates industries and justifies expenditures.

Unsurprisingly, despite what the U.S. Constitution says, we are not all equal in the eyes of our government. The same person’s life isn’t even the same price from agency to agency. A 2011 New York Times article describes how the government’s “value of statistical life” indexes factor into regulating industry:

The Environmental Protection Agency set the value of a life at $9.1 million last year in proposing tighter restrictions on air pollution. The agency used numbers as low as $6.8 million during the George W. Bush administration.

The Food and Drug Administration declared that life was worth $7.9 million last year, up from $5 million in 2008, in proposing warning labels on cigarette packages featuring images of cancer victims.

The Transportation Department has used values of around $6 million to justify recent decisions to impose regulations that the Bush administration had rejected as too expensive, like requiring stronger roofs on cars.

The higher the price of a human life, the more money a government can justifiably spend or demand that a corporation spend on saving that life. That must make you wonder, then: if the EPA can value a human life at $9.1 million, how much does the Department of Defense value your life? It depends on whether you’re the one used to justify the fighting or the one actually doing the killing. If you’re killed in active service, your family typically gets $600,000. If you’re one of the millions of Americans being “defended” by the armed services, your life is virtually invaluable and thus justifies the most expensive military the world has ever seen. We see a similar disparity in how we fund schools: as children who need nutritious food, their lives are cheap; as potential shooting victims, their lives are invaluable.

While the EPA still pegs human life at around $9.1 million, there are plenty of instances where that dollar figure gives way to much more unforgiving formulas: for example, the exemptions to the Clean Water Act given to companies that frack for natural gas. Here the calculus is all about who could afford to scientifically prove that ground water is tainted and then fund a legal team to sue for the cost of piping in clean water. Even then, the money doesn’t remediate the damage or even stop the hazardous drilling. It only keeps that one person relatively safe from harm. The rest of us are left to defend ourselves against the dozens of loopholes and unenforced regulations that make it possible for coal ash and nuclear waste to seep into groundwater by the ton. Tucked away in the actuarial tables of high-rise office buildings and unassuming office parks are the figures companies are willing to pay when something goes wrong and it kills you. These numbers are disturbingly low. They have to be low. How else could you account for the sheer volume of last year’s industrial disasters? Here’s an abbreviated list of spills and explosions that happened just in North America:

Just like school shootings, all of these were preventable, human-made disasters that were treated like unavoidable and unpredictable accidents. But while these are undoubtedly disasters, it would be a mistake to call them accidents. Company executives recognize (unlike most of us) that any technological system will inevitably fail if it isn’t subject to routine maintenance, and even then there is a relatively predictable percentage chance that something will go wrong. The FAA’s decision to price human life at $6 million, for example, is part of the calculation that goes into the maintenance schedules of commercial aircraft. Even if the part still works, the government requires that airlines replace certain parts after so much time because they calculate it is more beneficial (cheaper) to society as a whole to replace a working part than run an increased risk of engine failure. Corporations, on the other hand, don’t calculate what is best for society; they calculate what’s best for the corporation. It would actually be against their legal fiduciary responsibilities to do anything else. But that legal requirement shouldn’t excuse the ruthless calculation.
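To make that regulatory arithmetic concrete, here is a toy sketch of the expected-cost comparison. Every input except the roughly $6 million value-of-life figure cited above is an assumption invented for illustration; the failure probability, passenger count, and replacement cost are not real FAA numbers.

```python
# A toy version of the cost-benefit logic described above.
# Only the ~$6M value-of-life figure comes from the sources cited in
# this post; every other number is an illustrative assumption.

VALUE_OF_STATISTICAL_LIFE = 6_000_000  # dollars, the ~$6M DOT figure
LIVES_AT_RISK = 150                    # hypothetical passengers per flight
P_FATAL_FAILURE = 1e-4                 # assumed chance the aging part fails fatally
REPLACEMENT_COST = 40_000              # assumed cost of swapping the part early

# Expected social cost of leaving the working-but-aging part in place:
expected_cost_of_keeping = P_FATAL_FAILURE * LIVES_AT_RISK * VALUE_OF_STATISTICAL_LIFE
# 1e-4 * 150 * 6,000,000 = $90,000, in the regulator's terms

if REPLACEMENT_COST < expected_cost_of_keeping:
    print("Mandate the replacement: cheaper for society than the risk.")
else:
    print("Keep flying the part: the risk is 'cheaper' than the fix.")
```

Notice that the same formula flips from mandating the repair to excusing the risk depending on the dollar value assigned to a life, which is exactly the lever the agencies above keep adjusting.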

A student with stage makeup preparing for an “active shooter drill.” Still from NBCNews

Companies know that trains will derail and holding tanks will leak, and those eventualities are factored into the cost of doing business. The NRA can handle the economic impact of a bad news cycle caused by a school shooting, and ExxonMobil continues to be the most profitable corporation in the world despite near-constant leaks and spills. Freedom Industries is being sued out of existence for last month’s chemical spill, but Rosebud Mining, the parent company, is doing just fine. It’s the cost of doing business. Industrial disasters are called “accidents” instead of terrorism because they are committed in the name of profit. A freight train derailment is just as calculated, deliberate, and ruthless as a homemade pipe bomb. The only difference is that industrial terrorists don’t know exactly when the bomb is going to go off and they never have the guts to be there when it does.

It’s important to remember that corporations aren’t looking to prevent disaster; they’re looking to keep the cost of disaster as low as possible. Executives have to determine whether it is cheaper to lobby Congress or invest in renovations and improvements. Sometimes it’s cheaper to just make a better system, but as Marcia Angell explains in her book The Truth About the Drug Companies, lots of corporations find it cheaper and easier to lobby Congress than to innovate in their respective industries.

Making your terrorism legal is the easy part. The hard part is introducing middle class white America to the new (immensely profitable) normal that comports with your company’s business strategy. For the NRA, that means investing millions in school security, thereby implicitly giving up on the idea that school shootings can be eliminated. It’s a way of making your business model seem as natural as the weather. The NRA doesn’t suggest arming teachers because they hope to sell guns to teachers; it’s because that sort of militarization makes gun violence the new normal. Just a few years ago white middle class people couldn’t believe that a shooting could happen in their schools. Today, a teen can put “active shooter drill participant” on their college application.

Routine matters. By routine I’m not just talking about your own day-to-day habits, but what you and everyone else considers to be normal. Not just basic social conventions (e.g. “I should wear clothes when I go out in public.”) or natural laws (e.g. “Gravity pulls things down.”) but the kind of normal we don’t like to consciously think about or dwell on. Normal is poor children starving, soldiers dying, and pipelines leaking. If corporations get their way, normal can also be weekly school shootings, exploding trains, and undrinkable tap water. Anything can be normal if it becomes routine. The sociologist Anthony Giddens likes to say, “In the enactment of routines agents sustain a sense of ontological security.” That is, it doesn’t really matter if it’s an endless war on terror, drugs, or poverty; people can accept new normals so long as their day-to-day lives are predictable, so long as they can recognize some semblance of cause and effect. This is a dangerously useful observation. It should be no surprise, then, that Giddens was an advisor to Tony Blair’s government leading up to the Iraq War, and that the Joker uses this very same line of thinking to cause mass chaos: “Nobody panics if everything goes ‘according to plan,’” the clown says to the lawyer, “even if the plan is horrifying.”

David is on Twitter and Tumblr.


When you search for Foucault on AcademicTorrents

The Social Sciences (despite the widely held notion that we’re all a bunch of Marxists that will turn your children into pinkos) are incredibly conservative when it comes to their own affairs. Our conferences are pretty traditional, we took a really long time getting around to noticing that the Internet was A Thing, and if you take a Social Theory 101 course you’re more likely to read Durkheim than bell hooks. You can blame it on tenure, fear of action, or simple lack of imagination, but the analysis remains the same: rarely do our articles’ prescriptive conclusions make it into our day-to-day practice. When I read that a couple of students from the University of Massachusetts had launched a torrent site to share data I knew it wouldn’t be social scientists. Not necessarily because we don’t have the expertise (more on that later), but because we so rarely seem to have the will to act. It’s always the engineers and the natural scientists that come up with faster, cheaper, and more egalitarian methods of sharing data and promoting their work. What gives?

First it’s worth recognizing an old STS saw from 1975 called “The Engineer as Social Radical.” J.C. Mathes and Donald H. Gray, who are engineers themselves, noticed that their peers usually thought of themselves as conservative: “the typical engineer,” they reflected, “perceives himself [sic] as a social and political conservative — and indeed society thinks of him [sic] as such — that is, as manning [sic] the barricades against movements for social change such as the New Deal or the Great Society.” Individual engineers’ conservative politics seemed to run counter to the effect their work was having in society. They were identifying something akin to Taylorism or Jacques Ellul’s technique: engineers’ focus on efficiency and optimization has deeply social implications. They conclude, in part, “The engineer, especially, must integrate his [sic] radical technological self with his [sic] conservative emotional self. He [sic] cannot continue to promulgate technologies requiring regional electric power grids, while also continuing the campaign initiated by Senator Goldwater against centralized bureaucratic controls.” In the driest, most male-pronoun-intensive language possible, Mathes and Gray are identifying a really important aspect of engineering practice: what they do, by its very nature, has effects out in the world.

The same could obviously be said for social scientists, albeit with a very wide range of results. From Marxist revolutions to Anthony Giddens working for the Tony Blair administration, social science does things out in the world. Sandra Harding, bell hooks, Simone de Beauvoir, and W.E.B. Du Bois all changed the world through their writing, but in all of these cases it took lots of people being convinced by arguments made in books and at podiums to actually make that change happen. Social science does things, and it would be preposterous to argue otherwise, but why do we seem to work within the most staid institutions?

Put another way, why aren’t anthropologists with a passing understanding of the Internet making stuff like AcademicTorrents? AcademicTorrents is a decentralized file sharing system meant to host large sets of data that would be difficult to share via email or other common “in the cloud” services. The fact that it’s decentralized also means the data is much safer from being lost, because there are so many copies. According to Motherboard and Torrentfreak the creators don’t want this to be a tool for illegally sharing copyrighted journal articles, but they’d be happy to see it as a tool for open access distribution. I’m sure all of the above will happen. So, again, why don’t social scientists come up with this stuff? Even more importantly, why don’t they utilize alternative publishing methods that are already out there?
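The safety-in-copies point is just probability. Here is a minimal sketch of it, assuming (unrealistically) that hosts fail independently and using a loss rate that is purely illustrative:

```python
# Why seeding a dataset on many peers protects it: the data is lost
# only if *every* copy disappears. The 5% annual loss rate is an
# invented, illustrative number, and independent failures are assumed.

def p_data_lost(p_host_loss: float, n_copies: int) -> float:
    """Probability that all n independent copies disappear."""
    return p_host_loss ** n_copies

print(p_data_lost(0.05, 1))   # one departmental server: 0.05
print(p_data_lost(0.05, 10))  # ten torrent peers: ~9.8e-14
```

Under those assumptions, ten unreliable peers beat one careful archive by more than eleven orders of magnitude, which is the whole preservation argument for torrents.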

Consider, for example, the number of Open Access journals, in different subjects, listed in the Directory of Open Access Journals:

First, notice that while the social sciences definitely have the most OA journals, all of the social sciences, from sports psychology to social movements theory, are under a single heading. I don’t know the history of the DOAJ so I can’t say why “technology” and “technology and engineering” or “Medicine” and “Medicine (General)” get separate categories while anthropology and alcohol and drug abuse counseling share a single category. What is really telling, however, is what happens when we put all of these journals into similarly broad Liberal Arts categories:

I know that it’s dangerous to generalize across cellular biology and fluid dynamics, but if it can be said that they have one thing in common, it’s that they are fields where practitioners are interested in learning more about the world so that we can do useful work with it. Whether that’s curing leukemia or making my iPhone lighter, the acts themselves shouldn’t be counted as equally important or moral, but they are all operating under someone’s flawed interpretation of “useful work.”

As I already mentioned, social scientists do productive work, but not nearly in the same way. The words I put to paper and screen are not doing work the same way nanosilver is keeping germs off of surfaces but also probably destroying nearby wetlands. Social scientists and humanities scholars, among other things, work to change common sense. They take positions, make convincing arguments, collect data, present findings, and speak publicly about what is wrong and what must be fostered so as to spur action and create a better society. Unlike the consciously conservative engineer that unwittingly acts as a social radical, social scientists are much more conscious about how their work affects society writ large. Or… maybe not.

There are some obvious examples of social scientists using their research to better the means by which they do that research. The first example to come to mind is the “13+ Club Index.” [PDF] Several women who had experienced tenure discrimination, but were also hard pressed to empirically demonstrate this discrimination using existing metrics, developed a way to show consistent patterns of non-promotion of women within multiple departments. Another example is anthropologist Chris Kelty’s involvement in greatly expanding the University of California system’s open access policy. While not perfect, it requires academics to opt out of making their findings open access rather than opting in.

But how do these stack up against the National Institutes of Health requiring that every research project they fund be open to the public? Or the highly successful and often-cited family of PLoS journals? Cultural Anthropology just went open access, but most of the prominent, top-tier journals are still privately owned. Is it possible that just as engineers think they’re conservative but are ultimately radical, social scientists think they’re radicals but are actually very conservative?

Due to the nature of the work, it’s a lot harder to make a one-to-one comparison between what Mathes and Gray say about engineers and what I suspect might be happening to social scientists. At present, given what little data I have found, I’m in danger of making a tautological argument: the lackluster track record of innovative department governance schemes or publishing methods is both the evidence and the reason for why I think social scientists are conservative. Great ideas can’t change minds if they’re caught behind thousand-dollar paywalls, and radically-minded people might not want to work within a centuries-old hierarchical department structure.

But perhaps this isn’t such a bad conclusion after all. Maybe it’s more recursive than it is tautological. I don’t think it’s any coincidence that Chris Kelty, the anthropologist behind the UC system’s open access policy, also writes on Free Software communities. Kelty’s book Two Bits proposes that these communities are best described as “recursive publics.” He defines a recursive public as “a public that is vitally concerned with the material and practical maintenance and modification of the technical, legal, practical, and conceptual means of its own existence as a public…” Recursive publics derive their legitimacy through the efficacy and vibrancy of the community they create.

Imagine if a sociologist were to claim not only that low pay ratios (the difference between the highest and lowest paid workers) are better for productivity, on the strength of rigorous research, but that they have implemented them in their own university. The increased pay for staff members and the decreased pay for administration may even be helpful in getting the research done and published in the first place.

Social scientists (and their departments) appear to be less recursive in their work because their specialties are governed much more strictly. Or, more precisely, innovations like AcademicTorrents are treated as a “market” option whereas department pay and governance structures are largely left up to administrators. Social scientists have a much more difficult road ahead of them than engineers. While bringing a product to market can be difficult, getting legislation passed can be just as hard if not harder. Computer scientists can set up a torrent site to share data, but a social scientist cannot simply declare that people will be paid differently. The creation of a new technology is rarely legislated in the same way human relations are legislated. Technology can impose its own sort of informal legislation that creates or affords change outside of traditional law-making apparatuses. One could even classify the 13+ Club as a technology more than a social science theory, if we were willing to accept methods and indexes as a kind of technology.

As for why social scientists are always slow to pick up on important trends like Open Access journals or torrent data distribution systems, I can only imagine it has something to do with an embattled sensibility among social science departments that are afraid to take risks with their tenuous funding. Who can afford to fight a copyright battle with Elsevier when they’re losing their last staff member? It’s a sullen and admittedly boring reason for such a fascinating problem, but there you have it. We can, however, rejoice in the near-limitless possibilities that are just on the horizon. Imagine what could happen if fed-up adjuncts and freshly-minted PhDs started living their “undoable” dissertations.

David is on Twitter and Tumblr.

#review features links to, summaries of, and discussions around academic journal articles and books. This week, I’m reviewing:

Sayes, E. M. “Actor-Network Theory and Methodology: Just What Does It Mean to Say That Nonhumans Have Agency?” Social Studies of Science (2014) Vol. 44(1) 134–149. doi:10.1177/0306312713511867. [Paywalled PDF]

Update: The author, E.M. Sayes has responded to the review in a comment below.

Image from You as a Machine

A few weeks ago Jathan Sadowski tweeted a link to Sayes’ article and described it as “One of the best, clearest, most explanatory articles I’ve read on Actor-Network Theory, method, & nonhuman agency.” I totally agree. This is most definitely, in spite of the cited material’s own agentic power to obfuscate, one of the clearest descriptions of what Actor-Network Theory (hereafter ANT) is meant to do and what it is useful for. It’s important to say up front, when reviewing an article that’s mostly literature review, that Sayes isn’t attempting to summarize all of Actor-Network Theory; he is focused solely on what ANT has to say about nonhuman agents. It doesn’t rigorously explore semiotics or the binaries that make up modernity. For a fuller picture of ANT (if one were making a syllabus with a week of “What is ANT?”) I suggest pairing this article with John Law’s chapter in The New Blackwell Companion to Social Theory (2009), entitled “Actor Network Theory and Material Semiotics.” Between the two you’d get a nice overview of both of ANT’s hallmark abilities: articulating the character of nonhuman agency and the semiotics of modern binaries like nature/culture and technology/sociality.

The most astounding thing about this article is the premise under which Sayes claims his review is necessary: “understanding this methodology is absolutely central to understanding the claims that nonhumans are actors and have agency,” and that critiques up to this point “must be understood as weak methodological assertions.” The aim of the article, then, is to help everyone “understand more precisely what exactly it is that they are refusing or affirming.” To his credit, Sayes thinks ANT critics and proponents don’t quite “get” nonhuman agency and acknowledges that his “narrative is ultimately partisan.” But one can’t shake the feeling that the author believes that if he just speaks slowly and loudly everyone will obviously love Bruno Latour.

Sayes also acknowledges that “the proponents of ANT seem reticent to give either a simple or precise definition” of a nonhuman. Given that “the term ‘non-human’ functions as an umbrella term that is used to encompass a wide but ultimately limited range of entities,” it is easier to list “what is excluded from the circumference of the term[, which] are humans, entities that are entirely symbolic in nature (Latour, 1993: 101), entities that are supernatural (Latour, 1992), and entities that exist at such a scale that they are literally composed of humans and nonhumans (Latour, 1993: 121, 1998).” Sayes also provides this clarifying point in a footnote: “All these entities are nonetheless actors, actants, monads, and entelechies, while only some are hybrids and quasi-objects. The relevant point here is that the term nonhuman does not seem to designate all that is not human and thus that the terms human and nonhuman do not exhaust all entities that exist (Latour, 1988c: 305, 1998; Law, 2009: 141).”

Instead of identifying what nonhumans are, ANT encourages us to consider what nonhumans do. For Sayes it is what things in the world contribute to society that matters most. In other words, instead of focusing on what kinds of things (e.g. scallops, classified documents, unbuilt French transportation systems) qualify as “nonhuman” for the purposes of analysis, we should just ask what role a thing plays in the scenario we’re trying to understand. Sayes classifies the contributions of nonhumans into four categories:

“I first consider nonhumans as a condition for the possibility of human society (Nonhumans I). I will then consider nonhumans as acting in three further senses: as mediators (Nonhumans II), as members of moral and political associations (Nonhumans III), and as gatherings of actors of different temporal and spatial orders (Nonhumans IV).”

Nonhumans I are everything that separates human sociality from that of nonhuman animals, although it is unclear if nonhuman animals themselves would fall into this category. Nonhumans I are what make social relations “durable” and visible outside of the momentary interactions of any number of humans. They stabilize relationships and direct human action. A well-worn hiking trail is both a metaphor for and an example of Nonhumans I. It directs people where to go in place of a parks official constantly deciding where individuals should go or leading them hirself.

Nonhumans II are mediators. They are not simple substitutions for human actors but entities with their own sets of agentic power. Unlike Nonhumans I, Nonhumans II can be unstable or introduce unstable elements into a network. Sayes writes:

“Nonhumans that enter into the human collective are endowed with a certain set of competencies by the network that they have lined up behind them. At the same time, they demand a certain set of competencies by the actors they line up, in turn. Nonhumans, in this rendition, are both changed by their circulation and change the collective through their circulation.”

Nonhumans III are a part of moral and political networks. Sayes notes that this is not to argue that Nonhumans III have their own morals or will, “[r]ather, the argument only entails that moral choice and the political sphere are not subject solely to the rational restraints of logic, the disciplinary logic of norms, or potential legal sanction – a claim that would seem to be largely uncontroversial.” Sayes does not offer a working definition of either “political” or “moral” and neither does the Latour article he cites. This is especially confusing given that he goes on to say that “attempting to discuss morality and politics from within the framework of ANT may risk fundamentally redefining what it means to speak of both – given that they evoke an element of purposeful action.”

Nonhumans IV, simply stated, means that no one (human or nonhuman) acts in a vacuum and our own actions are only possible through nonhumans. There are no actors, and there are no networks; there are only Actor-Networks. Therefore, at any given moment, “action is always ‘interaction,’” which is to say that it is shared with variable actors. We are influenced by people long gone because we value their belongings (This was George Washington’s chamber pot!) or because what they built still plagues us regardless of significant changes in collective will or desire (Robert Moses is still messing up your commute).

What then, you might ask, is left of agency if it is to encapsulate Nonhumans I-IV? ANT’s proponents want us to think less of causal or willful agency and to “pluralize” agency by “decoupling” it from “intentionality, subjectivity, and free-will.” Sayes concludes that this decoupled, pluralized form of agency is in fact a “minimal conception of agency. It is minimal because it catches every entity that makes or promotes a difference in another entity or in a network.”

So far, Sayes has done a thorough job of describing the A and the N but hasn’t said much about the T. Theory, although not given the full consideration Law gives it in the chapter I mentioned at the beginning, is given due consideration insomuch as the article rehashes 21st-century Latour’s revisionist auto-history of 20th-century Latour. In 1998 Latour wrote “On Actor-Network Theory: A few clarifications,” which seemed to mark a decline in his publishing rate and began a sort of collating phase where all of his work was to be synthesized and turned into one coherent project with a singular trajectory that smashed the modern binaries and let us fully realize our always, already post-modern selves. The problem, and this is freely admitted in both Sayes and Law, is that Actor-Network Theory is not a theory. The generous description given by proponents is that ANT is a methodology, an approach, and a set of sensitizing concepts. It is a “tool to help explicate, amplify and link,” not, according to Sayes, “a detailed series of rigorous, cohesive, general, and substantive claims concerning the world.”

Using and evaluating ANT as primarily a methodological framework for analysis, and not a theory of nonhuman agency, allows its practitioners to assign radically varying levels and intensities of agency that are contingent upon any given instance. Whether it is explaining the outcome of a historical event or the alignment of scientific research with industrial capabilities in situ, ANT can give the researcher a level of specificity and resolution that other methodologies lack. Sayes concludes that this interpretation of ANT absolves it of most of its criticisms [edit: see the author’s comment on this below.]:

“An interpretation of ANT that places an appropriate emphasis on the primacy of methodology goes a significant way to minimizing or deflecting some of the most important criticisms that have been made of the position over the past three decades. However, such an interpretation does raise a different set of issues with respect to the claims of the position and its conception of nonhumans.”

The remaining problems come down to ones of generalizability: “the explicit imperatives of the perspective render it constitutively incapable of providing a general account of how humans, nonhumans, and their associations may have changed over time and might vary across space.”

The inability to compare and provide general accounts of situations is precisely the fatal flaw in Sayes’ article and ANT in general. Sayes’ contention that his “clarification” sufficiently renders ANT absolved of its major criticisms doesn’t hold water, specifically because he ignores the vast majority of the best criticisms of Actor-Network Theory. This is not surprising, given how little time he spends on these criticisms: he cites Latour more times than all of the non-ANT practitioners in his bibliography combined. As a literature review and a piece of synthesis the article succeeds. It is a great summary of the main points of Actor-Network Theory. But the promise that it will help readers “understand more precisely what exactly it is that they are refusing or affirming” is a disingenuous one.

I have written on ANT before on this blog and most of the major criticisms I catalog are not mentioned here. Namely, the work of Sandra Harding provides damning critiques of ANT’s inability to locate and thus expose structures of domination and control. This would be understandable, even forgivable, if this criticism did not speak directly to the importance of defining the political and moral components of nonhumans that practitioners of ANT (including Sayes) leave intentionally ambiguous. Indeed, Sayes reiterates throughout his article that part of ANT’s strength is its ability to leave the locus of action uncertain or ambiguous amongst and within Actor-Networks. Instead of acknowledging this important criticism, he uses it as an opportunity to call (largely unnamed) critics “insincere” because they “interpret claims concerning the agency of nonhumans as strict theoretical principles that provide a complete account of all nonhumans equally – let alone of nonhumans and humans together.”

At the 2013 American Anthropological Association meeting Kim Fortun gave a thorough and biting indictment of the Latourian project during the “Ontological Turns” panel. Obviously Sayes could not have included and responded to these remarks since they were given while his article was probably in the final stages of editing, but I want to conclude with them anyway because they deserve close attention and, while original, speak to a long-standing criticism that ANT is ultimately a politically and morally conservative methodology. Additionally, given that Sayes ignored similar arguments, there doesn’t seem to be any indication that he would have confronted the latest one.

Fortun notes that the proprietary vocabulary of Latour’s work isn’t just about breaking from a metaphysically problematic past, but also about gaining efficiency through narrow control. “That’s how controlled vocabularies work” she said, “very productively, but with externalities.”

These externalities are rather alarming. Very little of Latour’s work mentions disaster, toxic chemical exposure, corporate malfeasance, or war. This is not merely a problem of content (topics that could be explained through ANT but are never used as the first example in Latourian texts) but a problem of methodology. Fortun asks, “Can, for example, Latour’s project attend to the monster that is the American Chemistry Council and their recent Essential2Life advertising campaign, which worked to cement a sense that there continues to be, in late capitalism, ‘better living through chemistry’?” She says this against a historical backdrop of people who are both disproportionately over-exposed to and over-reliant on toxic chemicals. Poor people live in homes made of cancer-causing plastics that, while giving them essential creature comforts they otherwise would not have in the short term, are also exposing them to long-term health problems. At precisely the historical moment when we need specificity, we get intentional ambiguity.

“Late Capitalism,” according to Fortun, “is characterized, in part, by extraordinary explanatory power, supported with extraordinary quantities of data,” but this is also bolstered by organized ignorance. Even under the surveillance state there are enormous piles of toxics and plumes of particulates that are completely unmonitored. Nothing about ANT or Latour’s latest “Inquiry into Modes of Existence” seems particularly adroit at understanding these enormous and proven deadly disasters-in-the-making.

Sayes has provided a fine and tangible text that will surely make it easier to talk about ANT. He does not accomplish this, however, in the way he claims. Instead, his article demonstrates a continuing and persistent desire to ignore criticisms about power and inequity. It does not seem to be aware of the consistent criticisms coming from feminist and disaster studies. Instead it asks us to sit down, be quiet and pay attention. We would agree if we had been reading diligently. We must have been paying attention to something else at the time.

 David is on Twitter and Tumblr.

Image by Th3 ProphetMan

I’d like to start off with an admittedly grandpa-sounding critique of a piece of technology in my house: my coffee maker’s status lights are too bright. My dad got it for my partner and me this past Christmas and we threw out the box immediately because we definitely wanna keep it, but the thing has a lighthouse attached to it. We live in a relatively small (and very old) place and our bedroom is a small room right off the kitchen. The first night we had the coffee maker I thought we had forgotten to turn off the TV. We don’t really need alarm clocks anymore either, because when it finishes brewing it beeps like a smoke detector. Again, we love the coffee maker (Dad, seriously, we love it), but sometimes it feels like wearing a shoe that was designed for someone with six toes.

As anyone that has seen a Gary Hustwit documentary can tell you, design is super important. Not just for things worthy of a Jony Ive industrial design-gasm (climax at 1:17) but for all sorts of stuff. Even mundane things like coffee makers. I did a Google image search of our coffee maker, and when it isn’t floating in an ethereal white void it’s hanging out in some really swank kitchens:

[Product photos: the Cuisinart Coffee on Demand 12-cup programmable coffee maker, staged in upscale kitchens]

What is that? Granite? Rounded edges? Very sensible. Turns out our coffee maker wasn’t made for us. This coffee maker was thrust into the world with one singular mission: to take up a post in a suburban ranch house where it must scream its status across a great room to the very bottom of the stairs that lead up to the master bedroom. That’s what the focus group told it to do.

You can almost imagine the Land’s End couple sitting in the plain white room describing their daily coffee “ritual” and laughing nervously when one of them brings up that time last week when it was his turn to get the coffee maker ready in the morning but he didn’t remember if he had set it to brew at 7AM so I had to go down to check it and the floor was cold and you know how much I hate it when my feet get cold and… Why yes, it would be nice if I could tell if the timer was set from across the room.

If the brightness of my coffee maker is an indicator light for the latent, smoldering anger of young, suburban middle class couples then I expect the divorce rate to rise considerably in the next 5 years.

My coffee maker is, admittedly, a problem with extremely low consequences and easy solutions. I will grow used to the bright light or just put a piece of tape over it. It’s not even worth exchanging (if we had kept the box) for one of the dozens of other drip coffee makers that don’t have features associated with bigger homes. If you were to make a list of “Things that Impact Coffee Maker Design” I doubt house size would be high on that list, and yet here is a case where it obviously seems to have impacted this appliance. Nice coffee makers are made for big homes because people that can afford big homes also have nice coffee makers.

I decided to share this Parable of the Coffee Maker because I think it is mundane enough to be irrefutable. When something isn’t quite designed for your lifestyle you experience it as “I love this thing but I don’t understand why it does X, Y, and Z.” It would seem to follow, then, that incompatibilities between designed object and user would become increasingly obvious as any given user drifts further away from the intended user of the designed object. But that is, generally, not the case. Design something that’s just a little off, and it’s an itch you can’t scratch. Design entire product categories with only specific people in mind and it’s difficult to imagine the material world any other way.

While I can easily describe the problem I have with my coffee maker, bigger or more pervasive incompatibilities can paradoxically be harder to detect and depict: while it’s easy to imagine a coffee pot with a dimmer light, it’s hard to think of a feminist cell phone or an anti-racist social media platform. While you could make a passable argument that building products with big houses in mind reinforces suburban sprawl, the Parable of the Coffee Maker does not do a good job of portraying the way product design is shot through with imperialist white-supremacist capitalist heteronormative patriarchy.

If you want a good example of patriarchal product design, look no further than the latest crop of smartphones. On a recent trip to Turkey to study the Gezi Park protests, sociologist Zeynep Tufekci struggled with something she had noticed for a while: “good smartphones are designed for male hands.” She couldn’t use her phone one-handed, which meant that when tear gas was shot into the crowd she was unable to take a decent photo to document the event. She goes on to write:

As a woman, I’ve slowly been written out of the phone world and the phone market. That extra “.2″ inches of screen size on each upgrade simply means that I can no longer do what I enviously observe men do every day: Check messages one-handed while carrying groceries or a bag; type a quick note while on a moving bus or a train where I have to hold on not to fall.

It doesn’t stop with size. What’s preinstalled on the phone is also indicative of who designed it. There’s no good reason why most smartphones come with a pre-installed “stocks” app but no period tracking app, even though many more people experience periods than own stocks. And if you download one of the more popular period tracking apps you’ll have to deal with more awkward euphemisms for periods and intercourse than in high school sex ed.

It isn’t news that Silicon Valley has a diversity problem, but how that homogeneity affects its products is only just being recognized. Most of the time, defenders of the faith will mistake (perhaps intentionally) the structural problem of patriarchy or racism for the singular problem I described in the Parable of the Coffee Maker. You can see it on full display in the comments of Zeynep’s piece mentioned above. Since it’s easier to reveal and dismantle something once you’ve named it, let’s call the phenomenon I’m about to describe “The Design Sir.” Tom Scocca’s recent essay on smarm captures a facet of the Design Sir:

If people really wanted a better world—what you might foolishly regard as a better world—they would have it already. So what if you signed up to use Facebook as a social network, and Facebook changed the terms of service to reverse your privacy settings and mine your data? So what if you would rather see poor people housed than billionaires’ investment apartments blotting out the sun? Some people have gone ahead and made the reality they wanted. Immense fortunes have bloomed in Silicon Valley on the most ephemeral and stupid windborne seeds of concepts, friends funding friends, apps copying apps, and the winners proclaiming themselves the elite of the newest of meritocracies. What’s wrong with you, that you didn’t get a piece of it?

Or, put more simply, smarm “says ‘Don’t Be Evil,’ rather than making sure it does not do evil.” But the Design Sir doesn’t gravitate toward any ol’ “stupid windborne seeds of concepts.” The Design Sir wants to build devices that control and manipulate. Nowhere are the Design Sir’s motivations more transparent than in a concept demo. Free from the restraints of what is technically possible or what anyone else would be willing to buy, the Design Sir is free to build the dystopian future of his dreams:

[Embedded video: the concept demo discussed below]

I really need you to watch that video. If you’re on a train or something and don’t have headphones just play it on mute. You’ll get the idea. Done? Okay let’s keep going.

This particular concept demo (thanks to Kate Crawford and Nathan Jurgenson for tweeting about it last Saturday) is particularly telling because you get to see the world through the Design Sir’s eyes. As Kate Crawford (@katecrawford) tweeted, “What a perfect, affectless vision of a future where rich white dudes prey on the unsuspecting via devices that act like their mommies.” Design Sirs seek to augment the parts of reality that are a mystery to them: What do girls like? How do I clothe myself? The Design Sir’s smarm is bolstered by the false assumption that his work rests on unbiased research. This is what Donna Haraway calls the [PDF] “gaze from nowhere.” It bestows “the power to see and not be seen, to represent while escaping representation.”

It may seem, up to this point, that the consequences of the Design Sir’s reign stop at too-big phones and glasses for the emotionally stunted, but I think the problem goes much deeper. Langdon Winner’s (@langdonw) chapter “Political Ergonomics” in Buchanan and Margolin’s Discovering Design is a great theoretical starting point for realizing the full consequences of Design Sirs. In this chapter, Winner encourages us to think of technologies less as agnostic tools and instead see them as (his emphasis) “political artifacts that strongly condition the shared experience of power, authority, order, and freedom in modern society.” He suggests that we should think of designed objects the way Hannah Arendt described the creation of poetry or architecture. Separate from “labor” and “action,” people engaging in the production of “works” (as in works of art) are making “things that will endure.” These works are the physical instantiations of who we are and what our society values, promotes, and protects.

If we look through the portfolio of devices and services that have been deliberately and painstakingly designed, what do we find? What do these objects value, promote and protect?

Earlier this month Amanda Hess (@amandahess) wrote in the Pacific Standard:

On the Internet, women are overpowered and devalued. We don’t always think about our online lives in those terms—after all, our days are filled with work to do, friends to keep up with, Netflix to watch. But when anonymous harassers come along—saying they would like to rape us, or cut off our heads, or scrutinize our bodies in public, or shame us for our sexual habits—they serve to remind us in ways both big and small that we can’t be at ease online.

Hess illustrates that while private social media companies should keep our information away from prying eyes, “the impulse to protect our privacy can interfere with the law’s ability to protect us when we’re harassed.” Through popular pressure and focused campaigns, social media companies have begun to put serious effort into building “report abuse” features into their sites, but there doesn’t seem to be any effort toward fundamentally altering sites to account for the persistent nature of harassment online. Which isn’t to say that social media is antithetical to feminist or anti-racist values (see: sarcasm bombing), but the default reaction is to protect privacy and not necessarily people. It’s an incredibly privileged and naive conceptualization of privacy that hasn’t been forged in catcalls and rape threats.

Hess’s essay is an excellent example of the messy situation most designers find themselves in. It’s not about “striking a balance” between different “interest groups” or kowtowing to public pressure; design is –when you get right down to it– deciding what kind of world you want to live in. What is a designer to do?

Winner suggests that what’s needed is a new field of study he calls “political ergonomics.” He observes that “Many criticisms about the relation of technology and social life are actually a commentary about an unhappy fit between the two.” Political ergonomics would answer questions like, “Which kinds of hardware and software are distinctly compatible with conditions of freedom and social justice?” and “How can we design instrumental systems conducive to the practices of a good society?” The prickly part here is that Design Sirs think they’re making a better society. They believe that so fully, and with such a massive heaping of smarm, that to suggest they may be causing harm sends them into a bizarre utilitarian calculus of comparing the Arab Spring to Rebecca Watson.

Creating a new expert class that consults and works with designers might help open up the creative process and encourage deeper thinking about the diversity of human experience. But I think we can get just as far by making a concerted effort to make design and engineering teams more diverse.

That being said, I’m uncomfortable with simply prescribing “more women in IT!” as the solution. First, because I feel a little weird outlining massive, structural problems and then telling young women in high school “welp, guess you should fix that,” and second because recent research has shown that engineering programs actually make their graduates less interested in issues of social justice than when they came in. An article [PDF] in the latest issue of Science, Technology, and Human Values by Erin A. Cech of Rice University showed that even in engineering programs at all-girls schools, “over the course of their engineering education students’ beliefs in the importance of professional and ethical responsibilities, understanding the consequences of technology, understanding how people use machines, and social consciousness all decline.” This culture of disengagement, as Cech calls it, is why I don’t think the answer lies in adding more professionals. It’s the structure that needs changing.

There’s a lot of excellent work being done right now in the area of participatory design. The works of Matt Ratto, Phoebe Sengers, Sarah Wylie, and even the decidedly neoliberal Eric Von Hippel point toward new kinds of design processes that open up the workflow to non-experts. Carl Disalvo (@cdisalvo) has even coined the term adversarial design to describe objects that are designed to invoke dissensus and disagreement. Not all finished products need to satisfy all people all the time. It would be totally fine, in fact, if things were designed to make Design Sirs as deeply uncomfortable on the internet as most women are.

I opened with the Parable of the Coffee Maker, and spent a decent amount of time on it, because I thought it was just as crucial to describe what is not at stake here as what is. This isn’t about making products for her or soothing hurt feelings. This is about intervening in an institution that is making deep and fundamental changes to society without knowing or even seeing most of the people that compose it.

David is on Twitter and Tumblr.

Thanks to @smorewithface, @hautepop, and @tanyalokot for helping with some research.

 

The plot of Scream is impossible without cordless phones.

In Children of Men, Clive Owen’s character Theo is trying to secure “transfer papers” from his cousin Nigel, who seems to be one of the few rich people left in the no-one-can-make-babies-anymore dystopia. The two older men are sitting at a dining table with a younger boy, presumably Nigel’s son, who seems to be afflicted in some way. He’s pale and stares vacantly at a point just past his left hand, which is eerily still between the twitches of fingers adorned with delicate wires. He doesn’t respond to others in the room and isn’t eating the food in front of him. After Nigel yells at him to take his pill we realize that the boy isn’t really sick or particularly disturbed; he’s playing a game attached to his hand.

In the original P.D. James book of the same name (highly recommend!) that scene never takes place, but you do learn more about the last and youngest generation to be born: the Omegas. “No generation has been more studied, more examined, more agonized over, more valued or more indulged….Men and women, the Omegas are a race apart, indulged, propitiated, feared, regarded with a half-superstitious awe. In some countries, so we are told, they are ritually sacrificed in fertility rites resurrected after centuries of superficial civilization.”

As a genre, science fiction and fantasy are prime avenues for sociotechnical critique. In the moments before we know he’s playing a game, the audience sees Nigel’s son as Nigel sees him: disengaged from those around him, the thing that has monopolized his attention so incomprehensible that we are unable to understand why it is so captivating. You can just imagine the countless Dad jokes that happened in parking lots after that movie let out. (“That’s what you’re like when you’re on the GameBoy!”) Many writers consciously employ the narrative tropes and tools of the genre to engage in exactly this kind of criticism; “sociological” science fiction is not the end-all-be-all of SF&F, but it’s a major player and it has a very long history. From Heinlein and Asimov to Le Guin and Delany to Gibson and Atwood, even the most sciency stuff has usually had some form of social component. These aren’t just narrative tools; they’re thinking tools, established ways of working through the implications of something, of setting up thought experiments. When one is used to engaging in varying degrees of worldbuilding, it’s easier to take the existing world and tweak its settings to see what happens.

But as William Gibson – and many others – have pointed out, the world in which we live is now explicitly science fictional in a lot of ways. To the extent that writers in books, movies, and TV used to imagine the future, we’re living in it right now. This has implications for how writers engage in futurism; it also has implications for how writers working with contemporary settings depict all the different ways in which people use digital technology.

Strange, therefore, that so many writers are so goddamn bad at it. Like, really laughably terrible. What gives?

Of course there’s the ubiquitous “enhance” TV trope where someone stands behind another person seated at a computer and tells them to zoom in on grainy camera footage to find the killer’s face in the reflection of a coffee cup sitting on the table. That stuff always comes off as lazy writing, but it seems like there’s some willful ignorance at work too. When entire shows refuse to acknowledge the existence of smartphones or social media it looks downright bizarre.

Scene from The Killing where the police find a teenager’s Super 8 home movie. In 2012.

The Killing, a crime drama set in 2011 Seattle, is full of phone conversations… on flip phones. In one of the few instances where a smartphone is mentioned (again, this is set in Seattle), both on-screen characters agree they’ll never buy one because “I’ve seen what they do to my son.” Sometimes these phones can take what looks to be low-light, high-motion HD footage; in other instances their grainy still photos “aren’t enough to go on.”

Also, where were iPhones in Breaking Bad? Why does savewalterwhite.com look like some Geocities site from 1997?

Part of the reason movies and TV do such a poor job is that it’s difficult to portray social action that flits from Facebook, to text messages, to face-to-face contact and back again. Also, no prop designer wants to spend their limited funds and time procuring smartphones and designing fake interfaces that steer clear of trademarked corporate brands, especially if they’re only going to get a grand total of 20 seconds of screen time. Perhaps that’s why a lot of the code you see on TV is copied and pasted from a website’s source code or Wikipedia.

But groundbreaking shows like Sherlock have found ways to portray conversation without an over-the-shoulder shot of a computer screen, and to weave text messages into face-to-face conversations in a provocative way. The difficulty here, and something that Sherlock largely gets right, is that smartphones and blogs are neither deus ex machinas nor window dressing. In the aggregate these inventions change social norms and have a big impact on what characters are and are not capable of in a scene, but they don’t necessarily have to drive the plot or become non-existent. They just are.

Horror movies seem to have it uniquely bad. You can’t make a character vulnerable if friends or the police are a phone call away. Writing around cell phones can range from the simple and rote (“I don’t have signal in this abandoned mental hospital”) to more complex narratives where the technological devices themselves are implicated in the suspense (i.e. The Ring, Scream, V/H/S, or Grave Encounters). But like science fiction and fantasy, horror movies are all about “what ifs” and paying close attention to the ways human relationships are mediated, controlled, and afforded. Just like the video game in Children of Men, horror movie writers rely on the expected technological literacy of their audience. The author can play with the expected capabilities of a technology, the recognizability of the device on screen, and/or the social norms associated with the object on screen to elicit surprise, fear, or foreboding.

[Image: “Well, the resolution’s too poor. It won’t help much to enhance it.”]
The first and last time a TV show understood how digital images work.

Some of this is probably the newness of this kind of technologically mediated interaction and experience of reality. Sometimes the imagination of creative people leaps forward, but often the practical aspects lag; imagining the future can be a great deal easier than dealing with the present simply because one is freed of the pressure to get it right and can just have fun worldbuilding. Writers might use smartphones, write their stuff on laptops and tablets, and collaborate via the internet and social media, but writers learn how to write in part from other writers, and a lot of the writing out there just doesn’t deal with this stuff. There is, as yet, no well-established toolkit, though we all know how to deal with phones and letters in the simplest of terms. But phones and letters didn’t require such dramatic adjustments in a writer’s understanding of how interaction works. It’s not that these newer technologies are completely new, and there are things to build on, but for a writer working from an already limited toolkit, they’re just new enough.

But also, as Sarah’s written before, some of it is sheer laziness and/or an assumption that this stuff is neither terribly important nor terribly interesting. As fiction writer Toby Litt put it in an essay on “The Reader and Technology”:

I don’t want to overemphasize this. You could imagine a similar anxiety over how the telephone would undermine fiction. Perhaps it is just a matter of acceleration. But I don’t think I am alone in already being weary of characters who make their great discoveries whilst sitting in front of a computer screen. If for example a character, by diligent online research and persistent emailing, finds out one day – after a ping in their inbox – who their father really is, isn’t that a story hardly worth telling? Watching someone at a computer is dull. Watching someone play even the most exciting computer game is dull. You, reading this now, are not something any writer would want to write about for more than a sentence.

Dude. Dude.

So what to do about this? The problem – or aspects of the problem – isn’t all that hard to diagnose, but with a problem that’s still taking form and manifesting in new ways, a solution is a little harder to come by. Probably the best thing that can be said is that, again, there are media out there that are getting it right, or at least getting it closer to right than most other people. If writers write what they know – often what they see others doing – the toolkit will naturally expand on its own, and what we’ll see will be a process of growth in how stories are told, as has happened many times in the long, long past. Some of it will also simply come from the next generation of creative types, who have far more familiarity with the day-to-day realities of this kind of experience than older generations of writers. Storytelling is always evolving, and what we’re seeing right now is a new stage in that evolution.

Until then, we’ll just have to endure some really, really poorly done technology.

Sarah and David have breached the system, are hacking code, and enhancing photos on twitter at @dynamicsymmetry and @da_banks.

 

[Embedded video: Dubai’s New Year’s Eve fireworks display]

On New Year’s Eve the biggest fireworks display ever was launched off the biggest tower in the world. Dubai’s fireworks show was, in terms less vulgar than the display itself, an undulating orgasm of global capital. The 500,000 fireworks mounted to Burj Khalifa Tower and the surrounding skyscrapers were reportedly viewed live by over a million people on the ground and livestreamed to millions more around the world. I can’t find a price tag for the display (too gauche?) but given that your typical municipal fireworks display for proles can easily top six figures, let’s just assume that you could measure the cost of this display in national GDPs. It was profane in the way Donald Trump’s continued existence is profane. The fireworks display was so huge —such an utterly perfect metaphor for capitalism itself— that no single person standing on the ground could witness the entire thing. It was a spectacle meant for camera lenses.

“The spectacle” is a well-worn topic of critical social thought. Guy Debord, one of the better-known members of the Situationist International, wrote that modern society was a “society of the spectacle.” That is, modern society alienates individuals not only during the production of goods, services, and media (just as Marx said) but also during their consumption. “What was once directly lived,” says Debord, “has become mere representation.”

Part of the modern myth of capital is that things are continually getting bigger and better; that moving forward in time means ever-increasing improvement. New Year’s celebrations are always a moment of assessment. Not only are we hyper-aware of the passage of time, but we are also strongly encouraged to assess everything that we did last year and to make resolutions about what we will do next year. Any one of us might resolve to read more or eat less candy, but for those that control the means of production, the resolution is always the same: make bigger, make more. For the geographic and socio-cultural centers of industry, New Year’s celebrations are an opportunity to demonstrate and confirm that we are still on an upward trajectory. Debord writes:

“It is the omnipresent celebration of a choice already made in the sphere of production, and the consummate result of that choice. In form as in content the spectacle serves as total justification for the conditions and aims of the existing system. It further ensures the permanent presence of that justification, for it governs almost all time spent outside the production process itself.”

The spectacle is meant to make you feel good about decisions already made. How one should regard the passing of a new year, and the manner with which capital has decided to mark its continued successes (regardless of actual achievements), are difficult to separate. The rituals we are used to, what we picture in our heads when we think of ringing in a new year, are shaped by media representations of the new year. If Burj Khalifa Tower had never been built, you never could have known such abundance. World records would never have been broken. We would be stagnant. Imagine what would happen, how people would talk about a Times Square celebration that was noticeably “less than” the previous year’s. There’d be talk of a languishing economy and widespread malaise. We’d probably sacrifice Ryan Seacrest to the god of sex scandal but nothing would work. The whole year would be fucked.

Times Square, in all its opulence and unabashed advertising (if you decided to shout all the words on the screen during the ball drop, as someone I was with on New Year’s Eve opted to do, you would have shouted “Happy New Year Toshiba!”), is about as secular as rituals come. Thanksgiving is close, but whereas Thanksgiving has a flimsy but utterable origin story, celebrating a new year just seems so objective. Not celebrating the new year requires explanation. It also requires a lot of work, given that celebration is everywhere. Even if you just want to watch television, you are presented with one company or another’s effort to produce a profitable tradition.

Capitalism abhors a solved problem. If everyone seems satisfied, you’ve got a stagnating market on your hands. The ever-present demand of capital to make celebrations bigger as a simultaneous representation, demonstration, and celebration of increasing market indicators means you also have to find new ways of showing that celebration. As New Year’s Eve celebrations get larger and more elaborate, we run up against the very finite boundaries of human cognition and perception. No single person standing on the ground can see fireworks illuminating and outlining an entire human-made archipelago. So you fly helicopters with high-definition cameras on pre-determined flight paths, so that the fireworks display and its recording are one and the same choreography. Witnessing the fireworks requires an augmentation of the senses.

The millions that stood in Dubai and watched the display in person got a very entertaining show, but they did not get —for lack of a better term— the full picture. Perhaps not “less than,” but their own smartphone cameras could not capture everything that happened. It is a display so large, so incomprehensible to the individual, that its totality can only be captured by the event coordinators themselves, or by the collective documentation of those present. Which is not to say the two are completely different. In fact the two exist in knowledge of one another and are produced as such. In the moments where the camera pans to the audience we see the moment of recording that will eventually end in hundreds of personal YouTube videos of the same event from different perspectives. Each will be different but they will all, in a sense, tell a similar story: that even something meant to entertain, under capitalism, must eventually graduate to a level over one’s head and behind one’s back.

When The Simpsons was still good (1996) there was a 4th of July episode where Homer buys an enormous firecracker. The man selling it to him pitches the explosive device by saying “celebrate the birth of your country by blowing up a small part of it.” This contradiction —that true celebration of something requires you to destroy a small piece of it— is at the heart of consumption. After all, what is consuming but the act of destroying something so that it may become a part of you? With every New Year’s Eve party comes a new production problem to be solved: how do we capture and curate a few moments of unrepentant excess so that everyone may sample a small bit of it? And yet, given that we have all consumed the spectacle at one point or another, to denounce the New Year’s Eve celebration as mere spectacle is to engage in the most obnoxious of moralizing behavior. Debord himself says, “The spectacle is not a collection of images; rather, it is a social relationship between people that is mediated by images.” For him, that social relationship is a process of alienation, but for me and many others, to write off these thoroughly corporate celebrations as just corporate would be to disregard my own (albeit nonconsensual) participation in the creation of that event.

Whether it be through corporate research, for which I am a kind of boundless informant, or my own deep desire to watch the ball drop on New Year’s Eve (I always feel its absence if the party doesn’t have it playing somewhere), these displays are a part of me. Just as the camera-equipped helicopter is meant to fly around Dubai, I find myself at a party maneuvering to a screen at around 11:55. To discount this as some kind of false consciousness would be wrong. Instead, I want to find what about the celebration is meaningful to me and reclaim it as the property of my chosen community. What that actually looks like, however, is a complete mystery to me.

David is on Twitter and Tumblr.

The Planned Headquarters of Apple Inc.

The year is 1959 and a very powerful modern art aficionado is sharing a limousine with Princess Beatrix of the Netherlands. The man is supposed to be showing off the splendor of the capital of what was once —so optimistically— called New Amsterdam. His orchestrated car trip is not going quite as he had hoped: instead of zipping past “The Gut” and dwelling on the stately early 19th century mansions on Central and Clinton Avenues, Beatrix is devastated by the utter poverty that has come to define the very center of this capital city now called Albany, New York. The art aficionado, unfortunately for him, cannot blame some far-away, disconnected bureaucrat or corrupt politician for what they are seeing, because he is the governor of this powerful Empire State and he has done little to alleviate the suffering of his subjects. He resolves, after that fateful car trip, to devote the same kind of passion he has for modern art to this seat of government. Governor Rockefeller will make this city into a piece of art worthy of his own collection.

Many decades later and just a couple of hours east on I-90, a building is falling down in perpetuity. A design critic, upon seeing it, writes in the Architectural Record:

It looks as if it’s about to collapse. Columns tilt at scary angles. Walls teeter, swerve, and collide in random curves and angles. Materials change wherever you look: brick, mirror-surface steel, brushed aluminum, brightly colored paint, corrugated metal. Everything looks improvised, as if thrown up at the last moment.

The building, according to its architect and administrative clients, was meant to instill an overwhelming desire to implode structures and destroy barriers. The building was meant to make a deliberate and powerful intervention into the mutual shaping of social structure and material arrangements. “The main problem I was given was that there are seven separate departments that never talk to each other,” the architect says. “…when they talk to each other, if they get together, they synergize and make things happen, and it’s gangbusters.” The implosion metaphor becomes a little too real as the behind-schedule and over-budget building begins to come apart at the seams. Some of the humans inside the building do structural analysis for a living and point out that while the design is possible, the execution is severely flawed. In other words, MIT’s Stata Center, designed by Frank Gehry, made an imperfect transition from bits to atoms: Gehry has made a name for himself by designing buildings that are only possible in a world augmented by computers, but he seems to have spent precious few hours considering how social the birth and life of buildings truly are.

Rockefeller’s plaza required 98 acres —350 businesses and 9,000 homes— of downtown to be completely bulldozed to make room for a museum, a library, a performing arts center, five immense skyscrapers for state executive and legislative offices, three floors of underground mall, a small bit of government housing, and concomitant parking garages. All of it was clad in pristine white marble (well, except for the government housing, of course) and accessible only by highway. It was panned by the architectural community (“A naive hodgepodge of barely digested design ideas….rumors of Le Corbusier, eavesdroppings of Oscar Niemeyer, threats of [Hitler’s personally chosen architect] Albert Speer.”) and Albany residents alike (to a local reporter: “When you go up there [the plaza] honey, do me a favor and take a bomb along.”).

An 8mm home movie that captures the parade that welcomed Princess Beatrix to Albany. C/o Times Union

 

You might think that of all the architectural patrons that might “get” the social nature of technological systems, Silicon Valley would be it. But, as architecture critic Paul Goldberger explains in a recent essay for Vanity Fair:

…Silicon Valley now wants to grow up, at least architecturally. But it remains to be seen whether this wave of ambitious new construction will give the tech industry the same kind of impact on the built environment that it has had on almost every other aspect of modern life—or even whether these new projects will take Silicon Valley itself out of the realm of the conventional suburban landscape. One might hope that buildings and neighborhoods where the future is being shaped might reflect a similar sense of innovation. Even a little personality would be nice.

Goldberger’s essay is excellent, despite some hackneyed moments. There is the obligatory pause to admire the child-like atmosphere (“The space within looks like … well, if an undergraduate publication had more than 15,000 staffers…”) and there is some mild digital dualism (“the real world is a kind of sideshow when your mission is to shape the virtual world.”) that ultimately undermines Goldberger’s analysis. He writes:

The goal of so much that has been invented in Silicon Valley is to take our consciousness away from the physical world, to create for us an alternative that we can experience by turning aside from the physical world and into the entirely different realm that all this technology was creating. When you are designing the virtual world and can make it whatever you want it to be, why waste your time worrying about what real buildings and real towns should look like?

Here I think Goldberger is getting it backwards. Or, perhaps more precisely, he is only seeing one iteration of a much larger sequence of reactions to late capitalism. He recognizes and nicely lays out how the Googleplex or One Infinite Loop look like they could be almost anywhere. They are unassuming, boring white office boxes that have been gutted and filled with panini bars, standing desks, and doggy daycares. The thousands of employees that spend 60-80 hours a week on these campuses usually take private company-owned shuttles to these nondescript office parks so that they may begin work on their commute and leave the anxiety of navigating congested highways to an underpaid driver.

What Goldberger is missing is that Silicon Valley as we know it today is a product of a virtual world, not the creator of it. Virtual in the sense that suburbs are meant to be interchangeable, universal substrates upon which we graft our hopes, dreams, and preferred geographic genres. The same sub-development, with a few alterations in color scheme and road signage, can sufficiently represent the natural flora and fauna that its construction displaced, whether it be Prairie Bluffs in the Southwest, Flamingo Cove in the South, or Eagle’s Landing in the Northeast. It is in this infinitely pliable world that the Internet thrives. The ’burbs are the social web’s natural habitat. These headquarters aren’t the product of ignoring “what real buildings and real towns should look like”; they are a deliberate, if not conscious, choice to house the work of building networks within the progeny of Le Corbusier’s modern vision of total living machines. The Cold War’s promise of mutually assured nuclear destruction not only spurred the computer network research that eventually turned into the Internet we know today, it also demanded that dense cities be abandoned in favor of sprawling suburbs.

The long project of amassing and decentralizing capital into nondescript office parks and ranch houses has done immense damage to tight-knit communities and rendered most neighborhoods unnavigable by anyone incapable of financing and operating a fleet of cars. It created homogenous residential neighborhoods that spanned for miles in all directions, leaving children and the elderly stranded in elaborately stuccoed cages. It should be no surprise, then, that so many early accounts of accessing the Internet are about finding communities that “finally understood” the user. These networks offered up community where there was none. Access to the Internet, and the garage-based experimenting that went along with it, was a suburban phenomenon mostly because it was too expensive for anyone but the affluent, but also because the youth of the affluent lived in places that were devoid of a place-based community.

Google Maps Looking at the GooglePlex

The campus life of the Silicon Valley brogrammer could be seen as a neoliberal fork of a thoroughly socialist project. Rockefeller’s plaza was the most literal interpretation of high modern architecture. These enormous systems, if implemented writ large, promised nothing less than the erasure of poverty and the total equality of all people. An Albany op-ed writer went so far as to demand that part of the plaza be devoted to a model slum because “little children will grow up never knowing what it was like to live in the days before we abolished poverty.” Silicon Valley has a much more neoliberal approach to poverty these days. The decision to “mature” from the office park to the expertly designed campus isn’t so much a refocusing on architecture as it is an opportunity to finally build the literal walled gardens that up until now were metaphors for their proprietary social networks [PDF].

The pendulum is starting to swing in the opposite direction: instead of completely interchangeable office buildings, physical plant meant to fade into a background of interchangeable gated communities, we have tightly coupled live-work systems that flow from a hip residential city to a rural office campus. Whereas Rockefeller wanted to build a machine that would house the state apparatus that would (eventually, maybe, theoretically) lead to the elimination of poverty, Silicon Valley companies are making closed and private systems that offer socialism for those within the company by leeching off of the surrounding hinterland. This latest iteration of networks and built environments is marked by company campuses sitting atop long-term or permanently tax-free land, operating fleets of private shuttles instead of investing in public infrastructure. It is a classic case of socialism for the rich and capitalism for the poor. Just like an iTunes-to-iPod connection, it’s a closed ecosystem but it “just works.”

The Stata Center at MIT, designed by Frank Gehry

On and on we go, from the total and highly centralized systems of high modernism, to the decentralized Cold War suburbs, and back to the hierarchical closed systems of the “creative class.” Each architectural ideal type is inflected by a digital network that begins as an isometric reflection of that built environment before it starts working to undermine it: a Cold War mindset gives rise to a closed-but-decentralized network of intricate machines that connects total institutions (universities, government offices), only to eventually yield to the “suburban web” made up of exclusive communities just like the affluent sub-developments their creators grew up in. And just as the Stata Center –meant to house computer scientists– couldn’t be designed, or really even conceived of, without the prior existence of computers, so too will the design of these headquarters be deeply influenced by the social web. We might even expect them to have the same set of it’s-not-a-bug-it’s-a-features: far too literal instantiations of disruption, implosion, and context collapse.

Now the built environment is starting to reflect this latest iteration of closed loops and walled gardens. Of course the aesthetic of horizontal organization will remain, as it serves the purpose of obscuring the lines of power. The one long continuous room that is set to be Facebook’s new headquarters (designed by Gehry) is the perfect example of this command-and-control bureaucracy masquerading as elite technocracy. “The Zuckerbergian dream, then,” as The Atlantic’s Alexis Madrigal puts it, “is everyone sitting in one big room with no walls or visible separations. Control is exerted only by secret codes and invisible meridians of force.” It’ll be interesting to see how our social media changes once these shining towers are completed and begin to house the day-to-day administration of the network. Will the pendulum swing back and usher in a new generation of networks that revert to the interoperability of the early web? Or will we see a doubling down of control, with whatever remnants of the non-corporate web forever gated off like so many Palo Alto suburbs?

David is on Twitter and Tumblr.