
In a widely shared article on The Intercept, Sam Biddle made the point that “Trump’s anti-civil liberty agenda, half-baked and vague as it is, would largely be an engineering project, one that would almost certainly rely on some help from the private sector.” The center of his article was chilling: of the six major tech companies he requested comments from, only Twitter gave him an unequivocal statement that it would not help build a Muslim database; most of the companies simply never responded. The role of engineers and designers in carrying out political ends is often relegated to business policy. That is, engineers themselves are seen as completely beholden to whatever their bosses decide their job should be. I want to look at this from a different angle: why are engineers so willing to defer responsibility for their actions, and why are they so often in positions to do so?

Simply put, border security doesn’t happen without engineers willing to build the walls or design the drones that make up that border. If, as the oft-repeated Bruno Latour quote goes, technology is society made durable, we should be paying attention to (and putting a lot more pressure on) who is choosing which parts of social life persist without direct, constant human intervention. Making sure that companies behave ethically is one strategy, but we should also look at how engineers themselves are trained to deal with morally dubious projects. Many of the academics who study engineering pedagogy, and the accreditation bodies that oversee engineering programs, have come to the conclusion that not only are engineers not given the necessary skills to navigate social and political conundrums, they are primed to follow orders regardless of their moral outcome.

Consider, first, the disturbing fact that engineers are vastly overrepresented in extremist groups of all stripes: from neo-Nazis to jihadists, engineering is the most common educational background of right-wing extremists. Diego Gambetta and Steffen Hertog, the authors of a book on the subject, found that relative to their prevalence in any given nation, engineers are vastly over-represented in violent right-wing extremist groups. Left-wing extremist groups that advocate or support violent means, on the other hand, have no engineers amongst their ranks and are instead made up of people with backgrounds in the social sciences and humanities.

Gambetta and Hertog’s reasoning for this phenomenon is based in political psychology: both engineers and right-wing extremists put considerable emphasis on hierarchy, order, clear boundaries between categories, and unchanging conditions. The personalities that choose right-wing extremism and engineering overlap considerably. Of course, not every engineer is a Nazi, but we should never lose sight of the numerical fact that engineers were over-represented in nearly every right-wing revolution of the past century: from 1970s Iran to 1920s Germany. It is unclear from their book whether their discovery is due to self-selection into engineering and fundamentalist groups or whether engineering pedagogy primes people to accept right-wing extremism. In other words, the jury is still out as to whether this is a matter of correlation or causation, but there is some evidence to support the latter.

Embedded not just in our existing gadgetry, but in the very methods and processes used to design and build new devices, are very specific ideological valences. This goes as far back as Newton’s Principia, where the very foundations of calculus were laid out in such a way as to be directly beneficial to engineers building warships. Engineering, as social scientists Dean Nieusma and Ethan Blue like to say, has always been a war-built discipline. From the sorts of organizations engineers are trained to work in (very hierarchical ones) to their professional ethics (the customer/employer/contractor is always right), they are taught unquestioning deference to authority and unremitting neutrality towards issues of political consequence.

Some who study engineering pedagogy and professional development make strong arguments for including peace and justice in college curriculums. Some have gone so far as to build an alternative “shadow code” for engineering departments willing to build social justice into their lessons. Education scholar Michael Lachney and I, in our contribution to this shadow code, have suggested that engineers become fluent in the differences between violence and property destruction.

Imagine if medical doctors, instead of taking the Hippocratic Oath that says, in part, “do no harm”, took an oath never to knowingly expose their employer to malpractice suits. No one, patients included, wants to be involved in malpractice, but the change in allegiance should be clear: we want doctors to be first and foremost concerned with their patients’ well-being, and their hosting institutions should be directed toward supporting that concern. Why should engineers be any different? Why are there no oaths never to build things that cause harm to fellow humans? Why are there no licenses to be revoked if an engineer knowingly and consistently builds things that do great harm? These seem like common-sense requests until you look at the major employers of engineering graduates: military contractors, resource extraction companies, and the governments that own those militaries and resources.

A new society needs a new kind of engineer. One who would recognize that designing a prison is not unlike designing a building with no foundation. Both are a kind of malpractice: building something that has been shown time and time again to produce bad outcomes. Engineers must understand their impact on society as well as they know Java or the tensile strength of concrete. That way, when they are told to build that wall or compile that database, they at least have a professional set of standards they can hold up as antithetical to their assigned project.

David is on Twitter.

Image source.

Le Corbusier’s La Ville Radieuse

“The motor has killed the great city. The motor must save the great city.”

-Le Corbusier, 1924.


In the fast and shallow anxiety around driverless cars, there isn’t a lot of attention being paid to what driving in cities itself will become, and not just for drivers (of any kind of car) but also for pedestrians, governments, regulators and the law. This post is about the ‘relative geographies’ being produced by driverless cars, drones and big data technologies. Another way to think about this may be: what is the city when it is made for autonomous vehicles with artificial intelligence?

The question of planning cities in response to automobiles is not a new one. It was addressed through a number of architectural and urban planning visions in the 1920s-50s. Two of the most significant are Le Corbusier’s La Ville Radieuse (‘The Radiant City’) and the Plan Voisin/Ville Contemporaine for Paris (Voisin was the car company that bankrolled the plan). The former was never achieved, and the latter was more developed but also left incomplete. Le Corbusier’s Plan Voisin was founded on the belief that the centre of Paris was congested, dirty, and unable to support the deluge of motor cars of the early twentieth century. The Plan Voisin/Ville Contemporaine would have involved uprooting and razing most of central Paris, from the Gare de l’Est to the Rue de Rivoli, and from the Place de la République to the Rue du Louvre. Le Corbusier’s solution, Ted Shelton writes here, “was to eliminate the infrastructure of the Parisian street and replace it with spaces designed around the car. In the Plan Voisin the traditional city must yield to the infrastructure of the automobile wherever the two were in conflict.” (in Automobile Utopias and Traditional Urban Infrastructure: Visions of the Coming Conflict, 1925–1940).

Other models for cities imagined around technology, particularly cars, are The Metropolis of Tomorrow (Hugh Ferris, 1929), Broadacre City (Frank Lloyd Wright, 1932), and Futurama (Norman Bel Geddes, 1939–40). Each of these proposals attempted to reconcile “the ever-increasing speed and large-scale geometries of the automobile and the much finer grain and slower speeds of the traditional city street” (Shelton, above, again). In detailing vertical and horizontal planes of movement of people and traffic, the spread of buildings, the fates of city centres, and travel between airports and cities, automobile technology set the direction for optimistic, Utopian, urban planning and architecture.

Le Corbusier’s sketch for the evolution of the city, 1935. From https://eliinbar.files.wordpress.com/2010/10/ville-radieuse-by-le-corbusier0001.jpg

Like the twentieth century automobile, the driverless car will re-order relationships to urban space and produce new kinds of places and urban cultures. The parking lot, rendered as cold, dangerous and creepy in cinema, is one such place. Commercial and personal use drones will need their own parking spaces, perhaps like the new Norman Foster droneport in Rwanda. The first pizza delivery by drone in New Zealand raises all kinds of practical questions about how exactly you’d get your pizza if you lived in an apartment building. Would the drone hover outside your window (what if you don’t have a balcony?) or leave it in a drone delivery depot? The devil is in the detail.

Liam Young’s forthcoming film, In the Robot Skies: A Drone Love Story, is shot by pre-programmed autonomous drones and tells the near-future story of two young lovers in a London council estate, sequestered in their homes under ‘anti-social behaviour orders’, who communicate by hijacking the local CCTV camera drones that surveil their estate. Young says that just as the New York subway car of the 1980s birthed “a youth culture of wild style graffiti and hip hop”, the drone will create particular networks and cultures of surveillance activists and drone hackers. Not only is this a drone’s-eye view of the city, but drones will be able to reach film locations that weren’t accessible before.

Trailer for Liam Young’s In the Robot Skies

But while droneports and council estate drones may produce new flows of people and urban subcultures, big data technologies also continue to play a role in shaping and re-instating pre-existing physical geographies. Nowhere is this more poignant and difficult than at borders. Josh Begley’s new film, Best of Luck With the Wall, is composed of 200,000 satellite images of the US-Mexico border taken from Google Maps. In making the film, Begley says he wants to focus on the physical geography and its inhabitants: “The southern border is a space that has been almost entirely reduced to metaphor. It is not even a geography. Part of my intention with this film is to insist on that geography.” He does, but in doing so he also points upwards to the very satellites that made the film possible, and to the vast human, legal and machine apparatus that produces and maintains the US-Mexico border. So this border, and any border at this point, is both a physical geography and something produced by technologies of border surveillance that deliver certain kinds of knowledge about what is valid, legal and legitimate in terms of movement across it, and what is not.

The surveillance apparatus of the US-Mexico border also includes people who work to make sense of data collected by machines. Joana Moll and Cedric Parizot’s The Virtual Watchers is a project that reveals another side of crowdsourced, open-source intelligence. Moll says that The Virtual Watchers is based on a project launched in 2008, consisting of an online platform called RedServants and a network of 200 cameras and sensors. The 203,633 volunteers on RedServants watched camera feeds of the US-Mexico border and identified “illegal” border crossings and other “illegal” events.

In Norman Bel Geddes’ Futurama, cars created the “grain” against which the city would be built; now, with the gradual accretion of sensors, radar, lidar, optical recognition, fingerprint scanners, biometric turnstiles, key-card-only access zones, license plate scanners, cameras, recorders, databases, dashboards, and maps, it is as if big data is the grain against which place itself is imagined. Smart city plans are based on visions of second-order cybernetic actualization. Orit Halpern’s work analyses the evolutionary arc of urban design imaginaries in smart cities like Songdo in South Korea, Masdar in Abu Dhabi, and Singapore. In these cities, architecture and urban planning become armatures, or interfaces, for control through a kind of higher-order knowledge assumed to be embedded in data.

In Crapularity Hermeneutics, Florian Cramer speculates on the tension between car and city in a way that might have thrilled Le Corbusier and Lloyd Wright. He suggests that “all cars and highways could be redesigned and rebuilt in such a way as to make them failure-proof for computer vision and autopilots …. For example, by painting all cars in the same specific colors, and with computer-readable barcode identifiers on all four sides, designing their bodies within tightly predefined shape parameters to eliminate the risk of confusion with other objects, by redesigning all road signs with QR codes and OCR-readable characters, by including built-in redundancies to eliminate misreading risks for computer vision systems, by straightening motorways to make them perfectly linear and moving cities to fit them.” The design company BIG made a video for Audi, (Driver)Less is More, which seems to capture what Cramer talks about. In the BIG view, the driverless car inhabits a city made for itself (notice the absence of humans):

AUDI – (DRIVER)LESS IS MORE from BIG on Vimeo.

But before we arrive at the point where everything is re-adjusted for the driverless car, there is going to be considerable struggle for political rights and freedoms against the blindness of algorithms built on already-biased databases. For example, as Seda Gurses recently asked, would we rediscover racial discrimination in way-finding apps like Waze or Redzone that “help” users stay out of “high crime neighbourhoods”? What kind of new places will be created, and what discriminations perpetuated, by autonomous driving systems that identify people and neighbourhoods as criminal or threatening? As unacceptable as this is, it is these moments of the messy glory of human difference that must be fashioned into speedbumps, in-computable objects, on the road to Utopia.

Maya Indira Ganesh is a reader, writer, researcher and activist living in Berlin, Germany. She is working towards a PhD about ethics and technology at Leuphana University, and is Director of Research at Tactical Technology Collective. She has worked with feminist movements in India, and continues to at an international level through her work on Tactical Tech’s Gender & Tech project. She’s on Twitter as @mayameme; find more at Body of Work.

Lede image source.


A couple of years ago I wrote about Friendsgiving, that very special holiday where cash-strapped millennials gather around a dietary-restriction-labeled potluck table and make social space for their politics and life experiences under late capitalism. All still very relevant, though I suspect this is the year where we should come up with a name for whatever happens after late capitalism. Some of you, of course, will be sharing a table with people not of your own choosing and so you might be forced into reckoning with people who make excuses for Nazis and disagree that trans people exist.

What follows are a couple of useful tactics that will help you hold your own and get through arguments that we shouldn’t have to keep having but here we are. These probably will not help you in a completely hostile room. These are better if you’re in a mixed crowd and you want to make sure that at the end of the political argument people don’t leave saying nothing more than “politics is so divisive!” People only criticize divisiveness when they aren’t sufficiently convinced by one side.

Above all, remember that political arguments are not about decisions based on different information; they are rooted in deeply held beliefs about how the world works that we are slowly socialized into. No single conversation will undo a social world. Campaigns (including these last two) know that most of their voters are “low information” voters who are not fluent in, or even persuaded by, long and involved explanations of policy. The mistake here is to assume that this is because most people are stupid and that if you’re not basing your political positions on exhaustive research you don’t deserve to have tightly-held beliefs. This is a deeply condescending and unproductive position. Instead of delivering correctives like a walking, talking vox.com article, try to get to the bottom of what your debate opponents’ politics represent. If it is a general sense of declining American prosperity, agree with them! But then redirect the conversation away from race-baiting and lament any candidate’s inability to put forward a plan that would work for most people. Sometimes it helps to encourage someone to spin out their argument until it reaches an internally illogical conclusion, like I did here. Depending on the situation, ask questions, challenge basic assumptions, or offer an alternative framing for the topic at hand. Which reminds me…

Understand Framing. No idea stands alone. Rather, concepts and ideas are interconnected and cannot be utilized without some unexpected or unwanted baggage. Framing is not just how ideas are presented, but which parts of an argument automatically feed into other arguments that the speaker is not intending to make. If you fall into an argument about how to make the country safer, for example, you are not talking about how most crimes tallied by the FBI’s Uniform Crime Reporting statistics are at historic lows. (Same story with immigration.) Also, be sure to notice when you’ve started using someone else’s conceptual metaphor. If you talk about “trade wars” you have entered a conversation where trade is war. This means you’re trapped into talking about trade in terms of winners and losers who are determined through cut-throat violence. Try to reframe the conversation by talking about trade in a less combative way. Check out this handy list of conceptual metaphors to get familiar with how they work.

Resist well-meaning people who want a reason “to hope that he succeeds in making the country great.” There is some sophisticated framing going on when someone parrots this line like a CNN talking head. The office doesn’t “make the man” and there are no checks and balances in place to make sure Trump is tempered by more level-headed people. The executive branch has never been more powerful, and we have both parties to thank for that. Trump, for all his outsider status, has never made any claim to devolving the power of the presidency. Don’t even argue about the Republican-held Congress and the soon-to-be 5-4 conservative Supreme Court. Instead, talk about all the ways Obama has strengthened the executive branch by embracing the Bush administration’s love of signing statements. Talk about how powerful the president has become in the last two decades; even if you like Trump, he’s (hopefully?) not going to be president forever, and someone will inherit the more-powerful position he’s helped create. There is nothing normal about this president and there are no countervailing forces within government powerful enough to correct the ship.

Reasonableness is so, so delicious. Everyone wants to be the reasonable one. Notice when the conversation turns toward what is reasonable, actionable, or realistic. This is a sign that someone is trying to do an end run around the very basis of your argument. They don’t want to engage in the substance of what you are saying and are more concerned with how reasonable and calm they appear to others. Britney Summit-Gil has more:

And if everyone at every interaction in their life is performing a self with the purpose of affecting another person, this holds true for left, right, and center. But for moderates, for white people, for the “reasonables,” there is little cost. Of all of the people I’ve seen calling for us to be reasonable, they are those least likely to be affected by a Trump administration. I have yet to see an immigrant, a person of color, a gay or trans person make this kind of call, though I am sure there are exceptions. But based on what I have seen, disenfranchised and targeted populations are calling for resistance, not unity.

Put the onus on Trump supporters to explain why we should ignore Nazis’ loud support for him and “just give the guy a chance.” This is probably where the most aggressive confrontation must take place. Keep Trumpists on the defensive by making them explain why they think Nazis would be excited about this administration, and what the administration plans to do to materially curb the power and prominence of these organizations (not just distance itself from their most vocal avatars). Most likely you’ll be met with an argument that these organizations are being given excess attention by the media and that this is not representative of the Trump administration. Here you could agree that this isn’t totally unprecedented, given that Reagan enjoyed endorsements from white supremacists, and a healthy handful of Republican primary candidates were supported by and shared a stage with a pastor who openly called for the execution of gay people. You could even bring up the fact that many white supremacist organizations celebrated Obama’s victory, albeit for very different reasons. After making that point, criticize Trump for not doing enough to overcome the problem he is nonetheless faced with. Above all, though, keep the focus on what Trump must do to deal with the seeming threat of Nazis, regardless of whether that threat is manufactured by the media or not.

Stay away from talking about Trump in ableist terms. You might even surprise a few people by briefly, seemingly defending Trump. Stop anyone who is (still!) talking about Trump’s hand size or how “totally crazy” he is and instead keep focus on what he has said, done, and apparently believes. This is all that matters.

David is on Twitter.

Image source is this site that explains the significance of animals in your dreams.


Two very different kinds of thoughts were running through my mind on the way to Leipzig to the BMW factory and on the way back. On the way there, I was thinking about how and why factories are relevant to the study of artificial intelligence in autonomous vehicles, the subject of my PhD; and on the way back I was thinking about the work of Harun Farocki, the German artist and documentary filmmaker who left behind an astonishing body of work, including many films about work and labour. These two very different thought-streams are the subjects of this post about the visit to the factory. They don’t meet at neat intersections, but I think (hope) one helps “locate” the other.

BMW is a German car company that is working on ‘highly automated driving’ (although the Leipzig factory we visited isn’t making those cars at present). I’m doing a PhD that will – someday – suggest how to think about what ethics means in artificial intelligence contexts, and will do so by following the emergence of the driverless car in Europe and North America. One part of what I’m doing considers a dominant frame that has emerged around ethics in the driverless car context: ethics-as-accountability. In the search for the accountable algorithm in driverless cars of the future, I went to the BMW factory to see where the car of the future will come from. Who, or what, must be added to the chain of accountability when the driverless car makes a bad decision? Who, or what, comes before and around the software engineer who programs the faulty algorithm?

I discovered something else more vividly and strangely digital than the car – the automation of the factory itself. In fact, the autonomous, intelligent car receded into the background and what emerged was a demonstration of scaled up, cybernetic thinking resulting in a factory that is shaped by logistics, which as Zehle and Rossiter put it is the ‘organisational paradigm’ of cybernetics:

The primary task of the global logistics industry is to manage the movement of bodies and brains, finance and things in the interests of communication, transport and economic efficiencies. There is an important prehistory to the so-called logistics revolution to be found in cybernetics and the Fordist era following World War II. Logistics is an extension of the ‘organizational paradigm’ of cybernetics […] Common to neoliberal economics, cybernetics and logistics is the calculation of risk. And in order to manage the domain of risk, a system capable of reflexive analysis and governance is required. This is the task of logistics.

bmwsketch1
Hand-drawn sketch of the description of the BMW Leipzig plant’s assembly line. Maya Ganesh, June 24, 2016.

The factory has become an infrastructural node rather than the primary theatre of action: part of a multi-nodal software program that determines the movements of people and things, rather than a point in a linear assembly line. To mix metaphors, logistics is the brainchild of cybernetics, serving as a kind of mental model with which to think about the processes of production in the face of rising costs, rising demands and complex risk paradigms. So conversations about “smart factories” are not really about robots taking away jobs, which is a limiting approach to the topic; it is perhaps more that people’s jobs in the smart factory become integrated into, and determined by, software programs that decide where people, money, raw materials, ships, and eventually power itself, are to go. In what may sound a little dramatic, things like cars have (to) become software in order to be produced.

The ‘logistical turn’ has gained prominence as computer programs have come to be the main design environment and control mechanism in manufacturing. Ned Rossiter explains why logistical technologies are important: “logistical technologies that measure productivity and calculate value” intersect with financial capital and supply chains to result in a governance regime of standardization.

The factory features prominently in the origin story of the theories we love, cite, and lean on. The machine of Capital, the industrial machine, commodity fetishism, the culture industries: these are ideas that come to us, primarily, from observations of workers and conditions in factories. Factories and making convey significant symbolic power. As Merkel famously retorted to the then-Prime Minister of Britain, Tony Blair, on the question of Germany’s secret sauce: “Mr. Blair, we still make things.” But what does it mean to make things in conditions enabled by the internet, particularly unwaged work and new forms of labour wrapped up as ‘play’ and leisure? Trebor Scholz edited Digital Labour: The Internet as Playground and Factory, which offers a deep read into the many dimensions of what digital labour means. He says in the introduction: “there are new forms of labour but old forms of exploitation.” In an earlier time, it was perhaps less complicated to isolate where the exploitation comes from; in a time of ubiquitous computing, you have to pick away more carefully to reveal where it is.

In 1895, the Lumière brothers made one of the world’s first films, Workers Leaving The Factory, in which workers are shown exiting the brothers’ photographic products factory in Lyon, France. The film is a jumpy 45 seconds long, on a 17m film reel, a reminder of a time when it was apparent that people were technology, the first movie-making machines being hand-cranked projectors.


One hundred years later, the German filmmaker Harun Farocki asked where the workers were going. “To a meeting? To the barricades? Or simply home?” He made a documentary researching the history of that film. Twenty years on, and still intrigued by it, Farocki and his co-curator, Antje Ehmann, presented Labour in a Single Shot, an exhibition of more than 200 single-shot, 1-2 minute films about labour created through workshops in 15 cities around the world. The films are charming and whimsical, and are about varied and diverse kinds of labour: dog grooming, child care, industrial manufacturing, leather curing, surgery, taco delivery, water delivery, teaching, building inspection, security, tailoring, piano tuning, shoemaking, data centre management, and so on.

Labour is as much about capturing different kinds of labour as it is about filmmaking. All cameras are fixed, giving a single perspective, the kind of thing we are more likely to equate with CCTV visuality. In one film from Rio de Janeiro, we see a woman minding a little girl in a pink dress playing in a sandpit in a park. The child goes down a slide and evidently falls, because we hear her wailing and the nanny runs out of the frame. We hear the child’s loud cries and the nanny attempting to soothe her, and all the while the camera remains fixed. A few seconds later, we see the child in her nappy, now muddy with sand, storming back into the frame crying, the nanny scurrying behind her with the pink dress. What happened off camera?

Labour is a series about work that goes unrecognised as work: work that is both material and immaterial, mobile and fixed, routine and irregular, and the various contexts of sociality, camaraderie and the self in work. We see what it means to pay people for doing mundane and boring things like stacking clothes in a factory, difficult things like scaling buildings or moving the carcass of a dead cow, or things that are difficult to value, like teaching music. The mind seeks to draw equivalence between these activities, and it is sobering and challenging to see where and how ideas of equivalence between different kinds of work break down. Always deeply politically invested, Farocki and Ehmann want the viewer to be charmed and discomfited in equal measure, it would seem.

Back in Leipzig, a senior manager tells us, “Industry 4.0 is about smart logistics.” This isn’t just a piece of business jargon, however; the manager said he did not like the ideas of “smartness” and “4.0”, but seemed to suggest they were baked into the design and operations of the factory; smartness was a sort of inevitability, it seemed. We heard many managers, at different times, talk about “the future”, saying they were “ready” and “prepared to face it”. I wondered aloud whether this had something to do with the realities of the autonomous vehicles that would come in “the future”. They just smiled in response.

The factory’s former chief engineer, and now BMW board member, Peter Clausen says “communication was the implicit assumption underlying the design of this building…” and, eventually, “there is a central nervous system thinking in the flow of the building…” He hired the late Zaha Hadid to build a factory re-imagined as a place that would respond, seamlessly, to distant nodes of control and regulation. Thus the words “flow”, “future” and “distribution” are mentioned often in talking about the architecture of the plant in relation to the production of cars. These and other ecology metaphors familiar from cybernetic thinking kept cropping up.

While we were walking around the shop floor, a manager told a story about BMW’s electric car, the i3, as we milled around its engine, proudly on display. He said that in building the electric car they didn’t just replace the traditional combustion engine with an electric one; they actually invented a whole new car around an electric battery. They made “working from the outside in” sound more intuitive than the oft-heard reverse, “from the inside out”. What this anecdote suggests, I believe, is that they wanted to, or had to, change how they saw production itself: to move away from the idea and practice of production as something linear. In a snarky comment to distinguish BMW from Google, someone said of the latter, “they’re a software company – they think about communication and then build a car around it.”

Here it seems that communication is embedded far deeper. The factory was designed in response to people’s communication flows. They measured the number of steps taken for one team to reach another, the ways in which teams talk to each other through the production life cycle, and the different workflows of who talks to whom, and when they need to talk to each other. One of the senior-most managers at BMW delighted in revealing that he receives less email than the visiting academics; he said he gets up and walks over to people to talk to them, thus reducing his email footprint: “email is asynchronous communication; talking to people is synchronous.”

Flow extends to how the shop floor merges with office space. Cars assembled in one part of the factory called the Body Shop travel along raised gantries right through the factory on their way to being painted and fitted out in the Paint Shop. You can be checking email, or talking to a coworker at the water-cooler, and have an unpainted shell of an i3 glide past overhead. The sides of the transfer gantries are lit in purple; we snickered about this as a tribute to the recently departed Prince. You could be forgiven for feeling like you’re on the set of a bad sci-fi film from the 1930s. Or a music video from the 1980s.

People flow, too; there is an attempt to adjust traditional hierarchies into something nominally flatter in certain respects, possibly shaped by mainstream notions of equality in German society (there are some deeply troubling notions of who a German is, however). The plant is built with one entry and one exit, so everyone – managers, workers, and staff at all levels – enters and leaves through the same door. Everyone eats at the same cafeteria. The Human Resources and Corporate Communications departments sit on the ground floor, by the cafeteria and the entry, and everyone has to walk past them.

The jewel in the factory’s crown is Clausen’s “finger concept”. Traditional ideas of the assembly line are, well, linear. Imagine, instead, a single line bending to form a triangle, so that what were the ends of the line become the middle and the middle breaks apart to form the new ends of the line. This is, almost literally, what this factory does; it means that production can integrate new elements or processes without being disrupted. For example, automation in cars means that new automating machines need to be introduced into the line. How do you enlarge the backbone of production without moving anything up or down the line? There is no way you can shut down a plant like this for more than ten days to change production processes. The answer: the factory has to expand and contract on demand. Thus the “finger” is an architectural design feature by which the physical layout of the plant can keep being extended, building new sections to integrate new machines, storage areas, supply chain entry points, and so on. Organic metaphors of ‘growth’, ‘marriage’, ‘body’, and ‘evolution’ are key to the description of such responsive architectural design.

But this is not about the triumph of welding social science into industrial design, nor about spatial design theory in corporate brochure copy. It is about the relentlessness of cybernetic thinking, which promises comfortable, soft orderliness almost as a counterpoint to the very disruption it stimulates, with its flows, nodes, and self-organising feedback loops, constantly seeking order in systems that may be complex, glitch-ridden, or creaky. The disruption and innovation, seen in light of rising costs and demands, are, oddly, about standardization itself; the old factory again. Decision-making is not inspired, but faster. Algorithmic regulation results in the financialisation of labour, and in demands on physical infrastructure, and its people, to become like components of that system, smooth and flowing; to become data.

In 1990, after the fall of the Berlin wall, Farocki made How To Live In the FRG, a series of mock ‘training seminars’ for workers, from strippers to nurses, to adjust to a new life in a neoliberal world of a “relentless scripting” of interaction where human and commodity are “machined” to assume “maximum dependability”. For example, in the scenes with the stripper, you are shown a woman’s midriff and hips against a dimly red-lit stage, very typically ‘stripper’; a male voice speaks off-camera. This disembodied voice instructs the abbreviated woman on how to move her hips suggestively, how to slip out of her panties, and how to look more seductive. It is pedantic, funny, and awkward.

There is something that happens in the smart factory that isn’t so different from the not-smart factory: the worker is known through her relations to the finished product, and things in between get obscured. Erich Hörl reflects on how ubiquitous computing results in the “becoming-ecological” of media and displaces workers, saying that work then was a “privileged action that focused on results and finality and obscured relations, mediations, and objects. Without direct dialogue, humans and the world or nature were placed in relation to the object, but only indirectly via the hierarchical structures of the community”. I read the smart factory as a continuation of the old one, in this sense. In the smart factory, the loops and flows of information supersede everything else, making the fact of mediation, the design, objects, and the people disappear; the flow is the thing.

The Farocki films I was thinking about on the train back from Leipzig keep digging at those obscured relations, mediations and objects, urging power out into view, quietly, sometimes grimly comical, and always with purpose. The scene with the stripper, like others in FRG, is rich in the minutiae of what work entails. In an issue of the journal e-flux dedicated to Farocki after his passing, the editors say:

With Harun’s precise scrutiny, an intimate world of techno-social micro-machinations comes to life. When an automated gate closes and latches, Harun is there. When looking into the LCD screens replacing rear view mirrors in cars, he is there. He is there when we address a colleague at work with a certain title.

Maya Indira Ganesh is a reader, writer, researcher and activist  living in Berlin, Germany. She is working towards a PhD about ethics and technology at Leuphana University, and is Director of Research at Tactical Technology Collective. She has worked with feminist movements in India, and continues to at an international level through her work on Tactical Tech’s Gender & Tech project. She’s on Twitter as @mayameme; find more at Body of Work.

Image Source



I recently updated my Mac’s operating system. The new OS, named Sierra, has a few new features that I was excited to try, but the biggest one was the ability to use Siri to search my files and launch applications. Sierra was bringing me one step closer to the human-computer interaction fantasy that was set up for me at an early age when I watched Picard, La Forge, and Data solve a complicated problem with the ship’s computer. In those scenes they’d ask fairly complicated questions, ask follow-up questions with pronouns and prepositions that referenced the first question, and finish their 24th century Googling session with some plain language query like “anything else?”  Judging by the demo I had seen on the Apple website, it seemed like I could have just that conversation. I clicked the waveform icon, saw the window pop up indicating that my very own ship’s computer was listening and… nothing.

The problem wasn’t with Siri, it was with me. I had frozen. It was as if a rainbow spinning beach ball was stuck in my mouth. I was unable to complete a simple sentence. I closed the window and tried again:

Show me files that I created on… Damnit

Sorry I did not get that.

Show me files from… That I made on Friday.

Here are some of the files you created on Friday.

In all honesty, I should have seen this coming. I frequently use Siri to set reminders or to put things in my calendar but I always use my digital assistant in secret: the moment between getting in the car and starting the engine, alone at my desk, or (sorry) while I am using the bathroom. It works almost every time but when something goes wrong, it is my commands, not Siri’s execution, that are left wanting. I pause because I forget the name of the place I need directions to or I stumble when it comes to saying exactly what reminder I want to set. There are several Siri-dictated reminders sitting in my phone right now that don’t want me to forget to “bring it back with you before you go” or “to write email in the morning.”  I clam up when I know my devices are listening.

It gets worse when other humans are listening to my awkward commands. The thought of talking to an algorithm in the presence of fellow humans is about as enticing to me as reciting a poem I wrote in high school or explaining a joke that just fell flat. Here I was thinking it was the technology that had to catch up to my cyborg dreams but now it seems that the flesh is the half not willing.

As it turns out, I am not alone in my stage fright. Last June the marketing research firm Creative Strategies released a short report (though with none of the raw data or a comprehensive methods section) noting that 98% of iPhone owners use Siri but only 3% ever talk to it in public. Most Siri usage seems to happen in the car, which they surmise is related to hands-free laws, not “a free choice by consumers to embrace this technology.”

The authors of the report are surprised and seem to have no explanation for their two big findings: that 1) the speaking-to-phones-in-the-car effect is more pronounced in iPhone users than Android users even though Apple Maps is terrible and Google’s maps are the gold standard and, 2) Americans are “uncomfortable” using virtual assistants in public even though “consumers are accustomed to talking loudly on phones in public.”

None of this seems particularly surprising given my own experiences. Cars definitely require more hands-free usage, but they are also where I (and most Americans) spend the most time alone. Privacy seems like an equally large, if not larger, precipitating factor, which would mean that maps are not the only thing being used in the car. Additionally, most of Americans’ time in the car is spent commuting to work, where maps are unnecessary. It is far more likely that we’re asking our phones to play that new album or place a call to mom to see how she’s doing.

Equating human-to-human conversation over the phone with giving orders to a virtual assistant is a digital dualist mistake. Americans are certainly good at yelling at each other in public, but that skill may not transfer to digital assistants. Interacting solely with a piece of software is something altogether different, although still social because algorithms are made by people and our interactions are situated within and among other humans.

Moreover, engineers assume a one-on-one relationship with devices with little regard for how a device is used in a group or how others see us use our gadgets. We can know this by just looking at how these services are demonstrated at their launch and subsequently marketed. Commands are clearly stated sentences from a single person into one device. Even devices like Amazon’s Alexa, which are meant to serve the whole home, cannot intercede in a conversation between two or more people. It is always one-on-one.

Many tech critics, unfortunately, tend to reinforce this assumption in their writing by describing psychological effects but rarely making sociological observations. Analysis focuses on the extension of individuals’ cognitive abilities or laments eyes focused on screens. Rarely are we treated to a discussion of the role of devices as social actors in a relationship with multiple humans. Part of this is strictly economics: if a device is meant to be shared you cannot sell one to every single person. More insidiously though, the asocial approach to technology reflects a shallow understanding of humans’ communicative practices.

How we are seen talking by third parties, especially when the conversational partner is unknown, is very important. It is the stuff of reputations and flash judgements. One of the myriad scenarios that run through my head when I imagine using Siri in public is that someone might think I am talking to a human the way I talk to Siri, which is to say, talking to them like an asshole. I do not tell Siri please and thank you, nor do I use deferential phrases like “could you” or “would you mind.” I talk to Siri the way I talk to a cable company’s phone tree.

I have not done an exhaustive study of this subfield of HCI, nor am I a practitioner myself, but a quick look at some of the emerging textbooks and research in what is being called “conversational interfaces” is immediately telling. Michael McTear’s modestly titled The Dawn of the Conversational Interface [PDF] opens with an introduction describing the 2013 movie Her. He does not use this reference as a cautionary tale, but as a simple demonstration of what conversational interfaces may soon become. Her is aspirational in a way that makes you hope that McTear stopped watching the movie before the third act. Unmentioned is the romantic relationship this male character has with this feminine AI who is, one must assume, both his secretary and lover. (More on those considerations here.)

McTear’s writing is one example of a fairly common relationship between fictional depictions of technology and very real attempts at making that technology come to life. Engineers and scientists regularly appeal to the fears and hopes depicted in film as a way of building a mythos around their research program. Cyber security research promises to prevent the devastation depicted in action movies, public funding for road infrastructure will deliver us into a Jetsons-like techno utopia (see previous link), and Siri will eventually fall in love with you.

If there is any prescription to be had here it is the work of Philip Agre, Phoebe Sengers, and others who advocate for the integration of critical theory into computer science and similar fields. Agre’s argument, that computer scientists would do better work if they were critical of the basic assumptions of their field, is immediately relevant here. Is boss/assistant really the best relationship we could have with our devices? Is this nothing more than a softening of the master/slave terminology [PDF] that still lingers in mechanical engineering and computer science? Are we still beholden to the idea of the robotnik, the Czech word for slave that, through the translation of Karel Capek’s play R.U.R., gave us the English word robot?

Perhaps, deep down, we are reticent to bark orders at our phones because we sense the echoes of arbitrary power in the construction of our machine-readable verbal commands. The embarrassment we feel is a sort of discomfort with being a master, not just with looking or sounding awkward. That makes the commands in private seem even worse, if I am honest. At least open and notorious commands are exactly what they appear to be. Acting the master in private is a desire for veiled power which, to my mind, seems more sinister.

If Microsoft’s ill-fated Tay was a bellwether of the racist invective endemic to the internet, then the cheery submissiveness of our digital assistants is something even darker. Certainly what we say and do to software is (for now) nowhere near as important as what we say and do to our fellow humans, but we should think deeply about what we are indulging in when we talk to computers, and whether these practices and relationships are a net positive for a society that could use fewer power differentials. Just because we have talking computers doesn’t mean we’re any closer to the utopic visions we see on TV.

David is on Twitter: @Da_banks


We should have seen this coming. The end of the world as we know it was announced today, unceremoniously with a blog post. Scripps Institution of Oceanography is reporting that we’ve definitely surpassed the 400 parts-per-million threshold for atmospheric CO2. It is at this concentration that a cascade effect is triggered and acidic seas rise to new heights, extinction rates increase, and food systems are permanently disrupted. More on all of that here.

What I want to focus on briefly is how we grapple with this enormous problem. It has been said before but it is worth saying again today: spurring people to act on climate change is difficult because the consequences are distributed and any solutions are really only best guesses to what is an enormously complicated question. Not only is it impossible to instantly halt all fossil fuel usage, it is difficult to even agree on how to scale it down. This is not a wishy-washy centrist political problem: should nations that have been plundered by colonial rule be forced into slowing down their own domestic nation-building projects? Should Europe and North America take on a greater share of the responsibility to account for historical advantages?

I am no expert in these matters; I only bring up this complication because it runs counter to the clear-cut narrative that U.S. environmentalism usually puts forward: that carbon neutral or even carbon negative futures are possible and it is only a matter of weak wills and greed that keep the smokestacks churning. Climate change is often seen as a problem to be solved with equal parts technology and regulation, but I would contend that an equal if not bigger issue is how we talk about climate change.

Consider, by way of extreme example, how the Green Party’s Jill Stein talks about “Protecting Mother Earth,”

Lead on a global treaty to halt climate change. End destructive energy extraction: fracking, tar sands, offshore drilling, oil trains, mountaintop removal, and uranium mines. Protect our public lands, water supplies, biological diversity, parks, and pollinators. Label GMOs, and put a moratorium on GMOs and pesticides until they are proven safe. Protect the rights of future generations.

In theory, the policy positions outlined on a campaign’s web site are not there to make an argument so much as they are there to help you decide if your values line up with those of the person running for office. In truth though—and this should be especially true for a third party candidate that needs to convince people to vote for a long shot—every time you have a voter’s attention you should be trying to convince them to change their mind and vote for you, or give them more fodder for an internal dialog about why they’ve made the right choice to vote for you. Stein’s platform is emblematic of a larger problem of environmental social movements as of late: there is no shortage of organizational energy, but there is still no clear way forward.

Climate change inaction is essentially a problem of public engagement because there are very bright people with very clear agendas but nothing really seems to be taking hold as forcefully as the situation demands. And no wonder: what does it even mean to “halt climate change” at this point? What is an electorate signing up for when they choose a government that commits to protecting biological diversity? I know someone knows—there’s probably even precedent for it—but I’m a fairly educated person on this topic and if I were faced with having to answer that question in order to gain entry to the last boat leaving North Miami Beach during the supermoon, I would probably end up clinging to a classy sectional sofa somewhere 100 miles north of Cuba.

The actual answers to the questions I pose above are beside the point. Thinking of climate change as a problem of argumentation means that there is something fundamentally wrong with how we talk about confronting the issue. After reaching this ominous milestone, it seems likely that those who are convinced climate change is real will be talking about it. It is also likely that a lot of that talk will center around how thick-headed people are for not believing in climate change, not becoming single issue voters about it, or not doing enough to reduce their own carbon footprints. I do not think that sort of talk is helpful anymore, if it ever was.

To answer my glib titular question: there has to be a renewed commitment to meeting people where they are at. Granted, where people are at is bad: not nearly enough people in the U.S. believe in climate change (a recent poll pegs it at 30%), but perhaps the problem is that we need people to “believe in” impending global catastrophe. Resolute and determined commitment to facing a danger is only one of many reactions, and unfortunately willful ignorance is another. Instead of calling anyone that doesn’t believe in climate change an idiot, there needs to be a wide range of rhetorical strategies. The general shift towards talking about climate issues in economic terms is probably a good start. (Martin O’Malley’s often-repeated phrase “Climate change is the best job opportunity we’ve seen in 100 years” is probably a bit much though.)

I definitely would rather live in a world where climate change was treated as the pending disaster that it is, but instead I live in one where it is largely ignored or outright denied. I suspect it is time to stop expecting people to be persuaded by evidence, even when it has literally arrived at their front door in the form of regularly recurring floods or droughts. Climate change is not a problem primarily defined by not enough people knowing the science. It is a political problem that requires persuasion by multiple means. The oil company villains and the “if every single person just…” rhetoric seem to have reached as many people as they are going to reach and we have to change tactics. That is, of course, if we stay committed to the idea that there is still time to persuade people at all. If that is not the case then perhaps environmentalists must consider going in the opposite direction and, rather than appealing to existing governmental bodies, step up the rate at which they take it upon themselves to forcefully close uranium mines and fracking sites. I don’t know if there’s a third option.

David is on Twitter: @da_banks

Image credit Tink Tracy.


A lot has been said about the removal of the 3.5mm headphone jack from the iPhone 7 but one essay that still has me thinking comes from The Verge’s Nick Statt who notes that Apple’s latest phone model is a “gift” to accessory makers. By removing an often-used port, Apple has opened the floodgates and soon we’ll be caught in a torrential downpour of over-priced adapters and dongles meant to keep your favorite headphones in use. Even more adapters will make it possible to listen and charge at the same time. Eventually, after replacing or dongling half of the things that connect to your phone everything will go back to normal.

Why might it behoove Apple to make Christmas come early for Belkin? The answer may actually come from an old observation about cities and the contradictions of capital accumulation. Sales of the iPhone have begun to plateau which is scary if that’s the product that gives you two thirds of your revenue. It has, in a sense, reached a very particular kind of surplus accumulation problem.

Surplus accumulation is what it sounds like: the value that remains after capitalists pay for all the equipment and labor they needed to make their money in the first place. If surplus accumulation is democratically collected and managed it can build infrastructure that benefits everyone. If surplus accumulation stays in the hands of capitalists they tend to build things for themselves: mansions, high-rise condominiums, and yachts. The spice must flow, however, and even the most extravagant personal items cannot spend all of the accumulated wealth. That’s when luxury condos, sports stadiums, and convention centers become absolutely necessary for “the economy.” Old things get demolished to make way for the new and the surplus goes back down to manageable levels. Without these massively expensive things and the debt they produce our massively lopsided economy would tip over.

The iPhone is a commodity –a discrete object that can be bought and sold in markets for a specific price– but it has become a little bit more than that. We often talk about “ecosystems” surrounding very popular pieces of consumer technology. It doesn’t take much to get financially (and maybe a little emotionally) invested in these ecosystems, and companies like Apple love ecosystems because they make it harder to switch to a competitor.

All of this you probably know but if we think about that old Silicon Valley saw in terms of surplus accumulation, something interesting happens: more than just a desperate grab to keep you locked into an ecosystem of accessories and apps, the removal of the headphone jack can be seen as a way to increase purchasing of a wider range of products and services. It is a shock to the system that spurs more spending.

Of course, the stakes are much smaller (no one is losing their home to make way for Bluetooth headphones) but the logic is the same here. Smartphones, because so much of our other purchasing habits pass through them, have become small economies that are subject to the same sorts of planned shocks and destruction that our cities experience. I suppose it takes a certain kind of courage to break something so that your customers are encouraged to buy your over-priced headphones.

David is on Twitter.


In 2011 I was looking for new ways to play with ideas. I had just finished my first semester in graduate school and while class assignments kept me busy and the conversations I had with my fellow graduate students were deeply rewarding I wanted something else. Something a little more public and a lot more focused on writing. I was surprised that, at least in my program, there was very little attention paid to the craft of writing. How to convey a complicated idea in an elegant way, or how to identify your audience, were absent from even the most pragmatic training. (This is not true in all programs and less true for that particular department now, than it was five years ago.) In college I frequently read and shared Sociological Images articles and one day, while procrastinating on my very first round of graduate school finals I noticed that The Society Pages had multiple blogs. Instead of writing my finals I wrote a short thing about drones and video editing suites and before I knew it I was regularly contributing to this blog. In a few short years I was submitting things to bigger publications, and now I’m an editor here with Jenny and wow what a wild ride.

Last month I read Kelly Conaboy’s Blog, You Idiots and it got me thinking about the process, style, and frequency of my own writing. I’ve been editing more and writing less, and I’d like to change that. Jenny and I have been editing guest posts but the editing that has taken up most of my time is editing my own stuff. There’s probably half a dozen would-be essays in my Cyborgology folder that end right about here, in the second paragraph, where the hook or thesis should go. I think this means I need to get back to writing what Conaboy called for: “just a little thing that you read and enjoy.”

I will be writing more, soon, but before I rearrange the pace of my work schedule to accommodate that promise I thought I would put down into words a workshop I have run twice now on helping academics write for more public audiences. The intention here is to identify some of the common problems academics have in writing engaging, thoughtful, and relatively short essays. Much of it comes down to pacing and working with others.

Digesting an Idea

As an editor one of the more frustrating things you have to do is say no to a great idea poorly constructed. You can see what the author wants to get across, and it is smart, good, and important, but it is in an indigestible form. I like gastronomic metaphors for writing and reading because they highlight the destructive process of reading and writing. Eating involves lots of destruction: you take something that was outside of you, prepared by yourself or someone else, and literally make it a part of you by destroying it. If all goes well, it leaves you in all the right ways and your body is nourished. If something is indigestible it gets regurgitated or quickly passes from one side to the other. As a writer you want your ideas to nourish, not unceremoniously come and go.

To do that you need to give the reader ample opportunities to make your idea their own. The easiest way to do that is through examples. Any complex, abstract idea needs an example. Similarly, depictions of very specific or obscure processes or events need to be rendered relatable or generalized. I like to think of my writing as instrumental. I constantly ask myself “What will this essay do for others?” “How will it give clarity to a confusing concept?” “What new considerations am I sensitizing my audience to?” “What stance or position am I making easier or more difficult to hold?” Most importantly though, your ideas must survive your readers’ mental digestion. If a reader doesn’t totally understand one element of your argument does the whole thing fall apart? Your idea should be robust enough to withstand mild misunderstanding or misinterpretation. Very delicate and intricate arguments are crucial to every discipline but the public essay is not necessarily the appropriate venue.

The Three Essays You Will Meet

The essay (or blog post) lets you convey one idea really well, and leaves little room for anything else. The approach to writing in this genre I am about to describe is, admittedly, reductive and formulaic. (For this reason I’ll limit my examples to my own writing.) Nowhere else in my professional or personal life do I like to put such a wide range of things into such large categories (case in point: all of my degrees have “studies” at the end) but this rubric is so useful for beginning writing that I cannot not share it. The approach is the following: every academic that has ever written an essay under 3000 words for a more general audience has really only written one of three different kinds of essays. You must pick one of these three kinds in order to write a compelling essay that has a definitive argument and structure. Those three kinds are the following:

  • Theory or concept X will help you understand Y event.
  • Obscure debate about X, between A and B, should be important to Y.
  • Summary of original research X.

That last one is fairly straightforward. One of my most-read essays on Cyborgology, “A Brief Summary of Actor Network Theory”, is of this kind. The utility of such essays is obvious: huge theories are, by their nature, complicated, and if you can strike the right balance between length and depth (going relatively deep into the research while keeping the length of the essay relatively short) you’re golden.

The second kind is probably the most complicated but reaps the biggest rewards. Disagreements about the form and function of fascism are important to American voters in 2016. The way Sherry Turkle deploys her own argument, and the critiques that have been levied against it, are important for workers in the companies that take her ideas to heart. To pull this one off you generally have to do a few things in just the right order:

  1. Introduce a problem that Y is experiencing, in the terms that it is popularly understood.
  2. Slowly incorporate X, showing how they are structurally similar or somehow related.
  3. Give solid summaries of both A and B’s positions, followed by important disagreements and overlaps.
  4. Bring Y back in and demonstrate how something is easier to understand or predict with the new information presented by X, A, and B.
  5. Rearticulate Y’s problem in X’s terms. Perhaps you take a side and say B is better than A or A is probably a better fit for Y’s specific problem.
  6. Conclude with some suggestions as to how we might use X productively in the future or avoid Y.

The first kind is by far the easiest and most straightforward. Sometimes I call this “playing the exorcist.” In cinematic depictions of exorcisms, the priest typically has to “name the demon” to cast it out, and will frequently invoke another name (e.g. Jesus Christ) to work against the demon. Academics serve many functions but one of their biggest public services is giving a name to a particular phenomenon that was once difficult to articulate. Microaggressions and intersectionality are good examples. By giving names to things we might cast them out or simply do work on them. Casting Y event as an instance of X concept or theory can seem simplistic but this is precisely what makes for a good essay. The argument is clear and there is a good chance most people just do not see what you do. The general outline for this kind of essay is the following:

  1. Introduce Y
  2. Telegraph that Y might be better understood in terms of (or as) X
  3. Summarize the coverage or popular understanding of Y
  4. Explain what is missing from the coverage or popular understanding.
  5. Introduce X
  6. Apply X to Y.
  7. Offer prescriptions and/or conclusions

The most straightforward applications of this kind of essay usually have a question for a title. For example: “What does Debord’s Society of the Spectacle say about political conventions?” Something more subtle may take a declarative tone: “Why We Can Wait for Amazon’s Drones.” The Society Pages’ There’s Research on That! is nothing but this kind of writing, which demonstrates just how powerful, useful, and flexible this approach can be.

Learn the World of Editors

The world of short form nonfiction writing involves a lot of rejection, and even when you do get something accepted, there can be heavy-handed editing. Learn to love this. I always try to remember that it is a great honor that someone has taken my ideas into their hands and wants to polish them and make them better on their own terms. You can disagree with those terms, but if you want to push back on a particular edit, you must first make sure you understand and are willing to meet the editor on their terms. The editor knows the audience you’re writing for.

Secondly, break the habit of explaining your essay to the editor in emails, unless explicitly asked. If you feel like you have to summarize or explain what you are trying to do in the essay, then chances are you have not written the best version of it. I always try to keep conversations about essay content and ideas in the comments of the document and keep email to more logistical issues of publishing schedules, accompanying image selection, and of course payment.

Finally, and this is where I’m probably least helpful because I’m still learning this art, academics have to learn how to pitch. A pitch is not an abstract. A pitch is not just about the content of the essay because it should also contain an explanation of your intended audience and a little bit about yourself and why you’re the one to write it. Try to put yourself in the editor’s shoes and think about all the considerations they need to make: does this fit into the publication? Who wants to read this? How does your essay contribute to an ongoing conversation or how does it start a new one? Above all though, what goes into a pitch varies according to publication, the nature of what you intend to submit, and your relationship with that publication or individual editor.

Letting Go

The glacial pace of academia is generally a good thing. There is a great deal of danger in getting it wrong, so careful evaluation and reflection are important. Writing for public audiences should be done carefully too, no doubt, but relevance also matters. If you have something to say about the VMAs, a presidential debate, or the new iPhone, you have a fuzzy but certain deadline. That means knowing when to let go of your text.

When I go over this content in workshops, this is where I get the greatest pushback and concern. As academics we are trained not only to workshop and rewrite something multiple times, but to then subject our writing to review that frequently asks for still more revisions. Nothing is fast about our writing process, and things should be just right before we hit send. Certainly your work should be polished when you send it to your editor, but remember that the editor is not a reviewer. Their priorities are different because they are crafting the authorial voice of a publication, not gatekeeping a discipline or field. Your work, generally, does get a second set of eyes to make the words flow. Make sure your writing is compelling and understandable but also learn to trust others with your work. I can’t emphasize this enough: trust.


How does technology mediate belonging in an era of both rising connectivity and xenophobia? The rhetoric of globalization would have us believe we are entering a new era of integration facilitated by advances in transportation and information technology, while racist populism is finding currency unseen since the Second World War. These perspectives represent very different views of how the world should work, and reflect one’s position and ability to navigate multiple, entangled systems of belonging, and the technologies making such movement possible.

We order our world with technology, in ways so mundane they escape detection without effort to separate representation of the world from the world itself. This is difficult because language itself is a sort of representational technology. Think of language as the software used in “hardware” (like stop signs or birth certificates) designed to order society.

The last decade has seen dozens of new kinds of hardware and software created to manage individuals’ relationships with the state. What does it mean to “belong” in a state? Is it enough to have legal credentials, or are there cultural and social dimensions that make belonging more than a value in a spreadsheet? Rather than thinking of belonging as a thing one has or does not have, we should think of it as a constantly evolving process. Systems of belonging operate by ordering relations between spaces, things and people, and, importantly, what counts as “people”. This latter distinction is the bedrock from which all other relations are imagined. To be a “person” signifies a special status, the right to have rights. One cannot own things if one does not own one’s self, and one cannot own one’s self if one is not a person.

Belonging has a context in which it operates – a domain where definitions are generally agreed upon, with a stable if evolving framework of relations between people and things, and a means of negotiating them. In the context of globalization and xenophobia, that context is the state, and the means of negotiating these ties are technologies of belonging.

Language and sight are primary technologies of belonging – what language you speak, and how you speak it signals others as much as appearance and behavior can, and all have a long history in the service of the state in determining property and citizenship rights. Every time you hear about a legislature mulling the idea of recognizing an “official language”, you are witnessing an aspect of belonging being crafted.

All persons are entangled with state systems of belonging, though in practice the reach of such systems is uneven and irregular. Many experience dissonance between representations and realities of social relations, and there is often a connection between asymmetrical distributions of power and these dissonances. In present-day liberal democracies, the promise of belonging is a promise of equal protection and access to the law. Between promise and reality, however, is the space where technologies of belonging are employed, co-opted, or subverted, as individuals and groups seek to redefine – or escape – actually existing relations in their respective states.

To answer how technology mediates belonging in a state system, we should begin by outlining how individuals encounter and interact with such technologies. Technologies of belonging are both mundane and extraordinary, depending on context. They are mundane in that our shared acceptance of their efficacy (if not always legitimacy) allows other systems to function. They are extraordinary in that they become most visible and powerful at the limit of their domain – the border – where perhaps the most visible and powerful technology of belonging, the border security apparatus (hereafter “BSA”), operates.

US BSAs are sorting machines for belonging to or in a liberal democratic state, yet are often neither liberal nor democratic in practice. Ostensibly designed to protect the civil and property rights of US citizens while securing authorized movement of goods and persons in and out of national territory, they often suspend or violate these rights while proving at best ineffective at preventing movement of unauthorized goods and persons across the border, and at worst actively making the border region less secure.

This can be better understood if we separate the framing of BSAs from actually existing BSAs, or more simply, how we talk about them from how they actually work.

BSAs recognize a kind of formal belonging to one or more states – citizenship – established through birthplace, family relation or both, documented with context-specific forms of identification, such as passports and visas, which also affect one’s relationships with states to which one does not belong, with the intent of creating unambiguous ties between persons and states.

Increasingly, however, actually existing relations between people and states may be more precarious than ever, with or without documentation of citizenship. Struggles for civil and property rights are increasingly invisible to persons in different networks of social relations until they reach a breaking point, often taking the shape of direct actions that seek to reframe the social order by co-opting, disrupting, or defying existing technologies of belonging. These networks of social relations may be deeply entangled in physical space and dependency, yet almost entirely isolated from each other in how they perceive the status quo. And when the status quo becomes an existential threat, some kind of direct action challenging it is almost guaranteed.

Belonging-as-tourist, as-migrant, as-refugee, and even as-citizen are filtered through the lens of the “software” of belonging, and ordered through its “hardware” – passports, visas, customs declarations, and so on. However, these formalized representations of belonging are often mediated as much by history, economics, and the subjectivity of the person assessing the situation as by an individual’s safety or documentation.

For example, non-citizens may be required to secure visas when travelling abroad, depending on country of origin and destination, and that visa may also require a certain level of financial liquidity, good health, or colonial history to be granted. The conditions of the visa may also dictate whether employment is allowed – or required – as a condition. Individuals who cannot meet the requirements of the visa process but are recognized to face imminent threats to their lives or rights may also be granted refugee status in order to achieve recognized entry, yet what is considered an imminent threat varies widely from context to context, again often mediated by factors with little connection to the individual’s motivation for travel.

Speaking, looking, or acting in ways deemed threatening to a BSA may cause claims of belonging—rooted in formal citizenship or no—to be questioned or denied. Using language associated with threats to the state—perhaps Arabic, Urdu, or a turn of phrase taken out of context—can entangle you just as easily as having the wrong color skin might at a traffic stop dozens of miles from the actual border.


Ironically, increasingly militarized border security practices over the last three decades have seemingly had the opposite of their intended effect, with little evidence showing drops in undocumented migration, drug trafficking, or terrorism. Such practices, which restrict movement of persons while facilitating trade, may have strengthened organized crime networks while destabilizing local economies, forcing many living in parts of the world where agriculture is the only means of earning a living either to relocate to a place where they can sell their labor, or to become criminals themselves as a matter of survival. For many, the difference between being a refugee and a criminal is as little as how long it has been since they have eaten, or whether the odds of being killed by a gang for refusing to join are greater than the risks of trying to flee.

Often, the difference between an economic migrant and refugee is so minor as to be irrelevant, the line between organized crime and the state indistinguishable, and to remain in their home country a death sentence.

Such conditions render BSAs ineffective, as they assume non-citizens will be deterred by the threat of detention or deportation. In practice, however, the consequences of falling afoul of a BSA pale in comparison to many non-citizens’ motivations for crossing, while citizens find that their citizenship does not always protect them.

Though a seemingly incongruous juxtaposition, the ascents of neoliberal and xenophobic perspectives are two sides of the same coin, a product of how technologies mediate belonging in transnational and national social orders. Destabilization created by technologies of belonging facilitating (some) mobilities has generated powerful challenges to existing social orders, and also problematic yearnings for a return to an oppressive past free from such challenges. When the framing of social relations fails entire subsets of the population on an existential level, renegotiation of that framing, arising from that breakdown, is almost certain.

Breakdown between the framing of belonging and its actual practice need not be a story of failure, however. Dissonances between existing and imagined social relations come to light in practice, and they are the data we need to make substantive changes both to the technologies we use to order our world and to the relations between the actually existing people who live in it. Failure is the absence of dialogue, and our best defense is instead to embrace it.

Justin Quinn is a PhD student in anthropology at the University of Florida. His research interests include the anthropologies of the translocal, the state, infrastructure, and development. He has worked in Yucatan on the sustainability of tourism, in Southwest Florida on the role of non-profit organizations in local development, and was a founding researcher of the Sarasota County Water Oral History Project. He is currently researching how various publics represent and practice infrastructure development in México.

Thiel - Girard

During the week of July 12, 2004, a group of scholars gathered at Stanford University, as one participant reported, “to discuss current affairs in a leisurely way with [Stanford emeritus professor] René Girard.” The proceedings were later published as the book Politics and Apocalypse. At first glance, the symposium resembled many others held at American universities in the early 2000s: the talks proceeded from the premise that “the events of Sept. 11, 2001 demand a reexamination of the foundations of modern politics.” The speakers enlisted various theoretical perspectives to facilitate that reexamination, with a focus on how the religious concept of apocalypse might illuminate the secular crisis of the post-9/11 world.

As one examines the list of participants, one name stands out: Peter Thiel, not, like the rest, a university professor, but (at the time) the President of Clarium Capital. In 2011, the New Yorker called Thiel “the world’s most successful technology investor”; he has also been described, admiringly, as a “philosopher-CEO.” More recently, Thiel has been at the center of a media firestorm for his role in bankrolling Hulk Hogan’s lawsuit against Gawker, which outed Thiel as gay in 2007 and whose journalists he has described as “terrorists.” He has also garnered some headlines for standing as a delegate for Donald Trump, whose strongman populism seems an odd fit for Thiel’s highbrow libertarianism; he recently reinforced his support for Trump with a speech at the Republican National Convention. Both episodes reflect Thiel’s longstanding conviction that Silicon Valley entrepreneurs should use their wealth to exercise power and reshape society. But to what ends? Thiel’s participation in the 2004 Stanford symposium offers some clues.   

Thiel’s connection to the late René Girard, his former teacher at Stanford, is well known but poorly understood. Most accounts of the Girard-Thiel connection have described the common ground between them as “conservatism,” but this oversimplifies the matter. Girard, a French Catholic pacifist, would have likely found little common ground with most Trump delegates. While aspects of his thinking could be described as conservative, he also described himself as an advocate of “a more reasonable, renewed ideology of liberalism and progress.” Nevertheless, as the Politics and Apocalypse symposium reveals, Thiel and Girard both believe that “Western political philosophy can no longer cope with our world of global violence.” “The Straussian Moment,” Thiel’s contribution to the conference, seeks common ground between Girard’s mimetic theory of human social life – to which I will return shortly – and the work of two right-wing, anti-democratic political philosophers who were in vogue in the years following 9/11: Leo Strauss, a cult figure in some conservative circles, and a guru to some members of the Bush administration; and Carl Schmitt, a onetime Nazi who has nevertheless been influential among academics of both the right and the left. Thiel notes that Girard, Strauss, and Schmitt, despite various differences, share a conviction that “the whole issue of human violence has been whitewashed away by the Enlightenment.” His dense and wide-ranging essay draws from their writings an analysis of the failure of modern secular politics to contend with the foundational role of violence in the social order.

Thiel’s intellectual debt to Girard’s theories has a surprising relevance to some of his most prominent investments. For anyone who has followed Thiel’s career, the summer of 2004 – the summer when the “Politics and Apocalypse” symposium at Stanford took place – should be a familiar period. About a month afterward, in August, Thiel made his crucial $500,000 angel investment in Facebook, the first outside funding for what was then a little-known startup. In most accounts of Facebook’s breakthrough from dormroom project to social media empire (including that offered by the film The Social Network), Thiel plays a decisive role: a well-connected tech industry figure, he provided Zuckerberg et al, then Silicon Valley newcomers, with credibility as well as cash at a key juncture. What made Thiel see the potential of Facebook before anyone else? We find his answer in an obituary for René Girard (who died in November 2015), which reports that Thiel “credits Girard with inspiring him to switch careers and become an early, and well-rewarded, investor in Facebook.” It was the French academic’s mimetic theory, he claims, that allowed him to foresee the company’s success: “[Thiel] gave Facebook its first $500,000 investment, he said, because he saw Professor Girard’s theories being validated in the concept of social media. ‘Facebook first spread by word of mouth, and it’s about word of mouth, so it’s doubly mimetic,’ he said. ‘Social media proved to be more important than it looked, because it’s about our natures.'” On the basis of such statements, business analyst and Thiel admirer Arnaud Auger has gone so far as to call Girard “the godfather of the ‘like’ button.”

In order to make sense of how Girard informed Thiel’s investment in Facebook, but also how he has shaped Thiel’s ideas about violence, we need to examine the basic tenets of Girard’s thought. Mimetic theory has not been widely applied in social analyses of the internet, perhaps in part because Girard himself had essentially nothing to say about technology in his published oeuvre. Yet the omission is surprising given mimetic theory’s superficial resemblance to the more often discussed “meme theory,” which similarly posits imitation as the basis of culture. Meme theory began with Richard Dawkins’s The Selfish Gene, was codified in Susan Blackmore’s The Meme Machine, and has been applied broadly, in popular and scholarly contexts, to varied internet phenomena. Indeed, the traction achieved by the term “meme” has made most of us witting or unwitting adopters of meme theory. Yet as Matthew Taylor has argued, Girard’s account of mimeticism has significant theoretical advantages over Dawkins-derived meme theory, at least for anyone interested in making sense of the socio-political dimensions of technology. Meme theory tends to reify memes, separating them from the social contexts in which their circulation is embedded. Girard, in contrast, situates imitative behaviors within a general social theory of desire.

Girard’s theory of mimetic desire is simple in its basic framework but has permitted complex, detailed analyses of a wide range of cultural and social phenomena. For Girard, what distinguishes desire from instinct is its mediated form: put simply, we desire things because others desire them. There is some continuity with familiar strands of psychoanalytic theory here. I quote, for example, from Slavoj Žižek: “The problem is, how do we know what we desire? There is nothing spontaneous, nothing natural, about human desires. Our desires are artificial. We have to be taught to desire.” Compare this with Girard’s statement: “Man is the creature who does not know what to desire, and who turns to others in order to make up his mind. We desire what others desire because we imitate their desires.” For Girard (and here he differs from psychoanalysis), mimesis is the process by which we learn how and what to desire. Any subject’s desire, he argues, is based on that of another subject who functions as a model, or “mediator.” Hence, as he first asserted in his book Deceit, Desire, and the Novel, the structure of desire is triangular, incorporating not only a subject and an object, but also, and more crucially, another subject who models any subject’s desire. Moreover, for Girard, the relation to the object of desire is secondary to the relation between the two desiring subjects – which can eclipse the object, reducing it to the status of a prop or pretext.

The possible applications of this thinking to social media in particular should be relatively obvious. The structures of social platforms mediate the presentation of objects: that is, all “objects” appear embedded in, and placed in relation to, visible signals of the other’s desire (likes, up-votes, reblogs, retweets, comments, etc.). The accumulation of such signals, in turn, renders objects more visible: the more mediated through the other’s desire (that is, the more liked, retweeted, reblogged, etc.), the more prominent a post or tweet becomes on one’s feed, and hence the more desirable. Desire begets desire, much in the manner that Girard describes. Moreover, social media platforms perpetually enjoin users, through various means, to enter the iterative chain of mimesis: to signal their desires to other users, eliciting further desires in the process. The algorithms driving social media, as it turns out, are programmed on mimetic principles.
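The mimetic feedback loop described above (signals of desire raise visibility, and visibility elicits further desire) can be made concrete with a toy simulation. Everything here is hypothetical: the engagement weights, the rank-decay probability, and the sample posts are illustrative assumptions, a sketch of the dynamic rather than any platform's actual ranking code.

```python
import random

random.seed(42)

# Hypothetical weights: signals of others' desire, some weighted
# more heavily than others (all values are illustrative).
ENGAGEMENT_WEIGHTS = {"likes": 1.0, "comments": 2.0, "shares": 3.0}

def score(post):
    """Visibility score: a weighted sum of signals of the other's desire."""
    return sum(ENGAGEMENT_WEIGHTS[k] * post[k] for k in ENGAGEMENT_WEIGHTS)

def rank_feed(posts):
    """The most-desired posts surface first, making them more desired still."""
    return sorted(posts, key=score, reverse=True)

def simulate_round(posts, viewers=100):
    """Each viewer is most likely to engage with the top-ranked post,
    so desire begets desire: early leads tend to compound."""
    ranked = rank_feed(posts)
    for _ in range(viewers):
        for pos, post in enumerate(ranked):
            # Engagement probability decays with rank position.
            if random.random() < 0.5 / (pos + 1):
                post["likes"] += 1
                break

posts = [
    {"id": "a", "likes": 5, "comments": 1, "shares": 0},
    {"id": "b", "likes": 4, "comments": 1, "shares": 0},
]
for _ in range(10):
    simulate_round(posts)
```

Run over a few rounds, whichever post starts with even a small lead tends to extend it: the rich-get-richer dynamic that mimetic theory would predict, produced entirely by ranking on accumulated signals of others' desire.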

Yet it is not simply that the signaling of desire (for example, by liking a post) happens to produce relations with others, but that the true aim of the signaling of desire through posting, liking, commenting, etc. is to produce relations with others. This is what meme theory obscures and mimetic theory makes clear: memes, far from being autonomous replicators, as meme theory would have it, function entirely as mediators of social relations; their replication relies entirely on those relations. Recall that for Girard, the desire for any object is always enmeshed in social linkages, insofar as the desire only comes about in the first place through the mediation of the other. A reading of Girard’s analyses of nineteenth-century fiction or of ancient myth suggests that none of this is at all new. Social media have not, as the popular hype sometimes implies, altered the structures that underlie social relations. They merely render certain aspects of them more obvious. According to Girard, what stands in the way of the discovery of mimetic desire is not its obscurity or complexity, but the seeming triviality of the behaviors that reveal it: envy, jealousy, snobbery, copycat behavior. All are too embarrassing to seem socially, much less politically, significant. For similar reasons, to revisit Thiel’s remark, “social media proved to be more important than it looked.”

But so far, I have been expanding on what Thiel himself has said, which others have echoed. However, what accounts of Girard’s role in Thiel’s Facebook investment never mention is the other half of Girard’s theory, the half that Thiel was at Stanford to discuss in 2004: mimetic violence, which, for Girard, is the necessary corollary of mimetic desire.

Thiel invested in and promoted Facebook not simply because Girard’s theories led him to foresee the future profitability of the company, but because he saw social media as a mechanism for the containment and channeling of mimetic violence in the face of an ineffectual state. Facebook, then, was not simply a prescient and well-rewarded investment for Thiel, but a political act closely connected to other well-known actions, from founding the national security-oriented startup Palantir Technologies to suing Gawker and supporting Trump.

According to Girard’s mimetic theory, humans choose objects of desire through contagious imitation: we desire things because others desire them, and we model our desires on others’ desires. As a result, desires converge on the same objects, and selves become rivals and doubles, struggling for the same sense of full being, which each subject suspects the other of possessing. The resulting conflicts cascade across societies because the mimetic structure of behavior also means that violence replicates itself rapidly. The entire community becomes mired in reciprocal aggression. The ancient solution to such a “mimetic crisis,” according to Girard, was sacrifice, which channeled collective violence into the murder of scapegoats, thus purging it, temporarily, from the community. While these cathartic acts of mob violence initially occurred spontaneously, as Girard argues in his book Violence and the Sacred, they later became codified in ritual, which reenacts collective violence in a controlled manner, and in myth, which recounts it in veiled forms. Religion, the sacred, and the state, for Girard, emerged out of this violent purgation of violence from the community. However, he argues, the modern era is characterized by a discrediting of the scapegoat mechanism, and therefore of sacrificial ritual, which creates a perennial problem of how to contain violence.

For Girard, to wield power is to control the mechanisms by which the mimetic violence that threatens the social order is contained, channeled, and expelled. Girard’s politics, as mentioned above, are ambiguous: he criticizes conservatism for wishing to preserve the sacrificial logic of ancient theocracies, and liberalism for believing that by dissolving religion it can eradicate the potential for violence. However, Girard’s religious commitment to a somewhat heterodox Christianity is clear, and controversial: he regards the non-violence of the Jesus of the gospel texts as a powerful exception to the violence that has been in the DNA of all human cultures, and an antidote to mimetic conflict. It is unclear to what degree Girard regards this conviction as reconcilable with an acceptance of modern secular governance, founded as it is on the state’s monopoly on violence. Peter Thiel, for his part, has stated that he is a Christian, but his large contributions to hawkish politicians suggest he does not share Girard’s pacifist interpretation of the Bible. His sympathetic account, in “The Straussian Moment,” of the ideas of Carl Schmitt offers further evidence of his ambivalence about Girard’s pacifism. For Schmitt, a society cannot achieve any meaningful cohesion without an “enemy” to define itself against. Schmitt and Girard both see violence as fundamental to the social order, but they draw opposite conclusions from that finding: Schmitt wants to resuscitate the scapegoat in order to maintain the state’s cohesion, while Girard wants (somehow) to put a final end to scapegoating and sacrifice. In his 2004 essay, Thiel seems torn between Girard’s pacifism and Schmitt’s bellicosity.

The tensions between Girard’s and Thiel’s worldviews run deeper, as a brief overview of Thiel’s politics reveals. As a libertarian, he has donated to both Ron and Rand Paul, and he has also supported Tea Party stalwarts including Ted Cruz. George Packer, in a 2011 New Yorker profile of Thiel, reports that his chief influence in his youth was Ayn Rand, and that in political arguments in college, Thiel fondly quoted Margaret Thatcher’s claim that “there is no such thing as society.” As Packer notes, few claims could be more alien to his mentor, Girard, who insists on the primacy of the collective over the individual and dedicated several books to debunking modern myths of individualism. Indeed, Thiel’s libertarian vision of the heroic entrepreneur standing apart from society closely resembles what Girard derided in his work as “the romantic lie”: the fantasy of the autonomous, self-directed individual that emerged out of European Romanticism. Girard went so far as to suggest replacing the term “individual” with the neologism “interdividual,” which better conveys the way that identity is always constructed in relation to others.

In a seemingly Ayn-Randian vein, Thiel likes to call tech entrepreneurs “founders,” and in lectures and seminars has compared startups to monarchies. He envisions “founders” in mythical terms, citing Romulus, Remus, Oedipus, and Cain, figures discussed at length in Girard’s analyses of myth. Thiel’s pro-monarchist statements have been parsed in the media (and linked to his support for the would-be autocrat Trump), but without noting that for a self-proclaimed devotee of René Girard to advocate for monarchy carries striking ambiguities. According to Girard’s counterintuitive analysis, monarchical power is the obverse side of scapegoating. Monarchy, he hypothesizes, has its origins in the role of the sacrificed scapegoat as the unifier and redeemer of the community; it developed when scapegoats managed to delay their own ritual murder and secured a fixed place at the center of a society. A king is a living scapegoat who has been deified, and can become a scapegoat again, as Girard illustrates in his reading of the myth of Oedipus (Oedipus begins as an outsider, goes on to become king, and is ultimately punished for the community’s ills, channeling collective violence toward himself, and returned to his outsider status).

If Thiel, as he reveals in a 2012 seminar, views the “founder” as both potentially a “God” and a “victim,” then he regards the broad societal influence wielded by the tech élite as a source of risk: a king can always become a scapegoat. On these grounds, it seems reasonable to conclude that Thiel’s animus against Gawker, which he has repeatedly accused of “bullying” him and other Silicon Valley power players, is closely connected to his core concern with scapegoating, derived from his longstanding engagement with Girard’s ideas. Thiel’s preoccupation with the risks faced by the “founder” also has a close connection to his hostility toward democratic politics, which he regards as placing power in the hands of a mob that will victimize those it chooses to play the role of scapegoat. Or as he states: “the 99% vs. the 1% is the modern articulation of [the] classic scapegoating mechanism: it is all minus one versus the one.”

No serious reader of Girard can regard a simple return to monarchical rule – which Thiel has sometimes seemed to favor – as plausible: the ritual underpinnings that were necessary to maintain its credibility, Girard insists, have been irreversibly demystified. Perhaps on the basis of this recognition, and even while hedging his bets through his involvement in Republican politics, Thiel has focused instead on the new possibilities offered by network technologies for the exercise of power. A Thiel text published on the website of the libertarian Cato Institute is suggestive in this context: “In the 2000s, companies like Facebook create . . . new ways to form communities not bounded by historical nation-states. By starting a new Internet business, an entrepreneur may create a new world. The hope of the Internet is that these new worlds will impact and force change on the existing social and political order.” Although Thiel does not say so here, from a Girardian point of view, a “founder” creates a community by bringing mimetic violence under institutional control – precisely what the application of mimetic theory to Facebook would suggest that it does.

As we saw previously, Thiel was ruminating on Strauss, Schmitt, and Girard in the summer of 2004, but also on the future of social media platforms, which he found himself in a position to help shape. It is worth adding that around the same time, Thiel was involved in the founding of Palantir Technologies, a data analysis company whose main clients are the US Intelligence Community and Department of Defense – a company explicitly founded, according to Thiel, to forestall acts of destabilizing violence like 9/11. One may speculate that Thiel understood Facebook to serve a parallel function. According to his own account, he identified the new platform as a powerful conduit of mimetic desire. In Girard’s account, the original conduits of mimetic desire were religions, which channeled socially destructive, “profane” violence into sanctioned forms of socially consolidating violence. If the sacrificial and juridical superstructures designed to contain violence had reached their limits, Thiel seemed to understand social media as a new, technological means of achieving comparable ends.

If we take Girard’s mimetic theory seriously, the consequences for the way we think about social media are potentially profound. For one, it would lead us to conclude that social media platforms, by channeling mimetic desire, also serve as conduits of the violence that goes along with it. That, in turn, would suggest that abuse, harassment, and bullying – the various forms of scapegoating that have become depressing constants of online behavior – are features, not bugs: the platforms’ basic social architecture, by concentrating mimetic behavior, also stokes the tendencies toward envy, rivalry, and hatred of the Other that feed online violence. From Thiel’s perspective, we may speculate, this means that those who operate those platforms are in the position to harness and manipulate the most powerful and potentially destabilizing forces in human social life – and most remarkably, to derive profits from them. For someone overtly concerned about the threat posed by such forces to those in positions of power, a crucial advantage would seem to lie in the possibility of deflecting violence away from the prominent figures who are the most obvious potential targets of popular ressentiment, and into internecine conflict with other users.

Girard’s mimetic theory can help illuminate what social media does, and why it has become so central to our lives so quickly – yet it can lead to insights at odds with those drawn by Thiel. From Thiel’s perspective, it would seem, mimetic theory provides him and those of his class with an account of how and to what ends power can be exercised through technology. Thiel has made this clear enough: mimetic violence threatens the powerful; it needs to be contained for their – his – protection; as quasi-monarchs, “founders” run the risk of becoming scapegoats; the solution is to use technologies to control violence – this is explicit in the case of Palantir, implicit in the case of Facebook. But there is another way of reading social media through Girard. By revealing that the management of desire confers power, mimetic theory can help us make sense of how platforms administer our desires, and to whose benefit. For Girard, modernity is the prolonged demystification of the basis of power in violence. Unveiling the ways that power operates through social media can continue that process.