Photo by The Fayj

The concept of “risk” comes up a lot in the classes I TA. Usually, it comes up as part of a conversation about acceptable levels of risk for consumer products: How safe should a car be? How much money should we spend on fire safety in homes? If you’re using a cost-benefit analysis, that also means calculating the price of a human life. How much is your life worth? These questions are familiar to safety regulators, inspectors, CEOs, and government officials, but as private citizens and consumers, we like to think that such questions are sufficiently settled. Cars are as safe as we can make them because human life is incalculably valuable. After all, these sorts of questions sound macabre when we invert the function: How many cars should explode every year? How many jars of peanut butter should have salmonella in them? These questions are largely considered necessary evils in today’s risk-based society, but what kind of society does that create?
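To make that calculus concrete, here is a stylized version of the expected-cost test a regulator or manufacturer might run. Every figure below is an invented assumption for illustration; the roughly $10 million “value of a statistical life” is in the range US agencies have used, not a number from this post.

```latex
% Stylized cost-benefit test for a hypothetical car safety feature.
% All figures are illustrative assumptions, not real data.
\[
  \text{Install the fix if}\quad C_{\text{fix}} < \Delta p \cdot V_{\text{SL}}
\]
\[
  \underbrace{\$200}_{\text{cost per car}}
  \;>\;
  \underbrace{10^{-5}}_{\text{reduction in fatality risk}}
  \times
  \underbrace{\$10{,}000{,}000}_{\text{value of a statistical life}}
  = \$100
\]
% The inequality fails, so the "rational" answer is to skip the fix.
```

Read in reverse, the same arithmetic answers the inverted question: it fixes how many fatal failures per million cars the spreadsheet is prepared to tolerate.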

The German sociologist Ulrich Beck wrote a very influential book called Risk Society: Towards a New Modernity (first published in 1986 and translated into English in 1992). The very first sentence of the first chapter summarizes the general thesis: “In advanced modernity the social production of wealth is systematically accompanied by the social production of risks.” (Emphasis in the original.) He goes on to say, “Accordingly, the problems and conflicts relating to distribution in a society of scarcity overlap with the problems and conflicts that arise from the production, definition and distribution of techno-scientifically produced risks.” In essence, we as individuals spend our days trying to mitigate our own personal risk. We buy insurance, wear seatbelts, wash our fruit, and bring an umbrella in case it rains. Large institutions like governments and corporations do something similar, but the implications of their actions are not only bigger, they are also more complex. We all learned that the hard way in the 2008 financial collapse.

Economists and political scientists love to play around with risk. They make models and games (not the fun kind) that explain how we mitigate risks and why we make certain risky decisions. Just when models start to settle out and things become somewhat predictable, a new technology will disrupt the calculus. The invention of Yelp reviews reduces your risk of going to a terrible restaurant, to give one mundane example. A more dramatic one is the invention and subsequent widespread use of drones in combat situations. All sorts of military incursions and surveillance missions become less risky when a drone is brought into the equation. Drones provide a certain level of security for combat troops, but they also make it easier to kill people who are hard to reach. Just as rich people are at less risk of dying from treatable illness (because they can afford healthcare), enemy combatants are much more likely to die in battle than American troops. We can see this risk analysis playing out clearly in the recently uncovered US Justice Department memos that provide the rationale for drone attacks. Drones are used when capturing the target is deemed too difficult or the threat too imminent. You may disagree with what qualifies as “too difficult” or “imminent,” but the logic remains: a wealthy country can produce sophisticated robots to mitigate its own risk while dramatically increasing the risk of others.

It’s important to note that we aren’t just talking about the risk of being killed in combat. Collateral damage, the death of civilians, matters here too, not just for moral and ethical reasons but because it helps us understand the transfer of risk. Relatively rich American civilians are made safer by transferring the risk of combat death to relatively poorer people in the Middle East.

Death from a drone attack is an extreme example. Most risk analysis isn’t about the likelihood of civilians being in the blast radius; it’s about unintended consequences and externalities. Where does pollution go? Who gets exposed to what most often? There are millions of examples of this, but it’s easier to speak in hypotheticals than about any specific case. (For a book-length treatment of this sort of risk exposure I’d recommend Michael Mascarenhas’ (2012) Where the Waters Divide.) Imagine a new coal power plant is proposed for a major metropolitan region. Where the plant goes is still up for debate, but there is a landowner just outside the city who is willing to sell. Neighbors catch wind of the possible plant and start yelling, “Not In My Back Yard!” They call their representatives, picket outside the coal company’s headquarters, and maybe a few college students chain themselves to a tree on the proposed construction site. They cause enough of a ruckus that the company decides not to build on the site and goes elsewhere. Where is elsewhere?

For the coal company, the biggest concern at this point is more bad press and the extra costs incurred every time they pick a new site. They have to look for a place to build that won’t elicit public outcry. Through a mix of market-based land prices, racism, and classism, environmental dangers tend to move to poor neighborhoods. Risk moves until it settles next to people no one listens to. The poor rarely have time to call their congressional representative, nor do they have the political or social capital to collectively organize and persuade others to act. Nuclear power plants, water treatment facilities, phosphorus mines, and many other dangerous things are necessary for modern society, but the risks associated with having them are disproportionately experienced by people of color and the poor.

Photo by Bob August

There are other ways of going about risk mitigation. Most European countries exercise something called the “precautionary principle,” which requires actors (like coal companies) to demonstrate (usually in front of a government or citizens’ panel) that their project is safer and more beneficial than other feasible options. This has problems of its own, but it might be a start. More fundamentally, deep and systemic change comes from individuals in industry who make smarter and more informed decisions about what their inventions will do to society. It means taking unintended consequences more seriously and spending more time thinking critically about what a technology will encourage. It is not enough for roboticists to tell themselves that drones will save the lives of soldiers without considering how their inventions can make killing easier. It’ll take more than every engineer reading The Whale and the Reactor and eschewing the idea that technologies are only as moral as their users. Perhaps it means making engineers responsible for “malpractice,” like doctors and lawyers? A faulty car brake would mean the lead engineer gets their license revoked.

Ultimately, the problem is that we haven’t even begun to consider alternative social arrangements that could improve innovation while reducing risk in an egalitarian fashion. The dearth of actionable alternatives doesn’t mean the status quo is fine; it just means the source of the problem has not been properly identified. More regulation is not the answer, and calling for smarter regulation just sounds hackneyed. What we need is a deep structural shift in the entire process. From engineering pedagogy to product labeling, the whole system is stumbling over itself, and too many innocents are dying. Modern technological society is outgrowing its capacity to gauge risk effectively, and the consequences could be disastrous.

Follow David on Twitter with little-to-no risk of bodily harm: @da_banks