Ellen Ullman

I’m an admirer of the writer Ellen Ullman, the software engineer turned novelist. Her 1997 memoir, Close to the Machine: Technophilia and Its Discontents, is a wonderfully perceptive reflection on her years as a professional programmer.

Ullman recently wrote a commentary for the New York Times on the computerized trading debacle triggered last month by the brokerage firm Knight Capital. In it she reaffirmed a crucial point she’d made in Close to the Machine, a point I find myself coming back to repeatedly in this space. To wit: If you think we’re in control of our technologies, think again.

To refresh memories, Knight, one of the biggest buyers and sellers of stocks on Wall Street – and one of its most aggressive users of automated trading systems – had developed a new program to take advantage of some upcoming changes in trading rules. Anxious to profit from getting in first, Knight set its baby loose the moment the opening bell sounded on the day the changes went into effect. The program promptly went rogue, setting off an avalanche of errant trades that sent prices careening wildly all over the market. In the forty-five minutes it took to shut the system off, Knight lost nearly half a billion dollars in bad trades, along with many of its clients and its reputation.

Much of the finger-pointing that followed was aimed at Knight’s failure to adequately debug its new system before it went live. If only the engineers had been given the time they needed to triple check their code, the story went, everything would have been fine. It was this delusion that Ullman torpedoed in her essay for the Times.

Wondering who’s in charge here.

It’s impossible to fully test any computer system, she said. We like to think there’s a team of engineers in charge who know the habits and eccentricities of their programs as intimately as they know the habits and eccentricities of their spouses. This is a misconception. Systems such as these don’t run on a single body of code created by one company. Rather, they’re a collection of interconnected “modules,” purchased from multiple vendors, with proprietary software that the buyer (Knight Capital in this case) isn’t allowed to see.

Each piece of hardware also has its own embedded, inaccessible programming. The resulting system is a tangle of black boxes wired together that communicate through dimly explained “interfaces.” A programmer on one side of an interface can only hope that the programmer on the other side has gotten it right.
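To make the point concrete, here is a minimal, purely hypothetical sketch (in Python; none of this is Knight’s actual code) of the kind of failure Ullman is describing. Each “module” is consistent on its own terms and would pass its own tests; the bug lives in an assumption about the interface that neither side wrote down.

```python
# A toy illustration of "black boxes wired together": two modules from
# different vendors that disagree about what the interface means.

# Module A, from one vendor: builds an order message.
# Here "qty" means number of LOTS, where one lot is 100 shares.
def build_order(symbol, lots):
    return {"symbol": symbol, "qty": lots}

# Module B, from another vendor: executes an order message.
# Here "qty" is read as number of SHARES.
def execute(order):
    shares = order["qty"]
    print(f"Executing {shares} shares of {order['symbol']}")
    return shares

# Each module looks correct in isolation. Wired together, the order is
# off by a factor of 100 -- and neither side's code contains the "bug."
if __name__ == "__main__":
    order = build_order("XYZ", 5)   # Module A means 5 lots = 500 shares
    execute(order)                  # Module B executes only 5 shares
```

No amount of testing inside either box will reveal the mismatch; it shows itself only when the boxes are wired together, and only if someone thinks to test for it.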

The complexities inherent in such a configuration are all but infinite, as are the opportunities for error. Forget, in other words, about testing your way to perfection. “There is always one more bug,” Ullman said. “Society may want to put its trust in computers, but it should know the facts: a bug, fix it. Another bug, fix it. The ‘fix’ itself may introduce a new bug. And so on.”

As I say, these were the sorts of issues Ullman explored with terrific insight in Close to the Machine. Ullman’s experience as a programming insider affirmed what so many of us on the outside sense intuitively, that computer systems (like lots of other technologies) follow their own imperatives, imperatives that make them unresponsive to the more fluid needs of human beings. “I’d like to think that computers are neutral, a tool like any other,” she wrote, “a hammer that can build a house or smash a skull. But there is something in the system itself, in the formal logic of programs and data, that recreates the world in its own image.”

I discussed this tendency in my 2004 master’s thesis on the philosophy of technology, citing a passage from Ullman’s book as an example. Here’s part of what I wrote:

In her opening chapter, Ullman describes a meeting she has with a group of clients for whom she is designing a computer system, one that will allow AIDS patients in San Francisco to deal more smoothly with the various agencies that provide them services. As is typical of such projects, the meeting has been put off by the project’s initiating agency, so that the system’s software is half completed by the time Ullman and her team actually sit down with the people for whom it is ostensibly designed.

As the meeting begins, it quickly becomes apparent that all the clients are unhappy for one reason or another: the needs of their agencies haven’t been adequately incorporated into the system. Suddenly, the comfortable abstractions on which Ullman and her programmer colleagues based their system begin to take on “fleshly existence.” That prospect terrifies Ullman. “I wished, earnestly, I could just replace the abstractions with the actual people,” she writes.

But it was already too late for that. The system pre-existed the people. Screens were prototyped. Data elements were defined. The machine events already had more reality, had been with me longer, than the human beings at the conference table. Immediately, I saw it was a problem not of replacing one reality with another but of two realities. I was at the edge: the interface of the system, in all its existence, to the people, in all their existence.

The real people at the meeting continue to describe their needs and to insist they haven’t been accommodated. Ullman takes copious notes, pretending that she’s outlining needed revisions. In truth she’s trying to figure out how to save the system. The programmers retreat to discuss which demands can be integrated into the existing matrix and which will have to be ignored. The talk is of “globals,” “parameters,” and “remote procedure calls.” The fleshly existence of the end users is forgotten once more.

“Some part of me mourns,” Ullman says,

but I know there is no other way: human needs must cross the line into code. They must pass through this semipermeable membrane where urgency, fear, and hope are filtered out, and only reason travels across. There is no other way. Real, death-inducing viruses do not travel here. Actual human confusions cannot live here. Everything we want accomplished, everything the system is to provide, must be denatured in its crossing to the machine, or else the system will die.

Ullman’s essay on the Knight Capital trading fiasco shows that in the fifteen years since Close to the Machine was published, we still haven’t gotten the bugs out of the human-machine interface, or out of the machine-machine interface, for that matter. Nor are we likely to anytime soon.

This post is also available on Doug Hill’s personal blog: The Question Concerning Technology.

Jacques Ellul

Last month the Heartland Institute, a climate-denying “think tank,” plastered Ted “The Unabomber” Kaczynski’s scowling face on a series of billboards in Chicago.

“I still believe in global warming,” the copy read. “Do you?”

Kaczynski has long been the figurative poster boy for technophobic insanity, of course, but the Heartland Institute made it literal. The billboard campaign was quickly recognized as a miscalculation and withdrawn, but it served as a reminder of what a gift Kaczynski turned out to be for some of the very enemies he sought to destroy. It also served as a reminder of how egregiously he misused the ideas of a philosopher who is revered as a genius by many people, myself included.

I refer to Jacques Ellul, author of The Technological Society. Ellul died 18 years ago last month; this year marks the hundredth anniversary of his birth.

David Kaczynski, Ted’s brother, has said that Ted considered The Technological Society (published in French in 1954 and in English ten years later) his “bible.” That’s easy to believe when you see how closely the Unabomber Manifesto follows – once you weed out its many hate-filled digressions – Ellul’s ideas.

Kaczynski claimed in all humility that half of what he read in The Technological Society he knew already; he discovered in Ellul a soul mate rather than a teacher. “When I read the book for the first time, I was delighted,” he told a psychiatrist who interviewed him in jail, “because I thought, ‘Here is someone who is saying what I’ve already been thinking.'”

So, what are Ellul’s ideas on technology? His most central point was that technology has to be seen systemically, as a unified entity, rather than as a disconnected series of individual machines. He also argued that technology is as much a state of mind as a material phenomenon, in part because human beings have been absorbed into the technological complex he called “technique.”

Ellul defined technique as “the totality of methods rationally arrived at and having absolute efficiency (for a given stage of development) in every field of human activity.” While technique isn’t limited to machines, machines are “deeply symptomatic” of technique. They represent “the ideal toward which technique strives.”

These quotes hint at Ellul’s conviction that technique has become almost a living entity, a form of being that drives inexorably to overtake everything that isn’t technique, humans included. The belief that humans can no longer control the technologies they’ve unleashed – that technique has become autonomous – is also central to his thought. “Wherever a technical factor exists,” he said, “it results, almost inevitably, in mechanization: technique transforms everything it touches into a machine.”

Along the way technique’s drive toward completion does provide certain comforts, Ellul acknowledged, but overall its devastation of what really matters – the human spirit – is complete. “Technique demands for its development malleable human ensembles,” he said. “…The machine tends not only to create a new human environment, but also to modify man’s very essence. The milieu in which he lives is no longer his. He must adapt himself, as though the world were new, to a universe for which he was not created.”

Ellul’s reputation among scholars is mixed. He has his admirers, but many philosophers of technology consider him a nut. The principal objection is that he reifies technology, imputing to it a life and will of its own. It’s true that Ellul’s language often gives that impression, but again, his definition of technique includes human beings. Without their assent and participation its vitality would collapse.

Ellul’s unrestrained literary style also won him no friends in the academy. He had no interest in scholarly convention. His books include few citations of other works and even fewer qualifications – Ellul never doubted his own argument. His writing is filled with colorful description, irony and righteous anger. He’s more direct than the stereotypical French intellectual, and thus more fun to read. Nonetheless, his erudition is extraordinary, his insight incomparable.

He did occasionally go over the top. Perhaps the most embarrassing moment in The Technological Society comes when, in the process of making the quite reasonable point that technique finds a way to co-opt any political movement or art form that resists it, he dismisses jazz as “slave music.”

A third reason Ellul is considered something of an oddball in academic circles is his faith. Throughout his prolific career he divided his time between books on technology and books on religion. (That he could follow Jesus and still appreciate Marx will perhaps be more surprising in America than it would be in France.) He was a theologian of subtlety and depth, but one suspects that for many his religious beliefs undermine rather than enhance his credibility.

Ted Kaczynski managed to ignore Ellul’s religious views altogether. Where Kaczynski sought with his manifesto to overthrow technology by force, Ellul in The Technological Society explicitly declines to offer any solution at all. Ellul insisted his intention was only to diagnose the problem, not prescribe a treatment. He also insisted, however, that as despairing as his analyses often seemed, he was no pessimist. There’s always room for hope, Ellul said, even if it has to rely on the possibility of miracle.

Kevin Kelly

Another person who’s found Ellul’s thought amenable, though he doesn’t seem to realize it, is the technophilic writer Kevin Kelly. In his recent book, What Technology Wants, Kelly devotes several pages to the Unabomber Manifesto, calling it, with apologies, one of the most astute analyses of technology he’s ever read. This is largely because Kelly agrees with Kaczynski that technology is a dynamic, holistic system – the “technium,” he calls it – that behaves autonomously. “It is not mere hardware,” Kelly writes; “rather it is more akin to an organism. It is not inert, nor passive; rather the technium seeks and grabs resources for its own expansion. It is not merely the sum of human action, but in fact it transcends human actions and desires.”

That’s as Ellulian as it gets.

The major difference between Kelly’s view of technological autonomy and Ellul’s is that Kelly sees the technium/technique as a force that ultimately increases human freedom while Ellul believed the opposite.

For Kelly, humans + technology = an evolutionary extension of the species.

For Ellul, humans + technology = mutation.

Kelly makes no mention of Ellul in his book, although he frequently cites Langdon Winner, a professor at Rensselaer Polytechnic Institute who happens to be one of Ellul’s staunchest defenders. Winner’s 1977 book, Autonomous Technology, which Kelly credits as a key influence on his thinking, is a seminal contribution in its own right, but it also wears its debt to Ellul on its sleeve.

On one of the dozens of pages that mention Ellul, Winner offers what I suspect is an intentionally measured assessment of The Technological Society, calling it “less an attempt at a systematic theory than a wholesale catalog of assertions and illustrations buzzing around a particular point.” Still, he adds, “It is possible to learn from the man’s remarkable vision without adopting the idiosyncrasies of his work.”

[Langdon Winner is one of the scholars scheduled to speak at a centenary celebration of Ellul’s life and work at Wheaton College in July.]

This post is also available on Doug Hill’s personal blog: The Question Concerning Technology.

A couple of weeks ago I posted an entry on technological autonomy. It made the point that a nation’s commitment to advanced technologies can put its economic well-being directly at odds with the physical or psychological well-being of its people. The point I’d like to make today is that corporations’ commitments to advanced technologies can become similarly antithetical.

The example in that previous post was Japan’s commitment to nuclear power. Here I’ll consider two examples involving specific consumer products: the international sale of sport utility vehicles and the international sale of snack foods.

Both examples raise an important definitional question: Which is the driving force, technology or capitalism? It’s a hard question to answer because at a certain stage of development the two are so closely intertwined that it’s often impossible to separate them. On the one hand, the spread of global capitalism would clearly be impossible without mass production technologies. On the other hand, capitalism is clearly the economic model most responsible for the development and exploitation of mass production technologies.

The historian David F. Noble has argued that technology is “the racing heart of corporate capitalism,” implying that capitalism directs the enterprise while technology supplies the motive force. I think you could just as successfully argue that the opposite is true. The best solution is probably to say that the relationship between technology and capitalism is dialectical, or symbiotic. Sometimes technology stimulates capitalism, other times capitalism stimulates technology; in advanced technological/capitalist societies neither could exist without the other. From either perspective an expansion of influence becomes a priority that overwhelms every other consideration, which is another way of defining a condition of de facto autonomy.

Here are the two examples that came to my attention recently:

Example 1: Ford to Quadruple SUV Offerings in China Over Next Year

That’s the headline on a recent report from Reuters regarding Ford’s eagerness to supply millions of Chinese consumers with vehicles that will push global warming past the point of reversibility as quickly as possible.

According to Reuters, 2.1 million SUVs were sold in China last year, an increase of 25 percent over the previous year. That’s about half the annual sales of SUVs in the United States, but it’s just the beginning for China. Ford is aiming to increase its SUV sales in China by increasing production there and also by importing one of its largest models, the Explorer, from the United States. As it is, Chinese auto dealers can’t get enough SUVs to sell.

SUVs are hugely profitable for the auto companies, and huge profits invariably translate into glowing reports in the financial press. Environmentally, the impact isn’t so positive. In his book High and Mighty: SUVs – The World’s Most Dangerous Vehicles and How They Got That Way, Keith Bradsher reported that a midsize SUV puts out roughly 50 per cent more carbon dioxide per mile than the typical car. A full-sized SUV may emit twice as much.

No doubt those figures have changed somewhat in the ten years since Bradsher’s book came out, but it’s safe to say that SUVs aren’t the most fuel efficient vehicles on the road. That’s why their popularity – in the US, China, or any other country – isn’t something to celebrate. In fact, the Chinese government has made it a policy to encourage sales of electric vehicles. Not many consumers are buying them, though, in part because they’re absurdly expensive compared to conventional vehicles.

Example 2: Snacking for the Sake of Sales

The New York Times recently reported that Kellogg, the cereal company, has launched a major initiative to expand its sales of snack foods. The company is betting on snacks because sales of cereal are declining. More and more these days people are eating breakfast on the run, and a bowl of Frosted Flakes isn’t very mobile. On the other hand, Americans seldom fail to take advantage of what food marketers call a “snacking occasion.”

It doesn’t take a genius to see that this is an instance where the health of the economy is at odds with the health of the consumers upon whom the economy is built. Obesity is a health crisis of epidemic proportions, not only in the United States but around the world, and an over-abundance of snacking occasions is one good reason why. The fact that providing more opportunities for snacking occasions has become, as the Times put it, a “core mission” for Kellogg essentially means the company hopes to profit by undermining the health of its customers.

Kellogg is especially culpable on this score because, as the Times pointed out, it has long based its marketing campaigns on the lie that foods drenched in sugar are good for you. For example, it sells a breakfast cereal called Smorz (named after the old campfire treat, s’mores) that it advertises as a “good source of Vitamin D.” Another cereal, Krave, sports a label reading “Good source of fiber and whole grain.” It’s available in chocolate and double-chocolate flavors.

At the moment, Kellogg realizes only about five per cent of its international revenue from snacks, and international sales as well as snack sales are where the company sees its future. Margaret Bath, Kellogg’s senior vice president for research, quality and technology, cites projections that the world’s population will grow to between seven and nine billion people by 2050. “That’ll be a lot of mouths to feed,” she says. “We have people that are undernourished and we have people that are overnourished. It’s the job of a food scientist to serve that whole spectrum.”

Might I suggest that Ms. Bath has an unfortunately narrow conception of what it means “to serve”?

This post is also available on Doug Hill’s personal blog: The Question Concerning Technology.

In January I wrote an essay for Cyborgology on the subject of technological autonomy and its implications for the environment. There’s no more important dynamic when it comes to understanding our relationship with machines and where they’re taking us.

Technological autonomy is shorthand for the idea that, once advanced technologies pass a certain stage of development, we lose our ability to control them. I generally use the phrase “de facto technological autonomy” to underscore that what’s being talked about is a loss of practical rather than literal control. Loss of practical control occurs for a number of reasons, among them the fact that the economies of modern societies have come to depend, completely, on various technologies. Remove those technologies and the economies collapse.

A striking example of this is the dilemma facing Japan as it contemplates whether to resume its dependence on nuclear energy in the wake of the post-tsunami meltdowns at the Fukushima Daiichi reactors last year.

Since the meltdowns, operations at all the nation’s 54 nuclear reactors have been gradually suspended. Public concern has kept the plants offline despite increasingly strident warnings from officials there that without them the nation faces (as one publication put it) an “energy death spiral.” The threat is that without power sufficient to supply its manufacturing needs, Japan’s largest employers will be forced to abandon domestic production, initiating a process of “deindustrialization” that would cripple the economy. These concerns are exacerbated by uncertainties regarding international oil supplies and the prognosis that this coming summer may be unusually hot, prompting a spike in energy demands.

The dilemma is an excruciating one. The nation’s citizens are essentially being told that they must welcome back into their midst an industry that’s made whole towns uninhabitable and that’s undermined confidence in their food supply, not to mention their officials. The alternative is widespread unemployment and poverty. In other words, while it’s literally possible to shut down the reactors permanently, practically speaking Japan may have no choice but to turn them back on. That’s de facto technological autonomy.

Global warming doubles the bind. Without the reactors, Japan will make up some of its energy deficit with fossil fuels, thereby increasing its emissions of greenhouse gases.

Japan’s distinction is that the tsunami has forced it to confront the issue of technological autonomy sooner than other industrialized countries. Their time (our time) will come.

This post is also available on Doug Hill’s personal blog: The Question Concerning Technology.

As the trial of mass murderer Anders Behring Breivik proceeds in Norway this week, it’s worth noting a technology-related anomaly in the press coverage of his crimes.

Breivik is the right-wing terrorist who killed more than 70 people, most of them students attending an island summer camp, in July 2011. At the time, it was widely noted that significant sections of his “European Declaration of Independence” had been lifted verbatim from Ted Kaczynski’s “Industrial Society and Its Future,” aka “the Unabomber Manifesto.” What wasn’t noted was how dramatically at odds Breivik’s beliefs are with Kaczynski’s.

Like Kaczynski, Breivik hates leftists, and it was Kaczynski’s passages excoriating those enemies that Breivik copied. But where the Unabomber set out to destroy the Kingdom of Technology, Breivik spoke of using technology – technology in general, that is, in addition to bombs and guns – to further his goals. His agenda was aimed at cleansing European nations of “multicultural” influences. Toward that end, he urged that the technological advantage of European nations be widened, recommending that twenty per cent of their national budgets be reserved “for research and development in relation to science and technology.”

For the same reason, he also urged that Western technology be kept out of the hands of Muslim countries. Christian nations should focus their energies solely on their own economic development, the declaration said, “allowing unlimited research and development relating to every aspect of technology and science (including all aspects of biological research, reprogenetics etc.).”

Kaczynski's Remote Cabin

Another significant difference was their respective positions on genetic engineering. Kaczynski specifically warned that using those techniques was a step in the direction of technological totalitarianism, but Breivik specifically endorsed them. “Reproduction clinics” should be established, Breivik said, in order to promote population growth from “pure sources,” defined as “non-diluted (95-99% pure) Nordic genotypes.” Kaczynski urged that all forms of genetic engineering be outlawed, but predicted that in a technological society they wouldn’t be. (See paragraphs 122-124).

Many of us who have doubts regarding the technological project deeply regretted Kaczynski’s discrediting of the issue with his terror campaign. If there’s a lesson here it may be that hatred seeks justifications for its slaughters under whatever guise happens to be convenient.

This post can also be found on Doug Hill’s personal blog “The Question Concerning Technology.” 


Roger Boisjoly has died.

The name may not ring a bell, but Boisjoly’s place in history certainly will: He was the engineer who tried in vain to persuade NASA that it was unsafe to launch the space shuttle Challenger on January 28, 1986.

The Challenger explosion remains today one of our most evocative images of technology gone wrong. This is due in part to the personal nature of the tragedy – the schoolteacher onboard, the family members watching – and in part to the subsequent revelations that NASA proceeded with the launch despite Boisjoly’s warnings.

My intention here is not to rehash the chain of events that led to the Challenger’s demise, but to show how some of those events demonstrate patterns of error that are commonplace – indeed, almost inevitable – in the operation of complex technological systems.

These thoughts have been inspired mainly by the analysis of the Challenger explosion provided by Harry Collins and Trevor Pinch in their book, The Golem at Large: What You Should Know About Technology. Other key sources include Charles Perrow’s Normal Accidents: Living With High-Risk Technologies and Jay Hamburg’s reporting in The Orlando Sentinel.

I’ll group the patterns to be discussed – let’s call them Underappreciated Contributing Dynamics – in two categories, the first involving the question of certainty, the second involving the consequences of human interaction with machines.

Underappreciated Contributing Dynamic #1: There is no certainty.

The Challenger explosion is thought to have occurred because the O-rings sealing the joints between sections of the booster rockets that powered the shuttle’s ascent into space failed to do their job. The failure of the seals allowed a tiny gap to form between the sections. Flaming gas escaped through the gap, triggering the explosion.

The conventional wisdom is that NASA bureaucrats, anxious to press forward with the launch largely for public relations reasons, ignored the warnings of Boisjoly and others who recognized the danger and tried to stop the launch. There’s truth to that narrative – comforting truth, because it reassures us that if we only follow the proper procedures, such accidents can be prevented. In practice, it’s not that simple.

Engineers at NASA and Morton Thiokol, the contractor responsible for building the booster rockets, had known for years that there was a problem with the seals. The question was not only what was causing the problem and how to fix it, but also whether the problem was significant enough to require fixing.

According to Collins and Pinch, the O-rings were just one of many shuttle components that didn’t perform perfectly and about which engineers had doubts. To this day, they add, we can’t be sure the O-rings were the sole cause of the explosion. “It is wrong,” they write,

to set up standards of absolute certainty from which to criticize the engineers. The development of an unknown technology like the Space Shuttle is always going to be subject to risk and uncertainties. It was recognized by the working engineers that, in the end, the amount of risk was something which could not be known for sure.

Part of the uncertainty regarding the O-rings was that NASA and Morton Thiokol could never determine exactly how large the gaps in the seals became in liftoff conditions, and thus how serious a danger they represented. Countless tests were run trying to answer that question, but they consistently produced inconsistent results. This was so in part because NASA’s and Morton Thiokol’s engineers couldn’t agree on which measuring technique to trust. Each side, say Collins and Pinch, believed its methods were “more scientific,” and therefore more reliable.

Charles Perrow writes that the inability to pinpoint the source of technical failures is especially common in what he calls “transformation” systems, such as rocket launches or nuclear power plants: the intricacy of the relationships between parts and processes (“tight coupling”) makes it impossible to separate cause and effect. “Where chemical reactions, high temperature and pressure, or air, vapor or water turbulence [are] involved,” he writes,

we cannot see what is going on or even, at times, understand the principles. In many transformation systems we generally know what works, but sometimes do not know why. These systems are particularly vulnerable to small failures that ‘propagate’ unexpectedly, due to complexity and tight coupling.

Roger Boisjoly’s suspicion that cold weather was the source of the Challenger’s O-ring problem was just that – a suspicion. As of the night before the Challenger launch, he had some evidence to back up his suspicion, but not enough to prove it. On the strength of Boisjoly’s concerns, his superiors at Morton Thiokol initially recommended that the launch be delayed, but NASA’s managers insisted on seeing data that quantified the risk. Unable to provide it, Morton Thiokol’s managers reversed their recommendation, and the launch was approved.

Roger Boisjoly

Underappreciated Contributing Dynamic #2: The Double Bind of the Human Factor

We know now that Morton Thiokol’s managers should have supported their engineer’s conclusions and held their ground, and that NASA, upon hearing there was a possibility of catastrophic failure in cold weather, should have exercised caution and postponed the launch. Again, all that is true, but it’s not the whole truth. To pin the blame on irresolute and impatient managers is to underestimate the complexities of the human dynamics that led to the decision.

We like to think that sophisticated machines are reliable in part because they eliminate human error. In truth, complex technological systems always include a human component, and therein lies the dilemma. There’s no shortage of examples before and after Challenger proving that the interaction of human beings and machines can end badly. It’s also well known that we ask for trouble when we unleash powerful technologies without including human judgment in the mix. Human beings: can’t live with ’em, can’t live without ’em.

A subcategory of the human factor dilemma is what Charles Perrow calls the “double penalty” of high-risk systems. The complexity of those systems means that no single person can know all there is to know about the myriad elements that make them up. At the same time, when the system is up and running, one central person needs to be in control. This is especially true in crisis situations, when the person in control is called upon to take, as Perrow puts it, “independent and sometimes quite creative action.” Thus complex technological systems present us with built-in “organizational contradictions.”

Communication issues can exacerbate those organizational contradictions. Middle-level managers, for example, may decide that it’s unnecessary to pass relevant information up the chain of command. In Challenger’s case, many of NASA’s senior executives were unaware of the ongoing questions regarding the booster seals. It’s likely no one told the astronauts, either. Opportunities for misunderstanding also arise from the manner in which information is offered and from the manner in which it’s interpreted. On at least two occasions NASA managers shrugged off engineers’ warnings about the risks of cold-weather launches because the engineers themselves didn’t seem, as far as NASA’s managers could tell, that alarmed about them.

Collins and Pinch stress that in many respects the arguments between NASA and Morton Thiokol the night before the Challenger launch were typical of the sorts of arguments engineers and their bosses (also engineers, usually) routinely engage in as they iron out problems in complex technological operations. And, as mentioned above, these were continuations of discussions that NASA and Morton Thiokol had been having over the O-ring problem literally for years.

The longevity of those arguments actually became a barrier to their resolution. Some of the engineers at NASA and Morton Thiokol had invested so much time and energy in the O-rings that they developed a sort of psychological intimacy with them. Believing the problem fell within acceptable margins of risk, they grew comfortable wrestling with it. It was a problem they knew. This is an example of a phenomenon called “technological momentum.” Simply put, habits of organizational thought and action become embedded and increasingly resistant to change. Devising an entirely new approach to the booster seals – one that would surely have had its own problems – was a step the shuttle engineers were reluctant to take, given the pressure they were under to move the project forward. Roger Boisjoly was able to look at the booster problem differently because he joined Morton Thiokol several years after the shuttle project had begun.

A major reason NASA’s engineers were inclined to resist Morton Thiokol’s recommendation that the launch be scrubbed because of the cold weather was that temperature had never before been presented to them as a determinative element in a launch/no launch decision. This wasn’t Roger Boisjoly’s fault: the freezing temperatures on the eve of the launch were a fluke, and therefore presented conditions that hadn’t been encountered before. Nonetheless the novelty of Boisjoly’s theory helped sway the consensus against him, as did his admitted lack of definitive data.

“What the people who had to make the difficult decision about the shuttle launch faced,” Collins and Pinch write,

was something they were rather familiar with, dissenting engineering opinions. One opinion won and another lost, they looked at all the evidence they could, used their best technical standards and came up with a recommendation.

This may seem a cold assessment in light of what occurred, and Collins and Pinch aren’t arguing that the decision the engineers made that night was correct. Obviously it wasn’t. Still, the question must be asked: Isn’t this exactly the sort of rational decision-making we generally prize in our scientists and technicians?

We understand that human judgment is fallible. Still, when complex technological systems go awry, we want to insist that it shouldn’t be. Which is to wish for another sort of double bind: to have our cake and eat it too.

Originally posted on “The Question Concerning Technology.”

For the sake of argument, let’s assume that what the scientists are saying about global warming – that we are headed for all manner of catastrophic changes in the environment unless fossil fuel emissions are drastically reduced, immediately – is accurate.

Also for the sake of argument, let’s assume that the world’s political leaders and the citizens they represent are sane, and that, therefore, they would like to avoid those catastrophic changes in the environment.

Assuming both propositions to be true, it would seem reasonable to ask ourselves whether it’s possible to take the necessary actions that would forestall those changes. In order to answer yes to that question we will need to overcome a series of challenges that can collectively be described as technological autonomy.

Technological autonomy is a shorthand way of expressing the idea that our technologies and technological systems have become so ubiquitous, so intertwined, and so powerful that they are no longer in our control. This autonomy is due to the accumulated force of the technologies themselves and also to our utter dependence on them.

The philosopher of technology Langdon Winner refers to this dependence as the “technological imperative.” Advanced technologies require vast networks of supportive technologies in order to properly function. Our cars wouldn’t go far without roads, gasoline, traffic control systems, and the like. Electricity needs power lines, generators, distributors, light bulbs, and lamps, together with production, distribution, and administrative systems to put all those elements (profitably) into place. A “chain of reciprocal dependency” is established, Winner says, that requires “not only the means but also the entire set of means to the means.”

Langdon Winner

Winner, whose book Autonomous Technology is the seminal study of this issue, also points out that we usually become committed to these networks of technological systems gradually, not realizing how intractable our commitments will become. He calls this “technological drift.” As we invent and deploy powerful technologies for specific purposes, Winner adds, they create ripple effects that radiate unpredictably out into the culture. These influences generate a variety of unintended consequences, many of them virtually impossible to control.

In an earlier Cyborgology essay, PJ Rey pointed out that as citizens of a technological society we go about our daily business placing a significant degree of faith in the technological devices and systems we use. Faith is necessary because most of us don’t have the slightest idea how those devices and systems actually work, and we certainly wouldn’t know how to repair them if they fail. We trade, Rey said, certainty for convenience. In the process we also surrender a substantial measure of control of those devices and systems.

The historian of technology Thomas P. Hughes has pointed out that our deepening commitment to existing systems is psychological as well as practical, and that it applies as much to the people who make technological systems as to the people who use them, if not more so. “Technological momentum” is the term he coined to describe this tendency to habituation. It’s a tendency that high tech companies like Google try desperately to avoid, regularly pronouncing their determination to retain the flexibility of start-ups. The regularity with which those promises are made suggests the tenacity of the problem.

Two other dynamics that contribute substantially to technological autonomy should be mentioned. Technological convergence describes the merger of previously disparate technologies into new combinations. Technological diffusion describes the spread of existing technologies into novel, often unanticipated applications. The power of technological convergence can be seen in the joining of computers with everything from television and telephones to surgery and genetics. Technological diffusion can be seen in the spread of assembly line techniques from the manufacture of automobiles (inspired in part by the disassembly of animals in meat packing plants) to the manufacture of hamburgers. Franchising represents the extension of the assembly line concept to the manufacture of business empires.

Individually each of the dynamics I’ve named here would be difficult to restrain. Collectively they constitute a forward motion that is irreversible. I call the consequence of this collectivity “de facto technological autonomy.” By that I mean that although we can theoretically detach ourselves from the technological systems on which we’ve come to depend, practically such a detachment is impossible because it would create unsupportable levels of disruption.

Japan’s response to the post-tsunami nuclear disasters last March is an example. According to The New Yorker magazine, anti-nuclear activists there were optimistic that widespread plant shutdowns after the crisis would become permanent, but their optimism proved premature. It’s now assumed that most of the country’s reactors will re-open. To abandon nuclear power would be, in the words of Japan’s economics minister, “idealistic but very unrealistic.”

Global warming and other crises have caused many scientists and policy wonks to conclude that the only escape from the destructive effects of technological autonomy is more technology. Geoengineering envisions the application of techniques that seem borrowed from science fiction, among them fertilizing the ocean to boost the growth of CO2-absorbing phytoplankton and the manufacture of artificial volcanoes that would fill the atmosphere with clouds of heat-blocking particles.

One looks for hope where one can find it, but the problem here is obvious: Even if they did work for the purposes intended, nobody knows what the unintended results of such radical measures might be. Technological autonomy is a process that proceeds without regard to original intention.

[Video: start at 13:42 – 15:37 for images of Zuccotti Park being dismantled.]

The clearing of the Occupy Wall Street demonstrators from the streets of various cities over the past few weeks has been a strikingly naked demonstration of the characteristic properties of what Jacques Ellul called “technique.”

Like other philosophers, Ellul thought of technology more as a state of being than as a collection of artifacts. “Technique” is the word he used to describe a phenomenon that includes, in addition to machines, the systems in which machines exist, the people who are enmeshed in those systems, and the modes of thought that promote the effective functioning of those systems.

In The Technological Society, Ellul called technique “the translation into action of man’s concern to master things by means of reason, to account for what is subconscious, make quantitative what is qualitative, make clear and precise the outlines of nature, take hold of chaos and put order into it.” The machine, he added, is “pure technique… the ideal toward which technique strives.”

Jacques Ellul

From Ellul’s perspective technique aims relentlessly toward two fundamental goals: expansion and efficiency. OWS can be seen as a revolt against precisely those objectives. In essence the protestors are arguing that the social and political balance of power has been radically shifted toward the priorities of technique and away from their proper focus: the welfare of human beings. That’s as concise a summation of the Ellulian ethic as one could wish for.

Ellul argued that a certain amount of rebellion is not only tolerable in the technological society but necessary, simply because the strain of living up to the demands of the machine creates pressures that must find some form of release. Outbreaks of acceptable resistance help provide that release. The OWS protests were cleared, I think, because they threatened to get in the way of business as usual, and therefore crossed the line from acceptable to unacceptable resistance. “Popular will,” Ellul said, “can only express itself within the limits that technical necessities have fixed in advance.”

According to the statements of several mayors and other authorities, the OWS evictions were necessary because they posed a threat to public health and safety. This is what Ellul called “a rationalizing mechanism,” invoked to justify the operations of the machine. Such rationalizing mechanisms, he added, account for the “intellectual acrobatics” of politicians who insist that they support the rights of free speech and assembly even as they’re dispatching battalions of police to forcibly disperse citizens who are exercising those rights. Meanwhile the momentum of potentially meaningful protest is effectively blunted. Movements come and go; technique remains.

Ellul included in the category of acceptable resistance the persona of the Rebel. The Rebel is the uncompromising anti-hero who constantly appears in movies, music, and advertising (and on the street, for that matter): tough guys and gals who have the guts to go against the tide and win, or at least go down in a blaze of glory. This is a stance that, as Ellul noted, hardly threatens the status quo, given that it’s less genuine rebellion than an image of rebellion, a fashion statement easily acquired through the purchase of whatever products the Rebel brand happens to have certified at any given moment. “I am somehow unable to believe in the revolutionary value of an act that makes the cash register jingle so merrily,” Ellul said.

One of Ellul’s central themes was that the forces of technique are relentlessly adapting human beings to the demands of the machine, demands for which, in their natural state, they are wholly inadequate. “It is not a question of causing the human being to disappear,” he wrote, “but of making him capitulate, of inducing him to accommodate himself to techniques and not to experience personal feelings and reactions…Human joys and sorrows are fetters on technical aptitude.”

Because this process of adaptation is not yet complete, living in the technological society continues to create tensions that, as mentioned above, need to be harmlessly released. In addition to acceptable rebellion, mechanisms that help accomplish this objective include most forms of entertainment, drugs (legal and illegal), propaganda and most forms of religion.

Doug Hill is a journalist and independent scholar who has studied the history and philosophy of technology for fifteen years. More of this and other technology-related topics can be found on his blog, The Question Concerning Technology, at http://thequestionconcerningtechnology.blogspot.com/

Follow him @DougHill25 on Twitter.



There’s a new ebook out that’s attracting some attention, in part because its conclusions are so startling, in part because its conclusions come from an unexpected quarter. The title is Race Against the Machine: How the Digital Revolution is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy. Its authors, Erik Brynjolfsson and Andrew McAfee, are two professors from one of the academic epicenters of tech, MIT.

I haven’t read the book, but I have read the three excerpts (pt. 1, pt. 2, pt. 3) run on The Atlantic magazine’s web site. I would definitely recommend them, both because they’re clearly written and because they document in a dispassionate way some of the more important effects of our ever-increasing social and economic commitments to technology.

Suffice it to say that their prognosis for the working man and woman isn’t pretty. According to Brynjolfsson and McAfee, losers in the war between workers and machines could ultimately constitute a majority – perhaps more than 90 percent – of the population.

One example of where we’re headed: Brynjolfsson and McAfee report that the Taiwanese electronics manufacturing giant Foxconn, most of whose factories are in China, plans to buy a million new robots in the next three years. All will be used to perform tasks previously handled by human beings.

Two things struck me in particular about Brynjolfsson and McAfee’s predictions. One is how closely they resemble earlier warnings about the impact of automation on employment from one of their predecessors at MIT, Norbert Wiener.


Norbert Wiener

As the father of cybernetics, Wiener (1894-1964) played a huge role in the early development of automation technologies. He also spent a lot of time worrying about where those technologies might be taking us.

In his 1950 book The Human Use of Human Beings, Wiener described automation as “the precise economic equivalent of slave labor.” Thus, he said, any labor that competes with automation will have to accept the economic conditions of slave labor. As unpleasant as this might be for the slaves, it often serves the ambitions of their owners. “Those who suffer from a power complex,” Wiener wrote, “find the mechanization of man a simple way to realize their ambitions.”

If our only standard is profit, Wiener added, automation will lead us to levels of economic disruption that will make the Great Depression “seem a pleasant joke.”

If Wiener were alive today, he’d surely marvel at the sophistication of the automation technologies now in place. Just as surely he’d shake his head at how much closer we’ve come to the sorts of disruptions he predicted, and at how little we’ve done to prepare for them.

That’s the second thing that struck me about Brynjolfsson and McAfee’s report. The exponential advance of technology is widely recognized both by those who celebrate that advance and those who fear it. It’s a goal of those who work to develop technologies to make them ever more effective and efficient. Incremental improvements achieved day by day, together with the occasional breakthrough, drive the advance relentlessly forward.

As technologists work on their specific projects, it is not their concern, generally speaking, to consider the ancillary effects those projects may have on the culture at large. Nor, generally speaking, do we consider those effects their responsibility.

Still, the question remains: Whose responsibility is it?

Doug Hill is a journalist and independent scholar who has studied the history and philosophy of technology for fifteen years. More of this and other technology-related topics can be found on his blog, The Question Concerning Technology, at http://thequestionconcerningtechnology.blogspot.com/

Follow him @DougHill25 on Twitter.