"Please, Sir. I want some more."
“Please, Sir. I want some more.”

The humanities are in retreat. For years science and technology have been running roughshod over the arts in the nation’s colleges and universities, a thrashing that is now turning into a rout.

This is hardly news. For years a steady string of news articles and commentaries has documented the humanities’ decline. An especially robust burst of coverage greeted the release last summer of “The Heart of the Matter,” an earnest series of recommendations and an equally earnest short film produced under the auspices of the American Academy of Arts and Sciences.

Backed by a prestige-dripping commission of actors, journalists, musicians, directors, academics, jurists, executives and politicians, “The Heart of the Matter” sounded what the New York Times called a “rallying cry against the entrenched idea that the humanities and social sciences are luxuries that employment-minded students can ill afford.” In our race for results, the commission urged, the quest for meaning must never be abandoned.

Alas, the sad truth is that earnestness at this point doesn’t begin to cut it. Celebrity endorsements won’t reverse the trend, either. We may as well come clean and accept the fact that the humanities have been losing this fight for centuries, and that a reversal of their fortunes isn’t likely anytime soon.

The reason, I think, is fairly simple. Relative to the tangible solidities produced by science, technology, and capital, the gifts the humanities offer are ephemeral, and thus easily dismissed. “The basis of all authority,” said Alfred North Whitehead, “is the supremacy of fact over thought.”

This is not to say the humanities won’t retain a place at the table. They will, insofar as they can make themselves useful. But like Oliver Twist, they will find that the paucity of their portion leaves them forever begging for more. All the more reason, then, to celebrate those who have managed to land a blow or two against the empire.

Francis Bacon

One of the early aggressors against whom the seekers of Higher Truth had to defend themselves was Francis Bacon. His introduction of the scientific method was accompanied by an unending string of attacks on the philosophers of ancient Greece for their worthless navel-gazing. Like children, he said, “they are prone to talking, and incapable of generation, their wisdom being loquacious and unproductive of effects.” The “real and legitimate goal of the sciences,” Bacon added, “is the endowment of human life with new inventions and riches.”

Legions of scientific wannabes followed Bacon’s lead to become dedicated experimental tinkerers in whatever the Enlightenment’s version of garages might have been. Meanwhile Jonathan Swift stood to one side and argued, with droll, often scatological amusement, that the emperor had no clothes.

Jonathan Swift

Those who read Gulliver’s Travels in the days before literature classes were eliminated may recall Gulliver’s visits to the Academies of Balnibarbi (parodies of Salomon’s House, the utopian research center envisioned in Bacon’s New Atlantis), where scientists labored to produce sunshine from cucumbers and to reverse the process of digestion by turning human excrement into food. Embraced in greeting by the filth-encrusted investigator conducting the latter experiment, Gulliver remarked parenthetically that this had been “a Compliment I could well have excused.”

A more recent battle in what might be called the Arena of Empiricism unfolded in 1959, when C.P. Snow presented his famous lecture, “The Two Cultures and the Scientific Revolution.”

C.P. Snow

The cultures to which the title referred were those of literary intellectuals on the one hand and of scientists on the other. While it’s true Snow criticized the scientists for knowing little more of literature than Dickens, by far the bulk of his disdain was reserved for the intellectuals. Sounding a lot like Bacon, Snow said the scientists had “the future in their bones,” while the ranks of literature were filled with “natural Luddites” who “wished the future did not exist.”

Again, a partisan of the humanities launched a spirited counterattack, this one fueled not by satire but by undiluted rage.

F.R. Leavis

Manning the barricades was F. R. Leavis, a longtime professor of literature at Downing College, Cambridge. Leavis was well known in English intellectual circles as a staunch defender of the unsurpassed sublimity of the great authors, whom he saw as holding up an increasingly vital standard of excellence in the face of an onrushing tide of modern mediocrity. Snow’s lecture represented to Leavis the perfect embodiment of that mediocrity, and thus a clarion call to repel the barbarians at the gate.

From his opening paragraph Leavis’s attack was relentless. Snow’s lecture demonstrated “an utter lack of intellectual distinction and an embarrassing vulgarity of style,” its logic proceeding “with so extreme a naïveté of unconsciousness and irresponsibility that to call it a movement of thought is to flatter it.”

Snow, Leavis said, made the classic mistake of those who saw salvation in industrial progress: he equated wealth with well-being. The results of such a belief were on display for all to see in modern America: “the energy, the triumphant technology, the productivity, the high standard of living and the life impoverishment—the human emptiness; emptiness and boredom craving alcohol—of one kind or another.”

The uncompromising spleen of Leavis’s tirade certainly outdid the conciliatory platitudes of “The Heart of the Matter,” but to no greater effect. Neither fire and brimstone nor earnest entreaty will rescue the humanities from their fate. Meaning will remain the underdog in a world that increasingly demands the goods to which it has grown accustomed.

Defeatist? To the contrary, all the more reason to carry on the fight, boldly and without apology. I keep a quote from Jonathan Swift taped over my desk:

“When you think of the world give it one lash the more at my request. The chief end I propose in all my labors is to vex the world rather than divert it.”

Doug Hill’s book, Not So Fast: Thinking Twice About Technology, has just been published on Amazon Kindle and other outlets. He blogs at The Question Concerning Technology and can be followed on Twitter at @DougHill25.


The film Gravity is having an especially strong run at the box office, and it seems to be having an especially powerful impact on those who have seen it. It’s certainly a beautiful movie, visually, and an unusual one, as far as big-budget Hollywood attractions go. For anyone who thinks a lot about technology, as I do, the film has some interesting, though somewhat ambiguous, messages.

Be forewarned: What follows is all spoiler.

Technology gone wrong plays a central role in Gravity. The film also resonates with a theme that’s central to the technological project: the drive to open new frontiers. This is not to say that either of those subjects is the principal concern of Gravity’s director and co-writer, Alfonso Cuarón. His interests lie elsewhere, as I’ll explain. Still, when you make a saga about human beings in space, questions of technology and frontiers are hard to avoid.

The direction of Gravity’s plot makes it, in a basic sense, an anti-frontier movie. It’s all about the need of Sandra Bullock’s character, Dr. Ryan Stone, to return to Earth. She has to get home. There’s no mission to explore an unknown planet, destroy a threatening asteroid, or investigate mysterious transmissions from Jupiter. The plot hinges on what is, in essence, an industrial accident during a routine repair mission a few hundred miles above Earth.

What’s not routine is our reaction as viewers to that routineness. It’s clear from the reviews and social media comments on Gravity that people have been stunned and moved by the beauty of Earth as it’s depicted from the astronaut’s perspective. Also stunning is Cuarón’s depiction of the vastness of space. The genius of the film—the reason people find it so affecting, I think—is the way Cuarón positions his human characters between those two poles. Seldom has the strangeness of our position in the cosmos been defined so starkly. Dr. Stone’s desperation to return to Earth is more than a fight for survival. It’s a flight from existential terror.

As I say, I don’t think Cuarón has any particular ax to grind for or against technology, but it’s hard not to notice in Gravity how alien the space environment is for the humans who are moving around in it. This is something that’s always mystified me about space enthusiasts. I understand being motivated by the adventure of exploring space, but the appeal of actually living there escapes me. You can’t breathe in space! Or walk! You have to put on an incredibly bulky suit every time you go out the door. It seems like it would get old pretty fast.

As many critics have pointed out, Dr. Stone’s psychological motivation is to come home not only from outer space, but also from her disconnection from life. Having lost a daughter in a freak accident, she loses herself in work, and in long drives to nowhere after work. This suggests that Cuarón might be commenting on the technological project, after all. Is he saying that space exploration is equivalent to an escapist drive into the night?


Probably not. In fact, Gravity’s concluding scene is open to a couple of interpretations.

On the one hand, when Stone pulls herself up on the shore, gasping for air and grasping terra firma, we see an affirmation of how good it is for her to be back where she belongs. She isn’t running away anymore. She’s alive, and grateful to be alive: she can embrace the pain that comes with the joy of existence. She’s home, and space travel seems behind her.

Or is it? Stone’s emergence from the water is an obvious allusion to humankind pulling itself out of the ocean and onto land to begin its evolutionary adventure as a terrestrial species. In case we miss the point, Cuarón shows us a frog swimming alongside Stone as she escapes the sinking space capsule and swims toward the surface. Once on dry land, Stone doesn’t just lie there. She pulls herself to her feet and stands, and the low angle from which Cuarón shoots her rise leaves no question that he wants us to consider her standing a courageous, heroic act of will.

All this seems to suggest not so much that Stone has learned to accept the limitations of life on Earth, but that she’s found a new determination to accept no limits. It’s a testament to the indomitability of the human spirit, to the implacable drive for progress that brought us out of the depths and took us to…well, outer space!

Cuarón has said in interviews that the inspiration for Gravity was the divorce he endured immediately before writing the script (with his son from a previous marriage). The ability to triumph over adversity, to survive and move forward, is a theme that means a lot to him. It’s also a theme that means a lot when contemplating what’s good and bad about the technological project.

Progress as an ideal has taken its hits in recent years, in part because it’s been inextricably linked with a refusal to accept limitations. Nonetheless, we can’t give up on it. We have to believe we can move toward something better, and it’s not unreasonable to think that technology, sensibly applied, can help us get there. The flip side, of course, is that technology has so often been used to take us in the opposite direction.

As Gravity’s credits roll, we leave the theater inspired by Dr. Stone’s resilience and her determination to carry forward the human adventure. What we tend to forget is the reason she ended up standing heroically on that shoreline in the first place. She was sent spinning helplessly into space when a Russian missile destroyed a dead satellite, starting a chain reaction that sent a shower of metal debris hurtling into orbit. This is no made-up problem. So many pieces of space junk now surround Earth—the Air Force currently tracks 20,000 of them, some as large as a Greyhound bus—that low-Earth orbits will eventually have to be abandoned if some method of cleaning up the mess isn’t found.

There’s a pattern here. We’re very good at using our technologies to open new frontiers. We’re also very good at using our technologies to despoil those frontiers. Survival is one thing. What we do with survival is another.

###

Doug Hill’s book, Not So Fast: Thinking Twice About Technology, has just been published on Amazon Kindle and other outlets. He blogs at The Question Concerning Technology and can be followed on Twitter at @DougHill25.

I remember hearing somewhere that one of the most important things you can teach a child is to delay gratification.

Give a five-year-old a choice between a cookie on the table in front of him right now and two cookies 15 minutes from now, and chances are he’ll take the one cookie right now. Maturity is about learning to live within your means. You want something nice, you save up for it. You resist blowing your entire paycheck on bling so that when the first of the month comes you have enough money to cover the rent.

It’s obvious that the consumer economy wants us to ignore these basic principles. Commercials tell us we can have what we want right now! The financial meltdown was largely a product of the banks handing out mortgages to people for houses they couldn’t begin to afford.

More and more we’re learning that our National State of Technology is another manifestation of the same mentality. We built it, but we don’t want to pay what it costs to maintain it.

Case in point: the 2013 report card on the state of the nation’s infrastructure from the American Society of Civil Engineers. Here are a few highlights (a rough tally of the shortfalls follows the list):

· Roads get a grade of D. Federal, state, and local capital investments in roads and highways have increased to $91 billion annually. That amount is still insufficient, the ASCE says, to prevent continuing declines over the long term. The Federal Highway Administration estimates that capital investments of $170 billion annually would be needed to significantly improve road (and therefore traffic) conditions.

· Bridges get a grade of C+. One in nine of the nation’s bridges is rated “structurally deficient,” the ASCE says. The Federal Highway Administration estimates that we need to invest $20.5 billion annually on bridge repair and maintenance. We’re currently spending $12.8 billion annually.

· Dams get a grade of D. More than 4,000 dams in the United States are currently rated “deficient.” The Association of State Dam Safety Officials estimates we need to invest $21 billion to repair them.

· Levees get a grade of D-. The nation’s estimated 100,000 miles of levees, originally built to protect farmland, are increasingly protecting developed communities. The National Committee on Levee Safety estimates we need to spend roughly $100 billion to repair our levee systems.

· Wastewater systems get a grade of D. We need to spend an estimated $298 billion over the next 20 years to bring our waste and storm water systems up to snuff, the ASCE says.
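
For the spreadsheet-inclined, here is a rough tally of the figures quoted above (my own arithmetic, keeping annual and one-time estimates separate, since the two aren’t directly comparable):

```typescript
// A rough tally of the shortfalls cited in the ASCE report card above.
// All figures are the ones quoted in this post; the sums are mine.
const annualGapUSD = {
  roads: 170e9 - 91e9,      // FHWA estimated need minus current investment
  bridges: 20.5e9 - 12.8e9, // FHWA estimated need minus current spending
};

const oneTimeNeedUSD = {
  dams: 21e9,        // ASDSO repair estimate
  levees: 100e9,     // National Committee on Levee Safety estimate
  wastewater: 298e9, // ASCE estimate, spread over 20 years
};

const annualTotal = Object.values(annualGapUSD).reduce((a, b) => a + b, 0);
const oneTimeTotal = Object.values(oneTimeNeedUSD).reduce((a, b) => a + b, 0);

console.log(`annual shortfall: ~$${(annualTotal / 1e9).toFixed(1)}B per year`); // ~$86.7B
console.log(`one-time needs: ~$${(oneTimeTotal / 1e9).toFixed(0)}B`);           // ~$419B
```

Even leaving aside wastewater’s 20-year horizon, the gap between what we spend and what the engineers say we need runs to tens of billions of dollars a year.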

The ASCE studiously avoids mention of climate change, so it’s hard to know if the figures it cites take into account the added stress of severe weather patterns associated with global warming. Hurricanes Katrina and Sandy and the current series of floods, droughts, tornadoes, and wildfires in the Midwest and West have given us previews of what’s in store. New York Mayor Michael Bloomberg recently announced a $20 billion plan (the total cost is expected to be much higher than that) to build an extensive network of flood walls, levees, and bulkheads to protect the city from rises in sea level and storm surges. “This is urgent work, and it must begin now,” Bloomberg said.

Similarly, the ASCE report gives roadways a higher grade this year than four years ago only because in the interim there have been “targeted efforts” to improve their condition, efforts that are largely the result of huge federal expenditures of economic stimulus funds to bolster a tanking economy. At the same time mass transit continues to go begging. The ASCE notes that funding of mass transit improvements has declined even as demand for mass transit has increased. Forty-five percent of American households lack any access to mass transit, the report says, and where there is access it’s often inadequate. Meanwhile many systems are reducing services and increasing fares.

In an era when politicians and voters are obsessed with cutting government spending, don’t expect to hear many mayors—or governors, senators, or representatives—calling for major infrastructure initiatives of the sort New York’s Bloomberg has proposed. (It’s been pointed out that Bloomberg won’t be in office when the bills to pay for his plans come due.) What we can expect is that rising interest rates will make infrastructure investments today significantly more costly than they have been. And don’t forget: whatever money goes to repairing infrastructure is money not spent on other public services, including, for example, schools, public parks, and aid to the poor.

In his seminal 1977 book Autonomous Technology, Langdon Winner talked about the “technological imperative,” referring to the vast web of supporting systems needed to sustain our technological commitments. As Winner put it, those commitments set in motion “a chain of reciprocal dependence” that requires “not only the means but also the entire set of means to the means.”

The automobile culture is the most obvious example. Our commitment to cars has obligated us to build and maintain massively complex fuel and repair systems to keep them running, massively complex traffic systems to keep them from running into each other, massively complex police and ambulance systems to take care of the mess they make when they do run into each other, etcetera, etcetera. All that in addition to the hundreds of billions of dollars we’ve already spent on roads, bridges, and tunnels, and the hundreds of billions of dollars more we’ll spend in perpetuity to keep those roads, bridges, and tunnels in working condition.

Under the best of conditions, all of these systems can be said to work marginally well. But as the ASCE report shows, the cracks in the seams are becoming increasingly evident, even as the costs of staving off progressive decline are becoming increasingly steep.

The bottom line: When times were good we constructed a culture based on some hugely profligate toys. While doing so we didn’t give a lot of thought to what it would take to keep the whole thing running in the long term. In other words, we grabbed the cookie that was on the table in front of us.

***

Is modern culture being overwhelmed by an epidemic of childishness? José Ortega y Gasset, writing in 1930, thought so. Annals of Childish Behavior™ chronicles contemporary examples of that epidemic. The childish citizen, Ortega said, puts “no limit on caprice” and behaves as if “everything is permitted to him and that he has no obligations.”

***

Doug Hill is a journalist and independent scholar who has recently completed a book on the history and philosophy of technology. This post also available on his blog The Question Concerning Technology.

An early strategy for making new technology feel familiar

I was thinking this morning about two subjects that don’t usually go together, skeuomorphs and morality.

A skeuomorph is a design element applied to a product that looks as if it’s functional but really isn’t. Its real purpose is to evoke a sense of familiarity and comfort. The literary critic N. Katherine Hayles cites as an example the dashboard of her Toyota Camry, which is made of synthetic plastic molded to look as if it’s stitched fabric.

Software designers use lots of skeuomorphs for their user interfaces; examples include the “pages” that seem to “turn” in e-readers and word processing programs. Hayles calls skeuomorphs “threshold devices.” They “stitch together past and future,” she says, “reassuring us that even as some things change, others persist.”
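
For the software-minded, here is a minimal sketch of the distinction (hypothetical e-reader code, not any real product’s API): both functions below turn the page identically; only one dresses the swap up as a physical page curl.

```typescript
// Both page turns do the same work; the skeuomorph is pure presentation.
type PageTurn = (el: HTMLElement, nextPage: string) => void;

// Skeuomorphic: mimic a physical page curling over before the swap,
// a purely decorative gesture meant to reassure, not to function.
const skeuomorphicTurn: PageTurn = (el, nextPage) => {
  const curl = el.animate(
    [{ transform: "rotateY(0deg)" }, { transform: "rotateY(-90deg)" }],
    { duration: 300, easing: "ease-in" }
  );
  curl.onfinish = () => {
    el.textContent = nextPage; // the actual "function" is just this line
  };
};

// Flat: no physical metaphor, just replace the content.
const flatTurn: PageTurn = (el, nextPage) => {
  el.textContent = nextPage;
};
```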

For a few months last year Apple Computer took a lot of flak from the design cognoscenti for its dedication to skeuomorphs; the linen texture that appears as background on the iPod and the wooden bookshelf used for the iBooks display were frequently cited examples. The company’s skeuomorph aesthetic was said to be a legacy of Steve Jobs, who was convinced people needed recognizable touchstones to ease them through the uncharted expanses of cyberspace.

Enough! cried the cognoscenti. The time for such reassurance has long since ended! So it was that cheers greeted the firing in October of Apple’s software leader, Scott Forstall, carrier of the skeuomorphic torch, as well as the introduction of Microsoft’s sleek new Metro Design, which doesn’t pretend to be anything other than pixels on a screen.

Apple’s iBooks display

Although I’m pretty sensitive to the look and feel of things, my thoughts about skeuomorphs this morning had nothing to do with user interfaces. Rather I was thinking that perhaps the idea of a skeuomorph might be applied outside the domain of design, specifically to the realm of ethics.

It seems likely that in an era of mass-market technology morality has become – not always, but often – a skeuomorph: a feature that’s retained for the sake of appearances rather than any practical function. Any function, that is, other than that of reassuring people that even as some things change, others persist.

These admittedly dark thoughts were prompted by reading the recent cover story in the New York Times magazine, “The Extraordinary Science of Addictive Junk Food.” Reporter Michael Moss describes in brilliant detail the lengths to which America’s biggest food and beverage conglomerates have gone to design products we can’t get enough of, literally. Their profits and our waistlines get fatter. They prosper, we don’t.

Moss opens the piece by telling the story of an extraordinary summit meeting of food industry leaders, convened in Minneapolis in 1999 by a Pillsbury executive named James Behnke. Among those in attendance were the presidents or CEOs of Kraft, Nabisco, General Mills, Procter & Gamble, Mars, and Coca-Cola.

Behnke called the meeting to discuss the implications for their businesses of the nation’s skyrocketing rates of obesity. Health officials and physicians’ groups were sounding alarms: Obesity was a health-care catastrophe in the making, especially among children, and the products sold by the men at Behnke’s meeting were a major cause of the problem.


Behnke, a chemist with a doctorate in food science, believed it was time the industry addressed the obesity issue, not only because it was threatening to become a public relations liability, but also because the health officials and physicians’ groups were right: the industry did bear some responsibility for undermining the health of its customers. His feelings were shared by a vice president from Nabisco named Michael Mudd, who presented to the assembled executives an obesity primer, describing with the help of some 114 slides the scale of the epidemic and the threat it posed to the packaged food industry.

Mudd then offered a series of recommendations. First, he said, the industry should acknowledge that its products were more fattening than they needed to be. Second, the industry should vow to do something about it. He suggested a three-step course of remedial action that began with a program of scientific research to determine what was causing people to eat more than was good for them. Once that was determined, the industry could reformulate its products to reduce their harmful ingredients and devise a nutritional code for its marketing and advertising campaigns.

It would be lovely to report that Mudd’s presentation was greeted with a standing ovation and an impassioned, unanimous vow to go forth and reform America’s packaged foods marketplace for the sake of the people. That didn’t happen. According to Moss, the next person to speak was Stephen Sanger, who at the time was leading General Mills to record profits by selling just the sort of fat-, sugar-, and salt-filled products being blamed for the obesity explosion.

A General Mills success story: a “health food” with twice as much sugar per serving as the children’s cereal Lucky Charms

Sanger wanted no part of Mudd’s remedial program, or, for that matter, his guilt. General Mills had always acted perfectly responsibly, he said, not only toward its customers but also toward its shareholders. Consumers buy what they like, and they like products that taste good. General Mills was in the business of selling products that satisfy those tastes and had no intention of doing anything else. His competitors, Sanger suggested, should do the same. On that note, Moss reports, the summit meeting ended.

That these captains of industry would proceed, with endless creativity and ambition, to fill the shelves of American supermarkets with mountains of unhealthy, wasteful, dishonest products, knowing the pernicious effects those products would likely have on the well-being of millions of consumers, was the source of my dark thoughts this morning.

It’s hard not to conclude from examples such as these – and there are many others – that in the pursuit of corporate power moral restraint becomes – not always, but often – a skeuomorph, a decorative element that pretends, for reassurance’s sake, to be functional, but really isn’t. There’s nothing new about greed, of course. The difference is the scale of mendacity advanced technologies put at our disposal, and the ease with which checks on mendacity are discarded. The only justifications required: consumers will buy it and stockholders will profit.

The divorce of technology from morality is a theme that was sounded repeatedly by the philosopher who has most influenced my thinking on matters such as these, Jacques Ellul.

As I’ve explained elsewhere, Ellul pictured technology as a unified entity that relentlessly and aggressively expands its range of influence. Ellul used the term “technique” to underscore his conviction that technology must be seen as a way of thinking as well as an ensemble of machines and machine systems. Technique includes the methods and strategies that drive those systems, as well as the quantitative mentality that drives those methods and strategies. The invention, production, distribution, and marketing of addictive junk foods are all manifestations of technique.

Jacques Ellul

The single overriding value of technique, Ellul repeatedly said, is efficiency. Worries about morality are obstacles to the attainment of efficiency, except where the brutality of efficiency’s pursuit causes concerns that might produce resistance and interference. “It is a principal characteristic of technique that it refuses to tolerate moral judgments,” Ellul wrote in 1954. “It is absolutely independent of them and eliminates them from its domain.”

Moral “flourishes” remain, he added, but only for the sake of appearances. In reality, “None of that has any more importance than the ruffled sunshade of McCormick’s first reaper. When these moral flourishes overly encumber technical progress, they are discarded – more or less speedily, with more or less ceremony, but with determination nonetheless. This is the state we are in today.”

At the time of the food summit in Minneapolis, it’s clear James Behnke hadn’t fully acclimated himself to those conditions. Stephen Sanger apparently had.

(For a related post, see Autonomy continued: Technology or Capitalism?)

Doug Hill is a journalist and independent scholar who has recently completed a book on the history and philosophy of technology. This post also available on his blog The Question Concerning Technology.

Given that we’re not in the habit of thinking too much about where our technological passions might lead us, I’ve been heartened over the past year to see an unusual willingness to confront the potentially devastating impact of the robotics revolution on human employment.

It was a question that was hard to avoid, given the global recession and the widening gap between rich and poor. It’s obvious that rapid advances in automation are offering employers ever-increasing opportunities to drive up productivity and profits while keeping ever-fewer employees on the payroll. It’s obvious as well that those opportunities will continue to increase in the future.

Some credit for opening up the conversation on the implications of this progression goes to two professors at MIT, Erik Brynjolfsson and Andrew McAfee. Their book Race Against the Machine, published late in 2011, had a man-bites-dog quality that attracted a lot of attention. We don’t necessarily expect experts from the temple of technology to question whether technology might be leading us in directions not entirely favorable to humankind.

“The tone of alarm in their book is a departure for the pair,” said the New York Times, “whose previous research has focused mainly on the benefits of advancing technology.”

A series of similar assessments appeared at intervals through the year, among them an essay by Christopher Mims in MIT’s Technology Review in May (“Is Automation the Handmaiden of Inequality?“), an Atlantic.com commentary by Moshe Y. Vardi, a professor of computational engineering at Rice University, in October (“The Consequences of Machine Intelligence“), and three New York Times pieces in December, two by Paul Krugman (“Rise of the Robots” and “Robots and Robber Barons“), and one by Brynjolfsson and McAfee (“Jobs, Productivity and the Great Decoupling“).

Also in December, the perennial technological enthusiast Kevin Kelly weighed in with his views on the automation revolution in a Wired cover story (“Better Than Human,” published online in December, in print in January). Although Kelly skipped over the short-term challenges that worry Brynjolfsson and McAfee, in the end his conclusion was the same as theirs: The future for workers depends not on competing with machines but on learning to leverage the advantages they offer to get ahead. “This is not a race against the machines,” Kelly wrote. “If we race against them, we lose. This is a race with the machines….Let the robots take the jobs, and let them help us dream up new work that matters.”

Probably the oddest recent entry in this discussion, and the one that intrigued me most, came not from an economist or journalist, but from a press release issued by a company hoping to capitalize on the robot revolution. I’d like to dwell a bit on the details of that release here, seeing as how it had some outstanding man-bites-dog qualities of its own.

The release was issued in November by a San Francisco-based startup called Momentum Machines. It announced the arrival of a hamburger-making machine that will revolutionize fast food as we know it. (The robot pictured at the top of this piece is from a different company.)

The surprise isn’t that the release promises a product with revolutionary benefits. The surprise is that it acknowledges those benefits might be accompanied by some troubling side effects. In doing so the release embodies with unusual clarity the tension that exists between those two conflicting outcomes.

Momentum Machines claims its hamburger machine can churn out 360 fully prepared and packaged hamburgers an hour. Not just ordinary burgers, but gourmet burgers, expertly cooked and seasoned to order. The quality of the product, however, isn’t the principal value the release seeks to promote.

Its headline centers instead on the machine’s astonishing economic benefits. “The Restaurant Industry Is The Most Labor Intensive Industry In The Country,” it reads. “Our Technology Can Save The QSR [Quick Service Restaurant] Industry $9 Billion/Year In Wages.”

The release goes on to promise that Momentum Machines’ machine (they’ve yet to come up with a name for it) “replaces all of the hamburger line cooks in a restaurant. It does everything employees can do except better.”

The unexpected twist appears in a three-paragraph statement at the release’s end.

Apparently recognizing that its earlier promise to replace every line cook in the business carries some unpleasant implications, for line cooks if not their employers, the company expresses a desire to help retrain people displaced by its technology. To help smooth their “transition,” the company says it will offer opportunities for technical education at a discount. The suggestion is that fry cooks will be transformed into engineers, after which they will participate in further automating the fast food industry.

The release then ventures into territory seldom explored in the annals of public relations: economic theory.

“The issue of machines and job displacement has been around for centuries,” it says, “and economists generally accept that technology like ours actually causes an increase in employment.”

This increase is the result, the release says, of three factors: New employees are hired to build the robots; the robots allow the company to expand its “frontiers of production,” which requires more employees; and automation produces savings that can be passed along to customers, thereby stimulating the economy.

“We take these issues very seriously,” the release says, “so please feel free to tell us how we can help with this transition.”

The release also encourages anyone with questions to get in touch, so I did. I asked for any references the company could provide to support its contention that economists “generally accept” that technologies increase employment, and also for more information on the retraining assistance the company planned to offer displaced employees.

To my surprise, a couple of weeks later I received a response from the Founder and President of Momentum Machines, Alexandros Vardakostas. His note was cordial, but not very enlightening. It provided no direct answers to my questions, only a link to a Wikipedia article on technological unemployment, aka “the Luddite Fallacy.”

“Hi Doug. Hope all is well you,” Vardakostas wrote. “Read this to learn more. Warm regards. Alex.”

True enough, Wikipedia’s article does describe the theories of a number of economists who agree that technological advance ultimately leads to an increase in employment. It can’t be considered an unqualified endorsement of that position, however, given that Wikipedia says its “factual accuracy” is disputed, and that it needs “additional citations for verification.” The article also pays little attention to whether today’s revolutionary advances in automation may be creating changes in the economics of labor that render previous theories, even if they were historically true, obsolete. That’s the question asked by most of the articles I’ve cited above.

Amazon warehouse (Reuters)

As we rush toward a super-automated future, plenty of other questions remain unanswered. Brynjolfsson and McAfee, for example, propose innovation through entrepreneurship as a leading solution to employment stagnation, and toward that end recommend various deregulatory measures that will allow new businesses to flourish unencumbered.

I’m no economist, but it seems fair to ask whether measures of the sort they prescribe might simultaneously open the way for even greater exploitation or elimination of labor. Automation typically puts more power in the hands of management, after all, and there’s no guarantee that the vast majority of post-automation jobs will be any more satisfying, economically or spiritually, than the jobs they replace.

For example, Brynjolfsson and McAfee cite as models for the future the thousands of entrepreneurs now exploiting the new opportunities for employment offered by the likes of eBay, Amazon Marketplace, Apple’s App Store, and Android Marketplace. Even if we take for granted that such traditional benefits as health insurance, vacation pay, maternity leave, and pensions are off the table, one wonders how many of those entrepreneurs are making what used to be considered a middle-class income.

Not many, at least in the apps market, according to a recent article in the New York Times. The article makes it clear that winning big in that race with the machines is only slightly more likely than hitting it big in the lottery, and that the people who are really cashing in on the app market are the stockholders of Apple and Google. The headline tells the story: “As Boom Lures App Creators, Tough Part Is Making a Living.”

That article demonstrates perfectly a phenomenon Brynjolfsson and McAfee do address: The emergence, due to the unprecedented economies of scale offered by various technologies, of a super-star marketplace, one in which a few people become fabulously wealthy while everyone else scrambles desperately to break into their golden circle. Kevin Kelly’s vision in this regard is positively Panglossian. Let the machines take over the jobs, he says, while we dream of new work that matters. Take it from one who knows: You don’t get paid for dreaming.

Perhaps the most pressing question is whether an economy shaped by super-automated techno-entrepreneurs will be sustainable. We have an abundance of evidence today that another historical truism of economic theory – that growth solves every problem – may also be obsolete. Certainly the environmental disasters we’ve created suggest that perhaps the time has come to consider a different option: restraint.

Momentum Machines, for example, believes that technologies like its hamburger machine will open new “frontiers of production.” Forgive me, but I’m not sure the frontiers of production opened by previous advances in fast food technology have proved entirely salubrious.

These are some of the reasons why I was hoping Momentum Machines might provide more detailed answers to my questions. I’ve sent Alex Vardakostas a second email, asking for elaboration and passing along links to the aforementioned articles. So far no response, which could mean he’s uninterested in further discussion, or simply that he’s too busy opening new frontiers of production to entertain quarrelsome emails from bloggers.

Doug Hill is a journalist and independent scholar who has recently completed a book on the history and philosophy of technology. This post also available on his blog The Question Concerning Technology.


Bloomberg News reported earlier this week that Google used creative accounting to avoid paying something like $2 billion in corporate income taxes this year.

Much of that savings was realized by funneling nearly $10 billion in profits to shell companies in Bermuda, which has no corporate income tax. Like other multinationals, Google uses maneuvers such as the “Double Irish” and the “Dutch Sandwich” to get its revenues safely out of the countries in which they’re made, relatively untouched. By doing so, Bloomberg says, the company was able to cut its overall tax rate nearly in half.
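
A quick back-of-envelope reading of those figures (my own rough calculation, not Bloomberg’s methodology) shows how the halving works:

```typescript
// Figures as quoted in this post; the implied-rate arithmetic is mine.
const shiftedProfitUSD = 9.8e9; // profits routed to Bermuda shell companies
const avoidedTaxUSD = 2.0e9;    // corporate income taxes avoided

// Bermuda's corporate income tax rate is zero, so the avoided amount
// implies the rate those profits would otherwise have faced.
const impliedRate = avoidedTaxUSD / shiftedProfitUSD;
console.log(`implied rate on shifted profits: ${(impliedRate * 100).toFixed(1)}%`); // ~20.4%
```

Wipe a rate like that off the largest slice of your overseas profits and an overall tax bill can indeed fall by nearly half.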

Bloomberg is correct when it says that reports such as these fuel anger at multinational corporations for not paying their fair share for the general support of the commonweal, but I think that outrage is shortsighted.

Who says Google owes anything to the nations of the world? Is it right to have to pay for benefits you don’t get or need? Isn’t it Google’s duty to maximize its profits to the greatest extent possible, and aren’t governments and the taxes that support those governments only obstacles in pursuit of that goal?

In support of these arguments, here are 11 good reasons why Google should not be forced to pay any taxes whatsoever, if it can legally avoid doing so:

1. Google built the Internet without any help from any government, domestic or foreign.

2. Google constructed and maintains the roads, buses, bridges, tunnels, trains and air traffic control systems that make it possible for its employees around the world to get to work.

3. Google provides its own police forces and criminal justice systems so that its employees and their families around the world are reasonably secure from robbery, assault and other crimes. It also provides its own national defense and homeland security departments.

4. In order to ensure an ongoing supply of customers, Google educates all of the children in all of the countries in which it operates, from kindergarten on.

5. Google picks up the garbage, cleans the streets and provides sewer systems in all the communities in which it does business.

6. Google inspects and regulates the foods and beverages its employees and their families consume to make sure they’re reasonably safe from harmful ingredients and contamination. The company also approves, inspects and regulates any medications its employees and family members may need to take in order to maintain their health.

7. Google invented, built and regulates the satellite systems that make many of its businesses possible.

8. Google established and maintains its vast network of international operations with no help whatsoever from the trade and diplomatic services of any government, American or otherwise.

9. Google established the democracies that have allowed free enterprise in general and technological capitalism in particular to flourish.

10. Google’s executives don’t have enough money.

11. Google doesn’t do evil.

This post also available on Doug Hill’s blog The Question Concerning Technology.


Brad Pitt’s latest movie, which opens today, is being described as an attack on capitalism, at least as it’s currently practiced in America.

When “Killing Them Softly” premiered at Cannes last spring, an article in the Los Angeles Times called it a “post-Occupy” film and “what the documentary ‘Inside Job’ might look like if it was a fictional feature.”

“Inside Job,” you may recall, is director Charles Ferguson’s Oscar-winning examination of how Wall Street speculation and duplicity led to our current economic crisis. The action in “Killing Them Softly” takes place during the stock and housing market crashes that got the current crisis rolling; visible in the background are clips of presidential candidates Obama and McCain making promises (still unfulfilled) of economic reform. Director Andrew Dominik’s underlying theme, according to the Times, “is that U.S. capitalism is deeply flawed, and that government, whether Democrat or Republican, has let down its people.”

I mention this here because “Killing Them Softly” demonstrates a theme I’ve written about in this space: the symbiotic relationship between capitalism and technology. It also demonstrates the contradictions inherent in trying to use the tools of that symbiotic relationship to attack it.

“Killing Them Softly” was financed by Megan Ellison, the daughter of Larry Ellison, the co-founder and chief executive officer of the software company Oracle. The third richest man in America, Ellison is reportedly worth more than $35 billion, a fortune produced by that magically powerful combination of – you guessed it – technology and capitalism. Brad Pitt, of course, is one of the biggest movie stars in the world, an icon whose stature is a product of that same magical combination (in addition to good looks and acting talent).

As I noted in my earlier commentary, you can argue that capitalism is the driving force behind technology or you can argue that technology is the driving force behind capitalism. That’s what I mean when I say the relationship between the two is symbiotic. Sometimes technology stimulates capitalism, other times capitalism stimulates technology. At their present state of development in advanced technological/capitalist societies, neither could exist without the other.

I’m an admirer of Brad Pitt, who, like George Clooney, has gone out of his way to use his Hollywood clout to make meaningful movies, both as works of cinematic art and as commentaries on important issues of the day. Not every film Pitt and Clooney make fits that category, but they’re obviously trying. The problem, as I’m sure they know, is that those films owe their existence to a system that’s responsible, in many ways, for the injustices they’re trying to address. If the films are successful they also feed that system.

There’s also a contradiction implicit in addressing real-life issues through a technological medium that sells dreams. “Killing Them Softly,” says the Times, “is a hit-man movie, albeit an arthouse one, and contains many of the schemes and stylized violence you might expect from a film with that label.” This is reminiscent of “The Godfather,” surely one of the most profitable anti-capitalist films in Hollywood history. I’m not saying that art can’t have an impact. I am saying that we don’t strike a meaningful blow against the empire by spending ten dollars or more to watch a make-believe assassin pretend to kill people.

My favorite example of this contradiction is the DreamWorks logo, a silhouette of a boy with a fishing pole, sitting, one imagines, by a peaceful lake on a summer’s afternoon, lost in a reverie. This, of course, is exactly the sort of old-fashioned pastime that DreamWorks, with all the technological and marketing power at its disposal, is doing its best to make obsolete. Boys won’t be spending their summer afternoons lolling peacefully by lakes if DreamWorks has anything to say about it. Rather, they’ll be sitting inside multiplexes in shopping malls, hypnotized by reveries conjured for them by the latest extravaganzas of computer animation.

This post also available on Doug Hill’s blog The Question Concerning Technology; it is an updated version of an essay he posted last May, when “Killing Them Softly” premiered at the Cannes Film Festival.

As many have noted, technology – specifically, email accounts – played a central role in the ongoing scandal involving the resignation of CIA Director David Petraeus. “Harassing” emails sent to socialite Jill Kelley led to the FBI’s discovery of emails that revealed Petraeus’ affair with Paula Broadwell; other emails led to the discovery of questionable exchanges between Kelley and another top-ranking official, General John R. Allen; subsequent searches found classified documents on the hard drives of individuals who weren’t authorized to have them.

With the indispensable assistance of the media, reverberations have been ricocheting furiously up and down the corridors of power and gossip from Washington and Langley to Florida, Afghanistan, and Libya since the scandal broke last Friday. It’s not the first time these elements have combined to produce a sensation, but it’s the messiest we’ve seen lately.

The Petraeus scandal demonstrates the dynamics of a phenomenon known in organization theory as the “tightly coupled system.” The concept was introduced by Charles Perrow in his book Normal Accidents: Living with High-Risk Technologies, first published in 1984. Computer programmers use the term to describe systems in which central processing units share some or all of the system’s memory and input/output resources.

The elements at play in the Petraeus scandal are more heavily weighted toward the human than the examples Perrow deals with in his book, which include nuclear and petrochemical plants, airplanes, mines, and weapons systems. Nonetheless, because his emphasis is so strongly systemic, and because the systems in question always rely on some combination of technology and human beings, his ideas can be fairly applied.

Interconnections too complicated to imagine

As the name implies, tight coupling describes a system in which an intimate connection exists, intentionally or not, between its component parts. This connection creates a potentially volatile interdependence as changes in one element of the system quickly reverberate throughout, setting off a chain reaction of associated effects. A simple example is a freeway at rush hour, when a stalled car in one lane causes a backup that stretches for miles.
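
For the programmers in the audience, here is a toy model of that freeway (my construction, not Perrow’s). The only variable separating the tightly coupled system from the loosely coupled one is slack: with none, a single stall propagates undiminished down the line.

```typescript
// Toy model: car 0 stalls for 60 seconds; each following car can absorb
// up to `slackSec` of the delay ahead of it before being delayed itself.
function delays(cars: number, slackSec: number): number[] {
  const delay: number[] = new Array(cars).fill(0);
  delay[0] = 60; // the triggering event: one stalled car
  for (let i = 1; i < cars; i++) {
    delay[i] = Math.max(0, delay[i - 1] - slackSec);
  }
  return delay;
}

// Rush hour (tightly coupled, zero slack): the fault propagates intact.
console.log(delays(5, 0));  // [60, 60, 60, 60, 60]
// Light traffic (loosely coupled, 20s of headway): the fault damps out.
console.log(delays(5, 20)); // [60, 40, 20, 0, 0]
```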

The stalled car example demonstrates, as does the Petraeus scandal, that in tightly coupled systems small events can quickly mushroom into crises on a different order of magnitude. After-the-fact accident analyses, Perrow says, consistently reveal “the banality and triviality behind most catastrophes.”

Perrow writes somewhat ruefully that all too often it’s the human factor that introduces the fatal flaw into technological systems that are, because of their complexity, already primed for error. “Time and again warnings are ignored, unnecessary risks taken, sloppy work done, deception and downright lying practiced,” he says. “Routine sins” plus technology equal “very nonroutine” consequences.

Perrow also stresses that, as careful as we think we are, it’s impossible to anticipate every consequence of any action taken within a tightly coupled system – the potential reverberations are beyond our comprehension. What we see isn’t only unexpected, he adds, it’s often, at least for a while, “incomprehensible.” This can be true either because we’re not aware of the consequences as they gather momentum, or because we’re aware of them but can’t bring ourselves to believe they’re really happening. One assumes the principals in the Petraeus scandal have experienced both conditions.

 

Note: An earlier essay by Doug Hill discussed the part that the dynamics of tightly coupled systems played in the Challenger space shuttle disaster.

“Everything is Connected” is a recurring feature named in honor of the late Barry Commoner’s four laws of ecology: Everything is connected to everything else, everything must go somewhere, nature knows best, and there is no such thing as a free lunch.

Photo Credit: Washington Post/ISAF via Reuters; Image, physicsworld.com

This post also available on Doug Hill’s blog The Question Concerning Technology.

©Doug Hill, 2012

There are some Big Ideas in the philosophy of technology that I find very helpful in understanding what’s going on in the world of machines today. One of those ideas is a concept known as “technological momentum.”

Technological momentum is a phrase coined by the historian Thomas Parke Hughes to describe the tendency of successful technological systems to become entrenched over time, growing increasingly resistant to change. This resistance is a product of both physical and psychological commitments. We invest materially in factories and emotionally in careers. Equipment and infrastructure accumulate and intertwine; dependence and force of habit build.

Professor Hughes’ label has its problems, for reasons I’ll explain, but before I do let me note two recent examples of technological momentum in action. Both, as they say, are ripped from the headlines.

Carol Bartz

The first example involves comments made earlier this month at a Fortune magazine forum by Carol Bartz, fired last year as president and chief executive officer of Yahoo. According to the New York Times, at one point Bartz was asked if she had any advice for her successor in those roles, Marissa Mayer. Bartz replied that Mayer shouldn’t kid herself about quick turnarounds at a company as large as Yahoo. When informed of proposed changes in policy, she recalled, staff members there typically responded with agreement to her face and defiance in private. Bartz came away from the experience amazed by “how stuck individuals can be, much less 14,000 people.”

“It’s very, very hard to affect culture,” she said. “And you can get surprised thinking you’re farther down the path of change than you really are because, frankly, most of us like the way things are.”

The second example involves an even bigger tech brand, Microsoft. In August Vanity Fair magazine ran a lengthy dissection of the company’s creative decline under the stewardship of its Chief Executive Officer, Steven Ballmer.

Steven Ballmer

The article portrays Ballmer presiding over a “lumbering” behemoth, “pumping out” tried and true products (Windows and Office) while failing to exploit opportunities (search, music, mobile) that have turned other companies (Google, Apple) into global icons. “Every little thing you want to write has to build off of Windows or other existing products,” a software engineer told reporter Kurt Eichenwald. “It can be very confusing, because a lot of the time the problems you’re trying to solve aren’t the ones that you have with your product, but because you have to go through the mental exercise of how this framework works. It just slows you down.”

That comment suggests why Professor Hughes’ “technological momentum” label isn’t ideal. Momentum implies movement, but often as not the dynamics he’s describing lead to paralysis. Computer programmers refer to the acquired intractability of older software systems as problems of “legacy” or “lock-in,” terms that may more accurately convey the obstinacy involved.
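
Here is a toy sketch of what lock-in feels like from the inside (hypothetical names, not Microsoft’s actual APIs): the feature itself is trivial; threading it through the legacy framework that everything else already assumes is not.

```typescript
// The legacy layer: a circa-1995 registration scheme with global string
// IDs that thousands of existing components already depend on.
interface LegacyWidgetRegistry {
  registerWidget(classId: string, init: () => void): void;
}

// The modern feature we actually want to ship is one line...
const modernSearchBox = () => console.log("rendering search box");

// ...but shipping it means conforming to the old framework's conventions,
// because abandoning the framework would break everything built on it.
function ship(registry: LegacyWidgetRegistry): void {
  registry.registerWidget("com.example.searchbox.v1", modernSearchBox);
}

// Minimal stub so the sketch runs end to end.
const stubRegistry: LegacyWidgetRegistry = {
  registerWidget: (id, init) => {
    console.log(`registered ${id}`);
    init();
  },
};
ship(stubRegistry);
```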

The fact that a software program can be an obstacle to change underscores a point touched on earlier: technological momentum is about more than stubborn geezers stuck in their ways. Technological systems become entrenched because they’re made out of real-world stuff. Companies can replace operating systems and assembly lines, but not without a lot of energy and expense, and inevitably the replacements have to incorporate some of what came before. An entire society’s commitment to a technology becomes almost impossible to reverse. America’s highway systems won’t be dismantled any time soon; the problem is keeping them repaired.

Technological momentum tells us that technological systems tend to be self-perpetuating. There’s irony in that because the quality we typically associate with technology is progress, not stagnation. In fact both things are true: technological systems can be both disruptive and obstructionist, sometimes both at the same time. It’s also true, as any football fan knows, that momentum – forward momentum, that is – can be lost or regained. Steve Jobs did both at Apple, and Steve Ballmer is in the process, with the introduction of a new operating system, a new music service, a new phone system, and a new tablet computer, of trying to pull off the same trick at Microsoft.

The greatest example of technological momentum is technology itself. Technology is astonishingly creative within its own realm, but it’s incapable of recognizing any realm outside itself. To the degree that we fail to recognize that fact – which these days is almost completely – we surrender ourselves to the technological paradigm. Even sane people are beginning to think that the only way we’ll be able to save ourselves from environmental catastrophe is by the invention of some ingenious technique. Individual ambitions aim in the same direction; everyone’s out to make a dent in the universe on the scale of Gates or Zuckerberg or Jobs. These dreamers may consider themselves consummate innovators, but their thinking is still trapped in a box labeled “Technology.”

This post also available on Doug Hill’s blog The Question Concerning Technology.

Image credits: Bartz, Tony Avelar/Bloomberg via Getty Images; Closed Mind illustration: Harry Campbell

There are any number of ways to frame the apocalypse, I suppose. As one who spends a lot of time thinking about technology, mine is a phenomenon known as “technological autonomy.”

I’m convinced that technological autonomy may be the single most important problem ever to face our species and the planet as a whole. A huge statement, obviously, but there’s plenty of recent evidence to back it up.

Briefly stated, technological autonomy describes a condition in which we’re no longer in control of our technologies: they now function autonomously. This isn’t as absurd as it may sound. The idea isn’t that we can’t switch a given machine from “on” to “off.” Rather, technological autonomy acknowledges the fact that we have become so personally, economically, and socially committed to our devices that we couldn’t survive without them.

Technological autonomy is probably the most controversial theory in the rarefied but growing field known as the philosophy of technology. Paul T. Durbin, a professor emeritus of philosophy at the University of Delaware, has written that the discipline is roughly divided between those who interpret technology narrowly and those who interpret it broadly. If you think of technology as tools, period, scholars in the narrow camp agree with you. They tend to have engineering backgrounds and become irritated at any suggestion that machines have taken on a life of their own. “It is not the machine that is frightening,” says Joseph Pitt of Virginia Tech, “but what some men will do with the machine; or, given the machine, what we fail to do by way of assessment and planning.”

Scholars in the broad camp, who often come from philosophy or sociology backgrounds, say it isn’t that simple. They insist that technology must be seen systemically, that it includes not only machines but also the social relationships and economic structures in which machines flourish. As Thomas Misa of the University of Minnesota puts it, technology is “far more than a piece of hardware”; it is, rather, “a shorthand term for the elaborate sociotechnical networks that span society.”

From that perspective we can see that controlling our machines involves much more than just deciding, “Okay, we’re not going to do that anymore.” All of us, whether we like it or not, are enmeshed in a massively complex web of interconnected, interdependent technologies and technological systems. To extricate ourselves from those systems would inflict massive, probably irreparable, damage on our way of life. I use the term “de facto technological autonomy” to suggest that while we can literally turn off our machines, as a practical matter we are unable to do so.

The people of Japan have learned a lot about technological autonomy since the tsunami hit the Fukushima reactors. They’d love to get rid of nuclear power altogether, but their leaders are telling them that to do so invites economic disaster. In much the same way we Americans, along with most of the rest of the developed world, are trapped by our automobiles. We know that for lots of reasons we’d be better off if we stopped driving them tomorrow, but we can’t. If we did, life as we know it would collapse, since in one way or another we depend on the internal combustion engine for our jobs, our food, and virtually everything else we need. It’s impossible to overestimate the implications of that particular dilemma, politically, economically, militarily, and – most important – environmentally.

The reasons I think technological autonomy is the most crucial issue in history are contained in several reports I’ve come across in recent months. They’re collected on my hard drive in a folder labeled “The End of Civilization.” Together they testify, explicitly or implicitly, to a growing consensus in the scientific community that we humans are not going to find it within ourselves to act soon enough or dramatically enough to forestall catastrophic climate change. If the battle is about bringing our machines to heel, it’s pretty certain at this point we’re going to lose.

An example appeared in a New York Times essay in July by Roger Bradbury, an ecologist at the Australian National University. The world’s coral reefs, sources of food for millions of human beings, have become “zombie ecosystems,” he wrote. They will collapse entirely within a human generation. Although the evidence that this is happening is “compelling and unequivocal,” scientists and politicians alike have consistently “airbrushed” the truth. There’s hope to save the reefs, we’re told, if only we take prudent action. Forget it, Bradbury said. There isn’t any hope.

The scent of doom similarly emanated from a report in England’s Guardian newspaper in February. “Civilization faces a ‘perfect storm of ecological and social problems,’” the headline read. That was the conclusion of a group of 20 scientists who had all been winners of the Blue Planet prize, an international award the Guardian described as “the unofficial Nobel for the environment.” The paper issued by the group made repeated use of the word “unprecedented,” as in this passage:

“In the face of an absolutely unprecedented emergency, society has no choice but to take dramatic action to avert a collapse of civilization. Either we will change our ways and build an entirely new kind of global society, or they will be changed for us.”

A month later the journal Science published a paper signed by an international group of 32 experts who specialize in environmental governance. “Societies must change course to steer away from critical tipping points in the Earth system that could lead to rapid and irreversible change,” the paper said. “Incremental change is no longer sufficient to bring about societal change at the level and with the speed needed to stop earth system transformation.”

Yet another apocalyptic paper soon followed, this one by a group of 22 scientists from a variety of fields, writing in the June issue of the journal Nature. Entitled “Approaching a state shift in Earth’s biosphere,” the paper warned that the planet’s environmental systems were nearing breakdown on any number of fronts and that those “tipping points” would likely be sudden and dramatic rather than gradual. The Los Angeles Times quoted lead author Anthony Barnosky, a professor of integrative biology at UC Berkeley, as comparing the likely severity of the environmental shifts we’re facing to the effects of an asteroid hitting the planet.

That the world seems to have taken little to no significant notice of these warnings strikes me as utterly astonishing. It’s as if a family has been told that their house is on fire and they remain glued to their TV shows and video games, potato chips and soda at hand. Certainly climate change hasn’t been anything close to a central issue in the current presidential campaign. The only conceivable explanation is that we are simply unable to contemplate the scope and depth of the changes that would be required to forestall the catastrophes the scientists are predicting. That, by definition, is a condition of de facto technological autonomy.

The volume of scientific alarms increased in recent months partly in anticipation of Rio+20, an international colloquy on the environment held in Rio de Janeiro in June. Officially the gathering was named the United Nations Conference on Sustainable Development; the nickname was a nod to the fact that it convened on the twentieth anniversary of the 1992 Earth Summit, also held in Rio. At that meeting a global “blueprint” was adopted that would supposedly set the nations of the world on the path to a saner environmental future.

As it’s turned out, the two Rio conferences can be considered benchmarks on our journey over the environmental cliff. Global warming, among other sources of degradation, has only accelerated since the first one, and nothing emerged from the second one to suggest we’ll find a way to reverse that trend anytime soon. Kumi Naidoo, executive director of Greenpeace International, called the conference as a whole “a failure of epic proportions” and its rambling, inconclusive final report “the longest suicide note in history.”

Our ongoing impotence in the face of climate change prompted one of our better-known environmental activists, Bill McKibben, to publish an angry and pessimistic jeremiad in the August issue of Rolling Stone. He spent a few thousand words documenting the latest irrefutable evidence that disaster’s approach continues unimpeded while the “charade” that we’re actually dealing with it plays on, as it has for decades.

“Since I wrote one of the first books for a general audience about global warming way back in 1989,” McKibben said, “and since I’ve spent the intervening decades working ineffectively to slow that warming, I can say with some confidence that we’re losing the fight, badly and quickly – losing it because, most of all, we remain in denial about the peril that human civilization is in.”

The conclusion is inescapable: some challenges are just too difficult to face. Controlling the machines we’ve unleashed seems to be one of them.

This post is also available on Doug Hill’s blog, The Question Concerning Technology, along with his previous essays on the topic of technological autonomy.