I attended a viewing and panel discussion for the final episode of the Jeopardy! IBM Challenge, in which IBM's Watson supercomputer beat reigning champions Ken Jennings and Brad Rutter in a three-night challenge of standard Jeopardy! games. Check out my previous post to read more about the tech behind Watson and what IBM hopes to do with this impressive technology. This post will focus on the public perception of Watson, and on what it means to have a technology credited with producing very accurate conclusions from complex data.

The panel discussion and viewing event was hosted at Rensselaer Polytechnic Institute in Troy, NY. In the interest of full disclosure: I am currently enrolled as an M.S./Ph.D. student in RPI's Science and Technology Studies Department. It's also worth noting that principal investigator David Ferrucci, research scientist Chris Welty, and senior software engineer Adam Lally are all RPI alumni.

During the preceding reception, I asked a few RPI students two questions: 1) what they foresaw as the first application of Watson, and 2) whether or not they were afraid of Watson. A group of three guys, Alex, Sean, and Thomas, wanted to see this technology replace the Google algorithm or WebMD. Two women, Anna and Karen, repeated what they'd heard the previous night: that Watson would be a tool for dealing with "information overload." The second question left all five students puzzled. How could a machine that sorts data be evil?

To me, Watson is dangerous because of how people react to it, not because of what it does. The panel, which consisted of Adam Lally, Chris Welty, and several others, was asked whether humans are capable of "trusting a machine." Dr. Lally's response was, "The confidence values build trust." Chris elaborated by noting that Watson provides a precise percentage of certainty, while a human will most likely just say, "I'm certain." The panel even collectively considered the possibility of government officials asking a Watson-like computer how to solve the economic crisis.

When someone came to the microphone and asked the panel whether or not this was a technology that they should be making, the panelists looked generally confused. Chris jokingly (I think) said, "Make sure I'm the one in power." The head of RPI's Cognitive Science Department concluded that as long as an embodied Watson-like machine in a combat scenario had some sort of "acceptable risk" algorithm, it would be ethically sound. None of the panelists believed that "the singularity" was a realistic probability worthy of discussion.

Overall, I think we need to be worried about who has access to what I would call a "Truth Machine." As long as it retains its reputation as a fount of 98%-certain Truth, access to the machine (and the credibility it bestows) will remain with power elites. Will these machines "prove" trickle-down economics? Or will the most efficient solution be a command economy with a central executive? A less radical question, perhaps: Will this machine be used to end poverty, or to calculate optimal mutual fund portfolios?


This computer isn't connected to the internet. It takes up an entire room, and it's made by IBM. This sounds like the kind of technology you would find in a 1980 edition of Compute! Magazine. Instead, Engadget has been following the story in the traditional 21st-century manner of tech news coverage: live blogging with photos and under-10-minute video interviews. The new computer making news is Watson [official IBM website for the Watson project], a new 80-teraflop supercomputer meant to answer natural language questions. It was demoed last Thursday at IBM's research facility in Armonk, NY. Watson is being tested in the most grueling tournament of fact retrieval known to humankind: it is competing in several games of Jeopardy! against reigning champions Ken Jennings and Brad Rutter.

IBM intends to commercialize the technology by selling it to large medical and data industries that need to provide lots of seemingly routine answers to questions from a wide array of topics. By developing a system that can understand the subtlety of human language (with all of its puns, idiomatic expressions, and contextual meaning), data becomes retrievable in a very human way.

Consider Clippit (more commonly referred to as "Clippy"), the smug, ineffectual, anthropomorphized paper clip from Microsoft Word circa 1998. The bane of most office workers' existence, this "office assistant" would monitor your work and try to help you with writing a letter or printing some labels. CNET ranked Clippy (and Microsoft Bob) among the worst products of the decade. Clippy has been removed from all Microsoft products since 2007, but it remains the "Edsel" of the computer industry.

Clippit was a failure largely because it was so bad at figuring out what you wanted. The interface was meant to be welcoming and "natural," but it was more like a broken record player stuck on a song you didn't want to hear. Watson isn't an enormous supercomputer by today's standards (it wouldn't even rank on the Top500 list), but it is still several years away from individual commercial use. While we wait for Moore's law to do its job, we can contemplate the implications of natural language inputs. IBM engineers tout Watson as the first step toward the computers on Star Trek: massive (and invisible) computers that are able to understand virtually any natural language command.

In reality, this machine could be your next doctor. Describe your symptoms, swab your mouth, and wait for Dr. Watson to come back with the test results. It could be the ultimate customer service representative: a worthy opponent in your battle to speak with a real level-two service technician. As a twenty-something, I look forward to complaining about how no one works for the answers to their questions. "In my day, we Google searched our aches and pains until we found a WebMD article that vaguely sounded like what we had, and we were happy!"

The Washington Post ran an article last Sunday about the Air Force's new surveillance drone. The bot can hang in the air for weeks, using all nine of its cameras to provide a sweeping view of a village. It's a commanding officer's dream come true: near-total battlefield awareness. Recording the data, however, is only half the battle. This vast amount of real-time data is almost incomprehensible. No one can make sense of that much visual data unaided by some sort of curation device. There is, however, an entire industry focused on providing viewers with up-to-the-second live coverage of large, complex environments: sports entertainment.

Pro sports have always been on the cutting edge of video recording: cameras can show an entire football field and then, with a swift camera change, immediately shift focus to follow a fast-moving ball into the hands of a running receiver. The finished product is a series of moving images that provide the most pertinent data, at the right scale, as it happens.

The Pentagon is adapting ESPN's video tagging technology to make sense of battlefield surveillance. Need a replay of every car bomb detonated via cell phone in a neighborhood? It's as easy as replaying Brett Favre's last three incomplete passes. But even once the data is organized into piles, it's still relatively unmanageable. Imagine a Google search result that didn't rank by relevance. Almost worthless.

To fix this problem, military analysts turned to a media phenomenon that has mastered the art of finding relevance where there is none: reality TV. Using similar editing and searching techniques, generals can call up the best surveillance coverage.

These video management systems are a defining characteristic of what we have come to call “The Information Age.” Important global institutions and resources are built and maintained using identical technologies and organizational schemes. In other words, state surveillance, professional sports, war, entertainment, prisons, and reference materials are all beginning to look like each other: similar means to different ends.

This may seem like a minor point. After all, corporations have always swapped seemingly unrelated business practices. Taylorism, the second-by-second regimentation of workers' movements, has spread from Ford's factories to McDonald's kitchens. What gets scary is the simple laws that drive the complexity of it all. As Steven Levy writes in the latest issue of WIRED,

“Today’s AI doesn’t try to re-create the brain. Instead, it uses machine learning, massive data sets, sophisticated sensors, and clever algorithms to master discrete tasks. Examples can be found everywhere: The Google global machine uses AI to interpret cryptic human queries. Credit card companies use it to track fraud. Netflix uses it to recommend movies to subscribers. And the financial system uses it to handle billions of trades (with only the occasional meltdown).”

These algorithms are understood by few but relied upon by billions. They fight our wars, cure our diseases, and entertain us on Sunday nights. The "occasional meltdowns" can only be seen by those who understand the most complex of codes. AI won't be contained in a single physical being; it'll be in the cloud. Our collective fears over self-aware machines rising up against us may give us too much credit. They only need to fail at their own assigned tasks in order to win.

Manipulation and willful ignorance of these systems on the part of dominant groups may become the new method of control. Just as teenagers blame poor cell phone reception for not calling their parents on time, a government can point toward the burdensome upgrades needed to prevent the inevitable false positives of automated surveillance. The teenager and the government both use the limits of technology to negotiate increased power.

Bentham's panopticon worked because prisoners could not see whether or not they were being watched by guards. In the twenty-first century, the central tower is never manned; it is automated. Foucault reminds us that the state works much like the panopticon, but what would he say about a nation surveilled by learning networks? It no longer matters whether the tower is manned, because everything is recorded and available for instant replay.

This over-surveillance will become part of an individual's daily risk mitigation. This has already begun in counties that have installed red light cameras at busy intersections. The citizens of war-torn countries will experience far greater consequences than expensive traffic tickets. They may be subject to a perpetual surveillance that combines the inscrutable detail of sports coverage with reality TV's fetishistic fascination with the mundane.

David A. Banks is a Science and Technology Studies M.S./Ph.D. student at Rensselaer Polytechnic Institute. You can follow him on Twitter: @da_banks