(un)mask is a short film about the near future of affective, immaterial labor. Cameras—owned by advertisers and the state—pervade our physical spaces. These cameras are hypersensitive to facial expression data, which corporations and government entities capture and exploit, opening new modes of biopolitical control through a commodity we cannot help but give away.
Drawing on discourses about immaterial labor and on the increasingly sophisticated face-tracking technologies embedded in surveillance systems and in tools that measure the effectiveness of advertising, (un)mask suggests that every facial expression is a valuable piece of data—affective labor that advertisers and government agencies can use to make inferences about us and recommendations to us, algorithmically anticipating our actions and enmeshing themselves ever more deeply in our daily lives. The film asks what avenues of resistance are available to us, and suggests that by over-emoting, flooding facial expression databases with excess affect, we can confuse those aiming to exploit this data and devalue it as a commodity.
About the filmmaker: Zach Kaiser is an Assistant Professor of Graphic Design and Experience Architecture in the Department of Art, Art History, and Design at Michigan State University. A designer and music producer, he earned his MFA from the Dynamic Media Institute at the Massachusetts College of Art and Design in 2013. He has exhibited and lectured both in the U.S. and internationally, including recent appearances at the IMPAKT Festival in Utrecht, The Netherlands, and Relating Systems Thinking and Design 3, in Oslo, Norway. When he’s not worrying about the algorithmic mediation of daily life, Zach can usually be found opining to the nearest passerby on why his hometown of Madison, Wisconsin, is the greatest city in the world. 
Image Credit Miguel Noriega

Two weeks ago Zel McCarthy published a story in Thump about a mysterious infographic that’s been making the rounds lately. The infographic purports to show which drugs are popular at various music festivals by scraping Instagram for references to different drugs. Anyone that knows a thing or two about research design would already raise an eyebrow, but it gets worse. According to McCarthy:
This intentionally opaque study was conducted and assembled by Fractl, a Florida-based content marketing agency that works regularly with DrugAbuse.com. While at first glance that site appears to be a credible resource for those struggling with addiction and abuse issues, it’s actually a redirect for for-profit rehab and addiction centers, mainly the ones that bankroll the site. To help dig deep into the issues of research design, online performativity, and substance use, I sat down over Skype with Ingmar Gorman, a clinical psychologist at the New School for Social Research who was quoted in the Thump article saying that this “study” was not only poorly constructed, it was also indicative of an archaic, “moralistic approach” to substance abuse research. What follows is edited to make us both sound more articulate. You can listen to the whole interview (warts and all) using the SoundCloud embed at the end of the interview. The recording, along with the sound of a computer fan and me saying “uhh” a lot, also includes something I’ll call “bonus content” about a study that used the Watson supercomputer to tell if someone was on psychedelics. Enjoy.

David A Banks: You were interviewed in Thump regarding research done by a treatment center that used Instagram tags to study drug use at festivals, or at least that’s what it billed itself as. Could you start by describing the basics of this study and why it wasn’t the best science it could have been?

Ingmar Gorman: From a methodological standpoint, what this study consisted of was going through a large number of Instagram posts and looking to identify when words associated with substances appear along with the names of a festival or a photo from a festival. And essentially what they did was say, “in X percentage of these posts from this festival, this percentage mentioned this drug.” But now we have to get into the nitty-gritty of it a little bit. They used words like cocaine or marijuana, which clearly mean a drug; however, they also used words that could be potentially more ambiguous. For example, crack could mean someone spoke about cracks in the playa at Burning Man. When you design a study looking at the use of language and these words, yes, you’ll probably get a “hit” —what you’re looking for— but there will also be a substantial number of false positives. The issue with the study, really, if I can go into it, is transparency.
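[Editorial note: to make the false-positive problem concrete, here is a minimal, purely illustrative Python sketch of the kind of keyword co-occurrence counting the infographic appears to rely on. The posts, tag, and terms are invented; this is a guess at the general approach, not Fractl’s actual code or data.]

# Naive co-occurrence counting: a post "mentions" a drug if the festival
# tag and any drug term appear anywhere in its text (substring matching).
DRUG_TERMS = {"molly", "crack", "acid"}

posts = [  # invented examples
    "huge cracks in the playa this year #burningman",
    "crack of dawn set was unreal #burningman",
    "dancing with molly all night #burningman",  # Molly the person? the drug?
]

def count_mentions(posts, festival_tag, terms):
    hits = 0
    for text in posts:
        text = text.lower()
        if festival_tag in text and any(term in text for term in terms):
            hits += 1
    return hits

print(count_mentions(posts, "#burningman", DRUG_TERMS), "of", len(posts), "posts 'mention' a drug")
# Prints "3 of 3", even though none of these posts clearly describes drug use.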

It’s interesting because it is sort of an example of the democratization of science. Maybe you could think of it this way: do scientists and researchers have to be the only people that produce “scientific knowledge”? I don’t know very much about the background of the people who developed this study because well, it isn’t really available! I think in the Thump interview the reporter was able to contact the person behind the research and ask them questions, but in a peer reviewed publication we would see who the authors are but in this case we don’t even know who the person is. The first thing to do would be to speak to this person and ask what their methodology was.

Depending on how you design your study, the methodology you use, the data you collect, the quality of the data, how you ensure the quality of that data, and most importantly the question that you ask —what is your hypothesis— you set yourself up for a result that you can deduce some sort of understanding from.

The main issue with the study was that, well, all parts of it were poor. The design was poor, the data quality was poor, and we have no idea of the quality checks that were in there, so it’s not that we can’t draw conclusions, it’s that we don’t really know what it means! The best conclusion that we could make is that these words coincided with these festivals.

DAB: Some of the other words that struck me as, at the very best, ambiguous were “coke”, “spice”, “pills”, “yellow jackets”, “white girl”, and references to prescription drugs that could be totally legal! These could be prescribed to these people, and no one should have to defend an Instagram photo of themselves taking their medication.

IG: I could see someone responding to this, to play devil’s advocate here, saying “oh c’mon guys, we know when people use these words they are talking about drugs there’s no need to make excuses about it.” And we could concede to that sort of argument and say yeah let’s take [this study] at face value and everything is completely accurate, [but] the next issue is that —and this is where things get very tricky and very clever— this does not translate into behavior. The data showed something like 3% of Instagram messages had mentioned Burning Man and crack cocaine. So what does that mean? Do 3% of festival goers use crack at Burning Man? That’s highly unlikely. But we can’t know [from the data in this study].

The clever piece about this is that the clinic never made that claim. So, if you had a peer-reviewed journal and you made a statement that said, “this is the data we collected and this is the conclusion we’re drawing: 3% of Burning Man attendees take crack,” we could argue against that. But what this group did, which is interesting, is they said “we’re just going to look at this data,” which we might call a convenience sample, “and we’ll just make an infographic.” Which wasn’t really all that obfuscating, but then they ask you to draw your own conclusions. So what happened was the EDM festival community of websites picked up on this and started spreading these infographics around. And then all sorts of claims are drawn from these: “people use these sorts of drugs at these festivals.” Which is a logical leap; there’s no indication of that.

DAB: Do you know of any way that social media could be used to study these sorts of claims, or do you think that this might be a fool’s errand: trying to attach what we say online to actual action?

IG: You might know more about this than I do but the first thing that pops into my mind is “how we present ourselves online, is that necessarily accurate of who we are and how we actually behave?” I think you and probably most people would argue that we present a persona, so that is an issue in interpreting data like this. The other question about whether this is a fool’s errand: you know, no. On some level I would even applaud this group who did this study… I even hesitate to call this a study because I don’t even know if that’s what they would call it, but the people that collected this data and presented it— I applaud them for using a novel method. But I think there’s a little aspect of it that’s disingenuous when— we have science and we present our methodologies and look for controls and confounds so that… we’ll never get a 100% accurate, objective picture of reality but we’re trying to sort of do the best we can.

DAB: I think that while everything we’ve already discussed definitely indicates problems with this study in particular, the beginnings of the privatization of social data in science more generally are also at play here. Would you say that this study starts to reify, or make stronger, our long-standing beliefs in what you described in that Thump article as the moralistic approach to treating issues related to substance use?

IG: So is there a connection between my statement about the moralistic approach and the privatization of data?

DAB: The people who hold the keys to this data, and who structure the data that is already available to us, are probably not as versed as you are in what it means to do a good study in drug research. So then, is all of this data in the wrong hands? Is it fixable? Can this data ever be used for good research in your field?

IG: There are several things that come to my mind. First of all, the data that they accessed in this study, this project that they did, was available from people’s feeds.

DAB: They used the Instagram API, which almost anyone can get. In their very tiny methodology section they said that they collected all of that data over March 2015 using the Instagram API. [Editorial Note: while I said “anyone can get” it is also totally within the discretion of parent company Facebook to withdraw a person or organization’s access to the API for any reason.]

IG: Right, so I want to be fair to that group and not misrepresent what they did, however clearly there is data privatization that exists which is an issue! So the question is an interesting one. Yeah, how you execute a proper study is important. But also the deeper question is “how do the questions that we pose reflect our biases?” When I spoke [in the Thump article] about the moralistic approach to substance abuse treatment, that was a response to a statement made by the group that generated the data for this project.  The article reads, “One of the report’s authors, Michael Genevieve, tells THUMP the study was conducted with the intention of ‘[raising] enough awareness to scare readers into a sober festival experience, in fear of being arrested.’” My response was that that quote itself represents a moralistic approach to substance abuse treatment.

Now the reason why I said that –and this really goes into an area of substance use that’s outside of the social media question– is that historically the early perspective on why people misuse and become dependent on substances was that they lacked moral character. “You are a bad person. There is something wrong with you. You are sinful. Therefore you are weak and you engage in these behaviors because you lack self control.” The next movement was the disease model, which is still prevalent and is difficult to unpack. Some people have a very strict, narrow scope of the disease model, which is, “Substance addiction is a disease, an illness, you have it for your entire life, there’s nothing you can do, you’re allergic to alcohol [for example] and you can never touch it again.”

What is beginning to come into our conversation is a kind of model of self-medication, a model of harm reduction, which is an idea that encompasses the bio-psycho-social perspective. Yes, there are biological components, so it takes that piece of the disease model, but it also has to do with a person’s psychology. Meaning, the way they see the world, the way they view themselves, histories of trauma, things like that. And then the sociological, which is the broader culture that perpetuates use or the context for the person’s use.

So when I talk about this “moralistic approach” to say “oh well we’re just scaring people” essentially what this person is communicating is “we looked at Instagram, we can associate festivals with drug use, therefore if you’re posting about the drugs that you’re using or you’re using drugs at these festivals, we know about it and you better not do it otherwise you’ll get thrown into jail.” It’s a fear tactic. It is unfortunately a dominant perspective in this country about why people use drugs and how people who use drugs should be treated, but in my opinion it is an archaic perspective that will be replaced by the newer perspectives on substance use. That’s why I made that statement.

DAB: I would like you to go a little deeper into why this new bio-psycho-social perspective might be said to be better on lots of different axes. It could be more efficacious in helping people lead lives that allow them to flourish, or it could be a better explanatory model for why people engage in drug use at all. Could you do a little more unpacking on why it’s a better model? Then —given what we discussed earlier about how a lot of social media is performative— could you say whether there is any compatibility in doing better work in the bio-psycho-social perspective using big data analytics?

IG: Our approach to data analysis, and our approach to asking questions, whether it is big data or [conventional] scientific research, reflects our biases, and that is something we have to own. And if individuals who are responsible for data analysis and big data have a moralistic attitude towards substance use, then they will very likely find what they are looking for because of how they structured their questions or analysis. What’s almost more important than developing a better study is having a different understanding of substance use and why various people use. An issue in this country is that, for people outside of the mental health field, the dominant cultural perspective of why people use drugs —whether they have a problem or not— is set in this moralistic “you have a weak character” kind of approach. So that’s where I see those two things coming together.

Moving to your first question, the bio-psycho-social model has been around for quite a while, a few decades at least, so that’s not necessarily new. What I believe is newer (and, to be transparent, I am a big fan of this; it is one of the models I use when I do psychotherapy as a clinician) is called the harm reduction psychotherapy model.

You’ll be familiar with harm reduction in terms of needle exchanges and safe sex education; there’s a whole host of things you can do. Harm reduction psychotherapy applies that perspective to the psychotherapeutic process in terms of dealing with substance use. And what I think is more novel, and what is gaining steam, is this idea that people who use and have a problematic relationship with a drug are doing it because it works for them and it helps them, or it did at one point and now it is a sort of residual behavior that is difficult for them to let go of. That idea kind of blows people’s minds.

By looking at the bio-psycho-social model through the harm reduction lens you can say, “there are reasons and motivations for substance use.” It’s fantastic because it includes everything. Biology is essential. I’m a materialist. I think some experiences which are hard to pinpoint in biology are important, ‘meaning’ is really important, but I’m a materialist and everything as far as I know is in the brain or somewhere in the body. That is essential, especially when you’re altering your brain chemistry with a substance; that’s very biological.

The psychological piece, to speak more to that: the co-morbidity among substance use issues, what we call personality disorders, and trauma is enormous. Studies vary but I’ve heard something between 30 and 60 percent overlap [among these categories]. And this bears out. I was recently at a psychiatric emergency room and, this is anecdotal, but I’d say 95% of cases that were there at that moment all had a history of substance use and mental illness. There’s a lot of crossover, so understanding psychology is really important.

And then, the most important thing to talk about from a socio-cultural perspective is the incarceration rates of black individuals for marijuana and other drug-related crimes. That’s the critical perspective. However, from the perspective of clinicians, [we ask ourselves] why do we not think twice about alcohol and caffeine ––or at least think minimally about them–– and why are there minimal consequences for using these drugs? That’s a cultural context.

So this model is important for research because we are looking at causes and roots, but it’s also important in terms of treatment. What we look at in a clinical, psychological context is how a person derives meaning and understands their behavior. You need that insight, but there’s more than that. If someone comes from a disease model and I ask them “what’s behind this problematic, repeated use?” they’ll say “well, I’m an addict.” That closes off all exploration. No, you’re not just an addict! You’re a father, you have had a difficult childhood, you have issues becoming employed because of your criminal history; there’s so much there that makes someone depressed or upset and that will drive them to their use. So what’s really important in this model is understanding the complexity that exists in the person.

DAB: I think we can leave it at that. Thanks so much for doing this.

IG: This has been really great, and thanks for having me.


Fox has decided to renew The X-Files, a series that aired its last episode over thirteen years ago, with a “six-episode event series” that begins this January. I don’t know what an “event series” is, but I’m pretty excited. Of course, there are a lot of new things to distrust the government about, so one has to wonder: from the burning temperature of jet fuel to the Facebook algorithm, what will the writers decide to focus on? I couldn’t help myself and made a listicle.

Aliens are Still Around, Kinda

I’m gonna go ahead and say that alien abductions don’t quite capture the public imagination like they did in the 90s. The reason for this is probably over-determined, but making an educated guess as to why greyskins levitating someone off their bed and out the window went from terrifying to hackneyed would help us know what to avoid while making a compelling and interesting alien sub-plot.

Perhaps the uniformity of alien abductions makes them no longer eligible for Quality Television. Maybe it was the very first episode of South Park that signaled that it was a predictable trope. In any case, I think the classic bright lights, big eyes sort of alien abduction could make a comeback if it were shot immaculately and had some sort of new spin on it. There needs to be a new and compelling reason for the abductions. You could also reimagine the sequence so that the victim isn’t always walking through the forest or sleeping in bed. I think the V/H/S/2 (2013) alien kidnapping is the way to go here.

UFO sightings are harder to make compelling for many reasons that I’ll get into later, but one of the biggest hurdles is that the ability to capture so much has diluted the market in unexplainable events caught on tape. There are tons of “Best UFO Sightings 2013 Compilation” videos, but so many of them are well-made yet obvious fakes. What would it take to convince Mulder and Scully to investigate one of the hundreds of videos uploaded every year? Perhaps the conspicuous absence of video (nine minutes perhaps) would be more compelling than capturing what looks to be a flying saucer. Proof of aliens won’t be shocking and well-documented alien abductions; it’ll just be creepy holes in the digital record.

Slenderman Episode

Slenderman is the closest the Internet has to a legendary folk creature, so naturally it would make sense that someone would open an X-File on it. Because Slenderman is from and of the Internet, so much about him is out of focus, glitchy, and full of static. A Slenderman episode might be a great opportunity to push the genre and, like The X-Files did many times in its later seasons, bring in a guest director to do a feature episode. Bringing in Paranormal Activity director Oren Peli to do a found footage episode would be pretty fun. It’d have to be at least as good as X-Cops.

Slenderman is also suburban: hanging out in municipal parks, cul-de-sacs, and wherever bored teenagers can film their tallest friend in a suit with a sock on his head. It seems like a really appropriate story line given that The X-Files always played off of the same distinctly 90s paranormal of the mundane that also fueled shows like Unsolved Mysteries and Sightings. Imagine an episode where kids are filming a Slenderman video but actually capture something unexplainable. They’d bring in the Lone Gunmen crew and dissect the video. Fun!

A Trip to a Data Center

“Scully, why is it that we don’t have a problem imagining a haunted Victorian mansion but these clean, modern buildings seem somehow immune to the supernatural?”

“I don’t know Mulder. Maybe because ghosts are manifestations of complex anxieties that don’t have a locatable subje–“

“Look, all ghost hunters agree that paranormal phenomena feed off of electrical energy. The largest server farm east of the Mississippi is practically an all-you-can-eat buffet of EM waves. Think of it Scully, they’re probably getting second desserts.”

“I just think there’s a rational explanation for why photos of a dead girl are mistakenly showing up on other people’s profiles.”

Etc. Etc.

Fox Mulder Tries Tinder

This doesn’t have to be the plot of an entire episode. But I think we can get a solid ten minutes of Emmy-nominated air time on this subject.

Capturing Creepy Stuff on Camera is Harder, Not Easier.

I really, really need a scene where Mulder is seen standing in line at a pharmacy, disposable camera in hand, waiting for someone to actually come over to the photo center. Maybe a nice old lady would walk up to him and say something about how she still likes to make photo albums and Mulder will say, “Yeah, this is the only way I can seem to hold onto photos.”

Nearly all photographs are now taken on phones, and those phones have internet connections. Mulder and Scully don’t have more tools at their disposal for capturing proof. They have fewer. Not only are we more skeptical of what we see on video, there are also plenty of opportunities for the government to copy, monitor, and delete any photo available to the network. It would be a shame if they ignored this really complicated and relevant topic with an “I use Tor” throw-away line.

The X-Files had already run out of steam by 2001, but one of the final nails in the coffin was the nationalism immediately after 9/11. Suddenly, stories about government cover-ups and shadowy back-door deals were either in poor taste or too real to be entertaining. Today we might say the same is still true –perhaps even more true than ever before– but maybe that’s exactly why we need Mulder and Scully again.

Photo Credit: Bill Dickinson

Science, to borrow a phrase from Steven Shapin, is a social process that is “produced by people with bodies, situated in time, space, culture, and society, and struggling for credibility and authority.” This simple fact is difficult to remember in the face of intricate computer generated images and declarative statements in credible publications. Science may produce some of the most accurate and useful descriptions of the world but that does not make it an unmediated window onto reality.

Facebook’s latest published study, claiming that personal choice is more to blame for filter bubbles than its own algorithm, is a stark reminder that science is a deeply human enterprise. Not only does the study contain significant methodological problems, its conclusions run counter to its actual findings. Criticisms of the study and of media accounts of the study have already been expertly executed by Zeynep Tufekci, Nathan Jurgenson, and Christian Sandvig, and I won’t repeat them. Instead I’d like to do a quick review of what the social sciences know about the practice of science, how the institutions of science behave, and how both intersect with social power, class, race, and gender. After reviewing the literature we might also be able to ask how the study of science could have improved Facebook’s research.

This sort of work has been done under a number of names, including social studies of science, science studies, science and technology studies, sociology of knowledge, and science, technology, and society. The banners that individual researchers march under are less important than the approaches and perspectives each takes, so instead of concentrating on the changing names for this sort of work, I’ll focus on what aspect of science these authors thought was most important to study.

The scientific method itself was born out of a debate between Thomas Hobbes (author of Leviathan, best known for the “states exist so we don’t immediately kill one another” hypothesis) and Robert Boyle (inventor of the air pump and widely considered a founder of modern chemistry). The two argued vigorously over whether or not you could see something and declare it as fact (Boyle), or whether one had to understand underlying causes before contributing to natural philosophy (Hobbes). Whereas Boyle was willing to separate facts from causes ––birds die when you put them in a vacuum; exactly why was a mystery–– Hobbes saw this as sloppy philosophy. One had to build an argument from the ground up, starting with the cause (which may have been grounded in what would today be called “social” or “political” reasons) and ending with the observable phenomenon.

The division between Hobbes and Boyle (catalogued in Shapin and Schaffer’s Leviathan and the Air-Pump) represents, in microcosm, the modern Western worldview we have today: politics says what should be and science says what is. But science, whether it is making nuclear bombs or social media platforms, often works in the service of politics or becomes the center of political debates. You can’t neatly separate the two. It’s no coincidence, for example, that Newton’s calculus is particularly helpful at calculating cannon ball trajectories and that statistical methods are particularly well-attuned to helping a few people make definitive claims about lots of people.

Prominent classical sociologists like the Frenchman Émile Durkheim and the German Max Weber wrote about science; the former recognized that just because something is socially derived does not make it any less objective, and the latter noticed that science is a vocation and, like all vocations, demands loyalties to certain practices and people that are based on social considerations. By the 1930s social scientists were dedicating their entire careers to studying science. Robert Merton and Bernard Barber were some of the first sociologists of science. They saw the rise of authoritarianism in Europe as a threat to science and set out to show that science was inherently democratic and therefore good for democracy. Their work was concerned mainly with the practice of science and rarely made claims about the nature of facts and claims. They studied how scientific communities formed, how they rewarded desirable behavior, and the ways science was internally stratified by rank and prestige.

Starting in the 40s with Michael Polanyi and picking up in the 60s with the publication of Thomas Kuhn’s The Structure of Scientific Revolutions, science studies began to expand and make observations about the content of scientific knowledge as well. Polanyi argued that underlying claims to knowledge are personal and collective convictions, and that science does itself a disservice to ignore such preconditions. If the sort of free debate that is necessary for a healthy scientific community is to occur, values must be stated plainly. To claim neutrality, Polanyi argued, was to hide your values. Kuhn makes a similar, but more systemic, argument that all science happens within a paradigm. A paradigm is analogous to what sociologists call a social world: a set of practices, widely held ideas, values, and languages that mark a particular place and time.

According to Kuhn, most of the history of science can be described as punctuated equilibrium. Scientists live within a certain paradigm and do work based on that paradigm until something really big happens that threatens the dominant paradigm. The go-to example is the “Copernican Revolution,” but not for the reason most people think. The story that is popularly told is that Copernicus “discovered” that the Earth revolved around the sun, not the other way around, but it is more accurate to say he re-discovered this fact. It was understood by at least some Ancient Greek astronomers that the sun was the center of the solar system, but that “fact” fell out of commonly-accepted natural philosophy for over a thousand years.

Kuhn would say that this demonstrates that science is not a linear progression of ever-increasing understanding; rather, it is a practice that generally works within incrementally changing social worlds until something big happens that causes a revolution into a radically different one. The important thing to note here is that the “something big” need not be a scientific discovery or breakthrough. It can be a war, a new tool coming to market, or (and Kuhn says this is usually the case) the death of a prominent member of a scientific community. If that member’s “rivals” are able to take up positions as department chairs and journal editors, entire disciplines can change dramatically. Of course the ability to gain prestige in the field is based in no small part on the ability to do science, but it is far from a pure meritocracy.

From the 60s to the 80s social scientists were largely preoccupied with describing exactly what contributed to success in science beyond the merit of work. Or, to put it more precisely, social scientists set out to understand how, what, and who was deemed meritorious and worthy of praise within scientific communities.

Early work in this field falls under the large banner of “social constructionism.” Radical social constructionists say that all claims to knowledge are power moves, not efforts towards truth or understanding. More moderate social constructionists only contend that scientific theories should at least be subjected to the same sociological analysis, whether they end up being “proven” true or false. This still means that a social constructionist analysis would never ascribe the success of a theory to its (to use a Colbertism) “truthiness.” Instead, the success or failure of a scientific program or theory comes from its ability to do useful work or from how well it is suited to confirming the beliefs of powerful actors. At the center of the social constructionist approach are authors like David Bloor, Barry Barnes, and Donald A. MacKenzie. They and others are usually referred to as the Strong Programme in STS.

Shapin and Schaffer describe the social constructionist approach as “playing the stranger.” They write in the introduction to their history of Hobbes and Boyle that they sought to “adopt a calculated and an informed suspension of our taken-for-granted perceptions of experimental practice and its products. By playing the stranger we hope to move away from self-evidence.” They try to understand why someone might disagree with the scientific method, especially at a point in history when it was far from clear that Boyle would win that particular controversy.

Social constructionist analyses tend to focus on the role of individual agency in effecting change in scientific research programs but, as Daniel Kleinman has shown [paywall], the institutional structure of science can have a big influence on research practice as well. The standardization of lab equipment, for example, has a constraining influence on the variety of scientific research. Widely available equipment and chemicals “are created to suit a wide market of laboratories, not the local needs of individual labs.” Specialized research isn’t just a matter of modifying those widely available chemicals or tools, especially if they are covered under intellectual property laws. Standardized, proprietary equipment can make private companies indispensable to entire sub-disciplines. It can also mean the replicability of a study is directly tied to the business decisions of private firms.

Aside from social constructionism, another large branch of science studies comes out of critical feminist studies. Feminists focus on the ways androcentric views of the world are embedded in scientific accounts of nature and scientific practice. Donna Haraway, Sandra Harding, bell hooks, Patricia Hill Collins, Karen Barad, Sharon Traweek, Evelyn Fox Keller, Linda Layne, and many more authors have contributed extensive research in this field. Everything from the time to full professor (surprise, it takes much longer for women) to the tendency to ascribe the features of patriarchal white middle class family structures onto animal communities bends science toward male supremacy and away from other (and perhaps one could even say more empirically accurate) views of the world.

There are lots of examples here but Haraway’s concept of Teddy Bear Patriarchy is one of my favorites so I will use that as my example. * Haraway, in tracing a genealogy of primatology and natural history more generally in her book Primate Visions, notes that early naturalists and conservationists’ practices form the ideological bedrock for how present-day scientists go about cataloging and understanding the world. She points to the dioramas in the American Museum of Natural History in New York, some of which date back to Theodore Roosevelt’s efforts at nature conservancy, as the epitome of patriarchy’s counter-intuitive logic: taxidermy –the active hunting and killing of animals so that their skins may be presented in a museum– is somehow held up as a window onto life. She writes:

This is the effective truth of manhood, the state conferred on the visitor who successfully passes through the trial of the Museum. The body can be transcended. This is the lesson Simone de Beauvoir so painfully remembered in The Second Sex; man is the sex which risks life and in so doing, achieves his existence. In the upside down world of Teddy Bear Patriarchy, it is in the craft of killing that life is constructed, not in the accident of personal, material birth.

Teddy Bear Patriarchy can hide in plain sight thanks to what Donna Haraway, in her essay Situated Knowledges, calls the “god-trick.” The god-trick is “seeing everything from nowhere.” It is the illusion of an all-seeing eye that doesn’t just passively view natural phenomena but also “fucks the world to make techno-monsters.”** Instead of a disembodied and universalistic approach to understanding nature, feminists like Haraway argue for a “feminist objectivity” that situates knowledge in particular bodies and treats objects of study “as an actor and agent, not a screen or a ground or a resource.”

What might feminist objectivity look like in practice? How would the Facebook echo chamber report look if its authors had used feminist research practices? It is hard to tell because so much of Facebook is built upon a decidedly anti-feminist approach that capitalizes on the view from nowhere. But, if we were to start incorporating feminist epistemologies into Facebook research, a good place to start would be adopting what Sandra Harding and others have called “standpoint epistemology.” Standpoint epistemology is a way of reporting scientific findings without relying on the god-trick to assert legitimacy.

Facebook researchers using a standpoint epistemology would first recognize that their position of power in relation to users, not to mention their clear biases in showing that the algorithm is benevolent if not agnostic, has profound impacts on their results. If they still wanted to conduct the study they may mitigate their own biases by selecting users to review the data as well. They might pair their quantitative data with qualitative accounts and personal stories about avoiding or seeking out opposing viewpoints.

Most importantly though, a standpoint epistemology would recognize that different people avoid or seek out viewpoints for different reasons and that not all echo chambers are created equal. A group of people intensely sharing news that confirms anti-choice legislation is not the same thing as a group of people sharing stories about the survival of trans people in the rural South. The latter acts as a safe space in a largely hostile world, whereas the former is a means of distilling a hegemonic discourse.

Truly interesting questions arise when we think about how algorithms themselves may benefit from science studies in general and standpoint epistemologies in particular. Could algorithms help control triggering content, or might the formation and shaping of the algorithm become part of the daily practice of Facebook? It seems that having a single algorithm runs counter to the very basic precepts of standpoint epistemology in the first place. Perhaps the first step in a more feminist direction would be to acknowledge the obvious: that the Facebook algorithm is a product made for certain ends, and to embrace that fact in future scientific reports.

David is on Twitter and Tumblr.

*It is worth noting, reflexively, that Pierre Bourdieu (who founded what he called reflexive sociology, wherein the sociologist recognizes and announces his or her sociological positioning) identified summaries and textbooks as inherently political documents wherein the author defines a discipline by selecting its constitutive authors. That’s totally what I’m doing.

**Not completely sure what this means but see asterisk above. An interpretation of this quote might suggest that the view from nowhere gives scientists ideological cover when building or contributing to weapons development or other destructive technologies. Scientists can say that their work is value-neutral and it is the decisions of politicians that actually produce deadly results.

Works Cited:

  • Bauchspies, Wenda K, Jennifer Croissant, and Sal Restivo. 2005. Science, Technology, and Society: A Sociological Approach. 1st ed. Wiley-Blackwell.
  • Bloor, D. 2007. “Ideals and Monisms: Recent Criticisms of the Strong Programme in the Sociology of Knowledge.” Studies In History and Philosophy of Science Part A 38 (1): 210–34. doi:10.1016/j.shpsa.2006.12.003.
  • Haraway, D. 1988. “Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective.” Feminist Studies 14 (3): 575–99.
  • Haraway, Donna J. 1990. Primate Visions: Gender, Race, and Nature in the World of Modern Science. Reprint. Routledge.
  • Harding, Sandra, ed. 2003. The Feminist Standpoint Theory Reader: Intellectual and Political Controversies. 1st ed. New York: Routledge.
  • jurgenson, nathan. 2015. “Facebook: Fair and Balanced.” Cyborgology. May 7. https://thesocietypages.org/cyborgology/2015/05/07/facebook-fair-and-balanced/.
  • Kleinman, Daniel L. 1998. “Untangling Context: Understanding a University Laboratory in the Commercial World.” Science, Technology & Human Values 23 (3): 285–314. doi:10.1177/016224399802300302.
  • Kuhn, Thomas S. 1996. The Structure of Scientific Revolutions. 3rd ed. University of Chicago Press.
  • Merton, Robert K. 1979. The Sociology of Science: Theoretical and Empirical Investigations. University of Chicago Press.
  • Sandvig, Christian. 2015. “The Facebook ‘It’s Not Our Fault’ Study.” Social Media Collective. May 7. http://socialmediacollective.org/2015/05/07/the-facebook-its-not-our-fault-study/.
  • Shapin, Steven, Simon Schaffer, and Thomas Hobbes. 1985. Leviathan and the Air-Pump: Hobbes, Boyle, and the Experimental Life : Including a Translation of Thomas Hobbes, Dialogus Physicus de Natura Aeris by Simon Schaffer. Princeton, N.J.: Princeton University Press.
  • Tufekci, Zeynep. 2015. “How Facebook’s Algorithm Suppresses Content Diversity (Modestly) & How the Newsfeed Rules the Clicks.” The Message. May 7. https://medium.com/message/how-facebook-s-algorithm-suppresses-content-diversity-modestly-how-the-newsfeed-rules-the-clicks-b5f8a4bb7bab.

 

Cartoon by Matt Lubchansky (@lubchansky) original posting can be found here.

A few editorial cartoons offering a counterpoint perspective to the cultural sentiments and media portrayals that denounce the Baltimore “riots” as politically unproductive, ethically unjustifiable hooliganism have achieved viral status. One particularly prominent cartoon illustrates alternative histories in which once-denied freedoms and equities were achieved without systemically disruptive uprisings (see image above). In one panel an 18th-century Haitian slave cordially informs a French imperialist that he and his fellow slaves would rather be free. The receptive overseer, responding in an equally kind fashion, decides to abolish the system of slavery that legitimizes his very authority. In another panel an 18th-century French revolutionary asks King Louis XVI to abdicate his power as well as dissolve the monarchy to make way for democratic rule and, as in the previous example, history is comically rewritten to suggest that the powers that be were enthusiastically and progressively responsive to such a request.

These cartoons are funny because they reveal an absurd distance between the historical actualities and their reimagined counterparts—a distance which further reveals the absurdity of expecting that the existing authority will readily relinquish oppressive, self-enriching policies of inequality for the betterment of others and society as a whole.

The cartoons communicate a series of specific messages. For most observers these cartoons first and foremost communicate the naivety of assuming that progressive change of any social system is possible without some hostile disruption to said system’s order. But there is, perhaps, a more important message embodied by these comic narratives. The logic of these cartoons suggests something about the nature of communication: the apparent clarity and reasonableness of a given utterance (or attempt at communication) does not necessarily elicit a systemic response. To better understand this presently vague stipulation, let us turn to some theoretical insights about social systemic communication.

Niklas Luhmann suggested that the social system is, first and foremost, a system of communications. To understand how a social systemic order emerges, persists, and changes, then, we must consider how communications emerge, persist, and change. Looking again to the cartoons, we may recognize that these reimagined circumstances present the reader with what are, in practice, comically ineffective means for communicating social grievances. Systems theorists could say that such utterances never become communicative acts, never emerge from noise, and thereby fail to elicit a meaningful response.

With these stipulations in mind, we may ask the questions: Who gets to participate in systemic communication? Whose communications make a meaningful difference to the systemic order? The utterances of Haitian slaves, French revolutionaries, and present-day black citizens of Baltimore emerged in their respective times without meaningful reception and response. The prevailing social systems relegated their voices of dissent and discontent to meaningless noise. Systemic disruption, however, serves as a means by which utterances may effectively emerge from systemic noise. When an utterance takes the form of flipping cars, breaking windows, and impassioned shouts, it is increasingly difficult to avoid giving said utterance your full attention.

The violence perpetrated by such disruptions is not irrational or simply aberrant; rather, it demands the ultimate acts of reason-centered discourse: the emergence of dialogue and mutual recognition. The irony of discourses that fail to understand the sensibility behind the Baltimore uprisings is that they often appeal to Martin Luther King Jr.’s teachings of nonviolence to justify their reservations. Yet the Green Party’s 2012 presidential nominee, Jill Stein, suggests on Facebook that alluding to MLK, Jr. is wholly inappropriate for such discourses. She concludes with his famous quote: “A riot is the language of the unheard.” In line with this logic, we may acknowledge that Baltimore’s turmoil will subside when grievances are addressed meaningfully; that is, when the aggrieved become a part of and meaningfully contribute to the systemic order.

James Chouinard is an independent scholar with a PhD in Sociology.

Photo By Aaron Thompson

I was happy to see Theorizing the Web go so well for so many people. The committee has been getting a lot of positive and constructive feedback and we’re reading all of it. If you feel so moved to write your own reflections on #TtW15 please send them our way. Last year, my post-conference thoughts were all about labor and the dangers of doing what you love. That’s still a problem ––TtW relies almost completely on volunteer labor–– but this year I’m thinking more about the institutions that prop up the typical Hilton-hosted conference model and make it difficult, if not financially impossible, to have more events like Theorizing the Web.

Our pay-what-you-want donation model reflects our commitment to lowering the barriers to participation in theorizing, but it is also indicative of all the little mundane ways that traditional academic institutions and TtW are not compatible. Most institutions have a little bit of money put aside for travel and conference costs, but need proof that the conference asks for a certain amount of money before funds can be approved. Without a posted conference fee many of our attendees (me included) have no mechanism by which to expense the registration cost and thus end up paying their donation out-of-pocket.  By choosing to not demand money from all attendees, we forgo the kind of money that average disciplinary associations receive every year.

Because we work out of unusual places we usually have to install a large, but temporary, internet connection. Internet service providers rarely, if ever, have someone asking to buy a single month’s worth of their biggest commercial line. We’ve tried to buy less (We only use it for one weekend!) but companies selling lots of bandwidth only work in monthly increments. I’m pretty sure we have become notorious to Time Warner Cable installation crews too—as we typically run multiple lines in the same building. Nothing about the conference fits neatly into the forms and procedures of a typical cable customer.

There are really important reasons why, for the last two years, we have opted to put Theorizing the Web in unorthodox locations. The early American university was purposefully isolated: it gave students a space where they were safe to make radical claims and were free from distractions so that they could get work done. If anything has been held over from that early time, it is the isolation that campuses breed. There’s a time and a place for that and TtW isn’t it. We want people to come off of the street, register really fast, and check out what’s happening. Few venues make this serendipity possible, but it is an important feature that we look for in selecting a venue. In the last two years we’ve had lots of people come in, listen for a few minutes and leave. Some stay. The ones that stick around are tremendously important to us.

All of this is to say that Theorizing the Web could actually be even cheaper than it already is, if we weren’t such an anomaly to venues, Internet Service Providers, and employers. TtW comes up against many of the actual and immediate material costs of constructing alternative institutions. There’s nothing quite like TtW but we hope that changes soon. We can’t meet the demand for this sort of project and if there were more organizations like TtW it might be easier for all of us. Until then, feel free to send us your suggestions and recommendations for making TtW16 even better: http://theorizingtheweb.tumblr.com/ask

 David is on Twitter
Image Credit

The 2016 presidential race has already started and it’s easy to get caught up in the horserace and forget about all of the technologies and tactics that campaigns employ to get their message out. The 2008 Obama campaign was the first to take full advantage of social media and eight years later these tactics seem to have become the new normal. It is now possible to deliver precisely tailored messages for key demographics and even individuals. American presidential campaigns have never been models of democracy but with the help of private databases and corporate collusion, the 2016 presidential race is shaping up to be a very murky process.

What deserves our immediate attention is what Zeynep Tufekci, in a 2014 First Monday article, calls computational politics. “Internet’s propensity for citizen empowerment is neither unidirectional, nor straightforward,” warns Tufekci. “The same digital technologies have also given rise to a data–analytic environment that favors the powerful, data–rich incumbents, and the technologically adept, especially in the context of political campaigns.” The use of big data “for conducting outreach, persuasion and mobilization in the service of electing, furthering or opposing a candidate, a policy or legislation” is computational politics.

In the past, campaigns relied on general relationships between demographics and voter behavior and reception to key messaging tactics. Today, instead of focusing on “white males over the age of 65,” a campaign message can be tailored specifically to an individual based on a natural language analysis of Facebook status updates, purchasing history, and other kinds of data. This means that not only are campaigns going to get much more personally persuasive, they are, somewhat paradoxically, going to get much more fractured. The same candidate can present themselves as a job-creating capitalist to your slightly left-of-center dad while touting tough climate change legislation to the college student who has Greenpeace “liked” on Facebook.
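To make the fracturing concrete, here is a toy Python sketch (not any campaign’s actual system; the interest scores and message copy are invented) of how one candidate can be pitched differently to each voter once an inferred-interest profile is attached to their record:

MESSAGES = {  # invented message variants
    "economy": "Candidate X: a proven job creator who will cut red tape.",
    "climate": "Candidate X: the only candidate with a serious climate plan.",
}

def pick_message(profile):
    """Return the variant matching the voter's highest-scoring inferred
    interest; in practice the scores would be modeled from likes, posts,
    and purchase histories."""
    top_interest = max(profile, key=profile.get)
    return MESSAGES.get(top_interest, "Candidate X: leadership you can trust.")

dad = {"economy": 0.9, "climate": 0.2}       # slightly left-of-center dad
student = {"economy": 0.1, "climate": 0.8}   # Greenpeace-liking student

print(pick_message(dad))      # economy pitch
print(pick_message(student))  # climate pitch

The same code path, fed different inferred profiles, yields messages that never have to be reconciled with one another in public.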

One might be tempted to see this as a more powerful version of what Herman and Chomsky called the “Propaganda Model” in their famous book Manufacturing Consent. Herman and Chomsky argue that instead of being skeptical and adversarial opponents to powerful interests, the media are actually complicit in framing and shaping the news in such a way that it legitimizes the state’s use of violence. “It is much more difficult,” they write, “to see a propaganda system at work where the media are private and formal censorship is absent.”

While this still holds true, social media doesn’t just broadcast information from a few people to a larger audience; it also shapes and mediates our conversations with each other. Tufekci cites studies that concluded Facebook could increase voter turnout –to a statistically significant degree– by showing users a list of their friends who had voted followed by a “go vote!” message. It doesn’t take much imagination to think of a scenario where a similar method is put up for sale as a way to selectively get out the vote for a particular candidate or party.

Even when the candidates don’t have algorithms to hide behind, their teams are adept at controlling the conversation and avoiding important topics. In the debates of the 2012 campaign there was virtually no discussion of drone warfare until the last debate, where Romney and Obama agreed that drones were totally awesome. There was also absolutely no discussion of climate change, even though it has been a major debate topic for decades and the problem has only gotten worse. What sorts of topics can we expect to not hear about this campaign season? Will the candidates take strong stands on killer police, NSA spying, or escalating wars in the Middle East? If there aren’t loud and long debates on these subjects, we should assume they all agree that these are good things worth keeping around. Or, at the very least, that campaign coordinators have all agreed that the politically viable opinions that secure winning percentages of the population have no bearing on sensible and permanent solutions.

Finally, we should be ready to hear a deafening silence when it comes to who actually gets to do the voting once all the campaigning is over. The pool of people allowed to vote gets smaller and smaller every year, thanks to so-called “voter fraud prevention” laws that take wild and wide swipes at voter rolls, knocking out people who should be able to vote. In October of last year, just before the midterm elections, Al-Jazeera reported on a massive voter suppression effort “that threatens a massive purge of voters from the rolls. Millions, especially black, Hispanic and Asian-American voters, are at risk.”

Much of voter suppression, according to the Al-Jazeera report, is accomplished through the Crosscheck Program. Crosscheck is a database that purports to help election officials see if people have voted more than once by comparing names of registered voters in different districts and states. The system spits out so many false positives that its continued use is an implicit admission of guilt. Moreover, it seems the program’s “lists are heavily weighted with names such as Jackson, Garcia, Patel and Kim — ones common among minorities, who vote overwhelmingly Democratic.”
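A minimal Python sketch (with invented names and rolls; this is not Crosscheck’s actual code) shows why matching voter registrations on first and last name alone generates so many false positives:

state_a = [("James", "Jackson", "1970-03-02"), ("Maria", "Garcia", "1988-11-19")]
state_b = [("James", "Jackson", "1954-07-30"), ("Maria", "Garcia", "1988-11-19")]

def naive_matches(roll_a, roll_b):
    # Flag any pair sharing a first and last name, ignoring birth dates
    # and every other field that would distinguish two different people.
    flags = []
    for first_a, last_a, dob_a in roll_a:
        for first_b, last_b, dob_b in roll_b:
            if (first_a, last_a) == (first_b, last_b):
                flags.append((first_a, last_a, dob_a == dob_b))
    return flags

for first, last, same_birth_date in naive_matches(state_a, state_b):
    print(f"{first} {last} flagged; same birth date: {same_birth_date}")

In this toy example James Jackson is flagged as a potential double voter even though the two records plainly belong to different people, and the more common a surname is among minority voters, the more often this kind of collision happens.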

Regardless of your feelings about whether or not –either through design or consequence of external factors– presidential campaigns are effective agents of political change, it is safe to say that many of the databases that are coming online today are not in service of the greater good. They reify power differentials and hide from us candidates’ true intentions and (even worse) our own opinions about those candidates. By taking conversations that should be had in groups and in public, and personalizing them to the point of individual private conversation, we lose a lot of the advantages that come with outnumbering our elected officials. We can’t compare notes or even ask the same questions. We aren’t only battling misinformation; we have to do the hard work of fighting to know what we do not know about our candidates and the issues.

Image Credit: Kris Krüg

When someone starts talking about privacy online, a discussion of encryption is never too far off. Whether it is a relatively old standby like Tor or a much newer and more ambitious effort like Ethereum (more on this later), privacy equals encryption. With the exception of investigative journalism and activist interventions, geeks, hackers, and privacy advocates seem to have nearly universally adopted a “good fences make good neighbors” approach to privacy. This has come at significant costs. The conflation of encryption with privacy mistakes what should be a temporary defensive tactic for a long-term strategy against corporate and government spying. It is time that we discuss a new approach.

The prevailing logic seems sound: runaway government and corporate surveillance is often accomplished through the abuse of pre-existing data or the interception of daily digital life. We may be tracked via geotagged vagueposts about our flaky friends, or Kik messages between activists might be intercepted as they travel from sender to receiver. End-to-end encryption is meant to prevent the latter sort of surveillance and is often compared to a paper security envelope: the network only knows the sender and recipient, and the content of the data is obscured. There are lots of protocols and technologies that provide this kind of encryption, the most prevalent being https, which verifies the identity of the site you are talking to and keeps the digital envelope closed and secure as it traverses the series of tubes. Services like Gmail, Facebook, and Twitter all use https, and you can tell by looking at the address bar in your browser. Chrome even turns it a happy, reassuring shade of green.
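To make the envelope metaphor concrete, here is a minimal sketch using the Python `cryptography` package’s Fernet recipe. It assumes the two parties already share a key, so it skips the key exchange and certificate checking that https actually handles; it only illustrates the “sealed contents, visible addresses” idea:

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Assume sender and recipient already share this key (the hard part in practice).
shared_key = Fernet.generate_key()
envelope = Fernet(shared_key)

# An eavesdropper on the network can see that something traveled from A to B,
# but the contents are opaque ciphertext.
sealed = envelope.encrypt(b"meet at the usual place, 8pm")
print(sealed)                      # gibberish to anyone in the middle
print(envelope.decrypt(sealed))    # only a key-holder can open the envelope
```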

If we were to take the security envelope to its logical conclusion, however, our attitudes toward data security would appear preposterous if not deeply insufficient. If the government were opening our snail mail to the same degree it is vacuuming up our digital communications, private citizens’ first inclination might be to spend the extra money on tamper-evident or tamper-proof envelopes, but would we really continue to innovate primarily in better envelopes? Would we go on to build a private postal service even if we knew that the government had given itself the lawful authority, if not the capacity, to search private as well as publicly conveyed correspondence? Would we continue to pour resources and effort into making a better envelope when the problem was obviously bad government?

There is a sort of digital dualism at play here. While efforts to develop encryption are rarely questioned, I think similar tactics for different problems would be criticized as both a stop-gap measure and overly defensive at a moment that demands an offensive strategy. That is, we should be building the capacity to weather tyranny so that we may fight against it, but creating a new normal of balkanized communication is wrongheaded.

To be clear, I’m not saying we shouldn’t be working on encryption while we fight the good fight against privacy invasion. I just don’t want us to mistake a coping strategy for a solution to a big problem. We tend to confuse advancement in encryption technology with social progress in the fight against government overreach. Even the most politically radical Anon, who most certainly is engaged in offensive strategies in every sense of the word, never seems to wish for the day when all of these good fences become unnecessary.

Ethereum, the latest invention to be touted as “artillery in the running battle between technology and governments” bills itself as nothing less than “web 3.0.” Its creators describe it as a total reorganization of how the Internet is run and how data is stored. Using the spare space on personal hard drives and processors, Ethereum offers encrypted, distributed, public and unalterable transaction records for everything from bank transactions to sexts. That means no one is in control of the system so authorities can’t shut it down, nor can data be disappeared or held in private databases.
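Ethereum’s actual protocol is far more involved (a peer-to-peer network, consensus rules, smart contracts), but the core “unalterable record” idea can be sketched in a few lines: each entry carries a hash of the entry before it, so silently editing history breaks every later link. A toy illustration, not Ethereum’s code:

```python
import hashlib
import json

def make_block(record, prev_hash):
    """Bundle a record with the hash of the previous block, then hash the pair."""
    block = {"record": record, "prev": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("alice pays bob 5", chain[-1]["hash"]))
chain.append(make_block("bob pays carol 2", chain[-1]["hash"]))

# Tampering with an earlier record breaks every later link in the chain.
chain[1]["record"] = "alice pays bob 500"
recomputed = hashlib.sha256(json.dumps(
    {"record": chain[1]["record"], "prev": chain[1]["prev"]},
    sort_keys=True).encode()).hexdigest()
print(recomputed == chain[2]["prev"])  # False: the alteration is detectable
```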

Anything that, by design, hinders the accumulation of power is a good thing in my book. I like that the technology forces a kind of anarchic or rhizomatic politics. I tend to think of horizontal organization as running on interpersonal trust, but hackers don’t seem to see it that way. Even when it comes to advertising its decentralized infrastructure, Ethereum and similarly designed cryptocurrency organizations choose to cast the lack of central authority as “trustless” rather than trustful. It is disturbing to me that a metric for good design is the lack of trust in one another. That is the sort of thinking that got us into this mess in the first place.

Modern hierarchical bureaucracies, as Max Weber observed early in the twentieth century, make it possible to act without interpersonal trust. I know that a doctor I have never met is qualified because she has credentials and licenses from organizations that have knowable and somewhat static requirements. In a sense, we outsource interpersonal trust to large institutions so that we may trust people that we haven’t taken the time to get to know. We might achieve the same thing through aggregating lots of opinions, so long as we trust the aggregator. Once there was even the slightest suspicion that Yelp was in the business of removing bad reviews for a fee, the ratings of independent individuals became suspect.

Instead of handing over our trust to organizations like professional associations, governments, or corporations, hackers would have us move that trust to algorithms, protocols, and blockchains. Of course human organizations can be co-opted and corrupted, but so can algorithms. Coding is just as much a human (and thus social) endeavor as organizing a government or creating a business. But even if technologies weren’t vulnerable to human faults, our problems do not come from organizations and code working incorrectly. Most of them are doing exactly what they are supposed to do: corporations have fiduciary responsibilities to seek profits above all other things, and just as the invention of the train also brings about the invention of the derailment, so too does the invention of the nation state yield war. We don’t need more things that let us go about our lives not trusting. We need to get rid of or deeply reform the institutions that foster distrust and fear.

If we build a world full of trustless technologies, what happens when we feel ready to trust again? Even the anarchists who fought in the Spanish Civil War against Franco organized their militias without officers, salutes, or rank. They recognized that means and ends were deeply intertwined; you don’t get to a vastly better world by reproducing its undesirable elements. I do not know what a more communitarian technology would necessarily look like, but I know we have to start by changing our ethic first.

 David is on Twitter.


I started writing something about funding community media houses using fees extracted from cable companies, something that local governments will have more political leverage to do after this recent FCC ruling, but as I look back at the dissenting opinions from the Republican commissioners, and the palpable fear of claiming anything close to regulation in the final FCC order, I feel pretty deflated. Don’t get me wrong, it’s good that net neutrality was preserved, but we should also call it what it is: holding ground. This wasn’t a step forward; it was a lot of work and campaigning just to keep a not-terrible status quo.

Here are the first two paragraphs of what I was about to write:

I listen to a lot of podcasts. Chances are you listen to a couple as well. You might also subscribe to some YouTube channels or follow a live stream account. Maybe you read a small circulation magazine. There’s certainly been a lot of ink spilled about the democratizing effects of consumer devices that afford all of this new media, and there’s been an equal amount of rigorous research into what sorts of communities they engender: fan groups, social movements, and radical (left and right) political affinities, just to name a few. What we don’t talk about very often are the kinds of organizations that make something like Welcome to Night Vale or The Ideas Channel possible in the first place. One consequence of that is a serious lack of political imagination with regard to what we should be demanding from the governments and corporations that hold the keys to the server cabinets. There are lots of ways to take this, but in light of the recent FCC ruling on Net Neutrality, let’s focus on something that seems eminently possible now that wasn’t before: regional networks owned and operated by the communities they are meant to serve.

As cable companies were carving out their markets, it became commonplace for local governments to negotiate for extra goodies in exchange for a place on the telephone pole. Public access television was often the beneficiary of these deals, but in the last few decades those deals have extracted fewer resources for the public, and public access programming now has to compete with hundreds instead of dozens of channels. Then there’s the Internet. It makes little sense to limit your media to a local TV market when you can quickly and easily post a YouTube video. Middle class people might find it easy enough to make media with the tools available, but we could certainly do more to provide lending libraries for this sort of thing. It might also be nice to rent space in a real sound studio.

That all seems pretty reasonable, right? I was gonna talk about revitalizing libraries as a place to not only read but also “write” media. I had this great idea for an extended metaphor about “gaining write access” to government-funded media, but then I made the mistake of looking over the dissenting opinions for some kind of counter-intuitive, even-the-Republicans-could-agree sort of argument, and the wind went out of my sails. The conversations at the highest level are so cynical that they read as afterthoughts: like they were written long before an actual decision was even reached. For example, here’s a line from the press release describing the official order:

the Order DOES NOT require broadband providers to contribute to the Universal Service Fund under Section 254.

and yet, here is the dissenting opinion from Commissioner Pai:

One avenue for higher bills is the new taxes and fees that will be applied to broadband. Here’s the background. If you look at your phone bill, you’ll see a “Universal Service Fee,” or something like it. These fees—what most Americans would call taxes—are paid by Americans on their telephone service. They funnel about $9 billion each year through the FCC. Consumers haven’t had to pay these taxes on their broadband bills because broadband has never before been a Title II service.
But now it is. And so the Order explicitly opens the door to billions of dollars in new taxes. Indeed, it repeatedly states that it is only deferring a decision on new broadband taxes—not prohibiting them.

Obviously there are a lot of bad-faith arguments happening here. Either the FCC as a whole is trying to deflect everyone’s attention from the possibility of new taxes, or Commissioner Pai is using a tried-and-true mix of slippery-slope scare tactics to make people fear the protection of existing broadband regulation. All of this misses the point that broadband should be paying into the Universal Service Fund. That was the fund that redistributed wealth from people who could afford telephones to people who could not, or who lived in regions where it wasn’t profitable to run telephone lines. It was a tremendous revenue generator, it modernized many regions that would have otherwise been cut off entirely, and in the long run it actually forced Bell to think really long term about infrastructure. This is a good thing that one side is vilifying and the other is desperately, IN ALL CAPS, trying to distance itself from.

So I’m not going to do the thing where I describe some really useful public program with the annual operating budget of a Hellfire missile and watch it just sit there looking politically untenable. I’m really happy for all the people who see this as a huge win, and it is definitely a good thing that came out of tons of tireless work, but it only takes a minor zooming out in scope and time to see that this is a somewhat minor victory. This is making things not actively suck worse. It creates the potential for possibly better ways of doing things, but we need to demand so much more.

source

I have a secret to tell all of you: I kind of don’t care about teaching evolution in science classes. Put another way, I’m less than convinced that most people, having learned the story of species differentiation and adaptation, go on to live fuller and more meaningful lives. In fact, the way we teach evolution, with its ferocious attention to competition and struggle in adverse circumstances, might be detrimental to the encouragement of healthy and happy communities. I also see little reason to trust the medical community writ large, and I cringe when a well-meaning environmentalist describes their reaction to impending climate change by listing all of the light bulbs and battery-powered cars they bought. I suppose, given my cynical outlook, that the cover story of this month’s National Geographic is speaking to me when it asks “Why Do Many Reasonable People Doubt Science?” Good question: what the hell is wrong with me?

Joel Achenbach, the author of the cover story, assumes that most people doubt science because they either do not understand it or find a much more compelling explanation for what they see in the world. Moon landing truthers, anti-vaccination advocates, adherents of intelligent design, and global warming denialists all share misinformation that somehow feels more satisfying because it corroborates foregone conclusions about how the world works. Stanley Kubrick faked the moon landing, for example, because it is easier to believe the government covered something up than accomplished something great. While science literacy goes some way toward explaining why fewer people vaccinate their children and no one cares about the impending heat death of our planet, that is not the only thing going on here.

Science isn’t just a set of facts or a method for arriving at those facts; it’s a collection of institutions, and those institutions haven’t given many people a reason to trust them, let alone go to bat for them when they are embattled. The spoils of science have been severely misallocated, and there is little reason to trust, let alone pay attention to, science experts. Austerity has ravaged health services, making relationships with health professionals few and far between. Industrial disasters seem to be increasing in frequency while major scientific breakthroughs and engineering achievements are reserved for those who can afford them. College is less affordable than ever before. The question should not be “why do many reasonable people doubt science?” It’s the opposite: “why do many reasonable people still believe in science at all?”

Medical science has certainly made lots of breakthroughs, but only a minuscule portion of the global population has benefited from those advances. Climate change might be a looming threat that demands immediate action, but it is hard to care about 50 years from now when you don’t know where tomorrow’s dinner is coming from.

Achenbach chalks this lack of trust up to an internal battle between what seems intuitively real and what science reveals to be fact. He cites a behavioral study which “indicates that as we become scientifically literate, we repress our naive beliefs but never eliminate them entirely.” The example he gives is so telling of his class position that it is worth a long block quote:

Most of us [make sense of the world] by relying on personal experience and anecdotes, on stories rather than statistics. We might get a prostate-specific antigen test, even though it’s no longer generally recommended, because it caught a close friend’s cancer—and we pay less attention to statistical evidence, painstakingly compiled through multiple studies, showing that the test rarely saves lives but triggers many unnecessary surgeries. Or we hear about a cluster of cancer cases in a town with a hazardous waste dump, and we assume pollution caused the cancers. Yet just because two things happened together doesn’t mean one caused the other, and just because events are clustered doesn’t mean they’re not still random.

We have trouble digesting randomness; our brains crave pattern and meaning. Science warns us, however, that we can deceive ourselves. To be confident there’s a causal connection between the dump and the cancers, you need statistical analysis showing that there are many more cancers than would be expected randomly, evidence that the victims were exposed to chemicals from the dump, and evidence that the chemicals really can cause cancer.
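For what it’s worth, the statistical step Achenbach has in mind is not exotic. A minimal sketch with invented numbers, checking whether an observed cancer count exceeds what a Poisson model says chance alone would produce:

```python
from scipy.stats import poisson

expected_cases = 4.2  # what regional base rates predict for a town this size (assumed)
observed_cases = 9    # what residents actually counted (assumed)

# Probability of seeing at least this many cases if only chance were at work
p_value = poisson.sf(observed_cases - 1, expected_cases)
print(f"P(>= {observed_cases} cases | expected {expected_cases}) = {p_value:.4f}")
# A small p-value says "more cancers than expected randomly"; it says nothing
# about exposure or mechanism, which the quote rightly also demands.
```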

Yes, it would be nice to know if the chemicals used in commercial and industrial processes caused cancer. Unfortunately, many of the hazards that we face every day go undetected, especially in under-served communities. If your fire department or school is underfunded, there’s a good chance the EPA is not monitoring your air very well either.  Also, as Candice Lanius wrote last month, demands for statistical proof are not evenly levied across all populations. White and affluent people get their anecdotes taken seriously while the poor and disenfranchised must come up with statistics to corroborate their personal experiences.

Even if we lived in a world where everyone had to prove their position with statistical data, and there were monitoring stations evenly distributed across the country, we would still face the issue of what political sociologists of science call “organized ignorance.” That is, powerful actors like governments and companies make a point of not understanding things so that those things become difficult or impossible to regulate. Whether it is counting the number of sexual assaults or the amount of chemicals used in fracking, intentionally not collecting data is a powerful tool. So while I agree with Achenbach that people should base important decisions on sound data, we should also acknowledge that access to data is deeply uneven.

Assuming that access to the Internet is the same as having access to data is like wondering why all of the wires in your house aren’t generating any electricity. If you wanted to know why everyone in your community is getting sick, and all of your searching revealed that no one even bothered to collect the data, why would you go back to the same sources to learn about the origin of the human race? Why would you care what these people have to say about your body if there is a big gray NO DATA polygon over your neighborhood on an air quality map? In many cases, what Achenbach characterizes as a competition between science and misinformation is actually the latter filling a vacuum.

Maybe Achenbach and everyone else who writes about science denialism knows this, and this is why they act so surprised when “well educated and affluent” people stop vaccinating their children. Why would the affluent, the people science serves best, start questioning the validity of science? After all, it is the poor who were used as guinea pigs for medical research. It was poor southern black people who were misled into believing they were being treated for a disease, not rich Bay Area yuppies. [1]

I would venture to make an educated, maybe even socially scientific, guess that while the rich can afford to construct purity narratives that put vaccines in the same category as pesticides and preservatives, the rest of us still react positively to the ethics of care that vaccines engender: the common good over profit. It is the kind of care that encouraged Jonas Salk to sell the polio vaccine at cost. Vaccines are one of the few medical technologies that don’t follow the pill-every-day-for-the-rest-of-your-life business model. You aren’t renting your health with a daily supplement; you are doing something to yourself that keeps others safe as well. You take on the pain and burden of getting the shot so that those too weak to take it aren’t put in harm’s way. If you stop thinking of the affluent as the only people capable of making an informed and collective decision, and start thinking of them as selfish actors that can’t imagine their bodies working the same way a poor person’s body works, the education paradox disappears.

The selfishness of the rich is also the unspoken necessary condition for climate change denial. Corporations with a direct financial interest in the fossil fuel status quo are certainly a big part of the equation, but let’s not forget that the people already experiencing the effects of climate change are the people who have been pushed to the least hospitable parts of the world. Indigenous populations have been at the forefront of climate change activism, much more so than the reticent scientists who are concerned about being marked as political actors. There was little fear of politicization when American scientists were vulnerable to nuclear annihilation, but the far-off danger of climate change doesn’t seem to motivate middle-aged scientists. Why, again, should these institutions and the people who work in them be treated as stewards of truth and trust? Why is it everyone else who should be chastised?

Finally, what did I mean by my first example when I said evolution doesn’t help foster community? What does evolutionary theory have to do with preparing people to be a part of a fulfilling community? Knowing about the slow but steady changes that turned ape-like common ancestors into apes and humans shouldn’t have anything to do with how I get along with my neighbor.

If you ever watch a show like Doomsday Preppers (on the National Geographic Channel!) you might know where I am going with this. The show tracks families and individuals who are convinced that “life as we know it” will end within their lifetime. They are compelled to act in preparation for what they believe to be the natural state of humanity. The story of how people will react without creature comforts or law enforcement is remarkably similar regardless of whether they are prepping for an earthquake or a financial collapse: a Hobbesian war of all against all. It’s no surprise, then, that a typical prepper household has lots of canned food and guns.

How do we get such a uniform story from a wide range of people? Part of the answer is obviously the producers, who want to craft a particular story, but there is also a popular notion that, if left to our own devices, humans without government and the threat of violence will compete with each other to the death. There are many different contributors to this myth, but science education is a big one. Many school children would be surprised, for example, to hear that Darwin never wrote the phrase “survival of the fittest.” That phrase actually came from Herbert Spencer, a foundational utilitarian philosopher usually cited by libertarians.

I bring this up because my argument is much more than a “what has science done for me lately” complaint. There are values and perspectives embedded in the work. As Donna Haraway famously argued, scientists are not the mere “modest witnesses” they claim to be. Science is a human enterprise that intersects with race, class, and gender power relationships. The work of Darwin and his contemporaries never focused so heavily on competition and dog-eat-dog environments. The naturalist and anarchist scholar Pyotr Kropotkin even wrote a book, Mutual Aid, arguing against Darwin’s more competition-minded interpreters that species tend to provide mutual aid in times of scarcity. The downplaying of cooperation and the focus on competition, despite abundant examples of both, points to the final and most basic reason for doubting science: it doesn’t feel like a tool of liberation anymore.

I would care much more about the teaching of evolution in classrooms if it taught that cooperation and reciprocity, the sorts of things that make strong communities and fulfilling lives, were foundational to life itself. I would care more about stopping anti-vaccination movements if I thought anyone other than the most selfish among us were able to believe them. I would do more about climate change if scientists worked to prevent it as hard as they work to bring products to market. I would convince people that we actually landed on the moon if I thought there was any political will left in my country to do something that amazing within my lifetime. I doubt science because it doubts us.

David is on Twitter: da_banks

 

[1] Correction: this essay originally stated that people were injected with the syphilis virus. The Tuskegee experiments, in fact, misled participants into believing they were being treated when they were not.