The Tuskegee Syphilis Experiment is one of the most famous examples of unethical research. The study, funded by the federal government from 1932 to 1972, looked at the effects of untreated syphilis. To do this, researchers misinformed a number of Black men in Alabama who had syphilis about their illness. The men were told they had “bad blood” (which was sometimes a euphemism for syphilis, though not always) and that the government was offering special free treatments for the condition. Here is an example of a letter sent out to the men to recruit them for more examinations:
The “special free treatment” was nothing of the sort. The researchers conducted various examinations, including spinal taps, not to treat syphilis but simply to observe its effects. By the 1950s it was well established that a shot of penicillin would fully cure early-stage syphilis. Not only were the men never offered this life-saving treatment, the researchers conspired to keep them from learning about it, getting local doctors to agree that any study subject who came in would not be told he had syphilis or that a cure was available.
The abusive nature of this study is obvious (letting men die slow deaths that could have been easily prevented, just for the sake of scientific curiosity), and it shows how racism can influence researchers’ evaluations of what counts as acceptable risk and whose lives matter. After the study was exposed in the early 1970s, the Tuskegee experiment became a major impetus for human subjects protection requirements and the oversight of federally funded research. Some scholars argue that knowledge of the Tuskegee study increased African Americans’ distrust of the medical community, a suspicion that lingers to this day.
This remarkable newspaper article illustrates how skin color (which is real) gets translated into discrete racial categories (which are not). The children in the images below, Kian and Remee Hodgson, are fraternal twins born to two biracial parents:
The story attempts to explain the biology:
Skin colour is believed to be determined by up to seven different genes working together. If a woman is of mixed race, her eggs will usually contain a mixture of genes coding for both black and white skin. Similarly, a man of mixed race will have a variety of different genes in his sperm. When these eggs and sperm come together, they will create a baby of mixed race. But, very occasionally, the egg or sperm might contain genes coding for one skin colour. If both the egg and sperm contain all white genes, the baby will be white. And if both contain just the versions necessary for black skin, the baby will be black.
But then the journalist makes a logical leap from the biological determinants of skin color to racial categories. Referring now to genes for skin color as “black” and “white” genes, she writes: “Baby Kian must have inherited the black genes from both sides of the family, whilst Remee inherited the white ones.” And, of course, while both children are, technically, mixed race, the headline to the story, “Black and White Twins,” presents them as members of separate races.
We’re so committed to racial difference that the mother actually speaks about the twins’ similarities as if it were surprising that twins of different “races” could have anything in common. She says:
There are some similarities between them. They both love apples and grapes, and their favourite television programme is Teletubbies.
This is also a nice example of a U.S.-specific racial logic. This might not have been a story at all in Brazil, where racial categories are determined more by color alone and less by who your parents are. It is not uncommon there for siblings to carry different racial designations.
The images below are all screenshots from the fantastic American Anthropological Association website on race. They are designed to show how we take what is, in reality, a nuanced spectrum of skin color and turn it into racial categories. In this first image, they show how we could, conceivably, separate human beings into short, medium, and tall based on height:
In this second image, they show how adding two figures, both taller than the tallest in the previous image, can easily change how we designate everyone else.
And this third image demonstrates that, when we consider the full range of heights, where we draw the line between short and medium, or between medium and tall, is arbitrary and, ultimately, not very useful.
Skin color is like height. If we just look at three groups with very different skin colors, there appears to be a significant and categorical difference between those three groups of people.
But, if we consider a wide range of people, it becomes clear that skin color comes in a spectrum, not in categories (such as the five from which U.S. citizens are forced to choose on the census).
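The arbitrariness of those cutoffs is easy to demonstrate with a toy calculation. Here is a minimal sketch (the heights, cutoffs, and labels below are all invented for illustration): the same continuous measurements sort into different “categories” depending entirely on where we draw the lines.

```python
# Toy demonstration: identical continuous data, different "categories,"
# depending only on where the cutoffs fall. All numbers are invented.
heights_cm = [150, 158, 165, 172, 180, 188, 196, 204]

def categorize(value, cutoffs, labels=("short", "medium", "tall")):
    """Assign a label based on which side of the cutoffs a value falls."""
    for cutoff, label in zip(cutoffs, labels):
        if value < cutoff:
            return label
    return labels[-1]

# Two equally defensible cutoff schemes yield different groupings:
# the 165 cm, 180 cm, and 188 cm people each change "category."
for cutoffs in [(160, 180), (170, 190)]:
    groups = [f"{h}: {categorize(h, cutoffs)}" for h in heights_cm]
    print(cutoffs, groups)
```

Nothing about the data dictates either scheme; the categories come from the observer, not the measurements.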
Since their invention in 1913, and since this Kelvinator ad first ran in 1955, refrigerators have become bigger and better, going from a luxury to a necessity. It’s nearly impossible to imagine life today without somewhere to store your vegetables and keep your leftovers: in the hundred years it’s been around, the fridge has altered our grocery shopping habits and our attitudes toward food.
Appliance companies and advertisers worked hard to transform refrigerators from “a brand new concept in luxurious living” into an everyday household object. They succeeded in the 1960s, after years of fine-tuning the appliance’s features to appeal to the middle-class housewife, writes historian Shelley Nickles. Besides ensuring that fridges were spacious, easy to clean, and fitted with adjustable shelving, designers attended to minutiae such as warmer compartments, so that the butter kept in them would be easier to spread. Having attracted housewives’ attention and become affordable (there were even proposals for government-sponsored fridges), the appliances made their way into middle-class homes.
Buying too many perishable items suddenly became a minor concern. Buy one, get one free! Get more value for your money – purchase a bigger container! As the number of fridge compartments increased, so did the number of refrigeration-dependent foods and “supersize” deals offered in stores (or the other way around). Ultimately, grocery shoppers – mainly women – returned home with more food than they otherwise would have. Fridges enabled families to stock up, and the major weekend grocery haul was born. Now we have this:
But while having a fridge to store all the groceries made it possible to save more on “deals” at the supermarket, it also enabled us to waste more later on. That is because the fridge operates much like a time machine, but not without its limits. Sociologists Elizabeth Shove and Dale Southerton describe freezers as appliances that allow us to manage time: in addition to no longer having to shop multiple times per week, we can now prepare our meals in advance. The same holds for refrigerators.
Food has its own rhythm, however, and a fridge can only delay the inevitable for so long. Leftovers get pushed down in the hierarchy of what we’d like to eat and pushed back on the refrigerator shelf, only to be forgotten and perhaps rediscovered when it’s already too late. An exotic fruit rots in the produce compartment after its exciting novelty wears off and we are no longer sure what to do with it. And so it all ends up in the trash. Domestic food waste represents only part of all the food thrown away in the U.S. today (about a third of all that is produced), but the way fridges have altered our food purchasing and consumption habits is partly to blame.
Emotional Contagion is the idea that emotions spread throughout networks. If you are around happy people, you are more likely to be happy. If you are around gloomy people, you are likely to be glum.
The data scientists at Facebook set out to learn if text-based, nonverbal/non-face-to-face interactions had similar effects. They asked: Do emotions remain contagious within digitally mediated settings? They worked to answer this question experimentally by manipulating the emotional tenor of users’ News Feeds, and recording the results.
Public reaction was strong: many expressed dismay that Facebook would (1) collect their data without asking and (2) manipulate their emotions. The authors summarize the experiment like this:
In an experiment with people who use Facebook, we test whether emotional contagion occurs outside of in-person interaction between individuals by reducing the amount of emotional content in the News Feed. When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred.
In brief, Facebook made either negative or positive emotions more prevalent in users’ News Feeds, and measured how this affected users’ emotionally expressive behaviors, as indicated by users’ own posts. In line with Emotional Contagion Theory, and in contrast to “technology disconnects us and makes us sad through comparison” hypotheses, they found that indeed, those exposed to happier content expressed higher rates of positive emotion, while those exposed to sadder content expressed higher rates of negative emotion.
Looking at the data, there are three points of particular interest:
When positive posts were reduced in the News Feed, people used .01% fewer positive words in their own posts, while increasing the number of negative words they used by .04%.
When negative posts were reduced in the News Feed, people used .07% fewer negative words in their own posts, while increasing the number of positive words they used by .06%.
Prior to manipulation, 22.4% of posts contained negative words, as compared to 46.8% which contained positive words.
Let’s first look at points 1 and 2 — the effects of positive and negative content in users’ News Feeds. These effects, though significant and in the predicted direction, are really really tiny. None of the effects even approach 1%. In fact, the effects are all below .1%. That’s so little! The authors acknowledge the small effects, but defend them by translating these effects into raw numbers, reflecting “hundreds of thousands” of emotion-laden status updates per day. They don’t, however, acknowledge how their (and I quote) “massive” sample size of 689,003 increases the likelihood of finding significant results.
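To see why the sample size matters, here is a minimal sketch. It is not the study’s actual analysis (the paper modeled rates of emotion words per person, not simple proportions); the proportions, group sizes, and the basic two-proportion z-test below are hypothetical stand-ins. The point is general: the very same 0.04-percentage-point gap is statistical noise at ordinary sample sizes and “significant” at enormous ones.

```python
# Illustration only: how a tiny difference in proportions crosses the
# p < .05 threshold purely as n grows. Proportions are hypothetical.
import math

def two_prop_z(p1, p2, n_per_group):
    """z statistic for comparing two proportions with equal group sizes."""
    p_pool = (p1 + p2) / 2
    se = math.sqrt(2 * p_pool * (1 - p_pool) / n_per_group)
    return (p1 - p2) / se

control, treated = 0.4680, 0.4676   # a 0.04-percentage-point gap

for n in (10_000, 1_000_000, 100_000_000):
    z = two_prop_z(control, treated, n)
    verdict = "significant" if abs(z) > 1.96 else "not significant"
    print(f"n = {n:>11,} per group: z = {z:.2f} ({verdict} at p < .05)")
```

With hundreds of thousands of users each contributing many posts and words, the effective number of observations lands at the far end of that range, where almost any nonzero difference registers as significant.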
So what’s up with the tiny effects?
The answer, I argue, is that the structural affordances of Facebook are such that users are far more likely to post positive content anyway. For instance, there is no dislike button, and emoticons are the primary means of visually expressing emotion. Concretely, when someone posts something sad, there is no canned way to respond, nor an adequate visual representation. Nobody wants to “Like” the death of someone’s grandmother, and a frownie-face emoticon seems decidedly out of place.
The emotional tenor of your News Feed is small potatoes compared to the effects of structural affordances. The affordances of Facebook buffer against variations in content. This is clear in point 3 above, in which positive posts far outnumbered negative posts prior to any manipulation. The very small effects of the experimental manipulations indicate that the overall emotional makeup of posts changed little, even when positive content was artificially decreased.
So Facebook was already manipulating your emotions, our emotions, and our logical lines of action. We come to know ourselves by seeing what we do, and the selves we perform through social media become important mirrors in which we glean personal reflections. The affordances of Facebook therefore not only shape emotive expressions, but also reflect back to users that they are the kind of people who express positive emotions.
Positive psychologists would say this is good; it’s a way in which Facebook helps its users achieve personal happiness. Critical theorists would disagree, arguing that Facebook’s emotional guidance is a capitalist tool which stifles rightful anger, indignation, and mobilization towards social justice. In any case, Facebook is not, nor ever was, emotionally neutral.
Jenny Davis is an Assistant Professor of Sociology at James Madison University and a weekly contributor to Cyborgology, where this post originally appeared. You can follow her on Twitter.
A few days ago, Juliano Pinto opened the World Cup with its ceremonial first kick. It was a media stunt designed to make us verklempt. Pinto, who is paraplegic, made his move wearing a mind-controlled robotic exoskeleton.
We were to be awed by the technology, too, of course, which is being developed by the Walk Again Project, a scientific consortium. Says the leading scientist on the project, “With enough political will and investment, we could make wheelchairs obsolete.”
Ask any wheelchair user, particularly one who’s been in the game a while, and they’ll tell you that they’re far too busy living their life to sit there worrying about whether or not they’ll ever walk. We just get on and do.
From his point of view, the exoskeleton is for people who aren’t in wheelchairs. Getting “non-walkers to walk again,” he says, is about making everyone else happy. As for him, he says, he’s fine:
My wheelchair is a very capable tool and to be honest, the last thing I want is to be strapped to a District 9-esque robot and become a puppet in some corporation’s half-baked execution of an obsession…
In the meantime, he says, everyone’s concern with getting him to walk again suggests that he, and everyone else who uses a wheelchair, is living a pitiable life. “These stories,” he says, “are unwittingly invalidating a unique way of life for millions of people around the globe who are really happy with their wheelchairs.” So, he goes on record: “This is not my dream.”
William Peace, an anthropologist who also uses a wheelchair, goes further, arguing that the exoskeleton is harmful to people who are newly paralyzed. The scientists developing the exoskeleton are “sell[ing] the dream of walking to newly paralyzed people who cannot imagine life as a wheelchair user.” This is bad, he says, because it encourages people to reject their new body instead of accept it. He writes: “the exoskeleton is symbolically and practically destructive to a newly paralyzed person.”
Instead of focusing on the one thing people using wheelchairs can’t do, Peace argues, we should focus on all the things they do every day:
Work, make a decent living, and be autonomous. Own a home even. Have a family. Get married. In short, be ordinary. Walking is simply not required for all this nor should it be glorified.
Nicholson concurs: “My life as a wheelchair-user is a very good one.”
So hey, able-bodied media: quit making me feel like wheelchairs are a shitty, sub-par option. Stop beating your exoskeleton drum. And most of all, let go of your obsession with walking, because it’s totally overrated.
Last week the internet chuckled at the visual below. It shows that, since Godzilla made his first movie appearance in 1954, he has tripled in size.
Kris Holt, at PolicyMic, suggests that his enlargement is in response to growing skylines. She writes:
As time has passed, buildings have grown ever taller too. If Godzilla had stayed the same height throughout its entire existence, it would be much less imposing on a modern cityscape.
This seems plausible. Buildings have gotten taller and so, to preserve the original feel, Godzilla would have to grow too.
But rising buildings can’t be the only explanation. According to this graphic, the tallest building at the time of Godzilla’s debut was the Empire State Building, rising to 381 meters. The tallest building in the world today is (still) the Burj Khalifa. At 828 meters, it’s more than twice as tall as the Empire State Building, but it’s far from three times as tall, which would be 1,143 meters.
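For the record, here is the arithmetic as a quick sketch, using only the heights cited above:

```python
# Quick check of the skyline hypothesis, using the figures in the text.
empire_state_m = 381   # tallest building at Godzilla's 1954 debut
burj_khalifa_m = 828   # tallest building today

print(f"Buildings grew {burj_khalifa_m / empire_state_m:.2f}x since 1954")  # ~2.17x
print("Godzilla grew 3.00x over the same period")
print(f"Matching Godzilla would take a {3 * empire_state_m} m building")    # 1143 m
```

Skylines roughly doubled while Godzilla tripled, so something beyond architecture must be at work.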
Is there an alternate explanation? Here’s one hypothesis.
In 1971, the average American was exposed to about 500 advertisements per day. Today, because of the internet, they are exposed to over 5,000. Every. Day.
Media critic Sut Jhally argues that the flood of advertising has forced marketers to shift strategies. Specifically, he says:
So overwhelming has the commercial takeover of culture become, that it has now become a problem for advertisers who now worry about clutter and noise. That is, how do you make your ads stand out from the commercial impressions that people are exposed to.
One strategy has been to ratchet up shock value. “You need to get eyeballs. You need to be loud,” said Kevin Kay, Spike’s programming chief.
So, to increase shock value, everything is being made more extreme. Compared to the early ’90s, before the internet was a fixture in most homes and businesses, advertising — and I’m guessing media in general — has gotten more extreme in lots of ways. Things are sexier, more violent, more gorgeous, more satirical, and weirder.
A recent RadioLab podcast, titled “The Bitter End,” identified an interesting paradox. When you ask people how they’d like to die, most will say that they want to die quickly, painlessly, and peacefully… preferably in their sleep.
But, if you ask them whether they would want various types of interventions, were they on the cusp of death and already living a low quality of life, they typically say “yes,” “yes,” and “can I have some more please.” Blood transfusions, feeding tubes, invasive testing, chemotherapy, dialysis, ventilation, and chest-pumping CPR. Most people say “yes.”
But not physicians. Doctors, it turns out, overwhelmingly say “no.” The graph below shows the answers that physicians give when asked if they would want various interventions at the bitter end. The only intervention that doctors overwhelmingly want is pain medication. In no other case do even 20% of the physicians say “yes.”
What explains the difference between physician and non-physician responses to these types of questions? USC professor and family medicine doctor Ken Murray gives us a couple of clues.
First, few non-physicians actually understand how terrible undergoing these interventions can be. He discusses ventilation. When a patient is put on a breathing machine, he explains, their own breathing rhythm will clash with the forced rhythm of the machine, creating the feeling that they can’t breathe. So they will uncontrollably fight the machine. The only way to keep someone on a ventilator is to paralyze them. Literally. They are fully conscious, but cannot move or communicate. This is the kind of torture, Murray suggests, that we wouldn’t impose on a terrorist. But that’s what it means to be put on a ventilator.
A second reason why physicians and non-physicians may offer such different answers has to do with the perceived effectiveness of these interventions. Murray cites a study of medical dramas from the 1990s (E.R., Chicago Hope, etc.) showing that, when CPR was initiated on screen, it worked 75% of the time. It would be reasonable for the TV-watching public to think that CPR brings people back from death to healthy lives a majority of the time.
In fact, CPR doesn’t work 75% of the time. It works 8% of the time. That’s the percentage of people who are subjected to CPR and are revived and live at least one month. And those 8% don’t necessarily go back to healthy lives: 3% have good outcomes, 3% return but are in a near-vegetative state, and the other 2% are somewhere in between. With those kinds of odds, you can see why physicians, who don’t have to rely on medical dramas for their information, might say “no.”
The paradox, then (that people want to be actively saved if they are near or at the moment of death, but also want to die peacefully) seems to be rooted in a pretty profound medical illiteracy. Ignorance is bliss, it seems, at least until the moment of truth. Physicians, not at all ignorant of the fraught nature of intervention, know that a peaceful death is often a willing one.