Flashback Friday.

A study by Dr. Ruchi Gupta and colleagues mapped rates of asthma among children in Chicago, revealing that they are closely correlated with race and income. The overall U.S. rate of childhood asthma is about 10%, but asthma is very unevenly distributed. Their visuals show huge variations in the rates of childhood asthma among different neighborhoods:


The researchers looked at how the racial/ethnic composition of neighborhoods is associated with childhood asthma, defining a neighborhood’s racial make-up by identifying those that were over 67% White, Black, or Hispanic. This graph shows the percent of such neighborhoods that fall into three categories of asthma prevalence: low (less than 10% of children have asthma), medium (10-20%), and high (over 20%). While 95% of White neighborhoods have low or medium rates, 56% of Hispanic neighborhoods have medium or high rates. The really striking finding, though, is for Black neighborhoods: 94% have medium or high prevalence. And the racial clustering is even more pronounced if we look only at the high category, where just a tiny proportion (6%) of White neighborhoods fall but nearly half of Black ones do, a nearly mirror image of what we see for the low category:


Rates of asthma and racial/ethnic composition (the color of the circles) mapped onto Chicago neighborhoods (background color represents prevalence of asthma):


Asthma rates don’t seem to be strongly related to neighborhood education levels, but they are highly correlated with overall neighborhood incomes:


It’s hard to know exactly what causes the higher rates of asthma in Black and Hispanic neighborhoods compared to White ones. It could be differences in access to medical care. The researchers also found that asthma rates are higher in neighborhoods with high rates of violence; perhaps the stress of living amid violence contributes to asthma. The authors suggest that parents might also keep their children inside more to protect them from the violence, increasing their exposure to second-hand smoke and other indoor pollutants (off-gassing from certain types of paint or construction materials, for instance).

Other studies suggest that poorer neighborhoods have worse outdoor environmental conditions, particularly exposure to industries that release toxic air pollutants or store toxic waste, which increases the risk of asthma. Having a parent with asthma also increases a child’s chances of having it, though that connection is equally unclear: is there a genetic factor, or does it simply indicate that parents and children are likely to grow up in neighborhoods with similar conditions?

Regardless, it’s clear that some communities — often those with the fewest resources to deal with it — are bearing the brunt of whatever conditions cause childhood asthma.

Originally posted in 2010.

Gwen Sharp is an associate professor of sociology at Nevada State College. You can follow her on Twitter at @gwensharpnv.

Trigger warning for racist language and discussions of racial violence.

After Hurricane Katrina had passed, while New Orleans was still in a state of crisis, residents of Algiers Point, a predominantly White neighborhood that had escaped flooding, took it upon themselves to violently patrol their streets.

“It was great!” says one man interviewed below. “It was like pheasant season in South Dakota. If it moved, you shot it!” According to one witness, they were looking for “anything coming up this street darker than a paper bag…” At least 11 Black men were shot.

Here is a short interview with two of the men of Algiers Point, from the documentary Welcome to New Orleans:

This next video, sent in by reader Martha O., includes some of the footage above, but focuses much more on the experiences of several African American men who lived in the neighborhood and were shot or threatened by their White neighbors.

The men talk about the panic and terror they felt during these incidents. Toward the end, Donnell Herrington watches footage of the White residents bragging about their exploits. It’s brutal to watch him listen as the militia members talk casually, and with obvious enthusiasm and pride, about shooting African Americans.

The video is part of an in-depth story about the Algiers Point shootings featured in The Nation in 2008. And as Martha explained, it’s a harrowing example of how swiftly organized violent racism can emerge when external constraints are even briefly weakened.

Originally posted in 2012. Watch the full documentary here.

Gwen Sharp is an associate professor of sociology at Nevada State College. You can follow her on Twitter at @gwensharpnv.

Flashback Friday.

The AP has an interesting website about wildfires from 2002 to 2006. Each year, most wildfires occurred west of the Continental Divide:

Many of these areas are forested. Others are desert or shortgrass prairie:

There are a lot of reasons for wildfires: climate and ecology, periodic droughts, and humans. The U.S. Fish and Wildlife Service reports that in the Havasu National Wildlife Refuge, the “vast majority” of wildfires are due to human activity. And many scientists expect climate change to increase wildfires.

Many wildfires affect land managed by the Bureau of Land Management (BLM). For most of the 1900s, the BLM had a policy of total fire suppression to protect valuable timber and private property.

Occasional burns were part of forest ecology. Fires came through, burned forest litter relatively quickly, and then moved on or died out. Healthy taller trees were generally unaffected: their branches were often out of reach of the flames, and their bark provided protection. Usually the fire moved on before trees ignited. And some types of seeds required exposure to fire in order to sprout.

Complete fire suppression allowed leaves, pine needles, brush, fallen branches, etc., to build up. Wildfires then became more intense and destructive: they were hotter, flames reached higher, and thicker layers of forest litter meant the fire lingered longer.

As a result, an uncontrolled wildfire was often more destructive. Trees were more likely to burn, or to smolder and reignite several days later. Hotter fires with higher flames are more dangerous to fight and can more easily jump natural or artificial firebreaks. They may burn a larger area than they otherwise would, doing more of the very damage that total fire suppression policies were supposed to prevent.

In the last few decades the BLM has recognized the importance of occasional fires in forest ecology. Fires are no longer seen as inherently bad. In some areas “controlled burns” are set to burn up some of the dry underbrush and mimic the effects of naturally-occurring fires.

But it’s not easy to undo decades of fire suppression. A controlled burn sometimes turns out to be hard to control, especially with such a buildup of forest litter. Property owners often oppose controlled burns because they fear the possibility of one getting out of hand. So the policy of fire suppression has in many ways backed forest managers into a corner: it led to changes in forests that make it difficult to change course now, even though doing so might reduce the destructive effects of wildfires when they do occur.

Given this, I’m always interested when wildfires are described as “natural disasters.” What makes something a natural disaster? The term implies a destructive situation that is not human-caused but rather emerges from “the environment.” As the case of wildfires shows, the situation is often more complex than this, because what appear to be “natural” processes are often affected by humans… and because we are, of course, part of the environment, despite the tendency to think of human societies and “nature” as separate entities.

Originally posted in 2010.

Gwen Sharp is an associate professor of sociology at Nevada State College. You can follow her on Twitter at @gwensharpnv.

Flashback Friday.

My great-grandma would put a few drops of turpentine on a sugar cube as a cure-all for any type of cough or respiratory ailment. Nobody in the family ever had any obvious negative effects from it as far as I know. And once when I had a sinus infection my grandma suggested that I try gargling kerosene. I decided to go to the doctor for antibiotics instead, but most of my relatives thought that was a perfectly legitimate suggestion.

In the not-so-distant past, lots of substances we consider unhealthy today were marketed and sold for their supposed health benefits. Joe A. of Human Rights Watch sent in these images of vintage products that openly advertised that they contained cocaine or heroin. Perhaps you would like some Bayer Heroin?

This alcohol and opium concoction was for treating asthma:

Cocaine drops for the kids:

This product, made up of 46% alcohol mixed with opium, was for all ages; the back includes dosages for children as young as five days old:

A reader named Louise sent in a recipe from the cookbook of her great-grandmother, who was a cook at a country house in England. The recipe is dated 1891 and calls for “tincture of opium”:

The recipe from the lower half of the right-hand page (with original spellings):

Hethys recipe for cough mixture

1 pennyworth of each
Antimonial Wine
Acetic Acid
Tincture of opium
Oil of aniseed
Essence of peppermint
1/2lb best treacle

Well mix and make up to Pint with water.

As Joe says, it’s no secret that products containing cocaine, marijuana, opium, and other now-banned substances were at one time sold openly, often as medicines. The change in attitudes toward these products, from entirely acceptable and even beneficial to inherently harmful and addictive, is a great example of social construction. While opium and cocaine certainly have negative effects on some people, so do other substances that remained legal (or were re-legalized, in the case of alcohol).

Racist and anti-immigrant sentiment often played a role in changing views of what are now illegal controlled substances. For instance, the association of opium with Chinese immigrants contributed to increasingly negative attitudes toward it, since anything associated with Chinese immigrants was stigmatized, particularly in the western U.S. This combined with a push by social reformers to prohibit a variety of substances, leading to the Harrison Narcotic Act. The act, passed in 1914, regulated the production and distribution of opium but, in its application, effectively criminalized it.

Reformers pushing for cocaine to be banned suggested that its effects led Black men to rape White women, and that it gave them nearly super-human strength that allowed them to kill Whites more effectively. A similar argument was made about Mexicans and marijuana:

A Texas police captain summed up the problem: under marijuana, Mexicans became “very violent, especially when they become angry and will attack an officer even if a gun is drawn on him. They seem to have no fear, I have also noted that under the influence of this weed they have enormous strength and that it will take several men to handle one man while under ordinary circumstances one man could handle him with ease.”

So the story of the criminalization of some substances in the U.S. is inextricably tied to various waves of anti-immigrant and racist sentiment. Some of the same discourse resurfaced as crack cocaine emerged in the 1980s and was perceived as the drug of choice of African Americans: the “super criminal” who is impervious to pain and therefore especially violent and dangerous, and the addicted mother who harms or even abandons her child and prostitutes herself to get drugs.

Originally posted in 2010.

Gwen Sharp is an associate professor of sociology at Nevada State College. You can follow her on Twitter at @gwensharpnv.

Flashback Friday.

Back when I was in high school and college, I learned that one of the major things that separated humans from other species was culture. The ability to develop distinct ways of living that include an understanding of symbols, language, and customs unique to the group was a specifically human trait.

And, ok, so it turned out that other species had more complex communication systems than we thought they did, but still, other animals were assumed to behave according to instinct, not community-specific cultures.

But as with so many things humans have been convinced we alone possess, it’s turning out that other species have cultures, too. One of the clearest examples is the division of orcas into two groups with distinct customs and eating habits; one eats mammals while the other is pescetarian, eating only fish. Though the two groups regularly come in contact with each other in the wild, they do not choose to intermingle or mate with one another. Here’s a video:

Aside from the obvious implications for our understanding of culture, this raises an issue for conservation. Take the case of orcas: some suggest they should be on the endangered species list because the population has declined. What do we do if it turns out that, while the overall orca population is not endangered, one of the distinct orca cultural groups is? Is it enough that killer whales still exist, or do we need to think of the cultures separately and try to preserve sufficient numbers of each? In addition to being culturally different, the groups are functionally non-interchangeable: each has a different effect on food chains and ecosystems.

Should conservation efforts aim not just to keep the overall population alive and functioning, but also to protect the range of cultural diversity within a species? In a situation like this, should we declare one orca culture endangered but not the other? Are both ecological niches important?

I love these questions. If we recognize that creatures can have cultures, it challenges our sense of self, but also brings significantly more complexity to the idea of wildlife preservation.

Originally posted in 2010.

Gwen Sharp is an associate professor of sociology at Nevada State College. You can follow her on Twitter at @gwensharpnv.

Flashback Friday.

I recently came upon the Jewish greeting card section at Target, way down on the bottom row. I could tell it was the Jewish section because all of the dividers that tell you what kind of card is in that slot (birthday, anniversary, etc.) had a Star of David on them.

I was interested in what a specifically Jewish birthday card might look like, so I picked this one up. It draws on the idea that Jewish people are particularly prone to feeling guilty.

The inside said:

…but is cake and ice cream mentioned anywhere? I think NOT! It’s your day! Enjoy! Enjoy!

Sociologist Mary Waters found that people often believe ethnicity explains all types of behaviors that are, in fact, very widespread. She interviewed White ethnics in the U.S.; they often attributed their families’ characteristics to their ethnicity. Take the idea of the loud, boisterous family, often including a mother who constantly tries to get the kids to eat more of her homecooked meals and worries that they aren’t married yet. Many individuals described their family this way and claimed that their ethnicity was the reason.

People who identified their background as Italian, Greek, Jewish, Polish, and others all believed that the way their family interacted was a unique custom of their ethnic group. Yet they all described pretty much the same characteristics. The cardmakers’ (and others’) allusion to guilt to signify Jewishness seems to me to fall into this category: take out the Stars of David and I bet a range of religious/ethnic groups would think it was tailored to them specifically.

So you take a card, mention guilt in it, add a Star of David, and you’ve got a Jewish card. Take out the Star of David and add a cross, and maybe it’s a Catholic card, since Catholics are often portrayed as feeling a lot of guilt too. I’ve had friends who grew up Southern Baptist or Pentecostal joke about having felt guilty about everything, so you could market the card to them as well. It’s a good example of how we often treat characteristics or behaviors as meaningfully connected to a specific ethnic background when they are actually a pretty common way that people in general, across ethnic lines, behave.

Originally published in 2010.

Gwen Sharp is an associate professor of sociology at Nevada State College. You can follow her on Twitter at @gwensharpnv.

Flashback Friday.

I’ve posted about the use of apparent discounts as a marketing tool and about the rise of the shopping cart. Since I’m on a little marketing-related posting trend, I figured I might as well post about restaurant menus. New York Magazine recently provided an analysis of how menu elements such as placement and images influence purchases.

Here’s the menu analyzed in the article, from the New York restaurant Balthazar:

Some of the most interesting elements numbered on the menu:

1. Pictures of food on menus are tricky. They can convince people to order a dish, but more expensive restaurants don’t want to be associated with low-cost places like Denny’s or Applebee’s. In general, the more expensive the restaurant, the less likely the menu is to include images of food, and if it does, they’re drawings, not color photos. And, apparently, the upper right corner is where customers’ eyes go first, so you need to make good use of that section.

2 and 3. Listing something expensive (like a $115 seafood dish) in a prominent spot serves the same function as a “manufacturer’s suggested retail price” on a sales tag at a retail store: it sets an anchor price that makes other prices look like bargains in comparison. The $70 seafood dish listed next to the $115 one seems far more reasonable than it would if it were listed without the comparison anchor price.

5. Listing dishes in a column encourages customers to skim down the list, making it more likely that they’ll focus on the column of prices rather than the dishes themselves and pick from among the cheapest things on the menu. Connecting dish names to prices with a line of dots or dashes makes this even more pronounced.

8. Restaurants often use “bracketing”:

…the same dish comes in different sizes. Here, that’s done with steak tartare and ravioli — but because “you never know the portion size, you’re encouraged to trade up,” Poundstone says. “Usually the smaller size is perfectly adequate.”

Notice the same things I mentioned in my post about meaningless discounts: high prices used to set an anchor that makes everything else look cheap and an emphasis on apparent savings to distract the customer from how much they’re spending.

And the bracketing thing is marketing genius: the larger portion is usually only a little more expensive, so the customer is likely to focus on the fact that the upgrade seems like a bargain, while having very little information about how much bigger the larger portion actually is.

Knowledge is power! And now you know.

Originally posted in 2009.

Gwen Sharp is an associate professor of sociology at Nevada State College. You can follow her on Twitter at @gwensharpnv.

Flashback Friday.

Yesterday I went to Marshall’s to take some photos for this post and overheard a conversation between a teenager and her mother that perfectly illustrated what I was planning to post about. The teen pulled her mom over to look at a purse she wanted for Christmas. It was $148, but she made the case that it was actually a great buy compared to the original price, which, as she pointed out, was listed as $368.

Ellen Ruppel Shell discusses this topic at length in Cheap: The High Cost of Discount Culture. Here’s a relevant photo I took:

It indicates that you are getting a great deal by shopping at Marshall’s compared to the original price of the item.

Except that is not, in fact, what the tag says. Look at the image again: the wording is “compare at…” The tags do not say “marked down from,” “original price,” or “was.” There is a crucial difference: “compare at” invites you to assume the shoes originally sold for $175, making them a super steal at $49, without ever claiming that they did. The “manufacturer’s suggested retail price” (MSRP) works the same way.

But as Shell points out, these numbers are largely fictional. Marshall’s is not actually telling you that those shoes were ever sold for $175. You’re just supposed to “compare” $49 to $175. But $175 may be an entirely meaningless number. The shoes may never have been sold for $175 at any store; certainly no specifics are given. Even if they were, the fact that a large number of them ended up at Marshall’s would indicate that many customers didn’t consider $175 an acceptable price.

The same goes for the MSRP: it’s meaningless. Among other things, that’s not how pricing works these days for big retail outlets. The manufacturer doesn’t make a product and then tell the retailer how much to charge for it. Retailers hold much more power than manufacturers; generally, they pressure suppliers to meet their price and to constantly lower costs, putting the burden on the suppliers to figure out how to do so (often by reducing wages). The idea that manufacturers can tell Macy’s or Target or other big retailers how much to charge for their items is ridiculous. Rather, the retailer usually tells the manufacturer what MSRP to print on the tags of items they’ll be purchasing (I saw some tags at Marshall’s that had an MSRP line with no price printed on it).

So what’s the point of an MSRP on a price tag, or a “compare at” number? These numbers serve as “anchor” prices — that is, they set a high “starting” point for the product, so the “sale” price seems like a great deal in comparison. Except the “sale” price isn’t actually a discount at all — it’s only a sale price in comparison to a fictional original price developed for the sole purpose of making you think “Holy crap! I can get $175 shoes for just $49!”

The point is to redirect your thinking from “Do I think these shoes are worth $49?” to “I can save $126!” This is a powerful psychological motivator; marketing research shows that people are fairly easily swayed by perceived savings. A sweater we might not think is worth $40 at Banana Republic suddenly becomes worth $50 at Marshall’s (or T.J. Maxx, an outlet mall, Ross, etc.) when we’re told it used to sell for $80. We focus not on the fact that we’re spending $50, but on the fact that we’re saving $30.

And that makes us feel smart: we’ve beaten the system! Instead of going to the mall and paying $368 for that purse, we hunted through the discount retailer and found it for $148! We worked for it, and we were smart enough not to get conned into buying it at the inflated price. Shell describes research showing that, in these situations, we feel we didn’t just save that money, we actually earned it by going to the effort of searching out deals. When we buy that $148 purse, we’re likely to leave feeling somehow $220 richer (since we didn’t pay $368) rather than $148 poorer. And we’ll value it more highly because we feel we were smart to find it; that is, we’re likely to think a $148 purse bought on “sale” is cooler and better quality than we would the identical purse bought at full price for $120.

And stores capitalize on these psychological tendencies by giving us cues that seem to indicate we’re getting an amazing deal. Sometimes we are. But often we’re being distracted with numbers that seem to give us meaningful information but are largely irrelevant, if not entirely fictional.

Originally posted in 2009.

Gwen Sharp is an associate professor of sociology at Nevada State College. You can follow her on Twitter at @gwensharpnv.