
Flashback Friday.

Heather L. sent us a link to a business called The Occasional Wife. Its slogan: “The Modern Solution To Your Busy Life.” The store sells products that help you organize your home and office, and provides all kinds of helpful services to support your personal goals.


There are two things worth noting here:

First, the business relies on and reproduces the very idea of “wife.”  As the website makes clear, wives are people who (a) make your life more pleasurable by taking care of details and daily life-maintenance (such as running errands), (b) organize special events in your life (such as holidays), and (c) deal with work-intensive home-related burdens (such as moving), all while perfectly coiffed and in high heels.

But, the business only makes sense in a world where “real” wives are obsolete. Prior to industrialization, most men and women worked together on home farms. With industrialization, all but the wealthiest of families relied on (at least) two breadwinners. In the 1950s, the era to which this business implicitly harkens back, Americans were bombarded with ideological propaganda praising stay-at-home wives and mothers (in part to pressure women out of jobs that “belonged” to men after the war). Since then, women have increasingly participated in wage labor. Today, the two-parent, single-earner family is only a minority of families.

So, in our “modern” world, even when there is a wife in the picture, there’s rarely a “wife.” But, as the founder explains, it’d sure be nice to have one:

See, she was his wife, but not a wife.

Of course, this is nothing new.  Tasks performed by wives have been increasingly commodified (that is, turned into services for which people pay): for example, house cleaning, cooking, and child care.  This business just makes the transition in reality explicit by referencing the ideology.  The fact that the use of the term “wife” works in this way (i.e., brings to mind the 1950s stereotype) in the face of a reality that looks very different, just goes to show how powerful ideology can be.

Originally posted in 2009; the business has grown from one location to four.

Lisa Wade, PhD is an Associate Professor at Tulane University. She is the author of American Hookup, a book about college sexual culture; a textbook about gender; and a forthcoming introductory text: Terrible Magnificent Sociology. You can follow her on Twitter and Instagram.

You may be familiar with the fact that the coca in Coca-Cola originally came with cocaine. But did you know that the beverage was infused with the drug in the first place because of prohibition? Cocaine cola replaced cocaine wine. In fact, when it debuted in 1886, it was described as “Coca-Cola: The Temperance Drink.”

The first mass marketed cocaine product was Vin Mariani, a cocaine-infused Bordeaux introduced in the 1860s. Legal and requiring no prescription, it was believed to “restore health and vitality” and I’m sure it felt like it did. Wikipedia reports that it included 7.2 mg of cocaine per ounce; comparatively, a line snorted is about 25 mg.


Yes, Vin Mariani was good for men, women, and children. The “tonic of kings!” Even the Pope! Leo XIII loved it so much he called it a “benefactor of humanity” and gave it a Vatican Gold Medal:


But he was just the most eminent of its fans. Mariani’s media blitz included endorsements from Sarah Bernhardt, H.G. Wells, Ulysses S. Grant, Queen Victoria, the Empress of Russia, Thomas Edison, and the then-President of the United States, William McKinley. Jules Verne reportedly joked: “Since a single bottle of Mariani’s extraordinary coca wine guarantees a lifetime of 100 years, I shall be obliged to live until the year 2700!”

Vin Mariani dominated the market, but an American chemist, John Stith Pemberton, made a competing product: Pemberton’s French Wine Coca, which he described as an “intellectual beverage.” Pemberton was based in — you guessed it — Atlanta, and the state enacted temperance legislation in 1885. Hence, Coca-Cola was born.

Cross-posted at Pacific Standard.

Lisa Wade, PhD is an Associate Professor at Tulane University. She is the author of American Hookup, a book about college sexual culture; a textbook about gender; and a forthcoming introductory text: Terrible Magnificent Sociology. You can follow her on Twitter and Instagram.

The original compute-ers, people who operated computing machines, were mostly women. At that point in history, most typists were women, and their skills seemed to transfer from that job to the next. As late as the second half of the 1960s, women were seen as naturals for working with computers. As Grace Hopper explained in a 1967 Cosmopolitan article:

It’s just like planning dinner. You have to plan ahead and schedule everything so it’s ready when you need it. Programming requires patience and the ability to handle detail. Women are “naturals” at computer programming.

But then, this happened:


Computer programming was masculinized.

The folks at NPR, who made the chart, interviewed information studies professor Jane Margolis. She interviewed hundreds of computer science majors in the 1990s, right after women started dropping out of the field. She found that having a personal computer as a kid was a strong predictor of choosing the major, and that parents were much more likely to buy a PC for their sons than they were for their daughters.

This may have been related to the advertising at the time. From NPR:

These early personal computers weren’t much more than toys. You could play Pong or simple shooting games, maybe do some word processing. And these toys were marketed almost entirely to men and boys. This idea that computers are for boys became a narrative.

By the 1990s, students in introductory computer science classes were expected to have some experience with computers. The professors assumed so, inadvertently punishing students who hadn’t been so lucky, disproportionately women.

So it sounds like that’s at least part of the story.

Lisa Wade, PhD is an Associate Professor at Tulane University. She is the author of American Hookup, a book about college sexual culture; a textbook about gender; and a forthcoming introductory text: Terrible Magnificent Sociology. You can follow her on Twitter and Instagram.

Flashback Friday.

If you’re like me, you probably grew up hearing a charming story about John Chapman, aka Johnny Appleseed, in which he planted apples across America so that no one would ever go hungry again. The image, overall, is of an eccentric but kindly man who went around planting apples so pioneers could have fresh, healthy fruit to eat. Here’s the 1948 version of the story from Disney, if you have 15 minutes:


Johnny Appleseed-1948 by Kanker76

In his book The Botany of Desire, Michael Pollan discusses Johnny Appleseed. He really did exist, and he did travel around the frontier planting apples from apple seeds and later selling the apples to pioneers (and apparently giving lots of trees away, too). He was, by all accounts, extremely eccentric, wearing sackcloth as a tunic for clothing, going barefoot much of the time, and so on. He was a vegetarian, though I don’t know if chipmunks and other animals pranced around in the woods with him.


But there’s a little detail the Disney movie and all the kids’ books about Johnny Appleseed got wrong. His apples weren’t for eating. They were for liquor.

Apples don’t grow “true” from seeds — that is, if you plant a Granny Smith apple seed, the tree that grows will not produce Granny Smith apples (the vast majority of the time, anyway). The only way to be sure what kind of apples a tree will produce is to graft limbs onto it from another apple tree that has the kind of apples you want. Most trees that grow from seeds produce smallish apples that are bitter and very much unlike the glowing waxed fruit we’ve come to associate with health and a good diet. People would not want to eat those apples. But what they could do with them is turn them into apple cider, alcoholic apple cider.

For much of American history, alcoholic beverages were widely consumed by both adults and children. Before clean water was necessarily available, it was safer to drink alcohol, particularly in cities.

So how did we go from apples as source of liquor to apples as healthy fresh fruit? According to The Straight Dope,

We stopped drinking apples and started eating them in the early 1900s. The Women’s Christian Temperance Union publicized the evils of alcohol, the movement towards Prohibition was gaining momentum, and the apple industry saw the need to re-position the apple… We can thank prohibition for shifting the image of the apple to the healthy, wholesome, American-as-apple-pie fruit that it is today.

Anyway, it’s sort of a funny instance of both the way we sanitize history and of re-branding. Most of us, raised on images of Laura Ingalls Wilder, can’t imagine early pioneers drinking alcohol all day and happily giving it to their children, or that there might be legitimate reasons for doing so (protecting your kids from getting dysentery from polluted water, for instance). And apples have become such an icon of health that the idea of campaigns against them as sources of liquor is unimaginable.

Originally posted in 2009.

Gwen Sharp is an associate professor of sociology at Nevada State College. You can follow her on Twitter at @gwensharpnv.


At the turn of the 20th century in the U.S. and Europe, it became wildly popular — and that’s an understatement — for ladies to wear feathers and whole taxidermied birds on their hats. One ornithologist reported taking two walks in Manhattan in 1886 and counting 700 hats, 525 of which were topped by feathers or birds. Buzzfeed has a collection of vintage hats featuring birds.

At the time, not many people thought much of killing the birds. Europeans and their American cousins “didn’t believe they could put a dent in an animal’s population.” Birds seemed to be an “abundant, even inexhaustible” natural resource.  So take they did.  Millions of birds all over the world were harvested for hat makers for years. The Fashioning Feathers blog offers this example:

A single 1892 order of feathers by a London dealer… included 6,000 bird of paradise, 40,000 hummingbird and 360,000 various East Indian bird feathers. In 1902 an auction in London sold 1,608 30-ounce packages of heron… plumes. Each ounce of plume required the use of four herons, therefore each package used the plumes of 120 herons, for a grand total of 192,960 herons killed.

Ornithologists started to sit up and take notice. One estimated that 67 types of birds — often including all of their sub-species — were at risk for extinction.  Not only were birds killed for their feathers, they were killed when their feathers were at their most resplendent. This meant killing them during mating season, interrupting their reproductive cycle and often leaving baby birds orphaned.

A campaign to end the practice began. In Europe the Royal Society for the Protection of Birds targeted women. They launched a sexist campaign accusing women of supporting the heartless slaughter of birds. Fashioning Feathers includes this image from a pamphlet titled “Feathered Women” in which the president of the Society calls them a “bird-enemy.”


Virginia Woolf went for the jugular, pointing out that — even though the image shows a woman swooping down to kill a bird — it was largely men who did the dirty work of murder and they were also the ones profiting from the industry.

Ironically, middle class women were at the forefront of the bird preservation movement. They were the rank and file and, thanks in part to their work, in the U.S. the movement led to the formation of the first Audubon societies.  The Massachusetts Audubon Society organized a feather boycott, angering hat makers who called them “extremists” and “sentimentalists.” Politicians worried out loud about the loss of jobs. Missouri Senator James Reed complained:

Why there should be any sympathy or sentiment about a long-legged, long-beaked, long-necked bird that lives in swamps and eats tadpoles.

Ultimately the Massachusetts Audubon Society succeeded in pushing through the first federal-level conservation legislation in the U.S., the Lacey Act of 1900.

Cross-posted at Pacific Standard.

Lisa Wade, PhD is an Associate Professor at Tulane University. She is the author of American Hookup, a book about college sexual culture; a textbook about gender; and a forthcoming introductory text: Terrible Magnificent Sociology. You can follow her on Twitter and Instagram.

This photograph is of a Creole home just off the Mississippi River in Louisiana. It served as the home of two families who ran a sugar cane plantation, starting in 1805.

I visited the home as part of a tour of Laura Plantation, and I found one architectural detail particularly interesting. The tour guide described the two sets of double doors immediately behind the staircase as the “brise” (French for breeze, as the Creoles would have spoken French).

These doors were not for use by people. They were only to let the breeze in. They were essentially air ducts, said the tour guide, and, to Creole folks, using those doors would have been as odd as entering the house through a window. Instead, according to Creole tradition, visitors were to enter through one of the doors on the far right or left of the house. These delivered guests to the men’s and women’s quarters: one room with a bed, a dresser, and a desk.

All this, of course, was very bizarre to the new Americans of British descent who came to Louisiana to do business. The front doors of their homes were in the middle of the house, and they led to an entryway or reception area. To them, it would have been very odd indeed to enter the house at one end, and even stranger to enter someone’s bedroom. Moreover, since Laura Plantation was run by women for many years, this meant doing business in a woman’s boudoir. How scandalous.

This is a great example of the social construction of space. Where is the proper place for a front door? What kind of activities take place in the same room? What rooms/furniture are appropriate for strangers to see? Non-Creoles had to learn how to do business in a new way — perhaps accidentally bungling their entry by knocking at the window — and, ultimately, Laura and the other female presidents of the plantation would have to negotiate their expectations, by separating the bed and office for example. Something as simple as a front door, then, turns out to be a really neat example of social construction and social change.

Cross-posted at Pacific Standard.

Lisa Wade, PhD is an Associate Professor at Tulane University. She is the author of American Hookup, a book about college sexual culture; a textbook about gender; and a forthcoming introductory text: Terrible Magnificent Sociology. You can follow her on Twitter and Instagram.

Yes, but it was a weird thing, you see.

The Nazis were waging war and exterminating Jews.  Meanwhile, Christmas was about celebrating peace and the birth of Jesus, a Jew.

Said the Nazi propagandist Friedrich Rehm in 1937:

We cannot accept that a German Christmas tree has anything to do with a crib in a manger in Bethlehem.  It is inconceivable for us that Christmas and all its deep soulful content is the product of an oriental religion.

But Germans were largely Christian, so getting rid of Christmas was going to be tricky. So Hitler turned it into a celebration of the Third Reich. According to John Brownlee, the Nazis re-wrote Christmas carols to extol the virtues of National Socialism. Mentions of Jesus were replaced with “Savior Führer.” Since they well understood that Santa wasn’t white, they re-cast the character: he was played by the pagan god Odin. And they changed the ornaments, placing swastikas atop Christmas trees.

Here are links to a Hitler ornament and Nazi tree topper, swastika cookie cutter, and swastika ornaments:


The last Nazi Christmas was in 1944.  Post-war Germany quickly “did with Hitler’s Christmas what they did with every other idea the Nazis had come up with: denounced it…”

Photo by Monado flickr creative commons.

Lisa Wade, PhD is an Associate Professor at Tulane University. She is the author of American Hookup, a book about college sexual culture; a textbook about gender; and a forthcoming introductory text: Terrible Magnificent Sociology. You can follow her on Twitter and Instagram.

Some nice news has come out lately that the occasional toy store is taking the words “boy” and “girl” off its aisle signs — mostly in Sweden, I say half-jokingly — but Google Ngrams suggests that we’re nowhere near backing off of separating children’s toys by sex.

Sociologist Philip Cohen graphed the frequency of “toys for boys” and “toys for girls” relative to “toys for children.” This is just language, and it’s just American English, but it’s one indication that the consciousness-raising efforts of organizations like Let Toys Be Toys are still on the margins of mainstream society.

As you can see from the graph, the extent to which children are actively talked about as gendered subjects varies over time.

One explanation for why companies resist androgynous toys and clothes for children — and arguably adults, too — has to do with money. If parents with a boy and a girl could get away with one set of toys, they wouldn’t need to buy a second. And if they could hand down clothes from girls to boys and vice versa, they would buy fewer clothes. The same could be said for borrowing and trading between family members and friends.

It would really cut into the profits of these companies if we believed that all items for children were interchangeable. They work hard to sustain the lie that they are not.

Cross-posted at Pacific Standard.

Lisa Wade, PhD is an Associate Professor at Tulane University. She is the author of American Hookup, a book about college sexual culture; a textbook about gender; and a forthcoming introductory text: Terrible Magnificent Sociology. You can follow her on Twitter and Instagram.