Strawberry shortcake, chocolate covered strawberries, strawberry daiquiris, strawberry ice cream, and strawberries in your cereal. Just delicious combinations of strawberries and things? Of course not.

According to an investigative report at The Guardian, in the first half of the 1900s, Americans didn’t eat nearly as many strawberries as they do now. There weren’t actually as many strawberries to eat. They’re a fragile crop, more prone than others to insects and unpredictable weather.

In the mid-1950s, though, scientists at the University of California began experimenting with a poison called chloropicrin. Originally deployed as a toxic gas in World War I, it had since proven quite toxic to fungus, weeds, parasites, bacteria, and insects. By the 1960s, they were soaking the soil underneath strawberries with the stuff. Nearly every strawberry field in California — a state that produces 80% of our strawberries — was being treated with chloropicrin or a related chemical, methyl bromide.

In the meantime, a major grower had collaborated with the University, creating hardier strawberry varieties, including some that could be grown throughout the year. These developments doubled the strawberry crop. It was more strawberries than California — and the country — had ever seen, and the supply now outpaced demand.

Enter: Strawberry Shortcake.

Strawberry Shortcake was invented by American Greetings, the greeting card company, in cahoots with the strawberry growers association. The deal was just one part of a massive marketing campaign to raise the profile of the strawberry.

The head of the association at the time, Dave Riggs, aggressively marketed tie-ins with other products, too: Bisquick, Jello, Corn Flakes, and Cheerios. Cool Whip still has a strawberry on its container and its website is absolutely dotted with the fruit.

Riggs went to the most popular women’s magazines, too — Ladies’ Home Journal, Redbook, and Good Housekeeping — and provided them with recipe ideas. It was an all-out strawberry assault on America.

It worked. “Today,” according to The Guardian, “Americans eat four times as many fresh strawberries as they did in the 1970s.” We think it’s because we like them, but is it?

If the decision as to whether the tomato is a fruit or a vegetable were somehow to make it to the highest court of the land — if such a strange thing were to happen — certainly the botanist’s opinion would weigh heaviest. Right?

Nope.

In fact, this decision did make it all the way to the Supreme Court. It happened in 1893. The case was brought by a tomato-importing family with the last name Nix. At the time, the law required that taxes be collected on imported vegetables, but not on fruit.

The lawyers for the Nix family argued that the tomato is a fruit and, therefore, exempt from taxation. They were, of course, correct. Botanists define fruit according to whether it plays a reproductive role. So, any plant product with one or more seeds is a fruit, whereas vegetables don’t have seeds. Fruits are ovaries, for lack of a better term. All other plant products — stems, roots, leaves, and some seeds — are vegetables.

But the Supreme Court said, essentially, “We don’t care” and gave its gavel a good pound. Here’s some of the text of the unanimous opinion:

Botanically speaking, tomatoes are the fruit of a vine… But in the common language of the people, whether sellers or consumers of provisions, all these are vegetables which are grown in kitchen gardens, and which, whether eaten cooked or raw, are… usually served at dinner in, with, or after the soup, fish, or meats which constitute the principal part of the repast, and not, like fruits generally, as dessert.

The judges were referring to the common understanding, which has more to do with how we use the plant products than how plants use them. Your typical chef roughly divides plant products according to whether they’re sweet or savory. Fruits are sweet. Vegetables are savory and used for main courses and sides. It’s all about whether you eat them for dinner or dessert. And that’s what the Supreme Court upheld.

Culinary vs. botanical categorization (source):

Since the culinary scheme dominates our colloquial understanding, we misclassify lots of other things, too. Zucchini, bell peppers, eggplants, string beans, cucumbers, avocados, and okra — all fruit. Rhubarb is a vegetable. No seeds. Pineapples are fruits. “Ah ha!” you say, “I’ve never noticed a pineapple having seeds!” That’s because commercial growers sell us seedless pineapples. Who knew? Berries are fruit, but strawberries, blackberries, and raspberries are not actually berries. Isn’t this fun?

Bruno Latour and Steve Woolgar, in Laboratory Life: The Social Construction of Scientific Facts, wrote:

If reality means anything, it is that which “resists” the pressure of a force. … That which cannot be changed at will is what counts as real.

We often think of cultural facts as somehow less real than biological ones. For the Nix family, though, biology mattered naught. They still had to pay the damn tax on their tomatoes. Culture is real, folks. Social construction is not just something we do to reality; for all intents and purposes, it is reality.

Cross-posted at Pacific Standard.

This week the New York Times published an interactive graphic that illustrates the likelihood of pregnancy despite contraceptive use. Risk is divvied up by method, for perfect and typical use, and added up over ten years. The results are a little terrifying:

Somewhere around half of all pregnancies are unintended. This is why. It’s hard enough to use contraceptives perfectly, but even when we do, the risk of failure is very real.

Male condoms are the safer-sex favorite. But even when they’re used perfectly, almost one in five women will get pregnant over a ten-year period; with typical use, more than four out of five will. Withdrawal, the method against which male condoms are usually recommended as the responsible alternative, is only slightly less effective as typically used.

The favorite of Americans — the Pill, along with some other hormonal methods — is more effective than the condom, but not nearly as much as we think it is. Under ideal conditions, only three in 100 women will get pregnant over ten years; in reality, more than three in five — 61 in 100 — will end up pregnant.
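
These ten-year numbers are just the familiar one-year failure rates compounded. If a method fails for a fraction p of women in a typical year, the chance of at least one unintended pregnancy over ten years of continuous use is 1 - (1 - p)^10, assuming the risk is the same and independent each year — a simplification of the modeling behind the interactive, not its exact method. Here is a minimal sketch in Python, using commonly cited annual failure rates purely as illustrative inputs:

```python
def ten_year_risk(annual_failure_rate, years=10):
    """Chance of at least one unintended pregnancy over `years`
    of continuous use, assuming the same independent risk each year."""
    return 1 - (1 - annual_failure_rate) ** years

# Commonly cited annual failure rates (illustrative assumptions)
methods = {
    "pill, perfect use":   0.003,  # ~0.3% per year
    "pill, typical use":   0.09,   # ~9% per year
    "condom, perfect use": 0.02,   # ~2% per year
    "condom, typical use": 0.18,   # ~18% per year
}

for name, rate in methods.items():
    print(f"{name}: {ten_year_risk(rate):.0%} over ten years")

# Output:
# pill, perfect use: 3% over ten years
# pill, typical use: 61% over ten years
# condom, perfect use: 18% over ten years
# condom, typical use: 86% over ten years
```

Small annual risks add up: even a 2% yearly failure rate compounds to roughly an 18% chance of pregnancy over a decade.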

Only the most human-error-resistant methods — the IUD, hormonal implants, and sterilization — come close to 100% effectiveness. These are permanent or semi-permanent and not real options for a large proportion of sexually active Americans during at least some parts of their lives.

Discussions of the right to an abortion, and of how easily one can be obtained, need to happen with this information at the forefront. Unintended pregnancies happen all the time, to everyone.

Cross-posted at Pacific Standard.

The original compute-ers, people who operated computing machines, were mostly women. At that point in history, most typists were women, and their skills seemed to transfer from that job to the next. As late as the second half of the 1960s, women were seen as naturals for working with computers. As Grace Hopper explained in a 1967 Cosmopolitan article:

It’s just like planning dinner. You have to plan ahead and schedule everything so it’s ready when you need it. Programming requires patience and the ability to handle detail. Women are “naturals” at computer programming.

But then, this happened:

Computer programming was masculinized.

The folks at NPR, who made the chart, interviewed information studies professor Jane Margolis. She interviewed hundreds of computer science majors in the 1990s, right after women started dropping out of the field. She found that having a personal computer as a kid was a strong predictor of choosing the major, and that parents were much more likely to buy a PC for their sons than they were for their daughters.

This may have been related to the advertising at the time. From NPR:

These early personal computers weren’t much more than toys. You could play pong or simple shooting games, maybe do some word processing. And these toys were marketed almost entirely to men and boys. This idea that computers are for boys became a narrative.

By the 1990s, students in introductory computer science classes were expected to have some experience with computers. The professors assumed so, inadvertently punishing students who hadn’t been so lucky, disproportionately women.

So it sounds like that’s at least part of the story.

Flashback Friday.

A New York Times article broke the story that a preference for boy children is leading to an unlikely preponderance of boy babies among Chinese-Americans and, to a lesser but still notable extent, Korean- and Indian-Americans.

Explaining the trend, the article’s author, Roberts, writes:

In those families, if the first child was a girl, it was more likely that a second child would be a boy, according to recent studies of census data. If the first two children were girls, it was even more likely that a third child would be male.

Demographers say the statistical deviation among Asian-American families is significant, and they believe it reflects not only a preference for male children, but a growing tendency for these families to embrace sex-selection techniques, like in vitro fertilization and sperm sorting, or abortion.

The article explains the preference for boy children as cultural, as if Chinese, Indian, and Korean cultures, alone, expressed a desire to have at least one boy child. Since white and black American births do not show a similarly skewed proportion of boy children, the implication is that a preference for boys is not a cultural trait of the U.S.

Actually, it is.

In 1997 a Gallup poll found that 35% of people preferred a boy and 23% preferred a girl (the remainder had no preference). In 2007 another Gallup poll found that 37% of people preferred a boy, while 28% preferred a girl.

I bring up this data not to trivialize the preference for boys that we see in the U.S. and around the world, but to call into question the easy assumption that the data presented by the New York Times represents something uniquely “Asian.”

Instead of emphasizing the difference between “them” and “us,” it might be interesting to think about why, given our similarities, we see such a striking disproportion only in some groups.

Some of the explanation for this might be cultural (e.g., it might be more socially acceptable to take measures to ensure a boy child among some groups), but some might also be institutional. Only economically privileged groups have the money to take advantage of sex-selection technology (or even abortion, as that can be costly, too). Sex selection, the article explains, costs upwards of $15,000. Perhaps not coincidentally, Chinese, Korean, and Indian Asians are among the more economically privileged minority groups in the U.S.

Instead of demonizing Asian people, and without suggesting that all groups have the same level of preference for boys, I propose a more interesting conversation: What enables some groups to act on a preference for boys, and not others?

Originally posted in 2009.

To begin, the Ouija board wasn’t just a toy. It debuted in 1890 as the next in a long line of devices invented to allow people to communicate with spirits. These weren’t intended to be pretend; they were deadly serious.

According to Lisa Hix, who wrote a lengthy history of such devices for Collector’s Weekly, the mid-1800s was the beginning of the spiritualist movement. People had long believed in spirits, but two sisters by the name of Fox made the claim that they could communicate with them. This was new. There were no longer just spirits; now there were spiritualists.

Amateur historian Brandon Hodge, interviewed by Hix, explains:

Mediums sprang up overnight as word spread. Suddenly, there were mediums everywhere.

At first, spiritualists would communicate with spirits by asking questions and receiving, in return, a series of knocks or raps. They called it “spirit rapping.” There was a rap for yes and a rap for no, and soon they started calling out the alphabet, allowing them to spell out words.

Eventually they sought out more sophisticated ways to have conversations. Enter: the planchette. This was a small, wooden, egg-shaped device with two wheels and a hole in which to place a pencil. Participants would all place their fingers on the planchette and the spirit would presumably guide their movements, writing text.

Here is an example of a planchette from 1900 and some pre-1875 spirit scribbling, both courtesy of Hodge’s fantastic website, Mysterious Planchette:

These were religious tools used with serious intentions. Entrepreneurs, however, saw things differently. They began marketing them as games and they were a huge hit.

Mediums resented this, so they kept devising new and more legitimate-seeming ways of communicating. In addition, the planchette scribbles were often difficult to read. The idea of using an actual alphabet emerged, and various devices were invented to allow spirits to point directly to letters and other answers.

A Telepathic Spirit Communicator and a “spiritoscope” from 1855 (source):

Eventually, the concept of the planchette merged with the alphabet board and what we now know as the Ouija board was invented.

An Espirito talking board (1892) and the Mitch Manitou talking board (1920s) (source):

Here is an antique Ouija planchette:

In the 1920s, mediums came under attack from people determined to prove that they were liars. Houdini is the most famous of the anti-spiritualists and Hodge argues that he “ravaged spiritualism.”

He set up little “colleges” in cities like Chicago for cops to attend to learn how to bust up séances, and there was a concerted national effort to stamp out fraud.

Meanwhile…

The Spiritualist believers never successfully cohesively banded together, because they were torn asunder by their own internal arguments about spirit materialization.

Most mediums ended up humiliated and penniless.

“But the Ouija,” Hodge says, “just came along at the right time.” It was a hit with laypeople, surviving the attacks against spiritualists. And, so, the Ouija board is one of the only widely-recognized artifacts of this time.

Cross-posted at Pacific Standard.

Lisa Wade is a professor of sociology at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. You can follow her on Twitter and Facebook.

As we live our lives increasingly in the digital realm, the sights, sounds, and moving images of the internet impact our conception of the world around us. Take, for example, the many online mapping services. What began as simple tools for finding driving directions has evolved into advanced applications that map multiple layers of data.

But who decides what we see? What features are considered sufficiently important to be included? And what information about our country do those design decisions make invisible?

Here’s the map of South Dakota provided by Google Maps. Notice that the many Indian reservations are unmarked and invisible. If you zoom in, the reservations eventually appear. At the state level, though, they’re invisible.

In contrast, Indian reservations do show up on Bing:

Among the other map services, Yahoo! Maps and MapQuest do label Indian reservations while OpenStreetMap does not.

While these mapping tools certainly empower the individual, it is the designers and the developers behind them who hold the real power.  I can only speculate as to why Google Maps does not include reservations at the state level, but their decision impacts the way we understand (or don’t understand) the geographic and social reality of this country.

Stephen Bridenstine is pursuing a history master’s degree at the University of British Columbia, where he studies popular attitudes and public memory concerning Indigenous peoples, the historic fur trade, and the natural environment. He blogs about non-Native America’s weird obsession with everything “Indian” at his blog Drawing on Indians, where this post originally appeared.

This post was updated to reflect 2014; it originally appeared on SocImages in 2011.