Last month, the CDC released a report that I’m going to pick on a little bit, though I’ve seen numerous researchers make similar faux pas in surveys I’ve taken and studies I’ve read. The report, Sexual Behavior, Sexual Attraction, and Sexual Identity in the United States, uses data from the 2006-2008 National Survey of Family Growth to summarize findings on these topics. I’m just going to harp on a tiny bit of the survey design, because I think it’s illustrative of a broader point about how survey design can reflect and even shape attitudes about what is and isn’t a sex act, and what is and isn’t a sexual relationship.
Now, to be fair, the NSFG is primarily about addressing things like pregnancy, marriage, and STIs. The portion of the survey that focuses on sexual acts includes same-sex partners but it’s still geared towards things like STI risk, and thus focuses on sex acts that have a high STI risk like penetration and oral sex. But there’s still a big problem in the way it describes the possible sex acts for males and females.
Note: The portion below the cut may not be safe for work due to frank descriptions of sexual acts.
Women and men who take the survey both get the same set of three questions about sexual history with members of the opposite sex–one about penis-in-vagina (PIV) intercourse and two about oral sex, giving to or receiving from an opposite-sex partner. However, the section on same-sex behavior is different. The differences begin with the instructions:
The next questions ask about sexual experiences you may had with another female.
The next questions ask about sexual experiences you may have had with another male. Have you ever done any of the following with another male?
Ignoring the typo, the only difference is in the second question, which females don’t get. It seems to me that the “ever” might encourage men taking the survey to think harder about whether they’ve done anything that might qualify, while women, who don’t receive that directive, could conceivably write off behavior they’re not sure about, behavior they might have included if they’d been given the second sentence.
More troubling, to me, are the questions themselves. Women are first asked two questions about oral sex, giving or receiving. Then, if they answer “no” to either of those questions, they get a third question, “have you ever had any sexual experience of any kind with another female?”
The justification for this method is that when the last question was used alone, it was too vague. The researchers presumably still use it in order to catch sexual activity that doesn’t fall under oral sex, but to me this seems sloppy. If they’re only looking for activities that they consider risky, why not define and ask the specific questions? Using a vague definition might lead to inclusion of activities (clothes-on grinding or fingering, for example) that are very low-risk, skewing the numbers.
Men don’t get a question like this, but instead four specific questions about specific activities: oral, giving or receiving, and anal, giving or receiving. This is going to be clearer when it comes to risk; however, my problem with the way the questions are worded is less about the results and more about how they both reflect our perceptions of “normal” activity and communicate what is “normal” to survey-takers.
There are plenty of sex acts not covered by the survey, and some of those do carry an STI risk. The researchers could do two things to avoid alienating those whose sexual practices are different. First, they could state that the following questions are being used to find out about STI risks, and therefore the acts described do not include very low-risk behaviors. In that scenario, a more exhaustive list should be given (for example, rimming or mouth-to-anus contact, vaginal or anal penetration with toys, frottage without clothes on). Second, they could list a wider variety of acts in order to include everyone, presenting all the options as normal, and simply not use the answers about low-risk activities in the work on STIs. Either way, it would make sense to evaluate the results and use them in the context of STI prevention based on sex act, not on the broad category of “same-sex” activity that includes acts carrying varying degrees of associated risk.
Without more explanation, those taking the survey are likely to feel like certain acts are “standard” and others are “deviant.” Women who have not had oral sex with a female partner are likely to be confused by the vague question about same-sex activity and any responses given there are pretty much useless, since everyone’s definition would differ. It also contributes to the idea that same-sex activity between women is pretty much a mystery, that we have no idea how women can have safe sex, and we kind of want to avoid the topic if at all possible.
This is far from the only survey I’ve seen that does this. I think, in most cases, researchers have good intentions. They want to get accurate results, focused on whatever they’re studying, be it STIs or pregnancy or relationship behaviors or anything else. But it’s important to think about the impact the survey design may have on those taking it, and also about what you might miss when a survey is designed around cultural norms about sex acts that might not mirror reality.
We all receive daily cultural messages to remind us that a particular series of acts is normal: possibly manual contact (entirely optional), followed by oral sex (more important for men than women, perhaps), followed by PIV sex. Same-sex couples now hear similar messages: men are expected to have oral or anal sex, and for women oral sex should be the be-all and end-all, possibly with a side of fisting (thanks, Chasing Amy!). Other forms of sexual contact, such as manual contact for its own sake or using sex toys, may not come up. The erotic potential of massage, kissing, or “non-sexual” kinky activities is pretty much ignored in most mainstream conversations. This idea of a linear progression, or of sex as drawing from a particular list, is an insult to our imaginations.
A good survey should be specific about the acts it’s describing and honest about its limitations. There’s always the “other” box, where participants can describe things not on the list, but the vague yes-or-no question about sexual activity doesn’t communicate much to the researcher and makes the participant feel like her sexual activity is mystical or unusual without giving her an opportunity to actually say anything about it. Researchers shouldn’t make assumptions: in the CDC survey, for example, if someone says they are married to or cohabiting with an opposite-sex partner, a history of PIV sex is assumed.
The whole approach smacks to me of ableism, gender bias, and a kind of puritanism. Let’s try not to assume what anyone does in their sexual relationships. When writing a survey to find out about sexual behavior, do the terribly radical thing and… ask about sexual behavior.
Comments
Elvie — May 2, 2011
Great post!
So many surveys contain such flawed assumptions and such badly designed questions that you have to wonder how any of this "data" can be trusted by anyone.
Another example: the last survey I participated in asked how often I used different types of contraception. The question appeared to be an attempt at an exhaustive list, yet it didn't include the method I use. My answers made it appear that I don't use contraception at all, and in fact could suggest I've rarely taken precautions against contracting STIs, neither of which would be true!
gwp_admin — May 4, 2011
Thanks, Elvie!
That's a good example--I think a lot of surveys have that problem, where if you answer in a strictly honest way you give a completely different picture of the truth. I had a similar experience recently where I was asked about safe sex, and safe sex was defined as both partners getting STI tests plus using protection. By that definition, I have never had safe sex: my sexual behavior is so low-risk, and I've always negotiated clearly and used protection when necessary, that I haven't had a test. I understand why limiting responses can be necessary for purposes of analysis, but again, that "other" box would give an opportunity to get closer to the full picture.