The dining rooms are coming. It’s how I know my neighborhood is becoming aspirationally middle class.

My neighborhood is filled with “shotgun” houses. The design probably originated in West Africa, and it suits a hot, humid climate. The homes consist of several rooms in a row. There are no hallways (and no privacy). High ceilings collect the heat, and the doorways are placed in a row to encourage a breeze to blow all the way through.

Around here, more often than not, they have been built as duplexes: two long, skinny houses that share a middle wall. The kitchen is usually in the back, leading to an addition that houses a small bathroom. Here’s my sketch:

[Sketch: floor plan of a double shotgun duplex]

As the neighborhood has been gentrifying, flippers have set their sights on these double shotguns. Instead of simply refurbishing them, though, they’ve been merging them. Duplexes are becoming larger single-family homes with hallways, which substantially changes the dynamic among residents and makes space for dining rooms. Check out the new dining room on this flip (yikes):

[Photo: dining room in a flipped double shotgun]

At NPR, Mackensie Griffin offered a quick history of dining rooms, arguing that they were unusual in the US before the late 1700s. Families didn’t generally have enough room to set one aside strictly for dining. “Rooms and tables had multiple uses,” Griffin wrote, “and families would eat in shifts, if necessary.”

Thomas Jefferson would be one of the first Americans to have a dining room table. Monticello was built in 1772, dining room included. Wealthy families followed suit and eventually the trend trickled down to the middle classes. Correspondingly, the idea that the whole family should eat dinner together became a middle class value, a hallmark of good parenting, and one that was structurally — that is, architecturally — elusive to the poor and working class.

The shotgun house we find throughout the South is an example of just how elusive. Traditional shotguns were built before closets were common, so all the rooms are technically multi-purpose: they can serve as living rooms, bedrooms, offices, dining rooms, storage, or whatever. In practice, though, medium to large and sometimes extended families live in these homes. Many residents would be lucky to have a dedicated living room; a dining room would be a luxury indeed.

But they’re coming anyway. The rejection of the traditional floor plan in these remodels — for being too small, insufficiently private, and un-dining-roomed — hints at a turn toward a richer sort of resident, one who demands a lifestyle modeled by Jefferson and made sacred by the American middle class.

Cross-posted at Inequality by (Interior) Design.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and Gender, a textbook. You can follow her on Twitter, Facebook, and Instagram.

When you travel, the option to stay in a private home instead of a hotel might seem like a nice idea. Your experience of the city might be a little more authentic, maybe you’ll meet a local, and you can keep your money out of the hands of giant corporations. It’s a tiny way to fight the shrinking of the middle class.

These options, though, may not be a panacea. After discovering that his Brooklyn neighborhood had 1,500 listings on Airbnb, Murray Cox decided to take a closer look. How many residences had been given over to tourists? How small-scale were the profits? Did the money really go to locals?

New Orleans wanted to know the answers to these questions, too. The city has been hit by what nola.com reporter Robert McClendon calls an “Airbnb gold rush.” It turns out the city currently has about 2,600 rentals on Airbnb, plus another 1,000 or so on VRBO.com. This has sparked a heated debate among residents, business owners, and politicians about the future of the practice.

So, Cox jumped in to give us the data and figure out where the money is going.

 


 

Are Airbnb hosts living in the spaces they rent?

Cox found that they generally are not. Only 34% of rentals are for rooms or shared rooms; 66% of listings are for an entire home or apartment. More than two-thirds (69%) are rented year-round. Almost half of all hosts operate at least two rentals.

These numbers suggest that your modal Airbnb host doesn’t live in the home they rent out. Some may actually live in another city altogether. Others are using Airbnb as an investment opportunity, buying homes and turning them into full time rentals.
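
For readers who want to check numbers like these themselves, here’s a minimal sketch of the arithmetic, assuming a scraped listings file with columns along the lines of room_type, host_id, and availability_365 (the column names and the year-round cutoff are my assumptions, not Cox’s documented method):

```python
# Minimal sketch: summarizing a scraped Airbnb listings file with pandas.
# The column names (room_type, host_id, availability_365) are assumptions
# about the data format, not Cox's documented schema.
import pandas as pd

listings = pd.read_csv("listings.csv")

# Share of listings offered as an entire home/apartment rather than a room.
entire_home = (listings["room_type"] == "Entire home/apt").mean()
print(f"Entire home/apt: {entire_home:.0%}")

# Share of listings available most of the year (300+ days is an arbitrary cutoff).
year_round = (listings["availability_365"] >= 300).mean()
print(f"Available year-round: {year_round:.0%}")

# Share of hosts operating at least two listings.
listings_per_host = listings.groupby("host_id").size()
multi_listing_hosts = (listings_per_host >= 2).mean()
print(f"Hosts with 2+ listings: {multi_listing_hosts:.0%}")
```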

What’s the downside?

Locals are complaining about deterioration in the feeling of community in their neighborhoods. It’s difficult to make friends with your neighbors when they turn over twice a week. Tourists are also more likely than locals to come home drunk and disorderly, disturbing the peace and quiet.

And they are pricing people who actually live in New Orleans out of the rental market. Short-term renting offers owners the opportunity to make four or five times the money they could make with a long-term tenant, so signing up for Airbnb is an economic no-brainer. But, as more and more owners do so, there are fewer and fewer places for locals to live, and the tightening supply lets owners jack up long-term rental prices.
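
To see why the arithmetic is so lopsided, here is a back-of-the-envelope comparison with made-up numbers (the rates are illustrative, not figures from Cox’s data):

```python
# Back-of-the-envelope comparison of short-term vs. long-term renting.
# All figures are invented for illustration.
nightly_rate = 180      # dollars per night on Airbnb
booked_nights = 22      # nights booked in a typical month
long_term_rent = 900    # monthly rent from a local tenant

short_term_income = nightly_rate * booked_nights   # 3,960 dollars per month
ratio = short_term_income / long_term_rent
print(f"Short-term income is {ratio:.1f}x the long-term rent")  # ~4.4x
```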

So, when you give your money to an Airbnb host in New Orleans or elsewhere, you might be giving some extra money to a local, but you might also be harming the residential neighborhoods you enjoy and the long-term viability of local life.

Cross-posted at A Nerd’s Guide to New Orleans.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and Gender, a textbook. You can follow her on Twitter, Facebook, and Instagram.

February’s edition of Contexts had a fascinating article by Amin Ghaziani titled “Lesbian Geographies.” Most of us are familiar with the idea of a “gayborhood,” a neighborhood enclave that attracts gay men. It turns out that lesbians have enclaves, too, but they’re not always the same ones.

Here’s the frequency of same-sex female couples (top) and same-sex male couples (bottom) in U.S. counties. Census data tracks same-sex couples but not individuals, so the conclusions here are based on couples.

[Maps: prevalence of same-sex female couples (top) and same-sex male couples (bottom) by U.S. county]

What are the differences between where same-sex female and same-sex male couples live?

First, same-sex female couples are more likely than their male counterparts to live in rural areas. Ghaziani thinks that “cultural cues regarding masculinity and femininity play a part.” As one interviewee told sociologist Emily Kazyak:

If you’re a flaming gay queen, they’re like, “Oh, you’re a freak, I’m scared of you.” But if you’re a really butch woman and you’re working at a factory, I think [living in the midwest is] a little easier.

If being “butch” is normative for people living in rural environments, lesbians who perform masculinity might fit in better than gay men who don’t.

Second, non-heterosexual women are about three times as likely as non-heterosexual men to be raising a child under 18. Whatever a person’s sexual orientation, parents are more likely to be looking for good schools, safe neighborhoods, and apartments larger than a postage stamp.

Finally, there’s evidence that gay men price lesbians out. Gay men are notorious for gentrifying neighborhoods, but data shows that lesbians usually get there first. When non-heterosexual men arrive, they accelerate the gentrification, often making it less possible for non-heterosexual women to afford to stay. Thanks to the gender pay gap, times two, women living with women don’t generally make as much money as men living with men.

Or, they might leave because they don’t want to be around so many men. Ghaziani writes:

Gay men are still men, after all, and they are not exempt from the sexism that saturates our society. In reflecting on her experiences in the gay village of Manchester, England, one lesbian described gay men as “quite intimidating. They’re not very welcoming towards women.”

Cross-posted at Pacific Standard.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and Gender, a textbook. You can follow her on Twitter, Facebook, and Instagram.

Flashback Friday.

A study by Dr. Ruchi Gupta and colleagues mapped rates of asthma among children in Chicago, revealing that they are closely correlated with race and income. The overall U.S. rate of childhood asthma is about 10%, but the condition is very unevenly distributed. Their visuals show that there are huge variations in the rates of childhood asthma among different neighborhoods:

[Map: childhood asthma rates across Chicago neighborhoods]

The researchers looked at how the racial/ethnic composition of neighborhoods is associated with childhood asthma, defining a neighborhood’s racial make-up as over 67% White, Black, or Hispanic. This graph shows the percent of such neighborhoods that fall into three categories of asthma prevalence: low (less than 10% of children have asthma), medium (10-20%), and high (over 20%). While 95% of White neighborhoods have low or medium rates, 56% of Hispanic neighborhoods have medium or high rates. The really striking finding, though, is for Black neighborhoods: 94% have medium or high prevalence. And the racial clustering is even more pronounced if we look only at the high category, where only a tiny proportion (6%) of White neighborhoods fall but nearly half of Black ones do, a nearly mirror image of what we see for the low category:

[Graph: percent of White, Black, and Hispanic neighborhoods with low, medium, and high childhood asthma prevalence]
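
If it helps to see the scheme concretely, here’s a minimal sketch of the classification logic described above (not the researchers’ code; the toy data are invented, and only the 67% composition threshold and the prevalence cutoffs come from the study):

```python
# Minimal sketch of the study's classification scheme. Toy data; only the
# 67% composition threshold and the prevalence cutoffs come from the text.
import pandas as pd

df = pd.DataFrame({
    "neighborhood": ["A", "B", "C"],
    "pct_white": [5, 75, 20],
    "pct_black": [80, 10, 5],
    "pct_hispanic": [10, 5, 70],
    "asthma_rate": [22.0, 8.5, 14.0],  # percent of children with asthma
})

def racial_group(row):
    # A neighborhood counts as White, Black, or Hispanic if that group exceeds 67%.
    for group in ("white", "black", "hispanic"):
        if row[f"pct_{group}"] > 67:
            return group.capitalize()
    return "mixed"

def prevalence_category(rate):
    # Low: under 10% of children have asthma; medium: 10-20%; high: over 20%.
    if rate < 10:
        return "low"
    return "medium" if rate <= 20 else "high"

df["group"] = df.apply(racial_group, axis=1)
df["prevalence"] = df["asthma_rate"].map(prevalence_category)
print(df[["neighborhood", "group", "prevalence"]])
```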

Rates of asthma and racial/ethnic composition (the color of the circles) mapped onto Chicago neighborhoods (background color represents prevalence of asthma):

[Map: asthma rates and racial/ethnic composition of Chicago neighborhoods]

Asthma rates don’t seem to be highly clustered by education, but are highly correlated with overall neighborhood incomes:

[Maps: asthma rates by neighborhood education and income]

It’s hard to know exactly what causes higher rates of asthma in Black and Hispanic neighborhoods than in White ones. It could be differences in access to medical care. The researchers found that asthma rates are also higher in neighborhoods that have high rates of violence. Perhaps stress from living in neighborhoods with a lot of violence is leading to more asthma. The authors of the study suggest that parents might keep their children inside more to protect them from violence, leading to more exposure to second-hand smoke and other indoor pollutants (off-gassing from certain types of paints or construction materials, for instance).

Other studies suggest that poorer neighborhoods have worse outdoor environmental conditions, particularly exposure to industries that release toxic air pollutants or store toxic waste, which increase the risk of asthma. Having a parent with asthma also increases a child’s chances of having it, though that connection is equally unclear: is there a genetic factor, or does it simply indicate that parents and children are likely to grow up in neighborhoods with similar conditions?

Regardless, it’s clear that some communities — often those with the fewest resources to deal with it — are bearing the brunt of whatever conditions cause childhood asthma.

Originally posted in 2010.

Gwen Sharp is an associate professor of sociology at Nevada State College. You can follow her on Twitter at @gwensharpnv.

I recently moved to a neighborhood that people routinely describe as “bad.” It’s my first time living in such a place. I’ve lived in working class neighborhoods, but never poor ones. I’ve been lucky.

This neighborhood — one, to be clear, that I had the privilege to choose to live in — is genuinely dangerous. There have been 42 shootings within one mile of my house in the last year. Often in broad daylight. Once the murderers fled down my street, careening by my front door in an SUV. One week there were six rapes by strangers — in the street and after home invasions. People are robbed, which makes sense to me because people have to eat, but with a level of violence that I find confusing. An 11-year-old was recently arrested for pulling a gun on someone. A man was beaten until he was a quadriplegic. One day 16 people were shot in a park nearby after a parade.

I’ve lived here for a short time and — being white, middle-aged, middle class, and female — I am on the margins of the violence in my streets, and yet I have never been so constantly and excruciatingly aware of my mortality. I feel less of a hold on life itself. It feels so much more fragile, like it could be taken away from me at any time. I am acutely aware that my skin is but paper, my bones brittle, my skull just a shell ripe for bashing. I imagine a bullet shearing through me like I am nothing. That robustness that life used to have, the feeling that it is resilient and that I can count on it to be there for me, that feeling is going away.

So, when I saw the results of a new study showing that only 50% of African American teenagers believe that they will reach 35 years of age, I understood better than I have understood before. Just a tiny — a teeny, teeny, tiny — bit better.

[Chart: percent of teens who expect they may die before age 35, by race/ethnicity and immigrant generation]

I have heard this idea before. A friend who grew up the child of Mexican immigrants in a sketchy urban neighborhood told me that he, as a teenager, didn’t believe he’d make it to 18. I nodded my head and thought “wow,” but I did not understand even a little bit. He would fall between the first and second columns from the right: 54% of 2nd-generation Mexican immigrants expect that they may very well die before 35. I understand him now a tiny — a teeny, teeny tiny — bit better.

Sociologists Tara Warner and Raymond Swisher, the authors of the study, make clear that the consequences of this fatalism are far-reaching. If a child does not believe that they will live to see another day, what motivation can there possibly be for investing in the future, for caring for one’s body, for avoiding harmful habits or dangerous activities? Why study? Why bother to see a doctor? Why not do drugs? Why avoid breaking the law?

Why wouldn’t a person put their future at risk — indeed, their very life — if they do not believe in that future, that life, at all?

If we really want to improve the lives of the most vulnerable people in our country, we cannot allow them to live in neighborhoods where desperation is so high that people turn to violence. Dangerous environments breed fatalism, rationally so. And once our children have given up on their own futures, no teachers’ encouragement, no promise that things will get better if they are good, no “up by your bootstraps” rhetoric will make a difference. They think they’re going to be dead, literally.

We need to boost these families with generous economic help, real opportunities, and investment in neighborhood infrastructure and schools. I think we don’t because the people with the power to do so don’t understand — even a teeny, teeny tiny bit — what it feels like to grow up thinking you’ll never grow up. Until they do, and until we decide that this is a form of cruelty that we cannot tolerate, I am sad to say that I feel pretty fatalistic about these children’s futures, too.

Re-posted at Pacific Standard.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and Gender, a textbook. You can follow her on Twitter, Facebook, and Instagram.

The one-year anniversary of Eric Garner’s death recently passed. Before Garner’s death, I had never heard of Tompkinsville, the Staten Island neighborhood where Garner regularly hung out, near the busy intersection of Victory Boulevard and Bay Street.

This was Garner’s spot.  He played checkers and chess there, bought kids ice cream, earned the reputation of a “peacemaker” among his peers, and, yes, routinely sold untaxed “loose” cigarettes.

This was also the spot where Eric Garner died.

Over the past year, despite the substantial media attention devoted to Garner’s death and the subsequent grand jury inquiry into the responsibility for his death, I didn’t hear or read much about Tompkinsville.

The lack of attention to the neighborhood in which Garner lived and died is strange given that the NYPD’s initial encounter with Garner was ostensibly motivated by the Broken Windows theory of crime causation.  According to the theory, “disorder” in any given neighborhood, if “left unchecked,” will result in ever greater levels of disorder, which, in turn, will ultimately result in higher rates of serious crime.  This is the justification for approaching and penalizing people like Garner who are engaged in non-violent misdemeanors.

Based on my own research in Jersey City, New Jersey (approximately six miles, as the crow flies, from Tompkinsville), I’ve come to the conclusion that Broken Windows is more of a slogan than a theory and, when it comes down to it, morally and empirically wrong. As I wrote at City Limits:

The question that begs addressing is why the police or anyone else should ever aggressively police the likes of people who not only are “down and out,” but are doing nothing to directly harm others? Why create a situation of humiliation, tension, and hostility—the very kind of situation that led to Garner’s death—unless it is truly necessary? If only in one in a thousand instances, the circumstances are such that in such degrading and antagonistic encounters they result in death or serious injury, is that not one time too many? Or if all that results is humiliation and hostility, don’t these costs alone outweigh whatever benefits might conceivably come from cracking down on offenses like selling loosies?

In the three years of ethnographic work I did in Jersey City, I saw plenty of disorder, but this didn’t translate into serious crime:

Much as many people may not like who or what they see in the square, it is undoubtedly a safe space. I know this from experience and it is also borne out in the city’s official crime statistics.

Of course, one case doesn’t prove that high disorder never leads to high crime, but it does suggest that it doesn’t necessarily do so.  In any case, there has been no definitive science supporting the Broken Windows theory.

By the same logic, the case of Tompkinsville further undermines the theory that disorder leads to serious crime. According to official data, rates of serious crime in and around Tompkinsville have long been relatively low, even during the years when the NYPD was not employing the Broken Windows strategy.  This suggests that, however “disorderly” Tompkinsville may have been at times, the recent implementation of Broken Windows was, and remains, a solution in search of a problem.

Upon realizing that Tompkinsville might be another example of a high-disorder/low-crime space, I decided it was time to visit the neighborhood.  I made the trip twice, and I did not encounter what appeared to me to be a dangerous neighborhood, or one on the verge of becoming dangerous anytime soon.

Here is the part of town where Garner lived out his days:

[Photo: the block where Garner spent his days]

While this part of town didn’t strike me as “bad,” it is a far cry from the Tompkinsville that sits just a short walk away, separated only by a concrete path. Much of Tompkinsville is actually rather well-to-do:

[Photo: a well-to-do stretch of Tompkinsville]

The socioeconomic disparity on display in Tompkinsville illustrates how policing and punishment are but one part of a much larger, far more complex, and deeply-rooted equation of inequality in America. Perhaps if Garner’s part of the neighborhood had been farther from the kind of real estate that developers and wealthy residents value, a little disorder would have been a little more tolerable.

I’m more convinced than ever that Garner’s death was a gross injustice and the consequence, not just of the actions of a single individual, but of a deeply misguided policy and theory. As Jesse Myerson and Mychal Denzel Smith poignantly argued in The Nation (in the wake of a grand jury’s decision not to indict the officer whose chokehold certainly, at the very least, served as the but-for cause of Garner’s death), neither black lives, nor many other lives besides, will likely matter much unless, in addition to urgently-needed criminal justice reforms, something is done to seriously address the roots of poverty and inequality in America.

Mike Rowan is an assistant professor in the Sociology Department at John Jay College. His book in progress examines how a population of chronically homeless, jobless men and women were policed in a neighborhood of Jersey City. Dr. Rowan is also a member of the Executive Board for the Hudson County Alliance to End Homelessness, the director of the CUNY Service Corps’ Homeless Outreach and Advocacy Project, and a contributor to the Punishment to Public Health Initiative at John Jay.

A child who was 7 years old when Hurricane Katrina hit New Orleans will be 17 today. When the storm hit, he would have just started 2nd grade. Today, that 17-year-old is more likely than his same-age peers in all but two other cities to be both unemployed and not in school. He is part of the Katrina generation.

(September 3, 2005, New Orleans) — Evacuees and patients arrive at the New Orleans airport, where FEMA’s D-MATs have set up operations.
Photo: Michael Rieger/FEMA

When the city was evacuated, many families suffered a period of instability. A report published nine months after the storm found that families had moved an average of 3.5 times in that period. One in five school-age children were either not enrolled in school or only partially attending (missing more than 10 days a month).

Five years later, another study found that 40% of children still did not have stable housing and another 20% remained emotionally distressed. Over a third (34%) had been held back in school, compared to a 19% baseline in the South.


With so much trauma and dislocation, it is easy to imagine that even young people in school would have trouble learning; for those who suffered the greatest instability, it’s likely that their education was fully on pause.

At The Atlantic, Katy Reckdahl profiles such a family. They evacuated to Houston, where they suffered abuse from locals who resented their presence. At school, boys from New Orleans were picked on and got into fights, so the mother of three kept her 11- and 13-year-old sons at home, fearful for their safety. Indeed, another boy from New Orleans whom they knew was killed while in Houston. The boys missed an entire year of school.

“An untold number of kids,” writes Reckdahl, “probably numbering in the tens of thousands—missed weeks, months, even years of school after Katrina.” She quotes an educator who specializes in teaching students who have fallen behind, who estimates that “90-percent-plus” of his students “didn’t learn for a year.”

When the brothers profiled by Reckdahl returned to New Orleans one year later, they were placed in the correct grade for their age, despite having missed a year of school. The system was in chaos. Teachers were inexperienced thanks to charter schools replacing the public school system. One of the boys struggled to make sense of it all and eventually dropped out and got his GED instead.

No doubt the high number of unemployed and unenrolled young people in New Orleans and other Gulf Coast cities devastated by Katrina is due, in part, to the displacement, trauma, and chaos of disaster. Optimistically, and resisting the “at risk” discourse, the Cowen Institute calls them “opportunity youth.” If there is the political will, we have the opportunity to help empower them to become healthy and productive members of our communities.

For more, pre-order sociologists Alice Fothergill and Lori Peek’s forthcoming book, Children of Katrina, watch an interview about their research, or read their preliminary findings here.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and Gender, a textbook. You can follow her on Twitter, Facebook, and Instagram.

Flashback Friday.

The AP has an interesting website about wildfires from 2002 to 2006. Each year, most wildfires occurred west of the Continental Divide.

Many of these areas are forested. Others are desert or shortgrass prairie.

There are a lot of reasons for wildfires: climate and ecology, periodic droughts, and people. The U.S. Fish and Wildlife Service reports that in the Havasu National Wildlife Refuge, the “vast majority” of wildfires are due to human activity. Many scientists expect climate change to increase wildfires.

Many wildfires affect land managed by the Bureau of Land Management. For most of the 1900s, the BLM had a policy of total fire suppression to protect valuable timber and private property.

Occasional burns were part of forest ecology. Fires came through, burning forest litter relatively quickly, then moving on or dying out. Healthy taller trees were generally unaffected; their branches were often out of the reach of flames and bark provided protection. Usually the fire moved on before trees had ignited. And some types of seeds required exposure to a fire to sprout.

Complete fire suppression allowed leaves, pine needles, brush, fallen branches, etc., to build up. Wildfires then became more intense and destructive: they were hotter, flames reached higher, and thicker layers of forest litter meant the fire lingered longer.

As a result, an uncontrolled wildfire was often more destructive. Trees were more likely to burn or to smolder and reignite a fire several days later. Hotter fires with higher flames are more dangerous to fight, and can also more easily jump naturally-occurring or artificial firebreaks. They may burn a larger area than they would otherwise, and thus do more of the damage that total fire suppression policies were supposed to prevent.

In the last few decades the BLM has recognized the importance of occasional fires in forest ecology. Fires are no longer seen as inherently bad. In some areas “controlled burns” are set to burn up some of the dry underbrush and mimic the effects of naturally-occurring fires.

But it’s not easy to undo decades of fire suppression. A controlled burn sometimes turns out to be hard to control, especially with such a buildup of forest litter. Property owners often oppose controlled burns because they fear the possibility of one getting out of hand. So the policy of fire suppression has in many ways backed forest managers into a corner: it led to changes in forests that make it difficult to change course now, even though doing so might reduce the destructive effects of wildfires when they do occur.

Given this, I’m always interested when wildfires are described as “natural disasters.” What makes something a natural disaster? The term implies a destructive situation that is not human-caused but rather emerges from “the environment.” As the case of wildfires shows, the situation is often more complex than this, because what appear to be “natural” processes are often affected by humans… and because we are, of course, part of the environment, despite the tendency to think of human societies and “nature” as separate entities.

Originally posted in 2010.

Gwen Sharp is an associate professor of sociology at Nevada State College. You can follow her on Twitter at @gwensharpnv.