In a recent article in the New York Times, economics professor Nancy Folbre helps us understand why men have not only experienced greater job loss during the current recession but have also continued to suffer during the economic recovery.

As Folbre explains, men’s higher job loss is not without historical precedent.

The Great Recession has sometimes been dubbed the Mancession because it drove unemployment among men higher than unemployment among women. Because men tend to work in more cyclical industries than women, they have historically lost more jobs on the downturn and gained more on the upturn.

However, the current upturn has not followed this trend, owing to the decline in the jobs that men usually fill.

For example, men constitute more than 71 percent of the work force in manufacturing but less than 25 percent of the workers in health and education services… These two employment categories were similar in size in 2000, but manufacturing employment has failed to rise, even in non-recession years. Employment in health and education, in contrast, has risen slowly, but steadily.

The question then becomes: why aren’t more men moving into jobs traditionally occupied by women? Folbre turns to Stanford sociologists Maria Charles and David B. Grusky’s book Occupational Ghettos, which illustrates how “gender segregation is a remarkably persistent and complex phenomenon shaped by deep cultural beliefs.” Or to put it more simply, men don’t want the jobs that are thought of as being ‘for women’.

With nursing and home health projected to grow the most rapidly between now and 2018, and manufacturing jobs continuing to be outsourced overseas, it appears it might be time for men to trade in their work boots for some tasteful loafers.

For many, Istanbul stands as a symbol of success. Its growing status as a ‘global city’ and a European Capital of Culture has attracted tourists, foreign investment, and massive development projects. Luis Gallo’s recent article in the Hürriyet Daily News provides a reminder that with development and prosperity there are rarely winners without losers.

[I]n the shadow of those skyscrapers, there is another Istanbul, a little-seen realm where the urban poor are coming face-to-face with the bulldozers clearing ground for the sparkling new city. The neighborhood of Sulukule, perhaps the world’s oldest Roma community, is already flattened, with just a few holdouts living amid the rubble.

This raises difficult questions as development continues.

With massive amounts of money, and the city’s international reputation, at stake, fierce debate is raging over the government’s “urban transformation” programs: They may be beautifying and enriching the city, but at what social cost?

Critics are quick to point to the increasing inequality that ‘success’ is bringing. Ozan Karaman, an urban-geography scholar from the University of Minnesota, explains:

“Lack of representation will result in further marginalization of the urban poor and perhaps the emergence of a new type of poverty, in which the poor have no hope whatsoever for upward mobility and are in a state of permanent destitution.”

Tansel Korkmaz and Eda Ünlü-Yücesoy, professors of architectural design at Istanbul Bilgi University, argue that the government’s disregard for the plight of the poor is not simply an unexpected result of development. Instead, they claim that the government’s goal is to hide the urban poor in 21st-century Istanbul.

“The following statement by Prime Minister Recep Erdoğan about the neighborhoods of the urban poor summarizes the essence of the official approach: ‘cancerous district[s] embedded within the city.’ Planning operations in Tarlabaşı, Fener-Balat and Sulukule are [intended] to move the urban poor to the outskirts of the city and to make available their inner-city locations for big construction companies for their fancy projects,” Korkmaz said.

Recently, in the rapidly changing Tophane neighborhood in Istanbul’s Beyoğlu district, dozens of people attacked a crowd attending an opening of art galleries. The violence is a sign that frustration over being displaced in the name of gentrification has finally boiled over, and it is likely not a one-time occurrence.

Experts say clashes between newcomers and longtime residents could become more frequent if people feel they have no say in the transformation of their neighborhoods and believe they must resort to violence in order to make their voices heard.

Even with the increasing tension, Ozan Karaman manages to hold onto hope while remaining critical of the current development approach.

“Urban redevelopment projects should be executed in collaboration with citizens and residents, not despite them. There is no need to re-invent the wheel; there are plenty of models of community-based development that have been successful since the 1970s.”

The hipster is a difficult group to define, for those who seem to be the most exemplary examples of the term are also the most offended by the label.

A year ago Mark Greif, a professor in Literary Studies at the New School, and his colleagues began their investigation of the ‘hipster’. In a recent essay in the NY Times, Greif reflects upon some of their findings and explains how Pierre Bourdieu’s masterwork, Distinction: A Social Critique of the Judgement of Taste, provides a basis for understanding the meaning of ‘hipster’.

In conducting the study, Greif was immediately surprised by the intense emotions and self-doubt that the seemingly superficial topic generated.

The responses were more impassioned than those we’d had in our discussions on health care, young conservatives and feminism. And perfectly blameless individuals began flagellating themselves: “Am I a hipster?”

Greif turns to Bourdieu – a French sociologist who died in 2002 at the age of 71, after achieving a level of fame and public interest rarely obtained by academics – to help us understand why so much seems to be at stake. While Bourdieu’s biographical details provide little connection to people wearing skinny black jeans and riding fixed-gear bikes, his account of how what people consume becomes a means of separating themselves from other groups provides the framework for studying the rise of the hipster.

Taste is not stable and peaceful, but a means of strategy and competition. Those superior in wealth use it to pretend they are superior in spirit. Groups closer in social class who yet draw their status from different sources use taste and its attainments to disdain one another and get a leg up. These conflicts for social dominance through culture are exactly what drive the dynamics within communities whose members are regarded as hipsters.

From this perspective, coffee shops, bars, and roller derby tracks become the sites of social struggle.

Once you take the Bourdieuian view, you can see how hipster neighborhoods are crossroads where young people from different origins, all crammed together, jockey for social gain.

The main strategy in this competition is to establish yourself as being more ‘authentic’ than everyone else.

Proving that someone is trying desperately to boost himself instantly undoes him as an opponent. He’s a fake, while you are a natural aristocrat of taste. That’s why “He’s not for real, he’s just a hipster” is a potent insult among all the people identifiable as hipsters themselves.

This does not only apply to people with ironic mustaches.

Many of us try to justify our privileges by pretending that our superb tastes and intellect prove we deserve them, reflecting our inner superiority. Those below us economically, the reasoning goes, don’t appreciate what we do; similarly, they couldn’t fill our jobs, handle our wealth or survive our difficulties. Of course this is a terrible lie.

Recent medical reports on the long-term effects of head injuries have raised concern about the medical risks of playing football. While the N.F.L. has increasingly shown concern for the safety of its players, a solution has not been found. The safety issues came to a head this past Sunday, when a number of players were injured as a result of highlight-reel hits.

Michael Sokolove’s article in the New York Times examines the moral issues surrounding consuming a sport where players place themselves at such a high risk. As medical studies continue to build the link between head injuries in football and depression, suicide, and early death, Sokolove asks the timely question:

Is it morally defensible to watch a sport whose level of violence is demonstrably destructive? (The game, after all, must conform to consumer taste.) And where do we draw the line between sport and grotesque spectacle?

To provide insight into the question, Sokolove turns to a series of cultural theorists and philosophers who have taken an interest in the role of violent pursuits in society.

The writer Joyce Carol Oates has written admiringly of boxing, celebrating, among other aspects, the “incalculable and often self-destructive courage” of those who make their living in the ring. I wondered if she thought America’s football fans should have misgivings about sanctioning a game that, like boxing, leaves some of its participants neurologically impaired.

“There is invariably a good deal of hypocrisy in these judgments,” Ms. Oates responded by e-mail. “Supporting a war or even enabling warfare through passivity is clearly much more reprehensible than watching a football game or other dangerous sports like speed-car racing — but it may be that neither is an unambiguously ‘moral’ action of which one might be proud.”

Other ‘experts’ argue that such dangerous activities may serve a communal purpose.

“We learn from dangerous activities,” said W. David Solomon, a philosophy professor at Notre Dame and director of its Center for Ethics and Culture. “In life, there are clearly focused goals, with real threats. The best games mirror that. We don’t need to feel bad about not turning away from a game in which serious injuries occur. There are worse things about me than that I enjoy a game that has violence in it. I don’t celebrate injuries or hope for them to happen. That would be a different issue. That’s moral perversion.”

Fellow philosopher Sean D. Kelly, the chairman of Harvard’s philosophy department, shares Solomon’s emphasis on the potential positive value of sports:

“You can experience a kind of spontaneous joy in watching someone perform an extraordinary athletic feat,” he said when we talked last week. “It’s life-affirming. It can expand our sense of what individuals are capable of.”

He believes that it is fine to watch football as long as the gravest injuries are a “side effect” of the game, rather than essential to whatever is good about the game and worth watching.

Sokolove concludes with the difficult question that football fans, as well as organizers and sponsors of the sport at all levels, must now ask themselves:

But what if that’s not the case? What if the brain injuries are so endemic — so resistant to changes in the rules and improvements in equipment — that the more we learn the more menacing the sport will seem?

Patricia Cohen’s recent article in the NY Times, “‘Culture of Poverty’ Makes a Comeback,” documents how social scientists are once again using culture as an explanation in discussions of poverty.

Cohen begins by setting the historical context.

The reticence was a legacy of the ugly battles that erupted after Daniel Patrick Moynihan, then an assistant labor secretary in the Johnson administration, introduced the idea of a “culture of poverty” to the public in a startling 1965 report. Although Moynihan didn’t coin the phrase (that distinction belongs to the anthropologist Oscar Lewis), his description of the urban black family as caught in an inescapable “tangle of pathology” of unmarried mothers and welfare dependency was seen as attributing self-perpetuating moral deficiencies to black people, as if blaming them for their own misfortune.

The idea was soon central to many of the conservative critiques of government aid for the needy. Within the largely liberal fields of sociology and anthropology, the argument was treated as being in poor taste and avoided. That period of silence now seems to be drawing to a close.

“We’ve finally reached the stage where people aren’t afraid of being politically incorrect,” said Douglas S. Massey, a sociologist at Princeton who has argued that Moynihan was unfairly maligned.

The new wave of culture-oriented discussions is not a direct replica of the studies of the 1960s.

Today, social scientists are rejecting the notion of a monolithic and unchanging culture of poverty. And they attribute destructive attitudes and behavior not to inherent moral character but to sustained racism and isolation.

Cohen continues by providing examples of how culture is now being examined. To do so, she turns to Harvard sociologist Robert J. Sampson. According to Sampson, culture should be understood as “shared understandings.”

The shared perception of a neighborhood — is it on the rise or stagnant? — does a better job of predicting a community’s future than the actual level of poverty, he said.

William Julius Wilson, a fellow Harvard sociologist who achieved prominence through studies of persistent poverty, defines culture as the way

“individuals in a community develop an understanding of how the world works and make decisions based on that understanding.”

For some young black men, Professor Wilson said, the world works like this: “If you don’t develop a tough demeanor, you won’t survive. If you have access to weapons, you get them, and if you get into a fight, you have to use them.”

As a result of this new direction in the study of poverty, a number of assumptions about people in poverty have been challenged. One of these is the idea that marriage is not valued by poor, urban single mothers.

In Philadelphia, for example, low-income mothers told the sociologists Kathryn Edin and Maria Kefalas that they thought marriage was profoundly important, even sacred, but doubted that their partners were “marriage material.” Their results have prompted some lawmakers and poverty experts to conclude that programs that promote marriage without changing economic and social conditions are unlikely to work.

The question remains: why are social scientists suddenly willing to engage with this once-taboo approach?

Younger academics like Professor Small, 35, attributed the upswing in cultural explanations to a “new generation of scholars without the baggage of that debate.”

Scholars like Professor Wilson, 74, who have tilled the field much longer, mentioned the development of more sophisticated data and analytical tools. He said he felt compelled to look more closely at culture after the publication of Charles Murray and Richard Herrnstein’s controversial 1994 book, “The Bell Curve,” which attributed African-Americans’ lower I.Q. scores to genetics.

The authors claimed to have taken family background into account, Professor Wilson said, but “they had not captured the cumulative effects of living in poor, racially segregated neighborhoods.”

He added, “I realized we needed a comprehensive measure of the environment, that we must consider structural and cultural forces.”

This surge of interest is particularly timely as poverty in the United States has hit a fifteen-year high. And the debate is by no means confined to the ‘Ivory Tower’.

The topic has generated interest on Capitol Hill because so much of the research intersects with policy debates. Views of the cultural roots of poverty “play important roles in shaping how lawmakers choose to address poverty issues,” Representative Lynn Woolsey, Democrat of California, noted at the briefing.

A recent story in the Star Tribune explores the documented trend of women delaying the birth of their first child or choosing not to have children altogether.

More than ever before, women are deciding to forgo childbearing in favor of other life-fulfilling experiences, a trend that has been steadily on the rise for decades. Census data says that nationally, the number of women 40 to 44 who did not have children jumped 10 percentage points from 1983 to 2006.

As University of Minnesota sociologist Ross Macmillan explains, the childless trend is not limited to the United States.

The number of children born is dropping “like a stone in pretty much every country we can find,” he said, and the United States has seen a 50-year rise in the number of childless women.

There are also a large number of women choosing to delay childbirth. State Demographic Center research analyst Martha McMurry points out that while there has been a decline in births among women in their 20s, the number of women having children in their 30s and 40s is increasing.

This delay is in part attributed to the high cost of having and raising a child, estimated at $250,000 by some studies, as well as the potential negative repercussions in the workplace.

“Actually, while it is true that women can have it all, it is also true that women who have children suffer from some penalties in the workplace,” said University of Minnesota associate professor Ann Meier.

She was referencing Stanford sociologist Shelley Correll’s research that shows that mothers looking for work are less likely to be hired, are offered lower pay (5 percent less per child) and that the pay gap between mothers and childless women under 35 is actually bigger than the pay gap between women and men.

As the number of women choosing not to have children has risen, groups organized around the decision have sprung up.

In the Twin Cities, a one-year-old Childfree by Choice group’s numbers are growing weekly. On Meetup.com, the site through which it is organized, other such groups are cropping up nationwide, with such names as No Kidding and Not a Mom.

For many of these women, children are simply not seen as the key ingredient to living a good life.

Aleja Santos, 44, a medical ethics researcher who started the Twin Cities Childfree by Choice group a year ago (greeting members on the site with “Welcome, fellow non-breeders!”), said she never wanted to have kids. “There were always other things I wanted to do.”

In a recent thought piece titled "Racing Safely to the Finish Line? Kids, Competitions, and Injuries," sociologist Hilary Levey reflects upon the reaction to the death of thirteen-year-old Peter Lenz this past Sunday. Peter was killed in a motorcycle accident at the Indianapolis Motor Speedway during a practice session.

Levey explains that it would be an error for the public to get caught up in the particular type of accident that occurred; instead, we should use this tragedy as an impetus to consider the dangers of increasingly competitive youth sports.

Youth racing shouldn’t be alone in getting a closer inspection. This tragedy could have happened to any girl on a balance beam or any boy in a football tackle last Sunday. We should not be distracted by the fact that Peter was in a motorcycle race.

Despite the risk of serious injuries, like concussions, and even death, millions of kids compete in almost any activity you can imagine. Did you know that there are shooting contests for young Davy Crocketts, a racing circuit for aspiring Danica Patricks, and a youth PGA for those pursuing Tiger Woods’ swing? When did American childhood become not just hyper-organized but also hyper-competitive?

Levey shows that youth sports should be examined as the culmination of a century-long trajectory of increasing competitiveness.

Initially, the organized activities served as a way to mitigate deviant behavior by reducing the amount of unmonitored idle time.

In 1903 New York City’s Public School Athletic League for Boys was established and contests between children, organized by adults, emerged as a way to keep the boys coming back to activities and clubs. Settlement houses and ethnic clubs followed suit and the number of these clubs grew rapidly through the 1920s.

However, the level of competitiveness continued to ramp up as the 20th century progressed. National organizations were introduced after World War II, and by the 1970s for-profit organizations were common.

And, by the turn of the twenty-first century, a variety of year-round competitive circuits, run by paid organizers and coaches, dominated families’ evenings and weekends.

Parents tried to find the activity best suited to turn their children into national champions, even at age seven. As competitive children’s activities became increasingly organized over the twentieth century, injuries increased — especially overuse injuries and concussions. More practice time, an earlier focus on only one sport, and a higher level of intensity in games create the environment for these types of injuries.

Peter Lenz’s death is indicative of an increasingly competitive and organized American childhood. Levey argues that as a society we have the responsibility to make sure the training and safety regulations keep up with the increased pressure and risk of injury. This should include greater monitoring of safety equipment and higher standards for coaches.

While catastrophic accidents like Peter Lenz’s will happen, we can work to better protect all competitive children from more common injuries like concussions and overuse injuries. Kids want to win whatever race they are in and be the champion. Adults should make sure they all safely cross the finish line.

France & Ewing in South Minneapolis

A recent feature from the University of Minnesota’s UMNews documents Rebecca Krinke’s latest public art creation. Krinke, an associate professor in landscape architecture, explores how memories and emotions become attached to specific spatial locations. In doing so, she blurs the line between geography, sociology, urban studies, emotional exploration, and art.

The map has turned into a sociology experiment of sorts and a sounding board for people’s emotions: hope and despair, contentment and anger, love and hate.

Krinke began with a giant laser-cut map of Minneapolis and St. Paul.

Beginning in late July, Krinke started taking the map to public spaces in Minneapolis and St. Paul and inviting passersby to use the colored pencil of their choice—gold for joy and gray for pain (or both)—to express their memories of places.

The map was soon filled with color – some marks representing memories of excitement and wonder, others representing tragedy and grief.

One man was sharing his tale of overdosing on heroin in Minneapolis when another chimed in and said, “Yeah, that happened to me, too,” Krinke says. “And they looked at each other like, ‘Well, we made it.’”

Fortunately, the map still radiates more than its share of good times and golden memories. Of fish caught in Minneapolis lakes. Of trails hiked and biked over and over again. Of sports venues old and new.

The overwhelming reaction to the piece has inspired Krinke to look for ways to continue, and expand, the project. It also points to some sort of underlying desire to make public emotions that rarely see the light of day.

As artists and designers, “there’s a lot of potential here,” she adds. “Maybe we’re the witnesses. Maybe that’s why they like talking. It’s like testifying in a way. I guess [it’s] a deep fundamental human need to be heard.”

The BBC recently reported on new research that documents the way young boys are negatively affected by gender stereotypes.

Girls believe they are cleverer, better behaved and try harder than boys from the age of four, research suggests. By the age of eight, boys had also adopted these perceptions, the study from the University of Kent found.

Social psychologist and lead researcher Bonny Hartley presented children between the ages of four and 10 with a series of statements describing children as hard-working, clever, and timely in completing their work. The children then chose the silhouette of either a boy or a girl, depending on which gender they thought each statement most accurately described.

On average, girls of reception age right through to Year 5 said girls were cleverer, performed better, were more focused and were better behaved or more respectful, the study found. Boys in reception, Year 1 and Year 2 gave answers which were equally split between favouring boys and girls, but by Year 3 their beliefs were in line with those of the girls, the researchers said.
Ms Hartley said that children of both genders thought, in general, that adults believed that girls did better than boys at school.

Hartley also documented the immediate impact that gender expectations may have on test performance.

In a separate investigation, she tested two separate groups of children in maths, reading and writing. The first group was told that boys do not perform as well as girls, but the other was not. Boys in the first group performed “significantly worse” than in the second group, which Ms Hartley says suggests that boys’ low performance may be explained in part by low expectations.

The study demonstrates the power of socialization and speaks to the need for teachers to be particularly careful about vocalizing gender-based expectations, as doing so may create self-fulfilling prophecies.

She also warns against the use of phrases such as “silly boys” and “school boy pranks” or teachers asking “why can’t you sit nicely like the girls?”

Contrary to more pessimistic societal assumptions, research has shown that old age often correlates with increased happiness. A recent Washington Post story reports on studies that seek to explain this trend.

One factor that may lead to increased happiness is the emotional and cognitive stability that grows with old age.

Laura Carstensen, a Stanford social psychologist, calls this the “well-being paradox.” Although adults older than 65 face challenges to body and brain, the 70s and 80s also bring an abundance of social and emotional knowledge, qualities scientists are beginning to define as wisdom. As Carstensen and another social psychologist, Fredda Blanchard-Fields of the Georgia Institute of Technology, have shown, adults gain a toolbox of social and emotional instincts as they age. According to Blanchard-Fields, seniors acquire a feel, an enhanced sense of knowing right from wrong, and therefore a way to make sound life decisions.

Wisdom, while long associated with age, has always remained a murky term. Ipsit Vahia, a geriatric psychiatrist at the University of California at San Diego, explains:

“[wisdom] involves making decisions that would be to the greater benefit of a larger number of people” and maintaining “an element of pragmatism, not pure idealism. And it would involve some sense of reflection and self-understanding.”

The source of this wisdom and happiness remains subject to debate. Some emphasize neurobiological changes.

An MRI scan cannot isolate a part of the brain associated with wisdom, says Elkhonon Goldberg, a neuropsychologist and author of “The Wisdom Paradox.” Still, he says, the aging brain has a greater sense of “pattern recognition,” the ability to capture a range of similar but nonidentical information, then extract and piece together common features. That, Goldberg says, “gives some old people a cognitive leg up.”

Others attribute the change to social and emotional factors, such as the ability to regulate emotions. Psychologist Susanne Scheibe cites a pragmatic basis for cognitive change.

“Old people are good at shaping everyday life to suit their needs,” explains Scheibe. By carefully pruning their social networks or looking at life in relative terms, older adults maintain cognitive control. And although multiple chronic illnesses that cause functional disability or cognitive decline can affect well-being, most older adults are able to tune out negative information into their late 70s and 80s.

So perhaps there is something to that whole ‘respect your elders’ thing. Or, as the Washington Post story concludes:

If older adults are predisposed to wisdom, perhaps a graying population means a wiser one.