Mr. Draper, I don’t know what it is you really believe in but I do know what it feels like to be out of place, to be disconnected, to see the whole world laid out in front of you the way other people live it. There’s something about you that tells me you know it too.
– Mad Men, Season 1, Episode 1
The ending of Mad Men was brilliant. It was like a good mystery novel: once you know the solution – Don Draper creating one of the greatest ads in Madison Avenue history – you see that the clues were there all along. You just didn’t realize what was important and what wasn’t. Neither did the characters. This was a game played between Matt Weiner and the audience.
The ending, like the entire series, was also a sociological commentary on American culture. Or rather, it was an illustration of such a commentary. The particular sociological commentary I have in mind is Philip Slater’s The Pursuit of Loneliness, published in 1970, the same year that this episode takes place. It’s almost as if Slater had Don Draper in mind when he wrote the book, or as if Matt Weiner had the book in mind when he wrote this episode.
In the first chapter, “I Only Work Here,” Slater outlines “three human desires that are deeply and uniquely frustrated by American culture”:
(1) the desire for community – the wish to live in trust, cooperation, and friendship with those around one.
(2) the desire for engagement – the wish to come to grips directly with one’s social and physical environment.
(3) the desire for dependence – the wish to share responsibility for the control of one’s impulses and the direction of one’s life.
The fundamental principle that gives rise to these frustrations is, of course, individualism.
Individualism is rooted in the attempt to deny the reality of human interdependence. One of the major goals of technology in America is to “free” us from the necessity of relating to, submitting to, depending upon, or controlling other people. Unfortunately, the more we have succeeded in doing this, the more we have felt disconnected, bored, lonely, unprotected, unnecessary, and unsafe.
Most of those adjectives could apply to Don Draper at this point. In earlier episodes, we have seen Don, without explanation, walk out of an important meeting at work and, like other American heroes, light out for the territory, albeit in a new Cadillac. He is estranged from his family. He is searching for something – at first a woman, who turns out to be unattainable, and then for… he doesn’t really know what. He winds up at Esalen, where revelation comes from an unlikely source, a nebbishy man named Leonard. In a group session, Leonard says:
I’ve never been interesting to anybody. I, um – I work in an office. People walk right by me. I know they don’t see me. And I go home and I watch my wife and my kids. They don’t look up when I sit down…
I had a dream. I was on a shelf in the refrigerator. Someone closes the door and the light goes off. And I know everybody’s out there eating. And then they open the door and you see them smiling. They’re happy to see you but maybe they don’t look right at you and maybe they don’t pick you. Then the door closes again. The light goes off.
People are silent, but Don gets up, slowly moves towards Leonard and tearfully, silently, embraces him.
On the surface, the two men could not be more different. Don is interesting. And successful. People notice him. But he shares Leonard’s sense that his pursuit – of a new identity, of career success, of unattainable women – has left him feeling inauthentic, disconnected, and alone. “I’ve messed everything up,” he tells his sometime co-worker Peggy in a phone conversation. “I’m not the man you think I am.”
The next time we see him, he is watching from a distance as people do tai-chi on a hilltop.
And then he himself is sitting on a hilltop, chanting “om” in unison with a group of people. At last he is sharing something with others rather than searching for ego gratifications.
And then the punch line. We cut to the Coke hilltop ad with its steadily expanding group of happy people singing in perfect harmony. A simple product brings universal community (“I’d like to buy the world a Coke and keep it company”). It also brings authenticity. “It’s the real thing.” Esalen and Coca-Cola. Both are offering solutions to the frustrated needs Slater identifies. But both solutions suffer from the same flaw – they are personal rather than social. A few days of spiritual healing and hot springs brings no more social change than does a bottle of sugar water.

It’s not that real change is impossible, Slater says, and in the final chapter of the book, he hopes that the strands in the fabric of American culture can be rewoven. But optimism is difficult.
So many healthy new growths in our society are at some point blocked by the overwhelming force and rigidity of economic inequality… There’s a… ceiling of concentrated economic power that holds us back, frustrates change, locks in flexibility.
The Mad Men finale makes the same point, though with greater irony (the episode title is “Person to Person”). When we see the Coke mountaintop ad, we realize that Don Draper has bundled up his Esalen epiphany, brought it back to a huge ad agency in New York, and turned it into a commercial for one of the largest corporations in the world.
Earlier this year a CBS commentator in a panel with Jay Smooth embarrassingly revealed that she thought he was white (Smooth’s father is black) and this week the internet learned that Rachel Dolezal was white all along (both parents identify as white). The CBS commentator’s mistake and Dolezal’s ability to pass both speak to the strange way we’ve socially constructed blackness in this country.
The truth is that African Americans are essentially all mixed race. From the beginning, enslaved and other Africans had close relationships with poor whites and white indentured servants; that’s one reason why so many black people have Irish last names. During slavery, sexual relationships between enslavers and the enslaved, occurring on a range of coercive levels, were routine. Children born to enslaved women from these encounters were identified as “black.” The one-drop rule — you are black if you have one drop of black blood — was an economic tool used to protect the institution of racialized slavery (by preserving the distinction between two increasingly indistinct racial groups) and enrich the individual enslaver (by producing another human being he could own). Those enslaved children grew up and had children with other enslaved people as well as other whites.
In addition to these, of course, voluntary relationships between free black people and white people were occurring all those years as well, both before and after such unions became legal. And the descendants of those couplings have been having babies all these years, too.
We’re talking about 500 years of mixing between blacks, whites, Native Americans (who gave refuge to escaped slaves), and every other group in America. The continued assumption, then, that a black person is “black” and only “mixed race” if they claim the label reflects the ongoing power of the one-drop rule. It also explains why people with such dramatically varying phenotypes can all be considered black. Consider the image below, a collage of people interviewed and photographed for the (1)ne Drop project; Jay Smooth is the guy at the bottom left.
My point is simply that of course Jay Smooth is sometimes mistaken for white and it should be no surprise to learn that it’s easy for a white person — even one with blond hair and green eyes — to pass as black (in fact, it’s a pastime). The racial category is a mixed race one and, more importantly, it’s more social than biological. Structural disadvantage, racism, and colorism are real. The rich cultural forms that people who identify as black have given to America are real. The loving communities people who identify as black create are real. But blackness isn’t, never was, and is now less than ever before.
A substantial body of literature suggests that women change what they eat when they eat with men. Specifically, women opt for smaller amounts and lower-calorie foods associated with femininity. So, some scholars argue that women change what they eat to appear more feminine when dining with male companions.
For my senior thesis, I explored whether women change the way they eat, alongside what they eat, when dining with a male vs. female companion. To examine this phenomenon, I conducted 42 hours of non-participant observation in two four-star American restaurants in a large West Coast city in the United States. I observed 76 Euro-American women (37 dining with a male companion and 39 dining with a female companion), aged approximately 18 to 40, to identify differences in their eating behaviors.
I found that women did change the way they ate depending on the gender of their dining companion. Overall, when dining with a male companion, women typically constructed their bites carefully, took small bites, ate slowly, used their napkins precisely and frequently, and maintained good posture and limited body movement throughout their meals. In contrast, women dining with a female companion generally constructed their bites more haphazardly, took larger bites, used their napkins more loosely and sparingly, and moved their bodies more throughout their meals.
On the size of bites, here’s an excerpt from my field notes:
Though her plate is filled, each bite she labors onto her fork barely fills the utensil. Perhaps she’s getting full because each bite seems smaller than the last… and still she’s taking tiny bites. Somehow she has made a single vegetable last for more than five bites.
I also observed many women who were about to take a large bite but stopped themselves. Another excerpt:
She spreads a cracker generously and brings it to her mouth. Then she pauses for a moment as though she’s sizing up the cracker to decide if she can manage it in one bite. After thinking for a minute, she bites off half and gently places the rest of the cracker back down on her individual plate.
Stopping to reconstruct large bites into smaller ones is a feminine eating behavior that implies a conscious monitoring of bite size. It indicates that women may deliberately change their behavior to appear more feminine.
I also observed changes in the ways women used their napkins when dining with a male vs. female companion. When their companion was a man, women used their napkins more precisely and frequently than when their companion was another woman. In some cases, the woman would fold her napkin into fourths before using it so that she could press the straight edge of the napkin to the corners of her mouth. Other times, the woman would wrap the napkin around her finger to create a point, then dab it across her mouth or use the point to press into the corners of her mouth. Women who used their napkins precisely also tended to use them quite frequently:
Using her napkin to dab the edges of her mouth – finger in it to make a tiny point, she is using her napkin constantly… using the point of the napkin to specifically dab each corner of her mouth. She is using the napkin again even though she has not taken a single bite since the last time she used it… using napkin after literally every bite as if she is constantly scared she has food on her mouth. Using and refolding her napkin every two minutes, always dabbing the corners of her mouth lightly.
In contrast, women dining with a female companion generally used their napkins more loosely and sparingly. These women did not carefully designate a specific area of the napkin to use, and instead bunched up a portion of it in one hand and rubbed the napkin across their mouths indiscriminately.
Each of the behaviors observed more frequently among women dining with a male companion versus a female one was stereotypically feminine. Many of the behaviors that emerged as significant among women dining with a female companion, on the other hand, are considered non-feminine, i.e., behaviors that women are instructed to avoid. Behavioral differences between the two groups of women suggest two things. First, women eat in a manner more consistent with normative femininity when in the presence of a male versus a female companion. And, second, gender is something that people perform when cued to do so, not necessarily something people internalize and express all the time.
Kate Handley graduated from Occidental College this month. This post is based on her senior thesis. After gaining some experience in the tech industry, she hopes to pursue a PhD in Sociology.
In 1994, a US immigration judge lifted an order to deport a woman named Lydia Oluloro. Deportation would have forced her to either leave her five- and six-year-old children in America with an abusive father or take them with her to Nigeria. There, they would have been at risk of a genital cutting practice called infibulation, in which the labia majora and minora are trimmed and fused, leaving a small opening for urination and menstruation.
Many Americans will agree that the judge made a good decision, as children shouldn’t be separated from their mothers, left with dangerous family members, or subjected to an unnecessary and irreversible operation that they do not understand. I am among these Americans. However, I am also of the view that Americans who oppose unfamiliar genital cutting practices should think long and hard about how they articulate their opposition.
Consider how the judge explained his decision:

This court attempts to respect traditional cultures … but [infibulation] is cruel and serves no known medical purpose. It’s obviously a deeply ingrained cultural tradition going back 1,000 years at least.
Let’s consider the judge’s logic carefully. First, by contrasting the “court” (by which he means America) with “traditional cultures,” the judge is contrasting an us (America) with a them (Nigeria). He’s implying that only places like Nigeria are “traditional” — a euphemism for states seen as backward, regressive, and uncivilized — while the US is “modern,” a state conflated with progressiveness and enlightenment.
When he says that the court “attempts to respect traditional cultures,” but cannot in this case, the judge is suggesting that the reason for the disrespect is the fault of the culture itself. In other words, he’s saying “we do our best to respect traditional cultures, but you have pushed us too far.” The reason for this, the judge implies, is that the practices in question have no redeeming value. It “serves no known medical purpose,” and societies which practice it are thus “up to no good” or are not engaging in “rational” action.
The only remaining explanation for the continuation of the practice, the judge concludes, is cruelty. If the practice is cruel the people who practice it must necessarily also be cruel; capriciously, pointlessly, even frivolously cruel.
To make matters worse, in the eyes of the judge, such cruelty can’t be helped, because its perpetrators lack free will. The practice, he says, is “deeply ingrained” and has been so for at least 1,000 years. Such cultures cannot be expected to see reason. This is why the court — or America — can and should intervene.
In sum, the judge might well have said: “we are a modern, rational, free, good society, and you who practice female genital cutting—you are the opposite of this.”
**********
I’ve published extensively on the ways in which Americans talk about the female genital cutting practices (FGCs) that are common in parts of Africa and elsewhere, focusing on the different ways opposition can be articulated and the consequences of those choices. There are many grounds upon which to oppose FGCs: the oppression of women, the repression of sexuality, human rights abuse, child abuse, a violation of bodily integrity, harm to health, and psychological harm, to name just a few. Nevertheless, Judge Warren chose to use one of the most common and counterproductive frames available: cultural depravity.
The main source of this frame has been the mass media, which began covering FGCs in the early 1990s. At the time anti-FGC activists were largely using the child abuse frame in their campaigns, yet journalists decided to frame the issue in terms of cultural depravity. This narrative mixed with American ethnocentrism, an obsession with fragile female sexualities, a fear of black men, and a longstanding portrayal of Africa as dark, irrational, and barbaric to make a virulent cocktail of the “African Other.”
The more common word used to describe FGCs — mutilation — is a symbol of this discourse. It perfectly captures Judge Warren’s comment. Mutilation is, perhaps by definition, the opposite of healing and of what physicians are called to do. Defining FGCs this way allows, and even demands, that we wholly condemn the practices, take a zero tolerance stance, and refuse to entertain any other point of view.
Paradoxically, this has been devastating for efforts to reduce genital cutting. People who support genital cutting typically believe that a cut body is more aesthetically pleasing. They largely find the term “mutilation” confusing or offensive. They, like anyone, generally do not appreciate being told that they are barbaric, ignorant of their own bodies, or cruel to their children.
The zero tolerance demand to end the practices has also failed. Neither foreigners intervening in long-practicing communities, nor top-down laws instituted by local politicians under pressure from Western governments, nor even laws against FGCs in Western countries have successfully stopped genital cutting. They have, however, alienated the very women that activists have tried to help, made women dislike or fear the authorities who may help them, and even increased the rate of FGCs by inspiring backlashes.
In contrast, the provision of resources to communities to achieve whatever goals they desire, and then getting out of the way, has been proven to reduce the frequency of FGCs. The most effective interventions have been village development projects that have no agenda regarding cutting, yet empower women to make choices. When women in a community have the power to do so, they often autonomously decide to abandon FGCs. Who could know better, after all, the real costs of continuing the practice?
Likewise, abandonment of the practice may be typical among immigrants to non-practicing societies. This may be related to fear of prosecution under the law. However, it is more likely the result of a real desire among migrants to fit into their new societies, a lessening of the pressures and incentives to go through with cutting, and mothers’ deep and personal familiarity with the short- and long-term pain that accompanies the practices.
The American conversation about FGCs has been warped by our own biases. As a Hastings Center Report summarizes, those who adopt the cultural depravity frame misrepresent the practices, overstate the negative health consequences, misconstrue the reasons for the practice, silence the first-person accounts of women who have undergone cutting, and ignore indigenous anti-FGC organizing. And, while it has fed into American biases about “dark” Africa and its disempowered women, the discourse of cultural depravity has actually impaired efforts to reduce rates of FGCs and the harm that they can cause.
Singer-songwriter Hozier played “guess the man buns” on VH1, and Buzzfeed facetiously claimed they had “Scientific Proof That All Celebrity Men are Hotter with Man Buns.” Brad Pitt, Chris Hemsworth, and David Beckham have all sported the man bun. And no, I’m not talking about their glutes. Men are pulling their hair back behind their ears or on top of their heads and securing it into a well-manicured or, more often, fashionably disheveled knot. This hairstyle is everywhere now: in magazines and on designer runways and the red carpet. Even my neighborhood barista is sporting a fledgling bun, and The Huffington Post recently reported on the popular Man Buns of Disneyland Instagram account that documents how “man buns are taking over the planet.”
At first glance, the man bun seems a marker of progressive manhood. The bun, after all, is often associated with women—portrayed in the popular imagination via the stern librarian and graceful ballerina. In my forthcoming book, Styling Masculinity: Gender, Class, and Inequality in the Men’s Grooming Industry, however, I discuss how linguistic modifiers such as manlights (blonde highlights for men’s hair) reveal the gendered norm of a word. Buns are still implicitly feminine; it’s the man bun that is masculine. But in addition to reminding us that men, like women, are embodied subjects invested in the careful cultivation of their appearances, the man bun also reflects the process of cultural appropriation. To better understand this process, we have to consider: Who can pull off the man bun and under what circumstances?
I spotted my first man bun in college. And it was not a blond-haired, blue-eyed, all-American guy rocking the look in an effort to appear effortlessly cool. This bun belonged to a young Sikh man who, on a largely white U.S. campus, received lingering stares for his hair, patka, and sometimes turban. His hair marked him as an ethnic and religious other. Sikhs often practice Kesh by letting their hair grow uncut in a tribute to the sacredness of God’s creation. He was marginalized on campus, and fellow classmates saw his appearance as the antithesis of sexy. In one particularly alarming case in 2007, a teenage boy in Queens was charged with a hate crime after tearing off the turban of a young Sikh boy and forcibly shaving his head.
A journalist for The New York Times claims that Brooklyn bartenders and Jared Leto “initially popularized” the man bun. It’s “stylish” and keeps men’s hair out of their faces when they are “changing Marconi light bulbs,” he says. In other words, it’s artsy and sported by hipsters. This proclamation ignores the fact that Japanese samurai have long worn the topknot or chonmage, which are still sported by sumo wrestlers.
Nobody is slapping sumo wrestlers on the cover of GQ magazine, though, and praising them for challenging gender stereotypes. And anyway, we know from research on men in hair salons and straight men who adopt a “gay” aesthetic that men’s careful coiffing does not necessarily undercut the gender binary. Rather, differences along the lines of class, race, ethnicity, and sexuality continue to distinguish the meaning of men’s practices, even if those practices appear to be the same. When a dominant group takes on the cultural elements of marginalized people and claims them as its own (making the man bun exalting for some and stigmatizing for others, for example), it becomes clear who exactly has power, and the harmful effects of cultural appropriation come into focus.
Yes, the man bun can be fun to wear and even utilitarian, with men pulling their hair out of their faces to see better. And like long-haired hippies in the 1960s and 1970s, the man bun has the potential to resist conservative values around what bodies should look like. But it is also important to consider that white western men’s interest in the man bun comes from somewhere, and weaving a narrative about its novelty overlooks its long history among Asian men, its religious significance, and ultimately its ability to make high-status white men appear worldly and exotic. In the west, the man bun trend fetishizes the ethnic other at the same time it can be used to further marginalize and objectify them. And so cultural privilege is involved in experiencing it as a symbol of cutting-edge masculinity.
Kristen Barber, PhD is a member of the faculty at Southern Illinois University. Her interests are in qualitative and feminist research and what gender-boundary crossing can teach us about the flexibility of gender, the mechanisms for reproducing gender hierarchies, and the potential for reorganization. She blogs at Feminist Reflections, where this post originally appeared.
What creeps us out? Psychologists Francis McAndrew and Sara Koehnke wanted to know.
Their hypothesis was that being creeped out was a signal that something might be dangerous. Things we know are dangerous scare us — no creepiness there — but if we’re unsure if we’re under threat, that’s when things get creepy.
Think of the vaguely threatening doll, not being able to see in a suddenly dark room, footsteps behind you in an isolated place. Creepy, right? We don’t know for sure that we’re in danger, but we don’t feel safe either, and that’s creepy.
They surveyed 1,341 people about what they found creepy and found, among other things, that people (1) find it creepy when they can’t predict how someone will behave and (2) are less creeped out if they think they understand a person’s intentions. Both findings are consistent with the hypothesis that uncertainty about a threat is behind the feeling of creepiness.
They also hypothesized that people would find men creepy more often than women since men are statistically more likely than women to commit violent crimes. In fact, 95% of their respondents agreed that a creepy person was most likely to be a man. This is also consistent with their working definition.
Generally, people who didn’t or maybe couldn’t follow social conventions were thought of as creepy: people who hadn’t washed their hair in a while, stood closer to other people than was normal, dressed oddly or in dirty clothes, or laughed at unpredictable times.
Likewise, people who had taboo hobbies or occupations, ones that spoke to a disregard for being normal, were seen as creepy: taxidermists and funeral directors (both of whom handle the dead) and adults who collect dolls or dress up like a clown (both of which blur the lines between adulthood and childhood).
If people we interact with are willing to break one social rule, or perhaps can’t help themselves, then who’s to say they won’t break a more serious one? Creepy. Most of their respondents also didn’t think that creepy people knew that they were creepy, suggesting that they don’t know they’re breaking social norms. Even creepier.
McAndrew and Koehnke summarize their results:
While they may not be overtly threatening, individuals who display unusual nonverbal behaviors… odd emotional behavior… or highly distinctive physical characteristics are outside of the norm, and by definition unpredictable. This activates our “creepiness detector” and increases our vigilance as we try to discern if there is in fact something to fear or not from the person in question.
D'Lane Compton PhD and Tristan Bridges PhD on December 26, 2015
“Lumbersexual” recently entered our cultural lexicon. What it means exactly is still being negotiated. At a basic level, it’s an identity category that relies on a set of stereotypes about regionally specific and classed masculinities. Lumbersexuals are probably best recognized by a set of hirsute bodies and grooming habits. Their attire, bodies, and comportment are presumed to cite stereotypes of lumberjacks in the cultural imaginary. Combined with the broader cultural portrayal of the lumbersexual, this set of stereotypes creates, for a particular subset of men, an aesthetic that idealizes a cold-weather, rugged, large, hard-bodied, bewhiskered configuration of masculinity.
Similar to hipster masculinity, “lumbersexual” is a classification largely reserved for young, straight, white, and arguably class-privileged men. While some position lumbersexuals as the antithesis of the metrosexual, others understand lumbersexuals as within a spectrum of identity options made available by metrosexuality. Urbandictionary.com defines the lumbersexual as “a sexy man who dresses in denim, leather, and flannel, and has a ruggedly sensual beard.”
One of the key signifiers of the “lumbersexual,” however, is that he is not, in fact, a lumberjack. Like the hipster, the lumbersexual is less of an identity men claim and more of one used to describe them (perhaps, against their wishes). It’s used to mock young, straight, white men for participating in a kind of identity work. Gearjunkie.com describes the identity this way:
Whether the roots of the lumbersexual are a cultural shift toward environmentalism, rebellion against the grind of 9-5 office jobs, or simply recognition that outdoor gear is just more comfortable, functional and durable, the lumbersexual is on the rise (here).
Many aspects of masculinity are “comfortable.” And, men don’t need outdoor gear and lumberjack attire to be comfortable. Lumbersexual has less to do with comfort and more to do with masculinity. It is a practice of masculinization. It’s part of a collection of practices associated with “hybrid masculinities”—categories and identity work practices made available to young, white, heterosexual men that allow them to collect masculine status they might otherwise see themselves (or be seen by others) as lacking. Hybridization offers young, straight, class-privileged white men an avenue to negotiate, compensate, and attempt to control meanings attached to their identities as men. Hybrid configurations of masculinity, like the lumbersexual, accomplish two things at once. They enable young, straight, class-privileged, white men to discursively distance themselves from what they might perceive as something akin to the stigma of privilege. They simultaneously offer a way out of the “emptiness” a great deal of scholarship has discussed as associated with racially, sexually, class-privileged identities (see here, here, and here).
The lumbersexual highlights a series of rival binaries associated with masculinities: rural vs. urban, rugged vs. refined, tidy vs. unkempt. But the lumbersexual is so compelling precisely because, rather than “choosing sides,” this identity attempts to delicately walk the line between these binaries. It’s “delicate” precisely because this is a heteromasculine configuration—falling too far toward one side or the other could call him into question. But, a lumbersexual isn’t a lumberjack just like a metrosexual isn’t gay. Their identity work attempts to establish a connection with identities to which they have no authentic claim by flirting with stereotypes surrounding sets of interests and aesthetics associated with various marginalized and subordinated groups of men. Yet, these collections are largely mythologies. The bristly woodsmen they are ostensibly parroting were, in fact, created for precisely this purpose. As Willa Brown writes,
The archetypal lumberjack—the Paul Bunyanesque hipster naturalist—was an invention of urban journalists and advertisers. He was created not as a portrait of real working-class life, but as a model for middle-class urban men to aspire to, a cure for chronic neurasthenics. He came to life not in the forests of Minnesota, but in the pages of magazines (here).
Perhaps less obviously, however, the lumbersexual is also coopting elements of sexual minority subcultures. If we look through queer lenses we might suggest that lumbersexuals are more similar to metrosexuals than they may acknowledge as many elements of “lumberjack” identities are already connected with configurations of lesbian and gay identities. For instance, lumbersexuals share a lot of common ground with “bear masculinity” (a subculture of gay men defined by larger bodies with lots of hair) and some rural configurations of lesbian identity. Arguably, whether someone is a “bear” or a “lumbersexual” may solely be a question of sexual identity. After all, bear culture emerged to celebrate a queer masculinity, creating symbolic distance from stereotypes of gay masculinities as feminine or effeminate. Lumbersexuals could be read as a similar move in response to metrosexuality.
Lumbersexual masculinity is certainly an illustration that certain groups of young, straight, class-privileged, white men are playing with gender. In the process, however, systems of power and inequality are probably better understood as obscured than challenged. Like the phrase “no homo,” hybrid configurations of masculinity afford young straight men new kinds of flexibility in identities and practice, but don’t challenge relations of power and inequality in any meaningful way.
The authors would like to thank the Orange Couch of NOLA, Urban Outfitters, the rural (&) queer community, and Andrea Herrera for suggesting we tackle this piece. Additional thanks to C.J. Pascoe and Lisa Wade for advanced reading and comments.
It seems certain that the political economy textbooks of the future will include a chapter on the experience of Greece in 2015.
On July 5, 2015, the people of Greece overwhelmingly voted “NO” to the austerity ultimatum demanded by what is colloquially being called the Troika, the three institutions that have the power to shape Greece’s future: the European Commission, the International Monetary Fund, and the European Central Bank.
The people of Greece have stood up for the rights of working people everywhere.
Background
Greece has experienced six consecutive years of recession and the social costs have been enormous. The following charts provide only the barest glimpse into the human suffering:
While the Troika has been eager to blame this outcome on the bungling and dishonesty of successive Greek governments and even the Greek people, the fact is that it is Troika policies that are primarily responsible. In broad brush, Greece grew rapidly over the 2000s in large part thanks to government borrowing, especially from French and German banks. When the global financial crisis hit in late 2008, Greece was quickly thrown into recession and the Greek government found its revenue in steep decline and its ability to borrow sharply limited. By 2010, without its own national currency, it faced bankruptcy.
Enter the Troika. In 2010, they penned the first bailout agreement with the Greek government. The Greek government received new loans in exchange for its acceptance of austerity policies and monitoring by the IMF. Most of the new money went back out of the country, largely to its bank creditors. And the massive cuts in public spending deepened the country’s recession.
By 2011 it had become clear that the Troika’s policies were self-defeating. The deeper recession further reduced tax revenues, making it harder for the Greek government to pay its debts. Thus in 2012 the Troika again extended loans to the Greek government as part of a second bailout which included . . . wait for it . . . yet more austerity measures.
Not surprisingly, the outcome was more of the same. By then, French and German banks were off the hook. It was now the European governments and the International Monetary Fund that worried about repayment. And the Greek economy continued its downward spiral.
Significantly, in 2012, IMF staff acknowledged that its support for austerity in 2010 was a mistake. Simply put, if you ask a government to cut spending during a period of recession you will only worsen the recession. And a country in recession will not be able to pay its debts. It was a clear and obvious conclusion.
But, significantly, this acknowledgement did little to change Troika policies toward Greece.
By the end of 2014, the Greek people were fed up. Their government had done most of what was demanded of it, and yet the economy continued to worsen and the country was deeper in debt than it had been at the start of the bailouts. And, once again, the Greek government was unable to make its debt payments without access to new loans. So, in January 2015 they elected a radical left-wing party known as Syriza because of the party’s commitment to negotiate a new understanding with the Troika, one that would enable the country to return to growth, which meant an end to austerity and debt relief.
Syriza entered the negotiations hopeful that the lessons of the past had been learned. But no, the Troika refused all additional financial support unless Greece agreed to implement yet another round of austerity. What started out as negotiations quickly turned into a one-way scolding. The Troika continued to demand significant cuts in public spending to boost Greek government revenue for debt repayment. Greece eventually won a compromise that limited the size of the primary surplus required, but when they proposed achieving it by tax increases on corporations and the wealthy rather than spending cuts, they were rebuffed, principally by the IMF.
The Troika demanded cuts in pensions, again to reduce government spending. When Greece countered with an offer to boost contributions rather than slash the benefits going to those at the bottom of the income distribution, they were again rebuffed. On and on it went. Even the previous head of the IMF penned an intervention warning that the IMF was in danger of repeating its past mistakes, but to no avail.
Finally on June 25, the Troika made its final offer. It would provide additional funds to Greece, enough to enable it to make its debt payments over the next five months in exchange for more austerity. However, as the Greek government recognized, this would just be “kicking the can down the road.” In five months the country would again be forced to ask for more money and accept more austerity. No wonder the Greek Prime Minister announced he was done, that he would take this offer to the Greek people with a recommendation of a “NO” vote.
The Referendum
Almost immediately after the Greek government announced its plans for a referendum, the leaders of the Troika intervened in the Greek debate. For example, as the New York Times reported:
By long-established diplomatic tradition, leaders and international institutions do not meddle in the domestic politics of other countries. But under cover of a referendum in which the rest of Europe has a clear stake, European leaders who have found [Greece Prime Minister] Tsipras difficult to deal with have been clear about the outcome they prefer.
Many are openly opposing him on the referendum, which could very possibly make way for a new government and a new approach to finding a compromise. The situation in Greece, analysts said, is not the first time that European politics have crossed borders, but it is the most open instance and the one with the greatest potential effect so far on European unity…
Martin Schulz, a German who is president of the European Parliament, offered at one point to travel to Greece to campaign for the “yes” forces, those in favor of taking a deal along the lines offered by the creditors.
On Thursday, Mr. Schulz was on television making clear that he had little regard for Mr. Tsipras and his government. “We will help the Greek people but most certainly not the government,” he said.
European leaders actively worked to distort the terms of the referendum. Greeks were voting on whether to accept or reject Troika austerity policies yet the Troika leaders falsely claimed the vote was on whether Greece should remain in the Eurozone. In fact, there is no mechanism for kicking a country out of the Eurozone and the Greek government was always clear that it was not seeking to leave the zone.
Having whipped up popular fears of an end to the euro, some Greeks began taking their money out of the banks. On June 28, the European Central Bank then took the aggressive step of limiting its support to the Greek financial system.
This was a very significant and highly political step. Eurozone governments do not print their own money or control their own monetary systems. The European Central Bank is in charge of regional monetary policy and is duty bound to support the stability of the region’s financial system. By limiting its support for Greek banks it forced the Greek government to limit withdrawals which only worsened economic conditions and heightened fears about an economic collapse. This was, as reported by the New York Times, a clear attempt to influence the vote, one might even say an act of economic terrorism:
Some experts say the timing of the European Central Bank action in capping emergency funding to Greek banks this week appeared to be part of a campaign to influence voters.
“I don’t see how anybody can believe that the timing of this was coincidence,” said Mark Weisbrot, an economist and a co-director of the Center for Economic and Policy Research in Washington. “When you restrict the flow of cash enough to close the banks during the week of a referendum, this is a very deliberate move to scare people.”
Then on July 2, three days before the referendum, an IMF staff report on Greece was made public. In an echo of 2010, the report made clear that Troika austerity demands were counterproductive. Greece needed massive new loans and debt forgiveness. The Bruegel Institute, a European think tank, offered a summary and analysis of the report, concluding that “the creditors negotiated with Greece in bad faith” and used “indefensible economic logic.”
The leaders of the Troika were insisting on policies that the IMF’s own staff viewed as misguided. Moreover, European leaders desperately but unsuccessfully tried to kill the report. Only one conclusion is possible: the negotiations were a sham.
The Troika’s goals were political: they wanted to destroy the leftist, radical Syriza because it represented a threat to a status quo in which working people suffer to generate profits for the region’s leading corporations. It apparently didn’t matter to them that what they were demanding was disastrous for the people of Greece. In fact, quite the opposite was likely true: punishing Greece was part of their plan to ensure that voters would reject insurgent movements in other countries, especially Spain.
The Vote
And despite, or perhaps because of, all of the interventions and threats highlighted above, the Greek people stood firm. As the headline of a Bloomberg news story proclaimed: “Varoufakis: Greeks Said ‘No’ to Five Years of Hypocrisy.”
The Greek vote was a huge victory for working people everywhere.
Now, we need to learn the lessons of this experience. Among the most important are: those who speak for dominant capitalist interests are not to be trusted. Our strength is in organization and collective action. Our efforts can shape alternatives.