I’m not sure what to make of this latest Gallup finding that fewer Americans support reducing immigration than did at the same point last year. According to the survey, the percentage of respondents who supported a decline in immigration rates was equal to the percentage who believed it should stay at the present rate.

What’s puzzling about this finding is that the conventional wisdom holds that anti-immigrant sentiment is typically more pronounced during periods of economic decline and less virulent during times of economic prosperity. Indeed, if you look at the chart below, you find that support for reducing immigration rates increased during the early 1990s, when the economy was in recession, and decreased with the economic upturn of the late 1990s.

Why has support for reducing immigration declined during a period of economic decline? Gallup says:

One reason anti-immigration opinion has diminished somewhat may be that immigration has receded as an issue this year as Americans have focused on the struggling economy and record-high gas prices.

But that’s not consistent with history. It could be that the growing Latino population in the U.S. is driving support for immigration. But these attitudes change only slightly when the data are broken down by racial and ethnic group.

Are we coming to some sort of sophisticated understanding of the effects of globalization? Are we experiencing a backlash to the backlash of anti-immigration sentiment post-9/11? A recalibration of our attitudes toward the rest of the world?

Edge.org has a wonderful symposium on reactions to Chris Anderson’s Wired article on The End of Theory. What strikes me from reading the symposium is the lack of regard for inductive methodologies as “science.” The presumption is that what Richard Fenno called “soaking and poking” is something new in the world of science. Traditionally in my discipline, it has been thought of as a prelude to the real work of hypothesis testing.

What strikes me as fascinating is the ability of “computing in the cloud” to hyper-soak and poke. Kevin Kelly uses some interesting examples from Google about this potential.

It may turn out that tremendously large volumes of data are sufficient to skip the theory part in order to make a predicted observation. Google was one of the first to notice this. For instance, take Google’s spell checker. When you misspell a word when googling, Google suggests the proper spelling. How does it know this? How does it predict the correctly spelled word? It is not because it has a theory of good spelling, or has mastered spelling rules. In fact Google knows nothing about spelling rules at all.

Instead Google operates a very large dataset of observations which show that for any given spelling of a word, x number of people say “yes” when asked if they meant to spell word “y.” Google’s spelling engine consists entirely of these datapoints, rather than any notion of what correct English spelling is. That is why the same system can correct spelling in any language.

In fact, Google uses the same philosophy of learning via massive data for their translation programs. They can translate from English to French, or German to Chinese by matching up huge datasets of humanly translated material. For instance, Google trained their French/English translation engine by feeding it Canadian documents which are often released in both English and French versions. The Googlers have no theory of language, especially of French, no AI translator. Instead they have zillions of datapoints which in aggregate link “this to that” from one language to another.

Once you have such a translation system tweaked, it can translate from any language to another. And the translation is pretty good. Not expert level, but enough to give you the gist. You can take a Chinese web page and at least get a sense of what it means in English. Yet, as Peter Norvig, head of research at Google, once boasted to me, “Not one person who worked on the Chinese translator spoke Chinese.” There was no theory of Chinese, no understanding. Just data. (If anyone ever wanted a disproof of Searle’s riddle of the Chinese Room, here it is.)
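To make Kelly’s point concrete, here is a toy sketch of what a purely data-driven spell checker looks like. This is my illustration, not Google’s actual system: there are no spelling rules anywhere, only counts of which suggestion users accepted for each misspelling, and the little query log below is invented.

```python
from collections import Counter, defaultdict

# Hypothetical query log: (what the user typed, the suggestion they accepted).
accepted = [
    ("recieve", "receive"), ("recieve", "receive"), ("recieve", "relieve"),
    ("teh", "the"), ("teh", "the"), ("teh", "ten"),
]

# Tally how often each suggestion was accepted for each misspelling.
votes = defaultdict(Counter)
for typed, chosen in accepted:
    votes[typed][chosen] += 1

def suggest(word):
    """Return the correction users accepted most often, or the word unchanged."""
    if word in votes:
        return votes[word].most_common(1)[0][0]
    return word

print(suggest("recieve"))  # receive
print(suggest("teh"))      # the
print(suggest("tacit"))    # tacit (never observed, so left alone)
```

Scale that log up to billions of queries and you have, in caricature, the engine Kelly is describing: pure soaking and poking, no theory of spelling in sight.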

This is no doubt true when it comes to social science, where we are notoriously dreadful at prediction. It is not so true for meaning making, science’s other core purpose. Here’s Bruce Sterling’s amusing rejoinder to Kelly’s observations, which rightly mocks the view that theory will become obsolete.

Surely there are other low-hanging fruit that petabytes could fruitfully harvest before aspiring to the remote, frail, towering limbs of science. (Another metaphor—I’m rolling here.)

For instance: political ideology. Everyone knows that ideology is closely akin to advertising. So why don’t we have zillionics establish our political beliefs, based on some large-scale, statistically verifiable associations with other phenomena, like, say, our skin color or the place of our birth?

The practice of law. Why argue cases logically, attempting to determine the facts, guilt or innocence? Just drop the entire legal load of all known casework into the petabyte hopper, and let algorithms sift out the results of the trial. Then we can “hang all the lawyers,” as Shakespeare said. (Not a metaphor.)

Love and marriage. I can’t understand why people still insist on marrying childhood playmates when a swift petabyte search of billions of potential mates worldwide is demonstrably cheaper and more effective.

Investment. Quanting the stock market has got to be job one for petabyte tech. No human being knows how the market moves—it’s all “triple witching hour,” it’s mere, low, dirty superstition. Yet surely petabyte owners can mechanically out-guess the (only apparent) chaos of the markets, becoming ultra-super-moguls. Then they simply buy all of science and do whatever they like with it. The skeptics won’t be laughing then.

Chris Anderson has an interesting, if somewhat strange, article in WIRED in which he claims we are arriving at the “end of theory.” He makes the case that massive amounts of data (what he calls the Petabyte era) make the scientific method obsolete. The lightning-fast processing speed and massive storage capacity of modern computing allow data collection and analysis at volumes that make pattern matching a far more viable approach to knowledge creation than hypothesis testing.

There is now a better way. Petabytes allow us to say: “Correlation is enough.” We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot.
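For what it’s worth, the workflow Anderson describes is easy to caricature in a few lines: dump a big matrix of variables into the machine, rank every pairwise correlation, and see what floats to the top, with no hypothesis in sight. The data below are simulated and the sketch is mine, not Anderson’s.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(10_000, 50))            # 10,000 "observations," 50 variables
data[:, 1] = 0.8 * data[:, 0] + rng.normal(scale=0.6, size=10_000)  # plant one real relationship

corr = np.corrcoef(data, rowvar=False)          # 50 x 50 correlation matrix
i, j = np.triu_indices_from(corr, k=1)          # each variable pair once
order = np.argsort(-np.abs(corr[i, j]))         # strongest correlations first

for a, b in zip(i[order][:5], j[order][:5]):
    print(f"var{a} ~ var{b}: r = {corr[a, b]:+.3f}")

# The catch Anderson glosses over: with enough variable pairs, some will look
# correlated by chance alone, so "no models" still requires multiple-testing care.
```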

While the poor guy is getting shellacked on the comment boards, he’s on to something. He probably overstates his case for the natural sciences, but his argument is more telling for the social sciences. If theory, even universal theory, about human behavior is time bound and context dependent, and society is innovating and changing at an exponentially rapid pace, then what good is universal theory?

Bent Flyvbjerg’s wonderful book Making Social Science Matter makes a related but different argument about the shortcomings of applying scientific principles to social science. He argues for an emphasis in social science on phronesis, or knowledge of the “art of living,” rather than episteme, or knowledge for its own sake. Here’s a telling passage from an essay derived in part from his book.

Regardless of how much we let mathematical and statistical modeling dominate the social sciences, they are unlikely to become scientific in the natural sciences sense. This is so because the phenomena modelled are social, and thus “answer back” in ways natural phenomena do not.

This is the guiding principle behind my own thinking about race scholarship. It is much more instructive for us to guide our scholarship toward knowledge that enhances the art of living in a multicultural democracy than toward the quixotic search for some universal law of race relations.

Lapinski and Huber have an interesting article in the latest issue of Perspectives on Politics. In it they challenge Tali Mendelberg’s argument about implicit racial appeals. Mendelberg argues that direct racial appeals do not work in contemporary society because of the norm of racial equality. Lapinski and Huber claim that implicit appeals are no more effective than explicit racist appeals among the public. Their case is particularly compelling for less educated voters. For this group, they found that:

For these citizens, explicit appeals therefore do not generate the egalitarian counter-reaction that inhibits racial priming.

This Real News Network video featuring interviews with West Virginia voters before the Democratic primary in that state provides some anecdotal evidence that subtlety in racial appeals is not necessary for some constituents.

What I have less trouble buying wholesale is that more highly educated voters are not affected by racial appeals. Their argument for why this is the case is that educated people

already bring their racial resentment to bear in expressing policy opinions on important issues that might otherwise be vulnerable to racialization.

They claim that more highly educated people are more likely to “self-prime,” or bring racial attitudes into their policy decision-making regardless of the types of appeals made. I’m less inclined to buy this argument. I think the type of issue examined has an impact on how much priming effects matter. I also wonder what effect age has on priming effects.

This article by Jeffrey Grogger of the University of Chicago, highlighted on the Freakonomics blog, estimates the cost of “sounding black” to be a 10 percent decline in wages after controlling for other factors. What’s impressive about this study is the methodology used to derive the 10 percent figure:

How does Grogger know who “sounds black?” As part of a large longitudinal study called the National Longitudinal Survey of Youth, follow-up validation interviews were conducted over the phone and recorded.

Grogger was able to take these phone interviews, purge them of any identifying information, and then ask people to try to identify the voices as to whether the speaker was black or white. The listeners were pretty good at distinguishing race through voices: 98 percent of the time they got the gender of the speaker right, 84 percent of white speakers were correctly identified as white, and 77 percent of black speakers were correctly identified as black.

Grogger asked multiple listeners to rate each voice and assigned the voice either to a distinctly white or black category (if the listeners all tended to agree on the race), or an indistinct category if there was disagreement.
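For readers curious how a number like the 10 percent figure gets produced from those voice ratings, here is a hedged sketch of the kind of log-wage regression involved. The variable names and the synthetic data are mine for illustration; this is not Grogger’s actual specification or the NLSY data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data, built only to show the shape of the estimate.
rng = np.random.default_rng(1)
n = 2_000
df = pd.DataFrame({
    "sounds_black": rng.integers(0, 2, n),      # 1 = rated "distinctly black-sounding"
    "years_education": rng.integers(10, 18, n),
    "experience": rng.integers(0, 25, n),
})
df["log_wage"] = (
    1.5
    + 0.08 * df["years_education"]
    + 0.02 * df["experience"]
    - 0.10 * df["sounds_black"]                 # the wage penalty is planted here by construction
    + rng.normal(scale=0.4, size=n)
)

# OLS of log wages on the voice rating plus controls; with log wages, the
# coefficient on a dummy variable reads (approximately) as a percentage gap.
model = smf.ols("log_wage ~ sounds_black + years_education + experience", data=df).fit()
print(model.params["sounds_black"])             # should land near -0.10
```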

It’s nice to have an army of graduate students. 🙂 Here are a few interesting findings, as described on the Freakonomics blog:

* whites who “sound black” earn 6 percent less than other whites (as opposed to 10 percent for blacks who “sound black”).

* blacks who do not “sound black” earn essentially the same as whites.

* sounding “Southern” is almost as bad for your wages as “sounding black.”

These findings seem to reflect where we are in America right now. We are much more open to assimilation, but actual cultural integration is still a high hurdle for most Americans. The majority culture is, in large part, prepared to accept people from historically discriminated-against groups in positions of power and influence as long as they conform to “conventional” society, which includes a “normal” way of speaking and acting. Hence the consternation among elements of the media over “terrorist fist bumps” and angry preachers.

I gave a talk a few days ago about diversity and multiculturalism, and the conversation with the group turned to the “Obama is a Muslim” e-mail. I began to discuss the e-mails as a “smear” when one audience member stopped me and said, “but he is a Muslim, isn’t he?” Of course, five minutes later, that same person was going on about how much they hated Obama’s pastor.

Because Obama seems so incredibly conventional in his presentation, some people struggle to find a way to put him in the conventional “black” box. I think this is much more the case for older Americans than for younger ones who have grown up accustomed to the idea of full assimilation. But even among the young, the idea of “black speech” is associated with part exoticization and part inferiority. Take, for example, the frequent derogatory use of the word “ghetto,” denoting anything that is run down or in disrepair. Of course, my students say, “that’s not about black people, whites can be ghetto too.”

These same students also think that “ghetto is cool” under certain circumstances and at certain times. “Ghetto” is great in the car with the windows rolled up, but not so hot when you are applying for a job.

According to Politico, it looks like Barack Obama will accept the Democratic Party’s nomination outdoors at Invesco Field in Denver rather than indoors at the Pepsi Center. I’m impressed by the Obama campaign’s ability to innovate. This is definitely an interesting spin on the staid convention format. I think they intend it to be read as Obama “breaking out” of a confining arena to welcome those outside the party (i.e. accepting the nomination outdoors). I think the media will play it this way too.

Patrick Ruffini at techPresident has an interesting post about how much credit Barack Obama should receive for allowing protests against his FISA bill support on his website. It brings up interesting questions about the inherent value of dialogue.

there is a danger that we’ll use a superficial semblance of openness to give the Obama campaign a pass on the key issue: whether Obama is actually responding to this protest in any meaningful way. Isn’t that the point of having these tools, after all? That the candidate will actually listen and maybe even modify his policies as a result?

Proponents of deliberative democracy herald the inherent value of talk in fostering civic engagement. However, talk that is not followed by sustained action can also lead to a diminishing of interest in politics. Will we run the risk of “talking ourselves to death” online while major social issues go unaddressed? Or does enough sustained talk, with the threat of action, lead to social change?

02138 magazine has a story on a paper entitled “Hatred and Profits: Getting Under the Hood of the Ku Klux Klan,” by A-list economists Steve Levitt and Roland Fryer. They find that, in the main, the activity of the Klan was more akin to that of the Elks or the Masons than to that of a contemporary terrorist/hate group:

The research sample showed that the average Klan member was better educated and wealthier than the surrounding population. He was also more likely to see the Klan as a fraternity of sorts than as a violent posse. When the two economists uncovered a trove of expense receipts in Pennsylvania, Fryer says, “I thought maybe we’d find something exciting, like rope or guns, but instead they were buying stuff like ice cream.”

I haven’t read the entire article. I’d take issue with the research as presented in the article if it in any way downplays the incidents of real terrorism and brutality in which the Klan has historically engaged. But the article does highlight some fascinating aspects of how the Klan worked. It appears the Klan operated much like a modern pyramid scheme:

The Klan was highly effective at one thing, however: making money for its leaders. Rank-and-file members had to pay joining fees, “realm taxes,” and routine costs like robe purchases. Most of this money made its way to the top via an army of “salesmen,” who took their own cut. Levitt and Fryer calculated that in one year, David Curtis Stephenson, the Grand Dragon of Indiana and 22 other states, took home about $2.5 million (in 2006 dollars). “The Klan was able to bundle hatred with fraternity and make a real sell of it,” says Fryer.

Their study reminds us that racial/nativist hatreds can be exploited for both political and economic gain. I wonder if anyone has looked at contemporary nativist groups and examined their organizational and financial structures.

In preparation for my Race and Politics course this fall semester, I’ve brushed up on the latest work on social desirability bias. The general idea is that we harbor implicitly biased views about other groups that we do not share openly, lest we run afoul of social norms.

The web can provide a safe space for unleashing these implicit biases. One such place where college students can vent their implicit biases is Juicy Campus. A piece in the latest issue of Radar features the controversy over the site’s content. The founder of the site seemed to have innocuous intentions:

“We thought people might talk about what happened at some fraternity party last weekend, or to rank sororities. That sort of thing,” he insists. “And if you look, you’ll definitely find those fun stories. And then there’s a bunch more stuff that we didn’t realize people would use the site for.”

But the site has turned into a dustbin of offensive, unsubstantiated accusations and slurs:

promiscuity, drug abuse, plastic surgery, homosexuality, rape, and eating disorders, along with enough racist, anti-Semitic, and misogynistic invective to make David Duke blanch—that seems to generate the majority of the page views.

I first heard of this site from a student in my Community Development class last semester. What struck me (perhaps it shouldn’t have) is how graphic the comments on the site were. I can remember hearing some pretty graphic stuff in my own college days, but I couldn’t imagine the desire to make such comments public. I suppose that is the point: social networking sites make the private immediately public. Devices like cell phones with SMS technology and sites like Twitter allow you to post your impulses. I wonder how many of the posts on Juicy Campus are infused with alcohol or other drugs. What social networking and participatory culture allow us to do is to be online in the moment. But to me the unanswered question is whether this simply captures a moment of unvarnished racism or sexism, or whether it encourages the creation of routines that support further expression of offensive views?

There has been some good recent scholarship (here and here) in political science challenging the use of the hypothetico-deductive model to explain how race impacts the political process. Traditionally, political scientists have taken race or ethnic identification to logically precede group-based interest-formation and mobilization.

The reality of race and ethnicity is that they are multifaceted, intersectional, and contextual constructs that cannot be captured by survey research that asks respondents to check a box next to the ethnicity with which they identify.

“Attempts by statistical researchers to ‘control for third variables’… ignore the ontological embeddedness or locatedness of entities within actual situational contexts” (Emirbayer 1997, 289).

This is true, but then the question remains: how do you validly and reliably study identity in the political process? One interesting approach might be to apply folksonomies to race questions in political science. Rather than asking people to classify themselves according to the controlled vocabulary of the survey researcher, a folksonomy would allow respondents to use as many self-identifiers as they want to describe themselves. You could then use social network analysis to group respondents into clusters based on the similarity of their self-tagging structures and test whether cluster membership is related to a desired political outcome.
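Here is a rough sketch, with made-up tag sets and a made-up outcome, of what that might look like: compute the overlap (Jaccard similarity) between respondents’ self-chosen identifiers, cluster respondents with similar tag profiles, and then test whether cluster membership is associated with the outcome.

```python
from itertools import combinations
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from scipy.stats import chi2_contingency

# Made-up respondents, each with an open-ended set of self-identifiers.
tags = [
    {"black", "urban", "democrat"},
    {"black", "urban", "renter"},
    {"black", "democrat", "union"},
    {"white", "rural", "evangelical"},
    {"white", "rural", "gun owner"},
    {"white", "evangelical", "veteran"},
]
supported = ["yes", "yes", "yes", "no", "no", "yes"]   # hypothetical political outcome

# Pairwise Jaccard distance between respondents' tag sets.
n = len(tags)
dist = np.zeros((n, n))
for a, b in combinations(range(n), 2):
    overlap = len(tags[a] & tags[b]) / len(tags[a] | tags[b])
    dist[a, b] = dist[b, a] = 1.0 - overlap

# Hierarchical clustering on the distance matrix, cut into two clusters.
clusters = fcluster(linkage(squareform(dist), method="average"), t=2, criterion="maxclust")

# Cross-tabulate cluster membership against the outcome and test for association.
table = np.zeros((2, 2))
for c, outcome in zip(clusters, supported):
    table[c - 1, 0 if outcome == "yes" else 1] += 1
chi2, p, _, _ = chi2_contingency(table)
print(clusters, p)
```

With real survey data the clusters would come from the respondents’ own vocabulary rather than the researcher’s checkboxes, which is the whole appeal of the approach.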