If you’re not busy and are interested in democratic outcomes, you should really read this important piece by Ben Page and Martin Gilens.

The authors test four preeminent theories of democratic influence, each holding that a different set of actors has disproportionate influence in the American political system (average voters, economic elites, general interest groups, and business-oriented interest groups). Here’s the takeaway:

Economic elite policy preferences strongly correlate with “average” citizen policy preferences, but aggregated interest group preferences do not. Business interest group preferences do not always align with economic elite preferences (economic elites want government spending reduced across the board, while business interest groups want spending maintained in their own areas of interest).

When it comes to policy outcomes, economic elites and interest groups have the most influence…

a proposed policy change with low support among economically elite Americans (one-out-of-five in favor) is adopted only about 18 percent of the time, while a proposed change with high support (four-out-of-five in favor) is adopted about 45 percent of the time. Similarly, when support for policy change is low among interest groups (with five groups strongly opposed and none in favor) the probability of that policy change occurring is only .16, but the probability rises to .47 when interest groups are strongly favorable (see the bottom two panels of Figure 1.)

This is an empirical confirmation of my “NCAA Tournament” view of American politics. The “3 seed” usually beats the “14th seed,” but not always. A good way of measuring democratic health is how often “bracket busters” occur.

The Journal of Integrated Social Sciences (JISS) is searching for a new Political Science editor. The journal is a web-based, peer-reviewed international journal committed to the scholarly investigation of social phenomena.

In particular, JISS aims predominantly to publish work within the following social science disciplines: Psychology, Political Science, Sociology, and Gender Studies. A further goal of JISS is to encourage work that unites these disciplines by being (a) interdisciplinary, (b) holistically oriented, or (c) attentive to the transformative (developmental) nature of social phenomena. Aside from the theoretical implications of a particular study, we are also interested in serious reflection on the specific methodology employed – and its implications for the results. JISS encourages undergraduate and graduate students to submit their best work under the supervision of a faculty sponsor. More details can be found at www.jiss.org.

General responsibilities include:

• The day-to-day running of the journal’s political science editorial office, including managing article peer review, liaising with authors, editing articles, and preparing editorial copy.
• Contributing to strategic development of the Journal
• Attracting submissions and themed issue proposals to the journal to ensure continued relevance and quality of content
• Promotional activities, including attending conferences

To apply, send a statement outlining your reasons for seeking the position and your overall objectives as Political Science Editor of JISS.

To discuss further or submit an application, please contact Dr. Jose Marichal (current Political Science Divisional Editor of JISS) ~ marichal@clunet.edu.

The following post is by Ryan Larson ’14, a senior sociology major at Concordia College. He loves sports of all kinds, plays jazz sax, and will begin a graduate program in sociology in the fall.

With the NCAA Men’s Basketball Tournament starting today, the media are alight with predictions as to who will cut down the nets on April 7th. The annual phenomenon of penciling in winners on tens of millions of brackets has a new twist this year: a billion-dollar prize, offered by Quicken Loans, the Detroit mortgage lender, with the backing of Warren E. Buffett, to anyone who fills out a perfect 2014 tournament bracket. The prize money will be paid out in 40 annual payments of $25 million, or a one-time lump sum of $500 million. But how likely is a perfect bracket to surface?


In all likelihood, it won’t. No perfect bracket has surfaced to date, and the advent of Internet-based bracket filling makes this much easier to track. For example, in the 16 years of the ESPN online bracket challenge, not one entry has been perfect (the same holds for the other Internet-based hosts). Jeff Bergen, Professor of Mathematics at DePaul University, says the odds of randomly picking a perfect bracket are 1 in 9,223,372,036,854,775,808 (that is, 2^63: the probability of getting all 63 games right is the product of the probabilities of getting each one right, which for a coin flip is 50 percent per game). If everyone on earth filled out 100 brackets a year, it would theoretically take 13 million years to get a perfect bracket. In sum, the one prediction worth much credence is that Buffett won’t have to part with his billion.

However, not all NCAA March Madness contests are 50/50 coin flips. A No. 1 seed has never lost to a No. 16 seed, which makes those games easier to predict than, say, the Final Four contests. Incorporating just this one kind of information, University of Minnesota Professor of Biostatistics Brad Carlin put the odds at more like “1 in 128 billion.” This estimate is based solely on the probability of calling games correctly in each round: first-round accuracy ranges from 51 percent for the No. 8 vs. No. 9 game to 100 percent for the No. 1 vs. No. 16 game; second-round games can be called with 65 percent accuracy; Sweet Sixteen games with 60 percent; and every game from the Elite Eight through the final is a 50/50 proposition. To put this in perspective, even after incorporating these conditions, your odds of being killed by a vending machine are higher than your odds of picking a perfect bracket.
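The arithmetic behind both estimates is easy to sketch. A minimal Python example: the first calculation is the exact coin-flip count of 2^63 outcomes; the second uses *purely illustrative* per-round accuracies in the spirit of Carlin's approach (they are not his actual figures, so the resulting odds only land in the same general ballpark):

```python
from math import prod

# Coin-flip model: 63 games, each a 50/50 guess.
naive_outcomes = 2 ** 63
print(naive_outcomes)  # 9223372036854775808

# Seed-aware model (illustrative accuracies, NOT Carlin's exact numbers):
# 8 first-round matchups per region (1v16 down to 8v9), four regions,
# then flatter accuracies for the later rounds.
first_round = [1.00, 0.90, 0.85, 0.80, 0.75, 0.70, 0.65, 0.51] * 4
later_rounds = [0.65] * 16 + [0.60] * 8 + [0.50] * 7
p_perfect = prod(first_round + later_rounds)
print(f"about 1 in {1 / p_perfect:,.0f}")
```

Even with generously optimistic accuracy assumptions, the product of 63 probabilities below 1 stays astronomically small — which is the whole point.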

All hope is not lost (although it’s pretty close). Implementing statistical modeling techniques on historical tournament data can help increase your chances of picking games correctly (albeit at a very modest rate). Arguably the most popular model is that of former New York Times, now ESPN prognosticator Nate Silver. Silver and his team at FiveThirtyEight are in their fourth year of building a model to pick the winners of the March Madness contests. The model is primarily based (weighted at 5/7) on a composite of computer college basketball rankings. These computer-based rankings are combined with two human-based metrics (the remaining 2/7): the NCAA selection committee’s S-Curve and preseason rankings from the Associated Press and the coaches (used as an indicator of “underlying player and coaching talent”). Additionally, Silver and his team adjust for injuries and player suspensions (using a statistic called win shares) and for travel distance. Silver then simulates the tournament thousands of times to obtain each team’s predicted probability of advancing in each round (an interactive graphic with the final model can be found here).

What other factors influence win probability? Other inquiry has backed up Silver’s notion that rankings matter, and that season performance (wins – particularly away wins – and offensive scoring) and historical team performance (Final Four appearances, championships) can also lend some predictive insight. Ken Pomeroy’s predictive rankings are also very popular (and are incorporated into Silver’s model), although the details of his methods sit behind a paywall. His models highlight strength of schedule as an important factor in the equation. Additionally, ESPN’s Basketball Power Index (BPI), created by Alok Pattani and Dean Oliver, accounts for final score, pace of play, site, strength of opponent, and absence of key players in every Division I men’s game (a new addition to Silver’s model this year). However, the inclusion of these metrics in a regression equation rarely gets you much more predictive prowess than a coin toss (R2 = .5).

Although modeling can give you valuable insight into your office bracket pool, it will not produce a perfect bracket without a large amount of luck coming your way. Sports do contain a great deal of systematic variation, but the healthy dose of random variation is what makes prediction difficult and athletic contests beloved. When filling out your brackets this year, data-driven analysis should give you a leg up you wouldn’t have had otherwise. Listen to what the fox has to say. (For further reading: predictive analytics are also used to predict which teams will be selected for the tournament on Selection Sunday, with surprising accuracy.)

The following is a guest post by Concordia College sociology major Ryan Larson ’14 and continues his series predicting Olympic hockey results.

With the semi-finals set, I have indicated where I got predictions right and wrong. Keep in mind these are probabilities, and upsets are common in Olympic hockey (1980, anyone?). Recall that the models, at best, explained just under a third of the variance; getting more than 50% of the games correct would therefore mean the models are outperforming expectations. These are probabilistic statements: in the case of the Finland–Russia game, we would expect each team to win 50 of 100 hypothetical games between them (so it is no surprise that Finland won). Likewise, Slovenia’s defeat of Austria would have occurred about 30 times out of 100 hypothetical games, and Tuesday morning’s game happened to be one of them. Latvia’s win, on the other hand, was a bit more impressive considering their lack of NHL talent. Note, too, that the models were built to explain medal wins, not necessarily qualification playoffs.

In the following bracket, correct predictions are highlighted and incorrect forecasts are marked in red. Additionally, teams that have been eliminated are crossed out. As for the semi-finals, Sweden’s probability of advancing to the gold medal game increased marginally with Finland’s defeat of host nation Russia. Predicting such rare events (Olympic medal wins), off of small sample sizes (only four previous Games allowed NHL talent to participate), in a sport with a lot of randomness is a difficult endeavor.

Semi-Finals

The following is a second guest post by Concordia College sociology major Ryan Larson ’14. An earlier post describes his models predicting the outcomes of the Olympic hockey tournament. After graduation, Ryan intends to pursue graduate study in sociology and criminology.

With the bracket set, I have decided to apply my models to it to see how well my predictions fare. The previous analysis was completed before the bracket was released. With the bracket seedings now set, there cannot be a 1-2 Canada-USA finish. Therefore, I applied my model predicting whether a team would win any medal to the bracket contests all the way up to the final two games (the gold medal and bronze medal games). For those two games, I used the gold medal model, for reasons discussed in the previous post. Below is the bracket with predictions for each game. Each probability is normalized within each game, so the relayed probability is a team’s chance of winning that game given its particular opponent.
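The normalization step is simple: divide each team's model probability by the sum of the two teams' probabilities in that game, so the two sides sum to 1. A quick sketch (the input probabilities here are made up for illustration, not the model's actual outputs):

```python
def game_win_prob(p_a, p_b):
    """Convert two teams' medal-model probabilities into a head-to-head
    win probability for team A, normalized so the two sides sum to 1."""
    return p_a / (p_a + p_b)

# Hypothetical medal-model probabilities: team A 0.60, team B 0.30.
print(round(game_win_prob(0.60, 0.30), 3))  # 0.667
```

So a team with twice its opponent's medal probability is relayed as a 2-to-1 favorite in that particular game.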

Bracket

The following is a guest post by Concordia College sociology major Ryan Larson ’14. After graduation, Ryan intends to pursue graduate study in sociology and criminology. He is also a huge hockey fan.

Hockey is back at the forefront of the national sports consciousness thanks to T.J. Oshie and his Olympic shootout heroics against host team Russia on Saturday morning. Many in the media have made claims as to which country will obtain the coveted title of world hockey dominance (via a gold medal, which isn’t actually solid gold). However, to what extent are these claims mere speculation?

Oshie celebrates
The Claims

Baseball has long been the hallmark choice for sports analytics, due to its large sample sizes (162-game seasons) and relatively independent events (for a more thorough discussion, I highly recommend Ch. 3 of Nate Silver’s The Signal and the Noise). Recently, analytics has moved into ice hockey, an effort spearheaded by Rob Vollman. Not surprisingly, he has made one of the only claims about who takes home the gold that is peppered with any quantitative substance. Vollman makes the implicit assumption that having many NHL players (and good ones) is an indicator of Olympic success. This makes theoretical sense, as the hegemonic domination of the NHL in the professional hockey market clearly attracts the world’s finest athletic performers. Jaideep Kanungo, in an aptly titled “Hockeynomics” article (following scholarship in Simon Kuper and Stefan Szymanski’s Soccernomics), claims that countries with higher populations (higher likelihood of producing elite talent), gross domestic product (more resources to support player development, such as indoor ice and equipment), and experience (a proxy for country-level support) may give clues to a team’s success in Sochi.

The Data

To evaluate these claims, I channeled my inner Nate Silver and constructed a dataset using the Olympic men’s hockey teams from 1998–2010 (prior to 1998, NHL players were not allowed to participate). I coded each team’s aggregate NHL games played, goaltender games played, goals, assists, points, and all-stars. Additionally, I appended the NHL data with GDP per capita, population, and IIHF World Ranking in each respective competition year (the IIHF World Ranking was instituted in 2003, so I manually calculated the rankings of each country for the 1998 and 2002 Games). The IIHF ranking is utilized as an indicator of international competition success. I also coded whether a team won gold, and whether it won any medal irrespective of its elemental composition. As might be expected, the NHL measures are all highly correlated (Table 1). Therefore, in each analysis I used the NHL metric most highly correlated with the respective dependent variable (specifically, NHL games played for medal wins and all-stars for gold medal wins). For the stats geeks out there, I use a multilevel random-effects probit model structured hierarchically by year. Probit regression models the probabilities of outcomes (here, winning any medal and winning a gold medal). This model deals with the non-independence of the dependent measure for cases in the same Olympic year, because when three teams medal (or one team obtains gold), all others do not. While these analyses have very few cases (n = 52), the dataset is a population of all relevant teams and years (making statistical significance irrelevant).
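For readers wondering what a probit model actually does, the core is just the standard normal CDF: it maps a linear predictor onto a probability between 0 and 1. The sketch below uses made-up coefficients purely for illustration — they are not the fitted values from the models in this post:

```python
import math

def probit_prob(z):
    """Standard normal CDF, Phi(z): the probability a probit model
    assigns to a linear-predictor value z."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical coefficients (NOT the fitted model): intercept -2.0,
# plus 0.15 per aggregate NHL player on the roster.
intercept, b_nhl = -2.0, 0.15
p_10 = probit_prob(intercept + b_nhl * 10)
p_11 = probit_prob(intercept + b_nhl * 11)
# The difference is the (discrete) marginal effect of one more NHL player.
print(round(p_11 - p_10, 3))
```

Note that because the CDF is nonlinear, the marginal effect of one more player depends on where on the curve a team sits — which is why reported effects like those in Table 2 are evaluated at particular values of the predictors.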

Table 1

The Model

Table 2 depicts each predictor’s effect on the change in probability of success in the Winter Olympics.

Table 2

Looking at Table 2, we can glean three major insights on what best predicts Olympic hockey success:

1. NHL measures are relatively good predictors for Olympic team success. The addition of 1 NHL player increases a team’s probability of winning a medal by 12.9% and the addition of 1 all-star increases a participating country’s probability of winning gold by 13.4%. The NHL measures outperformed other predictors in the models by accounting for about a third of the variation in medal and gold medal wins by themselves. This finding supports the notion that having players with experience in the best league on the planet is crucial for Olympic success. These effects are particularly impressive considering the small size of the population and the fact that these models are predicting relatively rare events.

2. IIHF World Ranking points, GDP per capita, and a country’s population prove to be relatively poor predictors of Olympic medal winning. Compared to the NHL metrics, the other factors in the model were not as predictive. The only measure associated with any substantial probability change was population size in the gold model – and it decreased the probability of winning gold! This finding is most likely a statistical artifact of the small sample size, as only four gold winners were included in the analysis. A possible explanation is the cultural support for hockey (which is outside the scope of these data) in countries with relatively small populations that tend to fare well in these tournaments (the Czech Republic, Sweden). The same explanation most likely holds for GDP per capita as well, and a bigger sample might show positive effects. For the above theoretical reasons, population and GDP per capita were not included in the final model (bringing the pseudo R2 to .25).

3. NHL all-stars are what drive gold medal wins. Olympic play is characterized by preliminary round robins followed by a single-elimination bracket tournament. As far as the NHL metrics are concerned, reaching the podium come tournament’s end is best predicted by the number of NHL players on a country’s roster. However, when predicting the rarer event of a gold medal, all-stars take the predictive lead. In other words, when only four teams remain in the bracket (all of them likely littered with NHL players), it is the team with the most all-stars that has the greatest probability of taking home the title of world champion.

In sum, the models support Vollman’s notion that NHL players matter, and that having very good players (all-stars) is key to winning gold, while the impacts of GDP per capita, IIHF ranking, and population were relatively weak. To fully investigate this notion, a larger sample would be ideal (which may soon become impossible).

Predicting Sochi 2014

Using the above models, I entered the 2014 Olympic teams’ data into the equations (excluding GDP per capita and population from the gold model for the reasons discussed above). Table 3 relays each team’s probability of winning any medal as well as of taking home the gold in Sochi. As illustrated by the pseudo R2 values in Table 2, these predictive models do not account for the majority of the variation, but model fits of .329 (medal) and .25 (gold) are far from nothing. In spite of the small historical sample and the difficulty of picking a winner from among the 12 very best international squads (tight competition), the included predictors should give us a better idea of who will “bring home some hardware” in Sochi, above and beyond the speculation rampant in the media.

Table 3

Much to my chagrin given my love for the Yanks, my models predict that the medalists for the 2014 Winter Olympics are as follows:

Table 4

No time for a think piece today — I have too many buffalo wings to eat, watery beers to drink, and hours of pre-game coverage to pass before my glazed eyes. But I thought I’d share some worthwhile readings for Super Bowl Sunday.

Trying to decide who to root for? Perhaps the political contributions of team owners will sway you? Broncos lean Right, Seahawks lean slightly Left.

Is the NFL ruining football with an ever-more complex rulebook? Yes. Is it to make more money? Most likely.

Is that shiny new stadium going to help your community? Is it worth public money? Al Jazeera says no and no. Sociology Lens reviewed the scholarship on the subject back in November.

The Super Bowl is a festival of gendered marketing. What can sociology tell us about that?

We’ve all heard the concerns about concussions in the NFL (if you haven’t seen Frontline on the subject yet, you must). The Grey Lady’s Frank Bruni has done a great job following this issue and connecting it with larger concerns about violence, greed, and bloodlust. But I was also very fond of the introspective contribution by ThickCulture’s own Jose Marichal.

YouTube Preview Image

This is from John Green’s Crash Course series on US History. It gives an account of the rise of modern US conservatism, though I’m not sure how far conservatives and libertarians will agree with it. I think it’s interesting because it’s useful in framing the current ideological divides. The video starts off with Goldwater and segues to Nixon. While many might argue that current conservatism owes its roots to the founders, or that the video ignores the 1920s (as is evident in some of the YouTube comments), I think Goldwater and the 1960s represent a good point of departure for modern US conservatism, since that era marked the deterioration of the Democratic “Solid South” and set up the current political landscape.

What’s instructive here is the explanation of how policy and politics aren’t independent of popular opinion. Not all of Nixon’s policies were “conservative” (e.g., the EPA), as Nixonian conservatism was embedded in a particular historical circumstance. While the “Silent Majority” that elected Nixon wasn’t happy with the social direction of the country, there was hardly a wholesale reduction of the federal government to pre-WWI levels.

Going beyond the video, I think there are three distinct eras in modern conservatism. The rise of Nixon in 1968 (after he lost the 1960 general election to Kennedy) was a backlash against the counterculture in all of its manifestations. The rise of Reagan (who lost to Ford in the 1976 primaries) was not only a backlash against Carter, but brought together the anti-Communist stance of Goldwater, a move toward laissez-faire economic policy, and social conservatism. Newt Gingrich’s 1994 “Contract with America” (which didn’t feature a socially conservative stance) brought both houses of Congress under GOP control, but it signaled a divide: the “country club” Republicans versus the socially conservative populists. While George W. Bush managed to squeak by in 2000 with the help of the Supreme Court, he had a little more breathing room in 2004, winning with a “War on Terror” = “War in Iraq” message. He managed to hold together a coalition of social conservatives and fiscal conservatives, which fell apart by 2006, as was evident in his nomination of Harriet Miers to the Supreme Court.

The fragmented state of the GOP is an interesting case because the party cannot contain the ideologies of its factions. Strong leadership may remedy this, but perhaps only to a point. The gaps between what the conservative factions want and popular opinion on issues such as taxes, deficits, regulation, income inequality, the minimum wage, abortion and reproductive rights, guns, entitlements, gay marriage, and immigration create too many possible fail points for presidential candidates and legislators.

While the 2016 presidential election is still far off on the horizon, I’m not the first to point out that every Republican who has won the presidency since 1968 (after the South realigned) had his main primary challenger come from the middle:

  • 1968: Nixon, challenged by Nelson Rockefeller
  • 1972: Nixon, challenged by Pete McCloskey
  • 1980: Reagan, challenged by George Bush
  • 1988: George Bush, challenged by Bob Dole
  • 2000: George W. Bush, challenged by John McCain

I’m not sure what a 2016 “most conservative electable candidate” looks like, but given a likely rough primary fight and the swing-state math, the party is not in an enviable position.


I’m back from Haiti.  It was pretty difficult to post from there with no electricity and one laptop for over 12 people.

Before I get into the meat of the service projects, I just want to post my impressions of being back in Port-au-Prince for the first time in nearly three and a half years.

The traffic in Port-au-Prince is just as congested as it was in the last quarter of 2010.  But there were signs of sustainability in the solar panels atop the street lampposts.

But the most obvious change was the absence of rubble and numerous buildings in full or partial ruins.  Some of my colleagues in Hands of Light in Action who had been in the capital city during my absence noted the change in an October visit.

The other noticeable change was the lack of tent cities teeming with earthquake survivors rendered homeless by the seismic catastrophe.  The one near the airport was gone.  On a trip to Pétion-Ville, I didn’t see any evidence of the camp in the Place St. Pierre across from the St. Pierre Church.  Apparently the settlement had been cleared in 2011, an occasion marked by some as a milestone in earthquake recovery.

An article in the HuffPost last April noted that the number of persons in tent camps had declined by 79 percent.  In the months immediately following the quake, the number of people clustered in these deplorable conditions soared to 1.5 million.  The International Organization for Migration issued a report indicating that yearlong rent subsidies had helped some households move out of the settlements into more secure housing.  The report also said that six percent of the departures from the camps were due to evictions, but it didn’t give a reason for the evictions.

In other cases, violence was used to empty out the camps.  I spent my first night in Haiti with a family who resides in Pétion-Ville.  On waking, I ventured outside to see the familiar blue tarps marking flimsy shelters on a steep hillside.  The displaced, like the poor, are with us still.

After three years, I’m about to embark on another trip to Haiti.  This time I’m allied with a California Lutheran University club, SEEdS for Haiti (Students for Enlightenment and the Education of Sustainability), headed by Ryan Glatt, an exercise science major from Simi Valley.  In all, 12 students from CLU will be heading to Haiti to do permaculture and construction projects from Dec. 27 to Jan. 17.  Due to family obligations, I’ll only be spending five days with the projects.

The student club will be working with Hands of Light in Action (HOLIA), a charity that has responded to disasters in Haiti; Washington, Illinois; and Boulder, Colorado.  HOLIA was founded by Nancy Malone, a physical therapist.

More information about the trip is available in a news story on the CLU website at http://www.callutheran.edu/news/news_detail.php?story_id=10202 and in a Maria Sanchez podcast interview with Ryan Glatt at http://mariasanchezshow.com/ryan-glatt-clu/

I’m excited about the opportunity to accompany CLU students on a service project of their own creation, rather than recruiting them to travel-study courses (albeit ones with a service component) that I originated.