
BuzzFeed News recently ran a story about reputation management companies using fake online personas to help their clients cover up convictions for fraud. These firms buy up domains and create personal websites for a crowd of fake professionals (stock photo headshots and all) who share the same name as the client. The idea is that search results for the client’s name will return these websites instead, hiding any news about white-collar crime.

In a sea of profiles with the same name, how do you vet a new hire? Image source: anon617, Flickr CC

This is a fascinating response to a big trend in criminal justice where private companies are hosting mugshots, criminal histories, and other personal information online. Sociologist Sarah Lageson studies these sites, and her research shows that these databases are often unregulated, inaccurate, and hard to correct. The result is more inequality as people struggle to fix their digital history and often have to pay private firms to clean up these records. This makes it harder to get a job, or even just to move into a new neighborhood.

The BuzzFeed story shows how this pattern flips for wealthy clients, whose money goes toward making information about their past difficult to find and difficult to trust. Beyond the criminal justice world, this is an important point about the sociology of deception and “fake news.” The goal is not necessarily to fool people with outright deception, but to create just enough uncertainty so that it isn’t worth the effort to figure out whether the information you have is correct. The time and money that come with social class make it easier to navigate uncertainty, and we need to talk about how those class inequalities can also create a motive to keep things complicated in public policy, the legal system, and other large bureaucracies.

Evan Stewart is an assistant professor of sociology at University of Massachusetts Boston. You can follow him on Twitter.

As summer approaches and ads for part-time student work start popping up all over campus, it is a good time to talk about the sociology of sales. The Annex podcast recently ran a segment on multi-level marketing (MLM) organizations, and I just finished the binge-worthy podcast series The Dream, which follows the history of these companies and the lives of people who sell their products.

Photo Credit: Retrogasm, Flickr CC

Sometimes called direct sales or network marketing, these organizations offer part-time, independent work selling everything from handbags to health supplements. The tricky part is that many of these groups spend more time encouraging people to recruit friends and family to sell than they spend moving products through traditional retail markets. People draw on their nearby social networks to make sales and earn bonuses, often by hosting parties or meeting in small groups.

You might have seen pitches for one of these groups at your local coffee shop or campus. Some MLMs get busted for using this model to build illegal pyramid schemes, while other direct sales companies claim to follow the law by providing employee protections.

Photo Credit: Neo_II, Flickr CC

MLMs are a rich example for all kinds of sociology. You could do an entire Introduction to Sociology class branching out from this case alone! Here are a few examples that The Dream inspired for me (find episodes here):

  • Economic sociologists can talk about the rise of precarious labor and the gig economy—conditions where more people feel like they need to be entrepreneurs just to survive. MLMs are particularly good at using these social conditions for recruitment.
  • Sociologists of gender will have a lot to say about how these groups recruit women, targeting our gendered assumptions about who needs part-time, flexible work and who is best suited to do the emotional work of sales. Pair readings with Episode 2: “Women’s Work.”
  • I’ve seen a fair number of MLM pitches in coffee shops and accidentally walked into a few in college. Watching these pitches is a masterclass in symbolic interactionism, and students can see how people build rapport with each other through face work and sales parties as rituals. Pair with Episode 3: “Do you party?” 
  • Many of these companies are either religiously-affiliated or lean on religious claims to inspire and motivate recruits. Sociologists of religion and culture can do a lot with the history of the New Thought movement. Pair The Protestant Ethic with Episode 4: “The Mind is a Fertile Field.”
  • Political sociologists can use the history of how these groups get around regulation to talk about corporate influence in the political world and how elites coordinate. Sociologists of law will also love the conversation about legitimacy, especially how direct sales organizations learned to distinguish themselves from “clearly illegal pyramid schemes.” Pair with Episode 7: “Lazy, Stupid, Greedy or Dead.”

This is a great focus topic for the social sciences, both because it touches on so many trends in U.S. culture and the economy and because college students and recent graduates are a target market for many of these groups.

Evan Stewart is an assistant professor of sociology at University of Massachusetts Boston. You can follow him on Twitter.

The rise of online shopping during the holiday season highlights some pretty Grinchy behavior. Local news outlets and home security companies have been trumpeting market research about so-called “porch pirates” swiping deliveries before people can get home from work or school to bring them inside.

Most of the current solutions for package security aren’t that great. If you don’t feel comfortable trusting Amazon or some other company to remotely run your door locks for deliveries (or if you live in an apartment building without a fancy mailroom), getting packages can be a gamble unless you can route them to a secure delivery site. If someone wants to send you a gift with all the warm intentions of a classic Christmas tradition, their surprise could end up costing everyone a lot more time, money, and stress.

That friction between the idea of the gift and the gift itself is a great example of sociological theory at work. Pierre Bourdieu wrote about gift exchanges throughout his work, especially the idea that giving a gift has a “double truth.” People want to show kindness and generosity, expecting nothing in return, but gifts are still exchanged in relationships. That exchange implicitly demands some things: your thanks, your continued commitment to the relationship, and often a different gift at a different time. This seems like a contradiction, but both things can be true because there are different styles of gift-giving tied to time and place. Reciprocate too quickly and you look like you are trying to settle the debt and move on. Respond too slowly, and it looks like you have forgotten your loved ones.

To betray one’s haste to be free of an obligation one has incurred, and thus to reveal too overtly one’s desire to pay off services rendered or gifts received, so as to be quits, is to denounce the initial gift… It is all a question of style, which means in this case timing and choice of occasion, for the same act – giving, giving in return, offering one’s services, paying a visit, etc. – can have completely different meanings at different times, coming as it may at the right or the wrong moment… (Outline of a Theory of Practice, 1977, pp. 5-6)

Package pirates put a whole new strain on our relationships around special occasions. Now, if someone mails you a gift, accepting it gracefully might also mean being responsible for its security. What happens if your building says it will not be liable for delivered packages, or your work schedule does not get you home in time to receive them? Do you sound ungrateful if you complain about these things or ask not to receive gifts?

On the other hand, it may also become much ruder to send someone a holiday surprise without a heads-up first. It is worth asking whether we are putting the idea of sending a gift ahead of the actual experience of our loved ones receiving it.

This time of year, we often say “it’s the thought that counts.” If that’s true, we might have to think carefully about some of the social norms for sending gifts until the shipping industry can catch up.

Evan Stewart is an assistant professor of sociology at University of Massachusetts Boston. You can follow him on Twitter.

By now, you’ve probably heard about the family separation and detention policies at the U.S. border. The facts are horrifying.

Recent media coverage has led to a flurry of outrage and debate about the origins of this policy. It is a lot to take in, but this case also got me thinking about an important lesson from sociology for following politics in 2018: we’re not powerless in the face of “fake news.”

Photo Credit: Fibonacci Blue, Flickr CC

Political sociologists talk a lot about framing—the way movements and leaders select different interpretations of an issue to define and promote their position. Frames are powerful interpretive tools, and sociologists have shown how framing matters for everything from welfare reform and nuclear power advocacy to pro-life and labor movements.

One of the big assumptions in framing theory is that leaders coordinate. There might be competition to establish a message at first, but actors on the same side have to get together fairly quickly to present a clean, easy-to-understand “package” of ideas to people in order to make political change.

The trick is that it is easy to get cynical about framing, to think that only powerful people get to define the terms of debate. We assume that a slick, well-funded media campaign will win out, and any counter-frames will get pushed to the side. But the recent uproar over border separation policies shows how framing can be a very messy process. Over just a few days, administration officials and border authorities offered a shifting set of competing frames for the policy.

We don’t know how this issue is going to turn out, but many of these frames have been met with skepticism, more outrage, and plenty of counter-evidence. Calling out these frames alone is not enough; it will take mobilization, activism, lobbying, and legislation to change these policies. Nevertheless, this is an important reminder that framing is a social process, and, especially in an age of social media, it is easier than ever to disrupt a political narrative before it has the chance to get organized.

Evan Stewart is an assistant professor of sociology at University of Massachusetts Boston. You can follow him on Twitter.

The recent controversial arrests at a Philadelphia Starbucks, where a manager called the police on two Black men who had only been in the store a few minutes, are an important reminder that bias in the American criminal justice system creates both large-scale, dramatic disparities and little, everyday inequalities. Research shows that common misdemeanors are a big part of this, because fines and fees can pile up on people who are more likely to be policed for small infractions.

A great example is the common traffic ticket. Some drivers who get pulled over get a ticket, while others get let off with a warning. Does that discretion shake out differently depending on the driver’s race? The Stanford Open Policing Project has collected data on over 60 million traffic stops, and a working paper from the project finds that Black and Hispanic drivers are more likely to be ticketed or searched at a stop than white drivers.

To see some of these patterns in a quick exercise, we pulled the project’s data on over four million stop records from Illinois and over eight million records from South Carolina. These charts are only a first look—we split the recorded outcomes of stops across the different codes for driver race available in the data and didn’t control for additional factors. However, they give a troubling basic picture about who gets a ticket and who drives away with a warning.
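For readers who want to try a version of this split themselves, here is a minimal sketch in Python with pandas. The file name and the column labels (“driver_race”, “stop_outcome”) are assumptions about the Stanford Open Policing Project downloads, not a guarantee; check the codebook for the release you use, since newer versions label these fields differently.

```python
import pandas as pd

# Load one statewide file from the Stanford Open Policing Project.
# The file and column names here are assumptions -- verify them against
# the project's codebook for the version you download.
stops = pd.read_csv("il_statewide.csv", usecols=["driver_race", "stop_outcome"])

# Cross-tabulate stop outcomes by driver race as row percentages, so each
# race group's outcomes (citation, warning, etc.) sum to 100.
outcome_by_race = (
    pd.crosstab(stops["driver_race"], stops["stop_outcome"], normalize="index") * 100
)

print(outcome_by_race.round(1))
```

As with the charts, this is only a descriptive tabulation; it does not adjust for where, when, or why drivers were stopped.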

[Charts: share of stops ending in a ticket versus a warning, by driver race, Illinois and South Carolina]

These charts show more dramatic disparities in South Carolina, but a larger proportion of white drivers who were stopped got off with warnings (and fewer got tickets) in Illinois as well. In fact, with millions of observations in each data set, differences of even a few percentage points can represent thousands, even tens of thousands, of drivers; a two percentage-point gap across a million stops is 20,000 people. Think about how much revenue those tickets bring in, and who has to pay them. In the criminal justice system, the little things can add up quickly.

The Washington Post has been collecting data on documented fatal police shootings of civilians since 2015, and they recently released an update to the data set with incidents through the beginning of 2018. Over at Sociology Toolbox, Todd Beer has a great summary of the data set and a number of charts on how these shootings break down by race.

One of the main policy reforms suggested to address this problem is body cameras—the idea being that video evidence will reduce the number of killings by monitoring police behavior. Of course, not all police departments implement these cameras, and their impact may be quite small. One way to address these problems is public visibility and pressure.

So, how often are body cameras incorporated into incident reporting? Not that often, it turns out. I looked at all the shootings of unarmed civilians in The Washington Post’s dataset, flagging the ones where news reports indicated a body camera was in use. The measure isn’t perfect, but it lends some important context.
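As a rough sketch of how a count like this can be tallied, here is a short Python and pandas example. It assumes the CSV published in The Washington Post’s GitHub repository and its “armed” and “body_camera” fields; treat the file name and the exact codes as assumptions to verify against the Post’s data dictionary.

```python
import pandas as pd

# Fatal police shootings data released by The Washington Post.
# The file name and the "armed" / "body_camera" columns are assumptions --
# confirm them against the repository's data dictionary.
shootings = pd.read_csv("fatal-police-shootings-data.csv")

# Keep incidents where the civilian was recorded as unarmed.
unarmed = shootings[shootings["armed"] == "unarmed"]

# Count cases where a body camera was logged. The comparison works whether
# the column is parsed as booleans or as "True"/"False" strings.
n_total = len(unarmed)
n_camera = (unarmed["body_camera"].astype(str).str.lower() == "true").sum()

print(f"{n_camera} of {n_total} unarmed cases ({n_camera / n_total:.0%}) logged a body camera")
```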

[Chart: body cameras logged in fatal police shootings of unarmed civilians]

Body cameras were only logged in 37 of 219 cases—about 17% of the time—and a log doesn’t necessarily mean the camera present was even recording. Sociologists know that organizations are often slow to implement new policies, and they don’t often just bend to public pressure. But there also hasn’t been much change in how often body cameras show up in this reporting over time, and this highlights another potential stumbling block as we track efforts for police reform.

Evan Stewart is an assistant professor of sociology at University of Massachusetts Boston. You can follow him on Twitter.

Over at Family Inequality, Phil Cohen has a list of demographic facts you should know cold. They include basic figures like the US population (326 million) and how many Americans have a BA or higher (30%). These got me thinking—if we want to have smarter conversations and fight fake news, it is also helpful to know which way things are moving. “What’s Trending?” is a post series at Sociological Images with quick looks at what’s up, what’s down, and what sociologists have to say about it.

The Crime Drop

You may have heard about a spike in the murder rate across major U.S. cities last year. It was a key talking point for the Trump campaign on policing policy, but it may also be leveling off. Social scientists can help put this bounce into context: violent and property crime in the U.S. has been falling for the past twenty years.

You can read more on the social sources of this drop in a feature post at The Society Pages. Neighborhood safety is a serious issue, but the data on crime rates doesn’t always support the drama.

Evan Stewart is an assistant professor of sociology at University of Massachusetts Boston. You can follow him on Twitter.

Originally posted at Scatterplot.

There has been a lot of great discussion, research, and reporting on the promise and pitfalls of algorithmic decisionmaking in the past few years. As Cathy O’Neil nicely shows in her Weapons of Math Destruction (and associated columns), algorithmic decisionmaking has become increasingly important in domains as diverse as credit, insurance, education, and criminal justice. The algorithms O’Neil studies are characterized by their opacity, their scale, and their capacity to damage.

Much of the public debate has focused on a class of algorithms employed in criminal justice, especially in sentencing and parole decisions. As scholars like Bernard Harcourt and Jonathan Simon have noted, criminal justice has been a testing ground for algorithmic decisionmaking since the early 20th century. But most of these early efforts had limited reach (low scale), and they were often published in scholarly venues (low opacity). Modern algorithms are proprietary, and are increasingly employed to decide the sentences or parole decisions for entire states.

“Code of Silence,” Rebecca Wexler’s new piece in Washington Monthly, explores one such influential algorithm: COMPAS (also the subject of an extensive, if contested, ProPublica report). Like O’Neil, Wexler focuses on the problem of opacity. The COMPAS algorithm is owned by a for-profit company, Northpointe, and the details of the algorithm are protected by trade secret law. The problems here are both obvious and massive, as Wexler documents.

Beyond the issue of secrecy, though, one issue struck me in reading Wexler’s account. One of the main justifications for a tool like COMPAS is that it reduces subjectivity in decisionmaking. The problems here are real: we know that decisionmakers at every point in the criminal justice system treat white and black individuals differently, from who gets stopped and frisked to who receives the death penalty. Complex, secretive algorithms like COMPAS are supposed to help solve this problem by turning the process of making consequential decisions into a mechanically objective one – no subjectivity, no bias.

But as Wexler’s reporting shows, some of the variables that COMPAS considers (and apparently considers quite strongly) are just as subjective as the process it was designed to replace. Questions like:

Based on the screener’s observations, is this person a suspected or admitted gang member?

In your neighborhood, have some of your friends or family been crime victims?

How often do you have barely enough money to get by?

Wexler reports on the case of Glenn Rodríguez, a model inmate who was denied parole on the basis of his puzzlingly high COMPAS score:

Glenn Rodríguez had managed to work around this problem and show not only the presence of the error, but also its significance. He had been in prison so long, he later explained to me, that he knew inmates with similar backgrounds who were willing to let him see their COMPAS results. “This one guy, everything was the same except question 19,” he said. “I thought, this one answer is changing everything for me.” Then another inmate with a “yes” for that question was reassessed, and the single input switched to “no.” His final score dropped on a ten-point scale from 8 to 1. This was no red herring.

So what is question 19? The New York State version of COMPAS uses two separate inputs to evaluate prison misconduct. One is the inmate’s official disciplinary record. The other is question 19, which asks the evaluator, “Does this person appear to have notable disciplinary issues?”

Advocates of predictive models for criminal justice use often argue that computer systems can be more objective and transparent than human decisionmakers. But New York’s use of COMPAS for parole decisions shows that the opposite is also possible. An inmate’s disciplinary record can reflect past biases in the prison’s procedures, as when guards single out certain inmates or racial groups for harsh treatment. And question 19 explicitly asks for an evaluator’s opinion. The system can actually end up compounding and obscuring subjectivity.

This story was all too familiar to me from Emily Bosk’s work on similar decisionmaking systems in the child welfare system, where caseworkers must answer similarly subjective questions about parental behaviors and problems in order to produce a seemingly objective score that informs decisions about removing children from the home in cases of abuse and neglect. A statistical scoring system that takes subjective inputs (and it’s hard to imagine one that doesn’t) can’t produce a perfectly objective decision. To put it differently: this sort of algorithmic decisionmaking replaces your biases with someone else’s biases.
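COMPAS itself is proprietary, so the following is a purely hypothetical sketch in Python, with invented inputs and weights rather than Northpointe’s actual model. It only illustrates the general point above: when one heavily weighted item is a subjective yes/no judgment, two otherwise identical records can land far apart on a ten-point scale.

```python
# A purely hypothetical weighted score -- NOT the actual COMPAS model, which
# is proprietary. Inputs and weights are invented for illustration only.

def toy_risk_score(prior_convictions: int,
                   official_infractions: int,
                   evaluator_sees_disciplinary_issues: bool) -> int:
    """Return a 1-10 score mixing record-based inputs with one judgment call."""
    score = 1
    score += min(prior_convictions, 3)        # capped contribution from the record
    score += min(official_infractions, 2)     # capped contribution from the record
    if evaluator_sees_disciplinary_issues:    # a subjective impression, not a record
        score += 6                            # heavily weighted, like "question 19"
    return min(score, 10)

# Two people with identical records; only the evaluator's impression differs.
print(toy_risk_score(1, 0, evaluator_sees_disciplinary_issues=True))   # 8
print(toy_risk_score(1, 0, evaluator_sees_disciplinary_issues=False))  # 2
```

The specific numbers are beside the point; the seemingly objective output simply inherits whatever subjectivity sits in its inputs.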

Dan Hirschman is a professor of sociology at Brown University. He writes for scatterplot and is an editor of the ASA blog Work in Progress. You can follow him on Twitter.