Ugh. I hate the new Facebook. I liked it better without the massive psychological experiments.
Facebook experimented on us in a way that we really didn't like. It's important to frame it that way because, as Jenny Davis pointed out earlier this week, they experiment on us all the time and in much more invasive ways. The ever-changing affordances of Facebook are a relatively large intervention in the lives of millions of people, and yet the outrage over these undemocratic changes never really goes beyond a complaint about the new font or the increased visibility of your favorite movies (mine have been and always will be True Stories and Die Hard). Never before, as Zeynep Tufekci observed, has an organization had the "stealth methods to quietly model our personality, our vulnerabilities, identify our networks, and effectively nudge and shape our ideas, desires and dreams." When we do get mad at Facebook, it always seems to be a matter of unintended consequences or unavoidable external forces: there was justified outrage over changes in privacy settings that initiated unwanted context collapse, and we didn't like the hard truth that Facebook had been releasing user data to governments. Until this week, though, it was never quite so clear just how much unchecked power Facebook has over its 1.01 billion monthly active users. What would governing such a massive sociotechnical system even look like?
I am far from the first person to ask this question. I'm not even the first to ask it in the wake of this most recent revelation. Kate Crawford, writing in The Atlantic, suggests that Facebook implement an opt-in system for experimental testing. That way, users could be presented with extremely clear and concise terms for their participation. I would add that this might even be an opportunity to provide value back to users through small payments for being part of a study. I'm not particularly thrilled with the power dynamics at play, but if I can get paid to take experimental drugs, I think I deserve some money to have my emotions manipulated by computer scientists.
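To make the shape of Crawford's proposal a little more concrete, here is a minimal sketch of what an opt-in consent record with a small payment attached might look like. It's written in Python with entirely hypothetical names and values; nothing about Facebook's actual internal systems is public or assumed here.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class ExperimentConsent:
    """One user's opt-in record for one study (all names here are hypothetical)."""
    user_id: int
    experiment_id: str
    plain_language_terms: str   # the clear, concise terms Crawford calls for
    compensation_usd: float     # a small payment back to the participant
    opted_in_at: Optional[datetime] = None

    def opt_in(self) -> None:
        """Record affirmative, timestamped consent."""
        self.opted_in_at = datetime.now(timezone.utc)

    def is_participant(self) -> bool:
        # No timestamp means no consent, and no consent means no participation.
        return self.opted_in_at is not None


# Usage: a study may only include users with an affirmative, recorded opt-in.
consent = ExperimentConsent(
    user_id=42,
    experiment_id="emotion-study",
    plain_language_terms="We will adjust your News Feed for one week to study mood.",
    compensation_usd=5.00,
)
consent.opt_in()
assert consent.is_participant()
```

The design choice doing the work here is the default: absent an affirmative, timestamped opt-in, a user simply never enters the study population.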
In either case, an opt-in system would still be an in-house solution. Would it be possible, or even desirable, to have external oversight of Facebook's practices? Should the government do it? What about some kind of "user's advocate" position within the company? If the latter were implemented, would we vote on representatives, or would they be chosen through a lottery? And what about very specific and complex issues like this emotion study? Are current institutions enough, or do we need something new? Let's take these questions one by one:
Should the government do it?
This is a deceptively tricky question because, on the one hand, the government already does: through the Securities and Exchange Commission (now that Facebook is publicly traded) and through the Federal Trade Commission (in its role as a consumer protection agency). Back in 2011, when Facebook defaulted a lot of privacy settings to "public," the FTC required that Facebook open itself up to regular audits for "no less than 20 years." On the other hand, trust in government agencies is at an all-time low (no link required), so why would we trust the fox to tell us about the security status of our hen house? Hell, we even rate companies based on how much they protect us from our own government.
Should we have a user advocate position within the company?
This seems like an elegant solution, but a few practical questions make it implausible. This person, if they were to have any real oversight power, would actually need to be in Facebook's offices. That means they would have to leave their current source of income and set up shop in Menlo Park or some other regional office. This would be a full-time job, so their existing employment and requisite compensation would need to be replaced. If these were paid government positions (good luck making that happen), we'd effectively be subsidizing corporations with public money. Being paid by Facebook would be an obvious conflict of interest. Making it an unpaid position would ensure that only independently wealthy Google Glass Explorers would run for office or accept the position if they "won" the lottery. Also, given Facebook's global reach, how would we handle language barriers? Even India, which holds the largest democratic election in the world and contains dozens of languages within its borders, resorts to privileging one language (Hindi) that only about half of the country speaks. Representation will be exclusionary; the only question is how much and for whom.
Regardless of whether we hold an election or a lottery, we run up against the classic problems of representing others' interests within a complicated bureaucracy. Several centuries of political thought suggest a couple of inevitable problems. First, the representative will start realizing, once they settle into the job, that really progressive campaign promises or previously held beliefs are "not realistic" within the confines of their job or term. It would not take long for them to look, at least from the outside, like the Silicon Valley equivalent of a "beltway insider." This isn't an indictment of the person; it's a sociological fact of complex bureaucracies, which only work through internal logical and cultural consistency. Second, it would be impossible for one person (or even a committee of people) to make any kind of substantive change without acquiescing to a fair amount of the existing business culture.
Should the advocate be elected?
It's tempting to hold elections for someone who will represent us in Menlo Park. It just seems like the very epitome of democratic control. We'd all vote for someone who wants to protect our privacy from governments and from the company itself. Maybe they would even campaign on the implementation of a "dislike" button. Regardless of their platform, we'd run up against the same old problems with elected officials. First, as in all enormous elections, the candidates with a shot at winning are the ones who appeal to the greatest number of people, and that isn't always the best way to fill a job; we'd run the risk of electing User Advocate Grumpy Cat. Second, we'd probably end up with someone who knows how to run, and has the resources for, a global media campaign. That doesn't sound like your average person.
Should we hold an advocate lottery?
A lottery would not solve the compensation and culture problems faced by elections, but it might actually be more representative. In theory, a randomly chosen person has the best chance of being the modal Facebook user and thus offering a perspective closer to that of most Facebook users. They might even be more persuasive, precisely because theirs is the somewhat unenviable position of never having sought the job yet still being under pressure to advocate on behalf of fellow users. They occupy the rhetorical position of a juror: serving a public duty in as impartial a manner as possible.
How do we regulate or popularly govern complicated tasks and technologies?
This final question gets at one of the biggest and longest-standing issues of governing in a technologically advanced society. If a lay advocate doesn't understand the technology or the experimental design on their own, they'll have to have it explained to them by someone else. If that someone is the Facebook employee trying to implement the feature or design the experiment, things can get very tricky very quickly. There would be nothing stopping that employee from obscuring or understating the potential for harm to users. How would the advocate make an informed decision?
Some of the first urban planners, in fact, were obsessed with this kind of question. Modern cities, if they were to fairly and efficiently distribute goods and services, would have to be deliberately planned so that the technologies of daily life didn't end up in too few hands. It was obvious to those planners that without diligent and proactive planning, large cities would always be places of extreme power and wealth inequality. There was no way around it. That very early work remains severely underutilized in street networks as well as digital ones.
Are our existing institutions up for the task, or do we need new ones?
This is, essentially, the kind of question the federal government ran up against in the late 1950s and early 1960s, when several disturbing psychological and medical experiments became public knowledge. Expertise can become so specific and so complex that only fellow experts appear fit to assess an experiment's validity, efficacy, or ethical standing. Laura Stark's work on the early history of American social science and medical ethics review seems more prescient now than ever. It is tempting to paint Institutional Review Boards (those obscure university bodies that assess the ethics of research designs) as outgunned and outmaneuvered by private companies, but that would be missing the point. IRBs, including Cornell's in the case of the Facebook study, are doing exactly what they were designed to do.
According to Stark, in an essay for the Law & Society Review [paywall], IRBs were not originally set up solely to defend the rights of research subjects. She writes, "At first, there was not a tremendously high priority on determining what, precisely, constituted proper treatment of human subjects: the federal aim was above all to disperse responsibility for this new thing called subjects' rights."
How does that initial motivation influence IRBs' current behavior? A lot, actually. IRBs have a great deal of discretion, and that discretion is invariably wrapped up in how decisions can be justified to an angry government looking not only to disperse blame, but also to produce a rationale for why something was approved in the first place. Stark, theorizing the initial formation of these boards, writes, "IRBs were declarative groups – their act of deeming a practice acceptable would make it so." Indeed, that is still the case. Cornell's IRB declared that "[b]ecause the research was conducted independently by Facebook and [Cornell's] Professor Hancock had access only to results," the study design was ethical.
It is incredibly difficult to say whether IRBs' wide discretion needs to be reined in. While this particular Facebook study should not have happened the way it did, making all research more complicated is not the answer. If you are a researcher, or even a friend of one, you probably know the pain and frustration of IRBs' seemingly arbitrary demands for changes to research designs. You also probably know that the pre-packaged ethics training one receives as a prerequisite for submitting anything to the IRB has all the intellectual stimulation of an SAT test. Something here is deeply broken, with no apparent way to fix it.
Stark's prescriptions for improved IRBs include "(1) drawing more people into the ethics review process, and (2) pressing this new cast of decision makers to talk to each other." These are good suggestions in the university context, but what about corporations? Is this something that IRBs need more training in, or do we need to pass new laws requiring IRBs in the corporate sector? Given the ever-increasing overlap of industry and academia, I'm more inclined to revamp IRB training and the mandated ethics training for researchers. We will continue to see collaboration between companies and universities for the foreseeable future, but at least the people doing the research will have to pass through a university first. Becoming a researcher in data science, cognitive science, or any of the classic social and behavioral sciences will need to involve much better ethics training. That way, the next time a social scientist is presented with the alluring and increasingly infrequent opportunity to work on a well-funded project, they will design a better, more ethical experiment.
Comments
JJ — July 3, 2014
I appreciate your lengthy criticism. It's very important to approach such issues with as many views as possible.
However, I think the solution is far simpler. I'm not a Randian idealist, but it seems to me a market solution works far better. No website should be treated like a utility, even if it superficially appears to be one. Facebook is no different from any other company. If it fails to keep its users (the product) engaged, it will fail to satisfy the needs of its customers (the advertisers).
I don't think Facebook is anything worth saving. Delaying the issue with bureaucracy and regulation doesn't solve it. The only solution is the adoption of free and open source software tools that give the end user full control and choice about how their data is used. Only once the power has been returned to the individual can they have agency and self-determination over their data.
You can't force democracy onto a private company. The closest thing we have resembling that is open web standards with transparent systems of governance. Users that value agency, autonomy and free speech have already adopted free software and open, royalty free standards. Companies that adopt and support these standards will succeed with users that acknowledge their importance.
Facebook is a relic of 20th-century authoritarian command-and-control philosophy. Individuals will never have a voice in that model. It's time to stop perpetuating its existence by giving further support to companies like Facebook that subscribe to it.
Atomic Geography — July 3, 2014
David, enjoyed this piece.
Echoing JJ's comment, though: yes, FB has at present a fairly unique role in the cyberscape (not to be digitally dualist, but to recognize a 'scape with distinguishable characteristics), but is it a "natural monopoly"? (Again, not endorsing any connotations of "natural," just using the lingo.)
In other words, are the barriers to competing with FB likely to be insurmountable?
Already there is an array of social networking sites exploiting various niches. They don't have the scale of FB, but how much is that a product of how they run their business rather than the characteristics of social networking?
If FB is classified as a public utility, perhaps Walmart should be as well.
This goes considerably beyond rules and standards to a more fundamental re-ordering of our economic system. I agree this could be a good thing, but it would have far-ranging implications that deserve to be acknowledged.