Samaritans

The blow-up over Samaritans Radar is a couple of weeks old now, but I still want to say something about it, because – watching stuff about it spin past on my Twitter timeline – some things struck me.

For those who don’t know, Samaritans Radar is/was a Twitter app – which makes use of the Twitter API – that allowed one to monitor someone’s Twitter profile for any tweets that suggested plans or intentions to commit suicide, and would accordingly send notifications to the person monitoring. It’s since been yanked while the developers hopefully sit in a corner and think about what they’ve done, but the intention was – ostensibly – to enable family, friends, and other loved ones to identify when someone was in trouble in situations where it might otherwise be difficult to tell and to reach out to that person, offering help and counsel.

All well and good, except for how no.

Almost immediately people began pointing out the obvious problems – and the obvious dangers – of such a thing. Among other things, as Adrian Short (the author of several posts about the app, as well as the Change.org petition demanding Twitter shut the app down) pointed out, there are major technical problems inherent in identifying something as situational as someone’s unique expression of their mental and emotional state via algorithmic classification. Essentially, he writes, it doesn’t work, it can’t currently work, and the developers never should have released it at all for those reasons alone:

There’s little evidence that Samaritans and Jam have really taken the trouble to think about these issues other than at the most superficial level. And so when they start thinking about tweaking or redesigning Samaritans Radar, it doesn’t really matter whether it’s opt-in or opt-out, whether you think it’s ethical or whether your lawyers say it’s lawful. It doesn’t actually matter what I or anyone else thinks about what you’re trying to achieve. The thing just won’t actually function unless you do the really hard work to solve a series of hard problems in a new domain. And if you can manage that, the first thing you should be doing is publishing research papers not sending press releases and launching apps. As it stands, the core classifier in Samaritans Radar is cargo cult technology: a black box that purports to do something useful but actually does very little at all. Build it into any app you like: it still won’t work.
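To make the “cargo cult” point a bit more concrete: the classifier inside Samaritans Radar was never published, but the simplest version of this kind of approach, matching tweets against a list of worrying phrases, fails in exactly the ways Short describes. The sketch below is purely hypothetical (the phrase list, function name, and examples are mine, not the app’s); it is only meant to illustrate how naive text matching misreads both everyday idiom and genuine distress.

```python
# Hypothetical sketch, NOT the actual Samaritans Radar code: a minimal
# keyword-matching "classifier" of the sort such a tool might plausibly use,
# shown here to illustrate its failure modes.

# Phrases a naive rule set might treat as signs of suicidal intent.
FLAGGED_PHRASES = [
    "want to die",
    "kill myself",
    "end it all",
    "can't go on",
]

def looks_like_distress(tweet_text: str) -> bool:
    """Return True if the tweet contains any flagged phrase (naive substring match)."""
    text = tweet_text.lower()
    return any(phrase in text for phrase in FLAGGED_PHRASES)

# False positives: idiom and hyperbole trip the same rules.
print(looks_like_distress("This queue is so long I want to die lol"))                       # True
print(looks_like_distress("Monday morning meetings make me want to end it all, honestly"))  # True

# False negative: an oblique, personal expression of real distress passes silently.
print(looks_like_distress("Gave my books away today. Won't be needing them."))              # False
```

The point isn’t that a real classifier would be this crude; it’s that what separates the flagged idioms from the unflagged final example is context, and an app scanning someone’s timeline from the outside has almost none of it.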

But something else emerged as the app attracted increasing attention: what much of the media seemed to focus on as the primary issue, at least at first. The center of the narrative was often the potential violation of privacy, given that the app made it possible to surveil someone’s activity on Twitter without their knowledge and drew potentially sensitive inferences from it (and some even insisted the goal was worth that violation of privacy). Clearly the privacy aspect is a problem – a big one – but I saw a number of people concerned with mental health and safe spaces pointing out another huge flaw, one that a lot of others were ignoring in favor of privacy concerns.

Given that it highlights specific points of vulnerability in already vulnerable people, Samaritans Radar is/was explicitly, potentially lethally dangerous.

Among the ugliest things we’ve seen recently as a part of harassment campaigns are repeated attempts to get targets to injure or kill themselves, especially if those targets are already struggling with the pain from other attacks. Given this, Samaritans Radar was ready-made for this kind of organized assault, and it both made an existing unsafe space even less safe and highlighted ways in which it was already fraught with hazard.

Others have pointed this out already. I’m not saying anything new there. What I want to highlight are two things that stand out to me.

First, the developers claim to have created the app specifically to save lives. This wasn’t something built for a purpose completely unrelated to mental health that just so happened to contain a dangerous side function. This was an intentional foray into a particular arena that ended up doing the exact opposite of what it was allegedly intended to do. This places it in company with things like Apple’s Health app (and other related apps), which was created to assist in efforts to maintain good health but which in fact might be worse than useless for many people.

Second, what follows from the first point is the conclusion that the app’s developers simply didn’t realize that this would be a problem. Which is a pretty glaring failure of imagination. Many people – especially people entrenched in social justice activism – noted that the potential for targeted harassment should have been obvious to anyone who thought about it for more than a minute.

But this shouldn’t be surprising, and it shouldn’t be surprising that more than one news story on the app initially played down or neglected this aspect. Given who’s disproportionately likely to experience this kind of targeted harassment, it would have been fairly remarkable – and heartening – if that danger had leaped to the top of the narrative from the start.

As with the Health app, what I think this comes down to is that you can’t design for what you don’t see, what you don’t know, what it doesn’t occur to you to imagine. People who aren’t members of marginalized groups are far less likely to be aware of how harassment works in digital spaces, and will therefore be ill-equipped to fight it. The developers of Samaritans Radar were aware that people could express powerfully negative emotion on Twitter. They didn’t realize how that emotion could be – and is – weaponized.

So no, this isn’t a new story. It’s just another example of why the stakes of “diversity” in the tech industry are far higher than the kind of preening self-congratulation that we see from companies like Apple and Google. We are, again, talking about the safety of actual human beings.

Sarah is on Twitter – @dynamicsymmetry