Almost ten years ago, then-editor of Wired magazine Thomas Goetz wrote an article titled “Harnessing the Power of Feedback Loops.” Goetz rightly predicted that, as the cost of producing sensors and other hardware continued to decrease, the feedback loop would become an essential mechanism governing many aspects of our lives through the stages of evidence, relevance, consequence, and action. Provide people with important and actionable information, and we can expect them to act to improve the activity being monitored to generate that information.

Behavior modification technologies (BMT) have indeed become a large market, especially in the wellness industry. These technologies augment the body and affect behavior through surveillance and feedback. One has augmented willpower when using gamified apps that encourage physical activity, augmented memory through products that remind users of things they need to do, and augmented sensations when a water bottle tells its user when to drink. In supplementing and replacing mental processes with feedback systems, users tie those processes to a standardized measure: a codified difference between enough and not enough. Users adopt these technologies because they promise self-optimization. Failing to use these tools, or failing to respond to their prompts, is increasingly cast as irresponsible, as healthcare costs rise and chronic ‘lifestyle diseases’ lead the charts in causes of death in the United States.
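To make the loop concrete, here is a minimal, purely illustrative sketch of the evidence-relevance-consequence-action cycle as a smart water bottle might implement it. The logic, names, and thresholds below are invented for illustration and are not drawn from any real product:

```python
# Toy sketch of a BMT feedback loop; all names and numbers are invented.
DAILY_GOAL_ML = 2000  # the codified line between "enough" and "not enough"

def feedback_loop(intake_events_ml, hours_awake):
    # Evidence: the sensor records what the user has actually done.
    total = sum(intake_events_ml)
    # Relevance: raw data is converted into a personally meaningful measure.
    expected = DAILY_GOAL_ML * (hours_awake / 16)  # assume 16 waking hours
    deficit = expected - total
    # Consequence: the gap is framed as success or failure.
    if deficit <= 0:
        return "On track: goal met for this point in the day."
    # Action: the device prompts the user to close the gap.
    return f"Drink {int(deficit)} ml now to stay on schedule."

print(feedback_loop([250, 300], hours_awake=8))  # -> "Drink 450 ml now ..."
```

The point of the sketch is how little the device needs to know: a single target number converts continuous bodily life into a running verdict of enough or not enough.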

BMT materializes the premise that individuals can control personal health outcomes, and solidifies health and wellness as personal moral imperatives. The personalization of health and the moral connotations wrought by BMT resonate with another historical-ideological trend that has become a defining feature of 2020: the treatment of public health as a matter of personal decision-making in a pandemic.

The response to the COVID-19 pandemic here in the United States is based in many cases on the explicit suggestion that one’s health is a personal matter, somehow individually controllable. One might think the image of humans as monads independent of context would run up against a barrier in a pandemic; imagining a contagious disease in any manner other than as essentially social and environmental seems almost maximally counterintuitive. Yet this is exactly the approach offered by federal and state administrators seeking to return to business as usual. This may become the only understanding of which we are capable, after the neoliberal hollowing-out of any conception of public or social goods as anything other than mere sum totals of individual benefits and costs.

Control over, and concomitant responsibility for, one’s own health is a fantasy fed by two very American ideological currents: the individualist and the techno-idealist. If the understanding of public health as merely an agglomeration of personal actions becomes entrenched in the wake of the COVID-19 pandemic, it will not be solely because of libertarian tendencies and Trump populism; the fantasy of having one’s own health entirely within one’s own control has been long in the making, cultivated by progressive techno-elites who have been at the forefront of personal optimization technologies that assume and entrench an aspirational, technologically augmented, continuous journey towards the individual “best self.” This notion of personal health control is at least honest insofar as it lays bare the degree to which good health in the United States is massively dependent on socioeconomic standing.

So there is another loop operating here, one of circular reasoning. First, responsibility for one’s own health is created by scarcity, most obviously through the denial of adequate universal healthcare and the high costs of private alternatives. This state of precarity is excused by individual empowerment and responsibility in the form of self-surveillance: one can wear the interpellating sensors and enter into a constant state of health maintenance, in fear of slipping up but encouraged by the promise of complete self-control. Acceptance of control and responsibility over one’s own health, which at first seems like a democratic and liberating technological achievement, opens the door to excusable deaths in the ongoing, mundane circumstances of heart disease and other chronic lifestyle-related diseases. In a world of personalized health responsibility, these deaths cease to be results of anything but individual will. Sugar subsidies, food deserts, cultural factors, and economic determinants disappear, leaving only “individual choices.”

The pandemic will end someday, but the trend manifest in BMT is only growing. In a nation willing to sacrifice the lives of its citizens to preserve the claim that its healthcare resources must remain competitively scarce, is it not consistent to institute competitive measures ensuring that those who receive care have done what they can to minimize their risks? We should not forget the lesson of the second wave of COVID-19 cases: much as we may want to believe we can control our own health, accepting sole responsibility for it creates space for individuals to bear the consequences of public failures and erases any realistic concept of collective wellbeing.

Headline pic via: Source

Daniel Affsprung is a recent graduate of Dartmouth College’s Master of Arts in Liberal Studies program, where he researched AI, big data and health tracking.

The following is an edited transcript of a brief talk I gave as part of the ANU School of Sociology Pandemic Society Panel Series on May 25, 2020.  

 The rapid shift online due to physical distancing measures has resulted in significant changes to the way we work and interact. One highly salient change is the use of Zoom and other video conferencing programs to facilitate face-to-face communications that would have otherwise taken place in a shared physical venue.

A surprising side effect that’s emerging from this move online has been the seemingly ubiquitous, or at least widespread, experience of physical exhaustion. Many of us know this exhaustion first-hand and more than likely, have commiserated with friends and colleagues who are struggling with the same. This “Zoom fatigue,” as it’s been called, presents something of a puzzle.

Interacting via video should ostensibly require lower energy outputs than an in-person engagement. Take teaching as an example. Teaching a class online means sitting or standing in front of a computer, in the same spot, in your own home. In contrast, teaching in a physical classroom means getting yourself to campus, traipsing up and down stairs, pacing around a lecture hall, and racing to get coffee in the moments between class ending and an appointment that begins two minutes before you can possibly make it back to your office. The latter should be more tiring. The former, apparently, is. What’s going on here? Why are we so tired?

I’ll suggest two reasons rooted in the social psychology of interaction that help explain this strange and sleepy phenomenon. The first has to do with social cues and the specific features, or affordances, of the Zoom platform. The second is far more basic.

Affordances refer to how the design features of a technology enable and constrain the use of that technology, with ripple effects onto broader social dynamics. The features of Zoom are such that we have a lot of social cues, but in slightly degraded form compared to those we express in traditional, shared-space settings. We can look each other in the eye and hear each other’s voices, but our faces aren’t as clear, the details blurrier. Our wrinkles fade away, but so too do the subtleties they communicate. We thus have almost enough interactive resources to succeed and don’t bother supplementing in the way we might on a telephone call, nor do we get extra time to pause and process in the way we might in a text-based exchange. Communication is more effortful in this context and siphons energy we may not realize we’re expending.

So the first reason is techno-social. The features of this platform require extra interactive effort and thus bring forth that sense of fatigue that so many of us feel. We don’t have the luxury of time, as provided by text-based exchanges, or the benefit of extra performative effort, like we give each other on the phone, nor do we have the full social cues provided by traditional, face-to-face interaction.

However, I can think of plenty of video calls I’ve had outside of COVID-19 that haven’t felt so draining. Living in a country that is not my home country means I often talk with friends, family, and colleagues via video. I’ve been doing this for years. I didn’t dread the calls nor did I need a nap afterwards. I enjoyed them and often, got an energy jolt. So why then, and not now? Or perhaps why now, and not then? Why were those calls experientially sustaining and these calls demanding?  This leads me to a second proposal in which I suggest a more basic, less technical interactive circumstance that compounds the energy-sapping effects of video conferencing and its slightly degraded social cues.

The second, low-tech reason we may be so tired is because of a basic social psychological process, enacted during a time of crisis. The process I’m talking about is role-taking, or putting the self in the position of the other, perceiving the world from the other’s perspective. This is a classic tenet of social psychology and integral to all forms of social interaction. All of us, all the time, are entering each other’s perspectives and sharing in each other’s affective states. When we do this now, during our Zoom encounters—because these are the primary encounters we are able to have—we are engaging with people whose moods are, on balance, in various states of disrepair. I would venture that interacting in person at the moment would also contain an element of heightened anxiety and malaise because in the midst of social upheaval, that’s the current state of emotional affairs.

Ultimately what we’re left with is a set of interactive conditions in which we have to strain to see each other and when we do, we’re hit with ambient distress. This is why Zoom meetings seem to have a natural, hard attention limit, and why sitting at a computer has left so many of us physically fatigued.

 

Jenny Davis is on Twitter @Jenny_L_Davis

The term “meme” first appeared in Richard Dawkins’ 1976 bestselling book The Selfish Gene. The neologism is derived from the ancient Greek mīmēma, which means “imitated thing”. Richard Dawkins, a renowned evolutionary biologist, coined it to describe “a unit of cultural content that is transmitted by a human mind to another” through a process that can be referred to as “imitation”. For instance, anytime a philosopher conceives a new concept, their contemporaries interrogate it. If the idea is brilliant, other philosophers may eventually decide to cite it in their essays and speeches, thereby propagating it. Originally, the concept was proposed to describe an analogy between the “behaviour” of genes and cultural products. A gene is transmitted from one generation to another, and if selected, it can accumulate in a given population. Similarly, a meme can spread from one mind to another, and it can become popular in the cultural context of a given civilization. The term “meme” is itself a monosyllable that resembles the word “gene”.

The concept of memes becomes relevant when they are considered as “selfish” entities. Dawkins’ book revolves around the idea that genes are the biological units upon which natural selection acts. Metaphorically, the genes that are positively selected – if they had a consciousness – would for example use their vehicles, or hosts, for their own preservation and propagation. They would behave as though they were “selfish”.

When this principle is applied to memes, we should not believe that cultural products – such as popular songs, books or conversations – can reason in a human sense, just as Dawkins did not mean that genes can think as humans do. We simply mean that their intrinsic capability to be retained in the human mind and to proliferate does not necessarily favour their vehicles, the humans. As an example, Dawkins proposes the idea of “God”. God is a simplified explanation for a complex plethora of questions on the origin of human existence and, overall, of the entire universe. However, given its comforting power, and its ability to release the human mind from the chains of perpetual anguish, the idea of “God” is contagious. Most likely, starting with the creation of God, the human mind got infected by other memes, such as “life after death”. Once convinced they could survive their biological end, humans no longer feared death. However, if taken to the extreme, this meme could favour the spread of “martyrs”, people who would sacrifice their biological life for the sake of the immortal one.

There are many other examples of memes that displayed, and still display, dangerous and apparently “selfish” behaviour: the religious ideology that led to the massacres of the Crusades, which are estimated to have taken the lives of 1.7 million people; the suicidal behaviour of terrorists; or even, on a global scale, human culture itself as a threat to the well-being of the planet, and to humanity.

Thus, a meme is a viral piece of information, detrimental, beneficial or irrelevant for the host, that is capable of self-replication and propagation in the population, with the potential of lasting forever. This definition is instrumental to understanding its role today.

Dawkins conceived of memes in a pre-Internet era, when the concept was purely theoretical and aimed at describing the spreading process of cultural information. However, in present times, thanks to the wide distribution of high-speed Internet and the invention of social media, the neologism “meme” has acquired a new and more specific meaning.

“Internet memes” are described as “any fad, joke or memorable piece of content that spreads virally across the web, usually in the form of short videos or photography accompanied by a clever caption.” Despite the variety of meme types that can be found online, most of them are geared toward causing a stereotypical reaction: a brief laugh.

I recently reflected on this stereotypical reaction while re-watching Who Framed Roger Rabbit, a 1988 live-action animated movie, which is set in a Hollywood where cartoon characters and real people co-exist and work together. While the protagonist is a hilarious bunny who was framed for the murder of a businessman, the antagonists are a group of armed weasels who try to capture him. The main trait of these weasels is that they are victims of frequent fits of laughter, which burst irrationally and cannot be stopped, as their reaction far exceeds the stimulus. The reason for the weasels’ behaviour is not obvious until the end of the film, when they literally laugh to death.

A brief introduction to the concept of humour is instrumental to understanding the message this deadly laughter conveys. The Italian dramatist and novelist Luigi Pirandello articulates humour in two phases. The first is called “the perception of the opposite”, according to which the spectator laughs at a scene because it is the opposite of what the spectator’s mind would consider a normal situation. Intriguingly, a humoristic scene does not stop here; instead, it induces the spectator to reflect upon the scene. In this second step, called “the feeling of the opposite”, the spectator rationalizes the reasons why the scene appears to be the opposite of what they expected. They stop laughing, take the point of view of the “derided”, and eventually empathize with them. In Who Framed Roger Rabbit, the weasels are incapable of rationalizing the meaning of their laughter, which is reiterated as a vacuous gesture. They laugh when people get hurt, without understanding what it means to get hurt. Given that their irrational instinct to laugh does not encounter a rational obstacle, the laughter remains undigested and becomes toxic for their minds. It consumes their souls and ultimately, their mortal bodies. In the movie, the weasels’ death is indeed not caused by a biological blight; rather, their souls literally fly out of their otherwise healthy bodies. Their laughter is de facto a disease that consumes the mind.

Internet memes are integral to communication practices on social media platforms. Some memes are fun, silly and supportive, and their evocation of a smile or laugh is relatively unproblematic. However, other memes are actively degrading: they spread hate at a viral scale, targeting racial and ethnic minorities, people with disabilities, people who are gender non-conforming, and so on. I will focus my analysis on the latter. Why has laughing at socially-degrading memes become a normative and widespread practice?

I present two possible explanations.

The first one is exemplified by Arthur Fleck’s character in the recent movie Joker by Todd Phillips. Arthur is a miserable man, afflicted with impulsive laughter in situations of psychological distress or discomfort. Arthur Fleck himself is also a source of laughter. In light of the “feeling of the opposite,” the spectator is therefore confronted with a double scenario: anytime they laugh when Arthur Fleck behaves weirdly or appears ridiculous, they may also realise they shouldn’t. They should not laugh at someone’s laughter that is not genuine and intentional but a symptom of hidden, unconscious psychological distress. Yet people do laugh at Fleck, and the reason for this laughter is instructive for understanding why we laugh in response to degrading memes. Laughing at Arthur Fleck puts a distance between the spectators and the troubled character. Dealing with other people’s desperation, disability, change or death is a complex matter. It is far simpler “to laugh about it” and move on. This is part of what the “meme industry” is offering.

There is also another explanation for the success of derogatory Internet memes. Laughing is 30 times more likely to happen in a social context than when people are alone. It is also an imitational process, which can be triggered simply by watching other people laugh. Even more intriguingly, in comparison to other mechanisms, such as suppression, laughter is associated with a greater reduction of the stress commonly caused by negative emotions, including terror, rage or distaste. Thus, by definition, laughing also constitutes a social way to relieve pain, to share the grief. In this context, in order to emotionally counterbalance the negative sensations triggered by the obscenity or the turpitude of the Internet meme, the user laughs, and immediately spreads the source of their laughter in order to laugh with others.

Now, moving back to Richard Dawkins’ original definition of memes, are “Internet memes” beneficial or detrimental to the host? Should they be pictured as “selfish”?

On the individual level, Internet memes, including the socially derogatory ones, have clear benefits for the host. As previously explained, the laughter induced by memes generates personal well-being and social connection.

However, if people are, at scale, laughing at violence, at abuses, at disparities, a calloused approach to human suffering may emerge, an alarming process that is indeed already on the rise. The difference between laughing at a picture that makes fun of a marginalized group and allowing their discrimination, mistreatment and segregation in real life is very subtle, and the two practices are connected. There is a direct line between laughing at a meme of someone who is hurt, ill, or dead and apathetically watching your nation’s army bomb a village. Not to mention Internet memes that tacitly promote white supremacy. Let us imagine politicians, seated in their offices, laughing at a screen.

From this wide picture, Internet memes that carry such messages emerge as cultural traits that are ultimately dangerous for the well-being of the community, even if not for the individual per se. This scenario fosters a mimetic diffusion of oppression, one shot of laughter at a time.

Headline pic via: Source

Brief biography

Simone is a molecular biologist completing his doctorate at the University of Ulm, Germany. He is Vice-Director at Culturico, where his writings span from Literature to Sociology, from Philosophy to Science.

Simone can be reached on Twitter: @simredaelli

Simone can be reached at: simred [at] hotmail . it

 

 

When it comes to sensitive political issues, one would not necessarily consider Reddit the first port of call for up-to-date and accurate information. Despite being one of the most popular digital platforms in the world, Reddit also has a reputation as a space which, amongst the memes and play, fosters conspiracy theories, bigotry, and the spread of other hateful material. It would seem, then, that Reddit would be the perfect place for the development and spread of the myriad conspiracy theories and misinformation that have followed the spread of COVID-19 itself.

Despite this, the main discussion channel, or ‘subreddit’, associated with coronavirus — r/Coronavirus — and its sister-subreddit r/COVID19 have quickly developed reputations as some of the most reliable sources of up-to-date information about the virus. How Reddit has achieved this could provide a framework for how large digital platforms might engage with difficult issues such as coronavirus in the future.

r/Coronavirus has exploded in popularity as the virus has spread around the world. In January the subreddit had just over 1,000 subscribers — a small but dedicated cohort of users interested in the development and spread of the at-the-time relatively unknown disease. Since then it has ballooned to over 1.9 million subscribers, with hundreds of posts appearing on the channel every day.

In turn Reddit, which has a reputation as a space where ‘anything goes’, has been required to develop a unique approach to dealing with discussion on the platform, one that is proving quite successful. How have they done it?

The success of Reddit’s r/Coronavirus lies primarily in the way the space has been moderated. Subreddits can be founded by any registered user. These users usually then act as moderators and, depending on the size of the subreddit, may recruit other moderators to help with this process. Larger subreddits often work with the site-wide administrators of Reddit in order to maintain the effective running of the specific subreddit.

While Reddit has a range of site-wide rules that apply to the platform overall, subreddit moderators also have the capacity to shape both the look of the space and the rules that apply within it. As Tarleton Gillespie argues in his book Custodians of the Internet, content policies and moderation practices help shape public discourse online. The success of r/Coronavirus lies in how moderators and site-wide administrators have shaped the space.

We can identify three clear things that the Reddit admins and the moderators of r/Coronavirus have done to effectively shape the space.

The first lies in the rules of the subreddit. r/Coronavirus has a total of seven rules, most of which focus on the types of content allowed on the subreddit. These rules are: (1) be civil, (2) no edited titles, (3) avoid reposting information, (4) avoid politics, (5) keep information quality high, (6) no clickbait, and (7) no spam or self-promotion. In essence these rules dictate that r/Coronavirus should be limited entirely to information about the virus, sourced from high-quality outlets, which are linked to in the subreddit itself. Users are only allowed to post content that is based on a news report or other form of information, with titles that directly replicate the content of the report itself. Posts that don’t link back to high-quality sources, such as posts that are text only, are explicitly banned and deleted by the moderators. r/Coronavirus promotes this information-based approach through the design of the subreddit as well. Redditors, for example, are able to filter their information based on region, giving localised content based on where a user lives. These regional filters are clearly visible on each post, meaning users can easily see where information comes from.
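To illustrate how mechanical this kind of enforcement can be, here is a rough sketch written against PRAW, the Python Reddit API wrapper. The actual r/Coronavirus tooling is not public; the credentials and removal message below are placeholders, and this is only a guess at the general shape of such a bot:

```python
# Illustrative sketch only: the real r/Coronavirus tooling is not public.
# Written against PRAW, the Python Reddit API wrapper; credentials and
# the removal message below are placeholders.
import praw

reddit = praw.Reddit(
    client_id="PLACEHOLDER",
    client_secret="PLACEHOLDER",
    username="mod_bot",
    password="PLACEHOLDER",
    user_agent="moderation-sketch",
)

subreddit = reddit.subreddit("Coronavirus")

# Watch new submissions and remove text-only posts, which by rule cannot
# link back to an external high-quality source.
for submission in subreddit.stream.submissions(skip_existing=True):
    if submission.is_self:  # True for text posts, False for link posts
        submission.mod.remove()
        submission.reply(
            "Removed: submissions must link to a high-quality news or research source."
        )
```

What matters sociologically is that a rule like “no text-only posts” can be checked by a single boolean attribute, which is precisely what makes this style of moderation scalable.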

These content rules promote a subreddit that is focused on high quality information and avoids the acrimonious debates for which Reddit is (in)famous. This is best articulated through rule 4, ‘avoid politics’, which r/Coronavirus defines as shaming campaigns against businesses and individuals, posts about a politician’s take on events (unless they are actively discussing policy or legislation), and some opinion pieces. The moderators argue that posts about what has happened are preferred to posts about what should happen, in turn focusing content on information about what is going on, rather than debates about the consequences and implications of this.

Secondly, r/Coronavirus manages these rules through an active moderation process. The existence of rules is all well and good, but if they are unenforceable they often mean nothing. r/Coronavirus has developed a large moderation team, each member of which dedicates large amounts of time to the site. r/Coronavirus has approximately 60 moderators, many of whom have expertise in the area – including researchers of infectious disease, virologists, computer scientists, doctors and nurses, and more. This breadth of expertise has given moderators an authority within the space, reducing internal debates (or what is colloquially known as ‘Subreddit Drama’) about moderation practices. Moderators in turn play an active role in the subreddit, including (through an AutoModerator) posting a daily discussion thread, which includes links to a range of high-quality information about the disease.

Finally, Reddit has worked hard to make r/Coronavirus the go-to place for Redditors who wish to engage with content on the disease. As the situation became more severe, Reddit began to run push notifications encouraging users to join. Registered users of Reddit who are not following the subreddit also now see occasional web banners encouraging them to join. These actions have promoted r/Coronavirus as the official space on Reddit for coronavirus-related issues, implicitly discrediting other channels about the disease which are under less control from the site-wide administrators and may include more political material. This allows Reddit administrators to more effectively control discussion of the disease on the platform by channeling activity through one highly-moderated space, rather than having to manage a number of messier communities.

Of course, all of this has limitations. r/Coronavirus is a space for information, and information only. But the coronavirus, and the response to it, is political, and it requires political engagement. Every day politicians are making society-altering decisions in response to this crisis – from the increase of policing to huge stimulus packages to keep economies going. Due to the way r/Coronavirus is shaped, political discussions around the consequences and implications of these decisions, as well as debates about how governments should respond, are either very limited or simply not possible. In turn, while r/Coronavirus has done a good job of creating a space where information about the disease can be shared, it has not solved the problem of how to create a political space on Reddit which does not automatically descend into bigotry and acrimony.

In creating this information space r/Coronavirus is also very hierarchical. Moderators have a large amount of power, in particular in deciding what is considered ‘high quality’ information. This reinforces particular hierarchies about the value of particular types of science and other authoritative sources of information, with little space to challenge the role of these professions in the policy response to the spread of the disease.

r/Coronavirus therefore plays only a particular role in the discussion about coronavirus on Reddit – it is a space to gather information on what has happened in relation to the disease. But that role is important in and of itself, particularly at a time when such big changes are happening around the world, and at such speed. In doing so Reddit has created an effective subreddit that is an excellent one-stop-shop for all coronavirus information. It has done so, ironically, by going actively off-brand.

Simon Copland (@SimonCopland) is a PhD candidate in Sociology at the Australian National University (ANU), studying the online ‘manosphere’ on Reddit. He has research interests in online misogyny, extremism and male violence, as well as in the politics of digital platforms and Reddit specifically.

 

Headline image via: Source

How is robot care for older adults envisioned in fiction? In the 2012 movie ‘Robot and Frank’, directed by Jake Schreier, the son of an older adult – Frank – with moderate dementia gives his father the choice between being placed in a care facility and being cared for by a home-care robot.

Living with a home-care robot 

Robots in fiction can play a pivotal role in influencing the design of actual robots. It is therefore useful to analyze dramatic productions in which robots fulfill roles for which they are currently being designed. High-drama, action-packed robot films make for big hits at the box office. Slower-paced films, in which robots integrate into the spheres of daily domestic life, are perhaps better positioned to reveal something about where we are as a society, and about possible future scenarios. ‘Robot and Frank’ is one such film, focusing on care work outsourced to machines.

‘Robot and Frank’ focuses on the meeting of different generations’ widely varying acceptance of robot technology. The main character, Frank, is an older adult diagnosed with moderate dementia. He appreciates a simple life, having retired from a career as a cat burglar. Frank lives alone in a small village, and most of his daily interaction is with the local librarian. As Frank’s dementia worsens, his son Hunter gives him a home-care robot. Frank says, in his own words, “[I] don’t want a robot, and don’t need a robot”. However, after a while, the robot becomes an increasingly important part of his life – not solely because of his medical and daily needs, but because of how he reinvents himself through robotic aid. The robot is advanced enough that communication between them almost fully resembles human interaction. The robot is portrayed in a comedic manner, as when Frank is about to drink an unhealthy beverage:

 

Robot: You should not drink those, Frank. It is not good for gout.
Frank: I don’t have gout.
Robot: You do not have gout. Yet.

 

Hunter programmed the robot to aid Frank through healthy eating and mental and physical exercises. Although Frank is still convinced that this is a waste of money and time, he gradually develops a bond and level of trust that changes his perception of his robot and his relationship with it. By walking to and from the local library, cooking and eating meals, meeting new people and sharing past experiences, Frank reconnects with his controversial past as a cat burglar. Frank’s unnamed care robot and the librarian’s robot colleague ‘Mr. Darcy’ are the only two robots featured in this movie. On several occasions the robots meet at the same time as their owners do. The robots do not seem to take much notice of each other’s presence, but the human actors demand that the machines greet each other and make conversation. When asked to do so, Mr. Darcy replies: “I have no functions or tasks that require verbal interaction with VGC 60 L” (the care robot’s model number). Frank and the librarian seem surprised that the robots do not wish to interact with each other and jokingly ask how the robots are going to form a society when humans are extinct if they do not wish to speak together. (This is an intriguing question that has several fascinating portrayals, e.g. in shows like Battlestar Galactica, where robots develop spirituality.) Even though Frank and the librarian have accepted their robot helpers as useful companions, this shows that the human actors might still see the robots as somewhat alien and incapable of acting outside of their programming.

Questions raised by automated care

In a wider scientific and technological context, the movie triggers relevant discussions and questions on the ‘humanity of robots’ pertaining to human-robot relations, robot-robot relations, and human-human relations. This influences robot design studies and debates about what robots could, should or should not do. This is especially salient because the context of the film – care by robots – is often a contested space. However, there is a mismatch between what robots in fiction are portrayed as capable of and what actual robots can do. Despite the fact that robots fundamentally lack human factors such as emotion, ‘Robot and Frank’ provides an opportunity to consider what constitutes a good relationship. Their relationship is depicted as far more giving and mutual than Frank’s relationship with his children. This is but one of the many possibilities that technologies such as care robots can produce in dialogue with humans. By exploring this interaction, new perspectives and understandings of what is normal may come to light. This is an especially important investigation in the healthcare context because significant changes in healthcare technology will have significant consequences for both patients and workers, both at home and in healthcare facilities.

Imagining and planning the implementation of care robots or other technologies not only creates opportunities for those involved; it also leads to controversies and deep challenges for those who are engaged in technological transformations. Therefore, it is pivotal that all new implementations are developed in close dialogue with those most likely to experience their fullest effects. ‘Robot and Frank’ breaks down stereotypes of human-robot relations by showing that, given time, productive and close relationships may arise. Perhaps robots can most easily and successfully be introduced into people’s lives by providing time and opportunities for significant exposure to each other.

Caregiver exhaustion versus robotic resilience 

Being an informal caregiver is a difficult task, especially when caring for a parent who had previously been one’s main support. Conflicts often arise as a result of the role change between parent and offspring that comes with old age. It is not only the human-robot relationship in the movie that sparks thoughts for discussion. Frank’s two children, Hunter and Madison, have distinct ways of dealing with their father’s growing dementia and solitude. Because of his illness, he is in need of domestic support. Hunter, the main informal caregiver, is exhausted by the tasks of caring. He lives several hours away and is busy with his own work and family life – a situation likely familiar to many adults who care for aging parents. Hunter wants to outsource some of his care work to a robot.

There is little love coming from Hunter, and it is unclear how much of this stems from a strained childhood relationship and how much from the burden Hunter feels in his caretaker role. For Frank’s daughter, Madison, the story is quite different. An anti-robot activist, she spends her days traveling the globe and has little time to see her dad. Filled with both contempt for robots and a bad conscience about not seeing her dad regularly, she decides to move in and care for him – turning off the robot in the process. The house then falls into chaos, as she does not cook healthy or tasty food, cannot clean, and becomes too tired to do fun excursions. Frank further aggravates this situation by making messes on purpose and complaining to his daughter that her caregiving is unsatisfactory. As Frank’s frustration at his daughter’s arrival grows, the bond between him and the robot becomes increasingly visible. Madison picks up on this special bond through Frank’s reluctant acknowledgement that the robot is his friend. She turns the robot back on and agrees to let it help around the house. She soon becomes accustomed to the robotic services. Madison comes to like – or at least tolerate – the robot, especially when it serves her drinks.

Frank’s relationship with his adult children is challenging, not just because of his criminal past and the long prison sentences he served, but also because of the time and effort that they feel obliged to spend on him. Throughout the movie, meaningful friendships and high-quality interactions between people who share interests seem to be more important than vague family engagements and obligations. Although Frank expresses love for his children, there are tense and difficult moments for all as his dementia worsens. When Frank’s condition is at its worst, he struggles to recognize his children, let alone remember what is going on in their day-to-day lives. He pretends to remember what they are talking about, but his confusion is painfully clear. As the children have their own lives, they seem more focused on his medical well-being and less interested in Frank as a person. For the robot, which is solely devoted to Frank, the situation is different. Time is needed to create trust and friendship. The latter surely seems important to Frank as he finds, anew, the energy and motivation to go about his controversial interest of planning robberies and stealing, supported reluctantly, but compassionately, by his robotic companion.

–SPOILERS–

Can a care robot help retired thieves with diamond theft?

Towards the end of the story, Frank is the main suspect in a large-scale jewelry theft. Because he wipes his care robot’s memory, the robot cannot be used as conclusive evidence to determine whether Frank is guilty. The ethical side of diamond theft is of less importance here than the ethical side of care through technology. It is not what Frank steals that is of interest, but that he trains his care robot to steal. This raises some ethical dilemmas: should Frank no longer be allowed to have a care robot because he may have used it to commit a crime? Is Frank even indictable as a criminal to begin with, given his mental state? Should some of the blame lie with the programmers who neglected to incorporate legal constraints in the care robot’s programming?

In the final scene of the movie, Frank has moved into a care home where other residents have identical care robots. As Frank’s robot confirmed several times throughout the movie, Frank’s dementia improved greatly during the time they spent together—as someone was there for him 100% of the time, making sure he had a healthy body and mind—and even allowing some escapades of theft as long as they kept Frank engaged. Care is at the core of human value, dignity and autonomy—and in this movie, we learn how a robot can help care for someone – in a deeply human way.

 

The authors are on Twitter @rogerSora , @SutcliffeEdward & @NienkeBruyning

 

The best way I can describe the experience of summer 2019-2020 in Australia is with a single word: exhausting. We have been on fire for months. There are immediate threats in progress and new ones at the ready. Our air quality dips in and out of the hazardous range, more often in than out. This has been challenging for everyone. For many, mere exhaustion may feel like a luxury.

In the trenches of the ongoing fires are the Australian emergency service workers, especially the “fireys,” who have been tireless in their efforts to save homes, people, and wildlife. While the primary and most visible part of their work is the relentless job of managing fires, there is also a secondary–though critical–task of public communication, keeping people informed and providing material for anxious-refreshers looking for information about “fires near me.” In the last few days, as fires have approached the Canberra suburbs where I live, an interesting variant of public safety communication has emerged: Instagrammable photography.

A tweet from the local emergency service account (@ACT_ESA) announced Wednesday night that a major road would be closed to anyone who isn’t a resident of the area. The reason for the closure was to prevent a growing obstacle to public safety—disaster tourism. Apparently, people have been “visiting” the fires, generally taking dramatic photographs to share on social media. These disaster tourists put themselves in harm’s way, clog the roads, and generally create more work for emergency responders. The road closure was a hard and fast way to keep people out. It was not, however, the ESA’s only action. In addition to closing roads and posting notices, the team also created and shared imagery of the fires-in-progress with direct allusion to the perceived goals of would-be disaster tourists (i.e., social sharing).

 

The response by the ACT ESA is a subtle combination of empathy, understanding, and practicality. Rather than a punitive or derogating reproach, the response assumes–I suspect correctly–that visitors aren’t there to get in the way or cultivate clout, but to bear witness, bolster awareness, seek validation, and more generally, cope. Visually, the fires traverse beauty and horror in a way that is difficult to describe. You need to see it for yourself. And that’s why people take and share pictures. They are in the midst of something that is inarticulable, and yet feel compelled to articulate it through the means at their disposal. Capturing the destruction, from the best angle, means speaking with clarity. It means concretizing an experience that would be surreal, were it not happening with such immediacy and acuity. Words do little justice to the gorgeous tragedy of a red sunset.

And so, the work of fire safety in Australia 2020 now includes mollifying would-be disaster tourists by taking more Instagrammable photos than visitors could take themselves. It’s a warning and a plea, delivered with a gift.

Headline Image Credit Gary Hooker, ACTRFS (Australian Capital Territory Rural Fire Service), via @ACT_ESA

Want to help? Here are some options

Jenny Davis is on Twitter @Jenny_L_Davis

 

Drew Harwell (@DrewHarwell) wrote a balanced article in the Washington Post about the ways universities are using wifi, bluetooth, and mobile phones to enact systematic monitoring of student populations. The article offers multiple perspectives that variously support and critique the technologies at play and their institutional implementation. I’m here to lay out in clear terms why these systems should be categorically resisted.

The article focuses on the SpotterEDU app, which advertises itself as an “automated attendance monitoring and early alerting platform.” The idea is that students download the app and universities can then easily keep track of who’s coming to class and also identify students who may be in, or on the brink of, crisis (e.g., a student only leaves her room to eat and therefore may be experiencing mental health issues). As university faculty, I would find these data useful. They are not worth the social costs.

One social consequence of SpotterEDU and similar tracking applications is that these technologies normalize surveillance and degrade autonomy. This is especially troublesome among a population of emerging adults. For many traditionally aged students (18-24), university is a time of developmental transition—like adulting with a safety net. There is a fine line between mechanisms of support and mechanisms of control. These tracking technologies veer towards the latter, portending a very near future in which extrinsic accountability displaces intrinsic motivation and data extraction looms inevitable.

Speaking of data extraction, these tracking technologies run on data. Data is a valuable resource. Historically, valuable resources are exploited to the benefit of those in power and the detriment of those in positions of disadvantage. This pattern of reinforced and amplified inequality via data economies has already played out in public view (see: targeted political advertising, racist parole decisions, sexist hiring algorithms). One can imagine numerous ways in which student tracking will disproportionately affect disadvantaged groups. To name a few: students on financial aid may have their funding predicated on behavioral metrics such as class attendance or library time; “normal” behaviors will be defined by averages, which implicitly creates standards that reflect the demographic majority (e.g., white, upper-middle class) and flags demographic minorities as abnormal (and thus in need of deeper monitoring or intervention); students who work full-time may be penalized for attending class less regularly or studying from remote locations. The point is that data systems come from society and society is unequal. Overlaying data systems onto social systems wraps inequality in a veneer of objectivity and intensifies its effects.
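To see how averages quietly become norms, consider a toy sketch (with fabricated data and a made-up “wellness flag”): if “normal” is fit to the campus majority, then students whose routines differ for perfectly benign reasons, such as full-time work, are flagged by construction.

```python
# Toy illustration with fabricated data: when "normal" is fit to the
# demographic majority, different-but-benign routines get flagged.
from statistics import mean, stdev

# Weekly hours on campus for the majority cohort (hypothetical numbers).
majority_hours = [38, 40, 42, 39, 41, 40, 37, 43, 40, 39]
mu, sigma = mean(majority_hours), stdev(majority_hours)

def wellness_flag(hours, z_threshold=2.0):
    # Flags anyone who deviates too far from the majority-defined norm.
    return abs(hours - mu) / sigma > z_threshold

# Students working full-time off campus look "abnormal" by construction.
for hours in [15, 18, 12, 40]:
    status = "FLAGGED for intervention" if wellness_flag(hours) else "ok"
    print(f"{hours} hrs/week on campus -> {status}")
```

Nothing in this sketch is malicious; the inequity comes entirely from whose behavior defined the baseline.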

Finally, tracking systems will not be confined to students. They will almost certainly spread to faculty. Universities are under heavy pressure to demonstrate value for money. They are funded by governments, donors, and tuition-paying students and their families. It is not at all a stretch to say that faculty will be held to account for face time with students, time spent in offices, duration of classes, and engagement with the university. This kind of monitoring erodes the richness of the academic profession, with profound effects on the nature of work for tenure-line faculty and the security of work for contingent lecturers (who make up an increasing majority of the academic workforce).

To end on a hopeful note, SpotterEDU and other tracking applications are embedded in spaces disposed to collective action. Students have always been leaders of social change and drivers of resistance. Faculty have an abundance of cultural capital to expend on such endeavors. These technologies affect everyone on campus. Tenure-line faculty, contingent faculty, and students each have something to lose and thus a shared interest and common struggle[1]. We are all in the mess together and together, we can resist our way out.  

Jenny Davis is on Twitter @Jenny_L_Davis

Headline pic via: Source


[1] I thank James Chouinard (@jamesbc81) for highlighting this point

Mark Zuckerberg testified before Congress this week. The testimony was supposed to address Facebook’s move into the currency market. Instead, the hearing mostly addressed Facebook’s policy of not banning or fact-checking politicians on the platform. Zuckerberg roots the policy in values of free expression and democratic ideals. Here is a quick primer on why that rationale is ridiculous.

For background, Facebook does partner with third party fact-checkers, but exempts politicians’ organic content and paid advertisements from review. This policy is not new. Here is an overview of the policy’s parameters.

To summarize the company’s rationale, Facebook believes that constituents should have unadulterated knowledge about political candidates. When politicians lie, the people should know about it, and they will know about it because of a collective fact-checking effort. This is premised on the assumption that journalists, opposing political operatives, and the vast network of Facebook users will scrutinize all forms of political speech, thus debunking dishonest claims and exposing dishonest politicians.

In short, Facebook claims that crowdsourced fact-checking will provide an information safety net which allows political speech to remain unregulated, thus fostering an optimally informed electorate.

On a simple technical level, crowdsourced fact-checking cannot work on Facebook because content is microtargeted. Facebook’s entire financial structure is premised on delivering different content—both organic and advertised—to different users. Facebook gives users the content that will keep them “stuck” on the site as long as possible, and distributes advertisements to granular user segments who will be most influenced by specific messages. For these reasons, each Facebook feed is distinct and no two Facebook users encounter the exact same content.

Crowdsourced fact-checking only works when “the crowd” all encounter the same facts. On Facebook, this is not the case, and that is by design. Would-be fact-checkers may never encounter a piece of dishonest content, and if they do, those inclined to believe the content (because it supports their existing worldview) are less likely to encounter the fact-checker’s debunking.
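The arithmetic behind this claim is easy to demonstrate. In the toy model below (invented numbers, no relation to Facebook’s actual delivery system), a broadcast post is seen by every would-be fact-checker, while a microtargeted post reaches only a narrow segment, so almost none of them ever see it:

```python
# Toy model with invented numbers; not Facebook's actual delivery logic.
import random

random.seed(0)
USERS = 100_000
FACT_CHECKERS = 1_000   # users inclined to debunk what they encounter
SEGMENT_SIZE = 2_000    # how narrowly a dishonest post is targeted

population = range(USERS)
checkers = set(random.sample(population, FACT_CHECKERS))

def audience(broadcast):
    # Broadcast: everyone sees the post. Microtargeted: one segment does.
    return set(population) if broadcast else set(random.sample(population, SEGMENT_SIZE))

for broadcast in (True, False):
    overlap = len(audience(broadcast) & checkers)
    mode = "broadcast" if broadcast else "microtargeted"
    print(f"{mode}: {overlap} of {FACT_CHECKERS} fact-checkers saw the post")
```

With these illustrative numbers, targeting shrinks the expected number of fact-checkers who ever see the post from 1,000 to roughly 20, before even accounting for the fact that segments are chosen precisely for their receptiveness.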

Facebook’s ideological justification for unregulated political speech is not just thin, it’s technically untenable. I’m going to assume that Zuckerberg understands this. Facebook’s profit motive thus shines through from behind a moral veil, however earnestly Zuckerberg presents the company’s case.

 

Jenny Davis is on Twitter @Jenny_L_Davis

Headline image via: source

 

As technology expands its footprint across nearly every domain of contemporary life, some spheres raise particularly acute issues that illuminate larger trends at hand. The criminal justice system is one such area, with automated systems being adopted widely and rapidly—and with activists and advocates beginning to push back with alternate politics that seek to ameliorate existing inequalities rather than instantiate and exacerbate them. The criminal justice system (and its well-known subsidiary, the prison-industrial complex) is a space often cited for its dehumanizing tendencies and outcomes; technologizing this realm may feed into these patterns, despite proponents pitching this as an “alternative to incarceration” that will promote more humane treatment through rehabilitation and employment opportunities.

As such, calls to modernize and reform criminal justice often manifest as a rapid move toward automated processes throughout many penal systems. Numerous jurisdictions are adopting digital tools at all levels, from policing to parole, in order to promote efficiency and (it is claimed) fairness. However, critics argue that mechanized systems—driven by Big Data, artificial intelligence, and human-coded algorithms—are ushering in an era of expansive policing, digital profiling, and punitive methods that can intensify structural inequalities. In this view, the embedded biases in algorithms can serve to deepen inequities, via automated systems built on platforms that are opaque and unregulated; likewise, emerging policing and surveillance technologies are often deployed disproportionately toward vulnerable segments of the population. In an era of digital saturation and rapidly shifting societal norms, these contrasting views of efficiency and inequality are playing out in quintessential ways throughout the realm of criminal justice.

Tracking this arc, critical discourses on technology and social control have brought to light how decision-making algorithms can be a mechanism to “reinforce oppressive social relationships and enact new modes of racial profiling,” as Safiya Umoja Noble argues in her 2018 book, Algorithms of Oppression. In this view, the use of machine learning and artificial intelligence as tools of justice can yield self-reinforcing patterns of racial and socioeconomic inequality. As Cathy O’Neil discerns in Weapons of Math Destruction (2016), emerging models such as “predictive policing” can exacerbate disparate impacts by perpetuating data-driven policies whereby, “because of the strong correlation between poverty and reported crime, the poor continue to get caught up in these digital dragnets.” And in Automating Inequality (2018), Virginia Eubanks further explains how marginalized communities “face the heaviest burdens of high-tech scrutiny,” even as “the widespread use of these systems impacts the quality of democracy for us all.” In talks deriving from his forthcoming book Halfway Home, Reuben Miller advances the concept of “mass supervision” as an extension of systems of mass incarceration; whereas the latter has drawn a great deal of critical analysis in recent years, the former is potentially more dangerous as an outgrowth of patterns of mass surveillance and the erosion of privacy in the digital age—leading to what Miller terms a “supervised society.”
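O’Neil’s “digital dragnet” dynamic can be made concrete with a small toy simulation (all numbers invented). Two districts have identical true crime rates, but patrols are allocated according to recorded crime, and crime is recorded only where patrols are present, so an arbitrary initial disparity regenerates itself year after year and the data appear to confirm it:

```python
# Toy simulation, all numbers invented: two districts with identical true
# crime rates, but patrols follow recorded crime and crime is recorded
# only where patrols are. The initial disparity never corrects itself.
TRUE_RATE = 0.10
patrols = {"district_A": 60, "district_B": 40}  # arbitrary initial split

for year in range(1, 6):
    # Each patrol unit records crime at the same underlying rate.
    recorded = {d: TRUE_RATE * patrols[d] for d in patrols}
    # The "predictive" model allocates next year's 100 units by the records.
    total = sum(recorded.values())
    patrols = {d: round(100 * recorded[d] / total) for d in recorded}
    print(f"year {year}: recorded={recorded} -> patrols={patrols}")
```

The model is self-validating: district A appears more criminal in every year’s records, yet the only difference between the districts is where the police were sent in year one.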

Techniques of digital monitoring impact the entire population, but the leading edge of regulatory and punitive technologies are applied most directly to communities that are already over-policed. Some scholars and critics have been describing these trends under the banner of “E-carceration,” calling out methods that utilize tracking and monitoring devices to extend practices of social control that are doubly (though not exclusively) impacting vulnerable communities. As Michelle Alexander recently wrote in the New York Times, these modes of digital penality are built on a foundation of “corporate secrets” and a thinly veiled impetus toward “perpetual criminalization,” constituting what she terms “the newest Jim Crow.” Nonetheless, while marginalized sectors are most directly impacted, as one of Eubanks’s informants warned us all: “You’re next.”

Advocates of automated and algorithmic justice methods often tout the capacity of such systems to reduce or eliminate human biases, achieve greater efficiency and consistency of outcomes, and ameliorate existing inequities through the use of better data and faster results. This trend is evident across a myriad of jurisdictions in the U.S. in particular (but not solely), as courts nationwide “are making greater use of computer algorithms to help determine whether defendants should be released into the community while they await trial.” In 2017, for instance, New Jersey introduced a statewide “risk assessment” system using algorithms and large data sets to determine bail, in some cases serving to potentially supplant judicial discretion altogether.

Many have been critical of these processes, noting that automated decisions are only as good as the data points utilized—data often tainted both by preexisting subjective biases and by prior accumulations of structural bias recorded in people’s records. The algorithms deployed for these purposes are primarily conceived as “proprietary techniques” that are largely opaque and obscured from public scrutiny; as a recent law review article asserts, we may be in the process of opening up “Pandora’s algorithmic black box.” In evaluating these emerging techniques, researchers at Harvard University have thus expressed a pair of related concerns: (1) the critical “need for explainable algorithmic decisions to satisfy both legal and ethical imperatives,” and (2) the fact that “AI systems may not be able to provide human-interpretable reasons for their decisions given their complexity and ability to account for thousands of factors.” This raises foundational questions of justice, ethics, and accountability, but in practice this discussion is in danger of being mooted by widespread implementation.

Adopting digital mechanisms for policing and crime control without more scrutiny can yield a divided society in which the inner workings (and associated power relations) of these tools are almost completely opaque and thus shielded from critique, while the outer manifestations are concretely inscribed and societally pervasive. The CBC radio program SPARK recently examined a range of these new policing technologies, from body cams and virtual ride-along applications to those, such as ShotSpotter, that draw upon data gleaned from a vast network of recording devices embedded in public spaces. Critically assessing the much-touted benefits of such nouveau tools as a “Thin Blue Lie,” Matt Stroud challenges the prevailing view that these technologies are inherently helpful innovations, arguing instead that they have actually made policing more reckless, discriminatory, and unaccountable in the process.

This has prompted a recent spate of critical interventions and resistance efforts, including a network galvanized under the banner of “Challenging E-Carceration.” In this lexicon, it is argued that “E-Carceration may be the successor to mass incarceration as we exchange prison cells for being confined in our own homes and communities.” This potential “net-widening” of enforcement mechanisms includes new technologies that gather information about our daily lives, such as license plate readers and facial recognition software. As Miller suggested in his invocation of “mass supervision” as the logical extension of such patterns and practices, these effects may be most immediately felt by those already overburdened by systems of crime control, but they are harbingers of wider forms of social control.

Some advocates thus have begun calling for a form of “digital sanctuary.” An important intervention along these lines comes from the Sunlight Foundation, which advocates for “responsible municipal data management.” Its detailed proposal begins from the larger justice implications of emerging technologies, calling upon cities to establish sound digital policies: “Municipal departments need to consider their formal data collection, retention, storage and sharing practices, [and] their informal data practices.” In particular, the Foundation urges that cities not collect sensitive information “unless it is absolutely necessary to do so,” and that they “publicly document all policies, practices and requests which result in the sharing of information.” In light of the escalating use of data-gathering systems, this framework calls for protections that would benefit vulnerable populations and all residents alike.

These notions parallel the emergence of a wider societal discussion on technology, providing a basis for assessing which current techniques present the greatest threats to, and/or opportunities for, the cultivation of justice. Despite these efforts, critical questions remain: will the debate catch up to utilization trends, and how will these tools continue to evolve if left unchecked? As Adam Greenfield plaintively inquired in his 2017 book Radical Technologies: “Can we make other politics with these technologies? Can we use them in ways that don’t simply reproduce all-too-familiar arrangements of power?” This is the overarching task at hand, even as opportunities for public oversight seemingly remain elusive.

 

Randall Amster, J.D., Ph.D., is a teaching professor and co-director of environmental studies at Georgetown University in Washington, DC, and is the author of books including Peace Ecology. Recent work focuses on the ways in which technology can make people long for a time when children played outside and everyone was a great conversationalist. He cannot be reached on Twitter @randallamster.

 



In the wake of the terrifying violence that shook El Paso and Dayton, there have been a lot of questions about the role of the Internet in facilitating communities of hate and the radicalization of angry white men. Digital affordances like anonymity and pseudonymity are especially suspect for their alleged ability to provide cover for far-right extremist communities. The connections seem clear enough. For one, 8chan, an anonymous image board, has hosted several far-right manifestos posted to its feeds in the lead-up to mass shootings. And Kiwi Farms, a forum board populated with trolls and stalkers who spend their days monitoring and harassing women, has been keeping a record of mass killings and became infamous after its administrator “Null,” Joshua Conner Moon, refused to take down the Christchurch manifesto.

The KF community claims merely to be archiving mass shootings; however, it’s clear that the racist and misogynistic politics on the forum board closely align with those of the shooters. The Christchurch extremist was allegedly a member of the KF community and had posted white supremacist content on the forum. New Zealand authorities requested access to the site’s data to assist in their investigation and were promptly refused. Afterwards, Null encouraged Kiwi users to use anonymizing tools and purged the website’s data. It is becoming increasingly clear that these far-right communities are radicalizing white men to commit atrocities, even if such radicalization is only a tacit consequence of constant streams of racist and sexist vitriol.

With the existence of sites like 8chan and Kiwi Farms, it becomes exceedingly easy to blame digital technology as a root cause of mass violence. Following the recent shootings, the Trump administration attempted to pin the root of the US violence crisis on, among other things, video games. And though this might seem like a convincing explanation of mass violence on the surface, since angry white men are known to spend time playing violent video games like Fortnite, there is still little conclusive or convincing empirical evidence causally linking video games to acts of violence.

One pattern, however, has been crystal clear: mass and targeted violence coalesces around white supremacists and nationalists. In fact, as FBI director Christopher Wray told the US Congress, most instances of domestic terrorism come from white supremacists. From this perspective, it’s easy to see how technological explanations serve as a bait and switch that hides white supremacy behind a smoke screen. This is a convenient strategy for Trump, whose constant streams of racism have legitimized a renewed rise in white supremacy and far-right politics across the US.

For those of us who do research on social media and trolling, one thing is certain: easy technological solutions risk arbitrary punitive responses that don’t address the root of the issue. Blaming the growing violence crisis on technology will only lead to an increase in censorship and surveillance and intensify the growing chill of fear in the age of social media.

To better understand this issue, the fraught story of the anonymous social media platform Yik Yak is instructive. As a mainstream platform, Yik Yak was used widely across North American university and college campuses. Yak users could communicate anonymously on a series of GPS-determined local feeds, where they could upvote and downvote content and engage in nameless conversations, with random icons assigned to distinguish users from one another.
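As a rough sketch of those mechanics (my own reconstruction for illustration, not Yik Yak’s actual implementation; all names below are invented):

# Rough reconstruction of the feed mechanics described above (Python);
# not Yik Yak's actual code. All class and field names are invented.
import random

class Post:
    def __init__(self, text):
        self.text = text
        self.icon = random.randrange(100)  # random icon stands in for identity
        self.votes = 0                     # net upvotes/downvotes from nearby users

class LocalFeed:
    def __init__(self, lat, lon):
        self.lat, self.lon = lat, lon      # each feed is tied to a GPS location
        self.posts = []

    def post_anonymously(self, text):
        # No account or handle is attached, only a per-thread random icon
        post = Post(text)
        self.posts.append(post)
        return post

    def ranked(self):
        # Content surfaces by vote totals rather than by who posted it
        return sorted(self.posts, key=lambda p: p.votes, reverse=True)

The point of the sketch is simply that identity was absent by design; accountability mechanisms (handles) later had to be bolted on, which is exactly the change discussed below.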

Tragically, Yik Yak was plagued by vitriolic and toxic users who engaged in bullying, harassment, and racist or sexist abuse, including more extreme threats such as bomb threats, threats of gun violence, and threats of racist lynching. The seemingly endless stream of vitriol prompted an enormous amount of negative public attention, with alarming consequences for Yik Yak. After being removed from the top charts of the Google Play Store for allegedly fostering a hostile climate on the platform, Yik Yak administrators removed the anonymity feature and imposed user handles in order to instill a sense of accountability. Though this move was effective in dampening toxic and violent behavior on Yik Yak’s feeds, it also led users to abandon the platform, and the company eventually collapsed.

Though anonymity is often associated with facilitating violence, the ability to be anonymous on the Internet does not directly give rise to violent digital communities or acts of IRL (“in real life”) violence. In my ethnographic research on Yik Yak in Kingston, Ontario, I found that despite the intense presence of vitriolic content, there was also a diverse range of users who engaged in forms of entertainment, leisure, and caretaking. And though anonymity clearly affords users the ability to engage in undisciplined or vitriolic behavior, the Yik Yak platform, much like other digital and corporeal publics, allowed users to engage in creative and empowering forms of communication that otherwise wouldn’t exist.

For instance, there was a contingent of users who were able to communicate their mental health issues and secret everyday ruminations. Users in crisis would post calls for help that were often met by other users offering some form of caretaking, deep and helpful conversations, and crucial resources. Other users expressed that they were able to be themselves without the worrisome consequences of the discrimination that can come with being LGBTQ or a person of color.

What was clear to me was that there is an abundance of forms of human interaction that would never flourish on social media platforms where you are forced to identify under your legal name. Anonymity has a crucial place in a culture that has become accustomed to constant surveillance by corporations, government institutions, and family and peers. Merely removing the ability to interact anonymously on a social media platform doesn’t actually address the underlying causes of violent behavior, but it does discard a form of communication with increasingly important social utility.

In her multiyear ethnography on trolling practices in the US, researcher Whitney Phillips concluded that violent online communities largely exist because mainstream media and culture enable them. Pointing to the increasingly sensationalist news media and the vitriolic battlefield of electoral politics, Phillips asserts that acts of vitriolic trolling borrow the same cultural material used in the mainstream, explaining, “the difference is that trolling is condemned, while ostensibly ‘normal’ behaviors are accepted as given, if not actively celebrated.” In other words, removing the affordances of anonymity on the Internet will not stave off the intensification of mass violence in our society. We need to address the cultural foundations of white supremacy itself.

As Trump belches out a constant stream of racist hatred and the alt-right continues to find footing in electoral politics and the imaginations of the citizenry, communities of hatred on the Internet will continue to expand and inspire future instances of IRL violence. We need to look beyond technological solutions, censorship, and surveillance and begin addressing how we might face off against white supremacy and the rise of the far-right.

 

Abigail Curlew is a doctoral researcher and Trudeau Scholar at Carleton University. She uses digital ethnography to study how anti-transgender far-right vigilantes doxx and harass politically involved trans women. Her bylines can be found in Vice Canada, The Conversation, and Briarpatch Magazine.

 

https://medium.com/@abigail.curlew

Twitter: @Curlew_A

 
