The following is an edited transcript of a brief talk I gave as part of the ANU School of Sociology Pandemic Society Panel Series on May 25, 2020.  

 The rapid shift online due to physical distancing measures has resulted in significant changes to the way we work and interact. One highly salient change is the use of Zoom and other video conferencing programs to facilitate face-to-face communications that would have otherwise taken place in a shared physical venue.

A surprising side effect that’s emerging from this move online has been the seemingly ubiquitous, or at least widespread, experience of physical exhaustion. Many of us know this exhaustion first-hand and more than likely, have commiserated with friends and colleagues who are struggling with the same. This “Zoom fatigue,” as it’s been called, presents something of a puzzle.

Interacting via video should ostensibly require lower energy outputs than an in-person engagement. Take teaching as an example. Teaching a class online means sitting or standing in front of a computer, in the same spot, in your own home. In contrast, teaching in a physical classroom means getting yourself to campus, traipsing up and down stairs, pacing around a lecture hall, and racing to get coffee in the gap between the end of class and an appointment that starts two minutes before you can possibly make it back to your office. The latter should be more tiring. The former, apparently, is. What's going on here? Why are we so tired?

I’ll suggest two reasons rooted in the social psychology of interaction that help explain this strange and sleepy phenomenon. The first has to do with social cues and the specific features, or affordances, of the Zoom platform. The second is far more basic.

Affordances refer to how the design features of a technology enable and constrain the use of that technology, with ripple effects onto broader social dynamics. The features of Zoom are such that we have a lot of social cues, but in a slightly degraded form compared with those we express in traditional, shared-space settings. We can look each other in the eye and hear each other's voices, but our faces aren't as clear, the details blurrier. Our wrinkles fade away, but so too do the subtleties they communicate. We thus have almost enough interactive resources to succeed and don't bother supplementing in the way we might on a telephone call, nor do we get extra time to pause and process in the way we might in a text-based exchange. Communication is more effortful in this context and siphons energy we may not realize we're expending.

So the first reason is techno-social. The features of this platform require extra interactive effort and thus bring forth that sense of fatigue that so many of us feel. We don’t have the luxury of time, as provided by text-based exchanges, or the benefit of extra performative effort, like we give each other on the phone, nor do we have the full social cues provided by traditional, face-to-face interaction.

However, I can think of plenty of video calls I’ve had outside of COVID-19 that haven’t felt so draining. Living in a country that is not my home country means I often talk with friends, family, and colleagues via video. I’ve been doing this for years. I didn’t dread the calls nor did I need a nap afterwards. I enjoyed them and often, got an energy jolt. So why then, and not now? Or perhaps why now, and not then? Why were those calls experientially sustaining and these calls demanding?  This leads me to a second proposal in which I suggest a more basic, less technical interactive circumstance that compounds the energy-sapping effects of video conferencing and its slightly degraded social cues.

The second, low-tech reason we may be so tired is a basic social psychological process, enacted during a time of crisis. The process I'm talking about is role-taking, or putting the self in the position of the other, perceiving the world from the other's perspective. This is a classic tenet of social psychology and integral to all forms of social interaction. All of us, all the time, are entering each other's perspectives and sharing in each other's affective states. When we do this now, during our Zoom encounters—because these are the primary encounters we are able to have—we are engaging with people whose moods are, on balance, in various states of disrepair. I would venture that interacting in person at the moment would also contain an element of heightened anxiety and malaise because in the midst of social upheaval, that's the current state of emotional affairs.

Ultimately what we’re left with is a set of interactive conditions in which we have to strain to see each other and when we do, we’re hit with ambient distress. This is why Zoom meetings seem to have a natural, hard attention limit, and why sitting at a computer has left so many of us physically fatigued.

 

Jenny Davis is on Twitter @Jenny_L_Davis

The term “meme” first appeared in Richard Dawkins’ 1976 bestselling book The Selfish Gene. The neologism is derived from the ancient Greek mīmēma, which means “imitated thing”. Dawkins, a renowned evolutionary biologist, coined it to describe “a unit of cultural content that is transmitted by a human mind to another” through a process that can be referred to as “imitation”. For instance, anytime a philosopher conceives a new concept, their contemporaries interrogate it. If the idea is brilliant, other philosophers may eventually decide to cite it in their essays and speeches, with the outcome of propagating it. Originally, the concept was proposed to describe an analogy between the “behaviour” of genes and that of cultural products. A gene is transmitted from one generation to another, and if selected, it can accumulate in a given population. Similarly, a meme can spread from one mind to another, and it can become popular in the cultural context of a given civilization. Fittingly, the term “meme” is a monosyllable that resembles the word “gene”.

The concept of memes becomes relevant when they are considered as “selfish” entities. Dawkins’ book revolves around the idea that genes are the biological units upon which natural selection acts. Metaphorically, the genes that are positively selected – if they had a consciousness – would for example use their vehicles, or hosts, for their own preservation and propagation. They would behave as though they were “selfish”.

When this principle is applied to memes, we should not believe that cultural products – such as popular songs, books or conversations – can reason in a human sense, exactly as Dawkins did not mean that genes can think as humans do. We basically mean that their intrinsic capability to be retained in the human mind and proliferate does not necessarily favour their vehicles, the humans. As an example, Dawkins proposes the idea of “God”. God is a simplified explanation for a complex plethora of questions on the origin of human existence and, overall, of the entire universe. However, given its comforting power, and its ability to release the human mind from the chains of perpetual anguish, the idea of “God” is contagious. Most likely, starting with the creation of God, the human mind got infected by other memes, such as “life after death”. When they realized they could survive their biological end, humans no longer feared death. However, if taken to the extreme, this meme could favour the spread of “martyrs”, people who would sacrifice their biological life for the sake of the immortal one.

There are many other examples of memes that displayed, and still display, an apparently “selfish” and dangerous behaviour: the religious ideology that led to the massacres of the Crusades, which are estimated to have taken the lives of 1.7 million people; the suicidal behaviour of terrorists; or even, on a global scale, human culture itself as a threat to the well-being of the planet, and to humanity.

Thus, a meme is a viral piece of information, detrimental, beneficial or irrelevant for the host, that is capable of self-replication and propagation in the population, with the potential of lasting forever. This definition is instrumental to understanding the meme’s role today.

Dawkins conceived of memes in a pre-Internet era, when the concept was purely theoretical and aimed at describing the spread of cultural information. In present times, however, thanks to the wide distribution of high-speed Internet and the invention of social media, the old neologism “meme” has acquired a new and specific meaning.

“Internet memes” are described as “any fad, joke or memorable piece of content that spreads virally across the web, usually in the form of short videos or photography accompanied by a clever caption.” Despite the variety of meme types that can be found online, most of them are geared toward causing a stereotypical reaction: brief laughter.

I recently reflected on this stereotypical reaction while re-watching Who Framed Roger Rabbit, a 1988 live-action animated movie set in a Hollywood where cartoon characters and real people co-exist and work together. While the protagonist is a hilarious bunny who was framed for the murder of a businessman, the antagonists are a group of armed weasels who try to capture him. The main trait of these weasels is that they are victims of frequent fits of laughter, which burst out irrationally and cannot be stopped, as their reaction far exceeds the stimulus. The reason for the weasels’ behaviour is not obvious until the end of the film, when they literally laugh themselves to death.

A brief introduction to the concept of humour is instrumental to understanding the message this deadly laughter conveys. The Italian dramatist and novelist Luigi Pirandello articulates it in two phases. The first is called “the perception of the opposite”, according to which the spectator laughs at a scene because the scene is the opposite of what the spectator’s mind would consider a normal situation. Intriguingly, a humoristic scene does not stop here; instead, it induces the spectator to reflect upon the scene. In this second step, called “the feeling of the opposite”, the spectator rationalizes the reasons why the scene appears to be the opposite of what they expected. They stop laughing, take the point of view of the “derided”, and eventually empathize with them. In Who Framed Roger Rabbit, the weasels are incapable of rationalizing the meaning of their laughs, which are reiterated as a vacuous gesture. They laugh when people get hurt, without understanding what it means to get hurt. Given that their irrational instinct to laugh never encounters a rational obstacle, the laughter remains undigested and becomes toxic for their minds. It consumes their souls and ultimately, their mortal bodies. In the movie, the weasels’ death is indeed not caused by a biological blight; rather, their souls literally fly out of their otherwise healthy bodies. Their laughter is de facto a disease that consumes the mind.

Internet memes are integral to communication practices on social media platforms. Some memes are fun, silly and supportive, and their evocation of a smile or laugh is relatively unproblematic. However, other memes are actively degrading: they spread hate at a viral scale, targeting racial and ethnic minorities, people with disabilities, people who are gender non-conforming, and so on. I will focus my analysis on the latter. Why has laughing at socially degrading memes become a normative and widespread practice?

I present two possible explanations.

The first one is exemplified by Arthur Fleck’s character in the recent movie Joker by Todd Phillips. Arthur is a miserable man, afflicted with impulsive laughter in situations of psychological distress or discomfort. Arthur Fleck himself is also a source of laughter. In light of the “feeling of the opposite,” the spectator is therefore confronted with a double scenario: anytime they laugh when Arthur Fleck behaves weirdly or appears ridiculous, they may also realise they shouldn’t. They should not laugh at someone’s laughter that is not genuine and intentional but the symptom of a hidden, unconscious psychological distress. Yet people do laugh at Fleck, and the reason for this laughter is instructive for understanding why we laugh in response to degrading memes. Laughing at Arthur Fleck puts a distance between the spectators and the troubled character. Dealing with other people’s desperation, disability, change or death is a complex matter. It is far simpler “to laugh about it” and move on. This is part of what the “meme industry” is offering.

There is also another explanation for the success of derogatory Internet memes. Laughing is 30 times more likely to happen in a social context than when people are alone. It is also an imitational process, which can be triggered simply by watching other people laugh. Even more intriguingly, in comparison to other mechanisms, such as suppression, laughter is associated with a greater reduction of the stress commonly caused by negative emotions, including terror, rage or distaste. Thus, by definition, laughing also constitutes a social way to relieve pain, to share the grief. In this context, in order to emotionally counterbalance the negative sensations triggered by the obscenity or the turpitude of an Internet meme, the user laughs, and immediately spreads the source of their laughter in order to laugh with others.

Now, moving back to Richard Dawkins’ original definition of memes, are “Internet memes” beneficial or detrimental to the host? Should they be pictured as “selfish”?

On the individual level, Internet memes, including the socially derogatory ones, have clear benefits for the host. As previously explained, the laughter induced by memes generates personal well-being and social connection.

However, if people are, at scale, laughing at violence, at abuses, at disparities, there may emerge a calloused approach to human suffering, an alarming process which is indeed already on the rise. The difference between laughing at a picture that makes fun of a marginalized group and allowing their discrimination, mistreatment and segregation in real life is very subtle, and the two practices are connected. There is a direct line between laughing at a meme of someone who is hurt, ill, or dead and apathetically watching your nation’s army bombing a village. Not to mention Internet memes that tacitly portray white supremacy. Let us imagine politicians, seated in their offices, laughing at a screen.

From this wide picture, Internet memes that carry such messages emerge as cultural traits that are ultimately dangerous for the well-being of the community, even if not for the individual per se. This scenario fosters a memetic diffusion of oppression, one shot of laughter at a time.

Headline pic via: Source

Brief biography

Simone is a molecular biologist completing his doctorate at the University of Ulm, Germany. He is Vice-Director at Culturico, where his writings span from Literature to Sociology, from Philosophy to Science.

Simone can be reached on Twitter: @simredaelli

Simone can be reached at: simred [at] hotmail . it

 

 

When it comes to sensitive political issues, one would not necessarily consider Reddit the first port of call for up-to-date and accurate information. Despite being one of the most popular digital platforms in the world, Reddit also has a reputation as a space which, amongst the memes and play, fosters conspiracy theories, bigotry, and the spread of other hateful material. It would seem, then, that Reddit would be the perfect place for the development and spread of the myriad conspiracy theories and misinformation that have followed the spread of COVID-19 itself.

Despite this, the main discussion channel, or ‘subreddit’, associated with coronavirus — r/Coronavirus — alongside its sister-subreddit r/COVID19, has quickly developed a reputation as one of the most reliable sources of up-to-date information about the virus. How Reddit has achieved this could provide a framework for how large digital platforms might engage with difficult issues such as coronavirus in the future.

r/Coronavirus has exploded in popularity as the virus has spread around the world. In January the subreddit had just over 1,000 subscribers — a small but dedicated cohort of users interested in the development and spread of the at-the-time relatively unknown disease. Since then it has ballooned to over 1.9 million subscribers, with hundreds of posts appearing on the channel every day.


In turn Reddit, which has a reputation as a space where ‘anything goes’, has had to develop a unique approach to dealing with discussion on the platform, one that is proving quite successful. How has it done it?

The success of Reddit’s r/Coronavirus lies primarily in the way the space has been moderated. Subreddits can be founded by any registered user. These users usually then act as moderators and, depending on the size of the subreddit, may recruit other moderators to help with this process. Larger subreddits often work with the site-wide administrators of Reddit in order to maintain the effective running of the specific subreddit.

While Reddit has a range of site-wide rules that apply to the platform overall, subreddit moderators also have the capacity to shape both the look of the space and the rules which apply to it. As Tarleton Gillespie argues in his book Custodians of the Internet, content policies and moderation practices help shape public discourse online. The success of r/Coronavirus lies in how moderators and overall site-administrators have shaped the space.

We can identify three clear things that the Reddit admin and moderators of r/Coronavirus have done to effectively shape the space.

The first lies in the rules of the subreddit. r/Coronavirus has a total of seven rules, most of which focus on the types of content allowed on the subreddit. These rules are: (1) be civil, (2) no edited titles, (3) avoid reposting information, (4) avoid politics, (5) keep information quality high, (6) no clickbait, and (7) no spam or self-promotion. In essence these rules dictate that r/Coronavirus should be limited entirely to information about the virus, sourced from high-quality outlets, which are linked to in the subreddit itself. Users are only allowed to post content that is based on a news report or other form of information, with titles that directly replicate the content of the report itself. Posts that don’t link back to high-quality sources, such as posts that are text only, are explicitly banned and deleted by the moderators. r/Coronavirus promotes this information-based approach through the design of the subreddit as well. Redditors, for example, are able to filter content by region, giving localised content based on where a user lives. These regional filters are clearly visible on each post, meaning users can easily see where information comes from.
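To make this concrete: rules like these are typically enforced with Reddit’s AutoModerator, which moderators configure in YAML on a subreddit wiki page. The snippet below is a hypothetical sketch of how two of the rules above might be automated; it is illustrative only, not r/Coronavirus’s actual configuration, which is not public.

```yaml
# Hypothetical sketch only -- not r/Coronavirus's actual configuration.
# Enforce "keep information quality high": remove text-only (self) posts.
type: text submission
action: remove
action_reason: "Text-only post"
comment: |
    Text posts are not allowed here. Please submit a link to a
    high-quality source, using that source's original headline
    as your title.
---
# Enforce "no clickbait": hold sensationalised titles for human review.
type: link submission
title (includes): ["BREAKING", "you won't believe", "must watch"]
action: filter
action_reason: "Possible clickbait title"
```

The first rule removes the post outright and leaves an explanatory comment; the second uses `filter`, which holds the post in the moderation queue so that a human moderator makes the final call.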

These content rules promote a subreddit that is focused on high quality information and avoids the acrimonious debates for which Reddit is (in)famous. This is best articulated through rule 4, ‘avoid politics’, which r/Coronavirus defines as shaming campaigns against businesses and individuals, posts about a politician’s take on events (unless they are actively discussing policy or legislation), and some opinion pieces. The moderators argue that posts about what has happened are preferred to posts about what should happen, in turn focusing content on information about what is going on, rather than debates about the consequences and implications of this.

Secondly, r/Coronavirus manages these rules through an active moderation process. The existence of rules is all well and good, but if they are unenforceable they often mean nothing. r/Coronavirus has developed a large moderation team, each member of which dedicates large amounts of time to the site. r/Coronavirus has approximately 60 moderators, many of whom have expertise in the area – including researchers of infectious disease, virologists, computer scientists, doctors and nurses, and more. This breadth of expertise has given moderators an authority within the space, reducing internal debates (or what is colloquially known as ‘Subreddit Drama’) about moderation practices. Moderators in turn play an active role in the subreddit, including posting (through AutoModerator) a daily discussion thread, which includes links to a range of high-quality information about the disease; a sketch of how such a recurring thread can be configured follows below.
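That daily thread is itself automatable. AutoModerator’s scheduled-posts feature takes a YAML schedule; the sketch below shows one plausible shape for a sticky daily discussion thread, written against the schedule syntax of the standalone AutoModerator bot as I recall it. Every value is invented for illustration and none of it comes from r/Coronavirus.

```yaml
# Illustrative sketch -- invented values, not the subreddit's real schedule.
---
first: "May 1, 2020 12:00"
repeat: 1 day
sticky: true
title: "Daily Discussion Thread - {{date %B %d, %Y}}"
text: |
    Please keep general questions and casual discussion in this thread.
    Links to situation reports from health authorities are posted below.
```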

Finally, Reddit has worked hard to make r/Coronavirus the go-to place for Redditors who wish to engage with content on the disease. As the situation became more severe, Reddit began to send push notifications encouraging users to join. Registered users of Reddit who are not following the subreddit also now see occasional web banners encouraging them to join. These actions have promoted r/Coronavirus as the official space on Reddit for coronavirus-related issues, implicitly discrediting other channels about the disease which are under less control from the site-wide administrators and may include more political material. This allows Reddit administrators to more effectively control discussion of the disease on the platform by channeling activity through one highly-moderated space, rather than having to manage a number of messier communities.

Of course, all of this has limitations. r/Coronavirus is a space for information, and information only. But the coronavirus, and the response to it, is political, and it requires political engagement. Every day politicians are making society-altering decisions in response to this crisis – from the increase of policing to huge stimulus packages to keep economies going. Due to the way r/Coronavirus is shaped, political discussions around the consequences and implications of these decisions, as well as debates about how governments should respond, are either very limited or simply not possible. In turn, while r/Coronavirus has done a good job of creating a space where information about the disease can be shared, it has not solved the problem of how to create a political space on Reddit which does not automatically descend into bigotry and acrimony.

In creating this information space r/Coronavirus is also very hierarchical. Moderators have a large amount of power, in particular in deciding what is considered ‘high quality’ information. This reinforces particular hierarchies about the value of particular types of science and other authoritative sources of information, with little space to challenge the role of these professions in the policy response to the spread of the disease.

r/Coronavirus therefore only plays a particular role in the discussion about coronavirus on Reddit – it is a space to gather information on what has happened in relation to the disease. But that role is also important in and of itself, particularly in a time where there are such big changes happening around the world, and at such speed. In doing so Reddit has created an effective subreddit that is an excellent one-stop-shop for all coronavirus information. It has done so, ironically, by going actively off-brand.

Simon Copland (@SimonCopland) is a PhD candidate in Sociology at the Australian National University (ANU), studying the online ‘manosphere’ on Reddit. He has research interests in online misogyny, extremism and male violence, as well as in the politics of digital platforms and Reddit specifically.

 

Headline image via: Source

How is robot care for older adults envisioned in fiction? In the 2012 movie ‘Robot and Frank’, directed by Jake Schreier, the son of an older adult – Frank – with moderate dementia gives his father the choice between being placed in a care facility or accepting care from a home-care robot.

Living with a home-care robot 

Robots in fiction can play a pivotal role in influencing the design of actual robots. It is therefore useful to analyze dramatic productions in which robots fulfill roles for which they are currently being designed. High-drama, action-packed robot films make for big hits at the box office. Slower-paced films, in which robots integrate into the spheres of daily domestic life, are perhaps better positioned to reveal something about where we are as a society, and possible future scenarios. ‘Robot and Frank’ is one such film, focusing on care work outsourced to machines.

‘Robot and Frank’ focuses on the meeting of different generations’ widely varying acceptance of robot technology. The main character, Frank, is an older adult diagnosed with moderate dementia. He appreciates a simple life, having retired from a career as a cat burglar. Frank lives alone in a small village, and most of his daily interaction is with the local librarian. Due to his worsening dementia, Frank’s son Hunter gives him a home-care robot. Frank says, in his own words, “[I] don’t want a robot, and don’t need a robot”. However, after a while, the robot becomes an increasingly important part of his life – not solely because of his medical and daily needs, but because of how he reinvents himself through robotic aid. The robot is advanced enough that communication between the two almost fully resembles human interaction. The robot is portrayed in a comedic manner, as when Frank is about to drink an unhealthy beverage:

 

Robot: You should not drink those, Frank. It is not good for gout.
Frank: I don’t have gout.
Robot: You do not have gout. Yet.

 

Hunter programmed the robot to aid Frank through healthy eating and mental and physical exercises. Although Frank is still convinced that this is a waste of money and time, he gradually develops a bond and level of trust that changes his perception of his robot and his relationship with it. By walking to and from the local library, cooking and eating meals, meeting new people and sharing past experiences, Frank reconnects with his controversial past as a cat burglar. Frank’s unnamed care robot and the librarian’s robot colleague ‘Mr. Darcy’ are the only two robots featured in the movie. On several occasions the robots meet at the same time as their owners do. The robots do not seem to take much notice of each other’s presence, but the human actors demand that the machines greet each other and make conversation. When asked to do so, Mr. Darcy replies: “I have no functions or tasks that require verbal interaction with VGC 60 L” (the care robot’s model number). Frank and the librarian seem surprised that the robots do not wish to interact with each other and jokingly ask how the robots are going to form a society when humans are extinct if they do not wish to speak together. (This is an intriguing question with several fascinating portrayals, e.g. in shows like Battlestar Galactica, where robots develop spirituality.) Even though Frank and the librarian have accepted their robot helpers as useful companions, this shows that the human actors might still see the robots as somewhat alien and incapable of acting outside of their programming.

Questions raised by automated care

In a wider scientific and technological context, the movie triggers relevant discussions and questions on the ‘humanity of robots’ pertaining to human-robot relations, robot-robot relations, as well as human-human relations. This influences robot design studies and debates about what robots could, should or should not do. This is especially salient because the context of the film – care by robots – is often a contested space. However, there is a mismatch between what robots in fiction are portrayed as capable of and what actual robots can do. Despite the fact that robots fundamentally lack human factors such as emotion, ‘Robot and Frank’ provides an opportunity to consider what constitutes a good relationship. Frank’s relationship with his robot is depicted as far more giving and mutual than his relationship with his children. This is but one of the many possibilities that technologies such as care robots can produce in dialogue with humans. By exploring this interaction, new perspectives and understandings of what is normal may come to light. This is an especially important investigation in the healthcare context because significant changes in healthcare technology will have significant consequences for both patients and workers, both at home and in healthcare facilities.

Imagining and planning the implementation of care robots or other technologies not only creates opportunities for those involved; it also leads to controversies and deep challenges for those who are engaged in technological transformations. Therefore, it is pivotal that all new implementations are developed in close dialogue with those most likely to experience their fullest effects. ‘Robot and Frank’ breaks down stereotypes of human-robot relations by showing that, given time, productive and close relationships may arise. Perhaps robots can most easily and successfully be introduced into people’s lives by providing time and opportunities for significant exposure to each other.

Caregiver exhaustion versus robotic resilience 

Being an informal caregiver is a difficult task, especially when taking care of a parent who was previously one’s main support. Conflicts often arise as a result of the role change between parent and offspring that comes with old age. It is not only the human-robot relationship in the movie that sparks thoughts for discussion. Frank’s two children, Hunter and Madison, have distinct ways of dealing with their father’s growing dementia and solitude. Because of his illness, Frank is in need of domestic support. Hunter, the main informal caregiver, is exhausted by the tasks of caring. He lives several hours away and is busy with his own work and family life – a situation likely familiar to many adults who care for aging parents. Hunter wants to outsource some of his care work to a robot.

There is little love coming from Hunter, and it is unclear how much of this stems from a strained childhood relationship and how much from the burden Hunter feels in his caretaker role. For Frank’s daughter, Madison, the story is quite different. Being an anti-robot activist, she spends her days traveling the globe and has little time to see her dad. Filled with both a contempt for robots and a bad conscience about not seeing her dad regularly, she decides to move in and care for him – turning off the robot in the process. This leads the house to fall into chaos, as she does not cook healthy or tasty food, cannot clean, and becomes too tired for fun excursions. Frank further aggravates the situation by making messes on purpose and complaining to his daughter that her caregiving is unsatisfactory. In his frustration at his daughter’s arrival, the bond between him and the robot becomes increasingly visible. Madison picks up on this special bond through Frank’s reluctant acknowledgement that the robot is his friend. She turns the robot back on and agrees to let it help around the house. She soon becomes accustomed to the robotic services. Madison comes to like – or at least tolerate – the robot, especially when it serves her drinks.

Frank’s relationship with his adult children is challenging, not just because of his criminal past and the long prison sentences he served, but also because of the time and effort that they feel obliged to spend on him. Throughout the movie, meaningful friendships and high-quality interactions between people who share interests seem to be more important than vague family engagements and obligations. Although Frank expresses love for his children, there are tense and difficult moments for all as his dementia worsens. When Frank’s condition is at its worst he struggles to recognize his children, let alone remember what is going on in their day-to-day lives. He pretends to remember what they are talking about, but his confusion is painfully clear. As the children have their own lives, they seem more focused on his medical well-being and less interested in Frank as a person. For the robot, who is solely devoted to Frank, the situation is different. Time is needed to create trust and friendship. The latter aspect surely seems important to Frank as he finds renewed energy and motivation to pursue his controversial interest in planning robberies and stealing, supported reluctantly, but compassionately, by his robotic companion.

–SPOILERS–

Can a care robot help retired thieves with diamond theft?

Towards the end of the story, Frank is the main suspect in a large-scale jewelry theft. Because he wipes the memory of his care robot, the robot cannot be used as conclusive evidence to determine whether Frank is guilty. The ethical side of diamond theft is of less importance here than the ethical side of care through technology. It is not what Frank steals that is of interest, but that he trains his care robot to steal. This raises some ethical dilemmas: should Frank no longer be allowed to have a care robot because he may have used it to commit a crime, and is Frank even indictable as a criminal to begin with, given his mental state? Should some of the blame lie with the programmers who neglected to incorporate legal constraints in the care robot’s programming?

In the final scene of the movie, Frank has moved into a care home where other residents have identical care robots. As Frank’s robot confirmed several times throughout the movie, Frank’s dementia improved greatly during the time they spent together, as someone was there for him 100% of the time, making sure he had a healthy body and mind, and even allowing some escapades of theft as long as they kept Frank engaged. Care is at the core of human value, dignity and autonomy, and in this movie we learn how a robot can help care for someone – in a deeply human way.

 

The authors are on Twitter: @rogerSora, @SutcliffeEdward & @NienkeBruyning

 

The best way I can describe the experience of summer 2019-2020 in Australia is with a single word: exhausting. We have been on fire for months. There are immediate threats in progress and new ones at the ready. Our air quality levels dip in and out of hazardous, more often in the former category than the latter. This has been challenging for everyone. For many, mere exhaustion may feel like a luxury.

In the trenches of the ongoing fires are the Australian emergency service workers, especially the “fireys,” who have been tireless in their efforts to save homes, people, and wildlife. While the primary and most visible part of their work is the relentless job of managing fires, there is also a secondary – though critical – task of public communication, keeping people informed and providing material for anxious-refreshers looking for information about “fires near me.” In the last few days, as fires have approached the Canberra suburbs where I live, an interesting variant of public safety communication has emerged: Instagrammable photography.

A tweet from the local emergency service account (@ACT_ESA) announced Wednesday night that a major road would be closed to anyone who isn’t a resident of the area. The reason for the closure was to prevent a growing obstacle to public safety—disaster tourism. Apparently, people have been “visiting” the fires, generally taking dramatic photographs to share on social media. These disaster tourists put themselves in harm’s way, clog the roads, and generally create more work for emergency responders. The road closure was a hard and fast way to keep people out. It was not, however, the ESA’s only action. In addition to closing roads and posting notices, the team also created and shared imagery of the fires-in-progress with direct allusion to the perceived goals of would-be disaster tourists (i.e., social sharing).

 

The response by the ACT ESA is a subtle combination of empathy, understanding, and practicality. Rather than a punitive or derogating reproach, the response assumes – I suspect correctly – that visitors aren’t there to get in the way or cultivate clout, but to bear witness, bolster awareness, seek validation, and more generally, cope. Visually, the fires traverse beauty and horror in a way that is difficult to describe. You need to see it for yourself. And that’s why people take and share pictures. They are in the midst of something that is inarticulable, and yet feel compelled to articulate it through the means at their disposal. Capturing the destruction, from the best angle, means speaking with clarity. It means concretizing an experience that would be surreal, were it not happening with such immediacy and acuity. Words do little justice to the gorgeous tragedy of a red sunset.

And so, the work of fire safety in Australia 2020 now includes mollifying would-be disaster tourists by taking more Instagrammable photos than visitors could take themselves. It’s a warning and a plea, delivered with a gift.

Headline Image Credit Gary Hooker, ACTRFS (Australian Capital Territory Rural Fire Service), via @ACT_ESA

Want to help? Here are some options

Jenny Davis is on Twitter @Jenny_L_Davis

 

Drew Harwell (@DrewHarwell) wrote a balanced article in the Washington Post about the ways universities are using wifi, bluetooth, and mobile phones to enact systematic monitoring of student populations. The article offers multiple perspectives that variously support and critique the technologies at play and their institutional implementation. I’m here to lay out in clear terms why these systems should be categorically resisted.

The article focuses on the SpotterEDU app, which advertises itself as an “automated attendance monitoring and early alerting platform.” The idea is that students download the app and universities can then easily keep track of who’s coming to class and also identify students who may be in, or on the brink of, crisis (e.g., a student only leaves her room to eat and therefore may be experiencing mental health issues). As university faculty, I would find these data useful. They are not worth the social costs.

One social consequence of SpotterEDU and similar tracking applications is that these technologies normalize surveillance and degrade autonomy. This is especially troublesome among a population of emerging adults. For many traditionally aged students (18-24), university is a time of developmental transition—like adulting with a safety net. There is a fine line between mechanisms of support and mechanisms of control. These tracking technologies veer towards the latter, portending a very near future in which extrinsic accountability displaces intrinsic motivation and data extraction looms inevitable.

Speaking of data extraction, these tracking technologies run on data. Data is a valuable resource. Historically, valuable resources are exploited to the benefit of those in power and the detriment of those in positions of disadvantage. This pattern of reinforced and amplified inequality via data economies has already played out in public view (see: targeted political advertising, racist parole decisions, sexist hiring algorithms). One can imagine numerous ways in which student tracking will disproportionately affect disadvantaged groups. To name a few: students on financial aid may have their funding predicated on behavioral metrics such as class attendance or library time; “normal” behaviors will be defined by averages, which implicitly creates standards that reflect the demographic majority (e.g., white, upper-middle class) and flags demographic minorities as abnormal (and thus in need of deeper monitoring or intervention); students who work full-time may be penalized for attending class less regularly or studying from remote locations. The point is that data systems come from society and society is unequal. Overlaying data systems onto social systems wraps inequality in a veneer of objectivity and intensifies its effects.
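The “averages define normal” problem is easy to demonstrate with a toy model. The sketch below (all numbers invented for illustration, with no relation to any real tracking product) fits nothing fancier than a z-score threshold to campus-presence data dominated by a residential majority, and shows how it disproportionately flags commuting, full-time-working students as “abnormal”:

```python
# Toy model of "normal defined by averages" -- all numbers are invented.
import numpy as np

rng = np.random.default_rng(0)

# Hours per week detected on campus: a residential majority, and a
# minority of students who commute and work full-time off campus.
majority = rng.normal(loc=60, scale=8, size=900)
commuters = rng.normal(loc=25, scale=5, size=100)
everyone = np.concatenate([majority, commuters])

# "Normal" is defined by the population mean, which the majority dominates.
mu, sigma = everyone.mean(), everyone.std()
flagged = (everyone - mu) / sigma < -2  # flag "abnormally low" engagement

print(f"majority students flagged: {flagged[:900].mean():.1%}")
print(f"commuting students flagged: {flagged[900:].mean():.1%}")
# Typical output: ~0% of the majority flagged, most commuters flagged --
# purely because the baseline reflects majority behavior.
```

Nothing in the model is malicious; the disparate impact falls directly out of defining normality by the average.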

Finally, tracking systems will not be constrained to students. They will almost certainly spread to faculty. Universities are under heavy pressure to demonstrate value for money. They are funded by governments, donors, and tuition-paying students and their families. It is not at all a stretch to say that faculty will be held to account for face time with students, time spent in offices, duration of classes, and engagement with the university. This kind of monitoring erodes the richness of the academic profession, with profound effects on the nature of work for tenure-line faculty and the security of work for contingent lecturers (who make up an increasing majority of the academic workforce).

To end on a hopeful note, SpotterEDU and other tracking applications are embedded in spaces disposed to collective action. Students have always been leaders of social change and drivers of resistance. Faculty have an abundance of cultural capital to expend on such endeavors. These technologies affect everyone on campus. Tenure-line faculty, contingent faculty, and students each have something to lose and thus a shared interest and common struggle[1]. We are all in the mess together and together, we can resist our way out.  

Jenny Davis is on Twitter @Jenny_L_Davis

Headline pic via: Source


[1] I thank James Chouinard (@jamesbc81) for highlighting this point

Mark Zuckerberg testified to Congress this week. The testimony was supposed to address Facebook’s move into the currency market. Instead, the hearing mostly focused on Facebook’s policy of not banning or fact-checking politicians on the platform. Zuckerberg roots the policy in values of free expression and democratic ideals. Here is a quick primer on why that rationale is ridiculous.

For background, Facebook does partner with third party fact-checkers, but exempts politicians’ organic content and paid advertisements from review. This policy is not new. Here is an overview of the policy’s parameters.

To summarize the company’s rationale, Facebook believes that constituents should have unadulterated knowledge about political candidates. When politicians lie, the people should know about it, and they will know about it because of a collective fact-checking effort. This is premised on the assumption that journalists, opposing political operatives, and the vast network of Facebook users will scrutinize all forms of political speech, thus debunking dishonest claims and exposing dishonest politicians.

In short, Facebook claims that crowdsourced fact-checking will provide an information safety net which allows political speech to remain unregulated, thus fostering an optimally informed electorate.

On a simple technical level, the premise of crowdsourced fact-checking on Facebook does not work, because content is microtargeted. Facebook’s entire financial structure is premised on delivering different content—both organic and advertised—to different users. Facebook gives users the content that will keep them “stuck” on the site as long as possible, and distributes advertisements to granular user segments who will be most influenced by specific messages. For these reasons, each Facebook feed is distinct and no two Facebook users encounter the exact same content.

Crowdsourced fact-checking only works when “the crowd” all encounter the same facts. On Facebook, this is not the case, and that is by design. Would-be fact-checkers may never encounter a piece of dishonest content, and if they do, those inclined to believe the content (because it supports their existing worldview) are less likely to encounter the fact-checker’s debunking. The toy simulation below illustrates the scale of the problem.
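This is a back-of-the-envelope sketch, with every number invented for illustration: a false ad is microtargeted at a small segment, and only the fact-checkers who happen to fall inside that segment ever see the claim they would debunk.

```python
# Toy model of microtargeting vs. crowdsourced fact-checking.
# All numbers are hypothetical.
import random

random.seed(1)

N = 100_000              # users on the platform
SEGMENT = 0.02           # false ad microtargeted at 2% of users
CHECKERS = 0.01          # 1% of users would debunk the claim on sight

users = range(N)
saw_ad = set(random.sample(users, int(N * SEGMENT)))
checkers = set(random.sample(users, int(N * CHECKERS)))

# Broadcast world: all checkers see the same ad and can debunk it.
# Microtargeted world: only checkers inside the targeted segment see it.
checkers_who_saw = saw_ad & checkers

print(f"users shown the false ad: {len(saw_ad):,}")
print(f"fact-checkers who ever saw it: {len(checkers_who_saw):,} "
      f"(out of {len(checkers):,})")
# Typical result: roughly 20 of 1,000 would-be checkers ever encounter
# the ad -- and their debunkings are in turn distributed to their own
# segments, not to the ad's intended audience.
```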

Facebook’s ideological justification for unregulated political speech is not just thin, it’s technically untenable. I’m going to assume that Zuckerberg understands this. Facebook’s profit motive thus shines through from behind a moral veil, however earnestly Zuckerberg presents the company’s case.

 

Jenny Davis is on Twitter @Jenny_L_Davis

Headline image via: source

 

As technology expands its footprint across nearly every domain of contemporary life, some spheres raise particularly acute issues that illuminate larger trends at hand. The criminal justice system is one such area, with automated systems being adopted widely and rapidly—and with activists and advocates beginning to push back with alternate politics that seek to ameliorate existing inequalities rather than instantiate and exacerbate them. The criminal justice system (and its well-known subsidiary, the prison-industrial complex) is a space often cited for its dehumanizing tendencies and outcomes; technologizing this realm may feed into these patterns, despite proponents pitching this as an “alternative to incarceration” that will promote more humane treatment through rehabilitation and employment opportunities.

As such, calls to modernize and reform criminal justice often manifest as a rapid move toward automated processes throughout many penal systems. Numerous jurisdictions are adopting digital tools at all levels, from policing to parole, in order to promote efficiency and (it is claimed) fairness. However, critics argue that mechanized systems—driven by Big Data, artificial intelligence, and human-coded algorithms—are ushering in an era of expansive policing, digital profiling, and punitive methods that can intensify structural inequalities. In this view, the embedded biases in algorithms can serve to deepen inequities, via automated systems built on platforms that are opaque and unregulated; likewise, emerging policing and surveillance technologies are often deployed disproportionately toward vulnerable segments of the population. In an era of digital saturation and rapidly shifting societal norms, these contrasting views of efficiency and inequality are playing out in quintessential ways throughout the realm of criminal justice.

Tracking this arc, critical discourses on technology and social control have brought to light how decision-making algorithms can be a mechanism to “reinforce oppressive social relationships and enact new modes of racial profiling,” as Safiya Umoja Noble argues in her 2018 book, Algorithms of Oppression. In this view, the use of machine learning and artificial intelligence as tools of justice can yield self-reinforcing patterns of racial and socioeconomic inequality. As Cathy O’Neil discerns in Weapons of Math Destruction (2016), emerging models such as “predictive policing” can exacerbate disparate impacts by perpetuating data-driven policies whereby, “because of the strong correlation between poverty and reported crime, the poor continue to get caught up in these digital dragnets.” And in Automating Inequality (2018), Virginia Eubanks further explains how marginalized communities “face the heaviest burdens of high-tech scrutiny,” even as “the widespread use of these systems impacts the quality of democracy for us all.” In talks deriving from his forthcoming book Halfway Home, Reuben Miller advances the concept of “mass supervision” as an extension of systems of mass incarceration; whereas the latter has drawn a great deal of critical analysis in recent years, the former is potentially more dangerous as an outgrowth of patterns of mass surveillance and the erosion of privacy in the digital age—leading to what Miller terms a “supervised society.”

Techniques of digital monitoring impact the entire population, but the leading edge of regulatory and punitive technologies are applied most directly to communities that are already over-policed. Some scholars and critics have been describing these trends under the banner of “E-carceration,” calling out methods that utilize tracking and monitoring devices to extend practices of social control that are doubly (though not exclusively) impacting vulnerable communities. As Michelle Alexander recently wrote in the New York Times, these modes of digital penality are built on a foundation of “corporate secrets” and a thinly veiled impetus toward “perpetual criminalization,” constituting what she terms “the newest Jim Crow.” Nonetheless, while marginalized sectors are most directly impacted, as one of Eubanks’s informants warned us all: “You’re next.”

Advocates of automated and algorithmic justice methods often tout the capacity of such systems to reduce or eliminate human biases, achieve greater efficiency and consistency of outcomes, and ameliorate existing inequities through the use of better data and faster results. This trend is evident across a myriad of jurisdictions in the U.S. in particular (but not solely), as courts nationwide “are making greater use of computer algorithms to help determine whether defendants should be released into the community while they await trial.” In 2017, for instance, New Jersey introduced a statewide “risk assessment” system using algorithms and large data sets to determine bail, in some cases serving to potentially supplant judicial discretion altogether.

Many have been critical of these processes, noting that these automated decisions are only as good as the data points utilized—which are often tainted both by preexisting subjective biases and by prior accumulations of structural bias recorded in people’s records. The algorithms deployed for these purposes are primarily conceived as “proprietary techniques” that are largely opaque and obscured from public scrutiny; as a recent law review article asserts, we may be in the process of opening up “Pandora’s algorithmic black box.” In evaluating these emerging techniques, researchers at Harvard University thus have expressed a pair of related concerns: (1) the critical “need for explainable algorithmic decisions to satisfy both legal and ethical imperatives,” and (2) the fact that “AI systems may not be able to provide human-interpretable reasons for their decisions given their complexity and ability to account for thousands of factors.” This raises foundational questions of justice, ethics, and accountability, but in practice this discussion is in danger of being mooted by widespread implementation.

The net effect of adopting digital mechanisms for policing and crime control without more scrutiny can yield a divided society in which the inner workings (and associated power relations) of these tools are almost completely opaque and thus shielded from critique, while the outer manifestations are concretely inscribed and societally pervasive. The CBC radio program SPARK recently examined a range of these new policing technologies, from Body Cams and virtual Ride-Along applications to those such as Shot Spotter that draw upon data gleaned from a vast network of recording devices embedded in public spaces. Critically assessing the much-touted benefits of such nouveau tools as a “Thin Blue Lie,” Matt Stroud challenges the prevailing view that these technologies are inherently helpful innovations, arguing instead that they have actually made policing more reckless, discriminatory, and unaccountable in the process.

This has prompted a recent spate of critical interventions and resistance efforts, including a network galvanized under the banner of “Challenging E-Carceration.” In this lexicon, it is argued that “E-Carceration may be the successor to mass incarceration as we exchange prison cells for being confined in our own homes and communities.” The cumulative impacts of this potential “net-widening” of enforcement mechanisms include new technologies that gather information about our daily lives, such as license plate readers and facial recognition software. As Miller suggested in his invocation of “mass supervision” as the logical extension of such patterns and practices, these effects may be most immediately felt by those already overburdened by systems of crime control, but the impacts are harbingers of wider forms of social control.

Some advocates thus have begun calling for a form of “digital sanctuary.” An important intervention along these lines has been offered by the Sunlight Foundation, which advocates for “responsible municipal data management.” Their detailed proposal begins with the larger justice implications inherent in emerging technologies, calling upon cities to establish sound digital policies: “Municipal departments need to consider their formal data collection, retention, storage and sharing practices, [and] their informal data practices.” In particular, it is urged that cities should not collect sensitive information “unless it is absolutely necessary to do so,” and likewise should “publicly document all policies, practices and requests which result in the sharing of information.” In light of the escalating use of data-gathering systems, this framework calls for protections that would benefit vulnerable populations and all residents.

These notions parallel the emergence of a wider societal discussion on technology, providing a basis for assessing which current techniques present the greatest threats to, and/or opportunities for, the cultivation of justice. Despite these efforts, we are left with critical questions of whether the debate will catch up to utilization trends, and how the trajectory of tools will continue to evolve if left unchecked. As Adam Greenfield plaintively inquired in his 2017 book Radical Technologies: “Can we make other politics with these technologies? Can we use them in ways that don’t simply reproduce all-too-familiar arrangements of power?” This is the overarching task at hand, even as opportunities for public oversight seemingly remain elusive.

 

Randall Amster, J.D., Ph.D., is a teaching professor and co-director of environmental studies at Georgetown University in Washington, DC, and is the author of books including Peace Ecology. Recent work focuses on the ways in which technology can make people long for a time when children played outside and everyone was a great conversationalist. He can be reached on Twitter @randallamster.

 



In the wake of the terrifying violence that shook El Paso and Dayton, there have been a lot of questions about the role of the Internet in facilitating communities of hate and the radicalization of angry white men. Digital affordances like anonymity and pseudonymity are especially suspect for their alleged ability to provide cover for far-right extremist communities. These connections seem crystal clear. For one, 8chan, an anonymous image board, has hosted several far-right manifestos posted to its feeds in the lead-up to mass shootings. And Kiwi Farms, a forum board populated with trolls and stalkers who spend their days monitoring and harassing women, has been keeping a record of mass killings; it became infamous after its administrator “Null,” Joshua Conner Moon, refused to take down the Christchurch manifesto.

The Kiwi Farms community claims merely to be archiving mass shootings; however, it’s clear that the racist and misogynistic politics on the forum board are closely aligned with those of the shooters. The Christchurch extremist was allegedly a member of the Kiwi Farms community and had posted white supremacist content on the forum. New Zealand authorities requested access to the site’s data to assist in their investigation and were promptly refused. Afterwards, Null encouraged Kiwi users to use anonymizing tools and purged the website’s data. It is becoming increasingly clear that these far-right communities are radicalizing white men to commit atrocities, even if such radicalization is only a tacit consequence of constant streams of racist and sexist vitriol.

With the existence of sites like 8chan and Kiwi Farms, it becomes exceedingly easy to blame digital technology as a root cause of mass violence. Following the recent shootings, the Trump administration attempted to pin the root of the US violence crisis on, among other things, video games. And though this might seem like a convincing explanation of mass violence on the surface, since angry white men are known to spend time playing violent video games like Fortnite, there is little conclusive or convincing empirical evidence causally linking video games to acts of violence.

One pattern, however, is unmistakable: mass and targeted violence coalesce around white supremacists and nationalists. In fact, as FBI director Christopher Wray told the US Congress, most instances of domestic terrorism come from white supremacists. From this perspective, it’s easy to see how technological explanations are a bait and switch that hides white supremacy behind a smoke screen. This is a convenient strategy for Trump, whose constant streams of racism have legitimized a renewed rise in white supremacy and far-right politics across the US.

For those of us who research social media and trolling, one thing is certain: easy technological solutions risk arbitrary punitive responses that don’t address the root of the issue. Blaming the growing violence crisis on technology will only lead to an increase in censorship and surveillance and intensify the growing chill of fear in the age of social media.

To better understand this issue, the fraught story of the anonymous social media platform Yik Yak is instructive. As a mainstream platform, Yik Yak was used widely across North American university and college campuses. Yak users could communicate anonymously on a series of GPS-determined local feeds, where they could upvote and downvote content and engage in nameless conversations, with randomly assigned images delineating users from one another.

Tragically, Yik Yak was plagued by vitriolic and toxic users who engaged in bullying, harassment, and racist or sexist violence. This included more extreme threats, such as bomb threats, threats of gun violence, and threats of racist lynching. The seemingly endless stream of vitriol prompted an enormous amount of negative public attention that had alarming consequences for Yik Yak. After being removed from the top charts of the Google Play Store for allegedly fostering a hostile climate on the platform, Yik Yak administrators removed the anonymity feature and imposed user handles in order to instill a sense of accountability. Though this move was effective in dampening the toxic and violent behavior on Yik Yak’s feeds, it also led to users abandoning the platform and the company eventually collapsing.

Though anonymity is often associated with facilitating violence, the ability to be anonymous on the Internet does not directly give rise to violent digital communities or acts of IRL (“in real life”) violence. In my ethnographic research on Yik Yak in Kingston, Ontario, I found that despite the intense presence of vitriolic content, there was also a diverse range of users who engaged in forms of entertainment, leisure, and caretaking. And though anonymity clearly affords users the ability to engage in undisciplined or vitriolic behavior, the Yik Yak platform, much like other digital and corporeal publics, allowed users to engage in creative and empowering forms of communication that otherwise wouldn’t exist.

For instance, there was a contingent of users who were able to communicate their mental health issues and secret everyday ruminations. Users in crisis would post calls for help that were often met by other users offering some form of caretaking, deep and helpful conversation, or the sharing of crucial resources. Other users expressed that they were able to be themselves without the worrisome consequences of the discrimination that being LGBTQ or a person of color often entails.

What was clear to me was that there was an abundance of human interaction that would never flourish on social media platforms where you are forced to identify under your legal name. Anonymity has a crucial place in a culture that has become accustomed to constant surveillance from corporations, government institutions, and family and peers. Merely removing the ability to interact anonymously on a social media platform doesn’t actually address the underlying causes of violent behavior. But it does discard a form of communication with increasingly important social utility.

In her multiyear ethnography on trolling practices in the US, researcher Whitney Phillips concluded that violent online communities largely exist because mainstream media and culture enable them. Pointing to the increasingly sensationalist news media and the vitriolic battlefield of electoral politics, Phillips asserts that acts of vitriolic trolling borrow the same cultural material used in the mainstream, explaining, “the difference is that trolling is condemned, while ostensibly ‘normal’ behaviors are accepted as given, if not actively celebrated.” In other words, removing the affordances of anonymity on the Internet will not stave off the intensification of mass violence in our society. We need to address the cultural foundations of white supremacy itself.

As Trump belches out a consistent stream of racist hatred and the alt-right continues to find footing in electoral politics and the imaginations of the citizenry, communities of hatred on the Internet will continue to expand and inspire future instances of IRL violence. We need to look beyond technological solutions, censorship, and surveillance and begin addressing how we might face off against white supremacy and the rise of the far right.

 

Abigail Curlew is a doctoral researcher and Trudeau Scholar at Carleton University. She works with digital ethnography to study how anti-transgender far-right vigilantes doxx and harass politically involved trans women. Her bylines can be found in Vice Canada, The Conversation, and Briarpatch Magazine.

 

https://medium.com/@abigail.curlew

Twitter: @Curlew_A

 



While putting together the most recent project for External Pages, I have had the pleasure of working with artist and designer Anna Tokareva in developing Baba Yaga Myco Glitch™, an online exhibition about corporate mystification techniques that boost the digital presence of biotech companies. Working on BYMG™ catalysed an exploration of the shifting critiques of interface design in the User Experience community. These discourses shape powerful standards on not just illusions of consumer choice, but corporate identity itself. However, I propose that as designers, artists and users, we are able to recognise the importance of visually identifying such deceptive websites in order to interfere with corporate control over online content circulation. Scrutinising multiple website examples to inform the aesthetic themes and initial conceptual stages of the exhibition, we focused specifically on finding the common user interfaces and content language that enhance internet marketing.

Anna’s research on the political fictions that drive the push for a global mobilisation of big data, in Нооскоп: The Nooscope as Geopolitical Myth of Planetary Scale Computation, led to a detailed study of current biotech incentives as motivating forces of technological singularity. She argues that in order to achieve “planetary computation”, political myth-building and semantics are used to centre scientific thought on the merging of humans and technology. Exploring Russian legends in fairytales and folklore that traverse seemingly binary oppositions of the human and non-human, Anna interprets the Baba Yaga (a fictitious Slavic female shapeshifter, villain or witch) as a representation of the ambitious motivations behind biotech’s endeavour to achieve superhumanity. We used Baba Yaga as a main character to further investigate this cultural construction by experimenting with storytelling through website production.

The commercial biotech websites that we looked at for inspiration were either incredibly blasé – with descriptions of the company’s purpose that were extremely vague and unoriginal (e.g., GENEWIZ) – or unnervingly overwhelming, with dense articles, research and testimonials (e.g., Synbio Technologies). Struck by the aesthetic and experiential banality of these websites, we wondered why they all seemed to mimic each other. Generic corporate interface features such as full-width nav bars, header slideshows, fade animations, and contact information were distributed in a determined chronology of vertically partitioned main sections. Starting from the top and moving down, we were presented with a navigation menu, slideshow, company services, awards and partners, and a “learn more” or “order now” button, and eventually landed on an extensive footer.

This UI conformity easily permits a visual establishment of professionalism and validity; a quick seal of approval for legitimacy. It is customary throughout the UX and HCI paradigm, a phenomenon that Olia Lialina describes as “mainstream practices based on the postulate that the best interface is intuitive, transparent, or actually no interface” in Once Again, The Doorknob. Referring back to Don Norman’s Why Interfaces Don’t Work, which argues that computers should serve only as devices for simplifying human lives, Lialina explains how this ethos diminishes user control, a sense of individualism, and society-centred computing in general. She offers GeoCities as a counterpoint to Norman’s design attitude and an example of sites where users are expected to create their own interface. Identifying the problem with designing computers to be machines that only make life easier via such “transparent” interfaces, she argues:

“‘The question is not, “What is the answer?” The question is, “What is the question?”’ Licklider (2003) quoted the French philosopher Henri Poincaré when he wrote his programmatic Man-Computer Symbiosis, meaning that computers as colleagues should be a part of formulating questions.”

Norman coined the term “User Centred Design” and laid the foundations of User Experience as Apple’s first User Experience Architect in 1993, and his advocacy of transparent design has unfortunately manifested into a universal reality. It has advanced into a standard so impenetrable that a business’s legitimacy and success are probably at stake if it does not follow these rules. The idea that we’ve become dependent on reviewing the website rather than the company itself – leading to user choices being heavily navigated by websites rather than company ethos – is nothing new. Additionally, the invisibility of transparent interface design has proceeded to fool users into mistaking algorithmically steered choices for free will. Jenny Davis’s work on affordances highlights that just because functions or information may be technically accessible, they are not necessarily “socially available”, and that the critique of affordances extends to the critique of society. In Beyond the Self, Jack Self describes website loading animations, or throbbers (moving graphics that illustrate the site’s current background actions), as synchronised “illusions of smoothness” that support neoliberal incentives of real-time efficiency.

“The throbber is thus integral to maintaining the illusion of inescapability, dissimulating the possibility of exiting the network—one that has become both spatially and temporally coextensive with the world. This is the truth of the real-time we now inhabit: a surreal simulation so perfectly smooth it is indistinguishable from, and indeed preferable to, reality.”

These homogeneous, plain-sailing interfaces reinforce a mindset of inevitability and, at the same time, can create slick operations that cheat the user. “Dark patterns”, for instance, are design tricks that steer users into completing tasks, such as signing up or purchasing, to which they may not have consented. My lengthy experience with recruitment websites illustrates the gap between what sites portray and true company intentions. Constantly reading about the struggles of obtaining a position in the tech industry, I wondered how these agencies make commission when finding employment seems so rare. I persisted and filled out countless job applications and forms, and received nagging emails and calls from recruiters asking for a profile update or elaboration, until I finally realised that I had been swindled by the consultancies for my monetised data (which I handed off via applications). Having found out that these companies profit from applicant data rather than job-offer commissions, I slowly withdrew from any further communication, knowing it would only lead to another dead end. As Anna and I roamed through examples of biotech companies online, it was easy to spot familiar UI shared between recruitment and lab websites: welcoming slideshows and all the obvious keywords like “future” and “innovation” stamped across images of professionals doing their work. It was impossible not to question the sincerity of what the websites displayed.

Along with the financial motives behind tech businesses, there are also fundamental internal and external design factors that diminish the trustworthiness of websites. Search engine optimisation is vital in controlling how websites are marketed and ranked. In order to fit into the confines of web indexing, site traffic now depends not just on handling Google Analytics but on crafting keywords that are either exposed in the page’s content or hidden within metadata and backlinks. Because more backlinks correlate with stronger SEO, corporate websites implement dense footers with links to all their pages, web directories, social media, newsletters and contact information. The more noise a website makes via its calls to external platforms, the more noise it makes on the internet in general.
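
To make these mechanics concrete, here is a minimal sketch in Python (standard library only) of how such signals look from the crawler’s side: keywords declared only in metadata, and a footer dense with outbound links. The HTML snippet is invented for illustration and merely stands in for a real corporate page.

from html.parser import HTMLParser

class SeoSignalParser(HTMLParser):
    """Collects two of the SEO signals discussed above from raw HTML."""
    def __init__(self):
        super().__init__()
        self.meta_keywords = []   # keywords declared in metadata, invisible to readers
        self.outbound_links = []  # backlink-style calls to external platforms

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name") == "keywords":
            self.meta_keywords = [k.strip() for k in attrs.get("content", "").split(",")]
        elif tag == "a" and attrs.get("href", "").startswith("http"):
            self.outbound_links.append(attrs["href"])

# An invented page: buzzword keywords in the head, a dense footer below.
page = """
<head><meta name="keywords" content="future, innovation, synthetic biology"></head>
<footer>
  <a href="https://twitter.com/example">Twitter</a>
  <a href="https://www.linkedin.com/company/example">LinkedIn</a>
  <a href="https://directory.example.org/biotech">Directory listing</a>
</footer>
"""

parser = SeoSignalParser()
parser.feed(page)
print(parser.meta_keywords)        # what the page tells crawlers it is about
print(len(parser.outbound_links))  # the "noise" the footer makes on the internet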

Online consumer behavior is another factor shaping marketing strategies. Besides brainstorming what users might search, SEO managers are inclined to find related terms by scrolling through Google’s results page and seeing what else users have already searched for. Here, we can see how Google’s algorithms produce a tailored feedback loop of strategic content distribution that simultaneously feeds an uninterrupted dependency on its search engine.

It is clear that keyword research helps companies shape their content delivery and governance, and I worry about the blurring line between the information’s delivery strategy and its actual meaning. Alex Rosenblat observes how Uber deploys multiple definitions of its drivers in court hearings in order to shift blame onto them as “consumers of its software”, enabling tech companies to switch between the words “users” and “workers” until the two become fully entangled. In the SEO world, avoiding keyword repetition also keeps a company from competing with its own content, and companies like Uber easily benefit from this game plan, freely interchanging their wording when necessary. With the encouraged use of a varied range of buzzwords – multiple words to portray one thing – it’s evident that Google’s SEO system plays a role in stimulating corporations to adopt ambiguous language on their sites.

However, search engine restrictions also further the SEO manipulation of content. A multitude of studies (such as those by Enquiro, EyeTools and Did-It, or Google’s Search Quality blog and User Experience findings) have examined our eye-tracking patterns when searching for information, many of which back up the rules of the “Golden Triangle” – a triangular space in which the highest density of attention remains at the top and trickles down the left of the search engine results page (SERP). While the shape changes in relation to the SERP’s interface evolution (as explained in a Moz Blog post by Rebecca Maynes), the studies reveal how Google’s search engine interface offers the illusion of choice while exploiting the fact that users will typically pick one of the first three results.

In a Digital Visual Cultural podcast, Padmini Ray Murray describes Mitchell Whitelaw’s project, The Generous Interface, in which new forms of searching are explored through interface design to show the actual scope and intricacy of digital heritage collections. In order to realise generous interfaces, Whitelaw considers functions like changing results every time the page is loaded or randomly juxtaposing content (sketched in code after the quotation below). Murray underscores the importance of Whitelaw’s suggestion to completely rethink how we display collections as a way to untie us from the Golden Triangle’s logic. She claims that our reliance on such online infrastructures is a design flaw.

“The state of the web today – the corporate web – the fact that it’s completely taken over by four major players, is a design problem. We are working in a culture where everything we understand as a way into information is hierarchical. And that hierarchy is being decided by capital.”
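
To get a feel for what that rethinking might look like in practice, here is a toy sketch in Python of the randomising gesture Whitelaw proposes: rather than returning a ranked list that privileges the first few results, each page load juxtaposes a fresh, unranked sample of the collection. The collection items are invented, and this is only a gesture at the idea, not Whitelaw’s actual implementation.

import random

# An invented miniature heritage collection.
collection = [
    {"title": "Portrait of a weaver", "theme": "labour"},
    {"title": "Festival mask", "theme": "ritual"},
    {"title": "Hand-drawn star chart", "theme": "science"},
    {"title": "Embroidered letter", "theme": "correspondence"},
    {"title": "Street photograph, 1962", "theme": "everyday life"},
]

def generous_view(items, k=3):
    # Return a fresh, unranked juxtaposition on every call (page load),
    # so no item is permanently privileged by a hierarchy.
    return random.sample(items, k=min(k, len(items)))

for item in generous_view(collection):
    print(item["theme"], "-", item["title"])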

Interfaces of choice are contested and monopolised, guiding and informing user experience. After we have clicked on our illusion of choice, we are given yet another illusion: through the mirage of soft and polished animations, friendly welcome-page slideshows and statements of social motivation, we read about company ethos (perhaps we’re given the generic slideshow and fade animation to distract us when the information is misleading).

 


Murray goes on to describe a project developed by Google’s Cultural Institute called Womenwill India, which approaches institutions to digitise cultural artefacts that speak to women in India. This paved the way for scandals in which institutions that could not afford to digitise their own collections, or lacked the expertise to do so, ended up simply handing them over to Google. Murray probes the suspiciousness of the program through the motivations that lie beneath the concept of digitising collections and the institute’s loaded power: “it’s used for machines to get smarter, not altruism […] there is no interest in curating with any sense of sophistication, nuance or empathy”. Demonstrating the program’s dubious incentives, she points to the website’s cultivation of exoticism through the “– India” affixed to the product’s title. She describes the website as “absolutely inexplicable”, as it flippantly throws together unrelated images labelled ‘Intellectuals’, ‘Gods and Goddesses’ and ‘Artworks’ with ‘Women Who Have Encountered Sexual Violence During The Partition’.

When capital has power over the online circulation of public relations, the distinction between website design and content begins to fade, which leads design to take on multiple roles. Since design acts as a way of presenting information, Murray believes it therefore has the potential to correct it.

“This is a metadata problem as well. Who is creating this? Who is telling us that this is what things are? The only way that we can push back against the Google machine is to start thinking about interventions in terms of metadata.”

The bottom-up approach of considering interventions through metadata could then also be applied to the algorithmic activities of web crawlers. The metadata (a word I believe Murray also uses to express the act of naming and describing information) of a website specifies “what things are”. While the algorithmic activity of web crawlers further enhances content delivery, search engine infrastructure is ruled by the unification of two very specific forces – crawler and website. As algorithms remain inherently non-neutral, developed by agents with very specific motives, the suggestion to use metadata as a vehicle for intervention (within both crawlers and websites) can turn bottom-up processing into a strong political tactic.
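
One small, concrete site of that crawler–website unification (my example, not Murray’s) is robots.txt, one of the few places where a website’s terms for crawlers are written down in public, as machine-readable metadata about access itself. The sketch below, again in Python with only the standard library, parses an invented set of rules to show how legible such declarations can be.

from urllib.robotparser import RobotFileParser

# Invented robots.txt rules for a hypothetical site.
rules = """
User-agent: *
Disallow: /internal/

User-agent: BadBot
Disallow: /
""".splitlines()

robots = RobotFileParser()
robots.parse(rules)

# Which agents may read which pages is itself publicly visible metadata:
print(robots.can_fetch("*", "https://example.com/about"))          # True
print(robots.can_fetch("*", "https://example.com/internal/data"))  # False
print(robots.can_fetch("BadBot", "https://example.com/about"))     # False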

Web crawlers’ functions are unintelligible and concealed from the user’s eye. Yet they’re connected to metadata, whose information seeps through to public visibility via content descriptions on the results page, drawn-out footers containing extensive amounts of links, ambiguous buzzword language, or any of the conforming UI features mentioned above. This allows users (as visual perceivers) to begin to identify the suspicious motives of websites through their interfaces. These aesthetic cues give us little snippets of what the “Google machine” actually wants from us. And, while this may just be the tip of the iceberg, it is a prompt not to underestimate, ignore or become numb to the general corporate visual language of dullness and disguise. The idea of making interfaces invisible has formed into an aesthetic of deception, and Norman’s transparent design manifesto has collapsed in on itself. When metadata and user interfaces work as ways of publicising the commercial logic of computation by exposing hidden algorithms, we can start to collectively see, understand and hopefully rethink these digital forms of (what used to be invisible) labour.

 

Ana Meisel is a web developer and curator of External Pages, starting her MSc in Human Computer Interaction and Design later this year. anameisel.com, @ananamei

