Mark Zuckerberg testified to Congress this week. The testimony was supposed to address Facebook’s move into the currency market. Instead, the hearing mostly addressed Facebook’s policy of not banning or fact-checking politicians on the platform. Zuckerberg roots the policy in values of free expression and democratic ideals. Here is a quick primer on why that rationale is ridiculous.

For background, Facebook does partner with third party fact-checkers, but exempts politicians’ organic content and paid advertisements from review. This policy is not new. Here is an overview of the policy’s parameters.

To summarize the company’s rationale, Facebook believes that constituents should have unadulterated knowledge about political candidates. When politicians lie, the people should know about it, and they will know about it because of a collective fact-checking effort. This is premised on the assumption that journalists, opposing political operatives, and the vast network of Facebook users will scrutinize all forms of political speech, thus debunking dishonest claims and exposing dishonest politicians.

In short, Facebook claims that crowdsourced fact-checking will provide an information safety net which allows political speech to remain unregulated, thus fostering an optimally informed electorate.

On a simple technical level, the premise of crowdsourced fact-checking does not hold on Facebook, because content is microtargeted. Facebook’s entire financial structure is premised on delivering different content—both organic and advertised—to different users. Facebook gives users the content that will keep them “stuck” on the site as long as possible, and distributes advertisements to granular user segments who will be most influenced by specific messages. For these reasons, each Facebook feed is distinct and no two Facebook users encounter the exact same content.

Crowdsourced fact-checking only works when “the crowd” all encounter the same facts. On Facebook, this is not the case, and that is by design. Would-be fact-checkers may never encounter a piece of dishonest content, and if they do, those inclined to believe the content (because it supports their existing worldview) are less likely to encounter the fact-checker’s debunking.
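
To make the point concrete, here is a deliberately toy simulation in Python. It is my own illustration, not a model of Facebook’s actual delivery system, and the segment counts and reach figures are invented: each user sits in one targeting segment, a dishonest ad is delivered to a couple of segments, and the debunking circulates in different ones, so the “crowd” that sees both is effectively empty.

```python
# Toy illustration only (not Facebook's delivery system): with per-segment
# targeting, the audience that sees a dishonest ad and the audience that
# sees its debunking barely overlap.
import random

random.seed(0)

N_USERS, N_SEGMENTS = 100_000, 50
segment = [random.randrange(N_SEGMENTS) for _ in range(N_USERS)]

ad_segments = {3, 17}          # hypothetical segments the dishonest ad targets
debunk_segments = {8, 29, 41}  # hypothetical segments where the fact-check circulates

saw_ad = {u for u in range(N_USERS) if segment[u] in ad_segments}
saw_debunk = {u for u in range(N_USERS) if segment[u] in debunk_segments}

print(f"saw the ad:     {len(saw_ad)}")
print(f"saw the debunk: {len(saw_debunk)}")
print(f"saw both:       {len(saw_ad & saw_debunk)}")  # 0: there is no shared 'crowd'
```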

Facebook’s ideological justification for unregulated political speech is not just thin; it’s technically untenable. I’m going to assume that Zuckerberg understands this. Facebook’s profit motive thus shines through from behind a moral veil, however earnestly Zuckerberg presents the company’s case.

 

Jenny Davis is on Twitter @Jenny_L_Davis

Headline image via: source

 

As technology expands its footprint across nearly every domain of contemporary life, some spheres raise particularly acute issues that illuminate larger trends at hand. The criminal justice system is one such area, with automated systems being adopted widely and rapidly—and with activists and advocates beginning to push back with alternate politics that seek to ameliorate existing inequalities rather than instantiate and exacerbate them. The criminal justice system (and its well-known subsidiary, the prison-industrial complex) is a space often cited for its dehumanizing tendencies and outcomes; technologizing this realm may feed into these patterns, despite proponents pitching this as an “alternative to incarceration” that will promote more humane treatment through rehabilitation and employment opportunities.

As such, calls to modernize and reform criminal justice often manifest as a rapid move toward automated processes throughout many penal systems. Numerous jurisdictions are adopting digital tools at all levels, from policing to parole, in order to promote efficiency and (it is claimed) fairness. However, critics argue that mechanized systems—driven by Big Data, artificial intelligence, and human-coded algorithms—are ushering in an era of expansive policing, digital profiling, and punitive methods that can intensify structural inequalities. In this view, the embedded biases in algorithms can serve to deepen inequities, via automated systems built on platforms that are opaque and unregulated; likewise, emerging policing and surveillance technologies are often deployed disproportionately toward vulnerable segments of the population. In an era of digital saturation and rapidly shifting societal norms, these contrasting views of efficiency and inequality are playing out in quintessential ways throughout the realm of criminal justice.

Tracking this arc, critical discourses on technology and social control have brought to light how decision-making algorithms can be a mechanism to “reinforce oppressive social relationships and enact new modes of racial profiling,” as Safiya Umoja Noble argues in her 2018 book, Algorithms of Oppression. In this view, the use of machine learning and artificial intelligence as tools of justice can yield self-reinforcing patterns of racial and socioeconomic inequality. As Cathy O’Neil discerns in Weapons of Math Destruction (2016), emerging models such as “predictive policing” can exacerbate disparate impacts by perpetuating data-driven policies whereby, “because of the strong correlation between poverty and reported crime, the poor continue to get caught up in these digital dragnets.” And in Automating Inequality (2018), Virginia Eubanks further explains how marginalized communities “face the heaviest burdens of high-tech scrutiny,” even as “the widespread use of these systems impacts the quality of democracy for us all.” In talks deriving from his forthcoming book Halfway Home, Reuben Miller advances the concept of “mass supervision” as an extension of systems of mass incarceration; whereas the latter has drawn a great deal of critical analysis in recent years, the former is potentially more dangerous as an outgrowth of patterns of mass surveillance and the erosion of privacy in the digital age—leading to what Miller terms a “supervised society.”

Techniques of digital monitoring impact the entire population, but the leading edge of regulatory and punitive technologies is applied most directly to communities that are already over-policed. Some scholars and critics have been describing these trends under the banner of “E-carceration,” calling out methods that utilize tracking and monitoring devices to extend practices of social control that are doubly (though not exclusively) impacting vulnerable communities. As Michelle Alexander recently wrote in the New York Times, these modes of digital penality are built on a foundation of “corporate secrets” and a thinly veiled impetus toward “perpetual criminalization,” constituting what she terms “the newest Jim Crow.” Nonetheless, while marginalized sectors are most directly impacted, as one of Eubanks’s informants warned us all: “You’re next.”

Advocates of automated and algorithmic justice methods often tout the capacity of such systems to reduce or eliminate human biases, achieve greater efficiency and consistency of outcomes, and ameliorate existing inequities through the use of better data and faster results. This trend is evident across a myriad of jurisdictions in the U.S. in particular (but not solely), as courts nationwide “are making greater use of computer algorithms to help determine whether defendants should be released into the community while they await trial.” In 2017, for instance, New Jersey introduced a statewide “risk assessment” system using algorithms and large data sets to determine bail, in some cases serving to potentially supplant judicial discretion altogether.

Many have been critical of these processes, noting that automated decisions are only as good as the data points utilized—data often tainted both by preexisting subjective biases and by prior accumulations of structural bias recorded in people’s records. The algorithms deployed for these purposes are primarily conceived as “proprietary techniques” that are largely opaque and obscured from public scrutiny; as a recent law review article asserts, we may be in the process of opening up “Pandora’s algorithmic black box.” In evaluating these emerging techniques, researchers at Harvard University have thus expressed a pair of related concerns: (1) the critical “need for explainable algorithmic decisions to satisfy both legal and ethical imperatives,” and (2) the fact that “AI systems may not be able to provide human-interpretable reasons for their decisions given their complexity and ability to account for thousands of factors.” This raises foundational questions of justice, ethics, and accountability, but in practice this discussion is in danger of being mooted by widespread implementation.
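
To illustrate what a points-style pretrial “risk assessment” looks like, and how the critique above bites, here is a hypothetical sketch in Python. It is not the New Jersey Public Safety Assessment or any deployed tool; the features, weights, and thresholds are invented. The point is that even a fully transparent score inherits whatever structural bias is baked into inputs such as prior arrest counts.

```python
# Hypothetical points-style pretrial risk score (NOT a real deployed tool).
# It shows how heavily such scores lean on prior-record features, which
# themselves encode historical policing patterns.
from dataclasses import dataclass

@dataclass
class Defendant:
    age: int
    prior_arrests: int             # partly a product of where and how police patrol
    prior_failures_to_appear: int
    pending_charge: bool

def risk_score(d: Defendant) -> int:
    score = 0
    score += 2 if d.age < 23 else 0
    score += min(d.prior_arrests, 4)               # capped points for priors
    score += 2 * min(d.prior_failures_to_appear, 2)
    score += 1 if d.pending_charge else 0
    return score                                    # higher = "riskier"

def recommend(d: Defendant) -> str:
    s = risk_score(d)
    return "release" if s <= 2 else "release with conditions" if s <= 5 else "detain"

# Two defendants identical in every respect except arrest history
# (e.g., one from an over-policed neighborhood) receive different outcomes.
a = Defendant(age=25, prior_arrests=0, prior_failures_to_appear=0, pending_charge=False)
b = Defendant(age=25, prior_arrests=4, prior_failures_to_appear=0, pending_charge=False)
print(recommend(a), "/", recommend(b))  # release / release with conditions
```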

The net effect of adopting digital mechanisms for policing and crime control without greater scrutiny may be a divided society in which the inner workings (and associated power relations) of these tools are almost completely opaque and thus shielded from critique, while the outer manifestations are concretely inscribed and societally pervasive. The CBC radio program SPARK recently examined a range of these new policing technologies, from body cams and virtual ride-along applications to those such as ShotSpotter that draw upon data gleaned from a vast network of recording devices embedded in public spaces. Critically assessing the much-touted benefits of such nouveau tools as a “Thin Blue Lie,” Matt Stroud challenges the prevailing view that these technologies are inherently helpful innovations, arguing instead that they have actually made policing more reckless, discriminatory, and unaccountable in the process.

This has prompted a recent spate of critical interventions and resistance efforts, including a network galvanized under the banner of “Challenging E-Carceration.” In this lexicon, it is argued that “E-Carceration may be the successor to mass incarceration as we exchange prison cells for being confined in our own homes and communities.” The cumulative impacts of this potential “net-widening” of enforcement mechanisms include new technologies that gather information about our daily lives, such as license plate readers and facial recognition software. As Miller suggested in his invocation of “mass supervision” as the logical extension of such patterns and practices, these effects may be most immediately felt by those already overburdened by systems of crime control, but the impacts are harbingers of wider forms of social control.

Some advocates thus have begun calling for a form of “digital sanctuary.” An important intervention along these lines has been offered by the Sunlight Foundation, which advocates for “responsible municipal data management.” Their detailed proposal begins with the larger justice implications inherent in emerging technologies, calling upon cities to establish sound digital policies: “Municipal departments need to consider their formal data collection, retention, storage and sharing practices, [and] their informal data practices.” In particular, it is urged that cities should not collect sensitive information “unless it is absolutely necessary to do so,” and likewise should “publicly document all policies, practices and requests which result in the sharing of information.” In light of the escalating use of data-gathering systems, this framework calls for protections that would benefit vulnerable populations and all residents.

These notions parallel the emergence of a wider societal discussion on technology, providing a basis for assessing which current techniques present the greatest threats to, and/or opportunities for, the cultivation of justice. Despite these efforts, we are left with critical questions of whether the debate will catch up to utilization trends, and how the trajectory of tools will continue to evolve if left unchecked. As Adam Greenfield plaintively inquired in his 2017 book Radical Technologies: “Can we make other politics with these technologies? Can we use them in ways that don’t simply reproduce all-too-familiar arrangements of power?” This is the overarching task at hand, even as opportunities for public oversight seemingly remain elusive.

 

Randall Amster, J.D., Ph.D., is a teaching professor and co-director of environmental studies at Georgetown University in Washington, DC, and is the author of books including Peace Ecology. Recent work focuses on the ways in which technology can make people long for a time when children played outside and everyone was a great conversationalist. He cannot be reached on Twitter @randallamster.

 

Headline pic via: Source


In the wake of the terrifying violence that shook El Paso and Dayton, there have been a lot of questions around the role of the Internet in facilitating communities of hate and the radicalization of angry white men. Digital affordances like anonymity and pseudonymity are especially suspect for their alleged ability to provide cover for far-right extremist communities. These connections seem to be crystal clear. For one, 8chan, an anonymous image board, has been the host of several far-right manifestos posted on its feeds preceding mass shootings. And Kiwi Farms, a forum board populated with trolls and stalkers who spend their days monitoring and harassing women, has been keeping a record of mass killings and became infamous after its administrator “Null”, Joshua Conner Moon, refused to take down the Christchurch manifesto.

The KF community claim to merely be archiving mass shootings; however, it’s clear that the racist and misogynistic politics on the forum board are closely aligned with those of the shooters. The Christchurch extremist had alleged membership in the KF community and had posted white supremacist content on the forum. New Zealand authorities requested access to the site’s data to assist in their investigation and were promptly refused. Afterwards, Null encouraged Kiwi users to use anonymizing tools and purged the website’s data. It is becoming increasingly clear that these far-right communities are radicalizing white men to commit atrocities, even if such radicalization is only a tacit consequence of constant streams of racist and sexist vitriol.

With the existence of sites like 8chan and Kiwi Farms, it becomes exceedingly easy to blame digital technology as a root cause of mass violence. Following the recent shootings, the Trump administration attempted to pin the root of the US violence crisis on, among other things, video games. And though this might seem like a convincing explanation of mass violence on the surface, as angry white men are known to spend time playing violent video games like Fortnite, there is not yet much conclusive or convincing empirical evidence that causally links video games to acts of violence.

One pattern, however, has been unmistakable: mass and targeted violence coalesce around white supremacists and nationalists. In fact, as FBI director Christopher Wray told the US Congress, most instances of domestic terrorism come from white supremacists. From this perspective, it’s easy to see how technological explanations are a bait and switch that try to hide white supremacy behind a smoke screen. This is a convenient strategy for Trump, as his constant streams of racism have legitimized a renewed rise in white supremacy and far-right politics across the US.

For those of us who do research on social media and trolling, one thing is certain: easy technological solutions risk arbitrary punitive responses that don’t address the root of the issue. Blaming the growing violence crisis on technology will only lead to an increase in censorship and surveillance and intensify the growing chill of fear in the age of social media.

To better understand this issue, the fraught story of the anonymous social media platform Yik Yak is quite instructive. As a mainstream platform, Yik Yak was used widely across North American university and college campuses. Yak users were able to communicate anonymously on a series of GPS-determined local news feeds, where they could upvote and downvote content and engage in nameless conversations, with random images assigned to delineate users from one another.

Tragically, Yik Yak was plagued by the presence of vitriolic and toxic users who engaged in forms of bullying, harassment, and racist or sexist violence. This included more extreme threats, such as bomb threats, threats of gun violence, and threats of racist lynching. The seemingly endless stream of vitriol prompted an enormous amount of negative public attention that had alarming consequences for Yik Yak. After being removed from the top charts of the Google Play Store for allegedly fostering a hostile climate on the platform, Yik Yak administrators acted to remove the anonymity feature and impose user handles on its users in order to instil a sense of user accountability. Though this move was effective in dampening the degree of toxic and violent behavior on Yik Yak’s feeds, it also led to users abandoning the platform and the company eventually collapsing.

Though anonymity is often associated with facilitating violence, the ability to be anonymous on the Internet does not directly give rise to violent digital communities or acts of IRL (“in real life”) violence. In my ethnographic research on Yik Yak in Kingston, Ontario, I found that despite the intense presence of vitriolic content, there was also a diverse range of users who engaged in forms of entertainment, leisure, and caretaking. And though it may be clear that anonymity affords users the ability to engage in undisciplined or vitriolic behavior, the Yik Yak platform, much like other digital and corporeal publics, allowed users to engage in creative and empowering forms of communication that otherwise wouldn’t exist.

For instance, there was a contingent of users who were able to communicate their mental health issues and secret everyday ruminations. Users in crisis would post calls for help that were often met by other users interested in providing some form of caretaking, deep and helpful conversations, and the sharing of crucial resources. Other users expressed that they were able to be themselves without the worrisome consequences of discrimination that come with being LGBTQ or a person of color.

What was clear to me was that there was an abundance of forms of human interaction that would never flourish on social media platforms where you are forced to identify under your legal name. Anonymity has a crucial place in a culture that has become accustomed to constant surveillance from corporations, government institutions, and family and peers. Merely removing the ability to interact anonymously on a social media platform doesn’t actually address the underlying explanation for violent behavior. But it does discard a form of communication that has increasingly important social utility.

In her multiyear ethnography on trolling practices in the US, researcher Whitney Phillips concluded that violent online communities largely exist because mainstream media and culture enable them. Pointing to the increasingly sensationalist news media and the vitriolic battlefield of electoral politics, Phillips asserts that acts of vitriolic trolling borrow the same cultural material used in the mainstream, explaining, “the difference is that trolling is condemned, while ostensibly ‘normal’ behaviors are accepted as given, if not actively celebrated.” In other words, removing the affordances of anonymity on the Internet will not stave off the intensification of mass violence in our society. We need to address the cultural foundations of white supremacy itself.

As Trump belches out a consistent stream of racist hatred and the alt-right continue to find footing in electoral politics and the imaginations of the citizenry, communities of hatred on the Internet will continue to expand and inspire future instances of IRL violence. We need to look beyond technological solutions, censorship, and surveillance and begin addressing how we might face-off against white supremacy and the rise of the far-right.

 

Abigail Curlew is a doctoral researcher and Trudeau Scholar at Carleton University. She works with digital ethnography to study how anti-transgender far-right vigilantes doxx and harass politically involved trans women. Her bylines can be found in Vice Canada, the Conversation and Briarpatch Magazine.

 

https://medium.com/@abigail.curlew

Twitter: @Curlew_A

 

Headline image via: Source


While putting together the most recent project for External Pages, I have had the pleasure of working with artist and designer Anna Tokareva in developing Baba Yaga Myco Glitch™, an online exhibition about corporate mystification techniques that boost the digital presence of biotech companies. Working on BYMG™ catalysed an exploration of the shifting critiques of interface design in the User Experience community. These discourses shape powerful standards on not just illusions of consumer choice, but corporate identity itself. However, I propose that as designers, artists and users, we are able to recognise the importance of visually identifying such deceptive websites in order to interfere with corporate control over online content circulation. Scrutinising multiple website examples to inform the aesthetic themes and initial conceptual stages of the exhibition, we focused specifically on finding the common user interfaces and content language that enhance internet marketing.

Anna’s research on the political fictions that drive the push for a global mobilisation of big data, in Нооскоп: The Nooscope as Geopolitical Myth of Planetary Scale Computation, led to a detailed study of current biotech incentives as motivating forces of technological singularity. She argues that in order to achieve “planetary computation”, political myth-building and semantics are used so that scientific thought centres itself on the merging of humans and technology. Exploring Russian legends in fairytales and folklore that traverse seemingly binary oppositions of the human and non-human, Anna interprets the Baba Yaga (a Slavic fictitious female shapeshifter, villain or witch) as a representation of the ambitious motivations of biotech’s endeavour to achieve superhumanity. We used Baba Yaga as a main character to further investigate this cultural construction by experimenting with storytelling through website production.

The commercial biotech websites that we looked at for inspiration were either incredibly blasé, with descriptions of the company’s purpose that were extremely vague and unoriginal (e.g., GENEWIZ), or unnervingly overwhelming, dense with articles, research and testimonials (e.g., Synbio Technologies). Struck by the aesthetic and experiential banality of these websites, we wondered why they all seemed to mimic each other. Generic corporate interface features such as full-width nav bars, header slideshows, fade animations, and contact information were distributed in a predictable chronology of vertically-partitioned main sections. Starting from the top and moving down, we were presented with a navigation menu, slideshow, company services, awards and partners, and a “learn more” or “order now” button, before eventually landing on an extensive footer.

This UI conformity easily permits a visual establishment of professionalism and validity; a quick seal of approval for legitimacy. It is customary throughout the UX and HCI paradigm, a phenomenon that Olia Lialina describes as “mainstream practices based on the postulate that the best interface is intuitive, transparent, or actually no interface” in Once Again, The Doorknob. Referring back to Don Norman’s Why Interfaces Don’t Work, which argues that computers should serve only as devices for simplifying human lives, Lialina explains why this ethos contributes to diminishing user control, a sense of individualism and society-centred computing in general. She offers GeoCities as a counterpoint to Norman’s design attitude and an example of sites where users are expected to create their own interface. Defining the problematic nature of designing computers to be machines that only make life easier via such “transparent” interfaces, she argues:

“‘The question is not, “What is the answer?” The question is, “What is the question?”’ Licklider (2003) quoted French philosopher Henri Poincaré when he wrote his programmatic Man-Computer Symbiosis, meaning that computers as colleagues should be a part of formulating questions.”

Coining the term “User Centred Design” and laying the foundations of User Experience as the first User Experience Architect at Apple in 1993, Norman’s advocacy of transparent design has unfortunately manifested as a universal reality. It has advanced into a standard so impenetrable that a business’s legitimacy and success are probably at stake if it does not follow these rules. The idea that we’ve become dependent on reviewing the website rather than the company itself – leading to user choices being heavily navigated by websites rather than company ethos – is nothing new. Additionally, the invisibility of transparent interface design has proceeded to fool users into an algorithmic “free” will. Jenny Davis’s work on affordances highlights that just because functions or information may be technically accessible, they are not necessarily “socially available”, and the critique of affordance extends to the critique of society. In Beyond the Self, Jack Self describes website loading animations, or throbbers (moving graphics that illustrate the site’s current background actions), as synchronised “illusions of smoothness” that support neoliberal incentives of real-time efficiency.

“The throbber is thus integral to maintaining the illusion of inescapability, dissimulating the possibility of exiting the network—one that has become both spatially and temporally coextensive with the world. This is the truth of the real-time we now inhabit: a surreal simulation so perfectly smooth it is indistinguishable from, and indeed preferable to, reality.”

These homogeneous, plain-sailing interfaces reinforce a mindset of inevitability and, at the same time, can create slick operations that cheat the user. “Dark patterns”, for example, are interface tricks that steer users into completing tasks such as signing up or purchasing without meaningful consent. My lengthy experience with recruitment websites illustrates the kind of impact that sites have on the portrayal of true company intentions. Constantly reading about the struggles of obtaining a position in the tech industry, I wondered how these agencies make commission when finding employment seems so rare. I persisted and filled out countless job applications and forms, and received nagging emails and calls from recruiters asking for a profile update or elaboration, until I finally realised that I had been swindled by the consultancies for my monetised data (which I handed off via applications). Having found out that these companies profit from applicant data and not from job-offer commissions, I slowly withdrew from any further communication, as I knew this would only lead to another dead end. As Anna and I roamed through examples of biotech companies online, it was easy to spot familiar UI shared between recruitment and lab websites: welcoming slideshows and all the obvious keywords like “future” and “innovation” stamped across images of professionals doing their work. It was impossible not to question the sincerity of what the websites displayed.

Along with the financial motives behind tech businesses, there are also fundamental internal and external design factors that diminish the trustworthiness of websites. Search engine optimisation is vital in controlling how websites are marketed and ranked. In order to fit into the confines of web indexing, site traffic now depends not just on handling Google Analytics but on creating keywords that are either exposed in the page’s content or mostly hidden within metadata and backlinks. Because a site’s backlink count correlates with its search ranking, corporate websites implement dense footers with links to all their pages, web directories, social media, newsletters and contact information. The more noise a website makes via its calls to external platforms, the more noise it makes on the internet in general.
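
As a rough sketch of how these footer and metadata signals can be surfaced, here is a short Python script using only the standard library. It is a hypothetical heuristic of my own, not an SEO industry tool: it pulls out the hidden keyword/description layer and counts the outbound links that dense corporate footers accumulate.

```python
# Hypothetical heuristic (not a real SEO audit tool): extract meta keywords
# and count outbound links -- the kind of "noise" dense footers generate.
from html.parser import HTMLParser
from urllib.parse import urlparse

class SEONoiseScanner(HTMLParser):
    def __init__(self, own_domain: str):
        super().__init__()
        self.own_domain = own_domain
        self.meta = {}
        self.external_links = 0

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name") in ("keywords", "description"):
            self.meta[attrs["name"]] = attrs.get("content", "")
        if tag == "a" and (attrs.get("href") or "").startswith("http"):
            if urlparse(attrs["href"]).netloc != self.own_domain:
                self.external_links += 1

# Invented sample markup standing in for a generic corporate page.
html = """
<meta name="keywords" content="future, innovation, synthetic biology, genomics">
<footer>
  <a href="https://twitter.com/example">Twitter</a>
  <a href="https://linkedin.com/company/example">LinkedIn</a>
  <a href="https://directory.example-listings.com/biotech">Directory</a>
</footer>
"""

scanner = SEONoiseScanner(own_domain="example.com")
scanner.feed(html)
print(scanner.meta)             # the hidden keyword layer
print(scanner.external_links)   # the footer's calls to external platforms
```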

Online consumer behavior is another factor shaping marketing strategies. Besides brainstorming what users might search, SEO managers are inclined to find related terms by scrolling through Google’s results page and seeing what else users have already searched for. Here, we can see how Google’s algorithms produce a tailored feedback loop of strategic content distribution that simultaneously feeds an uninterrupted, rotating dependency on their search engine.

It is clear that keyword research helps companies come up with their content delivery and governance, and I worry about the line blurring between the information’s delivery strategy and its actual meaning. Alex Rosenblat observes how Uber uses multiple definitions of its drivers in court hearings in order to shift blame onto them as “consumers of its software”, subsequently enabling tech companies to switch so often between the words “users” and “workers” that the two become fully entangled. In the SEO world, avoiding keyword repetition additionally helps a site stay away from competing with its own content, and companies like Uber easily benefit from this specific game plan as they can freely interchange their wording when necessary. With the increase in applying a varied range of buzzwords, encouraged by using multiple words to portray one thing, it’s evident that Google’s SEO system plays a role in stimulating corporations to implement ambiguous language on their sites.

However, search engine restrictions also further the SEO manipulation of content. There have been a multitude of studies (such as Enquiro, EyeTools and Did-It or Google’s Search Quality blog and User Experience findings) that look at our eye-tracking patterns when searching for information, many of which back up the rules of the “Golden Triangle” – a triangular space in which the highest density of attention remains on the top and trickles down on the left of the search engine results page (SERP). While the shape changes in relation to SERP’s interface evolution (as explained in a Moz Blog by Rebecca Maynes), the studies reveal how Google’s search engine interface offers the illusion of choice, while subsequently exploiting the fact that users will pick the first three results.

In a Digital Visual Cultural podcast, Padmini Ray Murray describes Mitchell Whitelaw’s project, The Generous Interface, where new forms of searching are explored through interface design to show the actual scope and intricacy of digital heritage collections. In order to realise generous interfaces, Whitelaw considers functions like changing the results every time the page is loaded or randomly juxtaposing content. Murray underscores the importance of Whitelaw’s suggestion to completely rethink how we display collections as a way to untie us from the Golden Triangle’s logic. She claims that our reliance on such online infrastructures is a design flaw.

“The state of the web today – the corporate web – the fact that it’s completely taken over by four major players, is a design problem. We are working in a culture where everything we understand as a way into information is hierarchical. And that hierarchy is being decided by capital.”

Interfaces of choice are contested and monopolised, guiding and informing user experience. After we have clicked on our illusion of choice, we are given yet another illusion – through the mirage of soft and polished animations, friendly welcome page slideshows and statements of social motivation – we read about company ethos (perhaps we’re given the generic slideshow and fade animation to distract us if the information is misleading).

 


Murray goes on to describe a project developed by Google’s Cultural Institute called Womenwill India, which approaches institutions to digitise cultural artefacts that speak to women in India. This paved the way for scandals in which institutions that could not afford, or lacked the expertise, to digitise their own collections ended up simply handing them over to Google. She examines the suspiciousness of the program through the motivations that lie beneath the concept of digitising collections and the institute’s loaded power: “it’s used for machines to get smarter, not altruism […] there is no interest in curating with any sense of sophistication, nuance or empathy”. Demonstrating the program’s dubious incentives, she points to the website’s cultivation of exoticism in the use of “– India” affixed to the product’s title. She describes the website as “absolutely inexplicable”, as it flippantly throws together unrelated images labeled ‘Intellectuals’, ‘Gods and Goddesses’ and ‘Artworks’ with ‘Women Who Have Encountered Sexual Violence During The Partition’.

When capital has power over the online circulation of public relations, the distinction between website design and content begins to fade, which leads design to take on multiple roles. Since design acts as a way of presenting information, Murray believes it therefore has the potential to correct it.

“This is a metadata problem as well. Who is creating this? Who is telling us that this is what things are? The only way that we can push back against the Google machine is to start thinking about interventions in terms of metadata.”

The bottom-up approach of treating interventions as a matter of metadata could then also be applied to the algorithmic activities of web crawlers. The metadata (a word I believe Murray also uses to express the act of naming and describing information) of a website specifies “what things are”. While the algorithmic activity of web crawlers further enhances content delivery, search engine infrastructure is ruled by the unification of two very specific forces – crawler and website. As algorithms remain inherently non-neutral, developed by agents with very specific motives, the suggestion to use metadata as a vehicle for intervention (within both crawlers and websites) can make bottom-up processing a strong political tactic.

Web crawlers’ functions are unintelligible and concealed from the user’s eye. Yet they are connected to metadata, whose information seeps through to public visibility via content descriptions on the results page, drawn-out footers containing extensive numbers of links, ambiguous buzzword language, or any of the conforming UI features mentioned above. This allows users (as visual perceivers) to begin to identify the suspicious motives of websites through their interfaces. These aesthetic cues give us little snippets of what the “Google machine” actually wants from us. And while they may present only the tip of the iceberg, they are a prompt not to underestimate, ignore or become numb to the general corporate visual language of dullness and disguise. The idea of making interfaces invisible has formed into an aesthetic of deception, and Norman’s transparent design manifesto has collapsed in on itself. When metadata and user interfaces work as ways of publicising the commercial logic of computation by exposing hidden algorithms, we can start to collectively see, understand and hopefully rethink these digital forms of (what used to be invisible) labour.

 

Ana Meisel is a web developer and curator of External Pages, starting her MSc in Human Computer Interaction and Design later this year. anameisel.com, @ananamei

Headline Photo: Source

Photo 2: Source

 Today I worked on three separate collaborations: feedback on a thesis draft, a paper revision with colleagues at other universities, and a grant proposal with mostly senior scholars. Each collaboration represents my integration with distinct project teams, on which my status varies. And along with my relative status, so too varies my relationship with the Track Changes editing tool.

When giving feedback on my student’s thesis, I wrote over existing text with reckless abandon. I also left comments, moved paragraphs, and deleted at will. When working on my paper collaboration, I also edited freely, though was more likely to include comments justifying major alterations. When working on the research grant, for a project team on which I am the most junior member, I knew not to change any of the text directly. Instead, I made suggestions using the Comment function, sometimes with alternative text, always phrased and punctuated as a question.

These experiences are, of course, not just tied to me nor to the specific tasks I undertook today. They are part of a larger and complex rule structure that has emerged with collaborative editing tools. Without anyone saying anything, the rules generally go like this: those higher on the status hierarchy maintain control over the document. Those lower on the status hierarchy do not. Even though Track Changes positions everything as a suggestion (i.e., collaborators can accept or reject any change), there is something gutsy about striking someone’s words and replacing them with your own, and something far meeker about a text-bubble in the margins.

Track Changes (and other collaboration tools) do not enforce status structures. They do, however, reflect and enact them. Who you are affects which functions are socially available, even as the entire suite of functions remains technically available. Users infuse these tools with existing social arrangements and keep these arrangements intact. The rules are not explicit. Nobody told me not to mess with the grant proposal text, just as nobody sanctioned my commanding approach to the student’s thesis, or the “clean” (Track Changes all accepted) manuscript copy I eventually sent to my co-authors. Rather, these rules are implicit. They are tacit. And yet, they are palpable. Missteps and transgressions could result in passive-aggressive friction in the mildest case, and severed working relationships in the more extreme.

Just like all technologies, Track Changes is of the culture from which it stems. Status hierarchies in the social system reemerge in the technical artifact and the social relations facilitated through it. Stories of Track Changes norm breaching would illustrate this point with particular clarity. I’m struck, however, by not having on hand a single personal example of such a breach. Everyone I work with seems, somehow, to just know what to do.

Jenny Davis is on Twitter @Jenny_L_Davis

 

Stories of data breaches and privacy violations dot the news landscape on a near-daily basis. This week, security vendor Carbon Black published their Australian Threat Report based on 250 interviews with tech executives across multiple business sectors. 89% of those interviewed reported some form of data breach in their companies. That’s almost everyone. These breaches represent both a business problem and a social problem. Privacy violations threaten institutional and organizational trust and also expose individuals to surveillance and potential harm.

But “breaches” are not the only way that data exposure and privacy violations take shape. Often, widespread surveillance and exposure are integral to technological design. In such cases, exposure isn’t leveled at powerful organizations, but enacted by them.  Legacy services like Facebook and Google trade in data. They provide information and social connection, and users provide copious information about themselves. These services are not common goods, but businesses that operate through a data extraction economy.

I’ve been thinking a lot about the cost-benefit dynamics of data economies and, in particular, how to grapple with the fact that for most individuals, including myself, the data exchange feels relatively inconsequential or even mildly beneficial. Yet at a societal level, the breadth and depth of normative surveillance are devastating. Resolving this tension isn’t just an intellectual exercise, but a way of answering the persistent and nagging question: “why should I care if Facebook knows where I ate brunch?” This is often wrapped in a broader “nothing to hide” narrative, in which data exposure is a problem only for deviant actors.

“Nothing to hide” narratives derive from a fundamental obfuscation of how data work at scale. “Opt-out” and even “opt-in” settings rely on a denatured calculus. Individuals solve for data privacy as a personal trouble when, in contrast, it is very much a public issue.

Data privacy is a public issue because data are sui generis—greater than the sum of their parts. Data trades don’t just affect individuals, but collectively generate an encompassing surveillance system. Most individual data are meaningless on their own. Data become valuable—and powerful—through aggregation. A singular datum is thus primarily effectual when it combines into plural data. In other words, my data come to matter in the context of our data. With our data, patterns are rendered perceptible and those patterns become tools for political advantage and economic gain.
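
A toy example may make the “plural data” point concrete. The check-ins below are fabricated and the inference is deliberately crude, but it shows how data points that are individually banal become revealing once aggregated.

```python
# Toy illustration (fabricated data): any single check-in is banal, but a
# handful of them aggregated by time of day already suggests where someone
# lives and works.
from collections import Counter

checkins = [  # (hour of day, rough location) -- hypothetical single-user data
    (8, "Acton cafe"), (9, "Civic office"), (13, "Civic office"),
    (18, "Braddon gym"), (22, "Watson apartment"), (23, "Watson apartment"),
    (9, "Civic office"), (22, "Watson apartment"), (8, "Acton cafe"),
]

night = Counter(loc for hour, loc in checkins if hour >= 21 or hour <= 5)
day = Counter(loc for hour, loc in checkins if 9 <= hour <= 17)

print("probable home:", night.most_common(1)[0][0])       # Watson apartment
print("probable workplace:", day.most_common(1)[0][0])    # Civic office
```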

Individuals can trade their data for services which, at the individual level, make for a relatively low cost (and even personally advantageous) exchange. Accessing information through highly efficient search engines and connecting with friends, colleagues, communities, and fellow hobbyists are plausibly worth as much or more than the personal data that a user “pays” for this access and connection. At the individual level, data often buy more than they cost.

However, the costs of collective data are much greater, and include power transfers to state and corporate actors. Siva Vaidhyanathan is excellent on this point. In his book Antisocial Media: How Facebook Disconnects Us and Undermines Democracy, Vaidhyanathan demonstrates how the platform’s norm of peer sharing turns into peer surveillance and, through mass data collection, into corporate and state surveillance. Facebook collects our data and gifts it back to us in the form of pleasing News Feeds. Yet in turn, Facebook sells our data for corporate and political gain. This model only works en masse. Both News Feeds and political operatives would be less effective without the data aggregates, collected through seemingly banal clicks, shares, and key strokes.

Individual privacy decisions are thus not just personal choices and risks, nor even network-based practices. Our data are all wrapped up in each other. Ingeniously, big tech companies have devised a system in which data exchange benefits the individual, while damaging the whole. Each click is a contribution to that system. Nobody’s data much matters, but everybody’s data matters a lot.

Jenny Davis is on Twitter @Jenny_L_Davis

Headline Pic Via: Source

Stories about AI gone bigoted are easy to find: Microsoft’s Neo-Nazi “Tay” bot, her still-racist sister “Zo”, Google’s autocomplete function that assumed men occupy high-status jobs, and Facebook’s job-related targeted advertising that assumed the same.

A key factor in AI bias is that the technology is trained on faulty databases. Databases are made up of existing content. Existing content comes from people interacting in society. Society has historic, entrenched, and persistent patterns of privilege and disadvantage across demographic markers. Databases reflect these structural societal patterns and thus replicate discriminatory assumptions. For example, Rashida Richardson, Jason Schultz, and Kate Crawford put out a paper this week showing how policing jurisdictions with a history of racist and unprofessional practices generate “dirty data” and thus produce dubious databases from which policing algorithms are derived. The point is that database construction is a social and political task, not just a technical one. Without concerted effort and attention, databases will be biased by default.

Ari Schlesinger, Kenton P. O’Hara, and Alex S. Taylor present an interesting suggestion/question about database construction. They are interested in chatbots in particular, but their point easily expands to other forms of AI. They note that the standard practice is to manage AI databases through the construction of a “blacklist”. A blacklist is a list of words that will be filtered from the AI training. Blacklists generally include racist, sexist, homophobic, and other forms of offensive language. The authors point out, however, that this  approach is less than ideal for two reasons. First, it can eliminate innocuous terms and phrases in the name of caution. This doesn’t just limit the AI, but can also erase forms of identity and experience. The authors give the example of “Paki”. This is a derogatory racist term. However, filtering this string of letters also filters out the word Pakistan, which is an entire country/nationality that gets deleted from the lexicon. Second, language is dynamic and meanings change. Blacklists are relatively static and thus quickly dated and ineffective.
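
The blacklist problem the authors describe is easy to demonstrate. The sketch below is my own minimal illustration (a one-entry blacklist, not any production filter): a naive substring match aimed at the slur also erases “Pakistan”, while a word-boundary match keeps it, though neither approach can keep up with how meanings shift over time.

```python
# Minimal sketch of the blacklist problem: a naive substring filter aimed at
# the slur also erases "Pakistan"; a word-boundary match does not.
import re

BLACKLIST = ["paki"]  # illustrative one-entry blacklist

def naive_filter(text: str) -> str:
    for term in BLACKLIST:
        if term in text.lower():
            return ""              # whole utterance dropped from training data
    return text

def word_boundary_filter(text: str) -> str:
    for term in BLACKLIST:
        if re.search(rf"\b{re.escape(term)}\b", text, flags=re.IGNORECASE):
            return ""
    return text

sentence = "My family emigrated from Pakistan in 1998."
print(repr(naive_filter(sentence)))          # '' -- the country is erased
print(repr(word_boundary_filter(sentence)))  # sentence kept
```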

The authors suggest instead that databases are constructed proactively through modeling. Rather than tell databases what not to say (or read/hear/calculate etc.), we ought to manufacture models of desirable content (e.g., people talking about race in a race conscious way). I think there’s an interesting point here, and an interesting question about preventative versus proactive approaches to AI and bias. Likely, the approach has to come from both directions–omit that which is explicitly offensive and teach in ways that are socially conscious. How to achieve this balance remains an open question both technically and socially. My guess is that constructing social models of equity will be the most complex part of the puzzle.

 

Jenny Davis is on Twitter @Jenny_L_Davis

Headline Pic Via: Source

Yes, Please to this article by Amy Orben and Andrew K. Przybylski, which I plan to pass around like I’m Oprah with cars. Titled The Association Between Adolescent Well-Being and Digital Technology Use, the paper does two of my favorite things: demonstrates the importance of theoretical precision & conceptual clarity in research design, and undermines moral panics about digital technology and mental health.

The effects of digital technologies on the mental lives of young people have been a topic of interdisciplinary concern and general popular worry. Such conversations are kept afloat by contradictory research findings in which digital technologies are alternately shown to enhance mental well-being, damage mental well-being, or have little effect at all. Much (though not all) of this work comes from secondary analyses of large datasets, building on a broader scientific trend of big data analytics as an ostensibly superlative research tool. Orben and Przybylski base their own study on analysis of three exemplary datasets including over 350,000 cases. However, rather than simply address digital technology and mental well-being, the authors rigorously interrogate how existing datasets define key variables of interest, operationalize those variables, and model them with controls (i.e., other relevant factors).

A key finding from their work is that existing datasets conceptualize, operationalize, and model digital technology and mental well-being in a lot of different ways. The variation is so great, the authors find, that researchers can construct trillions of different combinations, just to answer a single research question (i.e., is digital technology making teens sad?). Moreover, they find that analytic flexibility has a significant effect on research outcomes. Running over 20,000 analyses on their three datasets, Orben and Przybylski find that design and analysis can result in thousands of different outputs, some of which show negative effects of digital technology, some positive, and others, none at all.  Their findings drive home the point that big data is not intrinsically valuable as a data source but instead, must be treated with theoretical care. More data does not equal better science by the nature of its size. Better science is theoretically informed science, for which big data can act as a tool. While others have made similar arguments, Orben and Przybylski articulate the case in the home language of big data practitioners.
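
To see why “trillions of different combinations” is not hyperbole, consider a back-of-the-envelope count. The numbers below are hypothetical stand-ins, not the counts reported by Orben and Przybylski, but they show how a handful of reasonable-looking analytic choices multiply into an astronomically large specification space.

```python
# Back-of-the-envelope sketch of analytic flexibility (hypothetical numbers):
# the number of defensible specifications is the product of the choices made
# at each step, and it explodes quickly.
from math import comb, prod

tech_measures = 10        # ways to operationalize "digital technology use"
wellbeing_items = 30      # candidate well-being questionnaire items
wellbeing_combos = sum(comb(wellbeing_items, k)            # any non-empty subset
                       for k in range(1, wellbeing_items + 1))
control_sets = 2 ** 6     # include/exclude each of 6 control variables
model_forms = 3           # e.g., linear, log-transformed, categorical

total = prod([tech_measures, wellbeing_combos, control_sets, model_forms])
print(f"{total:,} possible specifications for one research question")  # ~2 trillion
```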

The second element of their study is that analyzing across the three datasets gives a result in which digital technology explains less than 1% of the variation in teens’ mental well-being (0.4%, to be exact). The findings do show that the relationship is negative, but only slightly. This means that well over 99% of the variation in teens’ mental well-being is associated with things other than digital technology. Regularly eating potatoes showed a negative correlation similar to that of technology use in relation to teens’ mental health. The minuscule space of digital technology in the mental lives of young people sits in stark contrast to its bloated expanse within the public imagination and policy initiatives. For example, while I was searching for popular press pieces about mental health among university students to use in a class I am preparing, it was a seeming requirement that authors agree on the trouble of technology, even when they disagreed on everything else.

The article’s unassuming title could easily have blended with the myriad existing studies about technology, youth, and mental health. Its titular simplicity belies the explosive dual contributions contained within. Luckily, Twitter

 

Jenny Davis is on Twitter @Jenny_L_Davis

A series of studies was just published showing that White Liberals present themselves as less competent when interacting with Black people than when interacting with other White people. This pattern does not emerge among White Conservatives. The authors of the studies, Cynthia H. Dupree (Yale University) and Susan T. Fiske (Princeton University), refer to this as the “competence downshift” and explain that reliance on racial stereotypes results in patronizing patterns of speech when Liberal Whites engage with a racial outgroup. The original article appears in the Journal of Personality and Social Psychology. I make the case that these human-based findings have something to tell us about AI and its continued struggle with bigotry.

Since the article’s publication, the Conservative response has been swift and expected. Holding the report as evidence of White Liberal hypocrisy, a Washington Times story describes how the findings “fly in the face of a standard talking point of the political left,” and a Patriot Post story concludes that “Without even realizing it, ‘woke’ leftists are the ones most guilty of perpetrating the very racial stereotypes they so vehemently condemn.”

Conservative commentators aren’t wrong to call out the racism of White Liberals, nor would White Liberals be justified in coming to their own defense. The data do indeed tell a damning story. However, the data also reveal ingroup racial preference among White Conservatives and an actively unwelcoming interaction style when White Conservatives engage with people of color. In other words, White Conservatives aren’t wrong; they are just racist, too.

Overall, the studies show the insidiousness of racism across ideological bounds. Once racial status processes activate, they inform everyday encounters in ways that are often difficult to detect, and yet have lasting impacts. While White Liberals talk down to Black people in an effort to connect, White Conservatives look down on Black people and would prefer to remain within their own racial group. Neither of these outcomes are good for Black people, and that story is clear.

Racism is rampant across ideological lines. That is the story that the data tell. This story has implications beyond the laboratory settings in which the data were collected. I think one of those implications has to do with AI. Namely, the findings tell us something insightful about how and why AI keeps being accidentally racist (and sexist/homophobic/classist/generally bigoted), despite continued efforts and promises to rectify such issues.

The tales of problematic AI are regular and fast-coming. Facial recognition software that misidentifies people of color; job advertisements that show women lower paying gigs; welfare algorithms that punish poverty; and search platforms that rely on raced and gendered stereotypes. I could go on.

The AI bigotry problem is easy to identify and diagnose, but the findings of the above study show that it is especially tricky—though not impossible—to resolve. AI comes out prejudiced because society is prejudiced. AI is made by people who live in society, trained on data that come from society, and deployed through culturally entrenched social systems. AI hardware and software are thus bound to pick up and enact the status structures that govern human social relations. The problem isn’t so much with faulty technology, but with historically ingrained “isms” that have become so normative they disappear from conscious thought until Surprise! Gmail autocomplete assumes investors are men. These #AIFails are like an embarrassing superpower which renders invisible inequalities hyper-visible and blatantly clear.

The oft proposed solution—besides technical fixes—has been a call for a more critical lens in the tech sector. This means collaboration between technologists and critical social thinkers such that technological design can better attend to and account for the complexities of social life, including issues of power, status, and intersecting oppressions.

The solution of a critical lens, however, is somewhat undermined by Dupree and Fiske’s findings. One of the main reasons the authors give for the competence downshift is White Liberals’ disproportionate desire to engage with racial minorities and their concern that racial minorities will find White Liberals racist. That is, Liberal Whites wanted to reach across race lines, and they were aware of how their Whiteness can trouble interracial interaction. This is a solid critical starting point, one I imagine most progressive thinkers would hope for among people who build AI. And yet, it was this exact critical position that created racist interaction patterns.

When White Liberals interacted with Black people in Dupree and Fiske’s studies, they activated stereotypes along with an understanding of their own positionality. This combination resulted in “talking down” to people in a racial outgroup. In short, White Liberals weren’t racist despite their best intentions and critical toolbox, but because of them. If racism is so insidious in humans, how can we expect machines, made by humans, to be better?

One pathway is simple: diversify the tech field and check all products rigorously and empirically against a critical standard. The standpoint of technologists matters. An overly White, male, hetero field promises a racist, sexist, heteronormative result. A race- and gender-diverse field is better. Social scientists can help, too. Social scientists are trained in detecting otherwise imperceptible patterns. We take a lot of methods classes just for this purpose, and pair those methods with years of theory training. A critical lens is not enough. It never will be. It can, however, be a tool that intersects with diverse standpoints and rigorous social science. AI will still surprise us in unflattering and perhaps devastating ways, but critical awareness and a firm directive to “stop being racist” can’t be the simple solution.

 

Jenny Davis is on Twitter @Jenny_L_Davis

Headline pic via: Source

In late September the social news networking site Reddit announced a revamp of its ‘quarantine’ function. A policy that has been in place for almost three years now, quarantines were designed to stop casual Redditors from accessing offensive and gruesome subreddits (topic-based communities within the site) without banning these channels outright. Initially the function affected only a small number of minor subreddits and received little attention. The revamp, however, has applied the policy to much larger subreddits, creating significant controversy. As an attempt to shape the affordances of the site, the revamped quarantine function highlights many of the political and architectural issues that Reddit is facing in today’s political climate.

As a platform, Reddit sits in a frequently uncomfortable position. Reddit was initially established as a haven for free speech, a place in which anything and everything could and should be discussed. When, for example, discussion about #gamergate, the controversy in 2014 over the ethics of the gaming industry that resulted in a number of high-profile women game designers and journalists being publicly harassed, was banned on the often more insidious 4chan, it was Reddit where discussion continued to flourish. However, in recent years, Reddit has come under increasing pressure due to this free for all policy. Reddit has been blamed for fueling misogyny, facilitating online abuse, and even leading to the misidentification of suspects in the aftermath of the Boston Marathon Bombings.

Reddit announced the revamp of its quarantining policy via a long post on the subreddit r/announcements. In doing so, Reddit administrator u/landoflobsters highlighted the bind that the site faces. They said:

“On a platform as open and diverse as Reddit, there will sometimes be communities that, while not prohibited by the Content Policy, average redditors may nevertheless find highly offensive or upsetting. In other cases, communities may be dedicated to promoting hoaxes (yes we used that word) that warrant additional scrutiny, as there are some things that are either verifiable or falsifiable and not seriously up for debate (eg, the Holocaust did happen and the number of people who died is well documented). In these circumstances, Reddit administrators may apply a quarantine.”

u/landoflobsters argued that the quarantine function was not designed to shut down discourse, but rather to ensure that those who do not wish to see such content are not exposed to it, and potentially to encourage change:

“The purpose of quarantining a community is to prevent its content from being accidentally viewed by those who do not knowingly wish to do so, or viewed without appropriate context. We’ve also learned that quarantining a community may have a positive effect on the behavior of its subscribers by publicly signaling that there is a problem. This both forces subscribers to reconsider their behavior and incentivizes moderators to make changes.”

A quarantine has a number of impacts on a subreddit. A quarantined community is unable to generate revenue and does not appear on Reddit’s front page or on other non-subscription-based feeds. These subreddits cannot be found via the search or recommendation functions. To find a quarantined subreddit, a user must search for its name specifically, normally through Google. When a user attempts to subscribe to a quarantined subreddit, a warning is displayed requiring the user to explicitly opt in to view the content. Information about these subreddits, for example the number of subscribers, is also scrubbed from their front page. In essence, quarantines are designed to halt the growth of subreddits without banning them outright.
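
Schematically, the gating described above can be captured in a few lines. This is a hypothetical Python sketch of the behaviour, not Reddit’s actual code or API: quarantined communities are excluded from discovery surfaces, served only after an explicit opt-in, and stripped of public metadata such as subscriber counts.

```python
# Hypothetical sketch of quarantine gating (not Reddit's actual implementation).
from dataclasses import dataclass, field

@dataclass
class Subreddit:
    name: str
    quarantined: bool = False
    subscribers: int = 0

@dataclass
class User:
    name: str
    quarantine_opt_ins: set = field(default_factory=set)

def appears_in_feeds_and_search(sub: Subreddit) -> bool:
    # Quarantined subs are dropped from the front page, search, and recommendations.
    return not sub.quarantined

def serve(sub: Subreddit, user: User) -> dict:
    if sub.quarantined and sub.name not in user.quarantine_opt_ins:
        return {"interstitial": "This community is quarantined. Opt in to continue?"}
    # Public metadata such as subscriber counts is withheld for quarantined subs.
    meta = {} if sub.quarantined else {"subscribers": sub.subscribers}
    return {"content": f"r/{sub.name} feed", **meta}

trp = Subreddit("TheRedPill", quarantined=True, subscribers=300_000)
alice = User("alice")
print(appears_in_feeds_and_search(trp))   # False
print(serve(trp, alice))                  # interstitial warning
alice.quarantine_opt_ins.add("TheRedPill")
print(serve(trp, alice))                  # served, without subscriber count
```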

Through the mechanism of affordances we can understand this attempt by Reddit more thoroughly. Affordance is a term that has become a critical analytical tool within science and technology studies, and increasingly in relation to the study of social media architectures. Davis and Chouinard argue that “affordance refers to the range of functions and constraints that an object provides for, and places upon, structurally situated subjects.”

In relation to studies of social media we can understand affordances to be the functions of the site that shape what users can do within the space. Davis and Chouinard provide a theoretical scaffold for affordance analyses by modeling the mechanisms of affordance. These mechanisms capture variation in the strength with which artefacts shape human behavior and social dynamics. Davis and Chouinard propose that artefacts don’t just afford or not afford but instead, request, demand, encourage, discourage, refuse, and allow.

Requests and demands refer to bids that the artefact places upon the subject. Encouragement, discouragement, and refusal refer to how the artefact responds to a subject’s desired actions. Allow pertains to both bids placed upon the subject and bids placed upon the artefact.

In relation to Reddit’s quarantine function, we can see a strong attempt at discouragement. Davis and Chouinard argue that “artifacts discourage when one line of action, though available should subjects wish to pursue it, is only accessible through concerted effort.” Reddit has attempted discouragement in a number of ways — through requiring users to specifically search for quarantined subreddits, through the warning messages provided when users access quarantined subreddits, and through the inability of these subreddits to earn income, in turn discouraging use and development of these subreddits.

Through the expansion of the quarantine function we can see the deep politics that exist behind artefact affordances. The expansion has seen significant pushback from affected subreddits. One of the largest, for example, r/TheRedPill, which was quarantined specifically for misogynistic rhetoric, has launched a concerted campaign against its quarantine. Moderators of r/TheRedPill have argued strongly against the politics of the quarantine, stating that Reddit has tacitly endorsed male abuse and denied its victims. In doing so they have worked to subvert the new affordance of the site, labeling it an inherently political act.

In response to the quarantine, we see the politics of affordances play out. While affordances provide a framework for what users can and cannot do with a particular artefact, this framework can be, and frequently is, subverted by users in numerous ways. This mutability is in particular a feature of Reddit. Given the site’s open-source philosophy, Reddit users have changed and shaped its interface and functions regularly across its history. Whether users will be able to successfully subvert the quarantine revamp is yet to be seen. Yet what is interesting about this episode is how it highlights the politics that exist behind an artefact’s affordances, politics that are playing out once again on centre stage.

Reddit finds itself once again stuck in a bind. The site seems to be trying to please everybody, hoping to hold on to its status as a place of free speech, while at the same time responding to critics about the very speech that occurs on the site. In doing so it is likely that Reddit will end up pleasing no one.

 

Simon Copland is a PhD candidate in Sociology at the Australian National University (ANU), studying online men’s communities, frequently called the ‘manosphere’.

Simon is a freelance writer and has been published in The Guardian, BBC Online and Overland Journal, amongst others. He co-produces the podcast ‘Queers’, a fortnightly discussion on queer politics and culture, and is the co-editor of the opinion and analysis site ‘Green Agenda’.

Simon is a David Bowie and rugby union fanatic.

Headline pic via: Source