The best way I can describe the experience of summer 2019-2020 in Australia is with a single word: exhausting. We have been on fire for months. There are immediate threats in progress and new ones at the ready. Our air quality dips in and out of the hazardous range, spending more time in it than out. This has been challenging for everyone. For many, mere exhaustion may feel like a luxury.

In the trenches of the ongoing fires are the Australian emergency service workers, especially the “fireys,” who have been tireless in their efforts to save homes, people, and wildlife. While the primary and most visible part of their work is the relentless job of managing fires, there is also a secondary, though critical, task of public communication: keeping people informed and providing material for anxious-refreshers looking for information about “fires near me.” In the last few days, as fires have approached the Canberra suburbs where I live, an interesting variant of public safety communication has emerged: Instagrammable photography.

A tweet from the local emergency service account (@ACT_ESA) announced Wednesday night that a major road would be closed to anyone who isn’t a resident of the area. The reason for the closure was to prevent a growing obstacle to public safety—disaster tourism. Apparently, people have been “visiting” the fires, generally taking dramatic photographs to share on social media. These disaster tourists put themselves in harm’s way, clog the roads, and generally create more work for emergency responders. The road closure was a hard and fast way to keep people out. It was not, however, the ESA’s only action. In addition to closing roads and posting notices, the team also created and shared imagery of the fires-in-progress with direct allusion to the perceived goals of would-be disaster tourists (i.e., social sharing).

 

The response by the ACT ESA is a subtle combination of empathy, understanding, and practicality. Rather than a punitive or derogatory reproach, the response assumes, I suspect correctly, that visitors aren’t there to get in the way or cultivate clout, but to bear witness, bolster awareness, seek validation, and more generally, cope. Visually, the fires traverse beauty and horror in a way that is difficult to describe. You need to see it for yourself. And that’s why people take and share pictures. They are in the midst of something that is inarticulable, and yet feel compelled to articulate it through the means at their disposal. Capturing the destruction, from the best angle, means speaking with clarity. It means concretizing an experience that would be surreal, were it not happening with such immediacy and acuity. Words do little justice to the gorgeous tragedy of a red sunset.

And so, the work of fire safety in Australia 2020 now includes mollifying would-be disaster tourists by taking more Instagrammable photos than visitors could take themselves. It’s a warning and a plea, delivered with a gift.

Headline Image Credit: Gary Hooker, ACTRFS (Australian Capital Territory Rural Fire Service), via @ACT_ESA

Want to help? Here are some options

Jenny Davis is on Twitter @Jenny_L_Davis

 

Drew Harwell (@DrewHarwell) wrote a balanced article in the Washington Post about the ways universities are using Wi-Fi, Bluetooth, and mobile phones to enact systematic monitoring of student populations. The article offers multiple perspectives that variously support and critique the technologies at play and their institutional implementation. I’m here to lay out in clear terms why these systems should be categorically resisted.

The article focuses on the SpotterEDU app, which advertises itself as an “automated attendance monitoring and early alerting platform.” The idea is that students download the app, and universities can then easily keep track of who is coming to class and identify students who may be in, or on the brink of, crisis (e.g., a student who only leaves her room to eat and therefore may be experiencing mental health issues). As university faculty, I would find these data useful. They are not worth the social costs.

One social consequence of SpotterEDU and similar tracking applications is that these technologies normalize surveillance and degrade autonomy. This is especially troublesome among a population of emerging adults. For many traditionally aged students (18-24), university is a time of developmental transition—like adulting with a safety net. There is a fine line between mechanisms of support and mechanisms of control. These tracking technologies veer toward the latter, portending a very near future in which extrinsic accountability displaces intrinsic motivation and data extraction looms as inevitable.

Speaking of data extraction, these tracking technologies run on data. Data is a valuable resource. Historically, valuable resources are exploited to the benefit of those in power and the detriment of those in positions of disadvantage. This pattern of reinforced and amplified inequality via data economies has already played out in public view (see: targeted political advertising, racist parole decisions, sexist hiring algorithms). One can imagine numerous ways in which student tracking will disproportionately affect disadvantaged groups. To name a few: students on financial aid may have their funding predicated on behavioral metrics such as class attendance or library time; “normal” behaviors will be defined by averages, which implicitly creates standards that reflect the demographic majority (e.g., white, upper-middle class) and flags demographic minorities as abnormal (and thus in need of deeper monitoring or intervention); students who work full-time may be penalized for attending class less regularly or studying from remote locations. The point is that data systems come from society and society is unequal. Overlaying data systems onto social systems wraps inequality in a veneer of objectivity and intensifies its effects.

Finally, tracking systems will not be constrained to students. They will almost certainly spread to faculty. Universities are under heavy pressure to demonstrate value for money. They are funded by governments, donors, and tuition-paying students and their families. It is not at all a stretch to say that faculty will be held to account for face time with students, time spent in offices, duration of classes, and engagement with the university. This kind of monitoring erodes the richness of the academic profession, with profound effects on the nature of work for tenure-line faculty and the security of work for contingent lecturers (who make up an increasing majority of the academic workforce).

To end on a hopeful note, SpotterEDU and other tracking applications are embedded in spaces disposed to collective action. Students have always been leaders of social change and drivers of resistance. Faculty have an abundance of cultural capital to expend on such endeavors. These technologies affect everyone on campus. Tenure-line faculty, contingent faculty, and students each have something to lose and thus a shared interest and common struggle[1]. We are all in the mess together and together, we can resist our way out.  

Jenny Davis is on Twitter @Jenny_L_Davis

Headline pic via: Source


[1] I thank James Chouinard (@jamesbc81) for highlighting this point

Mark Zuckerberg testified to Congress this week. The testimony was supposed to address Facebook’s move into the currency market. Instead, the hearing mostly focused on Facebook’s policy of not banning or fact-checking politicians on the platform. Zuckerberg roots the policy in values of free expression and democratic ideals. Here is a quick primer on why that rationale is ridiculous.

For background, Facebook does partner with third-party fact-checkers, but exempts politicians’ organic content and paid advertisements from review. This policy is not new. Here is an overview of the policy’s parameters.

To summarize the company’s rationale, Facebook believes that constituents should have unadulterated knowledge about political candidates. When politicians lie, the people should know about it, and they will know about it because of a collective fact-checking effort. This is premised on the assumption that journalists, opposing political operatives, and the vast network of Facebook users will scrutinize all forms of political speech thus debunking dishonest claims and exposing dishonest politicians.

In short, Facebook claims that crowdsourced fact-checking will provide an information safety net which allows political speech to remain unregulated, thus fostering an optimally informed electorate.

On a simple technical level, the premise of crowdsourced fact-checking on Facebook does not work. Crowdsourced fact-checking cannot work on Facebook because content is microtargeted. Facebook’s entire financial structure is premised on delivering different content—both organic and advertised—to different users. Facebook gives users the content that will keep them “stuck” on the site as long as possible, and distributes advertisements to granular user segments who will be most influenced by specific messages. For these reasons, each Facebook feed is distinct and no two Facebook users encounter the exact same content.

Crowdsourced fact-checking only works when “the crowd” all encounter the same facts. On Facebook, this is not the case, and that is by design. Would-be fact-checkers may never encounter a piece of dishonest content, and if they do, those inclined to believe the content (because it supports their existing worldview) are less likely to encounter the fact-checker’s debunking.
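To make the arithmetic concrete, here is a minimal, hypothetical sketch (not a model of Facebook’s actual delivery system, and all numbers are invented): if a dishonest claim is shown only to a narrow targeted segment, and the likely fact-checkers are a similarly narrow slice of users, the two groups barely overlap.

```python
# Toy simulation: under microtargeted delivery, would-be fact-checkers and the
# users targeted with a dishonest claim rarely see the same content.
# All numbers are invented for illustration.
import random

random.seed(1)

users = range(100_000)
targeted_segment = set(random.sample(users, 2_000))      # segment an advertiser microtargets
likely_fact_checkers = set(random.sample(users, 2_000))  # journalists, opponents, sceptics

overlap = targeted_segment & likely_fact_checkers
print(f"targeted users who are also likely fact-checkers: {len(overlap)}")
# The expected overlap of two random 2% slices of 100,000 users is about 40 people;
# even if those few debunk the claim, the debunking is not routed back to the rest.
```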

Facebook’s ideological justification for unregulated political speech is not just thin; it’s technically untenable. I’m going to assume that Zuckerberg understands this. Facebook’s profit motive thus shines through from behind a moral veil, however earnestly Zuckerberg presents the company’s case.

 

Jenny Davis is on Twitter @Jenny_L_Davis

Headline image via: source

 

As technology expands its footprint across nearly every domain of contemporary life, some spheres raise particularly acute issues that illuminate larger trends at hand. The criminal justice system is one such area, with automated systems being adopted widely and rapidly—and with activists and advocates beginning to push back with alternate politics that seek to ameliorate existing inequalities rather than instantiate and exacerbate them. The criminal justice system (and its well-known subsidiary, the prison-industrial complex) is a space often cited for its dehumanizing tendencies and outcomes; technologizing this realm may feed into these patterns, despite proponents pitching this as an “alternative to incarceration” that will promote more humane treatment through rehabilitation and employment opportunities.

As such, calls to modernize and reform criminal justice often manifest as a rapid move toward automated processes throughout many penal systems. Numerous jurisdictions are adopting digital tools at all levels, from policing to parole, in order to promote efficiency and (it is claimed) fairness. However, critics argue that mechanized systems—driven by Big Data, artificial intelligence, and human-coded algorithms—are ushering in an era of expansive policing, digital profiling, and punitive methods that can intensify structural inequalities. In this view, the embedded biases in algorithms can serve to deepen inequities, via automated systems built on platforms that are opaque and unregulated; likewise, emerging policing and surveillance technologies are often deployed disproportionately toward vulnerable segments of the population. In an era of digital saturation and rapidly shifting societal norms, these contrasting views of efficiency and inequality are playing out in quintessential ways throughout the realm of criminal justice.

Tracking this arc, critical discourses on technology and social control have brought to light how decision-making algorithms can be a mechanism to “reinforce oppressive social relationships and enact new modes of racial profiling,” as Safiya Umoja Noble argues in her 2018 book, Algorithms of Oppression. In this view, the use of machine learning and artificial intelligence as tools of justice can yield self-reinforcing patterns of racial and socioeconomic inequality. As Cathy O’Neil discerns in Weapons of Math Destruction (2016), emerging models such as “predictive policing” can exacerbate disparate impacts by perpetuating data-driven policies whereby, “because of the strong correlation between poverty and reported crime, the poor continue to get caught up in these digital dragnets.” And in Automating Inequality (2018), Virginia Eubanks further explains how marginalized communities “face the heaviest burdens of high-tech scrutiny,” even as “the widespread use of these systems impacts the quality of democracy for us all.” In talks deriving from his forthcoming book Halfway Home, Reuben Miller advances the concept of “mass supervision” as an extension of systems of mass incarceration; whereas the latter has drawn a great deal of critical analysis in recent years, the former is potentially more dangerous as an outgrowth of patterns of mass surveillance and the erosion of privacy in the digital age—leading to what Miller terms a “supervised society.”

Techniques of digital monitoring impact the entire population, but the leading edge of regulatory and punitive technologies is applied most directly to communities that are already over-policed. Some scholars and critics have been describing these trends under the banner of “E-carceration,” calling out methods that utilize tracking and monitoring devices to extend practices of social control that are doubly (though not exclusively) impacting vulnerable communities. As Michelle Alexander recently wrote in the New York Times, these modes of digital penality are built on a foundation of “corporate secrets” and a thinly veiled impetus toward “perpetual criminalization,” constituting what she terms “the newest Jim Crow.” Nonetheless, while marginalized sectors are most directly impacted, as one of Eubanks’s informants warned us all: “You’re next.”

Advocates of automated and algorithmic justice methods often tout the capacity of such systems to reduce or eliminate human biases, achieve greater efficiency and consistency of outcomes, and ameliorate existing inequities through the use of better data and faster results. This trend is evident across a myriad of jurisdictions in the U.S. in particular (but not solely), as courts nationwide “are making greater use of computer algorithms to help determine whether defendants should be released into the community while they await trial.” In 2017, for instance, New Jersey introduced a statewide “risk assessment” system using algorithms and large data sets to determine bail, in some cases serving to potentially supplant judicial discretion altogether.

Many have been critical of these processes, noting that automated decisions are only as good as the data points utilized—data often tainted both by preexisting subjective biases and by prior accumulations of structural bias already recorded in people’s records. The algorithms deployed for these purposes are primarily conceived as “proprietary techniques” that are largely opaque and obscured from public scrutiny; as a recent law review article asserts, we may be in the process of opening up “Pandora’s algorithmic black box.” In evaluating these emerging techniques, researchers at Harvard University have thus expressed a pair of related concerns: (1) the critical “need for explainable algorithmic decisions to satisfy both legal and ethical imperatives,” and (2) the fact that “AI systems may not be able to provide human-interpretable reasons for their decisions given their complexity and ability to account for thousands of factors.” This raises foundational questions of justice, ethics, and accountability, but in practice the discussion is in danger of being mooted by widespread implementation.

Adopting digital mechanisms for policing and crime control without greater scrutiny can yield a divided society in which the inner workings (and associated power relations) of these tools are almost completely opaque and thus shielded from critique, while the outer manifestations are concretely inscribed and societally pervasive. The CBC radio program SPARK recently examined a range of these new policing technologies, from Body Cams and virtual Ride-Along applications to those, such as ShotSpotter, that draw upon data gleaned from a vast network of recording devices embedded in public spaces. Critically assessing the much-touted benefits of such nouveau tools as a “Thin Blue Lie,” Matt Stroud challenges the prevailing view that these technologies are inherently helpful innovations, arguing instead that they have actually made policing more reckless, discriminatory, and unaccountable.

This has prompted a recent spate of critical interventions and resistance efforts, including a network galvanized under the banner of “Challenging E-Carceration.” In this lexicon, it is argued that “E-Carceration may be the successor to mass incarceration as we exchange prison cells for being confined in our own homes and communities.” The cumulative impacts of this potential “net-widening” of enforcement mechanisms include new technologies that gather information about our daily lives, such as license plate readers and facial recognition software. As Miller suggested in his invocation of “mass supervision” as the logical extension of such patterns and practices, these effects may be most immediately felt by those already overburdened by systems of crime control, but the impacts are harbingers of wider forms of social control.

Some advocates thus have begun calling for a form of “digital sanctuary.” An important intervention along these lines has been offered by the Sunlight Foundation, which advocates for “responsible municipal data management.” Their detailed proposal begins with the larger justice implications inherent in emerging technologies, calling upon cities to establish sound digital policies: “Municipal departments need to consider their formal data collection, retention, storage and sharing practices, [and] their informal data practices.” In particular, it is urged that cities should not collect sensitive information “unless it is absolutely necessary to do so,” and likewise should “publicly document all policies, practices and requests which result in the sharing of information.” In light of the escalating use of data-gathering systems, this framework calls for protections that would benefit vulnerable populations and all residents.

These notions parallel the emergence of a wider societal discussion on technology, providing a basis for assessing which current techniques present the greatest threats to, and/or opportunities for, the cultivation of justice. Despite these efforts, we are left with critical questions of whether the debate will catch up to utilization trends, and how the trajectory of tools will continue to evolve if left unchecked. As Adam Greenfield plaintively inquired in his 2017 book Radical Technologies: “Can we make other politics with these technologies? Can we use them in ways that don’t simply reproduce all-too-familiar arrangements of power?” This is the overarching task at hand, even as opportunities for public oversight seemingly remain elusive.

 

Randall Amster, J.D., Ph.D., is a teaching professor and co-director of environmental studies at Georgetown University in Washington, DC, and is the author of books including Peace Ecology. Recent work focuses on the ways in which technology can make people long for a time when children played outside and everyone was a great conversationalist. He cannot be reached on Twitter @randallamster.

 

Headline pic via: Source


In the wake of the terrifying violence that shook El Paso and Dayton, there have been a lot of questions around the role of the Internet in facilitating communities of hate and the radicalization of angry white men. Digital affordances like anonymity and pseudonymity are especially suspect for their alleged ability to provide cover for far-right extremist communities. These connections seem to be crystal clear. For one, 8chan, an anonymous image board, has been the host of several far-right manifestos posted on its feeds preceding mass shootings. And Kiwi Farms, a forum board populated with trolls and stalkers who spend their days monitoring and harassing women, has been keeping a record of mass killings and became infamous after its administrator “Null”, Joshua Conner Moon, refused to take down the Christchurch manifesto.

The KF community claims merely to be archiving mass shootings; however, it’s clear that the racist and misogynistic politics on the forum board are closely aligned with those of the shooters. The Christchurch extremist allegedly held membership in the KF community and had posted white supremacist content on the forum. New Zealand authorities requested access to the site’s data to assist in their investigation and were promptly refused. Afterwards, Null encouraged Kiwi users to use anonymizing tools and purged the website’s data. It is becoming increasingly clear that these far-right communities are radicalizing white men to commit atrocities, even if such radicalization is only a tacit consequence of constant streams of racist and sexist vitriol.

With the existence of sites like 8chan and Kiwi Farms, it becomes exceedingly easy to blame digital technology as a root cause of mass violence. Following the recent shootings, the Trump administration attempted to pin the root of the US violence crisis on, among other things, video games. And though this might seem like a convincing explanation of mass violence on the surface, as angry white men are known to spend time playing violent video games like Fortnite, there have yet to be many conclusive or convincing empirical accounts that causally link video games to acts of violence.

One pattern has been crystal clear: mass and targeted violence coalesce around white supremacists and nationalists. In fact, as FBI director Christopher Wray told the US Congress, most instances of domestic terrorism come from white supremacists. From this perspective, it’s easy to see how technological explanations are a bait and switch that tries to hide white supremacy behind a smoke screen. This is a convenient strategy for Trump, as his constant streams of racism have legitimized a renewed rise in white supremacy and far-right politics across the US.

For those of us who do research on social media and trolling, one thing is certain: easy technological solutions risk arbitrary punitive responses that don’t address the root of the issue. Blaming the growing violence crisis on technology will only lead to an increase in censorship and surveillance and intensify the growing chill of fear in the age of social media.

To better understand this issue, the fraught story of the anonymous social media platform Yik Yak is instructive. As a mainstream platform, Yik Yak was used widely across North American university and college campuses. Yak users communicated anonymously on a series of GPS-determined local news feeds, where they could upvote and downvote content and engage in nameless conversations, with random images assigned to delineate users from each other.

Tragically, Yik Yak was plagued by the presence of vitriolic and toxic users who engaged in forms of bullying, harassment, and racist or sexist violence. This included more extreme threats, such as bomb threats, threats of gun violence, and threats of racist lynching. The seemingly endless stream of vitriol prompted an enormous amount of negative public attention that had alarming consequences for Yik Yak. After being removed from the top charts of the Google Play Store for allegedly fostering a hostile climate on the platform, Yik Yak administrators acted to remove the anonymity feature and impose user handles on its users in order to instil a sense of user accountability. Though this move was effective in dampening the degree of toxic and violent behavior on Yik Yak’s feeds, it also led to users abandoning the platform and the company eventually collapsing.

Though anonymity is often associated with facilitating violence, the ability to be anonymous on the Internet does not directly give rise to violent digital communities or acts of IRL (“in real life”) violence. In my ethnographic research on Yik Yak in Kingston, Ontario, I found that despite the intense presence of vitriolic content, there was also a diverse range of users who engaged in forms of entertainment, leisure, and caretaking. And though it may be clear that anonymity affords users the ability to engage in undisciplined or vitriolic behavior, the Yik Yak platform, much like other digital and corporeal publics, allowed users to engage in creative and empowering forms of communication that otherwise wouldn’t exist.

For instance, there was a contingent of users who were able to communicate their mental health issues and secret everyday ruminations. Users in crisis would post calls for help that were often met by other users interested in providing some form of caretaking, deep and helpful conversations, and the sharing of crucial resources. Other users expressed that they were able to be themselves without the worrisome consequences of discrimination that can come with being LGBTQ or a person of color.

What was clear to me was that there was an abundance of forms of human interaction that would never flourish on social media platforms where you are forced to identify under your legal name. Anonymity has a crucial place in a culture that has become accustomed to constant surveillance from corporations, government institutions, and family and peers. Merely removing the ability to interact anonymously on a social media platform doesn’t actually address the underlying explanation for violent behavior. But it does discard a form of communication that has increasingly important social utility.

In her multiyear ethnography on trolling practices in the US, researcher Whitney Phillips concluded that violent online communities largely exist because mainstream media and culture enable them. Pointing to the increasingly sensationalist news media and the vitriolic battlefield of electoral politics, Phillips asserts that acts of vitriolic trolling borrow the same cultural material used in the mainstream, explaining, “the difference is that trolling is condemned, while ostensibly ‘normal’ behaviors are accepted as given, if not actively celebrated.” In other words, removing the affordances of anonymity on the Internet will not stave off the intensification of mass violence in our society. We need to address the cultural foundations of white supremacy itself.

As Trump belches out a consistent stream of racist hatred and the alt-right continue to find footing in electoral politics and the imaginations of the citizenry, communities of hatred on the Internet will continue to expand and inspire future instances of IRL violence. We need to look beyond technological solutions, censorship, and surveillance and begin addressing how we might face-off against white supremacy and the rise of the far-right.

 

Abigail Curlew is a doctoral researcher and Trudeau Scholar at Carleton University. She works with digital ethnography to study how anti-transgender far-right vigilantes doxx and harass politically involved trans women. Her bylines can be found in Vice Canada, the Conversation and Briarpatch Magazine.

 

https://medium.com/@abigail.curlew

Twitter: @Curlew_A

 

Headline image via: Source


While putting together the most recent project for External Pages, I have had the pleasure of working with artist and designer Anna Tokareva in developing Baba Yaga Myco Glitch™, an online exhibition about corporate mystification techniques that boost the digital presence of biotech companies. Working on BYMG™ catalysed an exploration of the shifting critiques of interface design in the User Experience community. These discourses shape powerful standards around not just illusions of consumer choice, but corporate identity itself. However, I propose that as designers, artists and users, we are able to recognise the importance of visually identifying such deceptive websites in order to interfere with corporate control over online content circulation. Scrutinising multiple website examples to inform the aesthetic themes and initial conceptual stages of the exhibition, we focused specifically on finding the common user interfaces and content language that enhance internet marketing.

Anna’s research on the political fictions that drive the push for a global mobilisation of big data, in Нооскоп: The Nooscope as Geopolitical Myth of Planetary Scale Computation, led to a detailed study of current biotech incentives as motivating forces of technological singularity. She argues that in order to achieve “planetary computation”, political myth-building and semantics are used to centre scientific thought on the merging of humans and technology. Exploring Russian legends in fairytales and folklore that traverse seemingly binary oppositions of the human and non-human, Anna interprets the Baba Yaga (a Slavic fictitious female shapeshifter, villain or witch) as a representation of the ambitious motivations of biotech’s endeavour to achieve superhumanity. We used Baba Yaga as a main character to further investigate this cultural construction by experimenting with storytelling through website production.

The commercial biotech websites that we looked at for inspiration were either incredibly blasé, where descriptions of the company’s purpose would be extremely vague and unoriginal (e.g., GENEWIZ), or unnervingly overwhelming with dense articles, research and testimonials (e.g., Synbio Technologies). Struck by the aesthetic and experiential banality of these websites, we wondered why they all seemed to mimic each other. Generic corporate interface features such as full-width nav bars, header slideshows, fade animations, and contact information were distributed in a determined chronology of vertically-partitioned main sections. Starting from the top and moving down, we were presented with a navigation menu, slideshow, company services, awards and partners, “learn more” or “order now” button, and eventually land on an extensive footer.

This UI conformity easily permits a visual establishment of professionalism and validity; a quick seal of approval for legitimacy. It is customary throughout the UX and HCI paradigm, a phenomenon that Olia Lialina describes as “mainstream practices based on the postulate that the best interface is intuitive, transparent, or actually no interface” in Once Again, The Doorknob. Referring back to Don Norman’s Why Interfaces Don’t Work, which champions computers to only serve as devices of simplifying human lives, Lialina explains why this ethos contributes to mitigating user control, a sense of individualism and society-centred computing in general. She applies GeoCities as a counterpoint to Norman’s design attitude and an example of sites where users are expected to create their own interface. Defining the problematic nature in designing computers to be machines that only make life easier via such “transparent” interfaces, she argues:

“‘The question is not, “What is the answer?” The question is, “What is the question?”’ Licklider (2003) quoted the French philosopher Henri Poincaré when he wrote his programmatic Man-Computer Symbiosis, meaning that computers as colleagues should be a part of formulating questions.”

Coining the term “User Centred Design” and laying the foundations of User Experience during his time as the first User Experience Architect at Apple in 1993, Norman advocated a transparent design ethos that has unfortunately manifested as a universal reality. It has hardened into a standard so impenetrable that a business’s legitimacy and success are probably at stake if it does not follow these rules. The idea that we’ve become dependent on reviewing the website rather than the company itself – leading to user choices being heavily navigated by websites rather than company ethos – is nothing new. Additionally, the invisibility of transparent interface design has gone on to lull users into an algorithmic illusion of “free” will. Jenny Davis’s work on affordances highlights that just because functions or information may be technically accessible, they are not necessarily “socially available”, and the critique of affordances extends to the critique of society. In Beyond the Self, Jack Self describes website loading animations, or throbbers (moving graphics that illustrate the site’s current background actions), as synchronised “illusions of smoothness” that support neoliberal incentives of real-time efficiency.

“The throbber is thus integral to maintaining the illusion of inescapability, dissimulating the possibility of exiting the network—one that has become both spatially and temporally coextensive with the world. This is the truth of the real-time we now inhabit: a surreal simulation so perfectly smooth it is indistinguishable from, and indeed preferable to, reality.”

These homogeneous, plain-sailing interfaces reinforce a mindset of inevitability and, at the same time, can create slick operations that cheat the user. “Dark patterns”, for instance, are implemented requests that trick users into completing tasks, such as signing up or purchasing, to which they may not have consented. My lengthy experience with recruitment websites illustrates the kind of impact that sites have on the portrayal of true company intentions. Constantly reading about the struggles of obtaining a position in the tech industry, I wondered how these agencies make commission when finding employment seems so rare. I persisted and filled out countless job applications and forms, and received nagging emails and calls from recruiters asking for a profile update or elaboration, until I finally realised that I had been swindled by the consultancies for my monetised data (which I handed off via applications). Having found out that these companies profit from applicant data and not from job offer commissions, I slowly withdrew from any further communication, as I knew this would only lead to another dead end. As Anna and I roamed through examples of biotech companies online, it was easy to spot familiar UI shared between recruitment and lab websites: welcoming slideshows and all the obvious keywords like “future” and “innovation” stamped across images of professionals doing their work. It was impossible not to question the sincerity of what the websites displayed.

Along with the financial motives behind tech businesses, there are also fundamental internal and external design factors that diminish the trustworthiness of websites. Search engine optimisation is vital in controlling how websites are marketed and ranked. In order to fit into the confines of web indexing, site traffic now depends not just on handling Google Analytics but on creating keywords that are either exposed in the page’s content or mostly hidden within metadata and backlinks. Because an increase in backlinks correlates with better SEO rankings, corporate websites implement dense footers with links to all their pages, web directories, social media, newsletters and contact information. The more noise a website makes via its calls to external platforms, the more noise it makes on the internet in general.
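As a rough illustration of how legible these signals are to machines, here is a minimal sketch that pulls declared keyword metadata and counts footer links out of a made-up page, using the third-party beautifulsoup4 package; the HTML, keywords and URLs are invented, not taken from any site discussed here.

```python
# Minimal sketch: the on-page SEO signals described above (keyword metadata,
# dense footers full of outbound links) are trivially machine-readable.
# Requires the third-party beautifulsoup4 package; the HTML is a made-up example.
from bs4 import BeautifulSoup

html = """
<html>
  <head><meta name="keywords" content="innovation, future, synthetic biology"></head>
  <body>
    <footer>
      <a href="https://twitter.com/example">Twitter</a>
      <a href="https://www.linkedin.com/company/example">LinkedIn</a>
      <a href="https://example.com/newsletter">Newsletter</a>
    </footer>
  </body>
</html>
"""

soup = BeautifulSoup(html, "html.parser")
keywords = soup.find("meta", attrs={"name": "keywords"})["content"]
links = [a["href"] for a in soup.find_all("a", href=True)]

print("declared keywords:", keywords)
print("footer links calling out to other platforms:", len(links))
```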

Online consumer behavior is another factor shaping marketing strategies. Besides brainstorming what users might search, SEO managers are inclined to find related terms by scrolling through Google’s results page and seeing what else their users have already searched for. Here, we can see how Google’s algorithms produce a tailored feedback loop of strategic content distribution that simultaneously feeds an uninterrupted, rotating dependency on their search engine.

It is clear that keyword research helps companies shape their content delivery and governance, and I worry about the blurring line between the information’s delivery strategy and its actual meaning. Alex Rosenblat observes how Uber deploys multiple definitions of its drivers in court hearings in order to shift blame onto them as “consumers of its software”, subsequently enabling tech companies to switch so often between the words “users” and “workers” that the two become fully entangled. In the SEO world, avoiding keyword repetition additionally helps companies stay away from competing with their own content, and companies like Uber easily benefit from this specific game plan, as they can freely interchange their wording when necessary. With the increasing application of a varied range of buzzwords, encouraged by using multiple words to portray one thing, it’s evident that Google’s SEO system plays a role in stimulating corporations to implement ambiguous language on their sites.

However, search engine restrictions also further the SEO-driven manipulation of content. There have been a multitude of studies (such as those by Enquiro, EyeTools and Did-It, or Google’s Search Quality blog and User Experience findings) that look at our eye-tracking patterns when searching for information, many of which back up the rules of the “Golden Triangle” – a triangular space in which the highest density of attention remains at the top and trickles down the left of the search engine results page (SERP). While the shape changes in relation to the SERP’s interface evolution (as explained in a Moz Blog post by Rebecca Maynes), the studies reveal how Google’s search engine interface offers the illusion of choice while exploiting the fact that users will pick from the first three results.

In a Digital Visual Cultural podcast, Padmini Ray Murray describes Mitchell Whitelaw’s project, The Generous Interface, where new forms of searching are reviewed through interface design to show the actual scope and intricacy of digital heritage collections. In order to realise generous interfaces, Whitelaw considers functions like changing results every time the page is loaded or randomly juxtaposing content. Murray underpins the importance of Whitelaw’s suggestions to completely rethink how we display collections as a way to untie us from the Golden Triangle’s logic. She claims that our reliance on such online infrastructures is a design flaw.

“The state of the web today – the corporate web – the fact that it’s completely taken over by four major players, is a design problem. We are working in a culture where everything we understand as a way into information is hierarchical. And that hierarchy is being decided by capital.”

Interfaces of choice are contested and monopolised, guiding and informing user experience. After we have clicked on our illusion of choice, we are given yet another illusion: through the mirage of soft and polished animations, friendly welcome-page slideshows and statements of social motivation, we read about company ethos (perhaps we’re given the generic slideshow and fade animation to distract us when the information is misleading).

 


Murray goes on to describe a project developed by Google’s Cultural Institute called Womenwill India, which approaches institutions to digitise cultural artefacts that speak to women in India. This paved the way for scandals in which institutions that could not afford, or did not have the expertise, to digitise their own collections ended up simply handing them over to Google. She examines the suspiciousness of the program through the motivations that lie beneath the concept of digitising collections and the institute’s loaded power: “it’s used for machines to get smarter, not altruism […] there is no interest in curating with any sense of sophistication, nuance or empathy”. Demonstrating the program’s dubious incentives, she points to the website’s cultivation of exoticism with the use of “– India” affixed to the product’s title. She goes on to describe the website as “absolutely inexplicable”, as it flippantly throws together unrelated images labelled ‘Intellectuals’, ‘Gods and Goddesses’ and ‘Artworks’ with ‘Women Who Have Encountered Sexual Violence During The Partition’.

When capital has power over the online circulation of public relations, the distinction between website design and content begins to fade, which leads design to take on multiple roles. Since design acts as a way of presenting information, Murray believes it therefore has the potential to correct it.

“This is a metadata problem as well. Who is creating this? Who is telling us that this is what things are? The only way that we can push back against the Google machine is to start thinking about interventions in terms of metadata.”

The bottom-up approach of treating interventions as a matter of metadata could then also be applied to the algorithmic activities of web crawlers. The metadata (a word I believe Murray also uses to express the act of naming and describing information) of a website specifies “what things are”. While the algorithmic activity of web crawlers further enhances content delivery, search engine infrastructure is ruled by the unification of two very specific forces – crawler and website. As algorithms remain inherently non-neutral, developed by agents with very specific motives, the suggestion to use metadata as a vehicle for intervention (within both crawlers and websites) can turn bottom-up processing into a strong political tactic.

Web crawlers’ functions are unintelligible and concealed from the user’s eye. Yet they’re connected to metadata, whose information seeps through to public visibility via content descriptions on the results page, drawn-out footers containing extensive numbers of links, ambiguous buzzword language, or any of the conforming UI features mentioned above. This allows users (as visual perceivers) to begin to identify the suspicious motives of websites through their interfaces. These aesthetic cues give us little snippets of what the “Google machine” actually wants from us. And, while they may present just the tip of the iceberg, they are a prompt not to underestimate, ignore or become numb to the general corporate visual language of dullness and disguise. The idea of making interfaces invisible has formed into an aesthetic of deception, and Norman’s transparent design manifesto has collapsed in on itself. When metadata and user interfaces work as ways of publicising the commercial logic of computation by exposing hidden algorithms, we can start to collectively see, understand and hopefully rethink these digital forms of (what used to be invisible) labour.

 

Ana Meisel is a web developer and curator of External Pages, starting her MSc in Human Computer Interaction and Design later this year. anameisel.com, @ananamei

Headline Photo: Source

Photo 2: Source

Today I worked on three separate collaborations: feedback on a thesis draft, a paper revision with colleagues at other universities, and a grant proposal with mostly senior scholars. Each collaboration represents my integration with a distinct project team, on which my status varies. And along with my relative status varies my relationship with the Track Changes editing tool.

When giving feedback on my student’s thesis, I wrote over existing text with reckless abandon. I also left comments, moved paragraphs, and deleted at will. When working on my paper collaboration, I also edited freely, though was more likely to include comments justifying major alterations. When working on the research grant, for a project team on which I am the most junior member, I knew not to change any of the text directly. Instead, I made suggestions using the Comment function, sometimes with alternative text, always phrased and punctuated as a question.

These experiences are, of course, not just tied to me nor to the specific tasks I undertook today. They are part of a larger and complex rule structure that has emerged with collaborative editing tools. Without anyone saying anything, the rules generally go like this: those higher on the status hierarchy maintain control over the document. Those lower on the status hierarchy do not. Even though Track Changes positions everything as a suggestion (i.e., collaborators can accept or reject any change), there is something gutsy about striking someone’s words and replacing them with your own, and something far meeker about a text-bubble in the margins.

Track Changes (and other collaboration tools) do not enforce status structures. They do, however, reflect and enact them. Who you are affects which functions are socially available, even as the entire suite of functions remains technically available. Users infuse these tools with existing social arrangements and keep these arrangements intact. The rules are not explicit. Nobody told me not to mess with the grant proposal text, just as nobody sanctioned my commanding approach to the student’s thesis, or the “clean” (Track Changes all accepted) manuscript copy I eventually sent to my co-authors. Rather, these rules are implicit. They are tacit. And yet, they are palpable. Missteps and transgressions could result in passive-aggressive friction in the mildest case, and severed working relationships in the more extreme.

Just like all technologies, Track Changes is of the culture from which it stems. Status hierarchies in the social system reemerge in the technical artifact and the social relations facilitated through it. Stories of Track Changes norm breaching would illustrate this point with particular clarity. I’m struck, however, by not having on hand a single personal example of such a breach. Everyone I work with seems, somehow, to just know what to do.

Jenny Davis is on Twitter @Jenny_L_Davis

 

Stories of data breaches and privacy violations dot the news landscape on a near daily basis. This week, security vendor Carbon Black published their Australian Threat Report, based on 250 interviews with tech executives across multiple business sectors. Eighty-nine percent of those interviewed reported some form of data breach in their companies. That’s almost everyone. These breaches represent both a business problem and a social problem. Privacy violations threaten institutional and organizational trust and also expose individuals to surveillance and potential harm.

But “breaches” are not the only way that data exposure and privacy violations take shape. Often, widespread surveillance and exposure are integral to technological design. In such cases, exposure isn’t leveled at powerful organizations, but enacted by them. Legacy services like Facebook and Google trade in data. They provide information and social connection, and users provide copious information about themselves. These services are not common goods, but businesses that operate through a data extraction economy.

I’ve been thinking a lot about the cost-benefit dynamics of data economies and, in particular, how to grapple with the fact that for most individuals, including myself, the data exchange feels relatively inconsequential or even mildly beneficial. Yet at a societal level, the breadth and depth of normative surveillance is devastating. Resolving this tension isn’t just an intellectual exercise, but a way of answering the persistent and nagging question: “why should I care if Facebook knows where I ate brunch?” This question is often wrapped in a broader “nothing to hide” narrative, in which data exposure is a problem only for deviant actors.

“Nothing to hide” narratives derive from a fundamental obfuscation of how data work at scale. “Opt-out” and even “opt-in” settings rely on a denatured calculus. Individuals solve for data privacy as a personal trouble when, in contrast, it is very much a public issue.

Data privacy is a public issue because data are sui generis—greater than the sum of their parts. Data trades don’t just affect individuals; they collectively generate an encompassing surveillance system. Most individual data are meaningless on their own. Data become valuable—and powerful—through aggregation. A singular datum is thus primarily effectual when it combines into plural data. In other words, my data come to matter in the context of our data. With our data, patterns are rendered perceptible, and those patterns become tools for political advantage and economic gain.
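A toy sketch of the point, with invented records: one person’s brunch check-in is banal, but pooled check-ins make a pattern visible, and the pattern is what gets sold.

```python
# Toy illustration of "my data vs. our data": a single check-in reveals little,
# but aggregation makes a pattern legible and therefore targetable.
# All records below are invented.
from collections import Counter

checkins = [
    ("alice", "Sat", "10:00", "Corner Cafe"),
    ("bob",   "Sat", "10:30", "Corner Cafe"),
    ("carol", "Sun", "09:45", "Corner Cafe"),
    ("dan",   "Sat", "11:00", "Corner Cafe"),
    ("erin",  "Sat", "10:15", "Harbour Diner"),
    # ...thousands more rows in a real system
]

# Individually, each row is one banal brunch. Aggregated, a weekend-morning
# cluster at one venue emerges, which is exactly the kind of pattern sold for targeting.
pattern = Counter((day, venue) for _, day, _, venue in checkins)
print(pattern.most_common(2))
```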

Individuals can trade their data for services which, at the individual level, make for a relatively low cost (and even personally advantageous) exchange. Accessing information through highly efficient search engines and connecting with friends, colleagues, communities, and fellow hobbyists are plausibly worth as much or more than the personal data that a user “pays” for this access and connection. At the individual level, data often buy more than they cost.

However, the costs of collective data are much greater, and include power transfers to state and corporate actors. Siva Vaidhyanathan is excellent on this point. In his book Antisocial Media: How Facebook Disconnects Us and Undermines Democracy, Vaidhyanathan demonstrates how the platform’s norm of peer-sharing turns into peer surveillance and, through mass data collection, into corporate and state surveillance. Facebook collects our data and gifts it back to us in the form of pleasing News Feeds. Yet in turn, Facebook sells our data for corporate and political gain. This model only works en masse. Both News Feeds and political operatives would be less effective without the data aggregates, collected through seemingly banal clicks, shares, and keystrokes.

Individual privacy decisions are thus not just personal choices and risks, nor even network-based practices. Our data are all wrapped up in each other. Ingeniously, big tech companies have devised a system in which data exchange benefits the individual, while damaging the whole. Each click is a contribution to that system. Nobody’s data much matters, but everybody’s data matters a lot.

Jenny Davis is on Twitter @Jenny_L_Davis

Headline Pic Via: Source

Stories about AI gone bigoted are easy to find: Microsoft’s neo-Nazi “Tay” bot, her still-racist sister “Zo”, Google’s autocomplete function that assumed men occupy high-status jobs, and Facebook’s job-related targeted advertising, which assumed the same.

A key factor in AI bias is that the technology is trained on faulty databases. Databases are made up of existing content. Existing content comes from people interacting in society. Society has historic, entrenched, and persistent patterns of privilege and disadvantage across demographic markers. Databases reflect these structural societal patterns and thus, replicate discriminatory assumptions. For example, Rashida Richardson, Jason Schultz, and Kate Crawford put out a paper this week showing how policing jurisdictions with a history of racist and unprofessional practices generate “dirty data” and thus produce dubious databases from which policing algorithms are derived. The point is that database construction is a social and political task, not just a technical one. Without concerted effort and attention, databases will be biased by default. 

Ari Schlesinger, Kenton P. O’Hara, and Alex S. Taylor present an interesting suggestion (and question) about database construction. They are interested in chatbots in particular, but their point easily expands to other forms of AI. They note that the standard practice is to manage AI databases through the construction of a “blacklist”: a list of words that will be filtered from the AI’s training. Blacklists generally include racist, sexist, homophobic, and other forms of offensive language. The authors point out, however, that this approach is less than ideal for two reasons. First, it can eliminate innocuous terms and phrases in the name of caution. This doesn’t just limit the AI, but can also erase forms of identity and experience. The authors give the example of “Paki”. This is a derogatory racist term. However, filtering this string of letters also filters out the word Pakistan, an entire country/nationality that gets deleted from the lexicon. Second, language is dynamic and meanings change. Blacklists are relatively static and thus quickly dated and ineffective.
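To make the first problem concrete, here is a minimal sketch of a naive blacklist filter; it is not the authors’ implementation, and the tokenization is deliberately crude, but it shows how matching on substrings removes the slur and takes the country with it.

```python
# Minimal sketch of naive blacklist filtering: a substring match on a slur
# also wipes out innocuous words that contain it, so whole topics and
# identities disappear from the training data along with the offensive term.
BLACKLIST = {"paki"}  # illustrative single-entry list

def naive_filter(sentence: str) -> str:
    """Drop any token that merely contains a blacklisted string."""
    kept = [tok for tok in sentence.split()
            if not any(bad in tok.lower() for bad in BLACKLIST)]
    return " ".join(kept)

print(naive_filter("My family is from Pakistan"))
# -> "My family is from"  (the country, and the identity, vanish with the slur)
```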

The authors suggest instead that databases be constructed proactively through modeling. Rather than telling databases what not to say (or read/hear/calculate, etc.), we ought to manufacture models of desirable content (e.g., people talking about race in a race-conscious way). I think there’s an interesting point here, and an interesting question about preventative versus proactive approaches to AI and bias. Likely, the approach has to come from both directions: omit that which is explicitly offensive and teach in ways that are socially conscious. How to achieve this balance remains an open question, both technically and socially. My guess is that constructing social models of equity will be the most complex part of the puzzle.

 

Jenny Davis is on Twitter @Jenny_L_Davis

Headline Pic Via: Source

Yes, please, to this article by Amy Orben and Andrew K. Przybylski, which I plan to pass around like I’m Oprah with cars. Titled The Association Between Adolescent Well-Being and Digital Technology Use, the paper does two of my favorite things: it demonstrates the importance of theoretical precision and conceptual clarity in research design, and it undermines moral panics about digital technology and mental health.

The effects of digital technologies on the mental lives of young people have been a topic of interdisciplinary concern and general popular worry. Such conversations are kept afloat by contradictory research findings, in which digital technologies are alternately shown to enhance mental well-being, damage mental well-being, or have little effect at all. Much (though not all) of this work comes from secondary analyses of large datasets, building on a broader scientific trend that treats big data analytics as an ostensibly superlative research tool. Orben and Przybylski base their own study on an analysis of three exemplary datasets including over 350,000 cases. However, rather than simply address digital technology and mental well-being, the authors rigorously interrogate how existing datasets define key variables of interest, operationalize those variables, and model them with controls (i.e., other relevant factors).

A key finding from their work is that existing datasets conceptualize, operationalize, and model digital technology and mental well-being in a lot of different ways. The variation is so great, the authors find, that researchers can construct trillions of different combinations of analytic decisions, just to answer a single research question (i.e., is digital technology making teens sad?). Moreover, they find that analytic flexibility has a significant effect on research outcomes. Running over 20,000 analyses on their three datasets, Orben and Przybylski show that design and analysis decisions can produce thousands of different outputs, some of which show negative effects of digital technology, some positive, and others none at all. Their findings drive home the point that big data is not intrinsically valuable as a data source but instead must be treated with theoretical care. More data does not, by virtue of sheer size, equal better science. Better science is theoretically informed science, for which big data can act as a tool. While others have made similar arguments, Orben and Przybylski articulate the case in the home language of big data practitioners.
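To see where numbers like “trillions of combinations” come from, here is a back-of-the-envelope sketch; the counts are invented placeholders, not the figures from Orben and Przybylski’s datasets.

```python
# Rough arithmetic of analytic flexibility: each defensible research decision
# multiplies the number of possible specifications. The counts below are
# invented for illustration, not taken from Orben and Przybylski.
from math import comb

tech_use_measures = 6   # e.g., TV, gaming, social media, weekday vs. weekend use
wellbeing_items = 12    # survey items that can be combined into an outcome scale
candidate_controls = 5  # controls a researcher may or may not include

# Outcome = any non-empty subset of well-being items; controls = any subset.
outcome_specs = sum(comb(wellbeing_items, k) for k in range(1, wellbeing_items + 1))
control_specs = 2 ** candidate_controls

total_specifications = tech_use_measures * outcome_specs * control_specs
print(f"{total_specifications:,} defensible ways to 'test' one research question")
# With richer datasets (dozens of items, more measures and controls), this
# count balloons toward the billions and beyond, which is the paper's point.
```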

The second element of their study is that analyzing across the three datasets yields a result in which digital technology explains less than 1% of the variation in teens’ mental well-being (0.4%, to be exact). The findings do show that the relationship is negative, but only slightly. This means that more than 99% of the variation in teens’ mental well-being is associated with things other than digital technology. Regularly eating potatoes showed a negative correlation with teens’ mental health similar to that of technology use. The minuscule space of digital technology in the mental lives of young people sits in stark contrast to its bloated expanse within the public imagination and policy initiatives. For example, while I was searching for popular press pieces about mental health among university students to use in a class I am preparing, it seemed a requirement that authors agree on the trouble of technology, even when they disagreed on everything else.

The article’s unassuming title could easily have blended with the myriad existing studies about technology, youth, and mental health. Its titular simplicity belies the explosive dual contributions contained within. Luckily, Twitter

 

Jenny Davis is on Twitter @Jenny_L_Davis