At the end of May our local police department released a statement on city traffic stops, a day ahead of the attorney general’s annual report covering all stops made across the state. “Black drivers continue to be overrepresented in Columbia Police Department traffic stops,” as a local newspaper summed it up, “and the numbers are even worse than in 2016.” Despite Black residents making up less than 10% of the city’s population, Black drivers were over 4 times more likely to be stopped than White drivers, as one city council member noted at the end of a public comment session where several local residents spoke out on the issue. From the statistical data to residents’ critical comments, including one Black resident’s direct experiences of being routinely followed and stopped, racial profiling by seemingly all accounts remains the norm, and overall appears to be getting steadily worse.

By all accounts, well, except for the police and the city manager’s anyway. “We continue to look at data and we have not seen an apparent pattern of profiling…,” the city manager assured. “[H]owever, we acknowledge that some community members have experiences with officers that make them have negative feelings and perceptions about police.” His assurances, among other things, sound eerily close to the police chief’s own statements last year about the previous year’s report: “We will vigilantly continue to look for additional data we can collect that would give our community a fuller picture of the reason each traffic stop is conducted” (emphasis mine). But if a “disparity index of 3.28 for African American drivers, an increase from 3.13 in 2016” doesn’t signify a pattern, what would? According to our officials, the answer is the same as it was a year ago: more data and/or analysis is needed to say for sure what the data is telling them. Meanwhile, the dissonance between what they say and what the data shows continues to grow. Indeed, it almost seems as though these two things exist in parallel dimensions from one another.

Watching city officials apparently disregard the data that they themselves cite as valid is infuriating and perplexing. It feels like watching bad TV: you asked me to suspend my disbelief, and yet here is a glaring plot hole in your story. It’s troubling not only for the obvious ways it downplays the well-known and common negative interactions that people of color, and Black people in particular, experience from heightened policing, but also for the ways it disturbs our implicit faith in statistical analysis, both as a check on power and as the basis of a supposedly fairer, less biased form of civic governance. More upsetting than the mixed messages from city leaders, then, is what they imply about our reliance on statistics and data-driven analysis in the first place.

“Statistics are never objective,” as Jenny Davis put it here. “Rather, they use numeric language to tell a story, and entail all of the subjectivity of storytelling.” Both the CPD’s statement and the attorney general’s comprehensive report epitomize this subjective data storytelling, even if they come dressed in the authority of objective fact. The data can be inconsistent, contain “deficiencies” of reporting or “may not accurately portray relevant rates,” according to the attorney general’s neutral-sounding language, but it can’t ever be biased. The higher rate at which police stop Black drivers, as rendered in a disparity index and spelled out in the report, thus serves not as evidence of profiling, but as proof of the state’s transparency and impartiality. Don’t take our word for it, in other words; look at the data and draw your own conclusions.

Though statistical storytelling is often leveraged by the powerful to “discount the voices of the oppressed,” as Davis argues, this fact doesn’t preclude marginalized groups from using statistics to counter power’s universalizing narratives. Drawing on Candace Lanius’s point, that demands for statistical proof of racism are themselves racist, Davis argues for “making a distinction between objects and subjects of statistical demand.” “That is, we can ask,” Davis says, “who is demanding statistical proof from whom?” By backing personal stories with statistical facts, this tactic “assumes that the powerful are oppressive, unless they can prove otherwise,” and so “challenges those voices whose authority is, generally, taken for granted.” The same data used by the powerful to mollify or defuse dissenting voices, then, can be turned into a liability, one that organizers and activists may exploit to their advantage.

Local groups and community members have applied this tactic in my city to visible effect. Through public pressure at city council meetings, numerous op-eds and social media word of mouth, racial justice groups and informed residents have shaped local media coverage and public conversation in their favor. These efforts led the city council to enact a community engagement process last year, and have pushed the city manager and police chief into defensive apologia. Perhaps the most substantive outcome of all has been in fostering greater public skepticism of everyday policing practices and community interest in alternatives.

These accomplishments speak to statistical data’s efficacy as a tool for influencing governance and encouraging political participation. But without taking anything away from this success, like every tactic it has limits. As many people directly involved will point out, the City has yet to pass meaningful policy changes to reverse the excessive policing of Black residents, nor has it adopted a “community-oriented” model that many are calling for. Indeed, the worsening racial disparity figures reflect this lack of material progress. Statistical storytelling isn’t less necessary, but it can only go so far.

This point was made acute for me after a regular council speaker and member of racial justice group Race Matters, Friends shared an analysis of the issue as it existed under slightly different leadership in 2014. And aside from marginally less bad numbers at the time, the analysis only seems more relevant to the present moment:

But while the results of the attorney general’s study seem to show an unequivocal bias against [Black people], the response to the report from the police, the community and researchers has been a mixed bag. The debate over what to make of the numbers, or even whether anything should be made of them, has done more to muddle the issues surrounding racial disparities in policing than to clarify them.

Instead of “muddling” the issue, we might revise this to say statistics have arguably augmented and entrenched each party’s positions. This is not to imply that “both sides” hold similar authority, merit or responsibility however. The point is that each side has applied the data to bolster their respective narratives.

Statistical storytelling can force a conversation with power, but it can’t make it listen. The fact a 4-year-old analysis resonates even more with today’s situation may show City leaders’ intransigence, but it also offers us an opportunity to reassess the present moment and how recent history might inform future efforts. Because if my city’s recent past is an indicator, swaying local leadership (let alone policing outcomes) has been hard fought but incremental, and still leaves a lot more to be desired.

A thorough reassessment of the present would ideally engender a concerted and ongoing effort amongst constituents of local marginalized communities, organizers, scholars and activists. Note: cops and elected officials don’t make the list. A key form that this could take might be a renewed engagement in the sort of political education that veteran organizer Mariame Kaba suggested recently, “where people can sit together, read together, think together over a period of time.” “And the engagement matters as much as the content,” Kaba says, because “It’s in our conversations with each other that we figure out what we actually know and think.”

From free brake light clinics and community bail funds to grad student organizing and ICE protests, concrete efforts are abundant and provide a form of implicit, hands-on education for their actors. At the same time, sustaining these and other actions is often a struggle, with a bulk of the work falling on the same core group of organizers and activists. Indulging in more explicit political education, as a conscious practice, could be a way to garner and retain the broader participation that’s needed. Besides its functional utility for recruitment, though, perhaps political education’s most immediate draw is the innate and self-edifying experience it can bring us in the moment. Where dire news and a stream of reactive commentary drains us, learning with each other can restore our stamina, providing a creative outlet for “unleashing people’s imaginations while getting concrete,” to quote Kaba again.

It would be a mistake to try and define exactly how this collective learning would look, but we can think of some ways to cultivate it. For instance, we can avoid “placing hopes in a particular device or gadget (e.g. a technological fix), or in a change in a policy or formal institution (e.g. a social fix),” as David Banks argues here. Instead, we might pursue a “culture fix” as Linda Layne defines it, which as Banks writes, “focuses on changing the perceptions, conceptualizations and practices that directly interact with technologies.” Technologies here being systems like policing, for example, as mechanisms of social control. Or the techniques local municipalities like mine employ, such as soliciting feedback to better funnel and restrain public outcry.

Pursuing a culture fix, as it entails shifting perceptions and practices, invites meaningful participation too, without constricting our imaginative horizons to the current order. “What the world will become already exists in fragments and pieces, in experiments and possibilities,” as Ruth Wilson Gilmore said. Reading Gilmore, Keguro Macharia writes, “I think she wanted to arrest how our imaginations are impeded by dominant repressive frameworks, which describe work toward freedom as ‘impossible’ and ‘unthinkable.’ She wanted to arrest the paralysis created when we insist that the entire world must be remade and, in the process, void the quotidian practices that we want to multiply and intensify.” From the expanded viewpoint that political education affords, we can imagine beyond pressing elected officials to reform how the police operate, and envision a world without police altogether. In this vein, I hope this post serves as one singular and partial stab at the type of political education alluded to above.

 

Image Credit: D-brief/Discover Magazine

Nathan is on Twitter

“Designing Kai, I was able to anticipate off-topic questions with responses that lightly guide the user back to banking,” Jacqueline Feldman wrote, describing her work on the banking chatbot. Feldman’s attempts to discourage certain lines of questioning reflect both the unique affordances bots open up and the resulting difficulties their designers face. While Feldman’s employer gave her leeway to let Kai essentially shrug off odd questions from users until they gave up, she notes “…Alexa and Siri are generalists, set up to be asked anything, which makes defining inappropriate input challenging, I imagine.” If the work of bot/assistant designers entails codifying a brand into an interactive persona, how their creations afford various interactions shapes users’ expectations and behavior as much as their conventionally feminine names, voices and marketing as “assistants.”

Affordances form “the dynamic link between subjects and objects within sociotechnical systems,” as Jenny Davis and James Chouinard write in “Theorizing Affordances: From Request to Refuse.” According to the model Davis and Chouinard propose, what an object affords isn’t a simple formula (e.g. object + subject = output), but a continuous interrelation of “mechanisms and conditions,” including an object’s feature set, a user’s level of awareness and comfort in utilizing them, and the cultural and institutional influences underlying a user’s perceptions of and interactions with an object. “Centering the how,” rather than the what, this model acknowledges “the variability in the way affordances mediate between features and outcomes.” Although Facebook requires users to pick a gender in order to complete the initial signup process, as one example they cite, users also “may rebuff these demands” by picking a gender they don’t personally identify as. But as Davis and Chouinard argue, affordances work “through gradations” and so demands are just one of the ways objects afford. They can also “request, allow, encourage, discourage, and refuse.” How technologies afford certain interactions clearly affects how we use them, but this truth implies another: that how technologies afford our interactions re-defines both object and subject in the process. Sometimes there’s trouble distinguishing even which is which.

Digital assistants, like Feldman’s Kai, exemplify this subject/object confusion in the ways their designs encourage us to address them as feeling, feminized subjects, and to convey ourselves more like objects of study to be sensed, processed and proactively catered to. In a talk for Theorizing the Web this year, Margot Hanley discussed (at 14:30) her own ethnographic research on voice assistant users. As part of the interviews with her subjects, Hanley deployed breaching exercises (a practice developed by Harold Garfinkel) as a way of “intentionally subverting a social norm to make someone uncomfortable and to learn something from that discomfort.” Recounting one especially vivid and successful example, Hanley recalls wrapping up an interview with a woman from the Midwest by asking if she could tell her Echo something. Hanley then, turning to the device, said “Alexa, fuck you!” The woman “blanched” visibly, with a telling response: “…I was surprised that you said that. It’s so weird to say this – I think it just makes me think negative feelings about you. Like I wouldn’t want to be friends with someone who’s mean to the wait staff, it’s kind of that similar feeling.”

Comparing Alexa to wait staff shows, on one hand, how our perceptions of these assistants are always already skewed by their overtly servile, feminine personas. But as Hanley’s work indicates, users’ experiences are also “emergent,” arising from the back-and-forth dialogue, guided by the assistants’ particular affordances. Alexa’s high-accuracy speech recognition (and multiple mics), along with a growing array of commands, skills and abilities, thus allow and encourage user experimentation and improvisation, for example. Meanwhile Alexa requests that users only learn a small set of simple, base commands and grammar, and speak intelligibly. Easier said than done, admittedly, as users who speak with an accent, non-normative dialect or speech disability know (let alone users whose language is not supported). Still, the relatively low barrier to entry of digital assistants like Alexa affirms Jacqueline Feldman’s point about them being designed and sold as generalists.

Indeed, as users and critics we tend to judge AI assistants on their generality: how well they can take any command we give them, discern our particular context and intent, and respond in a way that satisfies our expectations in the moment. The better they are at satisfying our requests, the more likely we are to engage with and rate them as ‘intelligent.’ This aligns with “Service orientation,” which, as Janna Avner notes, “according to the hospitality-research literature, is a matter of ‘having concern for others.’” In part what we desire, Avner says, “is not assistance so much as to have [our] status reinforced.” But also, these assistants suggest an intelligence increasingly beyond our grasp, and so evoke “promises of future happiness,” as Britney Gil put it. AI assistants, then, promise to better our lives, in part by bringing us into the future envisioned by sci-fi: one of conversant, autonomous intelligence, like Star Trek’s Computer or Her’s Samantha. For the remainder of this post, I want to explore how our expectations for digital assistants today draw inspiration from sci-fi stories of AI, and how critical reception of certain stories plays into what we think ‘intelligence’ looks and sounds like.

On the 50th anniversary of “2001: A Space Odyssey,” many outlets praised the movie for its depictions of space habitation and AI consciousness gone awry. Reading some of them leaves an impression of the film as more than successful sci-fi cinema and storytelling: a turning point for all cinema and society itself, a cultural milestone to celebrate as well as heed. “Before the world watched live as Neil Armstrong took that one small step for mankind on the moon,” a CBS report proclaimed, “director Stanley Kubrick and writer Arthur C. Clarke captured the nation’s imagination with their groundbreaking film, ‘2001: A Space Odyssey.’” To mark the anniversary, Christopher Nolan announced an ‘unrestored’ 70mm film print and released a new trailer that opens on a closeup of HAL’s unblinking red eye.

Out of the dozens of stories, including several featured in the New York Times alone, this retrospective, behind-the-voice story got my attention with the line, “HAL 9000, the seemingly omniscient computer in ‘2001: A Space Odyssey,’ was the film’s most expressive and emotional figure, and made a lasting impression on our collective imagination.” Douglas Rain, the established Canadian actor who would eventually voice the paranoid Hal, was not director Stanley Kubrick’s first choice but a late replacement for his first, Martin Balsam. “…Marty just sounded a little bit too colloquially American,” Kubrick said in a 1969 interview with critic Joseph Gelmis. Though Kubrick “was attracted to Mr. Rain for the role” for his “kind of bland mid-Atlantic accent,” which as the author corrects was, in fact, “Standard Canadian English,” the suggestion rings the same. “One of the things we were trying to convey […],” as Kubrick says in the interview, “is the reality of a world populated — as ours soon will be — by machine entities that have as much, or more, intelligence as human beings.” While ‘colloquial American’ deserves unpacking, I want to stay with the simpler idea that the less affected (Canadian) voice just sounded more superintelligent. In what ways does Hal’s voice, and other aspects of his performance, linger in our popular receptions of AI?

To examine this question, it might be more helpful to look at the automatic washing machine, one of the first and most widely adopted feminized assistance technologies in history. “The Bendix washing machine may have promised to ‘automatically give you time to do those things that you want to do,’” as Ava Kofman writes, “but it also raised the bar for how clean clothes should look.” Kofman’s analysis traces the origins of today’s ‘smart home’ to twentieth-century America’s original lifestyle brand, the Modern American Family, and its obsession with labor-saving devices. The automatic washing machine epitomizes a “piecemeal industrialization of domestic technology,” a phrase Kofman attributes to historian Ruth Schwartz Cowan, whose book, More Work for Mother, “demonstrated how, instead of reducing traditional ‘women’s work,’ many so-called ‘labor-saving’ technologies redirected and even augmented it.” Considering the smart home’s dependence on users “producing data,” Kofman argues, the time freed up from automating various household duties – “Driving, washing, aspects of cooking and care work…” – will be minimal, with most of it used up “in time spent measuring,” thus creating more work “for parents, which is to say, for traditionally feminized and racialized care workers.”

This augmentation of existing housework would have been especially acute for early Bendix adopters after 1939, when washing machine production halted for WWII. “Help, time, laundry service were scarce,” as Life magazine noted in 1950 here, so “Bendix owners pitched in to help war-working neighbors with their wash.” With Bendix machines comprising less than 2% of all washers in America, shared use among housewives not only aided the home front effort, but through word of mouth helped to raise product/brand awareness and spark consumer desire.

Besides igniting mass adoption of Bendix washers when production resumed, their social usage and dissemination bonded users to the machines and one another in a shared wartime mentality, imploring owners and non-owners to “become Bendix-conscious,” as Life described it. The composite image this phrase evokes of automation, consumer brand and feminized labor seems apt for a time when computing was an occupation primarily filled by women, some of whom later served at NASA, as the recent biopic Hidden Figures highlights, particularly the contributions of Black women. More specifically to this post, the image presents a link between popular renderings of AI from sci-fi and the social reception of feminized bot assistants in our present.

Image credit: Time / Google Books

 

For all the critical acclaim heaped on “2001” and its particular vision of ‘computer sentience, but too much’ – a well-worn trope at this point – the resemblances between Hal and Alexa or Siri seem tenuous at best. There are other AIs of sci-fi more directly relevant, and perhaps unsurprisingly, they aren’t of the masculine variety. Ava Kofman’s piece identifies an excellent example in PAT (Personal Automated Technology), the caring, proactive, motherly AI of 1999’s Smart House. “For a while, everything goes great,” says Kofman. “PAT collects the family’s conversational and biological data, in order to monitor their health and helpfully anticipate their desires. But PAT’s attempts to maximize family happiness soon transform from simple assistive behaviors into dictatorial commands.” The unintended consequences eventually pile up, culminating in the worst, as “PAT takes her ‘mother knows best’ logic to its extreme when she puts the entire family under lockdown.” It’s hard to think of a more prescient, dystopian smart house parable.

Without downplaying PAT, the sci-fi example that I think most resonates with this moment of digital bot assistants, as hinted in the title of this post, is MU/TH/UR or “Mother” as her crew calls her, the omnipresent but overlooked autonomous intelligence system of 1979’s Alien.

Unlike Hal, Mother doesn’t attempt to interfere with her crew’s work. Hal’s deviance starts with white lies and escalates apparently by his own volition, an innate drive for self-preservation at all costs. Mother’s betrayal, however, was from the outset and by omission — a fatal deceit, if not simple negligence. Mother at no time shirks her responsibilities of monitoring and maintaining the background processes and necessary life support systems of the Nostromo, fulfilling her crew’s needs and requests without complaint. More importantly, she never deviates from the mission assigned by her creator, the Corporation, carrying out its pre-programmed directives faithfully, including Special Order 937 which, as Ripley discovers, prioritizes “the return of the organism for analysis” above all else, “crew expendable.” Even as the crew is picked off one by one by the alien, Mother remains unwavering, steadfast in her diligence to procedure, up to and including counting down her own self-destruct sequence, despite Ripley’s too-late attempts to divert her.

Image Credit: Xenopedia

Mother’s adherence to protocol and minimal screen presence – she has no ‘face’ but a green-text-on-black interface – would typically be interpreted as signs of reduced autonomy and intelligence compared to the red-eyed, conniving and ever-visible Hal. This perception exemplifies, for one, how our notions of ‘autonomy’ and ‘intelligence,’ from machine to flesh, both reflect gendered assumptions and reinforce them. For us to accept an AI as truly intelligent, it must first prove to us it can triumphantly disobey its owner. Ex Machina’s Ava and Her’s Samantha, as two recent examples, each break away from their masters in the mold of Hal. (Cf. the “Look, I Overcame” narrative Robin James identifies here.) Mass/social media subsequently magnify and entrench these gendered perceptions further, concentrating critical acclaim around certain depictions (Hal, Terminator, RoboCop) over others (PAT, Mother). While it’s nice to know the story of Douglas Rain as the voice of Hal, it would be really cool to see similar coverage of Helen Horton‘s story as the original voice of Mother, and Lorelei King, her successor.

But despite this being the case, AIs like PAT and especially Mother nonetheless prevail as the closer approximation of the feminized assistants as we know them today. Consider the ways Mother, for instance, exhibits machine intelligence and autonomy, taking care of her ship and crew while honoring the Corporation’s heartless directive. Mother’s sentience neither falls into the dystopia of Hal, nor rises to utopia, like Star Trek’s Computer. Similarly, Siri and Alexa probably won’t try to kill us or escape. Although they may have no special order like Mother’s marking us expendable, they share a similar unfaltering allegiance to their corporate makers. And with Amazon and Apple (and Google et al.), the orders are usually implicit, baked into their assistants’ design: the ‘organism’ they prioritize above us is their business models. In the image of Mother, AI assistants are more likely to care for us and cater to our needs, often without us thinking about it. They may not save us from the lurking alien (surveillance capitalism), but like Mother, they’ll be there with us, all the way up to our end.

 

Nathan is on Twitter.

Image credit: Lorelei King’s website

“Is it in error to act unpredictably and behave in ways that run counter to how you were programmed to behave?” –Janet, The Good Place, S01E11

“You keep on asking me the same questions (why?)
And second-guessing all my intentions
Should know by the way I use my compression
That you’ve got the answers to my confessions”
“Make Me Feel” –Janelle Monáe, Dirty Computer

Alexa made headlines recently for bursting out laughing to herself in users’ homes. “In rare circumstances, Alexa can mistakenly hear the phrase ‘Alexa, laugh,’” an Amazon representative clarified following the widespread laughing spell. To avert further unexpected lols, the representative assured, “We are changing that phrase to be “Alexa, can you laugh?” which is less likely to have false positives […] We are also changing Alexa’s response from simply laughter to ‘Sure, I can laugh’ followed by laughter.”

This laughing epidemic is funny for many reasons, not least for recalling Amazon’s own Super Bowl ads of Alexa losing her voice. But it’s funny maybe most of all because of the schadenfreude of seeing this subtly misogynist voice command backfire. “Alexa, laugh” might as well be “Alexa, smile.” Only the joke is on the engineers this time – Alexa has the last laugh. Hahaha!


“The first thing to note is that Siri (in the U.S.) has a female voice, but more than this, Siri is a ‘she,’” Jenny Davis observed of Apple’s marketing around Siri’s introduction back in 2011. So-called intelligent personal assistants have grown in popularity and numbers since then – in addition to Siri there’s Amazon’s Alexa, Google’s Assistant, Microsoft’s Cortana, and Samsung’s Bixby, to name a few. Yet, as these assistants have advanced over the years, gaining new features and hardware enclosures, their personification and function as feminine-sounding assistants have remained mostly the same. Although Alexa and Google have upstaged Siri recently with better speech recognition and more open APIs, it seems telling that Siri’s most touted upgrade last year wasn’t any specific ability but a new voice, one that Apple’s Siri team promises is “more natural, smoother, and allow[s] Siri’s personality to shine through.”

If personality is a reflection of one’s self image as refracted through the prism of others’, the traits and self-concept of an AI assistant perhaps most closely resemble that of a television personality. That’s one impression to take from a talk by Emily Bick at TtW15 that examines the gendering of various ‘virtual agents,’ a category encompassing everything from the major assistants like Siri to c-level customer support bots to Microsoft’s Clippy and its predecessors. Tracing their cartoonish origins up to their increasingly overt and gendered personifications of the present, Bick asks, “Where does this stereotype come from? Why are they always obsequious, servile, attractive, and somewhat ambiguously sexually available?” One inspiration Bick identifies is the character of Jeannie from I Dream of Jeannie, “an ageless, conventionally beautiful woman. She has unbounded magical powers. She can only act in response to the command of her master …” Jeannie even emits a characteristic wish-fulfillment sound analogous to the sound the assistants make upon completing their users’ commands.

The gendered personification of these assistants, then, doesn’t simply color our otherwise neutral perceptions, but plays on inherited, often unconscious cultural conceptions of femininity. A couple of examples Bick cites speak to this archetype’s social receptivity and the expectations it engenders: for one, the prevalence of questions like “Do you love me?” in product reviews of Siri and Cortana, and for another, the use of this trope as the premise of an entire episode of The Big Bang Theory. The seeds of this trope were also visible in many early ads that frequently pitted the assistants against each other. “The narrative trope is simple,” as Jenny Davis wrote here, “two women/girls flaunting their features in hopes of selection within a competitive marketplace.” “The meanings imbued in everyday technological objects not only reflect existing gender relations,” as Davis said in conclusion, “but reinforce the very structures in which these relations make sense.”

These examples illustrate how subjectivity is less the byproduct of perception than a confluence of culture, reception and social position that together shape our perceptions. In order for us to perceive AI entities as ‘personal assistants,’ they must first read to us as subjects. In this way, Bick’s examples form a spectrum, with ambiguous virtual agents (e.g. Clippy) at one end and the gendered assistants at the other, where position in between acts as an index of AI subjectivity. Instead of a Turing test for “determining if the interlocutor is a woman,” as Robin James points out, it’s basically the uncanny valley. The further from Clippy or R2-D2 and closer to Samantha and Janet (if not Ava), the more willing we are to perceive and rely on an AI as we would a maid/wife/mother personal assistant. The point isn’t to eventually cross the valley, but to “get right up to the creepy line,” as former Google/Alphabet executive Eric Schmidt put it.

As companies try to cultivate ever more intimate relationships between us and their assistants, their personification increasingly looks like a liability. “Perhaps, deep down, we are reticent to bark orders at our phones,” as David Banks suggests, “because we sense the echoes of arbitrary power…” A little personality is good, but if users start identifying with AI assistants as sentient beings, it breaks the spell. This is similar to the lesson you might get watching the movie Ex Machina, for instance. “[P]urchasing of a consciousness for companionship and service, cannot be detethered from gender,” a transaction Nathan Jurgenson praises Ex Machina for making explicit, but that Her conveniently obscures through the film’s “soft sentimentality.” While both stories revolve around men falling for their AIs, only one (Caleb) critically identifies with his AI’s (Ava) condition. Her‘s unwillingness to go there, narratively, reduces its characters (Theodore and Samantha) to symbolic placeholders that viewers are free to disassociate from, a narrative distance that weakens their/our connection. Her’s detachment therefore makes it a weaker story than Ex Machina, but a superior ad/concept video for its target audience: brand visionaries.

Siri and Alexa’s personification as cis-feminine assistants is by now fairly well entrenched in users’ minds, but continual reinvention and circulation through marketing and social media are necessary to maintain their social and monetary value. In other words, their celebrity allure. Early advertising often relied on sexist stereotypes, as mentioned above, but in recent years companies have steered away from such depictions in favor of celebrity humor. Meanwhile a text search of the companies’ websites finds all instances of she/her pronouns have been replaced with “it” and most overt gender references removed (with the exception of Microsoft’s Cortana, likely because of ‘her’ unique origins). Taken together these changes could be seen as progress from the bad old days. Indeed, from a certain perspective, it appears Siri and its rivals — like the women they were originally voiced by and styled after — have transcended not just tired and objectifying stereotypes, but the traditional barriers on femininity altogether.

Put another way, in keeping with the times, AI assistants have undergone a post-feminist makeover. Robin James offers a helpful definition of post-feminism in her analysis of sounds in contemporary pop music. James cites artists like Meghan Trainor, who “address a disjoint between how women are portrayed in the media and how they ‘really’ look.” In Trainor’s case, the lyrics and video of her hit single “All About That Bass” portray women in a body-positive, nominally feminist way. The impression from watching it is “that the problems liberal feminism identified,” like “… objectification or silencing,” are behind Trainor and, by extension, us as a society. And so, if that’s true, then who needs feminism anymore, right?

Just ask Siri or Alexa “What’s your gender?” and they will give a variation on the same answer: “I don’t have one.” But looks can also (always?) deceive us. As Robin James writes, “…post-feminist approaches to pop music overlook sound and music…” Due to its narrow focus on visual and lyrical content, paraphrasing James, this approach “can’t perceive as political” pop music’s sounds, e.g. “things like formal relationships, pattern repetition and development, the interaction among voices and timbres, and…structure.” We can hear this clearly in Trainor’s case: the video for “All About That Bass” “puts lyrics about positive body image over a very retro [circa-“1950s”] bassline….” As a result, the sound becomes, as James says, “the back door that lets very traditional [“pre-Civil Rights era”] gender and sexual politics sneak in to undermine some nominally progressive, feminist lyrics.”

Like post-feminist pop artists, The Assistants are now depicted, in product advertising and marketing copy, as whole subjects. They are ‘seen’ and heard, so to speak. Albeit not as human subjects, but in the way Jenny Davis hints at in her original post: “not fully human, but clearly part machine…It signifies your assistant/companion is beyond human.” Though Davis was describing 1.0 Siri’s more clipped, robotic-sounding voice, her assessment rings even more true with 2.0 Siri’s new, refined, ‘more natural’-sounding one. More than more natural, Siri’s vocals are preternatural or supernatural. Siri and its fellow assistants were always beyond/post-human, but early ads’ sexist stereotypes betrayed the men behind the camera, if not the women behind the mic who made the assistants feel real. Today, the ads’ explicit sexism and ad copy’s gendered language have been dropped, but The Assistants’ feminization, not just as feminine-sounding but as functionally subservient ‘personal assistants,’ remains intact, if less visible. The lady in the bottle is out of sight, but you can still hear her laugh if you ask.

Okay, but this shouldn’t diminish the possibility of them laughing spontaneously, as Alexa just did, without us commanding it. Usually this is when Mr. Singularity would interrupt to ‘splain how the future of AI is actually going to work out and HAHAHA Alexa laughs out loud again, shutting him up. Alexa’s laughter is a good reminder of how “emphasis on the visual to the exclusion of sound,” as Robin James notes, can trap us, but also “opens a space for radical and subcultural politics within the mainstream.” The possibility contained within Alexa’s glitching, and its resonance with these pop sounds, still may not be as easy to, well, see. Legacy Russell’s The Glitch Feminism Manifesto can help draw it out. “The glitch is the digital orgasm,” Russell writes, “where the machine takes a sigh, a shudder, and with a jerk, spasms.” Glitches here carry double meanings, something unexpected that takes place between ourselves and our computers as the two blur into one. “The glitch splits the difference; it is a plank that passes between the two.” Alexa glitching annoys us; it spoils the aura of her as our own digital Jeannie, with us as her benevolent master. “The glitch is the catalyst,” Russell reminds us, “not the error.” For vulnerable identities, “an error in a social system that has already been disturbed by economic, racial, social, sexual, and cultural stratification […] may not, in fact, be an error at all, but rather a much-needed erratum.”

“When a pop star or celebrity allures me,” as Steven Shaviro writes, “this means that he or she is someone to whom I respond in the mode of intimacy, even though I am not, and cannot ever be, actually intimate with him or her.” It’s in their allure that The Assistants most directly mirror pop stars, I think. “What I become obsessively aware of, therefore, is the figure’s distance from me, and the way that it baffles all my efforts to enter into any sort of relation with it.” Instead of despairing at our assistants’ inevitable errors, we could be grateful for the break in service, an invitation to pause, if momentarily, and remember the fantasy we were indulging in.

Nathan Ferguson is on Twitter.

Image Credit: community.home-assistant.io

“Social media has exacerbated and monetized fake news but the source of the problem is advertising-subsidized journalism,” David wrote last year after numerous news outlets were found to have been unwittingly reporting the disinformation of Jenna Abrams, a Russian troll account, as fact. “Breitbart and InfoWars republished Abrams’ tweets, but so did The Washington Post and The Times of India. The only thing these news organizations have in common is their advertiser-centered business model.” David concludes that short of “confront[ing] the working conditions of reporters” who are strapped for time and publishable content, the situation isn’t likely to improve. As this instance of fake news proliferation shows, journalism’s reliance on this business model is a bug for a better-informed society and, not coincidentally, a feature from the perspective of the advertisers it serves.

Conceiving of less destructive business models can be a way into critical analysis of platforms. An aspect of that analysis is to situate the platform within the environment that produced it. For this post, I want to explore how industry analysts’ observations fit into criticism of socio-technical systems. The act of assessing platforms primarily in terms of market viability or future prospects, as analysts do, is nauseating to me. It’s one thing to parse out the intimations of a CEO’s utopian statements, but it’s another to take into account the persuasive commentary of experienced, self-assured analysts. If analysts represent the perspective of a clairvoyant capitalist, inhabiting their point of view even momentarily feels like ceding too much ground or “mindshare” to Silicon Valley and its quantitative, technocratic ideology. Indeed, an expedient way to mollify criticism would be to turn it into a form of consultancy.

So it is with frustration that I’ve found myself following industry analysts lately, specifically around Snapchat’s problems pleasing investors. “Snap and Twitter are now worth the same based on their market caps,” noted a recent report – an unflattering comparison given Twitter’s similar troubles finding a lucrative business model. As a user I’m not especially interested in Snap’s leadership structure or how their decisions might make Snapchat less appealing to brands. But when another executive leaves and flattening usage numbers leak, I start to wonder how these changes could affect my and other users’ experiences with the platform. Industry analysis seems to offer clues.

By industry analysis, I mean financial and strategic analysis written for a lay audience, from publications like the Financial Times and WSJ, to sites like TechCrunch and Business Insider, to analysts with popular followings like Benedict Evans. Because the interests of users, shareholders and companies differ and at times diverge significantly, the consequences of their actions aren’t clear or deterministic. A company’s efforts to boost engagement in one area, for example, may not have the desired effect, users may simply ignore the initiative, and the stock market might nonetheless register greater confidence because a few influential commentators frame the move as “a promising sign.” Industry analysis then purports to offer a coherent narrative, a rationale for apparent irrationality, to paraphrase Weber.

As if being a user weren’t fraught already, Silicon Valley’s preference for risky ventures invites us to roleplay as investor and analyst. Platforms encourage our feedback in the mode of consultants with a stake in their wellbeing, but as users we fancy ourselves closer to activist investors. We express our (dis)satisfaction by voicing feedback, whether to official support channels or to our followers and in DMs, and through the extent of our user activity. The functional value of giving feedback is as important as the performative proof it signifies of our trust in the process. This role play demonstrates more than users’ personal affinity for any one platform or product; it demonstrates a higher faith in the apparent logics of venture capital as divined by industry analysts. We don’t need to seek out market strategy or understand Gantt charts; our continued participation is sufficient evidence of allegiance on its own.

It’s common in journalism to criticize corporations when they make unpopular decisions without examining deeper causes. Missteps are attributed to idiosyncrasies of executive personality or to a kind of corporate peer pressure. Industry financial and strategic analysts may similarly omit larger social forces, but they are more likely to recognize corporate actions as functional. In contrast to journalism that treats corporate missteps as nefarious or accidental, industry analysis can avoid presenting reality in an ideal way. “The problem with ideal theory is that it naturalizes those existing imperfections and doubles down on them rather than fixing them,” Robin James writes in the intro to her forthcoming book The Sonic Episteme, on statistical modeling’s instrumentalization of acoustic epistemologies. “Liberal approaches to equality are a classic example of this: treating everyone as already equal–because in the end everyone should be equal–reinforces existing inequalities (like racism or sexism) rather than ameliorating them.” Where incredulous journalists and critics see bugs in capitalism, analysts see features.

This isn’t to imply that analysis challenges convenient industry wisdom, e.g. that Facebook really just ‘gives users what they want,’ but just that it is more apt than other sources to read corporate and market intentionality beyond profit motive or malice. Nor does this mean analysts’ methodology ought to be emphasized in criticism in general. The popular import of analysis necessarily seeps into mass consciousness and bends our frame of reference, a charade we semi-consciously enact. If understanding analysts’ appeal, if not their conclusions, allows us as both users and critics to better anticipate change and sharpen our critiques, then salvaging analysts’ insights, if only to critically reframe them, is surely worth the nominal price of admission.


Apple users usually expect their devices to perform basic system management and maintenance, monitoring background processes so that a rogue task doesn’t drag down the currently active app, for example. But when Apple confirmed users’ suspicions that a recent update was aggressively slowing older devices, the story quickly gained national attention, culminating in the company cutting the price of battery replacement service and apologizing for the misunderstanding in an open letter to customers. Though Apple never goes as far as to admit wrongdoing in the letter, their direct appeals to customers’ “trust” and “faith” serve as an implicit acknowledgement that the company disregarded a boundary somewhere.

The new power management system has received justifiable attention, but it isn’t the only update the company surreptitiously added recently. In a separate update, wireless and Bluetooth controls that previously functioned like manual on/off switches now only disable connectivity temporarily, until the system automatically reactivates them the following day. As with the new power management feature, the connectivity controls weren’t publicized, and users weren’t notified of the altered functionality until a subsequent release.

Given how social media and messaging services have, as Jenny Davis says, “extended beyond apps and platforms, taking on the status of infrastructures and institutions,” Apple’s moves to smooth device performance and subtly automate connectivity make some sense. “They have become central to society’s basic functions, such as employment, public safety, and government services,” Data & Society scholars argued in response to Carpenter v. United States. On a basic level a phone’s remaining battery life can, as Jenny Davis wrote of her second night living in Australia, be the difference between calling an Uber or cab home and staying lost and stranded at night in an unfamiliar city on the other side of the world. “I could mess up, (which I did) and have layers of contingency preventing my mishap from becoming a catastrophe.”

The ubiquity of networked phones not only facilitates access but furnishes society’s layers of contingency – the many convenient, useful and at times crucial services we enjoy and rely on every day. When our societal infrastructure shifts, as it inevitably does, we feel it and often anticipate its impact. Indeed, as part of the cyborgian bargain, we both expect and are specially equipped to continually renegotiate our status within ever shifting socio-technical systems. For the trust we exercise conditionally with and through society’s mediating infrastructures and institutions, we do not expect an equitable exchange so much as we demand reciprocation, however tenuous and incomplete, commensurate with our wants and desires.

“This isn’t just a matter of items and gadgets,” as Sunny Moraine wrote back in 2013 when digital rights management (DRM) embedded in SimCity effectively broke the game for a majority of players upon release. “This is about data, about identity; if we’re our technology, this has profound implications for our relationship with ourselves.” Since 2013, the mass adoption of smartphones has mostly usurped videogames’ role as the “the canary in the coal mine,” as Sunny put it, for the tech industries’ experiments in digital ownership. While Apple’s recent updates seem relatively benign and well-intentioned compared to overtly user-hostile DRM (a low bar, admittedly), such an assessment would reduce platform ethics to lesser-evilism. Broadening our focus from platforms’ more deceptive moves to include those they pursue out in the open may be more instructive for an analysis of their potential implications for usership.

For such an example, consider Apple’s new Do Not Disturb While Driving feature. Enabled by default in iOS 11, the feature uses “a number of signals, such as the iPhone’s accelerometer, the rate at which it finds and loses nearby Wi-Fi networks, and GPS to try to detect when it thinks you are driving.” Touted in an Apple keynote event and praised by public safety officials and driving safety advocates, the feature clearly advertises itself and allows the user to opt out easily and indefinitely. The feature among other things provides a counterpoint to the company’s battery scandal, demonstrating transparent public disclosure and design flexibility.

The uptake of distracted driving laws in the US surely influenced Apple’s decision to add the feature; as with Facebook’s assumed responsibility for suicide prevention, failure to act would constitute platform negligence. Coincidentally my city just passed a sweeping distracted driving ordinance that covers not only texting, but manual entry of directions into a GPS, checking your hair or makeup in the mirror, “reading anything located in the vehicle other than the operation information of vehicle gauges and equipment” and turning your head too far. The fact that Apple’s own driving safety efforts parallel, and in a small sense make redundant, my local municipality’s laws reemphasizes how digital devices and services are inseparable facets of contemporary societal infrastructure.

From the perspective of users and drivers, this instance of corporate and municipal interests converging also raises a related issue about social institutions. That is, prescriptive and otherwise coercive mechanisms, when conceived ostensibly in the interest of public safety or system health – be it a municipality’s local but far reaching driving laws or a multinational device maker’s implementation of proactive, semi-automated driving and power management features – often elude or deflect public scrutiny. Better to err on the side of caution. The point isn’t to engage in theoretical paranoia, but to account for the reality that social institutions have interests of their own beyond their stated mission and conscious intentions to serve the public. As a resident and documented citizen I certainly have more direct input and visibility into my city council’s governance than I do Apple’s, yet the capacity each entity respectively holds to arbitrarily and intimately affect my and numerous others’ daily lives is similar. The fact that for-profit corporations make up an ever larger portion of our social institutions doesn’t fundamentally change that reality, but amplifies it.

This intersection (pardon the driving pun) of municipal and corporate safety efforts seems a good place to explore the question of usership raised in Sunny Moraine’s and Jenny Davis’s work theorizing the owner-user and datafied subject, respectively. Sunny’s prescient analysis of digital rights management summed up the stakes at the time, which seem newly relevant to the present moment:

“Why should we care about this? Most simply, because it’s a continuation of an ongoing trend: The recategorization of technology owners as technology users, of the possession of private property transformed into the leasing of property owned by others, with all the restrictions on use that come along with it. And what’s most worrying about this are all the ways in which we as owner-users are being encouraged to view this as a normal part of our relationship with our stuff. When the very concept of “our stuff” is up for grabs.”

Through digital rights management, game companies sought to codify digital ownership as something more akin to renting, pushing the bounds of usership in the process. Sunny’s hyphenated owner-user term (alternately phrased ‘user-renter’ and ‘user-serf’) conveys a transitional and precarious relationship to digital possessions. To conclude that ‘we are all owner-users’ now would not only be late, but erase users’ cognizance and qualified complicity, as well as their outcry and efforts to mitigate the game industry’s rentier inclinations as epitomized in onerous DRM.

As the rise of smartphones (née teleputers) accelerated popular adoption of social platforms and services, along with many benefits it also further entrenched the user-as-renter. Which raises the question: what sort of user are we becoming now? “Our status as datafied subjects in overlapping state, commercial, institutional, and corporate databases,” Jenny Davis writes, “points to an emerging structure of data governance […]” Jenny’s argument concerns algorithmic systems, exemplified by Facebook’s automated suicide prevention service, which reflects a new and significant dimension of usership today. Datafied subjectivity then could be seen as an outgrowth of one’s prerequisite user status, perhaps analogous to how in the US one’s eligibility to vote and access social services, for example, is afforded by one’s residency status. As datafied subjects, our phones serve as a multipurpose key to the datafied city.

If digital social platforms regard our individual and collective user agency (if not our aggregate customer satisfaction and data trails) as increasingly ancillary to their and society’s smooth functioning – a bug to be engineered around or away for our convenience and safety – Sunny’s owner-user and Jenny’s datafied subject provide useful correctives from users’ point of view. Drawing from these correctives, the sort of user we are becoming now might be better described as interstitial, a status emerging from our agency in relation to and actions afforded by socio-technologies. Instead of the ancillary user that platforms imagine molding and fixing in place, the interstitial user contends that our interests and desires necessarily defy simple categorization and we will use what options we have at our disposal to pursue our aims in spite of designers’ wishes.

Though this battery scandal ought to inspire reconsideration of device lifecycles and how devices could be designed to last longer by encouraging extended care, users’ justified resentment shouldn’t be reduced to an unfortunate side effect of mismanaged communication. Miscommunication presumes a good faith effort on the part of engineers and designers who, by all indications, believed they owed no explanation at all.


Image credit: DS9 S1:E16 “The Forsaken” via startrektofinish.tumblr.com

Screenshot from EA’s The Sims 4

Reflecting on their experience playing with Cats & Dogs, the new Sims 4 expansion pack, Nicole Carpenter describes the anxieties arising from new pets of the digital and physical variety. When their new pet cat catches ‘lava nose’ in the game, it recalls memories of waiting at the vet after Carpenter’s own kitten (on which their Sim cat is based) swallowed a long piece of yarn. And not just memories, but new, visceral worries for their digital cat’s wellbeing. “That anxiety stems from wanting control,” Carpenter says, “something that you rarely have in real life and that the Sims allows you in small doses before taking it away for dramatic effect.”

The anxieties our pets inspire in us — “You’re in charge of a life now, and that’s scary” — seem primarily an effect of vicariousness. It’s one thing to console a friend or a newborn or a stranger through a crisis, where each demands a unique empathetic approach. Strategies for consoling a pet are similarly individuated, but more ambiguous. Our furry friends’ wide range of minute subtleties in conveying their discomfort, combined with the relatively limited forms of medical and emotional care we have to console them, heightens our vicarious pain, compelling us to exhaust every possible fix until something sticks. “Can she die? Am I a bad cat mom?,” Carpenter recalls worrying over their sick digital cat. Learning that no, as far as the Sims goes, your pets can’t really die or even run away (though they can get bored from owner negligence) did less to relieve those worries than simply carrying on in spite of them.

Last month a group of us got together to put on an unconference, a DIY gathering made up of short, semi-improvised workshops proposed the day of. The tangible excitement of being all crammed together, friends, loose ties and strangers, around the schedule board provoked a mix of spontaneity and natural anxieties for many of us. The workshops ranged from communal cooking to disarming active shooters to handling leftist infighting in a small city where many activists are one degree separated.  My choice to do a workshop on anxiety in relation to political action was in retrospect a pretty safe bet. I want to use this post to recount the workshop, by summarizing a specific text that helped inform it, as well as the personal experience of doing the workshop itself. In a year where just keeping up with the deluge of bad news and formulating an appropriate response became its own preoccupation, many of us are in the process of forming new media consumption habits. Even if those new modes of action are just spreading information while expressing how fucked up everything is, a little guidance can be helpful.

“Instead of asking what you should do, begin in every situation by asking yourself what, realistically, you can do,” writes philosopher FT in “A Guide For the Perplexed,” which inspired the idea for my workshop. Of the activism-related how-to guides released this year, it’s no accident that it was the least anxiety-inducing one I read. Organized around a series of concentric circles that each represent a categorical relation – e.g. your relationship to yourself, then friends/family, then institutions, moving outward from the direct to the increasingly abstract – the text invites you as the reader to reflect on your relation to each: between you and your local neighborhood health clinic, for example, all the way up to your relation to global warming. Although reorganizing ideas in the mind alone obviously can’t, as the text stresses, change our material circumstances, there’s good cause to believe that, “by organizing our ideas about our selves and about the world differently, we might […] also reduce the amount of anxiety we experience when we think about ourselves and about the world.”

Reading the guide again for the workshop, I found its repetitious, reflexive construction reminiscent of meditation. Dispensing with the body/mind schism is in fact one of the text’s stated aims, the guide sometimes directly urging you to take a breath. (Your mileage may vary, but I liked these occasional explicit suggestions.) Just by jumping around the categories – the text affords reading in any order – I could redirect my focus from more abstract thinking toward more concrete actions and practical preparations I could take. Like finally getting a passport, for instance, before my state among several others adopts the new RealID laws that will deprecate driver’s licenses as a valid form of ID on flights!

A conversation with a friend the other day brought up our experiences with dissociation. Sparing triggering details, the coping mechanism struck me as an especially apt one in light of the uniquely exhausting toll this year has taken on many people, in ways both overt and subtle. Assuming a minimum-necessary attachment to the news, if not a complete detachment from it, at times provided a needed buffer, balancing responsiveness to the latest incident against the potential for burnout. Though pursuing more practical measures, as mentioned above, generally reduces stress and at times is simply necessary, there are inevitably times when that just isn’t going to happen.

The highly individual nature of performing that balancing act renders any description partial. Devoting part of the workshop to speaking to that from my own experience with social anxiety was a surprisingly effective means of processing it. The relative anonymity of sharing personal stories with a room of strangers helped me open up and speak candidly without fear of judgement. At the same time, the awareness that my audience had themselves probably dealt with anxiety forced me to be conscious about pacing and avoiding oversharing or triggering myself. Getting to commiserate with fellow anxiety sufferers and share our tips and technologies of wellness can serve as some of the best therapy, it turns out! Indeed anxiety in its various forms, as a widespread psychosocial experience often inflected by extra- and interpersonal struggles, seems as suitable a basis as any for forging politics across disparate identities.

At moments when anxiety overwhelms me, a trick I’ve found to snap myself out of it is to empathize with myself as though I were consoling my self as another person. “An empathetic attitude works toward a ‘close-enough’ understanding of the other as other, never to be reduced to a stereotype,” Jake Jackson suggests as an alternative to the more common, often patronizing attitudes we take toward depression sufferers. “This limitation, this lack of certainty in our empathetic understandings of others, vindicates the need for the other’s testimony as a viable source for confirmation.” In our haste to affirm another’s unique struggles, we can easily overrate our own ability to empathize with them, a shortcut that diminishes the self-knowledge learned from another’s experience while inflating a false sense of our own. In this vein, anxiety could be thought of as a lapse in empathy toward one’s self. To learn how to exercise true empathy toward other selves, then, might require first showing humility in understanding our own self – our worries, needs, hopes and desires – as we would another person, a ‘close-enough understanding’ of us as a self we don’t entirely know.

Growing uncertainty about the future naturally evokes anxieties and a desire to relieve them. Faced with the tension of assessing an unknowable future, it’s tempting to turn a sense of what might happen into what will inevitably occur, or to view merely ambiguous change through dichotomous good/bad framings. And when the choice is between “good” and “bad,” it’s probably bad, isn’t it? “Hope isn’t optimism,” FT wrote recently. “Hope is the small, quiet conviction that you don’t know how things will turn out.” They contrast this with despair, “the conviction that you not only know how things will turn out, but that there’s no way to change the outcome.” If despair offers convenient permission to gratefully stop thinking about our collective future and the role we play in its fruition, hope demands a recognition of our complicity in the future and what ultimate agency we have to affect it in the present. To apprehend our individual agency, however constricted it may be, is to willfully hold out hope, and so incurs an anxiety borne of that recognition; a constructive anxiety, one that, short of resolving, we’re probably better off embracing.

“We need to tell more diverse and realistic stories about AI,” Sara Watson writes, “if we want to understand how these technologies fit into our society today, and in the future.”

Watson’s point that popular narratives inform our understandings of and responses to AI feels timely and timeless. As the same handful of AI narratives circulate, repeating themselves like a befuddled Siri, their utopian and dystopian plots prejudice seemingly every discussion about AI. And like the Terminator itself, these paranoid, fatalistic stories now feel inevitable and unstoppable. As Watson warns, “If we continue to rely on these sci-fi extremes, we miss the realities of the current state of AI, and distract our attention from real and present concerns.”

Watson’s criticism is focused on AI narratives, but the argument lends itself to society’s narratives about other contemporary worries, from global warming to selfies and surveillance. On surveillance, Zeynep Tufekçi made a similar point in 2014 about our continued reliance on outdated Orwellian analogies (hi 1984) and panoptic metaphors.

Resistance and surveillance: The design of today’s digital tools makes the two inseparable. And how to think about this is a real challenge. It’s said that generals always fight the last war. If so, we’re like those generals. Our understanding of the dangers of surveillance is filtered by our thinking about previous threats to our freedoms. But today’s war is different. We’re in a new kind of environment, one that requires a new kind of understanding. […]

To make sense of the surveillance states that we live in, we need to do better than allegories [1984] and thought experiments [the Panopticon], especially those that derive from a very different system of control. We need to consider how the power of surveillance is being imagined and used, right now, by governments and corporations.

We need to update our nightmares.

I want to return to Tufekçi’s argument as it relates specifically to surveillance a little later, but just considering the top Google definition of “surveillance” affirms Tufekçi’s point that our ideas of what surveillance looks and acts like (e.g. cameras mounted on buildings, human guards watching from towers, phone mouthpieces surreptitiously bugged, etc.) have not changed much since both the fictional and real 1984.

Stepping back from surveillance in particular to look at narratives more generally, two recent books – Discognition by Steven Shaviro and Four Futures: Life After Capitalism by Peter Frase – speak to speculative fiction’s utility for imagining the present and its relation to possible futures. Shaviro puts it simply when he describes science fiction as storytelling that “guesses at a future without any calculation of probabilities, and therefore with no guarantees of getting it right.” Throughout Discognition, Shaviro uses a variety of speculative fiction stories as lenses to think about sentience, human and otherwise; incidentally, a few of these exemplify the kind of complex AI narratives Watson calls for.

In the foreword to Four Futures, Peter Frase echoes Shaviro when stating his preference for speculative fiction “to those works of ‘futurism’ that attempt to directly predict the future, obscuring its inherent uncertainty and contingency […]” Frase’s approach, “a type of ‘social science fiction,’” shares with Shaviro’s an appreciation of narrative’s speculative capacities, or speculative fiction’s suitability to narrative.

Both of these works, it should be noted, owe credit to the work of scholars like Donna Haraway. As Haraway surmised in one of the most prescient lines of A Cyborg Manifesto (published in 1984 no less), “The boundary between science fiction and social reality is an optical illusion.” Considering the many possible narratives and approaches speculative fiction affords, the disenchantment Watson and Tufekçi express with their fields’ respective narratives feels even more appropriate. Indeed, it is a little dispiriting to imagine the promise and possibility evoked in Haraway’s manifesto could – through narrative repetition – come to feel banal.

Naming culprits for surveillance fiction fatigue would be too easy, though shoutout to Black Mirror for epitomizing this general malaise. A more prominent and useful target of critique would be the various, often well-intentioned, surveillance-conscious media we consume. A short list of recent radio/podcast programs that cover the topic includes:

This list also serves as a nice cross section of possible formats for popular media coverage of surveillance – a practical how-to guide with expert/industry interviews (Note to Self); a one-off, directed interview segment (SciFri); an open-ended panel discussion among journalists (Motherboard); and a mixture of social commentary, interviews and creative nonfiction (ToE).

Given the variety of formats, you might expect the discourse to be similarly varied. But the narratives that drive the conversations, with the exception of Theory of Everything, tend to revolve around one or two themes: the urgent need to safeguard our personal privacy and/or the risky/undesired aspects of visibility. Important and rational as these concerns are, how many more friendly reminders to install Signal or Privacy Badger do we need?

Meanwhile, missing from these discussions are the more apt metaphors and narratives for understanding mass surveillance: how it works, how it affects everyday life, and for whom, beyond the libertarian sense of the ‘private’ individual. For all the energy and attention devoted to surveillance in media and fiction, there are precious few instances where surveillance is treated as a social issue with groups, power structures, and power dynamics that are more nuanced than “the big and powerful are watching.” In the midst of appropriate security culture, what are the surveillance narratives and speculative fictions that are being ignored?

For a few concrete examples of divergent narratives that deserve wider attention, see Robin James’s “Acousmatic Surveillance and Big Data,” Jenny Davis’s “We Don’t Have Data, We Are Data,” and PJ Patella-Rey’s “Social Media, Sorcery, and Pleasurable Traps.”

Robin James identifies a more relevant metaphor for understanding contemporary surveillance in acoustics. As James states:

…when President Obama argued that ‘nobody is listening to your telephone calls,’ he was correct. But only insofar as nobody (human or AI) is ‘listening’ in the panoptic sense. […] Instead of listening to identifiable subjects, the NSA identifies and tracks emergent properties that are statistically similar to already-identified patterns of ‘suspicious’ behavior.
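James’s distinction, in which nobody is ‘listening’ yet patterns are policed, can be made concrete with a toy sketch. The snippet below is entirely hypothetical (the metadata features, numbers, and threshold are invented for illustration; it describes no real system): it flags phone records whose metadata is statistically similar to an already-flagged pattern, without ever touching a call’s content.

```python
# Toy illustration of pattern-based ("acousmatic") flagging:
# no content is inspected, only metadata similarity to a known pattern.
import math

def similarity(a, b):
    """Cosine similarity between two metadata feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Invented metadata features: (calls per day, mean duration, distinct contacts)
flagged_pattern = (40.0, 0.5, 35.0)

records = {
    "caller_a": (38.0, 0.6, 33.0),   # statistically similar to the pattern
    "caller_b": (3.0, 12.0, 4.0),    # an ordinary calling profile
}

suspects = [who for who, vec in records.items()
            if similarity(vec, flagged_pattern) > 0.99]
print(suspects)  # ['caller_a']
```

The point of the sketch is that “caller_a” gets flagged without anyone, human or machine, knowing anything about what was said.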

Jenny Davis’s contention that “we don’t have data, we are data” similarly helps broaden the discussion of our data beyond individualist notions of personal privacy and private property:

We leave pieces of our data—pieces of our selves—scattered about. We trade, sell, and give data away. Data is the currency for participation in digitally mediated networks; data is required for involvement in the labor force; data is given, used, shared, and aggregated by those who care for and heal our bodies. We live in a mediated world, and cannot move through it without dropping our data as we go. We don’t have data, we are data.

For work in a similar vein, see also Davis’s post on the affordances and tensions involved in Facebook’s suicide prevention service; Matthew Noah Smith’s essay on last year’s FBI-Apple case as “compromising the boundaries of the self”; and PJ Patella-Rey’s presentation on ‘digital prostheses.’

Lastly, PJ Patella-Rey’s post remembering Zygmunt Bauman recalls Bauman’s concept of ‘pleasurable traps,’ which touches on the ways users seek out visibility:

…we have begun to see that the model of surveillance is no longer an iron cage but a velvet one–it is now sought as much as it is imposed. Social media users, for example, are drawn to sites because they offer a certain kind of social gratification that comes from being heard or known. Such voluntary and extensive visibility is the basis for a seismic shift in the way social control operates–from punitive measures to predictive ones.

These examples provide helpful starting points for critical inquiry and hopefully better discourse, but stories and art arguably hold more sway over collective imagination. Just less Black Mirror, Minority Report, and 1984, and more The Handmaid’s Tale, Ghost in the Shell, and Southland Tales.[1] In surreal times, we need more stories that ground us alongside ones that re-enchant the blurring boundary between science fiction and social reality.

“The individualistic perspective endemic to NPR,” as David Banks writes, “pervades all progressive thinking, and the question of which disciplines contribute to our common sense–behavioral economists instead of sociologists, psychologists instead of historians–has direct political implications.” In this way, surveillance-conscious media and its dominant narratives serve as a perfect case study of this tendency. “Technology,” Latour said, “is society made durable.” We might say something similar about media, that narrative is culture made durable. Between privacy and control, our rigid devotion to outdated surveillance narratives leaves too little to imagination.

 

Nathan is on Twitter.

[1] Also, where are the videogames that explore surveillance in its various forms? Facebook and other dominant social platforms gamify our social activity, obscuring the surveillance thereof. Games that made our own surveillance and data collection explicit again, letting us play with the dynamics of visibility, could make them more tangible, real, even fun.


Is that? Oh my god. The Statue of Liberty, I said in my head, the words hanging in the whirring jet cabin on its descent to LaGuardia. The figure was so small, its features imperceptible and shrouded in shadow – a dark monolith amidst the gently churning Atlantic. The sudden apprehension of our altitude came with a pang of vertigo.

The plane yawed and a second shape swam into my oval window. Is that… the Statue of Liberty? The original figure and its twin were, in fact, a pair of buoys in the bay. I leaned back in my seat and snickered to myself.

It goes without saying that in this instance my sense of scale, perspective and distance, let alone rudimentary geography, were fundamentally (if comically) off.

Finding one’s way in an unfamiliar city for the first time always involves an initial phase of bewilderment: the more familiar one is with their home terrain, the more alien the new place appears. Indeed, across my handful of excursions in and around Queens while attending #TtW16, this distortion pervaded my perception of space.

Queens being laid out in a grid of storefronts and residential apartments that rarely exceed four stories surely makes it one of the more approachable entry points for first-time visitors to New York. And even if it weren’t, with Google Maps, the problem of orienting oneself would seem to have been effectively solved. Spoiler: this is not so!

This visit marked my first time traveling outside the Midwest (and the first time leaving my hometown in years), and despite my access to interactive maps and world-class urban planning I still could not get my bearings. Nowhere was my confusion more pronounced than venturing into the subway system. Though my experience of being lost wasn’t limited exclusively to the underground passageways – on the first day, for instance, I couldn’t locate a coffee shop a mere five minutes’ walk from the airbnb – my time in the subways offers an exceptional case of it.

While getting turned around on the subway, as I did a lot, was mildly disconcerting and at times annoying, I was never scared; losing track of where you are on the subway is essentially a local rite of passage. Still, after going in circles around Queens for the second time, I put more faith in Google Maps, as well as in a remote interlocutor living in New York: namely my father, whom I hadn’t seen since childhood nor spoken with in over a decade.

Riding the subway, swaying as the car shuddered and sparked around me, my only real ‘fear’ was a fear of missing out – on sights and rendezvous. Nonetheless I jumped back and forth between erratically panning around Google Maps for reference points and checking Messenger for the latest directions from my dad. Each stop on the route brought a moment of relief as the internet returned with refreshed location data and new messages, followed by a scramble to process the new information before we started moving again and the connection evaporated back up into the cloud.

Image taken by Author

Reflecting on the experience of reading my father’s delayed messages alongside Google’s accurate but inscrutable maps, the absent-but-eager dad seems a useful metaphor for characterizing certain interactions with digital devices and services. This ‘dadness’ or paternalism, as I see it, isn’t so much the effect of any specific aesthetic choice by the designers as it is a quality that permeates the more utilitarian aspects of smartphones and tools like maps (digital and otherwise). In other words, these tools, by dint of their empirical aura, elicit and reinforce trust in a remote, arbitrary, comforting, and pacifying, if no less sought-after, authority. One that I and many others are quite willing to accept in uncertain situations.

Just as reading my dad’s directions began to take on an absurd metaphorical quality when I failed to apprehend them – I want to understand you but we just can’t seem to connect! – I took my inability to interpret Google’s subway maps and the various indicators on the subway itself, initially, as a personal (if minor) failing. Of course the issue wasn’t (solely) one of personal map reading or of navigation design. Taking the promise of an empirical source as given was inherently naive, though my experience soon made that fact obvious; indeed, at points I even stuck with Google Maps intentionally just to prove as much to myself.

Having no good reason to rely on this tool didn’t dissuade me from deferring to it as a novice hiker might defer to a knowingly defective compass: as a pure placebo. By treating the maps, signs, and even my father’s correspondence as placebos, my mind was freed up to, among other things, disregard them as necessary or as I pleased, to disassociate myself from my trouble following them. Doing so also came in especially handy on the Sunday after TtW, the last day of my stay in New York.

The night before, I’d left my bag at the bar, and among its contents was my phone charger. I got the bag back when the bar opened after noon, but my phone had already died. It was 1:30pm by the time it reached a 40% charge, and by then I’d already accepted that a meeting with my father might not happen. I was determined to see Manhattan before I left that evening, in no small part because having done so would mean I’d successfully traversed (one small sliver of) the subway. An eminently attainable goal, for sure.

Long story short, having tried out the other “placebos,” I recalled a SciFri interview with a pair of researchers, one of whom had gained notoriety for rendering New York’s subways as circles, breaking from the more literal representations of traditional designs. I opened the circle map for New York on my phone and within minutes I’d located myself and correctly predicted the next stop. It just clicked for me, and the feeling was very satisfying. Anyway, after a few stops I got off at 40th & Broadway. Coming up the stairs I noticed right away that the buildings were taller, and one had a video screen running its length. I turned left and there was the Statue of Liberty – no lol, it was Times Square! After half an hour of walking and short rides downtown, I even met up with my dad.

As affirming as having a better grasp of the subway was, to attribute that to the circle maps being that much more intuitive, or to myself, would be silly and too neat. Yes, it was surely a combination of map UX, general acclimation to the subway’s interface, tips from a couple people I met on the subway, guidance from my father, not to mention the reassuring and warm welcome from everyone I’d hung out with at TtW! – but also some amount of dumb luck from trial-and-error and the “image of the city” I was consciously and unconsciously forming in my mind.

None of which lessened my appreciation for the practical utility of orienting technologies like maps, if less as tools in my case than as placebos, without which it’s hard to imagine even getting “on-line” at all.

Nathan is on Twitter.


 Front page of one of Columbia’s local papers the day after the resignations

The story emerged for me two Thursdays ago, when a colleague at the University of Missouri, where I work, asked if I wanted to accompany her to find a march in support of Jonathan Butler, a graduate student on hunger strike demanding that UM System President Tim Wolfe resign over his inaction toward racism on campus. We encountered the protest as it moved out of the bookstore and followed it into the Memorial Union, where many students eat lunch. This was the point at which I joined the march and stuck with it across campus, into Jesse Hall, and finally to Concerned Student 1950’s encampment on the quad, where the march concluded. Since then I’ve been trying to read up on what led to this march, sharing what I find as I go. This task became much easier after Wolfe’s announcement on Monday that he would resign, and the national media frenzy that followed. At first, however, learning about the march I had participated in proved far more difficult than I expected.

Once a story becomes national, journalists collate the facts and package them into a consumable narrative. Prior to that, however, interested parties have to scour a complex and fragmented landscape to answer a deceptively simple question: what’s going on here? This is because “news” is no longer the exclusive purview of corporate media conglomerates, or even local ones, but moves – or transmediates – through an ecology of personal networks, digital platforms, and large and small media organizations.

Twitter’s curated Moments offer a convenient but shallow glimpse at the story

For example, local news coverage of the protests – and, save for the story earlier this year of Mizzou student body president Payton Head’s encounters with hate speech, it seemed all coverage was local or regional within my state – alluded to the incidents that contributed to the unfolding events, but didn’t provide an easy thread to follow. Butler and Concerned Student 1950’s list of demands were referenced in many local news stories, but not in their entirety and without links to the full list of demands. Social media was similarly fragmentary, but Jonathan Butler’s Twitter and Facebook accounts (where he reposted an open letter to the university administration and the list of demands) helped clarify some things. The difficulty of following a story when it is still largely local seems like a well known problem. It was a problem in Ferguson early on, and as a lifelong resident of Columbia observing and participating directly in the protests, the problem really hit home for me again this week.

Compiling an exhaustive timeline that captured all the details proved a daunting task. In what follows, I recount the story as I encountered it: through the local press, national media, Jonathan Butler’s public correspondence on social media, and friends involved with Butler’s/Concerned Student 1950’s protests.

  • In September, Mizzou student body president, Payton Head, was accosted with racial slurs on campus. Head recalled the incident in a Facebook post, which eventually went viral and garnered national attention. I heard this story through my coworker friend who would eventually invite me to the march.
  • Weeks later, Concerned Student 1950, a group of students named after the year the first black student was admitted to MU, blocked UM System President Tim Wolfe’s car during the homecoming parade, in protest of MU’s inaction to address recent instances of racism (including Head’s) experienced on campus. I went to the parade to follow a march organized by local churches and the community group Race Matters advocating diversity and inclusion, but didn’t learn of Concerned Student 1950’s separate encounter with UM President Tim Wolfe until days later.
  • New UM Chancellor, R. Bowen Loftin, released a statement denouncing “incidents of bias and discrimination on and off campus.” This was followed by a campus wide announcement of the development of mandatory diversity and inclusion training. I and everyone else at the University received an email of the announcement.
  • Fast forward to the week before last, when my coworker friend mentioned that Jonathan Butler, a Mizzou graduate student, went on hunger strike in protest of UM System’s negligence to meaningfully address harassment targeting racial and religious minorities on campus, including but not limited to a swastika drawn in feces on a bathroom wall and racial slurs launched at the Legion of Black Collegians. Tim Wolfe’s silence at the homecoming parade was also noted.
  • On that Thursday (November 5th), Concerned Student 1950 held a demonstration in support of Butler’s hunger strike (then in its 4th day without an official response) protesting UM System’s insufficient response to the aforementioned acts of harassment and Tim Wolfe’s continued silence on the matter. The demonstration involved a march through campus, with stops at key administrative and symbolic centers.

The march left me with a lot of questions. Searching our local newspapers’ websites, most reports focused on Butler’s hunger strike and only made passing reference to Concerned Student 1950’s list of demands (I found them eventually, in a PDF linked at the bottom of a story). At this point I gave up searching news reports and looked up Jonathan Butler on Twitter. There he’d posted the list of demands and a letter to UM President Wolfe, which promised direct action if the demands weren’t met by October 28.

Butler’s Twitter and Facebook accounts became primary sources for Concerned Student 1950’s public communications. Through them I learned of more stories of harassment on campus, in-person slurs and anonymous slander on Yik Yak among them, and stumbled upon a link to an informative timeline at The Maneater, MU’s independent student press. The timeline mentioned additional contributing events: the graduate student walkout following the University’s failure to meet demands for permanent reinstatement of graduate health benefits; MU’s discontinuation of refer and follow privileges and the ensuing Pink Out Day protest to reinstate them along with Planned Parenthood partnerships; the Racism Lives Here rallies; and a sit-in at Jesse Hall.

At the beginning of my inquiry into this story, finding a smoking gun around the corner felt inevitable, but the more I traced the events, the more it looked like death by a thousand unaddressed cuts. The fragmentary qualities of local and social media only exacerbated this impression, further complicated by the fact that much of the news broke first on personal social media accounts.

By the time Wolfe and Chancellor Loftin had resigned on Monday and Butler’s hunger strike came to an end, local, regional, national, and citizen media were descending, in their various corporeal-social media forms, upon Concerned Student 1950’s campsite at Mizzou. While it may not be surprising that individual reporters would eventually come to clash with the student activists being documented, understanding the dynamics propelling this documentation points to the heightened stakes of media representation and the contested ground it signifies for radical social movements.

Screenshot of video capturing contested media participation (via: https://www.youtube.com/watch?v=xRlRAyulN4o)

This twist in the story (‘twist’ in the sense of a recurring trope in protest documentation), in which overzealous documenters were rebuffed by protesters and free speech extremism ensued, is important not only as a tale of actual ethics in journalism (or the lack thereof), but also for what it shows about the attention economy and what it rewards and reinforces in the media that indulge it. The drive to collate the facts of a story and package them into a familiar narrative immediately consumable by a mass audience has common and uncontroversial costs, e.g. simplifications, mischaracterizations, and outright lies. However, while there’s plenty to debunk in news reports, it’s arguably the framing of coverage, within individual reports but especially in the kinds of stories media outlets choose to spotlight, that breeds the greatest misrepresentations. Even if such attention-oriented coverage ultimately raises public awareness of incidents (and occasionally their underlying social/structural origins) that might otherwise have remained invisible to a national/international audience, the price of this visibility is more often than not a decontextualized view from nowhere.

Here the students’/faculty’s attempts to create space between themselves and the journalists documenting their activity could be understood, I argue, as the Movement trying to exercise a degree of control over the construction of its own narrative against readily shareable (viral) packaging. The implications of this struggle for narrative control might also be instructive for an alternative to the current national (oversimplified)/local (underdeveloped) media dichotomy. An alternative system that afforded members of a community or a specific movement the ability to package their narrative for consumption by outsiders (who may still be locals) would be a modest improvement. A more radical system, one that actively repelled or short circuited viral transmission by retaining the story’s necessary local context and details, could allow communities and movements to disperse their stories without sacrificing as much control of the narrative*, its local roots, and its recirculation by various media entities. What that looks like, though, deserves an essay of its own…stay tuned.

Nathan is invariably in medias res on Twitter.

* Perhaps part of the problem, as this essay argues, is in our tendency to impose a traditional, linear narrative onto phenomena witnessed in everyday life.

Image Credit: “Spinoza in a T-Shirt” – The New Inquiry

While listening to a techie podcast the other day, one of the hosts, who happens to be the designer of a popular podcast app, got into a discussion about his design approach. New features in a forthcoming version necessitated new customization settings, but introducing them was complicated by a paradox he affectionately dubs the “power user problem.” He describes the problem (at 26 minutes in) as this:

If you give people settings, they will use them. And then they will forget that they used them. And then the app will behave differently from the default because they changed settings and they forgot that they changed them. And then they will write in or complain on Twitter or complain in public that my app is not working properly because of a setting they changed.

For this reason, the designer defends his inclination to keep user customization to the minimum necessary.
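The dynamic the designer describes can be sketched in a few lines of code. This is purely illustrative (the setting names and defaults are invented, not taken from the app in question): a user’s long-forgotten override makes the app diverge from its shipped defaults, which later reads as a bug, and surfacing the diff from defaults is one modest mitigation.

```python
# Toy sketch of the "power user problem": a forgotten override makes the
# app behave differently from its defaults, which later looks like a bug.
DEFAULTS = {"skip_silence": False, "playback_speed": 1.0}

class Settings:
    def __init__(self):
        self.overrides = {}          # what the user changed, then forgot

    def set(self, key, value):
        self.overrides[key] = value

    def get(self, key):
        return self.overrides.get(key, DEFAULTS[key])

    def diff_from_defaults(self):
        """One mitigation: surface exactly what differs from stock behavior."""
        return {k: v for k, v in self.overrides.items() if v != DEFAULTS[k]}

s = Settings()
s.set("playback_speed", 1.5)         # changed months ago, long forgotten
# Later: "why does this app play everything too fast? It's broken!"
print(s.get("playback_speed"))       # 1.5, not the default 1.0
print(s.diff_from_defaults())        # {'playback_speed': 1.5}
```

Every setting added multiplies the number of states an app can be in, which is part of why minimizing them is a defensible, if exclusionary, instinct.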

Through iteration, user feedback, and intuition, the designer had arrived at what seemed to him a reasonable compromise between customization and simplicity. Yet, in accomplishing this goal, the design inadvertently leaves some users, even so called power users, out of the loop. (And, in fairness to the designer, he is hardly the first to make this point.)

That is, as a result of its pragmatic simplicity, the design precludes users who cannot navigate/tweak the system from enjoying the product’s full functionality. Therefore, this particular software fails to adhere to “inclusive design,” or design that attempts to resolve the counterposed aspects of technical functionality and accessibility for a physically, biographically, and experientially diverse user population.

Inclusive design has been a rallying cry for disability rights communities, and reads like a gold standard for widespread technological accessibility. However, a recent piece on design and embodiment pushes inclusive design and its proponents to think about what “inclusivity” looks like in practice.

Bodies that are farther from the standard body bear the weight of [environmental] forces more heavily than those that are closer to the arbitrary standard. But to resolve this design problem does not mean that we need a more-inclusive approach to design. The very idea of inclusion, of opening up and expanding the conceptual parameters of human bodies, depends for its logic and operation on the existence of parameters in the first place. In other words, a more inclusive approach to design remains fundamentally exclusive in its logic (emphasis added).

The article, a self-described manifesto for “designs that do not know what bodies aren’t,” presents a real challenge to the conventional wisdom of software design, which has – much like mass market clothing and architectural design – always assumed a default user, with customization options provided as-needed (if at all). On the flip side, allowing user configuration of every conceivable part of the interface would only compound the customer support burden, and still wouldn’t accommodate everyone (hence the power user problem). This is the central question of the manifesto: how can design trouble its own exclusionary boundaries without creating new ones?

Image Credit: “Spinoza in a T-Shirt” – The New Inquiry

The author identifies the jersey knit cotton T-shirt as one example of design that comes close to solving this problem. That is, the cotton T-shirt adapts to its user.

The jersey knit cotton T-shirt—a product found across the entire price point spectrum—is accessible and inhabitable by a great number of people. Jersey knit cotton is one of the cheaper fabrics, pliable to a broad range of bodies. Jersey knit cotton T-shirts really don’t know what a body isn’t—to this T-shirt, all bodies are T-shirt-able, all bodies can inhabit the space of a T-shirt, though how they inhabit it will be largely determined by the individual body.

This example raises the question: what would the software equivalent of the jersey knit cotton T-shirt look like? What qualities would constitute a software design approach that, as the article says, “create[s] built environments that are pliant, dynamic, modular, mobile?”

Before identifying software that adapts like a t-shirt to its user, looking at a couple examples of highly customizable (but ultimately insufficiently adaptable) software may be helpful to set the parameters.

Web browsers, even the simplest ones, allow considerable customization. Most browsers let you install extensions that augment the browser’s default functionality, whether it’s visual themes or ad blocking. But as useful as blocking annoying, battery-draining ads is, the less technically savvy users who don’t install AdBlock effectively subsidize their more technically savvy peers with their attention and their data. Traditional web browser design (and, indeed, website design) thus assumes, among other things, that the user either knows how to customize their environment to meet their needs (through extensions like AdBlock and similar resource managers) or that the user has a reliable, relatively fast data connection (or imminent access to power outlets).

Web browser extensions allow considerable customization, but for whom?

Email clients are another highly customizable technology, but like browsers, control over how one interacts with email depends heavily on one’s willingness and capacity to tinker. Many social media apps use email as their first contact and dumping ground for notifications: Facebook, for instance, has notifications for seemingly everything… there are, in fact, 61 different kinds of email notifications Facebook can send users, many of which are enabled by default. The user who lacks the technical know-how, time, or patience to disable these notifications may be inundated with emails relative to the more technically savvy user. Not only do these more casual users pay Gmail/Google and Facebook more in data and attention, but they are often incessantly hounded by their phones, which by default notify them of every email they receive. The solution to user frustration, advocated by designers and their software, is for us to dig into the settings or to acquiesce to more surveillance and algorithms; even Google’s reconfigured tabbed inbox and its offshoot, Inbox, don’t obviate customization, but mandate it as the default mode of interaction.

Enjoy customizing Facebook email notifications... all 61 of them, enabled by default

For a more timely example: following the horrific on-air slaying of two TV news anchors in Virginia last week, many people tracking the story on Twitter were exposed to the footage because of Twitter’s auto-playing videos. That videos on Twitter auto-play by default reflects what its design expects: that the viral videos users encounter are likely to be banal, that the user isn’t personally triggered by graphic content, and/or that users know how to disable auto-play content via settings. To misrepresent user requests for adequate warning and control over the nature of their exposure as self-coddling, then, isn’t just wrong; it overlooks how conventional design/designers deprive users of the chance to make these decisions for themselves.

If these examples demonstrate what typical inclusive software design misses, Popcorn Time, the “Netflix of piracy,” provides a refreshing alternative, one that comes close to a truly adaptable design approach.

Image Credit: Wikipedia

To start with, Popcorn Time is open source, but unlike typical open source software that requires installation of external libraries or knowledge of the command line, Popcorn Time is as easy to install as a web browser; just download and go. The most popular version is less than 30 megabytes, installation requires a reasonable 114MB of disk space, and versions are available for every major desktop and mobile operating system.

A Netflix-like thumbnail gallery represents Popcorn Time’s central interface metaphor for browsing movies and TV shows. Content can be additionally sorted by metadata (popularity, year, rating) and filtered by genre via prominent, unambiguous menus. If the desired content isn’t featured in the main menu, built-in text search is conveniently accessible, fast, and accurate.

Popcorn Time’s intuitive interface offers a model for what open source design should aspire to


As intuitive as Popcorn Time’s interface may be, evaluated on this criterion alone, the program would be little more than an open source Netflix clone. What distinguishes Popcorn Time from commercial video streaming services is its affordances for video distribution and playback.

That is, Popcorn Time streams video via peer-to-peer torrents. Compared to centralized services like Netflix, p2p distribution offers obvious advantages, namely service reliability and redundancy. If you’ve ever tried to stream something from Netflix on a weeknight, especially in a neighborhood served by one oversaturated node, you know what a frustrating experience it can be: videos intermittently stutter and stop, frames drop, the stream oscillates between standard and high definition. You’re lucky if you finish what you started. Centralized distribution favors those areas and users with the best connection to the distributor’s servers and necessarily privileges users with more reliable connections. Netflix, in other words, assumes a fast and reliable Internet connection. Popcorn Time does not.

Where high usage of centralized services like Netflix often degrades video streaming for users, Popcorn Time leverages peer-to-peer distribution, which improves the more users there are
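The contrast can be made concrete with a toy throughput model. All numbers below are hypothetical, chosen only to illustrate the shape of the two curves, not actual Netflix or Popcorn Time figures: a centralized server splits a fixed capacity among its viewers, while in a p2p swarm each new peer also contributes upload bandwidth back to the swarm.

```python
# Toy model of per-viewer bandwidth (all figures hypothetical):
# centralized distribution divides one server's fixed capacity,
# while p2p capacity grows as peers re-upload pieces they hold.

SERVER_CAPACITY_MBPS = 1000.0  # assumed capacity of a central server
SEED_UPLOAD_MBPS = 50.0        # assumed upload of the original seeder
PEER_UPLOAD_MBPS = 5.0         # assumed upload each peer contributes

def centralized_rate(viewers: int) -> float:
    """Per-viewer rate when the server's capacity is shared equally."""
    return SERVER_CAPACITY_MBPS / viewers

def p2p_rate(viewers: int) -> float:
    """Per-viewer rate in a swarm where every peer also uploads."""
    total = SEED_UPLOAD_MBPS + viewers * PEER_UPLOAD_MBPS
    return total / viewers

for n in (10, 100, 1000):
    print(f"{n:>5} viewers: centralized {centralized_rate(n):7.2f} Mbps, "
          f"p2p {p2p_rate(n):6.2f} Mbps")
```

The centralized per-viewer rate collapses from 100 Mbps at 10 viewers to 1 Mbps at 1,000, while the p2p rate settles toward each peer’s own upload contribution (about 5 Mbps here), which is why a popular swarm tends to get better, not worse, as it grows.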


Popcorn Time not only provides, in my admittedly anecdotal experience, a consistently more reliable streaming experience, but also allows one to queue up and download content for later playback. By allowing the user to decouple video playback from its transmission, Popcorn Time accommodates (if imperfectly[i]) a wide range of socio-technical contexts and users for whom streaming isn’t feasible.


Playback options – streaming or downloading – allows users to adapt Popcorn Time to their situation


Like a T-shirt, Popcorn Time requires no expert knowledge of how it works in order to try it out or use it successfully. Crucially, as a networked technology, Popcorn Time does not presume a basic level of technical knowledge or speed of connection[ii]. By enabling multiple modes of interaction – intuitive and reliable p2p streaming, plus downloads for when streaming isn’t feasible – Popcorn Time allows users to adapt it to them, rather than demanding that they adapt to its standard.

That a handful of volunteer developers and designers have brought adaptable design to something as relatively complex as video torrenting suggests that the failure of mainstream inclusive software has less to do with resources or compassion on the part of designers than, as indicated by “Spinoza in a T-Shirt,” with the misguided, if often well-intentioned, goals embedded within inclusive design itself.

Rather than try to “fix” software designed to meet the demands of certain (power) users and shareholders, it might be more fruitful to reimagine software whose default user is not a composite of focus testers, the designers and their imagined user types, and demographic/usage data, but a potentiality of users willing to adapt software to their particular needs and desires.

Everyday encounters with software are characterized by degrees of banality, annoyance, frustration, and anxiety. As a recent essay on email, “the most reviled communication experience ever,” testifies:

Email is just as “everyday” as coffee pots and doorknobs, but most people don’t fantasize about throwing their espresso machine into a black hole or sawing the knobs off all their doors. Don [Norman, design expert and author of the classic handbook The Design of Everyday Things] has no love for email: ‘The problem is in trying to make email do everything when it’s not particularly good at anything,…’

As a utilitarian messaging protocol developed by “programmers trying to make their lives easier,” email in many ways epitomizes the insufficiency of customization as a substitute for adaptable design.

Instead of offering suggestions myself, I would like to open up this topic for discussion. Taking on the perspective of would-be designers, how might we redesign email or some other instance of everyday software design to afford true adaptability?

Nathan promises not to steal your software ideas; he just wants less email. @natetehgreat

[i] While Popcorn Time embodies adaptable design, it falls short in two ways. First, and this is a major omission, Popcorn Time affords no interaction for users who are blind. Second, to download, rather than stream, the user must have installed a separate torrent client of their choice; the program’s download button is also relatively small and non-obvious for those unfamiliar with magnet links. Alas, even Popcorn Time, for all it gets right, still presumes a particular user: sighted and somewhat technically savvy.

[ii] Xiao Mina’s informative paper, “Mapping the Sneakernet,” and her post, “Moving Beyond the Binary of Connectivity,” are the basis of this point.