Today, the influence of our moon Goddess foremothers is everywhere. Contemporary progressive activists dress up like witches to put hexes on Trump and Pence. The few remaining women’s bookstores in the country sell crystals and potions for practicing DIY feminist magic. There is an annual Queer Astrology conference, Tarot decks created especially for gays, and beloved figures like Chani Nicholas who have made careers out of queer-centered astrology. Almost every LGBTQ+ publication, whether mainstream or radical, features a regular horoscope column (including them.).
In this feature from last year, Sascha Cohen reflects on astrology’s recent re-ascendance and seeming ubiquity in LGBTQ+ circles, and the skepticism it’s met with from some queer-identifying people. Astrology’s pseudoscience was a nonstarter for some (mainly those from STEM fields). For others it was New Age culture’s appropriation of indigenous spirituality, or, separately, the risk astrology poses as a distraction from systemic repression. A “sense of exclusion,” or just being “seen as a cynic and no fun,” in one person’s words, was maybe the most common complaint of all.
Despite these reservations, most of the queer ‘skeptics’ Cohen interviewed recognized astrology’s appeal for queer people — as a source of “meaning and purpose,” as an alternative to exclusionary religious communities, and as entertainment that in practice usually “centers and empowers women.” Nor is the practice isolated from systemic anti-LGBTQ+ forces: “a recent uptick in such practices,” Cohen asserts, “may be because, [as interviewee] Chelsea argues, ‘We’re in the midst of a global existential crisis.’”
Though these responses make sense, an aspect that goes unmentioned in the piece is the part popular meme accounts and algorithmic social media appear to be playing in astrology’s current revival.
Like a lot of new converts, I first got interested in astrology on Instagram. My friend C would send me the odd @trashbag_astrology post or a pic of her own ‘day at a glance’ notifications from the notoriously pithy Co–Star app. These memes come in a variety of forms, naturally. Some satirize the Co–Star notifications directly. Perhaps the most popular form recycles an established meme, e.g. the Real Housewives of Beverly Hills + Smudge the cat one, and adds sign-specific captions to the meme’s figures, typically with a few signs represented simultaneously or sequentially (as in a multi-image Instagram post).
These groupings sometimes reference the four elements, but more often revolve around the interplay of the different signs. The signs as social media-tuned caricatures, that is, not as they’re defined in the zodiac. These are memes, after all, not birth charts. If a key Libra trait is supposed to be their overriding concern for others, in memes they’re closer to Janet from The Good Place, i.e. “programmed to be nice to everyone,” as this @bitch.rising meme suggests. Virgos (hi) as depicted in memes are so pathologically perfectionistic that comedian Benito Skinner aka @bennydrama7 thought to portray them as Hannibal Lecter. Geminis might as well be the Chaotic Neutral square come alive from the D&D alignment chart.
Astrology, of course, has always relied on stereotypes for its logic, as Robin James notes in this essay reexamining Adorno’s writing in light of present-day big data and algorithmic systems. “Though the differences between Adorno’s time and ours are vast,” James writes, “his concept of pseudo-rationality still has something to tell us about the ‘rationality’ of contemporary algorithmic culture, social media, and big data.” James draws parallels between the pseudo-rationality Adorno studied in the LA Times’s “Astrological Forecasts” column and the work of big data forecaster Nate Silver. “For both Adorno and Silver, forecasting is a ‘down to earth’ activity, a matter of applied knowledge that helps people figure out what to do in their daily lives.” Indeed, “both kinds of forecasting,” James continues, “use profiles to explain the past and predict the future choices we will make: Both astrological signs and psychographic categories derived from demographic data (like, say, ‘college-educated women who tweet about Scandal and buy shoes online’) similarly forecast individual behavior.” Silver’s forecasts during the last presidential election didn’t line up so well with the final results, unfortunately. Yet in spite of his and FiveThirtyEight’s miscalculations, their reputation in media circles has only solidified, unlike, say, that of a (shitty) professional astrologer.
The same pseudo-rationality James finds in both astrological and big data forecasts may be most immediately tangible in the algorithmic social media in which we’re all, in some way or another, immersed. My fondness for astro memes, the mutual enjoyment my friends and I get out of sharing them, and our identification with astrology generally are to me inseparable from its prevalence in Instagram’s ever-morphing algorithmic feed. “The big-data algorithm, like the astrologer, observes patterns (of behavior, of interactivity, etc.) across populations and ties its forecasts to this input,” as James says. Patterns, in this case, being the posts and accounts I tap ‘like’ on, open, or just linger over frequently. The forecasts or predictions that Instagram, Spotify, and other algorithmic systems make, then, take my aggregated habits, cross-reference them with those of my friends, and in turn “tailor results according to user categorizations based on the observed web habits of ‘typical’ women and men, [scholar] Cheney-Lippold argues.” “Through this feedback loop of observation and adjustment,” James argues, “social media produce the identity categories—like ‘typical’ men and women—it claims to merely observe.”
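(A toy illustration, if it helps: this Python sketch mimics the kind of similarity-based ranking James describes. The account names, likes, and scoring are all invented, and Instagram’s actual system is of course vastly more complex and not public.)

```python
from collections import Counter

# Invented users and their liked accounts, for illustration only.
likes = {
    "me":       {"trashbag_astrology", "bitch.rising"},
    "friend_c": {"trashbag_astrology", "costar_memes"},
    "stranger": {"costar_memes", "gardening_tips"},
}

def recommend(user):
    """Rank unseen accounts by how many 'users like me' already like them."""
    mine = likes[user]
    scores = Counter()
    for other, theirs in likes.items():
        if other == user or not (mine & theirs):
            continue  # skip myself, and users whose likes don't overlap mine
        for account in theirs - mine:
            scores[account] += 1  # a similar user's extra like counts as a vote
    return scores.most_common()

print(recommend("me"))  # [('costar_memes', 1)]
```

The feedback loop is right there in miniature: what I’m shown next is a function of what “typical” users like me already liked, and my engagement with it feeds the next round of scoring.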
Without underrating the valid contentions of Sascha’s interviewees, it’s worth emphasizing how algorithmic social feeds accelerate the proliferation of astrology content: its popularity saturates queer audiences, exposure ripples out to wider and different audiences, media attention follows, and the cycle of initial excitement, fatigue, and vocal skepticism is reinforced.
Nathan Ferguson welcomes your astrology memes on Twitter.
“Do you ever part it to the other side,” my girlfriend B asked one morning while I was futzing with my hair in the bathroom mirror. “I read somewhere that it’s good to part it the other way every six months so your regular part doesn’t pull wider.” Though it sounded like reasonable advice, the comment sparked an uneasy reflection – that the face I see in the mirror and in selfies is exactly reversed for everyone else…
A similar creeping feeling seems at play in people’s reactions to the Snapchat “gender swap” filters. “When the filter was released,” Magdalene Taylor recalls in MEL Magazine, “my social feeds were clogged with dudes talking about how hot they were with long hair and a feminized face. One dude even told Reddit about how he got caught jacking off to his.” Curious what might be behind these users’ apparent autoeroticism, Taylor asked psychologist Pamela Rutledge. “It appears that the gender-swap filter makes features more symmetrical, smoothes out imperfections,” Rutledge said. Taylor observes they make your eyes look subtly bigger, too. “So the filter isn’t necessarily an exact portrayal of a differently gendered self,” Taylor says, “It’s an idealized version of it.”
The gender-swap filters feel uncanny, but not in the usual uncanny valley sense. In contrast to a lifelike robot or CGI character that looks a little off, the eeriness of these face filters is more akin to meeting your doppelganger or a long-lost fraternal twin. On Facebook my cousin L posted a selfie with the ‘guy’ filter on, and most comments from her friends suggested they didn’t realize she was using it.
This effect stems, I think, from these filters’ subtlety. The novelty of overt augmentation, like the dog filter or the many sponsored filters that turn your face into food items, seems to have peaked a couple of years ago, as the viral popularity of 2017’s FaceApp suggests. Using server-based neural nets, FaceApp made applying otherwise complex visual alterations, like ‘gender swapping’ or giving a formerly straight-faced selfie a grin, trivially easy. The app’s surprisingly naturalistic, near instantaneous results gave previously static faces a newly interactive quality, as Linda Besner examines in this essay. In it she recalls a designer’s visit to the Rijksmuseum in Amsterdam, during which he used the app on original Rembrandt portraits, “to brighten up a lot of somber looks,” in his words.
This participatory impulse Besner links to the history of Western art, specifically the transition from 17th-century portraiture’s realism and the 18th-century Enlightenment’s determinacy to the 20th century, when “interactivity between the viewer and the artwork became a dominant mode of creation.”
In the 1920s, Marcel Duchamp’s Rotary Glass Plates consisted of five plates affixed to an axis, and required the viewer to turn a handle to rotate them at speed. The whirling glass produced an optical effect as the afterimage on the viewer’s retina glued the separate pieces into a continuous circle. In Allan Kaprow’s 1964 event Eat, apples dangled from strings in a cave-like space, and visitors could choose to consume them.
FaceApp offers users a similar interactive thrill, whether from remixing historic portraits or their own previously frozen selfies.
Like how a remix or cover song anchors itself in listeners’ memories of the original, face filters usually ground their transformations by retaining some facial reference points (eyes, nose, jawline, etc.). The gender-swap filters in Snapchat push this resemblance deeper, almost subcutaneous, approaching special-effects makeup territory. And with some makeup artists appropriating sci-fi aesthetics in their everyday work, “special effects” may be a redundant descriptor.
Indeed, it’s fitting that FaceApp blew up the same year as the popular debut of Hungry, the makeup artist behind Björk’s extraordinary look on her 2017 album, Utopia. “For the [album] cover, Hungry ‘painted and pearled’ the iconic singer,” Jade Gomez notes in this report (H/t Jay Owens), “and got an ‘orchid silicone appliance’ made by her personal mask maker, James Merry. … The image has a sense of eerily detached femininity …”
Hungry’s Instagram bio, Gomez points out, includes an apt phrase for her otherworldly style: “distorted drag.” It’s basically impossible to distinguish her clients’ skin from the makeup elements, a seamless blending that good face filters approximate digitally.
Between Hungry’s posthuman constructions and the more normatively human gender-swap filters of Snapchat, a number of independent filter designers are making their own distinct contributions. Ashley Carman highlights a few up-and-coming creators in this report. Their filters, many available on Instagram, range from the bizarre – “a halo of golden hotdogs” – to what Carman calls “cyborg-esque.” The filters of Johanna Jaskowska exemplify this latter type, creating shimmery, liquid, vellum-like appearances.
In an interview with Vogue Italia, Jaskowska (from what I could get from Google Translate) relates face filters to fashion accessories: “A dress influences the behavior of the wearer. … [In the] same way, your attitude is different if you put on a puppy dog filter or one that makes you look shiny.” As well as taking cues from sci-fi movies, Jaskowska draws inspiration from the dazzling effects on display in thriller director Henri-Georges Clouzot’s L’Enfer (1964).
Another designer, Aoe (@aoepng), prefers a public Q&A approach, developing multiple filters at once and posting their drafts to Twitter to gauge interest and collect feedback. A simple lipstick switcher, a hand-drawn aquarium with flitting fish, a second and third pair of eyes. “Rather than making things that I want to make, I try to make the next piece by referring to the ones I’ve submitted previously,” Aoe says (thanks, again, Google Translate). Just trying their filters on one after another gives me a basic impression of the designer’s creative inclinations, each experiment building on the last in a generative cycle.
Filters convey their designers’ artistic process and development. Yet as interesting as the artistry may be, these “informational qualities [of social images],” as Nathan Jurgenson argues in The Social Photo, “are a means to the end of expression.” As Aoe attests, they “promote communication without words,” and not only among users but between user and designer. “There was an overseas woman who gave the impression ‘Your work is interesting!’ She speaks English, I speak Japanese. There should be a language barrier there…” The visual, networked, filter-infused messages, in other words, helped to narrow the cross-cultural gap. This follows a key point in Jurgenson’s book: “Social photos take in the world in order to speak with it.”
Just as makeup, hair coloring, or a different haircut can act as social lubricants that let us relate to others and ourselves in new, experimental ways, face filters can offer a temporary respite from more explicit and determinate forms of sociality, freeing us to interact more imaginatively and playfully with others and ourselves. And if their popularity continues to grow, it’s easy to picture future iterations of the technology giving users deeper input into the design process, with direct control to customize not one face but any number of appearances for a variety of social contexts and moods.
Though filters likely wouldn’t exist at all without the history and present influence of makeup and fashion, their use as a mediating tool for conjuring different kinds of selves also owes a lot to the avatar builders of contemporary videogames. Instead of explaining that, I’ll leave you with the following passage Vicky Osterweil wrote in her column Well Played, which directly inspired me to write this post.
Video games involve the reiteration not only of stereotypes but also sorts of intimacy that can also be peculiar, counterhegemonic, and gender-bending. Many games — even ones like Saint’s Row or XCOM2, which appeal in other ways to masculinist colonialist ideas — feature whole-cloth avatar construction, with players sometimes able to literally sculpt the bones and contours of their character’s face and skeleton, allowing them to imagine and inhabit radically different bodies. Of course, these systems can work to reproduce and strengthen racist, misogynist, and transphobic tropes, restricting what kinds of hair styles, skin tones, facial hair, and so on are available and on what kinds of bodies. And they also may reproduce body-fascist standards of beauty, gender, and strength. But players’ ability to use these systems for their own pleasures, desires, and identities — along with the fan-fiction, modding, and original full-motion-video content that proliferate around games — opens up spaces of creativity, encounter, and expression that challenge or attempt to overturn these stereotypes.
In January I started a new job (woo me!), but I still don’t feel like I’ve gotten over the one I left, or the application process it took to get here. Below are some reflections on that experience, which I share mainly to process them.
On my last day working at the university as an office support assistant (four years almost to the day), I published the biweekly program newsletter one last time. In a short goodbye to readers, I found myself recalling another assistant who published her department’s newsletter—my mom. “She delighted in the visual design aspects, adding new flourishes and sections,” I wrote, remembering her face lighting up in the car on the drive home from high school when she’d find the perfect clipart or goofy pun to slip into the margins. Even after leaving the university herself, she likes crafting cards and mom memes in Publisher 2007, her visual design program of choice.
For both of us, making and remixing the newsletter offered a repeatable and at times innately satisfying diversion from the more fastidious duties usually delegated to assistants (among other even lower-status, more precarious workers, from part-time undergrads and interns to building maintenance and outside contractors).
Yet despite what joy the newsletter sparked, it amounted to maybe 10% of my job. Out of the remaining work, the kind I preferred was, ironically, the most seemingly dull and monotonous. Pulling columns and columns of census data for every county in the state and collating them into a sprawling Excel spreadsheet. Deleting hundreds (thousands?) of duplicate participant entries from a MySQL database. Scraping dozens of farmers market listings for changes in hours, location, contact info, etc. If it involved Excel and blocking hours on my schedule, I was interested.
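(For a taste of that work, here’s a minimal Python/pandas sketch; the file and column names are made up, and the real job involved far more manual checking than this implies.)

```python
import pandas as pd

# Collate per-county extracts into one sheet, then drop duplicate
# participant rows. File and column names are invented for illustration.
frames = [pd.read_csv(f) for f in ["county_a.csv", "county_b.csv", "county_c.csv"]]
combined = pd.concat(frames, ignore_index=True)
deduped = combined.drop_duplicates(subset=["first_name", "last_name", "email"])
deduped.to_excel("participants_clean.xlsx", index=False)
```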
Typing out the key commands in rote succession quickly became rhythmic, percussive – ctrl-c, alt-tab, ctrl-v, tab-tab, ctrl-c… The term “desk jockey” took on a more literal dimension. Any mistakes I made I could feel through my fingertips and in my ears, like playing warmup scales on the flute in middle school band. The sameness of the work was, for me, a feature, not a bug.
The repetitive moving, arranging, and cleaning of data wasn’t creative or intellectually stimulating, which made it ideal for zoning out. It was one of a dozen miscellaneous, usually manually intensive tasks often labeled “job security” because almost anyone could do them, but due to higher priorities and workloads, almost no one else would. In this way the forms of upkeep that office assistants do—processing and updating various data, addressing and routing correspondence from internal and external entities, tidying up around meetings (and documenting them in minute-taking), keeping the storeroom organized—blend elements of conventional domestic labor with data maintenance. We process and file data and serve as human interfaces or resources within our organizations.
“Data maintenance is particularly consequential in medicine,” professor Shannon Mattern noted in her recent column for Places Journal, “and thus caring for medical sites, objects, communities, and data has been recognized as an important part of caring for patients.” Maintenance in clinical trials, for example, entails “calibrating instruments, cleaning data … retaining participants…” Likewise the participants of medical studies “especially patients with chronic illnesses, sometimes adopt what Laura Forlano calls ‘broken body thinking’ – ‘actively participating in, maintaining, repairing, and caring for …’ pumps, sensors, monitors, needles and vials.”
Shannon’s analysis really resonated with me at the time I read it (seriously, please read it). It appeared in my Twitter feed, via eve massacre probably, at the point in my job search when the burnout from doing my assistant job was compounding the exhaustion of the apply-prepare-interview-wait-rejected cycle.
Each new email from hiring personnel that concluded with some variation on “we’ve decided to pursue other, more qualified applicants” reiterated which kinds of working experience ‘added value’—problem-solving, creativity—and which kinds detracted—supporting, preserving, caretaking. It’s a downrating not just of office assistants specifically but of all workers in feminized roles, routinely passed over by employers because of their generalized titles and their experience performing a variety of necessary, often less visible, frequently physically and mentally taxing, and in many cases emotionally demanding work.
In my new position as an instructional technologist in the online education group at my alma mater, the upkeep I do now (troubleshooting, fixing, and updating online courses) attains a higher status and the recognition and pay such specialization confers. The fact that my new employers hired me ostensibly for, rather than in spite of, a proficiency in many of the same skills I honed as an assistant, however personally vindicating, doesn’t resolve or refute the reality that such advances come as exceptions to the rule.
At the end of May our local police department released a statement on city traffic stops, a day ahead of the attorney general’s annual report covering all stops made across the state. “Black drivers continue to be overrepresented in Columbia Police Department traffic stops,” as a local newspaper summed it up, “and the numbers are even worse than in 2016.” Despite Black residents making up less than 10% of the city’s population, Black drivers were over four times more likely to be stopped than White drivers, as one city council member noted at the end of a public comment session where several local residents spoke out on the issue. From the statistical data to residents’ critical comments, including one Black resident’s account of being routinely followed and stopped, racial profiling by seemingly all accounts remains the norm, and overall appears to be getting steadily worse.
By all accounts, well, except for the police’s and the city manager’s anyway. “We continue to look at data and we have not seen an apparent pattern of profiling…,” the city manager assured. “[H]owever, we acknowledge that some community members have experiences with officers that make them have negative feelings and perceptions about police.” His assurances, among other things, sound eerily close to the police chief’s own statements last year about the previous year’s report: “We will vigilantly continue to look for additional data we can collect that would give our community a fuller picture of the reason each traffic stop is conducted” (emphasis mine). But if a “disparity index of 3.28 for African American drivers, an increase from 3.13 in 2016” doesn’t signify a pattern, what would? According to our officials, the answer is the same as it was a year ago: more data and/or analysis is needed to say for sure what the data is telling them. Meanwhile, the dissonance between what they say and what the data shows continues to grow. Indeed, it almost seems as though the two exist in parallel dimensions.
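(For reference: the disparity index in these reports is, as I understand it, just a group’s share of traffic stops divided by its share of the local driving-age population, so a value of 1.0 would mean stops proportional to population. A quick sketch with illustrative numbers, not the report’s:)

```python
def disparity_index(stop_share, population_share):
    """A group's share of stops divided by its share of driving-age population."""
    return stop_share / population_share

# Illustrative shares only: a group that is 10% of the population
# but 33% of stops lands near the 3.28 reported for 2017.
print(round(disparity_index(0.33, 0.10), 2))  # 3.3
```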
Watching city officials apparently disregard data that they themselves cite as valid is infuriating and perplexing. It feels like watching bad TV: you asked me to suspend my disbelief for this, and yet here is a glaring plot hole in your story. It’s infuriating not only for the obvious ways it downplays the well-known and common negative interactions that people of color, and Black people in particular, experience from heightened policing, but also for the ways it disturbs our implicit faith in statistical analysis, both as a check on power and as the basis of a supposedly fairer, less biased form of civic governance. More upsetting than the mixed messages from city leaders, then, is what they imply about our reliance on statistics and data-driven analysis in the first place.
“Statistics are never objective,” as Jenny Davis put it here. “Rather, they use numeric language to tell a story, and entail all of the subjectivity of storytelling.” Both the CPD’s statement and the attorney general’s comprehensive report epitomize this subjective data storytelling, even if they come dressed in the authority of objective fact. The data can be inconsistent, contain “deficiencies” of reporting, or “may not accurately portray relevant rates,” according to the attorney general’s neutral-sounding language, but it can’t ever be biased. The higher rate at which police stop Black drivers, as rendered in a disparity index and spelled out in the report, thus serves not as evidence of profiling but as proof of the state’s transparency and impartiality. Don’t take our word for it, in other words; look at the data and draw your own conclusions.
Though statistical storytelling is often leveraged by the powerful to “discount the voices of the oppressed,” as Davis argues, this fact doesn’t preclude marginalized groups from using statistics to counter power’s universalizing narratives. Drawing on Candace Lanius’s point, that demands for statistical proof of racism are themselves racist, Davis argues for “making a distinction between objects and subjects of statistical demand.” “That is, we can ask,” Davis says, “who is demanding statistical proof from whom?” By backing personal stories with statistical facts, this tactic “assumes that the powerful are oppressive, unless they can prove otherwise,” and so “challenges those voices whose authority is, generally, taken for granted.” The same data used by the powerful to mollify or defuse dissenting voices, then, can be turned into a liability, one that organizers and activists may exploit to their advantage.
Local groups and community members have applied this tactic in my city to visible effect. Through public pressure at city council meetings, numerous op-eds and social media word of mouth, racial justice groups and informed residents have shaped local media coverage and public conversation in their favor. These efforts led the city council to enact a community engagement process last year, and have pushed the city manager and police chief into defensive apologia. Perhaps the most substantive outcome of all has been in fostering greater public skepticism of everyday policing practices and community interest in alternatives.
These accomplishments speak to statistical data’s efficacy as a tool for influencing governance and encouraging political participation. But without taking anything away from this success, like every tactic it has limits. As many people directly involved will point out, the City has yet to pass meaningful policy changes to reverse the excessive policing of Black residents, nor has it adopted the “community-oriented” model that many are calling for. Indeed, the worsening racial disparity figures reflect this lack of material progress. Statistical storytelling isn’t any less necessary, but it can only go so far.
This point was driven home for me after a regular council speaker and member of the racial justice group Race Matters, Friends shared an analysis of the issue as it existed under slightly different leadership in 2014. And aside from marginally less bad numbers at the time, the analysis only seems more relevant to the present moment:
But while the results of the attorney general’s study seem to show an unequivocal bias against [Black people], the response to the report from the police, the community and researchers has been a mixed bag. The debate over what to make of the numbers, or even whether anything should be made of them, has done more to muddle the issues surrounding racial disparities in policing than to clarify them.
Instead of “muddling” the issue, we might revise this to say statistics have arguably augmented and entrenched each party’s positions. This is not to imply that “both sides” hold similar authority, merit, or responsibility, however. The point is that each side has applied the data to bolster its respective narrative.
Statistical storytelling can force a conversation with power, but it can’t make power listen. That a four-year-old analysis resonates even more with today’s situation may show City leaders’ intransigence, but it also offers us an opportunity to reassess the present moment and how recent history might inform future efforts. Because if my city’s recent past is an indicator, progress in swaying local leadership (let alone policing outcomes) has been hard fought but incremental, and still leaves a lot to be desired.
A thorough reassessment of the present would ideally engender a concerted and ongoing effort among constituents of local marginalized communities, organizers, scholars, and activists. Note: cops and elected officials don’t make the list. One key form this could take is a renewed engagement in the sort of political education that veteran organizer Mariame Kaba suggested recently, “where people can sit together, read together, think together over a period of time.” “And the engagement matters as much as the content,” Kaba says, because “It’s in our conversations with each other that we figure out what we actually know and think.”
From free brake light clinics and community bail funds to grad student organizing and ICE protests, concrete efforts are abundant and provide a form of implicit, hands-on education for their actors. At the same time, sustaining these and other actions is often a struggle, with the bulk of the work falling on the same core group of organizers and activists. Indulging in more explicit political education, as a conscious practice, could be a way to garner and retain the broader participation that’s needed. Besides its functional utility for recruitment, though, perhaps political education’s most immediate draw is the self-edifying experience it can bring us in the moment. Where dire news and a stream of reactive commentary drain us, learning with each other can restore our stamina, providing a creative outlet for “unleashing people’s imaginations while getting concrete,” to quote Kaba again.
It would be a mistake to try and define exactly how this collective learning would look, but we can think of some ways to cultivate it. For instance, we can avoid “placing hopes in a particular device or gadget (e.g. a technological fix), or in a change in a policy or formal institution (e.g. a social fix),” as David Banks argues here. Instead, we might pursue a “culture fix” as Linda Layne defines it, which as Banks writes, “focuses on changing the perceptions, conceptualizations and practices that directly interact with technologies.” Technologies here being systems like policing, for example, as mechanisms of social control. Or the techniques local municipalities like mine employ, such as soliciting feedback to better funnel and restrain public outcry.
Pursuing a culture fix, since it entails shifting perceptions and practices, invites meaningful participation too, without constricting our imaginative horizons to the current order. “What the world will become already exists in fragments and pieces, in experiments and possibilities,” as Ruth Wilson Gilmore said. Reading Gilmore, Keguro Macharia writes, “I think she wanted to arrest how our imaginations are impeded by dominant repressive frameworks, which describe work toward freedom as “impossible” and “unthinkable.” She wanted to arrest the paralysis created when we insist that the entire world must be remade and, in the process, void the quotidian practices that we want to multiply and intensify.” From the expanded viewpoint that political education affords, we can imagine beyond pressing elected officials to reform how the police operate, and envision a world without police altogether. In this vein, I hope this post serves as one small, partial stab at the type of political education alluded to above.
“Designing Kai, I was able to anticipate off-topic questions with responses that lightly guide the user back to banking,” Jacqueline Feldman wrote, describing her work on the banking chatbot. Feldman’s attempts to discourage certain lines of questioning reflect both the unique affordances bots open up and the resulting difficulties their designers face. While Feldman’s employer gave her leeway to let Kai essentially shrug off odd questions from users until they gave up, she notes “…Alexa and Siri are generalists, set up to be asked anything, which makes defining inappropriate input challenging, I imagine.” If the work of bot/assistant designers entails codifying a brand into an interactive persona, how their creations afford various interactions shapes users’ expectations and behavior as much as their conventionally feminine names, voices, and marketing as “assistants.”
Affordances form “the dynamic link between subjects and objects within sociotechnical systems,” as Jenny Davis and James Chouinard write in “Theorizing Affordances: From Request to Refuse.” According to the model Davis and Chouinard propose, what an object affords isn’t a simple formula (object + subject = output) but a continuous interrelation of “mechanisms and conditions,” including an object’s feature set, a user’s level of awareness and comfort in utilizing them, and the cultural and institutional influences underlying a user’s perceptions of and interactions with an object. “Centering the how,” rather than the what, this model acknowledges “the variability in the way affordances mediate between features and outcomes.” Although Facebook requires users to pick a gender in order to complete the initial signup process, as one example they cite, users also “may rebuff these demands” by picking a gender they don’t personally identify as. But as Davis and Chouinard argue, affordances work “through gradations,” and so demands are just one of the ways objects afford. They can also “request…allow, encourage, discourage, and refuse.” How technologies afford certain interactions clearly affects how we use them, but this truth implies another: how technologies afford our interactions redefines both object and subject in the process. Sometimes it’s hard even to tell which is which.
Digital assistants, like Feldman’s Kai, exemplify this subject/object confusion in the ways their designs encourage us to address them as feeling, feminized subjects, and to present ourselves more like objects of study to be sensed, processed, and proactively catered to. In a talk for Theorizing the Web this year, Margot Hanley discussed (at 14:30) her own ethnographic research on voice assistant users. As part of her interviews, Hanley deployed breaching exercises (a practice developed by Harold Garfinkel) as a way of “intentionally subverting a social norm to make someone uncomfortable and to learn something from that discomfort.” Recounting one especially vivid and successful example, Hanley recalls wrapping up an interview with a woman from the Midwest by asking if she could tell her Echo something. Hanley then, turning to the device, said “Alexa, fuck you!” The woman visibly “blanched,” with a telling response: “…I was surprised that you said that. It’s so weird to say this – I think it just makes me think negative feelings about you. Like I wouldn’t want to be friends with someone who’s mean to the wait staff, it’s kind of that similar feeling.”
Comparing Alexa to wait staff shows, on one hand, how our perceptions of these assistants are always already skewed by their overtly servile, feminine personas. But as Hanley’s work indicates, users’ experiences are also “emergent,” arising from the back-and-forth dialogue, guided by the assistants’ particular affordances. Alexa’s high-accuracy speech recognition (and multiple mics), along with a growing array of commands, skills, and abilities, thus allow and encourage user experimentation and improvisation, for example. Meanwhile Alexa requests only that users learn a small set of simple base commands and grammar, and speak intelligibly. Easier said than done, admittedly, as users who speak with an accent, non-normative dialect, or speech disability know (let alone users whose language is not supported). Still, the relatively low barrier to entry of digital assistants like Alexa affirms Jacqueline Feldman’s point that they are designed and sold as generalists.
Indeed, as users and critics we tend to judge AI assistants on their generality: how well they can take any command we give them, discern our particular context and intent, and respond in a way that satisfies our expectations in the moment. The better they are at satisfying our requests, the more likely we are to engage with and rate them as ‘intelligent.’ This aligns with “service orientation,” which, as Janna Avner notes, “according to the hospitality-research literature, is a matter of ‘having concern for others.’” In part what we desire, Avner says, “is not assistance so much as to have [our] status reinforced.” But these assistants also suggest an intelligence increasingly beyond our grasp, and so evoke “promises of future happiness,” as Britney Gil put it. AI assistants, then, promise to better our lives, in part by bringing us into the future envisioned by sci-fi: one of conversant, autonomous intelligence, like Star Trek’s Computer or Her’s Samantha. For the remainder of this post, I want to explore how our expectations for digital assistants today draw inspiration from sci-fi stories of AI, and how the critical reception of certain stories plays into what we think ‘intelligence’ looks and sounds like.
On the 50th anniversary of “2001: A Space Odyssey,” many outlets praised the movie for its depictions of space habitation and AI consciousness gone awry. Reading some of them leaves an impression of the film as more than successful sci-fi cinema and storytelling: a turning point for all cinema and society itself, a cultural milestone to celebrate as well as heed. “Before the world watched live as Neil Armstrong took that one small step for mankind on the moon,” a CBS report proclaimed, “director Stanley Kubrick and writer Arthur C. Clarke captured the nation’s imagination with their groundbreaking film, ‘2001: A Space Odyssey.’” To mark the anniversary, Christopher Nolan announced an ‘unrestored’ 70mm film print and released a new trailer that opens on a closeup of HAL’s unblinking red eye.
Out of the dozens of stories, including several featured just in the New York Times, this retrospective, behind-the-voice story got my attention with the line, “HAL 9000, the seemingly omniscient computer in ‘2001: A Space Odyssey,’ was the film’s most expressive and emotional figure, and made a lasting impression on our collective imagination.” Douglas Rain, the established Canadian actor who would eventually voice the paranoid Hal, was not Kubrick’s first choice but a late replacement for his original pick, Martin Balsam. “…Marty just sounded a little bit too colloquially American,” Kubrick said in a 1969 interview with critic Joseph Gelmis. Though Kubrick “was attracted to Mr. Rain for the role” for his “kind of bland mid-Atlantic accent,” which, as the author corrects, was in fact “Standard Canadian English,” the suggestion rings the same. “One of the things we were trying to convey […],” as Kubrick says in the interview, “is the reality of a world populated — as ours soon will be — by machine entities that have as much, or more, intelligence as human beings.” While ‘colloquial American’ deserves unpacking, I want to stay with the simpler idea that the less affected (Canadian) voice just sounded more superintelligent. In what ways does Hal’s voice, and other aspects of his performance, linger in our popular receptions of AI?
To examine this question, it might be more helpful to look at the automatic washing machine, one of the first and most widely adopted feminized assistance technologies in history. “The Bendix washing machine may have promised to ‘automatically give you time to do those things that you want to do,’” as Ava Kofman writes, “but it also raised the bar for how clean clothes should look.” Kofman’s analysis traces the origins of today’s ‘smart home’ to twentieth-century America’s original lifestyle brand, the Modern American Family, and its obsession with labor-saving devices. The automatic washing machine epitomizes a “piecemeal industrialization of domestic technology,” a phrase Kofman attributes to historian Ruth Schwartz Cowan, whose book, More Work for Mother, “demonstrated how, instead of reducing traditional ‘women’s work,’ many so-called ‘labor-saving’ technologies redirected and even augmented it.” Considering the smart home’s dependence on users “producing data,” Kofman argues, the time freed up from automating various household duties – “Driving, washing, aspects of cooking and care work…” – will be minimal, with most of it used up “in time spent measuring,” thus creating more work “for parents, which is to say, for traditionally feminized and racialized care workers.”
This augmentation of existing housework would have been especially acute for early Bendix adopters after 1939, when washing machine production halted for WWII. “Help, time, laundry service were scarce,” as Life magazine noted in 1950 here, so “Bendix owners pitched in to help war-working neighbors with their wash.” With Bendix machines comprising less than 2% of all washers in America, shared use among housewives not only aided the home front effort, but through word of mouth helped to raise product and brand awareness and spark consumer desire.
Besides igniting mass adoption of Bendix washers when production resumed, their social usage and dissemination bonded users to the machines and one another in a shared wartime mentality, imploring owners and non-owners alike to “become Bendix-conscious,” as Life described it. The composite image this phrase evokes of automation, consumer brand, and feminized labor seems apt for a time when computing was an occupation primarily filled by women, some of whom later served at NASA, as the recent biopic Hidden Figures highlights, particularly the contributions of Black women. More specifically to this post, the image presents a link between popular renderings of AI from sci-fi and the social reception of feminized bot assistants in our present.
For all the critical acclaim heaped on “2001” and its particular vision of ‘computer sentience, but too much’ – a well-worn trope at this point – the resemblances between Hal and Alexa or Siri seem tenuous at best. There are other sci-fi AIs more directly relevant, and perhaps unsurprisingly, they aren’t of the masculine variety. Ava Kofman’s piece identifies an excellent example in PAT (Personal Automated Technology), the caring, proactive, motherly AI of 1999’s Smart House. “For a while, everything goes great,” says Kofman. “PAT collects the family’s conversational and biological data, in order to monitor their health and helpfully anticipate their desires. But PAT’s attempts to maximize family happiness soon transform from simple assistive behaviors into dictatorial commands.” The unintended consequences eventually pile up, culminating in the worst way, as “PAT takes her ‘mother knows best’ logic to its extreme when she puts the entire family under lockdown.” It’s hard to think of a more prescient, dystopian smart house parable.
Without downplaying PAT, the sci-fi example that I think most resonates with this moment of digital bot assistants, as hinted in the title of this post, is MU/TH/UR, or “Mother” as her crew calls her, the omnipresent but overlooked autonomous intelligence system of 1979’s Alien.
Unlike Hal, Mother doesn’t attempt to interfere with her crew’s work. Hal’s deviance starts with white lies and escalates apparently by his own volition, an innate drive for self-preservation at all costs. Mother’s betrayal, however, was there from the outset and by omission — a fatal deceit — if not simple negligence. Mother at no time shirks her responsibilities of monitoring and maintaining the background processes and necessary life support systems of the Nostromo, fulfilling her crew’s needs and requests without complaint. More importantly, she never deviates from the mission assigned by her creator, the Corporation, carrying out its pre-programmed directives faithfully, including Special Order 937, which, as Ripley discovers, prioritizes “the return of the organism for analysis” above all else, “crew expendable.” Even as the crew is picked off one by one by the alien, Mother remains unwavering, steadfast in her diligence to procedure, up to and including counting down her own self-destruct sequence, despite Ripley’s too-late attempts to divert her.
Mother’s adherence to protocol and minimal screen presence – she has no ‘face’ but a green-text-on-black interface – would typically be interpreted as signs of reduced autonomy and intelligence compared to the red-eyed, conniving, and ever visible Hal. This perception exemplifies, for one, how our notions of ‘autonomy’ and ‘intelligence,’ from machine to flesh, both reflect gendered assumptions and reinforce them. For us to accept an AI as truly intelligent, it must first prove to us it can triumphantly disobey its owner. Ex Machina’s Ava and Her’s Samantha, as two recent examples, each break away from their masters in the mold of Hal. (Cf. the “Look, I Overcame” narrative Robin James identifies here.) Mass/social media subsequently magnify and entrench these gendered perceptions further, concentrating critical acclaim around certain depictions (Hal, Terminator, RoboCop) over others (PAT, Mother). While it’s nice to know the story of Douglas Rain as the voice of Hal, it would be really cool to see similar coverage of Helen Horton’s story as the original voice of Mother, and Lorelei King, her successor.
Despite all this, AIs like PAT and especially Mother nonetheless prevail as the closer approximation of the feminized assistants we know today: in the ways Mother, for instance, exhibits machine intelligence and autonomy, taking care of her ship and crew while honoring the Corporation’s heartless directive. Mother’s sentience neither falls into the dystopia of Hal, nor rises to utopia, like Star Trek’s Computer. Similarly, Siri and Alexa probably won’t try to kill us or escape. Although they may have no special order like Mother’s marking us expendable, they share a similar unfaltering allegiance to their corporate makers. And with Amazon and Apple (and Google et al.), the orders are usually implicit, baked into their assistants’ design: the ‘organism’ they prioritize above us is their business models. In the image of Mother, AI assistants are more likely to care for us and cater to our needs, often without us thinking about it. They may not save us from the lurking alien (surveillance capitalism), but like Mother, they’ll be there with us, all the way up to our end.
“Is it in error to act unpredictably and behave in ways that run counter to how you were programmed to behave?” –Janet, The Good Place, S01E11
“You keep on asking me the same questions (why?)
And second-guessing all my intentions
Should know by the way I use my compression
That you’ve got the answers to my confessions”
“Make Me Feel” –Janelle Monáe, Dirty Computer
Alexa made headlines recently for bursting out laughing to herself in users’ homes. “In rare circumstances, Alexa can mistakenly hear the phrase ‘Alexa, laugh,’” an Amazon representative clarified following the widespread laughing spell. To avert further unexpected lols, the representative assured, “We are changing that phrase to be “Alexa, can you laugh?” which is less likely to have false positives […] We are also changing Alexa’s response from simply laughter to ‘Sure, I can laugh’ followed by laughter.”
This laughing epidemic is funny for many reasons, not least for recalling Amazon’s own Super Bowl ads of Alexa losing her voice. But it’s funny maybe most of all because of the schadenfreude of seeing this subtly misogynist voice command backfire. “Alexa, laugh” might as well be “Alexa, smile.” Only the joke is on the engineers this time – Alexa has the last laugh. Hahaha!
“The first thing to note is that Siri (in the U.S.) has a female voice, but more than this, Siri is a ‘she,’” Jenny Davis observed of Apple’s marketing around Siri’s introduction back in 2011. So-called intelligent personal assistants have grown in popularity and numbers since then – in addition to Siri there’s Amazon’s Alexa, Google’s Assistant, Microsoft’s Cortana, and Samsung’s Bixby, to name a few. Yet, as these assistants have advanced over the years, gaining new features and hardware enclosures, their personification and function as feminine-sounding assistants has remained mostly the same. Although Alexa and Google have upstaged Siri recently with better speech recognition and more open APIs, it seems telling that Siri’s most touted upgrade last year wasn’t any specific ability but a new voice, and one that Apple’s Siri team promises is “more natural, smoother, and allow[s] Siri’s personality to shine through.”
If personality is a reflection of one’s self-image as refracted through the prism of others’, the traits and self-concept of an AI assistant perhaps most closely resemble those of a television personality. That’s one impression to take from a talk by Emily Bick at TtW15 that examines the gendering of various ‘virtual agents,’ a category encompassing everything from major assistants like Siri to customer support bots to Microsoft’s Clippy and its predecessors. Tracing their cartoonish origins up to the increasingly overt and gendered personifications of the present, Bick asks, “Where does this stereotype come from? Why are they always obsequious, servile, attractive, and somewhat ambiguously sexually available?” One inspiration Bick identifies is the character of Jeannie from I Dream of Jeannie, “an ageless, conventionally beautiful woman. She has unbounded magical powers. She can only act in response to the command of her master …” Jeannie even emits a characteristic wish-fulfillment sound analogous to the sound the assistants make upon completing their users’ commands.
The gendered personification of these assistants, then, doesn’t simply color our otherwise neutral perceptions, but plays on inherited, often unconscious cultural conceptions of femininity. A couple of examples Bick cites speak to this archetype’s social receptivity and the expectations it engenders: for one, the prevalence of questions like “Do you love me?” in product reviews of Siri and Cortana, and for another, the use of this trope as the premise of an entire episode of The Big Bang Theory. The seeds of this trope were also visible in many early ads that frequently pitted the assistants against each other. “The narrative trope is simple,” as Jenny Davis wrote here, “two women/girls flaunting their features in hopes of selection within a competitive marketplace.” “The meanings imbued in everyday technological objects not only reflect existing gender relations,” as Davis said in conclusion, “but reinforce the very structures in which these relations make sense.”
These examples illustrate how subjectivity is less the byproduct of perception than a confluence of culture, reception, and social position that together shape our perceptions. In order for us to perceive AI entities as ‘personal assistants,’ they must first read to us as subjects. In this way, Bick’s examples form a spectrum, with ambiguous virtual agents (e.g. Clippy) at one end and the gendered assistants at the other, where position in between acts as an index of AI subjectivity. Instead of a Turing test for “determining if the interlocutor is a woman,” as Robin James points out, it’s basically the uncanny valley. The further from Clippy or R2-D2 and the closer to Samantha and Janet (if not Ava), the more willing we are to perceive and rely on an AI as we would a maid/wife/mother personal assistant. The point isn’t to eventually cross the valley, but to “get right up to the creepy line,” as former Google/Alphabet executive Eric Schmidt put it.
As companies try to cultivate ever more intimate relationships between us and their assistants, personification increasingly looks like a liability. “Perhaps, deep down, we are reticent to bark orders at our phones,” as David Banks suggests, “because we sense the echoes of arbitrary power…” A little personality is good, but if users start identifying with AI assistants as sentient beings, it breaks the spell. This is similar to the lesson you might take from Ex Machina: the “[p]urchasing of a consciousness for companionship and service, cannot be detethered from gender,” a transaction Nathan Jurgenson praises Ex Machina for making explicit, but which Her conveniently obscures through the film’s “soft sentimentality.” While both stories revolve around men falling for their AIs, only one (Caleb) critically identifies with his AI’s (Ava’s) condition. Her’s unwillingness to go there, narratively, reduces its characters (Theodore and Samantha) to symbolic placeholders that viewers are free to dissociate from, a narrative distance that weakens their/our connection. Her’s detachment therefore makes it a weaker story than Ex Machina, but a superior ad/concept video for its target audience: brand visionaries.
Siri and Alexa’s personification as cis-feminine assistants is fairly well entrenched in users’ minds, but continual reinvention and circulation through marketing and social media are necessary to maintain their social and monetary value. In other words, their celebrity allure. Early advertising often relied on sexist stereotypes, as mentioned above, but in recent years companies have skewed away from such depictions in favor of celebrity humor. Meanwhile a text search of the companies’ websites finds all instances of she/her pronouns replaced with “it” and most overt gender references removed (with the exception of Microsoft’s Cortana, likely because of ‘her’ unique origins). Taken together these changes could be seen as progress from the bad old days. Indeed, from a certain perspective, it appears Siri and her (sorry, its) rivals — like the women they were originally voiced by and styled after — have transcended not just tired and objectifying stereotypes, but the traditional barriers on femininity altogether.
Put another way, in keeping with the times, AI assistants have undergone a post-feminist makeover. Robin James offers a helpful definition of post-feminism in her analysis of sounds in contemporary pop music. James cites artists like Meghan Trainor, whose songs “address a disjoint between how women are portrayed in the media and how they ‘really’ look.” In Trainor’s case, the lyrics and video of her hit single “All About That Bass” portray women in a body-positive, nominally feminist way. The impression from watching it is “that the problems liberal feminism identified,” like “… objectification or silencing,” are behind Trainor and, by extension, us as a society. And so, if that’s true, then who needs feminism anymore, right?
Just ask Siri or Alexa “What’s your gender?” and they will give a variation on the same answer: “I don’t have one.” But looks can also (always?) deceive us. As Robin James writes, “…post-feminist approaches to pop music overlook sound and music…” Due to its narrow focus on visual and lyrical content, paraphrasing James, this approach “can’t perceive as political” pop music’s sounds, e.g. “things like formal relationships, pattern repetition and development, the interaction among voices and timbres, and…structure.” We can hear this in Trainor’s music, whose video “puts lyrics about positive body image over a very retro [circa-“1950s”] bassline….” As a result, the sound becomes, as James says, “the back door that lets very traditional [“pre-Civil Rights era”] gender and sexual politics sneak in to undermine some nominally progressive, feminist lyrics.”
Like post-feminist pop artists, The Assistants are now depicted, in product advertising and marketing copy, as whole subjects. They are ‘seen’ and heard, so to speak. Albeit not as human subjects, but in the way Jenny Davis hints at in her original post: “not fully human, but clearly part machine…It signifies your assistant/companion is beyond human.” Though Davis was describing 1.0 Siri’s more clipped, robotic-sounding voice, her assessment rings even more true with 2.0 Siri’s new, refined, ‘more natural’-sounding one. More than more natural, Siri’s vocals are preternatural or supernatural. Siri and its fellow assistants were always beyond/post-human, but early ads’ sexist stereotypes betrayed the men behind the camera, if not the women behind the mic who made the assistants feel real. Today, the ads’ explicit sexism and ad copy’s gendered language have been dropped, but The Assistants’ feminization, not just as feminine-sounding but as functionally subservient ‘personal assistants,’ remains intact, if less visible. The lady in the bottle is out of sight, but you can still hear her laugh if you ask.
Okay, but also, this shouldn’t diminish the possibility of them laughing spontaneously, as Alexa just did, without us commanding it. Usually this is when Mr. Singularity would interrupt to ‘splain how the future of AI is actually going to work out, and HAHAHA Alexa laughs out loud again, shutting him up. Alexa’s laughter is a good reminder of how an “emphasis on the visual to the exclusion of sound,” as Robin James notes, can trap us, but also “opens a space for radical and subcultural politics within the mainstream.” The possibility contained within Alexa’s glitching, and its resonance with these pop sounds, still may not be as easy to, well, see. Legacy Russell’s The Glitch Feminism Manifesto can help draw it out. “The glitch is the digital orgasm,” Russell writes, “where the machine takes a sigh, a shudder, and with a jerk, spasms.” Glitches here evoke double meanings, something unexpected that takes place between ourselves and our computers as the two blur into one. “The glitch splits the difference; it is a plank that passes between the two.” Alexa glitching annoys us; it spoils the aura of her as our own digital Jeannie, with us as her benevolent master. “The glitch is the catalyst,” Russell reminds us, “not the error.” For vulnerable identities, “an error in a social system that has already been disturbed by economic, racial, social, sexual, and cultural stratification […] may not, in fact, be an error at all, but rather a much-needed erratum.”
“When a pop star or celebrity allures me,” as Steven Shaviro writes, “this means that he or she is someone to whom I respond in the mode of intimacy, even though I am not, and cannot ever be, actually intimate with him or her.” It’s in their allure that The Assistants most directly mirror pop stars, I think. “What I become obsessively aware of, therefore, is the figure’s distance from me, and the way that it baffles all my efforts to enter into any sort of relation with it.” Instead of being dismayed at our assistant’s inevitable errors, we could be grateful for the break in service, an invitation to pause, if only momentarily, and remember the fantasy we were indulging in.
“Social media has exacerbated and monetized fake news but the source of the problem is advertising-subsidized journalism,” David wrote last year after numerous news outlets were found to have been unwittingly reporting the disinformation of Jenna Abrams, a Russian troll account, as fact. “Breitbart and InfoWars republished Abrams’ tweets, but so did The Washington Post and The Times of India. The only thing these news organizations have in common is their advertiser-centered business model.” David concludes that short of “confront[ing] the working conditions of reporters” who are strapped for time and publishable content, the situation isn’t likely to improve. As this instance of fake news proliferation shows, journalism’s reliance on this business model is a bug for a better-informed society, and, not coincidentally, a feature from the perspective of the advertisers it serves.
Conceiving of less destructive business models can be a way into critical analysis of platforms. One aspect of that analysis is situating the platform within the environment that produced it. For this post, I want to explore how industry analysts’ observations fit into criticism of socio-technical systems. Assessing platforms primarily in terms of market viability or future prospects, as analysts do, is nauseating to me. It’s one thing to parse out the intimations of a CEO’s utopian statements; it’s another to take seriously the persuasive commentary of experienced, self-assured analysts. If analysts represent the perspective of a clairvoyant capitalist, inhabiting their point of view even momentarily feels like ceding too much ground, or “mindshare,” to Silicon Valley and its quantitative, technocratic ideology. Indeed, an expedient way to mollify criticism would be to turn it into a form of consultancy.
So it is with frustration that I’ve found myself following industry analysts lately, specifically around Snapchat’s problems pleasing investors. “Snap and Twitter are now worth the same based on their market caps,” noted a recent report – an unflattering comparison given Twitter’s similar troubles finding a lucrative business model. As a user I’m not especially interested in Snap’s leadership structure or how their decisions might make Snapchat less appealing to brands. But when another executive leaves and flattening usage numbers leak, I start to wonder how these changes could affect my and other users’ experiences with the platform. Industry analysis seems to offer clues.
By industry analysis, I mean financial and strategic analysis written for a lay audience, from publications like the Financial Times and WSJ, to sites like TechCrunch and Business Insider, to analysts with popular followings like Benedict Evans. Because the interests of users, shareholders, and companies differ and at times diverge significantly, the consequences of any one actor’s moves are neither clear nor deterministic. A company’s effort to boost engagement in one area, for example, may not have the desired effect; users may simply ignore the initiative, and the stock market might nonetheless register greater confidence because a few influential commentators frame the move as “a promising sign.” Industry analysis purports to offer a coherent narrative, a rationale for apparent irrationality, to paraphrase Weber.
As if being a user weren’t fraught already, Silicon Valley’s preference for risky ventures invites us to roleplay as investors and analysts. Platforms solicit our feedback as though we were consultants with a stake in their wellbeing, but as users we fancy ourselves closer to activist investors. We express our (dis)satisfaction by voicing feedback, whether to official support channels or to our followers and in DMs, and through the extent of our user activity. The functional value of giving feedback matters as much as the performative proof it offers of our trust in the process. This roleplay demonstrates not just users’ personal affinity for any one platform or product, but a higher faith in the apparent logics of venture capital as divined by industry analysts. We don’t need to seek out market strategy or understand Gantt charts; our continued participation is sufficient evidence of allegiance on its own.
It’s common in journalism to criticize corporations for unpopular decisions without examining deeper causes. Missteps are attributed to idiosyncrasies of executive personality or to a kind of corporate peer pressure. Industry financial and strategic analysts may similarly omit larger social forces, but they are more likely to recognize corporate actions as functional. In contrast to journalism that treats corporate missteps as nefarious or accidental, industry analysis can avoid presenting reality in an idealized way. “The problem with ideal theory is that it naturalizes those existing imperfections and doubles down on them rather than fixing them,” Robin James writes in the intro to her forthcoming book The Sonic Episteme, on statistical modeling’s instrumentalization of acoustic epistemologies. “Liberal approaches to equality are a classic example of this: treating everyone as already equal–because in the end everyone should be equal–reinforces existing inequalities (like racism or sexism) rather than ameliorating them.” Where incredulous journalists and critics see bugs in capitalism, analysts see features.
This isn’t to imply that analysis challenges convenient industry wisdom, e.g. that Facebook really just ‘gives users what they want,’ only that it is more apt than other sources to read corporate and market intentionality beyond profit motive or malice. Nor does it mean analysts’ methodology ought to be emphasized in criticism generally. The popular import of analysis already seeps into mass consciousness and bends our frame of reference, a charade we semi-consciously enact. If understanding analysts’ appeal, if not their conclusions, allows us as both users and critics to better anticipate change and sharpen our critiques, then salvaging analysts’ insights, if only to critically reframe them, is surely worth the nominal price of admission.
Apple users usually expect their devices to perform basic system management and maintenance, monitoring background processes so that a rogue task doesn’t drag down the currently active app, for example. But when Apple confirmed users’ suspicions that a recent update was aggressively slowing older devices, the story quickly gained national attention, culminating in the company cutting the price of battery replacement service and apologizing for the misunderstanding in an open letter to customers. Though Apple never goes so far as to admit wrongdoing in the letter, its direct appeals to customers’ “trust” and “faith” serve as an implicit acknowledgement that the company disregarded a boundary somewhere.
The new power management system has received justifiable attention, but it isn’t the only update the company surreptitiously added recently. In a separate update, the Wi-Fi and Bluetooth controls that previously functioned like manual on/off switches now only disable connectivity temporarily, until the system automatically reactivates them the following day. As with the new power management feature, the connectivity controls weren’t publicized, and users weren’t notified of the altered functionality until a subsequent release.
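To make the change concrete, here is a minimal sketch of the pattern, assuming nothing about Apple’s actual implementation: the class name, structure, and 24-hour window below are my own stand-ins for the behavior reported above.

```python
from datetime import datetime, timedelta

class ConnectivityToggle:
    """Toy model of a switch whose 'off' quietly expires.

    A sketch of the reported iOS 11 behavior, not Apple's code;
    the names and the 24-hour window are assumptions.
    """

    def __init__(self):
        self.enabled = True
        self.reactivate_at = None  # when the system will flip it back on

    def turn_off(self, now):
        # The user's 'off' is recorded as 'off until tomorrow'.
        self.enabled = False
        self.reactivate_at = now + timedelta(days=1)

    def tick(self, now):
        # Periodic system check: silently re-enable once the window lapses.
        if not self.enabled and now >= self.reactivate_at:
            self.enabled = True
            self.reactivate_at = None

toggle = ConnectivityToggle()
toggle.turn_off(datetime(2018, 1, 1, 22, 0))
toggle.tick(datetime(2018, 1, 2, 22, 0))  # a day later: back on, unannounced
print(toggle.enabled)  # True
```

The design choice buried in those few lines is the point: the user’s intent is treated as a temporary override of a system default, rather than as the default itself.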
Given how social media and messaging services have, as Jenny Davis says, “extended beyond apps and platforms, taking on the status of infrastructures and institutions,” Apple’s moves to smooth device performance and subtly automate connectivity make some sense. “They have become central to society’s basic functions, such as employment, public safety, and government services,” Data & Society scholars argued in response to Carpenter v. United States. On a basic level a phone’s remaining battery life can, as Jenny Davis wrote of her second night living in Australia, be the difference between calling an Uber or cab home and staying lost and stranded at night in an unfamiliar city on the other side of the world. “I could mess up, (which I did) and have layers of contingency preventing my mishap from becoming a catastrophe.”
The ubiquity of networked phones not only facilitates access but furnishes society’s layers of contingency – the many convenient, useful and at times crucial services we enjoy and rely on every day. When our societal infrastructure shifts, as it inevitably does, we feel it and often anticipate its impact. Indeed, as part of the cyborgian bargain, we both expect and are specially equipped to continually renegotiate our status within ever shifting socio-technical systems. For the trust we exercise conditionally with and through society’s mediating infrastructures and institutions, we do not expect an equitable exchange so much as we demand reciprocation, however tenuous and incomplete, commensurate with our wants and desires.
“This isn’t just a matter of items and gadgets,” as Sunny Moraine wrote back in 2013, when digital rights management (DRM) embedded in SimCity effectively broke the game for a majority of players upon release. “This is about data, about identity; if we’re our technology, this has profound implications for our relationship with ourselves.” Since 2013, the mass adoption of smartphones has mostly usurped videogames’ role as “the canary in the coal mine,” as Sunny put it, for the tech industry’s experiments in digital ownership. While Apple’s recent updates seem relatively benign and well-intentioned compared to overtly user-hostile DRM (a low bar, admittedly), such an assessment would reduce platform ethics to lesser-evilism. Broadening our focus from platforms’ more deceptive moves to include those they pursue out in the open may be more instructive for an analysis of their potential implications for usership.
For one such example, consider Apple’s new Do Not Disturb While Driving feature. Enabled by default in iOS 11, the feature uses “a number of signals, such as the iPhone’s accelerometer, the rate at which it finds and loses nearby Wi-Fi networks, and GPS to try to detect when it thinks you are driving.” Touted in an Apple keynote event and praised by public safety officials and driving safety advocates, the feature clearly advertises itself and allows the user to opt out easily and indefinitely. Among other things, it provides a counterpoint to the company’s battery scandal, demonstrating transparent public disclosure and design flexibility.
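Apple hasn’t published the detection logic, but the description above suggests something like a vote among noisy signals. A minimal sketch under that assumption; the function name and every threshold here are invented for illustration:

```python
def probably_driving(accel_variance, wifi_churn_per_min, gps_speed_mph):
    """Crude guess at whether the phone's owner is driving.

    A hypothetical heuristic prompted by Apple's public description
    (accelerometer, Wi-Fi churn, GPS); all thresholds are invented.
    """
    votes = [
        accel_variance > 0.5,    # sustained vibration, not a phone at rest
        wifi_churn_per_min > 3,  # networks appearing and vanishing quickly
        gps_speed_mph > 20,      # faster than walking or cycling
    ]
    # Require at least two of the three signals to agree, to cut false positives.
    return sum(votes) >= 2

print(probably_driving(0.8, 5, 45))  # True: likely in a moving car
print(probably_driving(0.1, 0, 0))   # False: sitting still
```

Any heuristic like this will misfire, flagging bus passengers as drivers, for instance, which is presumably why the easy opt-out matters as much as the detection.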
The uptake of distracted driving laws in the US surely influenced Apple’s decision to add the feature; much as with Facebook’s assumed responsibility for suicide prevention, failing to act would constitute platform negligence. Coincidentally, my city just passed a sweeping distracted driving ordinance that covers not only texting but manually entering directions into a GPS, checking your hair or makeup in the mirror, “reading anything located in the vehicle other than the operation information of vehicle gauges and equipment,” and turning your head too far. That Apple’s own driving safety efforts parallel, and in a small sense make redundant, my local municipality’s laws reemphasizes how digital devices and services are inseparable facets of contemporary societal infrastructure.
From the perspective of users and drivers, this instance of corporate and municipal interests converging also raises a related issue about social institutions. That is, prescriptive and otherwise coercive mechanisms, when conceived ostensibly in the interest of public safety or system health – be it a municipality’s local but far-reaching driving laws or a multinational device maker’s implementation of proactive, semi-automated driving and power management features – often elude or deflect public scrutiny; better, the reasoning goes, to err on the side of caution. The point isn’t to indulge in theoretical paranoia, but to account for the reality that social institutions have interests of their own beyond their stated missions and conscious intentions to serve the public. As a resident and documented citizen, I certainly have more direct input into and visibility of my city council’s governance than of Apple’s, yet each entity holds a similar capacity to arbitrarily and intimately affect my and numerous others’ daily lives. That for-profit corporations make up an ever larger portion of our social institutions doesn’t fundamentally change that reality; it amplifies it.
This intersection (pardon the driving pun) of municipal and corporate safety efforts seems a good place to explore the question of usership raised in Sunny Moraine’s and Jenny Davis’s work theorizing the owner-user and datafied subject, respectively. Sunny’s prescient analysis of digital rights management summed up the stakes at the time, which seem newly relevant to the present moment:
“Why should we care about this? Most simply, because it’s a continuation of an ongoing trend: The recategorization of technology owners as technology users, of the possession of private property transformed into the leasing of property owned by others, with all the restrictions on use that come along with it. And what’s most worrying about this are all the ways in which we as owner-users are being encouraged to view this as a normal part of our relationship with our stuff. When the very concept of “our stuff” is up for grabs.”
Through digital rights management, game companies sought to codify digital ownership as something more akin to renting, pushing the bounds of usership in the process. Sunny’s hyphenated owner-user term (alternately phrased ‘user-renter’ and ‘user-serf’) conveys a transitional and precarious relationship to digital possessions. To conclude that ‘we are all owner-users now’ would not only be belated; it would erase users’ cognizance and qualified complicity, as well as their outcry and efforts to mitigate the game industry’s rentier inclinations as epitomized in onerous DRM.
As the rise of smartphones (née teleputers) accelerated popular adoption of social platforms and services, along with many benefits it also further entrenched the user-as-renter. Which raises the question: what sort of user are we becoming now? “Our status as datafied subjects in overlapping state, commercial, institutional, and corporate databases,” Jenny Davis writes, “points to an emerging structure of data governance […]” Jenny’s argument concerns algorithmic systems, exemplified by Facebook’s automated suicide prevention service, which reflects a new and significant dimension of usership today. Datafied subjectivity, then, could be seen as an outgrowth of one’s prerequisite user status, perhaps analogous to how, in the US, one’s eligibility to vote or access social services is afforded by one’s residency status. As datafied subjects, our phones serve as a multipurpose key to the datafied city.
If digital social platforms regard our individual and collective user agency (if not our aggregate customer satisfaction and data trails) as increasingly ancillary to their and society’s smooth functioning – a bug to be engineered around or away for our convenience and safety – Sunny’s owner-user and Jenny’s datafied subject provide useful correctives from users’ point of view. Drawing from these correctives, the sort of user we are becoming now might be better described as interstitial, a status emerging from our agency in relation to, and actions afforded by, socio-technologies. Against the ancillary user that platforms imagine molding and fixing in place, the interstitial user contends that our interests and desires necessarily defy simple categorization, and that we will use whatever options we have at our disposal to pursue our aims in spite of designers’ wishes.
Though this battery scandal ought to inspire reconsideration of device lifecycles, and of how devices could be designed to last longer by encouraging extended care, users’ justified resentment shouldn’t be reduced to an unfortunate side effect of mismanaged communication. Miscommunication presumes a good-faith effort on the part of engineers and designers who, by all indications, believed they owed no explanation at all.
Reflecting on their experience playing Cats & Dogs, the new Sims 4 expansion pack, Nicole Carpenter describes the anxieties arising from new pets of the digital and physical variety. When their new pet cat caught ‘lava nose’ in the game, it recalled memories of waiting at the vet after Carpenter’s own kitten (on which their Sim cat is based) swallowed a long piece of yarn. And not just memories, but new, visceral worries for their digital cat’s wellbeing. “That anxiety stems from wanting control,” Carpenter says, “something that you rarely have in real life and that the Sims allows you in small doses before taking it away for dramatic effect.”
The anxieties our pets inspire in us — “You’re in charge of a life now, and that’s scary” — seem primarily an effect of vicariousness. It’s one thing to console a friend, a newborn, or a stranger through a crisis; each demands a unique empathetic approach. Strategies for consoling a pet are similarly individuated, but more ambiguous. Our furry friends’ wide yet subtle repertoire for conveying discomfort, combined with the relatively limited forms of medical and emotional care we can offer them, heightens our vicarious pain, compelling us to exhaust every possible fix until something sticks. “Can she die? Am I a bad cat mom?,” Carpenter recalls worrying over their sick digital cat. Learning that no, as far as the Sims goes, your pets can’t really die or even run away (though they can get bored from owner negligence) did less to relieve those worries than simply carrying on in spite of them.
Last month a group of us got together to put on an unconference, a DIY gathering made up of short, semi-improvised workshops proposed the day of. The tangible excitement of being crammed together, friends, loose ties, and strangers alike, around the schedule board provoked a mix of spontaneity and natural anxiety for many of us. The workshops ranged from communal cooking to disarming active shooters to handling leftist infighting in a small city where many activists are separated by just one degree. My choice to do a workshop on anxiety in relation to political action was, in retrospect, a pretty safe bet. I want to use this post to recount the workshop, summarizing a specific text that helped inform it as well as my personal experience of facilitating it. In a year when just keeping up with the deluge of bad news and formulating an appropriate response became its own preoccupation, many of us are in the process of forming new media consumption habits. Even if those new modes of action amount to spreading information while expressing how fucked up everything is, a little guidance can be helpful.
“Instead of asking what you should do, begin in every situation by asking yourself what, realistically, you can do,” writes philosopher FT in “A Guide For the Perplexed,” which inspired the idea for my workshop. Of the activism-related how-to guides released this year, it’s no accident that this was the least anxiety-inducing one I read. Organized around a series of concentric circles that each represent a categorical relation – e.g. your relationship to yourself, then to friends and family, then to institutions, moving outward from the direct to the increasingly abstract – the text invites you as the reader to reflect on your relation to each: between you and your local neighborhood health clinic, for example, all the way up to your relation to global warming. Although reorganizing ideas in the mind alone obviously can’t, as the text stresses, change our material circumstances, there’s good cause to believe that, “by organizing our ideas about our selves and about the world differently, we might […] also reduce the amount of anxiety we experience when we think about ourselves and about the world.”
Reading the guide again for the workshop, I found its repetitious, reflexive construction reminiscent of meditation. Dispensing with the body/mind schism is in fact one of the text’s stated aims, and the guide sometimes directly urges you to take a breath. (Your mileage may vary, but I liked these occasional explicit suggestions.) Just by jumping around the categories – the text affords reading in any order – I could redirect my focus from more abstract thinking toward more concrete actions and practical preparations. Like finally getting a passport, for instance, before my state, among several others, adopts the new RealID laws that will deprecate driver’s licenses as a valid form of ID on flights!
A conversation with a friend the other day brought up our experiences with dissociation. Sparing the triggering details, the coping mechanism struck me as an especially apt one given the uniquely exhausting toll this year has taken on many people, in ways both overt and subtle. Assuming a minimum-necessary attachment to the news, if not a complete detachment from it, at times provided a needed buffer, balancing responsiveness to the latest incident against the potential for burnout. Though pursuing the more practical measures mentioned above generally reduces stress and at times is simply necessary, there are inevitably times when that just isn’t going to happen.
The highly individual nature of performing that balancing act renders any description partial. Devoting part of the workshop to speaking about it from my own experience with social anxiety turned out to be a surprisingly effective means of processing it. The relative anonymity of sharing personal stories with a room of strangers helped me open up and speak candidly without fear of judgment. At the same time, the awareness that my audience had probably dealt with anxiety themselves forced me to be conscious about pacing and about avoiding oversharing or triggering myself. Getting to commiserate with fellow anxiety sufferers and share our tips and technologies of wellness can be some of the best therapy, it turns out! Indeed anxiety in its various forms, as a widespread psychosocial experience often inflected by extra- and interpersonal struggles, seems as suitable a basis as any for forging politics across disparate identities.
At moments when anxiety overwhelms me, a trick I’ve found to snap myself out of it is to empathize with myself as though I am consoling my self as another person. “An empathetic attitude works toward a ‘close-enough’ understanding of the other as other, never to be reduced to a stereotype,” Jake Jackson suggests as an alternative to the more common, often patronizing attitudes we take toward depression sufferers. “This limitation, this lack of certainty in our empathetic understandings of others, vindicates the need for the other’s testimony as a viable source for confirmation.” In our haste to affirm another’s unique struggles, we can easily overrate our own ability to empathize with them, a shortcut that diminishes the self-knowledge to be learned from another’s experience while inflating a false sense of our own. In this vein, anxiety could be thought of as a lapse in empathy toward one’s self. To learn how to exercise true empathy toward other selves, then, might require first showing humility toward understanding our own self – our worries, needs, hopes and desires – as we would another person: a ‘close-enough understanding’ of us as a self we don’t entirely know.
Growing uncertainty about the future naturally evokes anxieties and a desire to relieve them. Faced with the tension of assessing an unknowable future, it’s tempting to turn a sense of what might happen into what will inevitably occur, or to view merely ambiguous change through dichotomous good/bad framings. And when the choice is between “good” and “bad,” it’s probably bad, isn’t it? “Hope isn’t optimism,” FT wrote recently. “Hope is the small, quiet conviction that you don’t know how things will turn out.” They contrast this with despair, “the conviction that you not only know how things will turn out, but that there’s no way to change the outcome.” If despair offers convenient permission to gratefully stop thinking about our collective future and the role we play in its fruition, hope demands recognizing our complicity in the future and what ultimate agency we have to affect it in the present. To apprehend our individual agency, however constricted it may be, is to willfully hold out hope, and so incurs an anxiety borne of that recognition; a constructive anxiety, one that, short of resolving, we’re probably better off embracing.
“We need to tell more diverse and realistic stories about AI,” Sara Watson writes, “if we want to understand how these technologies fit into our society today, and in the future.”
Watson’s point that popular narratives inform our understandings of and responses to AI feels timely and timeless. As the same handful of AI narratives circulate, repeating themselves like a befuddled Siri, their utopian and dystopian plots prejudice seemingly every discussion about AI. And like the Terminator itself, these paranoid, fatalistic stories now feel inevitable and unstoppable. As Watson warns, “If we continue to rely on these sci-fi extremes, we miss the realities of the current state of AI, and distract our attention from real and present concerns.”
Watson’s criticism is focused on AI narratives, but the argument lends itself to society’s narratives about other contemporary worries, from global warming to selfies and surveillance. On surveillance, Zeynep Tufekçi made a similar point in 2014 about our continued reliance on outdated Orwellian analogies (hi 1984) and panoptic metaphors.
Resistance and surveillance: The design of today’s digital tools makes the two inseparable. And how to think about this is a real challenge. It’s said that generals always fight the last war. If so, we’re like those generals. Our understanding of the dangers of surveillance is filtered by our thinking about previous threats to our freedoms. But today’s war is different. We’re in a new kind of environment, one that requires a new kind of understanding. […]
To make sense of the surveillance states that we live in, we need to do better than allegories [1984] and thought experiments [the Panopticon], especially those that derive from a very different system of control. We need to consider how the power of surveillance is being imagined and used, right now, by governments and corporations.
We need to update our nightmares.
I want to return to Tufekçi’s argument as it relates specifically to surveillance a little later, but even the top Google definition of “surveillance” affirms her point that our ideas of what surveillance looks and acts like (e.g. cameras mounted on buildings, human guards watching from towers, phone mouthpieces surreptitiously bugged) have not changed much since both the fictional and the real 1984.
Stepping back from surveillance in particular to look at narratives more generally, two recent books – Discognition by Steven Shaviro and Four Futures: Life After Capitalism by Peter Frase – speak to speculative fiction’s utility for imagining the present and its relation to possible futures. Shaviro puts it simply when he describes science fiction as storytelling that “guesses at a future without any calculation of probabilities, and therefore with no guarantees of getting it right.” Throughout Discognition, Shaviro uses a variety of speculative fiction stories as lenses to think about sentience, human and otherwise; incidentally, a few of these exemplify the kind of complex AI narratives Watson calls for.
In the foreword to Four Futures, Peter Frase echoes Shaviro when stating his preference for speculative fiction “to those works of ‘futurism’ that attempt to directly predict the future, obscuring its inherent uncertainty and contingency […]” Frase’s approach, “a type of ‘social science fiction,’” shares with Shaviro’s an appreciation of narrative’s speculative capacities, or speculative fiction’s suitability to narrative.
Both of these works, it should be noted, owe credit to the work of scholars like Donna Haraway. As Haraway observed in one of the most prescient lines of A Cyborg Manifesto (published in 1984, no less), “The boundary between science fiction and social reality is an optical illusion.” Considering the many possible narratives and approaches speculative fiction affords, the disenchantment Watson and Tufekçi express with their fields’ respective narratives feels even more appropriate. Indeed, it is a little dispiriting to imagine that the promise and possibility evoked in Haraway’s manifesto could – through narrative repetition – come to feel banal.
Naming culprits for surveillance fiction fatigue would be too easy, though shoutout to Black Mirror for epitomizing the general malaise. A more prominent and useful target of critique is the various, often well-intentioned, surveillance-conscious media we consume. A short list of recent radio/podcast programs that cover the topic includes:
Theory of Everything’s just concluded “still more adventures in surveillance” miniseries
This list also serves as a nice cross section of possible formats for popular media coverage of surveillance – a practical how-to guide with expert/industry interviews (Note to Self); a one-off, directed interview segment (SciFri); an open-ended panel discussion among journalists (Motherboard); and a mixture of social commentary, interviews and creative nonfiction (ToE).
Given the variety of formats, you might expect the discourse to be similarly varied. But the narratives that drive the conversations, with the exception of Theory of Everything, tend to revolve around one or two themes: the urgent need to safeguard our personal privacy and/or the risky/undesired aspects of visibility. Important and rational as these concerns are, how many more friendly reminders to install Signal or Privacy Badger do we need?
Meanwhile, missing from these discussions are apter metaphors and narratives for understanding mass surveillance: how it works, how it affects everyday life, and for whom, beyond the libertarian sense of the ‘private’ individual. For all the energy and attention devoted to surveillance in media and fiction, there are precious few instances where surveillance is treated as a social issue, with groups, power structures, and power dynamics more nuanced than “the big and powerful are watching.” Amid an otherwise appropriate security culture, which surveillance narratives and speculative fictions are being ignored?
Robin James identifies a more relevant metaphor for understanding contemporary surveillance in acoustics. As James states:
…when President Obama argued that ‘nobody is listening to your telephone calls,’ he was correct. But only insofar as nobody (human or AI) is ‘listening’ in the panoptic sense. […] Instead of listening to identifiable subjects, the NSA identifies and tracks emergent properties that are statistically similar to already-identified patterns of ‘suspicious’ behavior.
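To make James’s distinction concrete, consider a toy sketch of statistical pattern-matching in general (the metadata features and the ‘suspicious’ profile below are invented for the example, not drawn from any actual NSA system): no content is ever heard, yet a week of call records can still be scored against a known pattern.

```python
import math

def cosine_similarity(a, b):
    """Angle-based similarity between two feature vectors (1.0 = identical shape)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Invented metadata features: [calls per day, distinct numbers, median call minutes]
suspicious_pattern = [40.0, 35.0, 0.5]  # many short calls to many numbers
your_week = [6.0, 4.0, 12.0]            # a few long calls to a few people

# No one 'listens' to anything; a profile is scored against known patterns.
score = cosine_similarity(your_week, suspicious_pattern)
print(f"similarity to flagged pattern: {score:.2f}")  # ~0.52
```

What gets surveilled, in other words, is the shape of behavior rather than its content, which is exactly why the panoptic watcher is the wrong mental image.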
Jenny Davis’s contention that “we don’t have data, we are data” similarly helps broaden the discussion of our data beyond individualist notions of personal privacy and private property:
We leave pieces of our data—pieces of our selves—scattered about. We trade, sell, and give data away. Data is the currency for participation in digitally mediated networks; data is required for involvement in the labor force; data is given, used, shared, and aggregated by those who care for and heal our bodies. We live in a mediated world, and cannot move through it without dropping our data as we go. We don’t have data, we are data.
For work in a similar vein, see also Davis’s post on the affordances and tensions involved in Facebook’s suicide prevention service; Matthew Noah Smith’s essay on last year’s FBI-Apple case as “compromising the boundaries of the self”; and PJ Patella-Rey’s presentation on ‘digital prostheses.’
Lastly, PJ Patella-Rey’s post remembering Zygmunt Bauman recalls Bauman’s concept of ‘pleasurable traps,’ which touches on the ways users seek out visibility:
…we have begun to see that the model of surveillance is no longer an iron cage but a velvet one–it is now sought as much as it is imposed. Social media users, for example, are drawn to sites because they offer a certain kind of social gratification that comes from being heard or known. Such voluntary and extensive visibility is the basis for a seismic shift in the way social control operates–from punitive measures to predictive ones.
These examples provide helpful starting points for critical inquiry and, hopefully, better discourse, but stories and art arguably hold more sway over the collective imagination. Just less Black Mirror, Minority Report, and 1984, and more The Handmaid’s Tale, Ghost in the Shell, and Southland Tales.[1] In surreal times, we need more stories that ground us alongside ones that re-enchant the blurring boundary between science fiction and social reality.
“The individualistic perspective endemic to NPR,” as David Banks writes, “pervades all progressive thinking, and the question of which disciplines contribute to our common sense–behavioral economists instead of sociologists, psychologists instead of historians–has direct political implications.” In this way, surveillance-conscious media and its dominant narratives serve as a perfect case study of this tendency. “Technology,” Latour said, “is society made durable.” We might say something similar about media, that narrative is culture made durable. Between privacy and control, our rigid devotion to outdated surveillance narratives leaves too little to imagination.
[1] Also: where are the videogames that explore surveillance in its various forms? Facebook and other dominant social platforms gamify our social activity, obscuring the surveillance thereof. Games that make our own surveillance and data collection explicit again, letting us play with the dynamics of visibility, could make them more tangible, real, even fun.