Echoes of the iPod clickwheel in the BeatsMusic touch interface.

YouTube Preview Image

In an earlier post, I talked about Apple’s 86ing of the iPod Classic, the one with the clickwheel interface rather than the touchscreen interface. There was plenty of iPod nostalgia as news of the clickwheel iPod’s discontinuation spread, including this piece, which focuses on the aesthetics of the clickwheel as an interface.

Though the touchscreen is often seen as replacing the clickwheel, I think the clickwheel has influenced touchscreen music interfaces. As you can see in the video above, the BeatsMusic touch interface echoes the iPod clickwheel. Just as you use the iPod clickwheel to fast-forward or rewind or jump around in a track (press the center key, then slide back or forward on the wheel to place the cursor on the track’s progress bar at the bottom of the screen), you use Beats’ circular touch interface to fast-forward or rewind the currently playing track. (Beats is owned by Apple, so this resonance isn’t surprising; however, I don’t know whether the Beats touchscreen wheel was developed before the company was acquired by Apple.) So, to paraphrase a line from L.A. Style’s “James Brown Is Dead,” maybe we shouldn’t be misled when the newsman said the iPod clickwheel is dead?

You could argue that the Beats interface also echoes turntable interfaces… and that’s not wrong, but when I scroll around the circular wheel on my iPhone 5’s touchscreen, it much more closely and directly echoes the iPod Classic than it does a record turntable. In fact, I’d argue that the clickwheel itself echoes turntablism (has anyone written on this?), so the resonance with turntables is included in the Beats interface’s resonance with the iPod clickwheel.

Clearly the clickwheel has had a lasting impact on digital music interface design. Are there other examples, besides the BeatsMusic interface, that y’all can think of?

A Few Notes for STS on Big Data

In the 60s there was this flourishing of _________ Studies departments across Western academe. Women’s Studies, Cultural Studies, American Studies, Urban Studies, African American Studies, and Science and Technology Studies set up shop in large universities and small colleges and slowly but surely created robust intellectual communities of their own. These interdisciplinary fields of study sought to break apart centuries-old notions about the noun that came before “studies.” It was a radical idea for the social and behavioral sciences that now seems somewhat banal: focusing an entire department on a subject, rather than a method or tradition, allowed researchers to pursue pressing issues at the expense of traditional methodological barriers. One could easily argue that this approach produced some of the most influential academic and popular writing of the 20th century. The 21st century has seen an unfortunate decline in these institutions, and the complex problems they sought to investigate and mitigate have come roaring back in uncanny ways. (more…)

An Ethic of Prosumptive Sharing


The contemporary information economy is made up of prosumers—those who simultaneously produce and consume. This is exciting, as we lay-folk become micro-journalists, creating content and spreading what others create. However, such a system poses serious questions about the ethics of sharing practices.

In what follows, I offer a skeleton guideline for the ethics of sharing. It is purposely broad so as to remain flexible. I offer three key guiding principles: Who always matters; Intention always matters; and The law is a really good suggestion. (more…)

Meanwhile, in Philosophy…

You may have heard that academic philosophy is in the middle of an identity crisis. The Philosophical Gourmet Report (aka the Leiter Report, after its founder Brian Leiter) has been central to English-language academic philosophy’s self-concept. It has defined what counts as good and/or real philosophy for nearly 20 years. But in the last few weeks the administration and the validity of the PGR have been called into question by the parts of the discipline that had, up till now, supported it. If you want to read up on the scandal and the ensuing debate, check out Leigh Johnson’s “Archive of the Meltdown.”

At the heart of the debate is whether ranking philosophy departments and programs is something we ought to do in the first place. For reasons articulated here and here, I don’t think ranking philosophy programs (or any discipline’s programs) is something we ought to do. Rankings actively discourage meaningful diversification of a very non-diverse discipline, and help reinforce existing inequities.

One thing that actively encourages meaningful diversification of philosophers and philosophical practices is PIKSI, the Philosophy in an Inclusive Key Summer Institute. This is a summer program for philosophy students from groups traditionally underrepresented in the discipline; the point is to give them the encouragement and tools they need to successfully apply to graduate school in philosophy, and thus help remedy the discipline’s “pipeline problem.”

However, though the American Philosophical Association has traditionally funded PIKSI, this year it chose not to. So, PIKSI’s organizers are running a crowdfunding campaign. Herreticat from the XCPhilosophy blog does a great job explaining why you ought to donate to PIKSI:

The subject matter of PIKSI is also crucial to its success. By focusing on the relationship between lived experience and philosophical reflection, the institute emphasizes the importance of students bringing their own concerns and questions to the table. Many students remark that the institute is the first time they learn there are “philosophers like them” and that they could have a role to play in philosophy. It is often the first time students participate in seminar discussions about anti-racist and feminist philosophical work. The testimonials in the PIKSI video also demonstrate the importance of this approach.

If you are a first generation college student, PIKSI could entail learning about what going to graduate school even means. If you are a low income student, it might mean learning concrete details about graduate stipends and having conversations with people who understand what it is like to have to support your family while in grad school. It involves talking to other people who get it about what it is like to be the only queer person of color, or woman with a disability, or first generation college student, and how to find the community that will allow you to not just get through, but thrive. It gives you email addresses and phone numbers and support networks.

Here’s where to go to donate. I know most of Cyborgology’s readers aren’t academic philosophers, but you should still care about diversity in philosophy (a) if you care about ideas, and/or (b) if you care about social justice in general.

Accessibility in Higher Education: The TEACH Act


Pic via: The Accessible Icon Project

Let me start by saying, accessibility is a human rights issue, not an afterthought. Frankly, it’s an insult to people with disabilities that access is even a subject of debate. And yet…

The Technology, Equality, and Accessibility in College and Higher Education Act (i.e., the TEACH Act) is currently under debate in Congress. The legislation requires that technologies used in college classrooms be accessible to all students, including students with disabilities. It is entirely possible that you have not heard of the TEACH Act, but for those whom it most affects—students with bodies that deviate from the norm—the stakes are quite high. The bill has some strong support, but also strong opposition from surprising sources. (more…)

Apple’s Health App: Where’s the Power?


In truth, I didn’t pay a tremendous amount of attention to iOS 8 until a post scrolled by on my Tumblr feed, which disturbed me a good deal: the new iteration of Apple’s OS included “Health”, an app that – among many other things – contains a weight tracker and a calorie counter.

And can’t be deleted.


Ello: The Luxury Bicycle of Social Networks

A Budnitz Bike in its natural habitat. Source.

Paul Budnitz describes himself as a “serial entrepreneur,” having created other companies that make artisanal toys and luxury bicycles. He’s also the creator/founder/president/charismatic leader of Ello. And when a social network launches with a manifesto that proudly proclaims “You are not a product,” there’s more on the line than embedded video support. Despite the radical overtures of the initial launch, we shouldn’t expect any more from Ello than we would from a luxury bicycle. (more…)

Why I Black Out Twitter Handles on Blog Posts


Blacked out Twitter image from my post last week

Netiquette. I seriously hate that word. BUT an issue of internet-based-etiquette (blogger etiquette, specifically) recently came to my attention, and I’m interested in others’ practices and thoughts.

As a blogger, I often analyze content from Facebook and Twitter. In doing so, I usually post images of actual tweets, comments, and status updates. These are forms of data, and are useful in delineating the public tenor with regard to a particular issue, the arguments on opposing sides of a debate, and the ‘voice’ with which people articulate their relevant thoughts and sentiments.

As a common practice, I black out all identifying information when reposting this content. Last week, I posted some tweets with the names and images redacted. A reader commented on my post to ask why I did so, given that the tweets were public. We had a quick discussion, but, as I mentioned in that discussion, this issue deserves independent treatment. (more…)

Visible Social Identities vs Algorithmic Identities

This is cross-posted from xcphilosophy.

Traditionally, social identities (race, gender, class, sexuality, etc.) use outward appearance as the basis for inferences about inner content, character, or quality. Kant and Hegel, for example, inferred races’ defining characteristics from the physical geography of their ‘native’ territories: the outward appearance of one’s homeland determines one’s laziness or industriousness. [1] Stereotypes connect outward appearance to personality and (in)capability; your bodily morphology is a key to understanding if you’re good at math, dancing, caring, car repair, etc. Stereotypes are an example of what Linda Martin Alcoff calls “our ‘visible’ and acknowledged identity” (92). The attribution of social identity is a two-step process. First, we use visual cues about the subject’s physical features, behavior, and comportment to classify them by identity category. We then make inferences about their character, moral and intellectual capacity, tastes, and other personal attributes, based on their group classification. As Alcoff puts it, visual appearance is taken “to be the determinate constitutive factor of subjectivity, indicating personal character traits and internal constitution” (183). She continues, “visible difference, which is materially present even if its meanings are not, can be used to signify or provide purported access to a subjectivity through observable, ‘natural’ attributes, to provide a window on the interiority of the self” (192). An identity is a “social identity” when outward appearance (i.e., group membership) itself is a sufficient basis for inferring/attributing someone’s “internal constitution.” As David Halperin argues, what makes “sexuality” different from “gender inversion” and other models for understanding same-sex object choice is that “sexuality” includes this move from outward features to “interior” life.
Though we may identify people as, say, Cubs fans or students, we don’t usually use that identity as the basis for making inferences about their internal constitution. Social identities are defined by their dualist logic of interpretation or representation: the outer appearance is a signifier of otherwise imperceptible inner content.

Social identities employ a specific type of vision, which Alia Al-Saji calls “objectifying vision” (375). This method of seeing is objectifying because it is “merely a matter of re-cognition, the objectivation and categorization of the visible into clear-cut solids, into objects with definite contours and uses” (375). Objectifying vision treats each visible thing as a token instance of a type. Each type has a set of distinct visual properties, and these properties are the basis upon which tokens are classified by type. According to this method of vision, seeing is classifying. Objectifying vision, in other words, only sees (stereo)types. As Alcoff argues, “Racism makes productive use of this look, using learned visual cues to demarcate and organize human kinds. Recall the suggestion from Goldberg and West that the genealogy of race itself emerged simultaneous to the ocularcentric tendencies of the Western episteme, in which the criterion for knowledge was classifiability, which in turn required visible difference” (198). Social identities are visual technologies because a certain Enlightenment concept of vision–what Al-Saji called “objectifying vision”–is the best and most efficient means to accomplish this type of “classical episteme” (to use Foucault’s term from The Order of Things) classification. Modern society was organized according to this type of classification (fixed differences in hierarchical relation), so objectifying vision was central to modernity’s white supremacist, patriarchal, capitalist social order.

This leads Alcoff to speculate that de-centering (objectifying) vision would in turn de-center white supremacy:

Without the operation through sight, then, perhaps race would truly wither away, or mutate into less oppressive forms of social identity such as ethnicity and culture, which make references to the histories, experiences, and productions of a people, to their subjective lives, in other words, and not merely to objective and arbitrary bodily features (198).

In other words, changing how we see would change how society is organized. With the rise of big data, we have, in fact, changed how we see, and this shift coincides with the reorganization of society into post-identity MRWaSP. Just as algorithmic visualization supplements objectifying vision, what John Cheney-Lippold calls “algorithmic identities” supplement and in some cases replace social identities. These algorithmic identities sound a lot like what Alcoff describes in the preceding quote as a positive liberation from what’s oppressive about traditional social identities:

These computer algorithms have the capacity to infer categories of identity upon users based largely on their web-surfing habits…using computer code, statistics, and surveillance to construct categories within populations according to users’ surveilled internet history (164).

Identity is not assigned based on visible body features, but according to one’s history and subjective life. Algorithmic identities, especially because they are designed to serve the interests of the state and capitalism, are not the liberation from what’s oppressive about social identities (they often work in concert). They’re just an upgrade on white supremacist patriarchy, a technology that allows it to operate more efficiently  according to new ideals and methods of social organization.

Like social identities, algorithmic identities turn on an inference from observed traits. However, instead of using visible identity as the basis of inference, algorithmic identities infer identity itself. As Cheney-Lippold argues, “categories of identity are being inferred upon individuals based on their web use…code and algorithm are the engines behind such inference” (165). So, algorithms are programmed to infer both (a) what identity category an individual user profile fits within, and (b) the parameters of that identity category itself. A gender algorithm “can name X as male, [but] it can also develop what ‘male’ may come to be defined as online” (167). “Maleness” as a category includes whatever behaviors are statistically correlated with reliably identified “male” profiles: “maleness” is whatever “males” do. This is a circular definition, and that’s a feature, not a bug. Defining gender in this way, algorithms can track shifting patterns of use within a group, and/or with respect to a specific behavior. For example, they could distinguish between instances in which My Little Pony Friendship Is Magic is an index of feminine or masculine behavior (when a fan is a young girl, and when a fan is a “Brony”). Identity is a matter of “statistical correlation” (170) within a (sub)group and between an individual profile and a group. (I’ll talk about the exact nature of this correlation a bit below.)
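The circular logic described above can be made concrete with a toy sketch. This is purely hypothetical code (no real ad platform works exactly this way, and all the names and numbers are invented for illustration): “maleness” is defined as nothing more than the average behavior of profiles currently labeled “male,” and that average is itself re-computed as profiles are relabeled, so the category and its members define each other in a loop.

```python
def assign_and_update(profiles, centroids, rounds=5):
    """Alternate between (a) labeling each profile by its nearest category
    centroid and (b) redefining each centroid as the mean behavior of the
    profiles labeled with it -- the category is whatever its members do."""
    labels = {}
    for _ in range(rounds):
        for name, vec in profiles.items():
            # (a) infer which identity category this profile fits within
            labels[name] = min(
                centroids,
                key=lambda c: sum((v - w) ** 2 for v, w in zip(vec, centroids[c])),
            )
        for cat in centroids:
            members = [profiles[n] for n, c in labels.items() if c == cat]
            if members:
                # (b) redefine the category itself from its members' behavior
                centroids[cat] = [sum(dim) / len(members) for dim in zip(*members)]
    return labels, centroids

# Invented behavior vectors: e.g. [sports-site visits, makeup-tutorial views]
profiles = {"u1": [9.0, 1.0], "u2": [8.0, 2.0], "u3": [1.0, 9.0], "u4": [2.0, 8.0]}
centroids = {"male": [10.0, 0.0], "female": [0.0, 10.0]}
labels, centroids = assign_and_update(profiles, centroids)
```

After a few rounds the “male” centroid has drifted away from its initial seed toward what the labeled profiles actually do, which is the point: the definition tracks the behavior, not the other way around.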

This algorithmic practice of identity formation and assignment is just the tool that a post-identity society needs. As Cheney-Lippold argues, “algorithms allow a shift to a more flexible and functional definition of the category, one that de-essentializes gender from its corporeal and societal forms and determinations” (170). Algorithmic gender isn’t essentialist because gender categories have no necessary properties and are constantly open to reinterpretation. An identity is a feedback loop of mutual renegotiation between the category and individual instances. So, as long as an individual is sufficiently (“statistically”) masculine or feminine in their online behavior, they are that gender–regardless, for example, of their meatspace “sex.” As long as the data you generate falls into recognizably “male” or “female” patterns, then you assume that gender role. Because gender is de-essentialized, it seems like an individual “choice” and not a biologically determined fact. Anyone, as long as they act and behave in the proper ways, can access the privileges of maleness. This is part of what makes algorithmic identity “post-identity”: privileged categories aren’t de-centered, just expanded a bit and made superficially more inclusive.

Back to the point about the exact nature of the “correlation” between individual and group. Cheney-Lippold’s main argument is that identity categories are now in a mutually-adaptive relationship with (in)dividual data points and profiles. Instead of using disciplinary technologies to compel exact individual conformity to a static, categorical norm, algorithmic technologies seek to “modulate” both (in)dividual behavior and group definition so they synch up as efficiently as possible. The whole point of dataveillance and algorithmic marketing is to “tailor results according to user categorizations…based on the observed web habits of ‘typical’ women and men” (171). For example, targeted online marketing is more interested in finding the ad that most accurately captures my patterns of gendered behavior than compelling or enforcing a single kind of gendered behavior. This is why, as a partnered, graduate-educated, white woman in her mid-30s, I get tons of ads for both baby and fertility products and services. Though statistically those products and services are relevant for women of my demographic, they’re not relevant to me (I don’t have or want kids)…and Facebook really, really wants to know these ads aren’t relevant, and why they aren’t relevant. There are feedback boxes I can and have clicked to get rid of all the baby content in my News Feed. Demanding conformity to one and only one feminine ideal is less profitable for Facebook than it is to tailor their ads to more accurately reflect my style of gender performance. They would much rather send me ads for the combat boots I probably will click through and buy than the diapers or pregnancy tests I won’t. Big data capital wants to get in synch with you just as much as post-identity MRWaSP wants you to synch up with it. [2] Cheney-Lippold calls this process of mutual adaptation “modulation” (168). 
A type of “perpetual training” (169) of both us and the algorithms that monitor us and send us information, modulation compels us to temper ourselves by the scales set out by algorithmic capitalism, but it also re-tunes these algorithms to fall more efficiently in phase with the segments of the population it needs to control.
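The Facebook example above can be sketched as a minimal feedback loop. Again, this is hypothetical illustration (the tag names, the prior, and the learning rate are all made up, and no real platform’s code is being quoted): instead of enforcing one static feminine ideal, the system nudges its model of a particular user toward whatever she actually clicks and away from what she hides, which is the click-through-rate feedback described in footnote [2] in miniature.

```python
def modulate(user_model, ad_tag, feedback, rate=0.2):
    """Shift the weight for one ad tag toward 1.0 on a click and toward 0.0
    on a 'hide this ad' signal; the category adapts to the user, not just
    the user to the category."""
    target = 1.0 if feedback == "click" else 0.0
    old = user_model.get(ad_tag, 0.5)  # 0.5 = demographic prior
    user_model[ad_tag] = old + rate * (target - old)
    return user_model

# A profile starts from demographic priors for "women in their mid-30s"...
model = {"baby_products": 0.5, "combat_boots": 0.5}
# ...and is re-tuned, round after round, by what the user actually does:
for _ in range(10):
    modulate(model, "baby_products", "hide")
    modulate(model, "combat_boots", "click")
```

After enough feedback the demographic prior is all but washed out: the baby-products weight collapses toward zero while combat boots dominate, even though both started at the same demographic default.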

The algorithms you synch up with determine the kinds of opportunities and resources that will be directed your way, and the number of extra hoops you will need to jump through (or not) to be able to access them. Modulation “predicts our lives as users by tethering the potential for alternative futures to our previous actions as users” (Cheney-Lippold 169). Your past patterns of behavior determine the opportunities offered you, and the resources you’re given to realize those opportunities. Think about credit scores: your past payment and employment history determines your access to credit (and thus to housing, transportation, even medical care). Credit history determines the cost at which future opportunity comes–opportunities are technically open to all, but at a higher cost to those who fall out of phase with the most healthful, profitable, privileged algorithmic identities. Such algorithmic governmentality “configures life by tailoring its conditions of possibility” (Cheney-Lippold 169): the algorithms don’t tell you what to do (or not to do); rather, they open specific kinds of futures for you.
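The credit-score point can be put in code form. This is a deliberately simplified, hypothetical sketch (the score thresholds and rates are invented, not any actual lender’s pricing model): the offer is formally open to everyone, but its cost varies with the score your past behavior has earned you, so the gate is priced rather than closed.

```python
def loan_offer(credit_score, base_rate=0.05):
    """Everyone gets an offer, but lower scores pay higher interest --
    the 'extra hoops' at which technically-open opportunity comes.
    Thresholds and rates here are invented for illustration."""
    if credit_score >= 740:
        return base_rate          # in phase with the privileged profile
    if credit_score >= 670:
        return base_rate + 0.02   # same opportunity, higher cost
    return base_rate + 0.06       # same opportunity, much higher cost
```

Nothing in the function refuses anyone a loan; it just tethers the price of the future to the record of the past, which is the modulation logic in one conditional.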

Modulation is a particularly efficient way of neutralizing and domesticating the resistance that performative iterability posed to disciplinary power. As Butler famously argued, discipline compels perfect conformity to a norm, but because we constantly have to re-perform these norms (i.e., re-iterate them across time), we often fail to embody norms: some days I look really femme, other days, not so much. Because disciplinary norms are performative, they give rise to unexpected, extra-disciplinary iterations; in this light, performativity is the critical, inventive styles of embodiment that emerge due to the failure of exact disciplinary conformity over time. Where disciplinary power is concerned, time is the technical bug that, for activists, is a feature. Modulation takes time and makes it a feature for MRWaSP–it co-opts the iterative “noise” performativity made around and outside disciplinary signal. As Cheney-Lippold explains, with modulation “the implicit disorder of data collected about an individual is organized, defined, and made valuable by algorithmically assigning meaning to user behavior–and in turn limiting the potential excesses of meanings that raw data offer” (170). Performative iterations made noise because they didn’t synch up with the static disciplinary norm; algorithmic modulation, however, accommodates identity norms to capture individual iterability over time and make rational previously irrational styles of gender performance. Time is no longer a site outside perfect control; with algorithmic modulation, time itself is the medium of control (modulation can only happen over time). 
[For those of you who have been following my critique of Elizabeth Grosz, this is where the rubber meets the road: her model of ‘time’ and the politics of dynamic differentiation is really just algorithmic identity as ontology, an ontology of data mining...which, notably, Cheney-Lippold defines as “the practice of finding patterns within the chaos of raw data” (169; emphasis mine).]

Objectifying vision and data mining are two very different technologies of identity. Social identities and algorithmic identities need to be understood in their concrete specificity. However, they also interact with one another. Algorithmic identities haven’t replaced social identities; they’ve just supplemented them. For example, your visible social identity still plays a huge role in how other users interact with you online. People who are perceived to be women get a lot more harassment than people perceived to be men, and white women and women of color experience different kinds of harassment. Similarly, some informal experimentation by women of color activists on Twitter strongly suggests that the visible social identity of the person in your avatar picture determines the way people interact with you. When @sueypark & @BlackGirlDanger switched from female to male profile pics, there was a marked reduction in trolling of their accounts. Interactions with other users create data, which then feeds into your algorithmic identity–so social and algorithmic identities are interrelated, not separate.

One final point: Both Alcoff and Al-Saji argue that vision is itself a more dynamic process than the concept of objectifying vision admits. Objectifying vision is itself a process of visualization (i.e., of the habits, comportments, and implicit knowledges that form one’s “interpretive horizon,” to use Alcoff’s term). In other words, their analyses suggest that the kind of vision at work in visual social identities is more like algorithmic visualization than modernist concepts of vision have led us to believe. This leaves me with a few questions: (1) What’s being left out of our accounts of algorithmic identity? When we say identity works in these ways (modulation, etc), what if any parts of the process are we obscuring? Or, just as the story we told ourselves about “objectifying vision” was only part of the picture, what is missing from the story we’re telling ourselves about algorithmic identities? (2) Is this meta-account of objectifying vision and its relationship to social identity only possible in an epistemic context that also makes algorithmic visualization possible? Or, what’s the relationship between feminist critiques & revisions of visual social identities, and these new types and forms of (post)identity? (3) How can we take feminist philosophers’ recent-ish attempts to redefine “vision” and its relation to identity, and the related moves to center affect and implicit knowledge and situate them not only as alternatives to modernist theories of social identity, but also as either descriptive and/or critical accounts of post-identity? 
I am concerned that many thinkers treat a shift in objects of inquiry/analysis–affect, matter, things/objects, etc.–as a sufficient break with hegemonic institutions, when in fact hegemonic institutions themselves have “modulated” in accord with the same underlying shifts that inform the institution of “theory.” But I hope I’ve shown one reason why this switch in objects of analysis isn’t itself sufficient to critique contemporary MRWaSP capitalism. How then do we “modulate” our theoretical practices to account for shifts in the politics of identity?

[1] As Kant argues, “The bulging, raised area under the eyes and the half-closed squinting eyes themselves seem to guard this same part of the face partly against the parching cold of the air and partly against the light of the snow…This part of the face seems indeed to be so well arranged that it could just as well be viewed as the natural effect of the climate” (11).

[2] “Instead of standards of maleness defining and disciplining bodies according to an ideal type of maleness, standards of maleness can be suggested to users based on one’s presumed digital identity, from which the success of identification can be measured according to ad click-through rates, page views, and other assorted feedback mechanisms” (Cheney-Lippold 171).

Designing for Timelessness

This essay is cross-posted with TechnoScience as if People Mattered

A Swiss-made 1983 Mr. T Watch. Timeless. (Source)

Micah Singleton (@micahsingleton) over at the Daily Dot has a really great essay about one of the biggest problems with the Apple Watch. You should read the whole thing, but the big takeaway is that really great watches and mainstream tech have a fundamental incompatibility: nice watches usually become heirlooms that get handed down from generation to generation, but consumer technology is meant to be bought in product cycles of only a couple of years. A really nice watch should be “timeless” in a way our devices never have been. Compared to the usual 2-year contract phone purchase, the technological evolution of high-quality watches moves about as fast as actual biological evolution. Is it possible to deliberately build timelessness into electronics? (more…)