Science, to borrow a phrase from Steven Shapin, is a social process that is “produced by people with bodies, situated in time, space, culture, and society, and struggling for credibility and authority.” This simple fact is difficult to remember in the face of intricate computer-generated images and declarative statements in credible publications. Science may produce some of the most accurate and useful descriptions of the world, but that does not make it an unmediated window onto reality.
Facebook’s latest published study, which claims that personal choice is more to blame for filter bubbles than the company’s own algorithm, is a stark reminder that science is a deeply human enterprise. Not only does the study contain significant methodological problems; its conclusions also run counter to its actual findings. Criticisms of the study, and of the media accounts of it, have already been expertly made by Zeynep Tufekci, Nathan Jurgenson, and Christian Sandvig, and I won’t repeat them. Instead I’d like to do a quick review of what the social sciences know about the practice of science, how the institutions of science behave, and how both intersect with social power, class, race, and gender. After reviewing the literature we might also be able to ask how the study of science could have improved Facebook’s research.
Editor’s Note: This is based on a presentation at the upcoming Theorizing the Web 2015 conference. It will be part of the Protocol Me Maybe panel.
I’ve been researching hacking for a little while, and it occurred to me that I was focusing on an as-yet-unnamed hacking subgenre, which I’ve come to call “interface hacks.” An “interface hack” is any use of a web interface that upends user expectations and challenges assumptions about the creative and structural limitations of the Internet. An interface hack must have a technical component; in other words, its creator must employ at least a minimal amount of code or otherwise demonstrate a working knowledge of web technologies. Because these hacks work through the interface, each has aesthetic properties; hacks on web infrastructure do not fall into this category unless they have a component that affects the page design.
I have a secret to tell all of you: I kind of don’t care about teaching evolution in science classes. Put another way, I’m less than convinced that most people, having learned the story of species differentiation and adaptation, go on to live fuller and more meaningful lives. In fact, the way we teach evolution, with its ferocious attention to competition and struggle in adverse circumstances, might be detrimental to the encouragement of healthy and happy communities. I also see little reason to trust the medical community writ large, and I cringe when a well-meaning environmentalist describes their response to impending climate change by listing all of the light bulbs and battery-powered cars they have bought. I suppose, given my cynical outlook, that the cover story of this month’s National Geographic is speaking to me when it asks “Why Do Many Reasonable People Doubt Science?” Good question: what the hell is wrong with me?
There’s a tricky balancing act to perform when thinking about the relative influence of technological artifacts and the humans who create and use them. It’s all too easy to blame technologies or, alternatively, to discount their shaping effects.
Both Marshall McLuhan and actor-network theory (ANT) insist on the efficacy of technological objects. These objects do things, and as researchers we should take those things seriously. In response to the popular adage that “guns don’t kill people, people kill people,” ANT scholar Bruno Latour famously retorts:
It is neither people nor guns that kill. Responsibility for action must be shared among the various actants.
From this perspective, to fail to take seriously the active role of technological artifacts, assuming instead that everything hinges on human practice, is to risk entrapment by artifacts that push us in ways we cannot understand or recognize. Speaking of media technologies, McLuhan warns:
Subliminal and docile acceptance of media impact has made them prisons without walls for their human users.
This, they get right. Technology is not merely a tool of human agency; it pushes, guides, and sometimes traps users in significant ways. And yet both McLuhan and ANT have been justly criticized as deterministic. Technologies may shape those who use them, but humans created these artifacts, and humans can—and do—work around them.
Last week I wrote about the curious case of traditional love narratives in the face of online dating. In short, the profile format, pay structure, and overall bureaucracy of online dating throw into stark relief the constructed belief in a fateful meeting of souls. And yet the narrative persists. Here’s a brief snippet:
…[T]he landscape has drastically changed but the narrative, not so much. The maintenance of romantic love as a cultural construct, personal striving, and affective embodied response to courtship rituals speaks to the resiliency of normative culture and its instantiation through human action. Even as we transact and negotiate romantic relationships; even as we agree upon terms; even as we screen partners and subject ourselves to screening; we nonetheless speak of butterflies and hope for magic.
In the case of love and online dating, the narrative is both highlighted and strengthened through its empirical contradiction.
This idea sparked an interesting conversation among the Cyborgology team about how this principle—constitution through contradiction—is theoretically useful in understanding the relationship between technologies and culture. Technologies reflect cultural realities, but can also expose the constructed nature of these realities, threatening their taken-for-granted logic and concomitant guidance over behavior and interaction. In the face of such a threat, however, the logics remain, and even strengthen.
Sometimes it feels that to be a good surveillance theorist you must also be a good storyteller. Understanding surveillance seems to rely uniquely on metaphor and fiction, as if we first need to see another possible world in order to grasp how watching is happening here. Perhaps the appeal to metaphor is evidence of how quickly watching and being watched are changing, as a feature of modernity in general and of our current technological moment in particular. The history of surveillance is one of radical change, and, as ever, it is fluctuating and rearranging itself around the new, digital technologies of information production and consumption. Here I’d like to offer a brief comment not so much on these new forms of self-, interpersonal, cultural, corporate, and governmental surveillance as on the metaphors we use to understand them.
#review features links to, summaries of, and discussions around academic journal articles and books.
Today, guest contributor Rob Horning reviews “Life on automatic: Facebook’s archival subject” by Liam Mitchell, First Monday, Volume 19, Number 2, 3 February 2014. http://firstmonday.org/ojs/index.php/fm/article/view/4825/3823 doi: http://dx.doi.org/10.5210/fm.v19i2.4825
If, like me, you are skeptical of research on social media and subjectivity that takes the form of polling some users about their feelings, as if self-reporting raised no epistemological issues, this paper, steeped in Baudrillard, Derrida, and Heidegger, will come as a welcome change. It comes closer to the opposite position: that whatever people say about their feelings should probably be discounted out of hand, since what matters more is the forces that condition the consciousness of such feelings. That approach is sometimes dismissed as failing to account for individual agency. It is implicitly treated as an affront to human dignity to presume that people’s use of technology might not be governed by full autonomy and voluntarism, and as tinfoil-hat silly to believe that something as consumer-friendly and popular as Facebook could be coercive, that the company could be working behind users’ backs to warp their experience of the world for the sake of its bottom line.
Mitchell is not so overtly conspiratorial in this paper…
Today, I just want to write a brief post about a cool art project. The Dead Drop project, started by a Berlin-based artist during a stay in New York City, embodies much of the theory we talk about here at Cyborgology. And, like most forms of art, it accomplishes this theorizing far more efficiently and engagingly than we academics manage with our many, many words.
The Dead Drop project was begun in 2010 by the Berlin-based artist Aram Bartholl. During his stay in NYC, he installed five Dead Drops in public places. Dead Drops are USB flash drives cemented into city walls, trees, or other publicly accessible outdoor surfaces. Passersby can upload files to and download files from these drives. Anyone can install a Dead Drop, and Bartholl encourages worldwide participation. He describes the project as an “anonymous, offline, peer to peer file-sharing network in public space.” To date, there are 1,231 registered Dead Drops worldwide, comprising about 6,403 GB of storage space.
…or just get new friends?
The easiest, laziest, most click-baitiest op-ed, trend video, or thing to scream at a bar right now is how, with today’s technologies, we are more connected but also more alone. Ooh. Zuckerberg has 500 million friends but it was never really a spoiler to say that Sorkin’s The Social Network ends with him sitting alone at a computer. Ooh. The Turkle-esque irony is just too good for it not to zeitgeist all over the place.
That argument should not be dismissed altogether, but I am quite skeptical of where it so often comes from and how it is articulated. The trend might be largely disingenuous, and by that I do not mean intentionally insincere but rather a sort of cultural positioning: we-are-connected-but-alone not only drips with that delicious ironic juxtaposition, it simultaneously props up the person making the case as somehow deeper, more human, more in touch with others and with experience.
[This is cross-posted at Its Her Factory.]
A few recent events and news items have me thinking, in a somewhat disjointed fashion, both about what it means to “do theory” or to practice philosophy and about how, exactly, one should go about doing and practicing these things.
In particular, it seems that philosophy is stuck between being reduced to a hard science, on the one hand, and being deemed incompatible with the “digital humanities,” on the other. And in the end I think this double bind has the very troublesome effect of discouraging, silencing, and marginalizing what could be the most innovative things philosophy has to offer science, the digital humanities, and contemporary intellectual life more generally.