This piece is cross-posted on Microsoft Research New England’s Social Media Collective Research Blog.

In her recent post here on the Cyborgology blog, Jenny Davis brought the pervasive use of Facebook as a study site back into conversation. In brief, she argued that “studying Facebook—or any fleeting technological object—is not problematic as long as we theorize said object.” The takeaway from this statement is important: We can hope to make lasting contributions to research literature through our conceptual work – much more so than through the necessarily ephemeral empirical details that are tied to a time, a place, and particular technologies.

In this post, I want to give a different yet complementary answer to why it may be a problem if our research efforts are focused on a single study site. This is regardless of whether it is the currently most popular social network site or an already obsolete technological object. The post made me think of a tweet (by Nicole Ellison) echoing the discussion at the International Conference on Weblogs and Social Media (ICWSM) a few weeks ago:

In the story, blind men describing the elephant end up with wildly different accounts depending on which part of the animal each happened to stumble upon. While different accounts may all add accurate and relevant information, it is only in combining them that the men can begin to understand what the elephant looks like as a whole.

Similarly, if we focus only on Facebook –or whatever happens to be the most popular study site at any given moment– we will gain insight only into parts of the proverbial beast of how technologies and people go together. So, how might our conceptualizations of social media sites and social interaction change if we explored a wider range of services and used them as tools for our theorizing?

Let’s first consider Couchsurfing.org, a social media site that helps traveling guests to connect with local hosts for free accommodation and shared experiences. As a study site, it may encourage us to envision privacy in ways we wouldn’t come to think of in considering Facebook. Couchsurfing profiles offer users the option of presenting themselves as “several people,” making room for profiles that are not owned by individuals but small groups of people, such as couples, families, and housemates. Studying Couchsurfing may help us unpack what it means for a group to make itself more or less accessible and open to others. Social psychologist Irwin Altman’s definition of privacy as an interpersonal boundary process by which a person or a group regulates interaction with others[1] is quoted often in research on privacy in networked contexts, such as social media sites. However, the focus tends to be on individuals and their interpersonal negotiations, leaving regulation on the group level with scarce attention. Furthermore, what could we learn from studying couchsurfers’ experiences and considering privacy as hospitality, or privacy as politeness?

As another example, let’s think of changes in privacy settings and defaults on social media sites. Over Facebook’s history, changes to privacy settings have caused a number of heated debates (see boyd & Hargittai’s summary). Changes towards increased openness and decreased obscurity get framed as privacy violations. As such, they capture the attention of researchers and advocates focusing on privacy – and for a reason. According to Altman’s theory of privacy, however, people’s efforts to regulate boundaries may fail in either direction, yielding too little or too much privacy. Pointing a finger at myself as much as at anyone else, I wonder whether our pressing concerns are biasing the focus of our theorizing.

While the trend among social media sites to tempt and push people to share more and more continues, purposely identifying and investigating counter examples could enrich our conceptual work. Consider Scoopinion, a Finnish news service that recently abandoned its original, automated social sharing model in order to focus on delivering personalized, “crowd-curated” recommendations for feature-length stories. In this process, interestingly for our theorizing purposes, Scoopinion users lost access to the behavioral data of others along with the chance to share their own reading data on the site. If we are to adapt Altman’s theory to the networked, augmented world of today, shouldn’t we look at how users conceive of system changes like this that (unexpectedly) decrease access and visibility, too? Is the sudden end of sharing perceived as a privacy violation? If our studies considered also cases that counter the trend of increased openness, we might come to understand reactions to changes in privacy settings and defaults more comprehensively. More importantly, it could help us see more clearly where prior theories can fail or support theoretical understandings that are situated in the networked context of today.

I argue that we are much better off in theorizing social media and the ways in which they relate to people if we choose to explore varied sites of study. These choices affect what seems illustrative of the phenomena under study. If our theoretical thinking builds on empirical research in only a few dominant study sites (or just one), we risk sailing into the murky waters where what is popular and typical comes to dominate our thinking – even when we know full well that it is not all there is.

Airi Lampinen (@airi_) is a graduate student in Social Psychology at the University of Helsinki, Finland, and a researcher at Helsinki Institute for Information Technology HIIT. Currently, she is interning at Microsoft Research New England.


[1] Altman, I. 1975. The Environment and Social Behavior. Privacy – Personal Space – Territory – Crowding. Brooks-Cole Publishing Company, Monterey, CA, USA.

A little over a year ago, I found myself conducting a focus group session with a handful of middle school students. As part of a research project looking to better understand how Internet safety programs conceptualize youth and Internet technologies, I became increasingly surprised – and at least somewhat frustrated – that cyberbullying rarely came up during dozens of conversations with students, parents and school administrators. This particular focus group session was no different. Nearing the end of the session, I finally asked the students if they used the word cyberbullying when they talked to their friends. Their response, looking at me as if I was the most out-of-touch idiot they had ever spoken with, was a unanimous “Nooo!” I then asked them: If you do not use the word, who does? Various students replied with disgusted exclamations of “Parents!” or “Teachers!” and in what would be one of the defining moments of the project, a student said “It’s an old lady word” quietly under her breath. Looking beyond some problematic ageism and sexism that may be implied in her response, there is an element of truth behind what she was saying: children are using a very different interpretive frame than parents when it comes to so-called “cyberbullying.”

Kids don’t distinguish as sharply between online and offline bullying, just as they don’t distinguish between online and offline sociality. Their lives are full of everyday drama, smoothly transitioning between the social contexts of schools, homes, and social media (see: media ecologies). As such, the response from my focus group students is particularly telling. “Cyberbullying” is an “old lady word” created by grownups trying to figure out all this “new” online activity, and it’s yet another clear case of what other authors on this blog have described as digital dualism. In the Internet safety arena, digital dualist frames do not simply draw distinctions between online and offline social life – they are used to blame existing social problems on the social technologies that make them visible in new ways. Bullying, predation and exposure to “inappropriate content” were seen as problems long before the widespread adoption of the Internet and information technologies by kids, and yet all of these problems appear as “new” or, at best, made worse by information technologies.

In this sense, digital dualist frames are grounded in technological determinism – the presumption that technologies drive social change – drawing attention to problematic information technologies and making it impossible to recognize or confront the entrenched social/institutional problems that produce “Internet safety” issues. Problems with disrespect and harassment (“bullying”) that emerge from increasingly restrictive neoliberal constructions of childhood are framed as problems with “inescapable” technologies. Problems with predation and “grooming” that emerge from the social distancing and isolation of childhood are framed as “online enemies already in your home.” And, of course, problems that emerge from the insistence that youth are naturally without sexuality (and/or are dangerously sexual) are framed as problems with the “unwanted exposure” of youth to inappropriate content through information technologies. Put differently, youth do face risks online, but they are largely the same problems previous generations faced, made newly visible by Internet technologies.

So, back again to the so-called “old lady words.” The students I spoke with were by no means dismissive of the very real risks of harassment and abuse they face every day – they had very real concerns about bullying and predation, just as adults did. But when parents and teachers distinguish between online and offline life, it not only comes off as out-of-touch, it produces bad policy for youth safety. As one director of information technology in a NY school district told me, “When you look at the audience and the kids are snickering, they’re not taking it seriously. I think it’s that power of authority that’s trying to clamp down on students’ rights…” And, let’s face it, it’s hard not to laugh when our legislators say things like “continued cyber harassment cyber bullying is a sickness and a crime, these internet bullies do not care are realize that our women and children are hurt the most from these internet predators.” 

Nathan Fisk is a danah boyd fanboy and adjunct lecturer teaching “Youth and Teens Online” in the Science & Technology Studies department at Rensselaer Polytechnic Institute.

Doing Journalism in the Social Media Age

Discussion with Andy Carvin (@acarvin) & Zeynep Tufekci (@techsoc)

Introduction: Nathan Jurgenson (@nathanjurgenson) & PJ Rey (@pjrey)

 

In my research on the Dutch banking system, it became clear that the banks are seriously worried about social engineering. These techniques, such as phishing and identity theft, have become increasingly common. No reason for concern, right? Surely, a system upgrade, some stronger passwords, or new forms of encryption and all will be well again. Wrong! When it comes to social engineering, trust in technology is deadly. The solution, in fact, cannot be technological; it must be social.

The term social engineering has been around for decades, but in the last couple of years, it has been popularized by famous social engineer Kevin Mitnick. In the book Social Engineering: The Art of Human Hacking by another famous social engineer, Christopher Hadnagy, social engineering is defined as “the act of manipulating a person to take an action that may or may not be in the ‘target’s’ best interest.” This may include obtaining information, gaining computer system access, or getting the target to take a certain action. Kevin Mitnick pointed out that instead of hacking into a computer system it is easier to “hack the human.” While cracking the code is nearly impossible, tricking someone into giving it to you is often relatively easy.

Countering these social engineering techniques tends to be difficult. As a result, banks are hesitant to contact their clients. Contacting the client means using media, and this usage fosters trust in those media. This trust proves devastating to the banks, but is nirvana for social engineers. As PJ Rey states in his essay Trust and Complex Technology: The Cyborg’s Modern Bargain, “it is no longer feasible to fully comprehend the inner workings of the innumerable devices that we depend on; rather, we are forced to trust that the institutions that deliver these devices to us have designed, tested, and maintained the devices properly.” Doug Hill builds on Rey’s point, noting that our trust in technology extends to the people who use these devices as well as the people who created them. In short, banks trust their technology just as much as their employees and clients do.

It is not hard to find examples in popular discourse of the faith people place in technology. Every new piece of hardware or software is presented as better than the previous one, promising to solve problems and tricky situations. This blind trust in technology, however, is exactly what social engineers exploit through sophisticated invented scenarios. An example would be pretending to be a computer helpdesk operator, randomly calling employees of a company and claiming that somebody in their department called because there is a problem with one of the computers. Chances are that at some point an employee will say yes and fall into the trap, giving his or her password to the social engineer.

It goes without saying that the trust people have in technology is not the only factor in the equation. However, unexamined trust seems to be the big pitfall. If banks want to counter social engineering, they need to realize that this will not be done by merely upgrading password encryption or other technological aspects of their security systems. Further trust in technology will not remedy the problems that trust in technology created. Instead, the social side needs to be taken into account. The question we are asking is: How can we make people more critical (i.e., less trusting) about the dangers revolving around technology, especially when it involves their own wallet?

In response to a BBC article on how hackers outwit online banking identity security systems, security technologist Bruce Schneier presents the solution of authenticating the transactions we make (similar to credit cards). Although this sounds like a shift away from a technological solution, since it is more about the transaction behavior of the client, it poses other dangers. Back-end systems monitor for suspicious behavior: if a client from the Netherlands signs in from, say, Bulgaria, the situation is flagged as suspicious, adding points to a risk score. If the risk score gets high enough, other means of authentication come into play, such as a telephone call to the client.

Authenticating a transaction means answering whether it makes sense given the financial behavior of the client. This, as always, raises many questions about surveillance. A bank needs to know what your normal behavior is before it can establish what counts as suspicious for a specific client. Banks probably wouldn’t mind this solution, but from a client’s perspective it feels like a violation of privacy. However, if we don’t all start being more critical, these sorts of invasive authentication schemes may soon become a reality.

Samuel Zwaan (@mediawetenschap) is a teacher and student in Media Studies at Utrecht University

The Google Matrix

Originally posted on PopMatters.

On Twitter, PJ Rey resurrected this August 2010 op-ed by William Gibson that has new currency given the hullaballoo about Google’s privacy-policy changes. Gibson argues that Google is an unanticipated form of artificial intelligence, “a sort of coral reef of human minds and their products.” But this description sounds less like artificial intelligence and more like Marx’s notion of the general intellect. Anticipating the intensification of technology, Marx claimed that machines would eventually subsume “the process of social life” and integrate it as a form of productivity.

The development of fixed capital indicates to what degree general social knowledge has become a direct force of production, and to what degree, hence, the conditions of the process of social life itself have come under the control of the general intellect and been transformed in accordance with it. To what degree the powers of social production have been produced, not only in the form of knowledge, but also as immediate organs of social practice, of the real life process.

This is pretty obscure even by Marx’s standards, but autonomist Marxists (Negri, Lazzarato, Virno) have extrapolated from this a definition of general intellect that embraces, as Virno puts it, “formal and informal knowledge, imagination, ethical tendencies, mentalities and ‘language games’.” Because of the membranous nature of the general intellect, when harnessed and integrated with capital, it can recuperate all social behavior as “immaterial” production — enriching the valence of signs, producing affects, etc. This means that “even the greater ‘power to enjoy’ is always on the verge of being turned into labouring task.” That is, our consumption, especially of information, is a mode of production. The general intellect is the sum of all that information circulation.

Google, then, is the reification of the general intellect. It manages to take human curiosity and turn it into capital.

The consequences of that are profound. Our curiosity is no longer a sign of our leisure; it’s an enormously important economic factor. To a degree this has always been true. Our willingness to pay attention to things is at the root of consumer demand. But it is now far more productive of informational goods in and of itself, thanks to ubiquitous online surveillance and data-storage capabilities. Much of the way we express our human curiosity can now be recorded and fed into algorithms and plotted on graphs of connections to generate more information, stimulate more curiosity, produce more demand. That’s why, as Gibson points out, Google’s Eric Schmidt claimed that people “want Google to tell them what they should be doing next.” Google doesn’t end lines of inquiry; it gives users momentum. The point of Google is to try to keep you Googling. Not only does that make their ad space more valuable, but it adds value to their search products; it thickens the membrane. As Gibson notes, “In Google, we are at once the surveilled and the individual retinal cells of the surveillant, however many millions of us, constantly if unconsciously participatory.”

What that means is that Google’s instantiation of the general intellect captures not merely human cooperation and collaboration, as the theorists tend to emphasize when discussing post-Fordist production and the productivity of interpersonal “virtuosity”. It also captures and perhaps even emphasizes the lateral surveillance aspect of sociality — each implementing control on everyone else, recording what they do and annotating it. Human curiosity is intensified and directed at one another. The general intellect becomes a giant spying machine. (Facebook is probably a more explicit example of that than Google, but Google seems more powerful as the received source of answers, the index of approved information.)

Gibson notes how Google makes personal identity a productive factor, a kind of capital it owns. This makes it something we are stuck with. What we have done and would like to have forgotten is part of Google’s “fixed capital” that they are loath to relinquish, despite Schmidt’s suggestion that teens be issued fresh identities when they become adults. (Gibson ridicules “the prospect of millions of people living out their lives in individual witness protection programs.”) Instead we must adapt our understanding of who we are and what identity consists of. In The New Spirit of Capitalism, before launching into a discussion of the ideological usefulness of the term network, Boltanski and Chiapello discuss our demand for the intelligibility of shared social values.

Young cadres in particular feel a need clearly to identify the new forms of success and the new rules of the game in the economic world, in order to know how to conduct themselves and prepare their children. This demand for intelligibility exerts significant pressure for greater explanation and formalization of the rules of conduct, which will then guide action. In fact, the people tend to conform to these emergent new rules, if only because they confer meaning on what would otherwise merely seem like an arbitrary proliferation of ad hoc mechanisms and locally convenient improvisations.

I don’t know about that being a “fact,” but it seems plausible that social media have taught us all something about “locally convenient improvisations,” for good and ill. And the explosion of Facebook and Google into our lives has disrupted the old version of intelligibility — prompting new rules that are consistent with the new form of capitalism these media are driving.

So our common-sense understanding of what it means to have a self is changing under this pressure. We no longer have the luxury of seeing ourselves as isolated individuals who make themselves through an act of iron internal will. Now our identities are explicitly shaped (or maybe even dictated) by our contingent place in social networks, and we can’t hide that fact from ourselves. We have to relieve the dissonance of our data trail by surrendering the prerogative of claiming to be self-created and learn to love the self the data tells us we are or should be at any particular moment. We let Google tell us what to do next.

Rob Horning (@marginalutility) is an editor of the New Inquiry.

Successful Black Guy

Why successful black guy is successful: The socio-cognitive side of humor

Having read Jenny Davis, David Banks and PJ Rey on internet memes, I felt compelled to share my creative grain of sand on this peculiar ‘web-based’ construct. I often wonder why memes are funny. The simplicity of memes is deceiving: e.g., a spartan image, often featuring only the face or upper body of a person or animal, and a kitsch colored background that would make Warhol think you’re on acid. Add two rows of parallel text above and below and presto! – You have created funny. Is it really that easy? I would generally think (and hope) that humor is a complex phenomenon, that answering “why is this picture of a cat funny to me?” requires invoking some esoteric philosophical or psychological terminology. I decided to do some research.

One of my favorite memes of all time is “successful black guy”. This is successful black guy explained by know your meme: “an image macro series featuring a Black man dressed in business attire and a witty one-liner satirizing the stereotype of young African American male as street hustlers or gangsters who only care about cars, money and ho’s. The humor is mostly derived from the intentional line break in mid-sentence, with the top line impersonating a black male stereotype (EX: I Got the Best Ho’s) and the bottom line suddenly falling flat in character (EX: Out in My Tool Shed).”

The people over at know your meme are hinting at why memes are funny when they say: “The humor is mostly derived from the intentional line break in mid-sentence”:  the key lies in the logical continuity that is established by the top line of text, which suddenly breaks with the appearance of the following line. OK, we are getting somewhere now. But, why is this funny? Why are breaks in logical continuity amusing, and particularly sudden breaks of logical continuity?

In a book published in November 2011, Matthew Hurley, alongside prominent philosophers Daniel Dennett and Reginald Adams, tries to give us insight into this question. Hurley was amused by the fact that most literature on humor was descriptive; it sought to differentiate types of humor and draw comparisons between them. There was little material on why some things or occurrences ought to be funny in the first place. Why does humor exist? Where does it come from? Hurley reached the following departure point: There is simply not enough information out there, at all times, for us to make completely informed decisions on a consistent basis. Our brains have to make decisions in situations abounding with multiple pieces of incomplete information, and thus have to make assumptions. “All these best guesses simplify our world, give us critical insights into the minds of others, and streamline our decisions. But mistakes are inevitable, and even a small faulty assumption can open the door to bigger and costlier mistakes” (full article here).

Our brains are constantly processing information about the world, and ourselves, so small mistakes can quickly pile up and be detrimental. The pleasure we feel when something is funny is a small jolt which the brain receives for picking out instances where our assumptions break down. The evolutionary value of this action is large, and the method by which we accomplish this, the systematic process of rewarding ourselves when we identify breaks of continuity, is called humor.

Enter the Successful Black Guy meme. The first line of text on the top is written in such a way as to prime your brain to make certain popular assumptions about the world, in this case about African Americans in the United States. Then, the second or concluding line finishes the sentence in a way that maintains semantic consistency (in other words, the sentence still makes sense) but through a different logical continuity pathway. This is the break that know your meme is talking about. The result is the uncovering of the previously covert assumptions your brain made after reading the first line. Now, your brain is happy!

If Hurley and Co. are right about the underlying mechanisms of humor, then it is easy to see why memes are funny, despite appearing incredibly simple. The meme is like stripped-down funny, the skeleton or template of a joke, if you will. It delivers a fully developed context through an image and, by its binary spatial configuration, the construction and later destruction of an assumption about the world. It is possible that the internet meme as we know it is the smallest and simplest fully-functional and self-referential carrier of humor there is – the building block of humorous information, similar to how the meme is the smallest building block of cultural information.

In the case of successful black guy, the story does not end here. There is an added layer of complexity, which stems from the fact that the assumptions about the world called upon by the first line are of a particular kind: they comprise a stereotype of African Americans. Stereotypes are also cognitive responses to the fundamental condition in which humor arose: lack of information in everyday decision-making. Our brain bundles types of experiences together and tries to link them to experiences we previously had. It would simply be too costly to approach every new situation with a cognitive clean slate. In this sense, stereotypes are cognitively useful as decision-making heuristics. Thus when the second line of Successful Black Guy dismantles the assumptions of the first, it is not simply illuminating assumptions we made ad hoc; it sheds light on how assumptions got bundled up through shared experience in a systematic and habitual manner. In this case, the experience of being black in the USA, and the stereotypes that accompany it.

Most jokes will use these socially-situated assumptions and generalizations to their advantage. This is interesting because it means humor has an inherent social component. In other words, Successful Black Guy does not simply tell you – “Hey brain, here are some assumptions you made about Black people”. Rather, it says, “Here are some assumptions a group of people (which you may or may not belong to) consistently and systematically make about Black people”.

Successful Black Guy opens up the floor to asking the question: Can memes be used purposefully and strategically for social ends? My tentative answer would be yes, if done right. For example, there are other memes out there which are about racial or ethnic stereotypes. Think of ‘high expectation Asian father’.

High Expectation Asian Father

‘High expectation Asian father’ does not align the punch line with the deconstruction of the social stereotype. The humor comes from showing us the assumptions we made in that particular joke, in this case a linguistic assumption, not the assumptions we collectively make about Asians in the United States (an assumption this particular meme leaves intact). In contrast, Successful Black Guy’s humorous target lies at the core of the stereotype. This way it does a good job of cognitively nudging us to dismantle our social constructions about African Americans in the US. It does so without asking us if we have stereotypes (most people would say “no”), and without asking us to consciously make reference to them (most people could not). It simply shows us they are there; they reveal themselves to us, perhaps in the mythological way Jenny Davis described. Otherwise, the meme would not be funny.

Although they are cognitively useful, ethnic or racial stereotypes need to be kept constantly in check. The lag between evolutionary time – the time scale over which our brains were molded to be the way they are now – and globalized time, the instantaneous and parallel time paradigm of our age, is large. The evolutionary mechanisms built into our brains developed in a very different social context than the present one. Stereotypes made sense in the tribal mode of social organization, where one group of genetically similar people made sparse contact with other similarly organized groups. The process of being shown our assumptions, our breaks of logic, or where our generalizations extend beyond their reach is useful to this end, whether it is called humor or not. Being able to laugh while doing so might make all the difference; after all, it would entail strategically using built-in cognitive dispositions (humor) to mitigate the effects of cognitive heuristics (stereotypes). The internet meme can be a simple, cheap, and available tool for doing so.

The Cyborgology blog is again sponsoring this year’s Theorizing the Web conference. Here’s the info:

On Twitter: @TtW_Conf & #TtW12.

On Facebook: Community Page & Event Page.

Keynote:

“Social Media and Social Movements”

Andy Carvin (NPR; @acarvin) with Zeynep Tufekci (UNC; @techsoc)

Andy Carvin & Zeynep Tufekci

Deadline for Abstracts: February 5th

Registration Opens: February 1st

Call for Papers:

Building off the success of last year’s conference, the goal of the second annual Theorizing the Web conference is to expand the range and depth of theory used to help us make sense of how the Internet, digitality, and technology have changed the ways humans live. We hope to bring together researchers from a range of disciplines, including sociology, communications, philosophy, economics, English, history, political science, information science, the performing arts and many more. We especially encourage international perspectives. In addition, we invite session and other proposals by tech-industry professionals, journalists, and other figures outside of academia. Intersections of gender, race, class, age, sexual orientation, and disability will not be isolated in separate panels; instead, we fully expect these issues to be woven throughout the conference.

Submit abstracts online at http://tinyurl.com/TtW12.

Topics include:

  • Citizen/participant journalism and media curation
  • Identity, self-documentation and self-presentation
  • Privacy and publicity on the Web
  • Cyborgism and the technologically-mediated body (e.g., body modification)
  • Political mobilization, uprisings, revolutions and riots on social media (including the Arab Spring/Fall, Occupy)
  • Repression and the Web: Surveillance, wire-tapping, anonymity, pseudonymity
  • Code, values and design
  • Epistemology of the Web: Wikipedia, Global Voices, “filter bubbles” and the prosumption of information
  • Theorizing whose Web? How power and inequality (e.g., the Digital Divide) manifest on the Web
  • Mobile computing, online/offline space
  • Digital dualism & augmented reality: should the online/offline be conceptualized as separate or enmeshed?
  • Education, pedagogy and technology in the classroom
  • What art/literature can offer research and theory of the Web

We plan to curate seven open-submission panels of four presenters each, as well as a couple of invited panels and a keynote session on social media and social movements with Andy Carvin (NPR) and Zeynep Tufekci (UNC). Other events may be added before April.

The first Theorizing the Web conference happened last year. We decided to hold it because there is often no place for scholars who are theorizing about the Internet and society to gather and share their work. The 2011 program consisted of 14 panels, two workshops, two symposia (one on social media’s role in the Arab revolutions, the other on social media and street art), two plenaries (Saskia Sassen on “Digital Formations of the Powerful and the Powerless” and George Ritzer on “Why the Web Needs Post-Modern Theory”), and a keynote by danah boyd of Microsoft Research and NYU on “Privacy, Publicity Intertwined.” Presenters traveled from around the world (including Hong Kong and New Zealand). The archive is available here.

There will be a new website with much more information coming January 2012. For further inquiries, email theorizingtheweb@gmail.com.

Call for Artists:

In addition to traditional presentations, the conference will feature a variety of artistic and multimedia events. As such, we invite proposals from artists for relevant works or performances in any medium as well as for discussion of such pieces. We seek to display art of all forms during the conference and after at a reception. This could include, but is not limited to, paintings, sculpture, poetry, fiction writing, digital art, and performance art.


Would you agree when I say that the way we represent ourselves has much to do with how well we think we know ourselves, and perhaps less to do with choice or control? Consider this: we deliberate over our clothes, are picky with food groups, finicky about television shows, and have preferences for certain books and for whom we hang out with. Our preferences are largely responsible for self-representation and act as guidelines for others to categorize us. What about decisions and preferences that are not deliberate? The way we react to distressing news (a death in the family), how we face challenges (poor scores in exams), our attitude towards physical exercise, how we plan a camping trip – these are non-verbal, visceral cues that add up to people’s perception of what makes us who we are. So, representation can be controlled as well as non-deliberate in real life.

The digital space frequently encourages us to take control of how we represent ourselves. We are also given opportunities to modify the same at frequent intervals. Our digital histories are a cumulative record of our thoughts, activities, interests, and participations on a host of online platforms. Are they a sum total of what we are? Can we honestly say that our digital activities and our avatars online stand for the whole of our personality? Aren’t we more than the reflections of a series of ‘What’s On Your Minds’, or ‘Likes’, or ‘Add To’, ‘RT’, ‘Share This’, and ‘Recommend’? These are ways in which we communicate, mostly textually and digitally; modes peculiar to the Interwebs. Is there a system via which we can attempt complete digital transference of our offline selves so that there’s more ‘accurate’ representation for our digital peers?

Each of us exhibits a digital signature that is peculiar to what or who we are online. These take the form of avatars. My avatar receives its cues from its offline “twin”; however, we neither deliberate over its responses nor have a conscious say in its growth. The body of reference that builds up from our online detritus does not always accumulate in a controlled environment. The web service mycybertwin.com, however, allows us to do just that: artificially engineer a twin and let it loose on cyberspace as our virtual representation.

One of the trends of recent years has been the humanizing of digital channels – giving a face to things that are not human. This has led to the creation of avatars (also known as bots or chatter-bots), artificial intelligences with which users can “converse.” The success of such bots varies greatly; few respond in a convincingly human way, and it is no great mystery why they are commonly referred to as “bots”: the result is often a stilted, mechanical interaction where straying off a recognized path leads to poor responses.

Avatars are the de facto interlocutors of our Web communications. We don’t think about the person behind all our instant chats (their physical appearance, the clothes they wear while chatting, their facial expressions); rather, we instantly identify with that tiny icon-photo to the left of the message. The static image of a familiar face frozen in a smile has more meaning for us than all the micro-details of a live person whom we might never have met in person.

~

MyCyberTwin is a web service that allows you to ‘engineer’ your avatar – what it calls a ‘cyber twin’. The artificial intelligence that forms the brain behind the cyber twin is fed with information about our habits, nature, attitudes, and preferences, and is taught to become us through lessons, text chat, and constant feedback. While we are familiar with simple chat-bots and animated avatars that work from a basic profile or script, this web service offers something unheard of: an artificially intelligent avatar wiped clean of any personality, save for a generic, lab-built one that can be modified and re-cued into emulating us. Can it debunk my belief that our avatars have organically nurtured identities, not artificially cultivated ones?

The most taxing part of the engineering exercise begins with filling out a 79-item questionnaire in which I respond to questions on my religious affiliations, political views, sexual orientation, educational background, languages spoken, affinity to family and home, relationship status, and my views on sex, spirituality, politics, humor, and so on. A mix of hypothetical situations and my imagined emotional responses to them are thrown in. It would have been liberating to draft the twin’s behavior and attitude – its personality blueprint – from scratch, as I would have had the authority to judge and decide almost exactly how I would act in a given situation. The questionnaire does leave room for more than one response (three options per question), which gives us room to account for mood swings, eccentricity, a bit of mischief, and other variations in how we might behave in various chat environments.

Knowing that the cyber twin has answered a very specific set of questions also helps us test the validity of its responses within a tight frame of reference. The cyber twin runs on the scripts running in our heads and operates on the assumption that it mimics our offline self. I chat with my twin assuming that it will respond to questions the way I do; however, this is where we come to the first roadblock I mentioned about sticking to a personality type. Operating as it does on the parameters of a ‘warm, but cheeky’ avatar, its responses are completely off the set of responses I assume I would give to the same prompts – keeping in mind that it was I who chose the ‘warm but cheeky’ indicator for myself.

~

The whole exercise begins and ends with data feeds as the basis for building identity, and that turns out to be its Achilles’ heel in my perception: the web service forces me to work within artificial constraints and engineer my twin within the worldview of the program. We are never given access to the ‘how’ of its functioning: from what psychology and sociolinguistic texts does the program pick up its references on identity formation and character building? The twin takes turns, sometimes picking up cues from what I feed it, at other times relying on the program to supply it with ‘plausible’ responses. Eventually, I was left with no choice but to adapt my responses to the twin’s, and I started mimicking her, supplying responses to her questions with the answers that she would give. This was simple, as she used a very limited vocabulary and set of responses.

Performance theorist Richard Schechner says that “performance, that is, how people behave and display their behavior, is a fundamental category of human life”. If our actions and behavior online are also snippets of a meta-performance that encompasses lived experience offline, then is there a script that we follow for reference online? In essence, is it my performances that get textualized through my avatar, or does the avatar learn to read the script herself and follow previous leads? While I would like to believe that our behavior online is fluid rather than staged, isn’t it true that we enter the online stage anticipating an imminent performance? We expose ourselves, engage in monologues, and let the audience know when the curtain is about to drop, and rise again. The ploy becomes part of how we are perceived, and the script an integral part of how we construct ourselves online.

Imagine an actor who has to essay the role of a real-life character on stage. While he can adopt the mannerisms, learn the language, and mimic the physical traits of the original, he can never hope to imbibe the life essence that makes the original man what he is. And it’s really hard to compile and define our life’s essence, isn’t it? We don’t always jot down our milestones or life-turning moments. We retain them in memory, and they forever change us. That change cannot be replicated and mirrored unless we live through it. Sharing does not equal understanding, and it doesn’t lead us into becoming someone. To me, the cyber twin is an actor, not the original.

The ontology of a cyber twin thus leaves me vastly confused. Is there a finite point in time or understanding when we know the twin has appropriated the meaning of being us and can stand in for us? If and when the cyber twin exists independently from its author, does it accumulate memories and form impressions? The web service mentions that the twin keeps a record of all the people it chats with and remembers conversations. Does it remember the essence of conversations – subtext, undertone, subtlety of meanings – or is text merely data?

I have seen it pronounce me rude and curious, but that’s because it captures the right keywords in the chat: “I am curious about your love life,” I say to her, to which she responds: “You are a rather curious person; curiosity killed the cat.” These are obviously stock quotes. How many variations of these would I have to teach her before she’s able to retort? Can a program be taught rhetoric?

Knowing that my avatar has an image that is fixed in space and time, in the memories of others, acts as a guiding hand for me when I am online. I am conscious of how I represent myself: the language I am socially sanctioned to use by my peers, the language I think my avatar should use because that’s the way I would speak, whether online or offline. So, while these inputs are fed in through text, they are really part of a larger schema of behavior, character, feelings, and the indescribable ‘human’ fickleness of agency. The cyber twin uses her textual hand to grope through the gallery of meaning-making. But the fact remains that there is nothing at stake for the twin. An uncharacteristic move from the twin could mean loss of credibility for me. For now, while I am busy augmenting my twin’s reality online, I can’t say she’s doing the same to my reality, offline.

This is a modified version of the original essay. Attribute it to: Ansher, N. (July 2011) ‘Engineering a Cyber Twin’. In Shah, N. and Jansen, F. Digital AlterNatives with a Cause? Book One: To Be, pp. 51-58. The Hague: Centre for Internet and Society and Hivos Knowledge Programme; Creative Commons.

To access the full text of the essay in PDF, visit this site to download Book 1, To Be: http://cis-india.org/digital-natives/blog/dnbook

Nilofar Ansher is pursuing her Master of Arts in Ancient Civilizations from the University of Mumbai, India. She is an editor, writer and researcher and blogs at http://www.trailofpapercuts.wordpress.com. Twitter: @culture_curate

Review of ‘Digital Natives and the Return of the Local Cause’ by Anat Ben-David. Essay from the Digital AlterNatives with a Cause? book collective, published by Centre for Internet and Society, India and HIVOS, The Netherlands

Ben-David’s piece is an informed attempt to resolve the conceptual fuzziness of the term “Digital Native.” She attempts this in a philosophical manner: moving away from the ontological “who are Digital Natives?” to an epistemological “when and where are Digital Natives?” Her reasoning is that this change in perspective will allow us to unpack the hybrid term and thus determine whether it refers to a unique phenomenon worth exploring.

To answer the when and the where, Ben-David splits the term into its constituents, digital and native, contextualizing the words using two approaches: historiographical (when) for the digital and geopolitical (where) for the native.

“Digital” is situated, semantically, in the broader framework of technology-mediated social activism. The author applies the concept by placing two events side by side: first, the 1999 protests against the World Trade Organization in Seattle, and then the 2011 Tahrir Square protests in Egypt. Are these two phenomena different in nature? Is Tahrir Square a more technologically advanced version of Seattle? Are the basic mechanisms the same, albeit with new faces and shinier phones?

Ben-David postulates three reasons for placing the protests on different trajectories. First, “The Internet” of 1999 and “The Internet” of 2011 are quite different. Second, the demographics constituting the protests were not the same: in 1999 the protesters were mostly Civil Society Organization (CSO) employees and volunteers, while in Tahrir they were mostly civilians and concerned citizens connected through their local networks.

Tahrir Square. Conceptually different from Seattle.

The third concerns the spatial and symbolic nature of the protests. In Seattle, the protests were against large transnational corporations; Seattle was chosen because it hosted the World Trade Organization that year. In Egypt, the protest was directed against local corruption and concerned itself with local governance issues. Tahrir Square was chosen because the protests were directly about, of, and in Egypt.

Which brings us to the where. Ben-David uses the term “Native” to refer to the ongoing structural shift towards localized activism campaigns. This change came with the growing realization that transnational activism campaigns, which attempted to effect change across loosely cohesive cross-sections of the world, tended to lose touch with their points of origin and remain in suspended animation. Local campaigns seem to be more responsive and agile, especially in their ability to enter into dialogue with the needs of local populations. The spontaneity of action, the granular and modular level of the causes, and the lowered threshold of action needed to initiate a movement are some of the aspects Ben-David sees in emergent campaigns, which make them critically different from activism campaigns of the past.

Of course, location and time eventually intertwine. A growing trend in the development of the digital world has been the localization of frameworks, methodologies, and approaches. The author explains this change using Richard Rogers’s four stages of the evolution of politics about the web: web as global space, web as public sphere, web as interconnected social networks, and the current one, web as a localized phenomenon. By doing this, Ben-David is able to show us, without telling us, that the distinction between when and where is purely analytical and that they really are a single entity of the time-space continuum.

A different kind of hybrid-spacetime

Ben-David succeeds in contextualizing both the digital and the native as different sides of the same coin: as two manifestations of the growth and maturation that technology-mediated activism has undergone over the last ten years. The result is an internally consistent perspective which sees Digital Natives inhabiting hybrid timespaces alongside heterogeneous actors, where the relationship between the local and the global is contingent, transitory, and dynamic, and knowledge can be transformed and adapted to fit actors and their causes.

Samuel Tettner is a Venezuela-born globally situated cyborg, interested in science, technology and their critical and empowering understanding, currently pursuing a Masters degree in Society, Science and Technology in the Netherlands.

Editor’s Note from PJ Rey: Several months ago, I wrote a post called “Why Journals are the Dinosaurs of Academia,” which argued that the goal of academics to circulate their ideas as widely as possible is hindered by their own backward practice of attributing excessive symbolic value to print media. In fact, academia’s incentive structure rewards the best practices of yesteryear while wholly ignoring modern communication. This is largely a product of the entrenched interests of powerful senior scholars who seek to consolidate their privileged position by reifying their own established habits. I concluded that, for the academy to continue to be relevant (or, rather, to start being relevant again), we must begin to reward blogging, tweeting, wiki editing, etc.

Given recent interest in the topic, I thought I would repost Patricia Hill Collins’ response.

I agree that the status of a journal should be decoupled from the fact of whether or not it exists in print. The wind is already blowing in that direction as publishers realize how expensive print really is.

I don’t think that journals are necessarily dinosaurs. A good peer reviewed journal by experts in a field can become one important location that can help us wade through seemingly endless ideas on the web with an eye toward influencing informed decisions about quality. The sheer volume of ideas that are now available on the Web means that we need some sort of system (or multiple systems) of vetting those ideas. The journal system, especially in an era of ever-more-specialized journals, can help do that. Digital journals are well-positioned to help with this task. I, for one, don’t want a “thumbs up” Facebook model of voting on intellectual quality.

In short, one good journal can help a public that is interested in a particular field of inquiry navigate through the vast amounts of data that are now on the Web.

The issue for me is the tightly bundled nature of the current hierarchical ranking of journals with employment hierarchies within the academy. It’s as if the journal system has been hijacked by the audit culture of the academy, one that requires that we place a “value” on everything. Higher education is on a slippery slope, rushing to a place of ignoring the quality of the actual ideas in a journal article and instead assuming that a particular article must be “good” because it is published in a “ranked” journal. I find this kind of groupthink distressing — it stunts creativity and privileges those who are already at the top.

So what’s the real dinosaur here and what’s likely to happen to it? Will it go away on its own, running out of food to sustain it? Will it get so large that it will collapse under its own weight, leaving the rest of us a tasty carcass? Or are we missing the current signs that point to transformation in the works, a hybrid entity that fits nicely with your cyborg sensibilities?