Social theory should both grow out of, and be applicable to, empirical phenomena. As such, an important part of theorizing is to understand the substantive realities about that which we theorize. When theorizing about new technologies, this means keeping up with a highly complex and quickly changing empirical landscape. This post is a roundup of some recent empirical findings about social media trends, with a focus on Facebook—the current social media “hub.”

Study 1


Title: “Why Facebook Users Get More than They Give: The Effect of Power Users on Everybody Else.”

 Authors: Keith N. Hampton, Lauren Sessions Goulet, Cameron Marlow, Lee Rainie

Out of: Pew Internet and American Life Project

Highlights:

  • 40% of Facebook users in our sample made a friend request, but 63% received at least one request
  • Users in our sample pressed the like button next to friends’ content an average of 14 times, but had their content “liked” an average of 20 times
  • Users sent 9 personal messages, but received 12
  • 12% of users tagged a friend in a photo, but 35% were themselves tagged in a photo

 

Excerpt:

The average Facebook user gets more from their friends on Facebook than they give to their friends. Why? Because of a segment of “power users,” who specialize in different Facebook activities and contribute much more than the typical user does. The typical Facebook user in our sample was moderately active over our month of observation, in their tendency to send friend requests, add content, and “like” the content of their friends. However, a proportion of Facebook participants – ranging between 20% and 30% of users depending on the type of activity – were power users who performed these same activities at a much higher rate; daily or more than weekly. As a result of these power users, the average Facebook user receives friend requests, receives personal messages, is tagged in photos, and receives feedback in terms of “likes” at a higher frequency than they contribute. What’s more, power users tend to specialize. Some 43% of those in our sample were power users in at least one Facebook activity: sending friend requests, pressing the like button, sending private messages, or tagging friends in photos. Only 5% of Facebook users were power users on all of these activities, 9% on three, and 11% on two.

 

Study 2

Title: “Real People vs. Fake Profiles”

By: Barracuda Labs

 Highlights: I’ll let the infographic speak for itself

 

 

Study 3

Title: “Getting beeped with the hand in the cookie jar: Sampling desire, conflict, and self-control in everyday life.”

Authors: Wilhelm Hofmann, Kathleen D. Vohs, and Roy F. Baumeister

From: Presented at the 13th annual meeting of the Society for Personality and Social Psychology

Highlights: A study of 205 adults found that their desires for sleep and sex were the strongest, but the desires for media and work were the hardest to resist. Surprisingly, participants expressed relatively weak levels of desire for tobacco and alcohol. This implies that it is more difficult to resist checking Facebook or e-mail than smoking a cigarette, taking a nap, or satiating sexual desires.

 Excerpt (from press release):

 In the new study of desire regulation, 205 adults wore devices that recorded a total of 7,827 reports about their daily desires. Desires for sleep and sex were the strongest, while desires for media and work proved the hardest to resist. Even though tobacco and alcohol are thought of as addictive, desires associated with them were the weakest, according to the study. Surprisingly to the researchers, sleep and leisure were the most problematic desires, suggesting “pervasive tension between natural inclinations to rest and relax and the multitude of work and other obligations,” says Hofmann, the lead author of the study forthcoming in Psychological Science. Moreover, the study supported past research that the more frequently and recently people have resisted a desire, the less successful they will be at resisting any subsequent desire. Therefore as a day wears on, willpower becomes lower and self-control efforts are more likely to fail, says Hofmann, who co-authored the paper with Roy Baumeister of Florida State University and Kathleen Vohs of the University of Minnesota.

 Study 4

 

Title: “Facebook Makes a Splash in the Bathroom”

 

 Author:  Vlad Gorenshteyn

From: AISMedia

Highlights: According to a telephone survey conducted by marketing firm AISMedia, about 1/3 of Facebook users engage in Facebook activity while in the bathroom. Women were slightly more likely than men to report using Facebook in the bathroom (54.4% and 46.6%, respectively), and the most prominent ‘bathroom-bookers’ were between the ages of 30 and 39.

 Excerpt:

We contacted 500 Americans and asked them a rather personal question: “Do you ever use Facebook on your mobile device while you’re in the bathroom?” Our survey results reveal that Facebook has become so prolific, that nearly 1/3 of people (27 percent to be precise) can’t resist the urge, responding “yes”!

 

General Summary

We see several interesting and related findings in these seemingly disparate studies. From the Pew Research report, we see that a) practices of Facebook use vary widely among members, and b) the way members of our network use Facebook shapes our own social media experience. Moreover, we see that some are characterized as “power users,” engaging in high levels of activity. Perhaps these “power users” partially account for the findings of Hofmann et al., as they may be unable to resist the urge to engage in social media interactions. Relatedly, it may be these power users and/or social media addicts who bring their mobile devices with them into the bathroom. Then again, the technology that allows us to bring Facebook into the bathroom (i.e. mobile devices) may simply offer a more entertaining alternative to paper-based reading materials. Finally, an analysis of real versus fake Facebook profiles alerts us to a socially inappropriate way to use Facebook: dishonestly.

Morality, or our internalized barometer of right and wrong, is heavily integrated into almost all areas of public and private life. It has been and continues to be at the center of social and philosophical theorizing, heated political debates, religious movements, artwork, fictional writing, and interpersonal relationships. Here, I want to talk about the ongoing moral dilemma of the cyborg.

Morality is complex. Moral tenets are numerous, overlapping, and often contradictory. Morality not only varies between social groups, but is multiplicitous within social groups, and even within a single individual. Here, I discuss two pervasive and contradictory moral tenets that plague the contemporary cyborg. I argue that the tension between these moral tenets plays a significant role in the largely ambivalent and often anxious relationship between humans and the technologies that augment humanity.

Specifically, I discuss the contradictory moral tenets of potential realization and self-reliance.

Potential realization and self-reliance

By potential realization, I refer simply to the full realization of one’s potential—to be as successful as possible. As bioethicist Carl Elliott persuasively describes, social actors in contemporary western societies view life as a project. We keep our life-projects in mind as we make decisions about how to behave, what to pursue, with whom to engage, and what to prioritize. We want our project (i.e. our lives) to be as successful (i.e. well-lived) as possible. We want, as the famous Army slogan goes, to “be all that we can be.”

This is largely an ends-based moral tenet. The goal is a successful life; the means are less important. The moral actor in contemporary western society has financial wealth, social prestige, a thriving social life, and a healthy-looking, attractive body. These are signs of a life well-lived. These are moral trophies, earned by any means possible. To be poor, lonely, unattractive, or untitled is to embody moral failure and suffer social stigma.

 At the same time, we maintain a strong moral commitment to individual accomplishment and self-reliance. We revel in the idea of pulling ourselves up by the bootstraps. This is seen in the vast popularity of the rags-to-riches narrative, and our public heroization of the self-made (wo)man through film, literature, and journalism.

This is a means-based moral tenet. The moral actor makes it on hir own, rejecting all forms of external aid. This moral tenet plays the important (delusion-preserving) role of reinforcing rhetorics of equal opportunity and level playing fields.

Moral tension

The problem is that these two deeply ingrained moral tenets do not peacefully co-exist. To produce the optimal life-project is to use all available resources. To rely only on the self is to eschew all external help. This tension is exacerbated in the cyborg era. To rely upon the quickly developing technologies of the time is to breach the value of self-reliance. To choose not to utilize all available tools in the quest for greatness is to fail to realize one’s full potential, sacrificing the end product of the life project. It is within this tension that the cyborg must negotiate hir morality, and so develops an anxiety-ridden, ambivalent relationship with the technologies of the time.

To demonstrate this tension, I provide three examples: steroid controversies, weight-loss surgery, and digitally mediated social interaction.


The 2008 documentary Bigger Stronger Faster, directed by Chris Bell, explores the competing rhetorics surrounding steroid use. Bell shows how sports fans feel cheated by record-breaking athletes who take performance-enhancing drugs, and yet buy tickets in droves to see the athletes whose feats are facilitated by synthetically produced testosterone. To opt-out is to sacrifice performance, and potentially threaten the success of the team. To opt-in is to cheat, and potentially undermine the honor of the sport. Similarly, Bell interviews competitive bodybuilders and weight lifters who frame their steroid use as the necessary means to get as big and strong as possible. To opt-out is to sacrifice size and strength. To opt-in is to rely upon more than one’s own hard work. In both cases, opting-out means leaving potential unrealized, while opting-in de-couples the athletic accomplishments from the athlete himself.

Karen Throsby’s wonderful article on narrative forms of stigma management after weight-loss surgery tells a substantively different, but theoretically similar, story to Bell’s documentary. According to (highly oppressive, judgmental, and problematic) American assumptions, fat bodies are derided for their immorality—as these bodies fail to meet perceived standards of beauty and health. Because body size is obvious upon first meeting/seeing, assumptions about morality are instantaneous. To display fatness is to display immorality. To remain fat is to remain immoral. To accomplish morality, then, one must be able to display a thin or progressing-towards-thin body. At the same time, those who employ surgery to aid in the process of weight loss are charged with cheating, laziness, and an immoral reliance on medical technologies rather than will power, hard work, and self-discipline. (This argument of course ignores the technologically augmented foods that have been scientifically stripped of calories so that we can still indulge in brownies and ice cream while maintaining thin bodies—but I digress). Throsby shows how post-operative women must shift from managing one form of moral stigmatization—the discredited state of fatness, to the management of another—the discreditable state of bodily change through enhancement technologies. They do so through “passing” as “normal” dieters, or emphasizing the continued efforts of diet and exercise—as opposed to the initial surgery—in the accomplishment and maintenance of their new physical states.

As a final example, I want to locate this moral tension in a somewhat less obvious, less extreme (although highly prevalent) moral dilemma: the use of digital media to create and maintain social connections. A successful life project entails not only material triumphs, but meaningful relationships. Increasingly, digital technologies are integrated into our social interactions. To opt-in is to run the risk of relying too heavily upon these technologies, sacrificing interpersonal skills in favor of allegedly shallow exchanges and meaningless connections. I discussed this in my post last week. At the same time, to opt-out is to remove oneself from a highly trafficked site of interaction. The deleterious effects of such removal can be seen in interviews that I have conducted with social media users who have “dropped off” for a time. Although they describe it as liberating, they also describe a feeling of disconnection—even with their closest friends. They necessarily miss out on part of the conversation, and exclude themselves from part of the shared experiences that make up the stuff of relationships.

This moral tension, demonstrated in the three cases above, facilitates an anxiety-ridden relationship between humans and the technologies that augment us. To opt-out is to potentially sacrifice the end product of the life-project; to opt-in is to potentially sacrifice the process. So we worry, we act, we try to strike a moral and practical balance, and we construct narratives to support the complex and often contradictory moral paths that we decide to take.

 

 

 

A recent study published in Cyberpsychology, Behavior, and Social Networking looks at the relationship between Facebook use and perceptions of others’ lives. The authors, sociologists Hui-Tzu Grace Chou and Nicholas Edge from Utah Valley University, find that those with greater involvement in Facebook feel that others have better and happier lives than they do. This is amplified for those who have many Facebook Friends with whom they do not interact outside of the online platform. These findings have been picked up by several mainstream media outlets and, unsurprisingly, are used as evidence of the deleterious impacts of an over-digitized world. An ABC news story, for example, retrieved through Yahoo! News, concludes with the following advice:

 “So if you are looking for a way to cheer yourself up…you may do well to log off Facebook. Call your best friend instead.”

The comment sections are full of vindicated technological dystopianists extolling the benefits of face-to-face (read: real) over digitally mediated interaction. To keep things consistent, I will share some of the comments from the news story linked above:

 

“I asked one of my 1,000 Facebook friends if anyone would drive me to the airport…I ended up taking a cab!”

 “Cancel facebook and see how many of those ‘friends’ call ya.”

  “Between people fooling themselves into thinking they have lots of ‘friends’ and becoming socially retarded with ‘Smart’ phone I see bad things for our future.”

 “Get off Facebook and go out and make real ‘live’ friends. It’s much more fun I guarantee you.”

Admittedly, a study such as this is powerful evidence for technological naysayers. A negative relationship between Facebook usage and mental well-being indeed offers a dismal picture of a constantly connected populace. I counter this, however, by arguing that the problem rests not in the platform itself, but in the potentially unhealthy ways that some people engage with it—just as there are unhealthy ways to engage in all forms of sociality, including face-to-face interaction. In order to make this argument, I need first to clarify the social psychological process represented by the findings in this study.

A well-established tenet of sociology is that we come to see ourselves as others see us. This was most famously articulated by Charles H. Cooley with his concept of the looking-glass self. One would assume, based on this tenet, that if we all see each other as leading happy, successful, and fulfilling lives, we would in turn come to see ourselves in this same light. The findings from the study obviously do not support this. Rather than basking in a shared aura of positive energy and self-esteem boosting reflections, we engage in evaluative practices that are somehow blocked from the targets of evaluation (i.e. our Facebook Friends). The sociological problem then becomes locating this blockage. Why does Cooley’s tenet fail to apply? Why do others’ positive evaluations fail to translate into positive self-views?

This is the case, I argue, because the looking-glass self refers to an interactive relationship, and the relationships discussed here, though taking place through a potentially interactive medium, are not necessarily engaged in interactive ways. I want to draw particular attention to the finding that feelings of relative inadequacy are amplified for those with large numbers of Facebook Friends whom they do not personally know. These relationships are less about interactivity and more about surveillance. They are less about mutual growth, depth, and closeness, and more about looking, judging, and comparative self-evaluation. They are about, as one particularly clever commenter from the article above points out, “keeping up with the Kardashians.” These are not augmented relationships, but wholly digital ones, where the seer and seen, though officially connected, fail to interconnect. The social psychological process at work here, then, is not the looking-glass self, but comparison processes.

According to social comparison theory, classically articulated by Leon Festinger in 1954, we utilize others as a measuring stick against which we learn about ourselves. In constructing this measuring stick, we use all available information. On Facebook, “all available information” is highly selective, and consists primarily of flattering pictures, LOLs, and status updates about happy relationships, happy hour, and happily accepted promotions. Even public self-denigration is often met with an onslaught of positive comments, revealing the self-denigrator not only to be overly modest, but surrounded by close friends who think highly of hir. If we rely on Facebook primarily as a surveillance device, unable to incorporate any information not put forth on the Facebook page, then the measuring stick against which we judge ourselves will represent an unattainably fulfilling existence—making us feel bad.

This is an unhealthy way to use Facebook. And yes, the architecture of Facebook facilitates this kind of use. This does not mean, however, that Facebook is inherently bad for mental health. The architecture of Facebook also affords highly interactive, engaging, and mutually stimulating relations. When used primarily as a platform of interaction (rather than surveillance), Facebook augments existing and potential relationships. It allows us to keep connected with people who care about us, no matter how geographically far away. Facebook allows us to share good news and receive positive feedback. Facebook allows us to share bad news and receive support. These interactive activities promote sociality, mutual support, and an outlet for venting frustrations.

The point is that all forms of interaction can be practiced in a variety of ways—some healthy and some unhealthy. This includes face-to-face interaction. Just as poring over the carefully crafted photo albums and strategically curated wall posts of Facebook-only-Friends can lead to feelings of relative inadequacy, insecurity, and self-consciousness, so too can sitting in a coffee shop for hours discussing who has the nicest house, who has gotten fat since high school, and who does (and does not) deserve professional respect.

Burn Book: A non-digital form of unhealthy social behavior

So if you want to feel better, stop stalkernetting and write on your best friend’s wall. While you’re at it, purge those Friends who are merely targets of surveillance; they are messing with your self-evaluative measuring stick.


Eksobionics, a company dedicated to the augmentation of the human body, recently developed Ekso—a “bionic exoskeleton that allows wheelchair users to stand and walk.” In this post, I pose a question to which I honestly do not have a definitive answer: Does this development represent human progress or does it further perpetuate the subordination of physically impaired bodies?

I begin with a brief background on the company and a description of the product. I then present arguments for both progress and ableism. Finally, I question—but ultimately defend—the validity of this dichotomy.

Eksobionics was founded in 2005 under the name Berkeley Exoworks in partnership with the Human Engineering Laboratory at UC Berkeley. They began by developing exoskeletons that allow humans to carry more weight and move more efficiently on diverse terrains. In true cyborg-development style, the company received funding in 2008 from the Department of Defense to develop and eventually distribute their technology for military field use. Today, under the name Eksobionics, the company has developed and is prepared to distribute Ekso (the bionic exoskeleton that allows wheelchair users to stand and walk) to rehabilitation facilities. By 2014, they hope to make it available for everyday use.

The argument that Ekso represents human progress is quite straightforward and easy to make. The physically impaired human body is augmented by this device, given the ability to stand and walk where before this ability was not granted. Not only does this give wheelchair users access to spaces previously unavailable, but it can also have positive health benefits, as the wheelchair user can exercise hir leg muscles, improve breathing capacity, and relieve the skin that becomes susceptible to pressure sores from extended periods of sitting. Moreover, and perhaps most importantly, many (though certainly not all) people with spinal cord injuries do hope to walk again. This technology aids in the accomplishment of this goal.

Less straightforward is the argument that Ekso represents a step backwards, a move towards the further denigration of physically impaired bodies. Here we have a product made to improve the lives of those with spinal cord injuries, and yet, it implies that walking, rather than wheeling, is necessarily the preferable state of mobility. I must point out here that a body in a wheelchair is already an augmented body. The technology of the chair, whether manual or electric, grants the mobility that is organically restricted. A practiced wheelchair user can indeed often move more quickly than a person relying on leg muscles alone. When in a wheelchair-facilitating space, a wheeler can maneuver quite easily, accomplishing necessary tasks and acting independently. The problem, of course, is that many places and spaces do not facilitate such free use of a wheelchair. I wrote about this more extensively in an earlier post. With this in mind, I will now elaborate on the difference between disability and physical impairment. It is in this difference, I argue, that we see the ableism that is built into the Ekso.

According to the social model of disability (as opposed to the medical model), an impairment is simply a physical condition. The legs are immobile. The eyes do not see. The ears do not hear. These conditions are inherently value neutral. They do not, in any essential way, hinder the extent to which a person can engage as an active member of society. These impairments become disabling, however, when experienced within an environment that fails to accommodate the spectrum of physical and mental states. Sight-only crosswalks are disabling for those with vision impairments. Public speeches without sign-language interpreters are disabling for those with hearing impairments. Buildings without ramps and/or elevators are disabling to those with mobility impairments. The technology of the Ekso assumes able-bodied advantage, and so works to fit the impaired body into an ableist environment. The impaired body is, by implication, devalued.

Having laid out both sides of the argument, I must now take a step back and question the validity of the dichotomy itself. Indeed, I have laid out a theoretically false dichotomy between ableism and progress. At an academic level, this dichotomy, as with most dichotomies, is problematic. It incorrectly assumes a zero-sum game in which a device that aids in walking necessarily denigrates the wheeling body. Empirically, however, this dichotomy is not false. The development and distribution of technologies require resources, including time, money, space, and innovative minds. These resources—especially money—are limited, and choices must be made. Will we use these resources to help wheelchair users walk, or to make inaccessible buildings more accessible? Will we use these resources to help blind people see, or to improve web-reading devices? Will we use these resources to develop medications for ADHD, or to develop curriculums and work spaces that accommodate those with high energy and quick-moving thought patterns? In a perfect world, these “or” questions would be nonsensical. In the real world, however, the allocation of resources into one side means a decrease in resources for the other. So do we want to use our limited resources to improve ability at the individual level, while perpetuating an ableist environment, or to create a more accommodating environment, where impairments are no longer disabling?

An issue not to be overlooked here is one of access. Who can afford the treatment and devices that improve individual mobility? If their proliferation perpetuates a disabling environment, then what is to come of those (likely the vast majority) who cannot afford these specialty devices? The result could be potentially devastating.

On the other hand, and to disclose fully, as a woman who terribly misses hiking in the Virginia mountains, is rejuvenated by a brisk walk, and basks in the sweat-pouring experience of a long run in the mid-summer Texas heat, were I to acquire a serious spinal cord injury, I would find hope in a device like the Ekso. But then again, I am a product of an ableist society.

Note: Special thanks to Huong Le for bringing the Ekso to my attention

Prosumption refers to the merging of production and consumption, where the consumer produces that which s/he consumes. The term was first introduced by Alvin Toffler in 1980 in reference to the marketplace, and reinvigorated by Ritzer and Jurgenson when they applied it to Web 2.0. In a special issue of American Behavioral Scientist (edited by Ritzer, with an introduction by Jurgenson, and an article by fellow Cyborgologist PJ Rey), I argue for the extension of prosumption into the realm of identity. This was elaborated upon in a Cyborgology post by Nathan Jurgenson and myself.

Specifically, Nathan and I looked at the ways in which new identity categories are prosumed via digital technologies. Digital technologies enable geographically dispersed individuals to meet, interact, and collaboratively write new kinds of selves into being. We then wondered about the destructive effect of identity prosumption on the postmodern project of categorical queering, as well as the liberating result of providing categories into which previously marginalized individuals can fit, finding community and a legitimate label with which to define themselves. It is this last point–the liberating and constraining potential of digitally enabled identity prosumption–that I will further disentangle in this post.

Early researchers celebrated the liberating potential of the internet as a space in which actors could separate from their physical bodies, geographic locations, and personal histories to engage in identity play. This celebratory mood has been tempered in recent years, as we’ve come to understand that bodies and histories come with us into digitally mediated interactions. Moreover, with the increasing prevalence of nonymous online environments (like Facebook and Twitter), physical realities are increasingly enmeshed with digital interactions, making accurate self-representation (rather than creative self-exploration) the standard for online engagement. Finally, to prosume a new identity, the social actor must contend with a social network that can negate newly acquired identity meanings in a very public way.

Still, the internet does offer opportunities for identity growth and change, both at the level of individual social actors and at the level of cultural realities more broadly. At the individual level, for example, Samuel Tettner blogs about his digitally enabled journey into vegetarianism. He essentially prosumes a vegetarian identity by interacting with a geographically separate vegetarian friend through e-mail, learning about food systems through various websites, and changing his physical food practices. At the cultural level, transableists, asexuals, and bug chasers, through digital connection and collaborative public sharing, now have names, communities, language, and negotiable meanings with which to make sense of themselves, and these categories are made available in the identity marketplace.

In short, digitally enabled identity prosumption must overcome the challenge of lateral surveillance and pervasive documentation, but also provides a path to a more abundant identity menu at the individual level, and a template for categorical construction at the cultural level.

This picture, however, focuses only on the front end. It focuses on how identities are prosumed, and speculates about the outcome. Identity negotiation, however, is a continuous process. I therefore want to explore the back end of identity prosumption–what happens once an identity is prosumed? I argue that the same technologies which grant us access to an abundance of identity categories also trap us (though not inescapably) within the categories that we construct.

Increasing public documentation means that what may have been a phase, vaguely remembered and scarcely commemorated, becomes enshrined through status updates, check-ins, photographs, and the concomitant interactions by others with these documented aspects of the self. As epitomized by Facebook’s new Timeline feature, our digital reflections are historically layered. If the layers fail to congeal, the actor risks accusations of inauthenticity–one of the greatest moral sins in contemporary society. At the individual level, the moral imperative for authenticity is therefore the mechanism of identity stasis.

At the cultural level, institutionalization is the mechanism of stasis. Communities that form around a marginalized commonality (e.g. the stated need for a physical impairment) come quickly to establish a name for this way of being (e.g. transabled). This name, once applied, spreads with members across the web, obtains a Wikipedia page, is picked up by journalists and researchers and published in magazines, newspapers, and journals. These labels and meanings can, in some cases, be turned into medical conditions, partially eternalized in the Diagnostic and Statistical Manual of Mental Disorders (DSM).

Digital technologies therefore facilitate the acquisition of new identities both interpersonally and culturally. This is enabling. At the same time, once acquired, these identities lose much of their fluidity. They are documented, archived, spread, and tangibly incorporated, constraining the evolution of a self and of a culture.

 

 

 

Facebook is now rolling out the new Timeline format. Reviews, as usual, are mixed. Some applaud the now historically situated self-presentation, while others express discomfort at the increasing reach of this platform as it now invades a past in which it was previously absent. I am not going to engage these debates in the present post. Instead, I will talk about what Timeline does in terms of self and identity.

Timeline, I argue, integrates self narratives fragmented by their simultaneous temporal location prior to, and at the height of, augmented society.

Narratives are linear stories. They have a beginning, middle, and end, and usually a coherent theme. Self narratives are the stories that we tell about ourselves. They are necessarily selective, highlighting some things while ignoring or minimizing others. Self narratives take that which is messy, fragmented, and disjointed, and wrap it into a clean, cohesive, and consumable package. The self narrative has very real consequences. We not only make sense of ourselves through these narratives but are then guided in our actions by this sense making. It is through self narrative that we learn who we are and make decisions about what we should do.

Facebook is an important tool in the construction of self narratives in an augmented society. Our profiles act as tangible reflections of where we have been, what we have done, who we are, and what we are therefore likely to do. These narratives are co-constructed and, as pointed out in a previous post by Nathan and me, prosumed. This project of linearity, however, is complicated by a past that took place entirely outside of social media technologies. The self, as told through Facebook, privileges the present, and only with effort pays homage to the past. Enter Facebook Timeline.

The pre-digital past is reconstructed in a digital format using the logic of augmented reality. Childhood pictures are tagged and commented upon. Occurrences and people are granted significant roles in the narrative by listing them as “events” in a particular time period. Others are pushed to the background or cut out of the story altogether. Through links and tags, multiple narratives weave together to co-construct each other’s stories and digitize an analog past.

Technologies are, by nature, biased. They are biased by the humans who create them. They are biased by the cultures in which they are produced. They are biased by the perceived needs of intended consumers and they are biased by agentic practices of consumption. A recent TEDMED talk (Why Hospital Rooms Don’t Work) by architect Michael Graves highlights the biased nature of technologies. Specifically, he demonstrates the embeddedness of privilege.

Michael Graves is a renowned architect. In 2003, Graves developed a rare (and still mysterious) illness that left him paralyzed. While fighting the illness and then undergoing rehabilitation, Graves spent a significant amount of time in hospitals. He found the facilities not only to be aesthetically displeasing, but impractical and sometimes downright inaccessible for a person with mobility impairments. He describes unreachable light switches and faucet handles, rooms so small that maneuverability is impossible, really ugly floral patterns, and an overall requirement that he, as a person in a wheelchair, ask for help with tasks that he should be able to complete independently. Summarizing these shortcomings he says:

 They just made the most frustrating mistakes you could ever imagine and made your cure more difficult. Your room should make it easier for the doctors and the aides and the patient. But instead it does just the opposite.

These medical facilities, in short, were made by and for the able-bodied. Or in other words, the architecture of these medical spaces was rooted in Able-bodied Privilege.

I argue that technologies can be biased in two broad ways: Explicitly and Seamlessly.

By explicit bias, I refer to technologies constructed with explicit political intention. Robert Moses, for example, an urban planner working in New York in the first half of the 20th century, built bridges that were too low to be cleared by buses, denying access to poor minorities who relied on public transit. #OWS occupiers have constructed tent cities and (both variants of) the human microphone with the explicit political intention of disrupting ties between government and big business. And electric car manufacturers design technologies that reduce carbon emissions and promote the political goal of environmental stewardship. The biases in these technologies are explicit in that the creators worked with intention. They purposefully embedded the bias into the technology, and utilize the technology for a particular political goal. Seamless bias does not operate so clearly.

Robert Moses

By seamless bias, I refer to technologically embodied privilege. By privilege, I do not necessarily mean disproportionate access to resources (although this is almost always the case), but to membership in the taken-for-granted unmarked social category: White Privilege; Male Privilege; Able-bodied Privilege. Bias, in this sense, is seamlessly built into the technology without conscious intentionality, as the producer and perceived consumer operate from the logical viewpoint of the unmarked human. Those who create IQ tests, for example, rely on a logic of whiteness not because the creators wish to penalize people of color, but because the logic of whiteness is, in U.S. society, the taken-for-granted unmarked form of logic. It is this latter form of bias that is highly difficult to pinpoint, and highly illuminating once discovered. It is difficult to pinpoint because it requires that we become conscious of that which eludes notice. It is illuminating because it shows us the assumptions under which we operate—it marks that which heretofore remained invisibly influential.

 It is the seamless form of bias that was pinpointed by Michael Graves. Specifically, he highlighted the Able-bodied Privilege in which the architecture of hospital rooms is embedded. Able-bodied Privilege is rampant in U.S. society (and largely internationally). It is seen in city planning, as sidewalks often slant towards the road, making them efficient for draining water, but dangerous for a person navigating the terrain in a wheelchair. It is seen in everyday linguistic practices, as we conflate inadequacy with “lameness.” It is seen in internet technologies, as websites are designed first for seeing and hearing individuals, and sometimes adapted for those with visual and/or hearing impairments.

Able-bodied Privilege is perhaps epitomized by the architecture of medical facilities. These spaces are created for the rehabilitation of those with physical impairments, and yet, as Graves points out, are designed according to able-bodied logic.

Graves is working to rectify the bias by designing hospital spaces that enable (rather than constrain) those with mobility impairments. In doing so, he requires his staff to spend time in a wheelchair. He takes them out of their privileged positions and gives them a glimpse (be it brief and imperfect) of the view from which the consumers of this technology will operate. This further highlights the need for diversity in all fields of technology design and production. To get outside of privileged logic, we need the perspectives of those looking in, as well as the inclusion of voices from those whom privileged logic leaves out.

Finals are a stressful time for students, as numerous deadlines—often requiring the accumulation of a semester’s worth of work—converge into one terrifying week. Pajamas stay on. Coffee gets brewed. Some thrive.  Some sink. And a few, in a panic, copy/paste something directly from Wikipedia.

From the instructor side, finals are also stressful, but in a different way. Not only do we have to rush through stacks of papers, inputting senior grades in time for graduation, but we have to do so with the knowledge that this is the moment of truth—the moment where we find out how effective we have been all semester (if effective at all). As with our students, this process can bring moments of great triumph, such as grading a perfect test or reading a profound paper—one that perhaps even teaches us something. It can also bring defeat, where we must acknowledge that some students never truly engaged with the material. And finally, it brings sleepless nights (and probably an indignant Facebook status update) as we inevitably find the direct Wikipedia quote copied, pasted, and sometimes linked in a final paper.

I argue here that cases of internet plagiarism (such as text copied from Wikipedia and presented as original work in a final paper) are largely the result of a pedagogic failure to integrate contemporary technologies into the learning process. I want to qualify this argument in two ways. First, by saying that plagiarism is never okay, and the students who engage in it should (and usually do) know better. I think it would be less common, however, if we taught students how to effectively incorporate informal internet searches into the learning process. Second, I qualify this argument by saying that the “failure” does not sit with any one particular instructor/educator, but with a shared pedagogical logic that has not kept up with the technologies of the time (there are of course many exceptions, such as Dan Greene’s American Studies course that he wrote about on this blog).

When in need of information, Digital Natives WILL Google. It is our responsibility, as educators, to help them Google responsibly. We can (and I argue should) teach students how Google, Wikipedia, YouTube and other similar sites can be effective research tools and content supplements.

When teaching students to conduct original research, instructors sometimes demonize Google and Wikipedia. They draw a hard and fast line between academic and non-academic sources, using Wikipedia as the counter example to the peer-reviewed journal article. To a degree, these instructors are correct. Wikipedia is not peer reviewed, and information can be inaccurate. Indeed, it is important that students can differentiate between different kinds of sources, and that they use the right kinds of sources in formal research papers. However, when beginning a literature review, on a topic about which one knows little, Google and Wikipedia are great tools.

I have taught two semesters of Advanced Methods of Research at Texas A&M. A large part of the course is for each student to conduct an original research project and write up a journal-article-style report. I find that students are often baffled and overwhelmed by the literature review process. The problem is not that they are unable to summarize and string together a series of related articles, but that they do not know where to begin looking for the right articles. They are unfamiliar with the substantive field in which they are studying and afraid that they will miss an essential piece of research. This is a problem common to all of us in the academy as we venture into unfamiliar research areas. So I tell my students to do what I do in this situation: Start with Google. This is met with wide eyes and giggles. I then impress upon them that Google is a great place to start, but NEVER the place to end.

We talk about Google Scholar, tracking citations, and reference sections in Wikipedia. I then talk them through finding the full versions of these articles via the University library system, and further mining citations from there. In short, I help them start in a place where they are comfortable (informal internet searches) so that they can more seamlessly move into less charted territory (the realm of peer-reviewed journals and other academic sources).

Google and Wikipedia (and others) are also useful (when used wisely) for supplementing in-class content. Especially when teaching large classes, we often have students of varying levels and with very different backgrounds. Moreover, we sometimes forget that the students in our classes have not read everything that we have. As such, we inevitably reference (without explaining) a school of thought, framework, or social thinker with which they are not familiar. Here enters responsible Googling. Certainly the plethora of information on The Web can lead students astray, which is why we must talk with them about source differentiation (e.g. .edu or .org versus .com) and triangulation (do most places give similar facts/summaries?). Moreover, we need to make ourselves available to talk with students about the content that they’ve found—without making them feel as though they’ve somehow cheated by watching a YouTube video about postmodern thought.

Digital natives have special and unique skills. As educators, we can help hone these skills so that students can put them to optimal use. By de-stigmatizing informal internet searches, we not only broaden students’ intellectual scope, but also empower them to become more involved in their own processes of learning and education.

 

Bloggers here at Cyborgology have explored the internet meme in interesting ways. Most notably, David Banks analyzed the performative meme, arguing for its function in cultural cohesion, and P J Rey delineated the political and strategic role of internet memes in the #OWS movement. Here, I wish to take a step back, and deconstruct the very structure of the internet meme, exploring what the internet meme is and what it does. Specifically, I argue that the internet meme is the predominant (and logical) form of myth in an augmented society, and that it both reflects and shapes cultural realities.

To make this argument, I must first put forth definitions of both myth and meme.

I use myth here as it is used in semiotics (or the study of symbols), specifically drawing on Roland Barthes’s conceptualization. Myth, according to Barthes, is a representation of the dominant ideologies of our time. He delineates the structure of the myth as a second-order semiological system in which the sign (the totality of a concept and form) becomes the signifier (mere form). In his classic example, Barthes shows a depiction of a young Black soldier giving the French salute. This image is at once a complete sign (Black soldier gives French salute) and the form or signifier of the second-order system: the myth (France is a great empire supported by all, regardless of color or creed). Importantly, Barthes points out that the myth is decoupled from its roots. The construction of the myth is forgotten and the mythic sign is stated as fact. It is this decoupling which makes myth such a powerful transmitter of culture and ideas.

 

A meme, as first termed and defined by biologist Richard Dawkins in 1976, is a cultural unit that spreads from person to person through copy or imitation. Memes both reflect and shape cultural discourse, mood, and behavioral practice. The evolutionary process of memes is compared by Dawkins and others to natural selection in genes, whereby reproductive success of a given meme is linked to variation, mutation, competition, and inheritance. In other words, memes that outperform other memes and shift appropriately with cultural sentiments will thrive and persist, while memes that fail to proliferate will fall into extinction.
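To make the selection analogy concrete, here is a minimal toy simulation in Python—my own illustration, not a model from the post or from Dawkins. The variant names and "fitness" values are hypothetical stand-ins for cultural resonance: copies are drawn in proportion to fitness, a small fraction mutate, and weaker variants drift toward extinction.

```python
# Toy sketch of memetic selection: variation, mutation, competition, inheritance.
# All variant names and fitness values below are invented for illustration.
import random

def simulate_meme_pool(generations=20, pool_size=1000, mutation_rate=0.01, seed=7):
    rng = random.Random(seed)
    # Hypothetical meme variants and their relative reproductive success.
    fitness = {"variant_a": 1.0, "variant_b": 1.3, "variant_c": 0.7}
    pool = [rng.choice(list(fitness)) for _ in range(pool_size)]
    for _ in range(generations):
        # Competition + inheritance: copies are drawn in proportion to fitness.
        weights = [fitness[m] for m in pool]
        pool = rng.choices(pool, weights=weights, k=pool_size)
        # Mutation: a small fraction of copies flip to a random variant.
        pool = [rng.choice(list(fitness)) if rng.random() < mutation_rate else m
                for m in pool]
    return {m: pool.count(m) for m in fitness}

print(simulate_meme_pool())  # the higher-fitness variant comes to dominate the pool
```

Running the sketch shows the basic dynamic Dawkins describes: even a modest fitness advantage compounds over generations, while mutation keeps a trickle of low-fitness variants in circulation.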

Internet memes refer to these cultural units (catch phrases, images, fashions, expressions, etc.) that spread rapidly via internet technologies, constructing, framing, and revealing cultural realities. Lolcats, for example, a quite successful internet meme, reflects a growing affection between humans and companion animals, and has created the normative linguistic practice of asking if one “can haz” something. In a less innocuous example, the numerous #OWS memes (described in PJ Rey’s post linked above) portray, reinforce, and aid in the construction of what Nathan Jurgenson describes as an atmosphere of augmented dissent.

 

We can see clearly that the myth and the meme share a semiotic structure in which the first-order sign becomes the mythic and/or memetic signifier. The Guy Fawkes mask, for example, is simultaneously the sign of an historical moment, a popular film, and the hacker group Anonymous, as well as a signifier of the contested relation between political institutions and the anonymous components that make up “the masses.” Moreover, the meme, like the myth, is divorced from its construction, stated instead as indisputable fact. Just as Barthes’s saluting Black soldier does not offer up a viewpoint for debate, the Guy Fawkes mask does not make an argument; it asserts a cultural refusal to be oppressed.

Not only can we see that the myth and the meme share a semiotic structure, but I argue that the internet meme is the predominant and logical form of myth in an augmented society. I put forth four supports for this argument, all of which link the construction and spread of internet memes to the affordances of augmented reality: 1) internet memes are simultaneously digital and physical; 2) they are quickly spread; 3) they are often user-generated; and 4) they are easily adaptable.

In augmented society, we interact, communicate, create, and live life in inseparable physical and digital realms. Similarly, the internet meme is often rooted in a physical occurrence, spread digitally, and (re)enacted both physically and digitally. The performative memes described by David Banks, for example, are embodied practices, photographically recorded, digitally shared, and physically imitated.

 

Cyborgologist David Banks engaging in a performative meme for a previous Cyborgology post.

The spread of these memes is quite rapid. Less than 24 hours after police officers pepper-sprayed peaceful protesters on the UC Davis campus, images of the event—signifying both the actual occurrence and disproportionately violent actions taken by an oppressive regime—were distributed internationally through formal (e.g. news) and informal (e.g. Facebook) channels.

Moreover, memes are often user-generated, and so easily adaptable. Almost as quickly as the photographic image of the pepper-spraying police officer spread, it was modified to depict the officer spraying everything from the founding fathers, to a baby seal, to Yoda.

These images, texts, sayings and stories—digitally and physically rooted, widely and quickly dispersed, prosumed and adapted from the bottom up—spread and reinforce cultural sentiments and ideas. They paint the political landscape. They impact language, shape humor, and drive cultural connection and distinction. Internet memes, in short, are the myths of our time, afforded by the technologies of our time.

 

Facebook Inc. and researchers from the University of Milan recently released a study showing that Facebook users are linked by only 4.7 degrees of separation. This is a significant decrease from the 6 degrees of separation found in Milgram’s 1967 study, from which the common conception of our degree of networked connection (and the Kevin Bacon game) stems.
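For readers curious what a "degrees of separation" figure actually measures, here is a minimal sketch of the underlying idea: sample pairs of users in a friendship graph and average the shortest-path (friend-of-a-friend) distance between them. This is my own toy illustration using the networkx library and a synthetic small-world graph—it is not the Facebook dataset or the Milan team's methodology, which used far more sophisticated techniques at full scale.

```python
# Toy estimate of "degrees of separation" on a synthetic friendship graph.
# Assumes networkx is installed; the graph and parameters are illustrative only.
import random
import networkx as nx

def estimate_degrees_of_separation(graph, samples=1000, seed=42):
    """Estimate mean shortest-path length between randomly sampled node pairs."""
    rng = random.Random(seed)
    nodes = list(graph.nodes)
    distances = []
    for _ in range(samples):
        source, target = rng.sample(nodes, 2)
        try:
            # Shortest path on an unweighted graph (computed via BFS).
            distances.append(nx.shortest_path_length(graph, source, target))
        except nx.NetworkXNoPath:
            continue  # ignore pairs in disconnected components
    return sum(distances) / len(distances)

if __name__ == "__main__":
    # Stand-in for a social graph: a small-world network of 10,000 "users".
    toy_graph = nx.watts_strogatz_graph(n=10_000, k=10, p=0.1, seed=1)
    print(round(estimate_degrees_of_separation(toy_graph), 2))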

Here, I examine what these findings mean in terms of social relationships in the contemporary era.

These findings point to three main things. In the most basic sense, they show that Facebook is a highly pervasive and global platform through which interaction takes place. Relatedly, those who interact on Facebook connect to large and diverse networks. Finally, as we increasingly interact on a shared platform, with a wide and diverse group of others, the findings indicate that we are increasingly connected through weak ties. It is this last point that I will expand upon.

This increase in weak ties can be interpreted in many ways. With Zuckerberg’s stated goal of an open and connected society, the following excerpt from the report signifies nothing less than a victory for the Facebook team:

When considering even the most distant Facebook user in the Siberian tundra or the Peruvian rainforest, a friend of your friend probably knows a friend of their friend.

The importance of these connections should not be overlooked. Indeed, weak ties are an effective vehicle for the spread of different perspectives, news, information, and opportunities. Classic studies on social networks demonstrate that weak ties grant us access to tangible and intangible resources that exist outside the awareness of our immediate and close networks.

We should be careful, however, about taking this positive outlook too far. Although indirect connections do tie Facebook users around the globe, our direct networks, according to the study, are quite homogenous in terms of age and geography. In short, we interact with others similar to ourselves. Therefore, the notion of a globally connected population, sharing cultures, viewpoints, and information, is a bit unrealistically utopic.

Moreover, an increase in weak ties can be (and has been) a cause for social anxiety. As I addressed last week, people fear a loss of privacy and a thinning out of meaningful relationships in light of indiscriminate social network site connections.

These conflicting interpretations leave one with the difficult task of disentangling the costs from the benefits of weak tie connections, and unpacking what it all means when these connections are rooted in a digital platform.

If we take each interpretation individually, it may seem as though instrumental access to resources (a benefit of weak ties) is paid for with the dissipation of meaningful affective connections (a benefit of strong ties).  I argue instead, as I have previously, that weak and strong ties are not mutually exclusive. An increase in weak ties does not preclude the maintenance of strong ties.

The weak ties that connect us through Facebook do not constitute increasingly meaningless friendships, but avenues through which information and resources flow. The news stories, job opportunities, bands, movies, and YouTube videos that I learn about via Facebook will likely reach me through my direct network connections, but will be sourced via indirect network connections. I may never know the source of the information (e.g. a Friend’s cousin’s boyfriend’s brother living in France) but will have access to it nonetheless.  These indirect network connections allow information and resources to be quickly and vastly dispersed, but likely do not result in the formation of new relationships. Indeed, numerous social media studies show that we primarily utilize social network sites to maintain existing relationships, or to bolster budding relationships. We rarely use these sites as a means of meeting new people.

Overall then, weak tie connections provide an expanding path through which resources and information flow in and out of wide ranging network clusters. Each cluster, however, remains tightly knit. As stated in the report:

The Facebook social network is at once both global and local. It connects people who are far apart, but also has the dense local structure we see in small communities

Facebook is neither the utopic open forum, nor the dystopic isolator. It is, instead, an increasingly pervasive communication platform that augments the ways in which social networks operate.