FBI director James B. Comey’s recent comment that police scrutiny has led to an uptick in violence is a villainization of #BlackLivesMatter activists. I rerun this piece as a response to Comey’s position.

 

I am an invisible man. No, I am not a spook like those who haunted Edgar Allan Poe; nor am I one of your Hollywood-movie ectoplasms. I am a man of substance, of flesh and bone, fiber and liquids — and I might even be said to possess a mind. I am invisible, understand, simply because people refuse to see me. Like the bodiless heads you see sometimes in circus sideshows, it is as though I have been surrounded by mirrors of hard, distorting glass. When they approach me they see only my surroundings, themselves, or figments of their imagination — indeed, everything and anything except me…It is sometimes advantageous to be unseen, although it is most often rather wearing on the nerves. Then too, you’re constantly being bumped against by those of poor vision…It’s when you feel like this that, out of resentment, you begin to bump people back. And, let me confess, you feel that way most of the time. You ache with the need to convince yourself that you do exist in the real world, that you’re a part of all the sound and anguish, and you strike out with your fists, you curse and you swear to make them recognize you. And, alas, it’s seldom successful… ~Ralph Ellison (1952), Invisible Man

In what follows, I argue that the Black Lives Matter movement is a hacker group, glitching the social program in ways that disrupt white supremacy with glimpses of race consciousness. It is a group that combats black Americans’ invisibility; that “bumps back” until, finally, they are recognized. As Ellison continues:

Invisibility, let me explain, gives one a slightly different sense of time, you’re never quite on the beat. Sometimes you’re ahead and sometimes behind. Instead of the swift and imperceptible flowing of time, you are aware of its nodes, those points where time stands still or from which it leaps ahead. And you slip into the breaks and look around.

The Black Lives Matter movement brings us, forcefully, into the “breaks,” and invites us to look around, too.

To hack is to find and exploit the weaknesses in a system. Once found, hackers can gain access to what’s inside, and, if desired, change the programming. The Black Lives Matter movement is working to accomplish the latter. They expose racism among America’s most established institutions, and then disrupt the fabric of everyday life to bring these weaknesses to the attention of the masses. This disruption or “glitch” that activists—especially activists of color—present is, in many cases, simply themselves. They are black bodies taking up space; black bodies making demands; black bodies resisting invisibility.

Earlier this year, Black Lives Matter activists took over Baltimore, sitting peacefully, marching the streets, and, alternately, breaking windows and setting things on fire in protest of the deadly police brutality inflicted upon Freddie Gray. Police deployed tanks. Officials closed schools. Businesses were unable to operate. Glitch: Look at us.

In Ferguson earlier this week, Black Lives Matter activists blocked the entrance to the St. Louis Federal Courthouse and traffic on a major highway in protest and remembrance of Michael Brown, the unarmed black teen killed by a white police officer one year ago. The city declared a state of emergency and arrested close to 60 protesters, including high-profile activists like philosopher Cornel West. Glitch: We are still here.

The Invisible Man Himself, Ralph Ellison

In Seattle last week, two black women activists stormed the stage at the Social Security Works rally in Westlake Park, preventing white presidential candidate Bernie Sanders from speaking. Lamenting Sanders’ failure to address contemporary racial issues, the women were booed by the crowd but refused to give the microphone back. They invited Sanders to respond to their criticisms. He declined. Following the event, Black Lives Matter Seattle released a press statement in which they proclaim: “…we honor Black lives lost by doing the unthinkable, the unapologetic, and the unrespectable.”

The choice of Sanders as a target is of particular relevance. Sanders is a self-described ally with a strong record of civil rights activism. In fact, just hours after his failed attempt to speak at Westlake, Sanders addressed a crowd of 15,000 at the University of Washington, calling for an end to institutional racism and reform of the criminal justice system. In contrast, Donald Trump claims there will be no “black presidents for a while” following what he considers a botched job by Barack Obama, and Ben Carson believes we needn’t think of race because he knows deep down that brains, not skin, make us who we are.

Bernie isn’t perfect, but he’s far better than the rest. And that’s just it. His work, his almost anti-racist position, his good intentions and barely missed marks make him the lowest common denominator within the existing political system. This is a system that puts black lives alongside a suite of issues—environment, economy, tax policy, military funding. This is a system that hides race issues amongst the crowded tabs of candidates’ official web pages. The Black Lives Matter movement rejects this model. Instead, it insists that in this moment, Black Lives take center stage. Anywhere but the center is unacceptable. No more hiding in plain sight. Glitch: We are taking over the platform.

Because of this insistence upon centrality, Black Lives Matter refuses to be Anonymous. They do not disrupt the system quietly. The hack is their presence. The hack is their voices. The hack is their faces. It’s not about discourse or even policy, but an insistence upon visibility; a refusal to remain unseen.

Like any good systems maintenance crew, however, the U.S. social system has workers diligently laboring to quiet the glitches, to restore the program, to punish the hackers and reinstate their invisibility. In Ferguson last year, these workers made up the grand jury that chose not to indict police officer Darren Wilson, the man who killed Michael Brown. This week, the workers are the “Oath Keepers,” made up of five white men with weapons, patrolling the streets of Ferguson to maintain “order” and “peace.” In the media, they are the news stations that label protesters “rioters” and highlight the destruction of property while marginalizing the historical and systemic destruction of black lives. It is Bernie Sanders, who pouts at his lost stage time rather than stepping aside to graciously acknowledge that this moment is not for him.

But the Black Lives Matter hack is powerful in its persistence. The system has been weakened by cameras on cops, fires in the streets, citizens demanding answers, and feet stomping on the ground, day after day, month after month. And because of this persistence, it is a hack that the system can only fight for so long. Each protest-induced glimpse makes invisibility more difficult to restore. At some point, we will have all seen too much, even those who try to close their eyes. This war of glitches creates a tumultuous moment, but provides the code with which to write an alternative future.

Jenny Davis is on Twitter @Jenny_L_Davis

Headline Pic via: Source

hyperauthenticity

Authenticity is a tricky animal, and social media complicate the matter. Authenticity is that which seems natural, uncalculated, indifferent to external forces or popular opinion. This sits in tension with the performativity of everyday life, in which people follow social scripts and social decorum, strive to be likeable—or at least interesting—and constantly negotiate the expectations of ever expanding networks. The problem of performance is therefore to pull it off as though unperformed. The nature of social media, with its built-in pauses and editing tools, throws the semblance of authenticity into a harsh light. Hence, the widespread social panics about a society whose inhabitants are disconnected from each other and disconnected from their “true selves.”

For political campaigns, the problem of authenticity is especially sharp. Politicians are brands, but brands that have to make themselves relatable on a very human level. This involves intense engagement with all forms of available media, from phone calls, to newspaper ads and editorials, to talk show appearances, television interviews and now, a social media presence. The addition of social media, along with the larger culture of personalization it has helped usher in, means that political performances must include regular backstage access. Media consumers expect politicians to be celebrities, expect celebrities to be reality stars, and expect reality stars to approximate ordinary people, but with an extra dab of panache. The authentic politician, then, must be untouchable and accessible, exquisite and mundane, polished yet unrehearsed. Over the last couple of elections, social media has been the primary platform for political authenticity. Candidates give voters access to themselves as humans—not just candidates—but work to do so in a way that makes them optimally electable. It’s a lot of work to be so meticulously authentic.

This is why political authenticity requires robust PR teams. Political campaigns are hyperperformative, making the politician’s image of authenticity spectacularly calculated. The Clinton campaign includes marketing experts from Coca-Cola and the advertising agency GSD&M; Jeb Bush’s team includes a full media staff, including a communications director, press secretary, and head of media relations; Bernie Sanders, whose brand is arguably the “un-brand,” has the firm Revolution Messaging behind his social media image. Interestingly (but not surprisingly), I had to dig for this information in ways I didn’t have to with other candidates. When your brand is the un-brand, your team quickly deletes information about PR on your Wikipedia page. And Donald Trump, in an ironic and oddly brilliant move, maintains authenticity by owning his brand status. Perhaps this strategy was suggested by his ever-present media handler, Hope Hicks.

Importantly, political hyperauthenticity relies upon a compliant audience. The performativity of political campaigns is an open secret, balancing between cynical recognition and practical denial. We know the performance for what it is, but allow the performance to go off as though spontaneous. Indeed, we insist upon it. This is what sociologist Erving Goffman calls “tact” and it’s a practice that, for good reason, pervades everyday life. Tact facilitates smooth interaction and helps us fumbling social actors avoid embarrassment. It’s how we manage to carry on, even after someone trips, farts, misspeaks, or leaves spinach in their teeth. The maintenance of political theater requires audiences to dig deep into their reservoirs of civil inattention. Political theater requires hypertact.

The masterful craft of political campaigns is common knowledge. We all realize that Bernie Sanders entered last week’s debate with “sick and tired of hearing about your damn emails!” ready-made in his quote bag. Likewise, we didn’t expect Rand Paul’s 24-hour livestream to contain illegal, immoral, or even revealing content. We expected it to contain coffees, bad jokes, small talk, and mussed hair. We expected it to be boring with spurts of amusement, and the Paul team delivered. The most dramatic moment was Paul’s reference to the “dumbass livestream,” which quickly became a t-shirt slogan and press release from his team. “Look at how authentic Rand Paul is!! He said an unpolished thing!!”

However, despite extensive teams and apparent audience complicity, candidates’ social media use sometimes allows nuggets of ill-crafted content to seep through, and well-crafted resistance to break in. People didn’t find it cute when Clinton asked them to tweet emojis about student debt. People did find it funny, however, to watch Ted Cruz botch his first attempted response to Obama’s State of the Union address. And the people running Carlyfiorina.org took advantage of an unregistered domain name to highlight the number of workers laid off at HP under Fiorina’s leadership. Social media is therefore a tool of the powerful, bolstered by citizens’ tact, but it is also a tool that contains unique vulnerabilities. The opportunity to screw up is ever available, and when it happens, it doesn’t go away. It can loop on Vine, spread through retweets, and become a meme. In candidates’ digitally mediated quest for hyperauthenticity, social media can also, despite itself, occasionally pull back the curtain. And when the curtain pulls back, we pounce, lest the sham of the entirety be revealed.

Jenny Davis is on Twitter @Jenny_L_Davis

Pic via: Source

Greenville College

Small towns move at the rate of horse and buggy rather than high-speed internet, and therefore tend to reside on the wrong side of the digital divide. However, digital divides are not fixed or homogeneous, and small towns can surprise you. This is made clear through the case of Greenville College.

Far from the glow of St. Louis is the small rural community of Greenville, Illinois. Greenville is a negligible town of 7,000. Most pass it on the interstate without even noticing, or use it as a place to go to the bathroom on the way from St. Louis toward Indianapolis. Amidst its minuscule population is a small enclave of higher ed: Greenville College. Greenville College, founded in 1892, is a small Christian liberal arts college. It was once on the unfortunate side of the digital divide, until, out of necessity, it surpassed its urban counterparts.

In the late ’90s, the trend in network infrastructure on college campuses was to wire dorms and buildings with broadband. However, infrastructure like that costs loads in installation and upkeep. Pulling wire across a campus and through old dorms is intensive and expensive work. The upkeep of this type of infrastructure is too costly for a small institution like Greenville, and the city certainly didn’t have much in place. So GC adapted. Rather than installing an expensive wired infrastructure and then adding wireless access points later, the school skipped the heavy infrastructure altogether and jumped right to wireless.

According to an article in the St. Louis Post Dispatch, Greenville College was the first campus in the US to install wireless Internet.[1] In a brief conversation with the IT director, Paul Younker, he adds that Greenville was the first campus with high-speed (2 Mbps) wireless internet with a single T1 line as the backbone. In 1999, deploying a large enterprise-level wireless network was a big deal. Fifty-three wireless APs scattered across campus changed the way GC used its campus and connected to the outside world, and made the school an ironic innovator in higher-ed tech. The decision to go wireless, according to Younker, was based on the cost of infrastructure. For the money they would spend installing a port in every room, wireless could be installed throughout the entire campus.

This Greenville College case demonstrates that technological innovation doesn’t always happen in the most obvious places. If we were to follow common assumptions about the digital divide, we might assume that the first wireless network on a college campus would appear somewhere with an already existing and strong network infrastructure. Certainly it would happen somewhere with multiple stoplights. However, there was no internet infrastructure in the dorms at GC and only minimal infrastructure elsewhere on campus. Even more, at the time when wireless was installed at GC, the infrastructure of the entire town was rather questionable. This complicates a purely hierarchical model of technology adoption. It wasn’t the most privileged who implemented a wireless network first; it was those who had relatively less privilege, access, and capital. It was those who had a need.

Yet Greenville’s leap over the digital divide has not been entirely smooth. Younker recounts that the first SSID broadcast across campus was called “Moses.” The name had multiple meanings. It reflected the liberation that wireless internet brought (“Let my people go”) and also the speed of the network: checking your email was sometimes sort of like wandering through the desert for 40 years. It wasn’t until recently (2014) that Greenville College got a substantially better internet connection through the Illinois Century Network, an initiative designed to bring high-speed internet to educational institutions throughout rural Illinois. Before 2014, wireless internet, as nice as that is, was still bogged down by an abysmal infrastructure of T1 lines and small-town internet providers.

The digital divide in small towns is real, but not beyond negotiation. Greenville, being on the losing end of that divide, was required to innovate and think creatively with infrastructure. In that moment of innovation, we see the assumptions about small towns and their technology shaken a bit as this place with sparse infrastructure surpassed its privileged counterparts. Paying close attention to rural spaces gives a different and more varied picture of the way technology and culture function together, in sometimes surprising ways.

 

[1] “College in Illinois is the First to Deliver Internet Without Wires” (Vol 129 Number 279). St. Louis Post Dispatch. October 6, 1999 [article paywalled].

 

Matt Bernico (@The_Intermezzo) is a Ph.D. candidate at the European Graduate School in Saas-Fee, Switzerland. He also teaches at Greenville College in Greenville, Illinois. His research interests are media studies, speculative realism, and political theory.

Pic Via: Source

I’m not kidding, this is a VHS that you can buy right now from www.Tower.com, which I’m pretty sure is the current iteration of Tower Records.

In 1953, Hugh Hefner invited men between the ages of “18 and 80” to enjoy their journalism with a side of sex. It was Playboy’s inaugural issue, featuring Marilyn Monroe as the centerfold, and it launched an institution that reached behind drugstore counters, onto reality TV, and under dad’s mattresses. It was racy and cutting edge and ultimately, iconic. Posing for Playboy was a daring declaration of success among American actresses and the cause of suspension for a Baylor University student [i]. But edges move, and today, Playboy vestiges can be found on the Food Network.

In August, Playboy stopped showing nude images on their website. The New York Times reports that viewership subsequently increased from 4 million to 16 million. That’s fourfold growth! In what can only be described as good business sense, the company announced that in March, they will stop including nude women in their magazine as well. Putting clothes on appears surprisingly profitable.

In the NYT piece, Playboy CEO Scott Flanders explains the decision: “You’re now one click away from every sex act imaginable for free. And so it’s just passé at this juncture… That battle has been fought and won.”

Flanders is right. The Internet has changed the sex industry. Pornography has never been more accessible, nor has its consumption been more acceptable. Porn websites draw more visitors than Twitter, Netflix, and Amazon combined. My colleague who studies pornography consumption tells me that among college men, he and his team find about a 90% consumption rate. Today, people take their porn interactive, amateur, queer, multi-partied, penetrational, and however else they come to imagine. What this means for the evolution of sexuality is a complicated can of worms, but what’s clear is that set shots of airbrushed women, arms grasping bedposts, mouths partially open and eyes partially closed, are as outdated as Playboy’s most identifiable medium (the magazine).

So Playboy is going normcore. While normcore first referred to exceptionally unexceptional and gender-neutral clothing, it has broadened to mean a pushback against the fast-paced attention demands of an identity-saturated moment, largely facilitated by digital technologies that afford 24-hour news cycles, widespread content creators, and self-started brand initiatives. Normcore is resistance through normalcy—hardcore normalcy.

Playboy will replace the nude images with scantily clad ones. They’ll also return more attention to the journalistic portion of the publication (maybe people really will get it for the stories). A XXX market renders R-rated content definitively vanilla, and a move to PG-13 comparatively bold. As evidenced by the brand’s enormous website growth following their decision to put clothes on the models, subtle is the new edgy and “tasteful” is a market niche.

 

Follow Jenny Davis on Twitter @Jenny_L_Davis

 

[i] The incident resulted in a fraternity having to “write essays” as punishment for posing in the same issue fully clothed alongside women in bikinis. The female student who posed nude, however, was indeed suspended. Because, gender.


Before leaving his post as Australia’s Education Minister, Christopher Pyne approved a major restructuring of the public school curriculum. The new plan makes code and programming skills central. In a statement released by Australia’s Department of Education and Training at the end of September, Pyne laid out plans to disburse $12 million for:

  • The development of innovative mathematics curriculum resources.
  • Supporting the introduction of computer coding across different year levels.
  • Establishing a P-TECH-style school pilot site.
  • Funding summer schools for STEM students from underrepresented groups.

From grade 5, students will learn how to code. From grade 7, they will learn programming. What they will no longer be required to learn, however, is history and geography. The new plan replaces these heretofore core subjects with the technical skills of digital innovation.

This curricular refocus represents an important shift in the labor market, in which the means of production are increasingly digital and employment opportunities require a technical skill set. Indeed, tomorrow’s job seeker is well advised to learn how to code and institutions of education are quickly tuning in to this. France started teaching computer science in elementary school and the UK similarly introduced code into their primary school program. In the U.S., New York Mayor Bill de Blasio announced plans to offer computer science in all of the city’s public schools within 10 years.

But about replacing history and geography…

This replacement fundamentally misunderstands the deeply social nature of programming and code. To treat technical skill as somehow separate from socio-historical knowledge is not only fallacious, but bodes poorly for the future that the curricular shift is intended to improve. Computer science is historical and geographic. Code is culturally rooted and inherently creative.

Of course it’s important to teach students the skills they need to be competitive in, and contribute to, the societies in which they live. Computer science is part of that. But teaching technical skill without social underpinnings is truly coding without a net. It creates technicians who are expert in the how without understanding the why, the what happened before, or the what could be.

It is far from novel to claim that code is not neutral or that computer programs contain the cultural fingerprints of those who make and use them. A bustling literature on big data and its human antecedents and consequences is evidence of this. And yet, assumptions of technological neutrality continue to shape policy and practice. For example, Jobaline’s voice analyzer algorithmically selects voices that are best suited for employment; Facebook users don’t know their feeds are filtered; politicians resist firearms regulation with the logic that “guns don’t kill people, people kill people.”

The next generation will be a generation of makers. We will do well to remember that at its base, making is a social process.

 

Follow Jenny Davis on Twitter @Jenny_L_Davis

Pic via: Source

At this point everyone is undoubtedly aware of the school shooting at Umpqua Community College in Oregon, though I am certain the method by which we came across the news varied. Some of us were notified of the event by friends with fingers closer to the pulse, and still more of us learned about it first-hand through a variety of content aggregation platforms, such as Reddit.

I would hazard that the majority of people learned about it first and foremost through social media, primarily Facebook or Twitter. I certainly did. A friend first told me about it through Facebook Messenger, and almost immediately after she did, I started to see articles trickling into my newsfeed in the background of my Messenger window. And the moment that happened, I shut my Facebook tab down, despite the fact that I compulsively, addictively, have it open almost all the time.

Facebook, when taken as a whole, is a fantastic way for people to compare each other’s lives and share pictures of kittens and children, but when it comes to a tragedy, the platform is woefully inadequate at allowing its users to parse and process the gravity of events as they unfold. It is a thorough shock to a system that thrives on irreverent links and topical memes, and when faced with an item that requires genuine reflection and thought, it is often simpler – indeed, even more beneficial – for users to turn their heads until the kittens may resume.

This is no fault of the user. At the system level, this is the way Facebook operates. Users share personal updates, photos, or links, and other users may comment on or “like” them before sharing them within their own group of friends. Liking is often perceived as a way to gauge content: Is it worth your time? Judging by the sheer volume of users who liked it, perhaps it is.

But, as is inevitable, a system like that has a tendency to favor those aforementioned irreverent links. Is your meme good enough – nay, dank enough – to garner All of The Likes? Then congratulations: You have, in a sense, won Facebook. There is a strange sense of accomplishment in that feat that is difficult to convey in a way that does not invariably make us all sound embarrassingly shallow, but in the internet age, where fictitious and transient achievements are often held up side by side with those more tangible, it is a very important item to consider. Because Facebook rewards the user with likes. The content that garners the most likes, regardless of substance, is the one that results in reward.

And herein lies the rub. Facebook does not offer any way to weight or differentiate these links in a meaningful way, so all of your social media content has a tendency to just flow, like a deranged stream-of-consciousness. Try to imagine the scene from the 1971 classic Willy Wonka and the Chocolate Factory in which Gene Wilder takes a group of innocent civilians down a tunnel of terrors. At one point, they slowly travel down a river of candy and see a bright garden of colorful sweets, and in the next moment, they are treated to an ever-quickening barrage of lights and awful imagery, only to emerge on the other side to continue a fantastical tour of candy making delight. This is more or less the Facebook experience during any sort of national tragedy, and as you might surmise, it has a tendency to leave us all a little desensitized, confused, and hollow.

The fact is, when you read about ten people murdered and countless others injured in between a list of “12 Pugs Who Cannot Even Right Now” and a gallery of Disney Princesses dressing as though they play in an ‘80s hair metal tribute band, it quickly becomes jarring to have to openly and honestly consider the one piece of news that is irresponsible to ignore.

This is the true reason I jumped ship from Facebook: I felt I needed to find my information elsewhere. After experiencing multiple school shootings through the social media juggernaut, I was all too aware of the unhealthy way I would wind up consuming the news if I stayed where I was, biting my nails and refreshing my home page to click on link after link. Every article would possess a shock headline specifically designed to garner coveted clicks, and many would quickly deviate off the topic into politicized debate, piggy-backing off the horrific event in ways that would produce the greatest volume of interaction through either likes or comments. The fates of the victims and their families would inevitably be buried under the larger, more politicized issues that generate substantially more views. I hope you will all be understanding when I say that I am, at long last, a bit too squeamish for the process. Facebook wasn’t built for bad news.

To Facebook’s credit, they have clearly heard their users and are working to add some additional functionality to the platform to help people respond in a meaningful and appropriate way to news that does not, and should not, ever garner a “like.” News outlets have been quick to dub this upcoming functionality the “dislike button,” as this has been a common request by Facebook users from the moment the like button came into existence. However – and again, this is to Facebook’s credit – there will be no dislike button for the exact reasons you already know. Such a feature would only cause users to rampantly troll and bully other users with negativity, which goes completely counter to the sort of positive cyber utopia Mark Zuckerberg would like his community to be.


No, if the reports from Facebook are to be taken at face value, we are more likely to receive a button that gives us the ability to issue condolences or regards – something specific that shows a respectful acknowledgement of the news received, and notifies the poster that we are thinking about it.

And so, all our problems are solved, and in the future, no Facebook user will be burdened by the awkwardness of having to sit in silence when an acquaintance announces the passing of a family member, mouse arrow hovering between the comment bar and the “like” button, uncertain of how to proceed.

The truth is that “liking” something on Facebook is the absolute lowest common denominator of participation in the community, and any additional button meant to showcase a feeling of remorse would have no more impact than the tools already at our disposal. Does clicking “Sorry About That” at the news of a gun-related massacre actually indicate anything other than the consumption of the headline?

Likes, dislikes, regards, condolences – whatever new and supposedly groundbreaking interactive buttons Facebook chooses to unveil still fail to convey interaction in any meaningful way. If we are to assume that this problem is a wound, then the addition of new buttons is not even a band-aid that might cover it up. It is a team of strangers wandering in off the street to stare at the wound while shrugging their shoulders.

At the end of the day, Facebook’s issue with conveying tragedy has nothing to do with a lack of expressive tools. They have plenty of sad-face emojis. I can conjure up a cartoonish picture of a crying dog clutching a broken heart even as we speak. The problem is that these minimum-effort interactions are still sandwiched in between list articles of Harry Potter gifs and celebrity gossip. There is no weight; nothing to distinguish that this news about human suffering is any more or less important than the exciting comeback Tom Hardy gave in a recent interview preceding his latest blockbuster. When true tragedy occurs – personal, local, or even international – the consumption of that information is every bit as important as the interactions that follow it. An event like the school shooting at Umpqua demands more than just a modicum of reverence and attention. Would any of us be satisfied if we relayed the passing of a loved one – a personal loss – and that news was buried underneath recipes for cookie bars and this year’s swimwear trends?

If this all seems a bit high-horse and hypocritical (I am, after all, still a Facebook user wandering through every image of a baby hippopotamus I can find and compulsively liking them all), then allow me to assuage that anger slightly by letting you know that I am not in the business of blaming Facebook users for using a platform the way it was clearly designed to be used. If I accomplish anything with this piece, I hope it will be to challenge us all to consider the media through which we consume our information, and if there is some failing in those media, to look elsewhere or to request a change. I do not know if Facebook considers itself responsible in any way for the delivery of important news to its users, but given how many of us acquire information about current events through its platform, I think it's important that the designers and employees of Facebook understand that the paradigm creates a responsibility, and that responsibility currently goes unmet.

 
Bio:

H.L. Starnes is a writer, pop culture enthusiast, social media connoisseur, and owner of a very official looking certificate that labels him the Number One Dad, edging out all other fathers in a global ranking system.

 


Trigger

I am sick of talking about trigger warnings. I think a lot of people are. The last few months have seen heated debates and seemingly obligatory position statements streaming through RSS and social media feeds.  I even found a piece that I had completely forgotten I wrote until I tried to save this document under the same name (“trigger warning”). Some of these pieces are deeply compelling and the debates have been invaluable in bringing psychological vulnerability into public discourse and positioning mental health as a collective responsibility. But at this point, we’ve reached a critical mass. Nobody is going to change anyone else’s mind. Trigger warning has reached buzzword status. So let’s stop talking about trigger warnings. Seriously. However, let’s absolutely not stop talking about what trigger warnings are meant to address: the way that content can set off intense emotional and psychological responses and the importance of managing this in a context of constant data streams.

I’m going to creep out on a limb and assume we all agree that people who have experienced trauma should not have to endure further emotional hardship in the midst of a class session, nor while scrolling through their friends’ status updates. Avoiding such situations is an important task, one that trigger warnings take as their goal. Trigger warnings, however, are ill-equipped for the job.

Why Trigger Warnings Fall Short

Warning people of potential triggers is a great idea. But trigger warnings do far more than this. They warn of sensitive material, as per their primary function, but they also pick a fight.

Language is living. The meaning of a term is subject to the contexts in which people use it. The trigger debates have charged trigger warnings with a keen divisiveness. Trigger warnings not only warn, but also state that delivering warnings in this specifically explicit way is something writers, teachers, and speakers should do. Clearly, this is not a position with which everyone agrees, and claimants on both sides shade their arguments with a strong moral tint. Posting a trigger warning is therefore a political decision, one that tells a contingent of consumers to go screw.

It is perhaps tempting to shrug off concern for those audiences so vehemently against trigger warnings that the inclusion of one is taken as a personal affront. I implore you to resist the urge to shrug. Alienating audiences with opposing worldviews is exclusionary, unproductive, rude, and ultimately, unfortunate. It takes a potential conversation and turns it into a self-congratulatory monologue. This may be okay on a personal Facebook page, but less so in widespread public media and especially, classrooms.

So, imbued with the contentions of the trigger debates, trigger warnings do too much.  Ironically, however, trigger warnings also don’t do enough.

The logic of trigger warnings is that trauma can be mitigated if content producers prepare consumers for the inclusion of sensitive material. This supposes that the producer can identify what’s sensitive and thereby determine what requires warning. That is, trigger warnings presume that we can predict each other’s trauma. Avoiding psychological harm then depends upon accurate prediction. It’s essentially a bet that hinges on mindreading. If we take seriously what trigger warnings are intended for, this is a pretty risky bet.

Of course there are topics that make their sensitivity known—sexual assault, intimate partner violence, images of war etc. But lots of potentially sensitive topics aren’t so obvious. I have a student who tells me she feels anxious and angry whenever she steps into an ice cream parlor as the smell of baking cones brings back terrible memories of a negative work experience. It produces discomfort rather than real psychological trauma for her, but what if her bad work experience went beyond an annoying boss and ice cream really did invoke a more serious reaction? Extrapolate this example to the myriad contexts in which people encounter jarring life events. A granular approach would provide trigger warnings for increasingly more topics, but this is a slippery slope that quickly becomes a losing battle. The content would be lost among the warnings, and potentially harmful content would still slip by.

So How Do We Write for an Audience With Whom We Don’t Necessarily Agree, While Caring for an Audience Who We Can Never Entirely Know?

 I suggest we do so with an orientation towards audience intentionality among content producers, content distributors, and platform designers. Let the audience intentionally decide what to consume and on what terms. Don’t make consumption compulsory.

Content producers and distributors include published authors, social media prosumers, and classroom teachers. These are the people who make the content and spread it around. For them, I offer a very simple suggestion: use informative titles, thoughtful subtitles, and precise keywords. David Banks mentions the title approach as one he’s taken in lieu of trigger warnings. It’s a simple and elegant response to the trauma problem. Rather than “trigger warned” content, the content is just accurately framed. The reader can prepare without being warned, per se. Clever titles are, well, clever, but leave the reader unprepared and vulnerable to surprise. I’ve used clever titles. I’m now going to stop. Check out the title of this piece. Nothing fancy, but you knew what you were in for when you clicked the link. Goal accomplished. Not to mention, clear titles with precise keywords help with search engine optimization, and in a flooded attention economy, that’s nothing to sneeze at.

To the platform designers: stop it already with the autoplay. Design platforms with the assumption that users do not want to consume everything that those in their networks produce. People are excellent curators. They will click the link if they want to consume. Give people the opportunity to click and the equal opportunity to scroll by. This is all the more effective if producers and distributors clearly label their content.

Trigger warnings are earnest in their purpose, but don’t hold up as a useful tool of social stewardship. People know themselves and given enough information, can make self-protective decisions quite effectively. Trigger warnings are a paternalistic and divisive alternative to handing over consumptive decisions in subtler and simpler ways. Perhaps the best way we can care for one another is by helping and trusting each person to care for hirself.

Jenny Davis is on Twitter @Jenny_L_Davis



I know this is a technology blog but today, let’s talk about science.

When I’m not theorizing digital media and technology, I moonlight as an experimental social psychologist. The Reproducibility Project, which ultimately finds that results from most psychological studies cannot be reproduced, has therefore weighed heavily on my mind (and featured prominently in overexcited conversations with my partner/at our dogs).

The Reproducibility Project is impressive in its size and scope. In collaboration with the authors of original studies and volunteer researchers numbering in the hundreds, project managers at the Open Science Framework replicated 100 psychological experiments from three prominent psychology journals. Employing “direct replications,” in which protocols were recreated as closely as possible, the Reproducibility Project found that out of 100 studies, only 39 produced the same results. That means over 60% of the replicated studies did not have their findings confirmed.

In a collegial manner, the researchers temper the implications of their findings by correctly explaining that each study is only one piece of evidence and that theories with strong predictive power require robust bodies of evidence. Therefore, failure to confirm is not necessarily a product of sloppy design, statistical manipulation, or dishonesty, but an example of science as an iterative process. The solution is more replication. Each study can act as its own data point in the larger scientific project of knowledge production. From the conclusion of the final study:

As much as we might wish it to be otherwise, a single study almost never provides definitive resolution for or against an effect and its explanation… Scientific progress is a cumulative process of uncertainty reduction that can only succeed if science itself remains the greatest skeptic of its explanatory claims.

This is an important point, and replication is certainly valuable for the reasons that the authors state. The point is particularly pertinent given an incentive structure that rewards new and innovative research and statistically significant findings far more than research that confirms what we know or concludes with null hypotheses.

However, in its meta-inventory of experimental psychology, the Reproducibility Project suffers from a fatal methodological flaw: its use of direct replications. This methodological decision, based upon accurate mimicry of the original experimental protocol, misunderstands what experiments do—test theories.  The Reproducibility Project replicated empirical conditions as closely as possible, while the original researchers treated empirical conditions as instances of theoretical variables.  Because it was incorrectly premised on empirical rather than theoretical conditions, the Reproducibility Project did not test what it set out to test.

Experiments are sometimes critiqued for their artificiality. This is a critique based in misunderstanding. Like you, experimentalists don’t actually care how often college students agree with each other during a 30-minute debate, or how quickly they solve challenging puzzles. Instead, they care about things like how status affects cooperation and group dynamics, or how stereotypes affect performance on relevant tasks. That is, experimentalists care about theoretical relationships that pop up in all kinds of real-life social situations. But studying these relationships is challenging. The social world is highly complex and contains infinite contingencies, making theoretical variables difficult to isolate. The artificial environment of the lab helps researchers isolate their theoretical variables of interest. The empirical circumstances of a given experiment are created as instances of these theoretical variables. Those instances necessarily change across time, context, and population.

For example, diffuse status characteristics, a commonly used social psychological variable, are defined as: observable personal attributes that have two or more states that are differentially evaluated, where each state is culturally associated with general and specific performance expectations. Examples in the contemporary United States include race, gender, and physical attractiveness. In this example, we know that any of these may eventually cease to be diffuse status markers, hence the goal of social justice activism. Similarly, we can be sure that definitions of “physical attractiveness” will vary by population.

Experimentalists are meticulous (hopefully) in designing circumstances that instantiate their variables of interest, be they status, stereotypes, or, as in the case below, decision making.

One of the “failed replications” was of a study that originated at Florida State University. This study asked students to choose between housing units: small but close to campus, or larger but further from campus. The purpose of the study was to test conditions that affect decision-making processes (in this case, sugar consumption). For FSU students, the housing choice was a difficult decision. At the University of Virginia, where the study was replicated, the decision was easy. Florida State is a commuter school and UVA is not, so living close to campus was the only reasonable decision for the replication population. Unsurprisingly, the findings from Florida didn’t translate to Virginia. This is not because the original study was poorly designed, statistically massaged, or a fluke, but because in Florida the housing choice was an instance of a “difficult choice,” and in Virginia it was not. The theoretical variable of interest therefore did not translate, and the replication study failed to replicate the theoretical test.

Experimentalists would not expect their empirical findings to replicate in new situations. They would, however, expect new instances of the theoretical variables to produce the same results. Those instances, however, might look very different.

Therefore, the primary concern of a true replication study is not the empirical research design itself, but how that design represents social processes that persist outside of the laboratory. Of course, because culture shifts slowly, empirical replication is both useful and common in recreating theoretical conditions. However, a true replication is one that captures the spirit of the original study, not one that necessarily copies it directly. In contrast, the Reproducibility Project is actively atheoretical. Footnote 5 of their proposal summary states:

Note that the Reproducibility Project will not evaluate whether the original interpretation of the finding is correct. For example, if an eligible study had an apparent confound in the design, that confound would be retained in the replication attempt. Confirmation of theoretical interpretations is an independent consideration.

It is unfortunate that the Reproducibility Project contains such a fundamental design error, despite its laudable intentions. Not only because the project used a lot of resources, but also because it takes an important and valid point—we need more replication—and undermines it by arguing with poor evidence. The Reproducibility Project proposal concludes with a compelling statement:

Some may worry that discovering a low reproducibility rate will damage the image of psychology or science more generally.  It is certainly possible that opponents of science will use such a result to renew their calls to reduce funding for basic research.  However, we believe that there is a much worse alternative: having a low reproducibility rate, but failing to investigate and discover it.  If reproducibility is lower than acceptable, then we believe it is vitally important that we know about it in order to address it.  Self-critique, and the promise of self-correction, is why science is such an important part of humanity’s effort to understand nature and ourselves.

I wholeheartedly agree. We do need more replication, and with the move towards electronic publishing models, there is more space than ever for this kind of work. Let us be careful, however, that we conduct replications with the same scientific rigor that we expect of the studies’ original designers. And in the name of scientific rigor, let us be sure to understand, always, the connection between theory and design.

 

Jenny L. Davis is on Twitter @Jenny_L_Davis

