Greenville College

Small towns move at the rate of horse and buggy rather than high-speed internet, and therefore tend to reside on the wrong side of the digital divide. However, digital divides are not fixed or homogeneous, and small towns can surprise you. This is made clear through the case of Greenville College.

Far from the glow of St. Louis is the small rural community of Greenville, Illinois. Greenville is a negligible town of 7,000. Most pass it on the interstate without even noticing, or use it as a place to go to the bathroom on the way from St. Louis toward Indianapolis. Amid its minuscule population is a small enclave of higher ed: Greenville College, a Christian liberal arts college founded in 1892. Greenville College was once on the unfortunate side of the digital divide, until, out of necessity, it surpassed its urban counterparts.

In the late ’90s, the trend in network infrastructure on college campuses was to wire dorms and buildings with broadband. However, infrastructure like that costs loads in installation and upkeep. Pulling wire across a campus and through old dorms is intensive and expensive work. The upkeep of this type of infrastructure is too costly for a small institution like Greenville, and the city certainly didn’t have much in place. So GC adapted. Rather than installing an expensive wired infrastructure and adding wireless access points later, they skipped the heavy infrastructure altogether and jumped right to wireless.

According to an article in the St. Louis Post-Dispatch, Greenville College was the first campus in the US to install wireless Internet.[1] In a brief conversation with the IT director, Paul Younker, he adds that Greenville was the first campus with high-speed (2 Mb) wireless internet running on a single T1 line as the backbone. In 1999, deploying a large enterprise-level wireless network was a big deal. 53 wireless APs scattered across campus changed the way GC used its campus and connected to the outside world, and made the school an ironic innovator in higher-ed tech. The decision to go wireless, according to Younker, was based on the cost of infrastructure. For the money they would have spent installing a port in every room, wireless could be installed throughout the entire campus.

This Greenville College case demonstrates that technological innovation doesn’t always happen in the most obvious places. If we were to follow common assumptions about the digital divide, we might expect the first wireless network on a college campus to appear somewhere with an already existing and strong network infrastructure. Certainly it would happen somewhere with multiple stoplights. However, there was no internet infrastructure in the dorms at GC and only minimal infrastructure elsewhere on campus. Even more, at the time wireless was installed at GC, the infrastructure of the entire town was rather questionable. This upends a purely hierarchical model of technology adoption. It wasn’t the most privileged who implemented a wireless network first; it was those who had relatively less privilege, access, and capital. It was those who had a need.

Yet, Greenville’s leap over the digital divide has not been entirely smooth. Younker recounts that the first SSID broadcast across campus was called “Moses.” The name had multiple meanings. It reflected the liberation that wireless internet brought (“Let my people go”) and also the speed of the network: checking your email was sometimes sort of like wandering through the desert for 40 years. It wasn’t until recently (2014) that Greenville College got a substantially better internet connection through the Illinois Century Network, an initiative designed to bring high-speed internet to educational institutions throughout rural Illinois. Before 2014, wireless internet, as nice as it is, was still bogged down by an abysmal infrastructure of T1 lines and small-town internet providers.

The digital divide in small towns is real, but not beyond negotiation. Greenville, being on the losing end of that divide, was required to innovate and think creatively with infrastructure. In that moment of innovation, we see the assumptions about small towns and their technology shaken a bit as this place with sparse infrastructure surpassed its privileged counterparts. Paying close attention to rural spaces gives a different and more varied picture of the way technology and culture function together, in sometimes surprising ways.

 

[1] “College in Illinois is the First to Deliver Internet Without Wires,” St. Louis Post-Dispatch, Vol. 129, No. 279, October 6, 1999 [article paywalled].

 

Matt Bernico (@The_Intermezzo) is a Ph.D. candidate at the European Graduate School in Saas-Fee, Switzerland. He also teaches at Greenville College in Greenville, Illinois. His research interests are media studies, speculative realism, and political theory.

Pic Via: Source

I’m not kidding, this is a VHS that you can buy right now from www.Tower.com, which I’m pretty sure is the current iteration of Tower Records.

In 1953, Hugh Hefner invited men between the ages of “18 and 80” to enjoy their journalism with a side of sex. It was Playboy’s inaugural issue, featuring Marilyn Monroe as the centerfold, and it launched an institution that reached behind drugstore counters, onto reality TV, and under dad’s mattresses. It was racy and cutting edge and ultimately, iconic. Posing for Playboy was a daring declaration of success among American actresses and the cause of suspension for a Baylor University student.[i] But edges move, and today, Playboy vestiges can be found on the Food Network.

In August, Playboy stopped showing nude images on their website. The New York Times reports that viewership subsequently increased from 4 million to 16 million. That’s fourfold growth! In what can only be described as good business sense, the company announced that in March, they will stop including nude women in their magazine as well. Putting clothes on appears surprisingly profitable.

In the NYT piece, Playboy CEO Scott Flanders explains the decision: “You’re now one click away from every sex act imaginable for free. And so it’s just passé at this juncture… That battle has been fought and won.”

Flanders is right. The Internet has changed the sex industry. Pornography has never been more accessible, nor has its consumption been more acceptable. Porn websites draw more visitors than Twitter, Netflix, and Amazon combined. My colleague who studies pornography consumption tells me that among college men, he and his team find about a 90% consumption rate. Today, people take their porn interactive, amateur, queer, multi-partied, penetrational, and however else they come to imagine. What this means for the evolution of sexuality is a complicated can of worms, but what’s clear is that set shots of airbrushed women, arms grasping bedposts, mouths partially open and eyes partially closed, are as outdated as Playboy’s most identifiable medium (the magazine).

So Playboy is going normcore. While normcore first referred to exceptionally unexceptional and gender-neutral clothing, it has broadened to mean a pushback against the fast-paced attention demands of an identity-saturated moment, largely facilitated by digital technologies that afford 24-hour news cycles, widespread content creators, and self-started brand initiatives. Normcore is resistance through normalcy—hardcore normalcy.

Playboy will replace the nude images with scantily clad ones. They’ll also return more attention to the journalistic portion of the publication (maybe people really will get it for the stories). An XXX market renders R-rated content definitively vanilla, and a move to PG-13 comparatively bold. As evidenced by the brand’s enormous website growth following their decision to put clothes on the models, subtle is the new edgy and “tasteful” is a market niche.

 

Follow Jenny Davis on Twitter @Jenny_L_Davis

 

[i] The incident resulted in a fraternity having to “write essays” as punishment for posing in the same issue fully clothed alongside women in bikinis. The female student who posed nude, however, was indeed suspended. Because, gender.


Before leaving his post as Australia’s Education Minister, Christopher Pyne approved a major restructuring of the public school curriculum. The new plan makes code and programming skills central. In a statement released by Australia’s Department of Education and Training at the end of September, Pyne laid out plans to disburse $12 million for:

  • The development of innovative mathematics curriculum resources.
  • Supporting the introduction of computer coding across different year levels.
  • Establishing a P-TECH-style school pilot site.
  • Funding summer schools for STEM students from underrepresented groups.

From grade 5, students will learn how to code. From grade 7, they will learn programming. What they will no longer be required to learn, however, is history and geography. The new plan replaces these heretofore core subjects with the technical skills of digital innovation.

This curricular refocus represents an important shift in the labor market, in which the means of production are increasingly digital and employment opportunities require a technical skill set. Indeed, tomorrow’s job seeker is well advised to learn how to code, and institutions of education are quickly tuning in to this. France started teaching computer science in elementary school and the UK similarly introduced code into its primary school program. In the U.S., New York Mayor Bill de Blasio announced plans to offer computer science in all of the city’s public schools within 10 years.

But about replacing history and geography…

This replacement fundamentally misunderstands the deeply social nature of programming and code. To treat technical skill as somehow separate from socio-historical knowledge is not only fallacious, but bodes poorly for the future that the curricular shift is intended to improve. Computer science is historical and geographic. Code is culturally rooted and inherently creative.

Of course it’s important to teach students the skills they need to be competitive in, and contribute to, the societies in which they live. Computer science is part of that. But teaching technical skill without social underpinnings is, truly, coding without a net. It creates technicians who are expert in the how without understanding the why, the what happened before, or the what could be.

It is far from novel to claim that code is not neutral or that computer programs contain the cultural fingerprints of those who make and use them. A bustling literature on big data and its human antecedents and consequences is evidence of this. And yet, assumptions of technological neutrality continue to shape policy and practice. For example, Jobaline’s voice analyzer algorithmically selects voices that are best suited for employment; Facebook users don’t know their feeds are filtered; politicians refute firearms regulation with the logic that “guns don’t kill people, people kill people”.

The next generation will be a generation of makers. We will do well to remember that at its base, making is a social process.

 

Follow Jenny Davis on Twitter @Jenny_L_Davis

Pic via: Source

At this point everyone is undoubtedly aware of the school shooting at Umpqua Community College in Oregon, though I am certain the method by which we came across the news varied. Some of us were notified of the event by friends with fingers closer to the pulse, and still more of us learned about it first-hand through a variety of content aggregation platforms, such as Reddit.

I would hazard that the majority of people learned about it first and foremost through social media, primarily Facebook or Twitter. I certainly did. A friend first told me about it through Facebook Messenger, and almost immediately after she did, I started to see articles trickling into my newsfeed in the background of my Messenger window. And the moment that happened, I shut my Facebook tab down, despite the fact that I compulsively, addictively, have it open almost all the time.

Facebook, taken as a whole, is a fantastic way for people to compare each other’s lives and share pictures of kittens and children, but when it comes to a tragedy, the platform is woefully inadequate at allowing its users to parse and process the gravity of events as they unfold. It is a thorough shock to a system that thrives on irreverent links and topical memes, and when faced with an item that requires genuine reflection and thought, it is often simpler – indeed, even more beneficial – for users to turn their heads until the kittens may resume.

This is no fault of the user. At the system level, this is the way Facebook operates. Users share personal updates, photos, or links, and other users may comment on or “like” them before sharing them within their own group of friends. Liking is often perceived as a method by which to gauge content. Is it worth your time? Judging by the sheer volume of users who liked it, perhaps it is.

But, as is inevitable, a system like that has a tendency to favor those aforementioned irreverent links. Is your meme good enough – nay, dank enough – to garner All of The Likes? Then congratulations: You have, in a sense, won Facebook. There is a strange sense of accomplishment in that feat that is difficult to convey in a way that does not invariably make us all sound embarrassingly shallow, but in the internet age, where fictitious and transient achievements are often held up side by side with those more tangible, it is a very important item to consider. Because Facebook rewards the user with likes, the content that garners the most likes, regardless of substance, is the one that results in reward.

And herein lies the rub. Facebook does not offer any way to weight or differentiate these links in a meaningful way, so all of your social media content has a tendency to just flow, like a deranged stream-of-consciousness. Try to imagine the scene from the 1971 classic Willy Wonka and the Chocolate Factory in which Gene Wilder takes a group of innocent civilians down a tunnel of terrors. At one point, they slowly travel down a river of candy and see a bright garden of colorful sweets, and in the next moment, they are treated to an ever-quickening barrage of lights and awful imagery, only to emerge on the other side to continue a fantastical tour of candy making delight. This is more or less the Facebook experience during any sort of national tragedy, and as you might surmise, it has a tendency to leave us all a little desensitized, confused, and hollow.

The fact is, when you read about ten people murdered and countless others injured in between a list of “12 Pugs Who Cannot Even Right Now” and a gallery of Disney Princesses dressing as though they play in an ‘80s hair metal tribute band, it quickly becomes jarring to have to openly and honestly consider the one piece of news that is irresponsible to ignore.

This is the true reason I jumped ship from Facebook: I felt I needed to find my information elsewhere. After experiencing multiple school shootings through the social media juggernaut, I was all too aware of the unhealthy way I would wind up consuming the news if I stayed where I was, biting my nails and refreshing my home page to click on link after link. Every article would possess a shock headline specifically designed to garner coveted clicks, and many would quickly deviate from the topic into politicized debate, piggy-backing off the horrific event in ways that would produce the greatest volume of interaction through either likes or comments. The fates of the victims and their families would inevitably be buried under the larger, more politicized issues that generate substantially more views. I hope you will all be understanding when I say that I am, at long last, a bit too squeamish for the process. Facebook wasn’t built for bad news.

To Facebook’s credit, they have clearly heard their users and are working to add some additional functionality to the platform to help people respond in a meaningful and appropriate way to news that does not, and should not, ever garner a “like.” News outlets have been quick to dub this upcoming functionality the “dislike button,” as this has been a common request by Facebook users from the moment the like button came into existence. However – and again, this is to Facebook’s credit – there will be no dislike button for the exact reasons you already know. Such a feature would only cause users to rampantly troll and bully other users with negativity, which goes completely counter to the sort of positive cyber utopia Mark Zuckerberg would like his community to be.


No, if the reports from Facebook are to be taken at face value, we are more likely to receive a button that gives us the ability to issue condolences or regards – something specific that shows a respectful acknowledgement of the news received, and notifies the poster that we are thinking about it.

And so, all our problems are solved, and in the future, no Facebook user will be burdened by the awkwardness of having to sit in silence when an acquaintance announces the passing of a family member, mouse arrow hovering between the comment bar and the “like” button, uncertain of how to proceed.

The truth is that “liking” something on Facebook is the absolute lowest common denominator of participation in the community, and any additional button meant to showcase a feeling of remorse would have no more impact than the tools already at our disposal. Does clicking “Sorry About That” at the news of a gun-related massacre actually indicate anything other than the consumption of the headline?

Likes, dislikes, regards, condolences – whatever new and supposedly groundbreaking interactive buttons Facebook chooses to unveil still fail to convey interaction in any meaningful way. If we are to assume that this problem is a wound, then the addition of new buttons is not even a band-aid that might cover it up. It is a team of strangers wandering in off the street to stare at the wound while shrugging their shoulders.

At the end of the day, Facebook’s issue with conveying tragedy has nothing to do with a lack of expressive tools. They have plenty of sad-face emojis. I can conjure up a cartoonish picture of a crying dog clutching a broken heart even as we speak. The problem is that these minimum-effort interactions are still sandwiched in between list articles of Harry Potter gifs and celebrity gossip. There is no weight; nothing to distinguish that this news about human suffering is any more or less important than the exciting comeback Tom Hardy gave in a recent interview preceding his latest blockbuster. When true tragedy occurs – personal, local, or even international – the consumption of that information is every bit as important as the interactions that follow it. An event like the school shooting at Umpqua demands more than just a modicum of reverence and attention. Would any of us be satisfied if we relayed the passing of a loved one – a personal loss – and that news was buried underneath recipes for cookie bars and this year’s swimwear trends?

If this all seems a bit high-horse and hypocritical (I am, after all, still a Facebook user wandering through every image of a baby hippopotamus I can find and compulsively liking them all), then allow me to assuage that anger slightly by letting you know that I am not in the business of blaming Facebook users for using a platform the way it was clearly designed to be used. If I accomplish anything with this piece, I hope it will be to challenge us all to consider the media through which we consume our information, and if there is some failing in those media, to look elsewhere or to request a change. I do not know if Facebook considers itself responsible in any way for the delivery of important news to its users, but given how many of us acquire information about current events through its platform, I think it’s important that the designers and employees of Facebook understand that the paradigm creates a responsibility, and it is currently unmet.

 
Bio:

H.L. Starnes is a writer, pop culture enthusiast, social media connoisseur, and owner of a very official looking certificate that labels him the Number One Dad, edging out all other fathers in a global ranking system.

 

Headline Pic Via: Source

Zuckerberg Pic Via: Source 


I am sick of talking about trigger warnings. I think a lot of people are. The last few months have seen heated debates and seemingly obligatory position statements streaming through RSS and social media feeds. I even found a piece that I had completely forgotten I wrote until I tried to save this document under the same name (“trigger warning”). Some of these pieces are deeply compelling, and the debates have been invaluable in bringing psychological vulnerability into public discourse and positioning mental health as a collective responsibility. But at this point, we’ve reached critical mass. Nobody is going to change anyone else’s mind. Trigger warning has reached buzzword status. So let’s stop talking about trigger warnings. Seriously. However, let’s absolutely not stop talking about what trigger warnings are meant to address: the way that content can set off intense emotional and psychological responses, and the importance of managing this in a context of constant data streams.

I’m going to creep out on a limb and assume we all agree that people who have experienced trauma should not have to endure further emotional hardship in the midst of a class session, nor while scrolling through their friends’ status updates. Avoiding such situations is an important task, one that trigger warnings take as their goal. Trigger warnings, however, are ill-equipped for the job.

Why Trigger Warnings Fall Short

Warning people of potential triggers is a great idea. But trigger warnings do way more than this. They warn of sensitive material, as per their primary function, but they also pick a fight.

Language is living. The meaning of a term is subject to the contexts in which people use it. The trigger debates have charged trigger warnings with a keen divisiveness. Trigger warnings not only warn, but also state that delivering warnings in this specifically explicit way is something writers, teachers, and speakers should do. Clearly, this is not a position with which everyone agrees, and claimants on both sides shade their arguments with a strong moral tint. Posting a trigger warning is therefore a political decision, one that tells a contingent of consumers to go screw.

It is perhaps tempting to shrug off concern for those audiences so vehemently against trigger warnings that the inclusion of one is taken as a personal affront. I implore you to resist the urge to shrug. Alienating audiences with opposing worldviews is exclusionary, unproductive, rude, and ultimately, unfortunate. It takes a potential conversation and turns it into a self-congratulatory monologue. This may be okay on a personal Facebook page, but less so in widespread public media and especially, classrooms.

So, imbued with the contentions of the trigger debates, trigger warnings do too much. Ironically, however, trigger warnings also don’t do enough.

The logic of trigger warnings is that trauma can be mitigated if content producers prepare consumers for the inclusion of sensitive material. This supposes that the producer can identify what’s sensitive and thereby determine what requires warning. That is, trigger warnings presume that we can predict each other’s trauma. Avoiding psychological harm then depends upon accurate prediction. It’s essentially a bet that hinges on mindreading. If we take seriously what trigger warnings are intended for, this is a pretty risky bet.

Of course there are topics that make their sensitivity known—sexual assault, intimate partner violence, images of war, etc. But lots of potentially sensitive topics aren’t so obvious. I have a student who tells me she feels anxious and angry whenever she steps into an ice cream parlor, as the smell of baking cones brings back terrible memories of a negative work experience. It produces discomfort rather than real psychological trauma for her, but what if her bad work experience went beyond an annoying boss and ice cream really did invoke a more serious reaction? Extrapolate this example to the myriad contexts in which people encounter jarring life events. A granular approach would provide trigger warnings for ever more topics, but this is a slippery slope that quickly becomes a losing battle. The content would be lost among the warnings, and potentially harmful content would still slip by.

So How Do We Write for an Audience With Whom We Don’t Necessarily Agree, While Caring for an Audience Whom We Can Never Entirely Know?

I suggest we do so with an orientation towards audience intentionality among content producers, content distributors, and platform designers. Let the audience intentionally decide what to consume and on what terms. Don’t make consumption compulsory.

Content producers and distributors include published authors, social media prosumers, and classroom teachers. These are the people who make the content and spread it around. For them, I offer a very simple suggestion: use informative titles, thoughtful subtitles, and precise keywords. David Banks mentions the title approach as one he’s taken in lieu of trigger warnings. It’s a simple and elegant response to the trauma problem. Rather than “trigger warned content,” the content is just accurately framed. The reader can prepare without being warned, per se. Clever titles are, well, clever, but leave the reader unprepared and vulnerable to surprise. I’ve used clever titles. I’m now going to stop. Check out the title of this piece. Nothing fancy, but you knew what you were in for when you clicked the link. Goal accomplished. Not to mention, using clear titles with precise keywords helps with search engine optimization, and in a flooded attention economy, that’s nothing to sneeze at.

To the platform designers: stop it already with the autoplay. Design platforms with the assumption that users do not want to consume everything that those in their networks produce. People are excellent curators. They will click the link if they want to consume. Give people the opportunity to click and the equal opportunity to scroll by. This is all the more effective if producers and distributors clearly label their content.

Trigger warnings are earnest in their purpose, but don’t hold up as a useful tool of social stewardship. People know themselves and given enough information, can make self-protective decisions quite effectively. Trigger warnings are a paternalistic and divisive alternative to handing over consumptive decisions in subtler and simpler ways. Perhaps the best way we can care for one another is by helping and trusting each person to care for hirself.

Jenny Davis is on Twitter @Jenny_L_Davis

Headline Pic: Source


I know this is a technology blog but today, let’s talk about science.

When I’m not theorizing digital media and technology, I moonlight as an experimental social psychologist. The Reproducibility Project, which ultimately finds that results from most psychological studies cannot be reproduced, has therefore weighed heavily on my mind (and featured prominently in over-excited conversations with my partner/at our dogs).

The Reproducibility Project is impressive in its size and scope. In collaboration with the authors of original studies and volunteer researchers numbering in the hundreds, project managers at the Open Science Framework replicated 100 psychological experiments from three prominent psychology journals. Employing “direct replications” in which protocols were recreated as closely as possible, the Reproducibility Project found that out of 100 studies, only 39 produced the same results. That means over 60% of published studies did not have their findings confirmed.

In a collegial manner, the researchers temper the implications of their findings by correctly explaining that each study is only one piece of evidence and that theories with strong predictive power require robust bodies of evidence. Therefore, failure to confirm is not necessarily a product of sloppy design, statistical manipulations, or dishonesty, but an example of science as an iterative process. The solution is more replication. Each study can act as its own data point in the larger scientific project of knowledge production. From the conclusion of the final study:

As much as we might wish it to be otherwise, a single study almost never provides definitive resolution for or against an effect and its explanation… Scientific progress is a cumulative process of uncertainty reduction that can only succeed if science itself remains the greatest skeptic of its explanatory claims.

This is an important point, and replication is certainly valuable for the reasons that the authors state. The point is particularly pertinent given an incentive structure that rewards new and innovative research and statistically significant findings far more than research that confirms what we know or reports null results.

However, in its meta-inventory of experimental psychology, the Reproducibility Project suffers from a fatal methodological flaw: its use of direct replications. This methodological decision, based upon accurate mimicry of the original experimental protocol, misunderstands what experiments do—test theories.  The Reproducibility Project replicated empirical conditions as closely as possible, while the original researchers treated empirical conditions as instances of theoretical variables.  Because it was incorrectly premised on empirical rather than theoretical conditions, the Reproducibility Project did not test what it set out to test.

Experiments are sometimes critiqued for their artificiality. This is a critique based in misunderstanding. Like you, experimentalists also don’t care how often college students agree with each other during a 30 minute debate, or how quickly they solve challenging puzzles. Instead, they care about things like how status affects cooperation and group dynamics, or how stereotypes affect performance on relevant tasks. That is, experimentalists care about theoretical relationships that pop up in all kinds of real life social situations. But studying these relationships is challenging.  The social world is highly complex and contains infinite contingencies, making theoretical variables difficult to isolate. The artificial environment of the lab helps researchers isolate their theoretical variables of interest. The empirical circumstances of a given experiment are created as instances of these theoretical variables. Those instances necessarily change across time, context, and population.

For example, diffuse status characteristics, a commonly used social psychological variable, are defined as: observable personal attributes that have two or more states that are differentially evaluated, where each state is culturally associated with general and specific performance expectations. Examples in the contemporary United States include race, gender, and physical attractiveness. In this example, we know that any of these may eventually cease to be diffuse status markers, hence the goal of social justice activism. Similarly, we could be sure that definitions of “physical attractiveness” will vary by population.

Experimentalists are meticulous (hopefully) in designing circumstances that instantiate their variables of interest, be they status, stereotypes, or, as in the case below, decision making.

One of the “failed replications” was from a study that originated at Florida State University. This study asked students to choose between housing units: small but close to campus, or larger but further away from campus. The purpose of the study was to test conditions that affect decision-making processes (in this case, sugar consumption). For FSU students, the housing choice was a difficult decision. At the University of Virginia, where the study was replicated, the decision was easy. While Florida State is a commuter school, UVA is not; living close to campus was therefore the only reasonable decision for the replication population. Unsurprisingly, the findings from Florida didn’t translate to Virginia. This is not because the original study was poorly designed, statistically massaged, or a fluke, but because in Florida, the housing choice was an instance of a “difficult choice” and in Virginia, it was not. Therefore, the theoretical variable of interest did not translate, and the replication study failed to replicate the theoretical test.

Experimentalists would not expect their empirical findings to replicate in new situations. They would, however, expect new instances of the theoretical variables to produce the same results, even though those instances might look very different.

Therefore, the primary concern of a true replication study is not empirical research design, but how that design represents social processes that persist outside of the laboratory. Of course, because culture shifts slowly, empirical replication is both useful and common in recreating theoretical conditions. However, a true replication is one that captures the spirit of the original study, not one that necessarily copies it directly. In contrast, the Reproducibility Project is actively atheoretical. Footnote 5 of their proposal summary states:

Note that the Reproducibility Project will not evaluate whether the original interpretation of the finding is correct. For example, if an eligible study had an apparent confound in the design, that confound would be retained in the replication attempt. Confirmation of theoretical interpretations is an independent consideration.

It is unfortunate that the Reproducibility Project contains such a fundamental design error, despite its laudable intentions. Not only because the project used a lot of resources, but also because it takes an important and valid point—we need more replication—and undermines it by arguing with poor evidence. The Reproducibility Project proposal concludes with a compelling statement:

Some may worry that discovering a low reproducibility rate will damage the image of psychology or science more generally.  It is certainly possible that opponents of science will use such a result to renew their calls to reduce funding for basic research.  However, we believe that there is a much worse alternative: having a low reproducibility rate, but failing to investigate and discover it.  If reproducibility is lower than acceptable, then we believe it is vitally important that we know about it in order to address it.  Self-critique, and the promise of self-correction, is why science is such an important part of humanity’s effort to understand nature and ourselves.

I wholeheartedly agree. We do need more replication, and with the move towards electronic publishing models, there is more space than ever for this kind of work. Let us be careful, however, that we conduct replications with the same scientific rigor that we expect of the studies’ original designers. And in the name of scientific rigor, let us be sure to understand, always, the connection between theory and design.

 

Jenny L. Davis is on Twitter @Jenny_L_Davis

Headline Image: Source


The American Sociological Association (ASA) annual meeting last week featured a plenary panel with an unusual speaker: comedian Aziz Ansari. Ansari just released a book that he co-wrote with sociologist Eric Klinenberg titled “Modern Romance.” The panel, by the same name, featured a psychologist working within the academy, a biological anthropologist working for Match.com, Christian Rudder from OkCupid, and of course, Ansari and Klinenberg. This was truly an inter/nondisciplinary panel striving for public engagement. I was excited and intrigued. The panel is archived here.

This panel seemingly had all of the elements that make for great public scholarship. Yet somehow, it felt empty, cheap, and at times offensive. Or as I appreciatively retweeted:

[Tweet screenshot]

My discomfort and disappointment with this panel got me thinking about how public scholarship should look. As a person who co-edits an academic(ish) blog, this concern is dear to me. It is also a key issue of contemporary intellectualism. It is increasingly easy to disseminate and find information. Publishing is no longer bound by slow and paywalled peer-review journals. Finally, we have an opportunity to talk, listen, share, and reflect on the ideas about which we are so passionate. But how do we do this well? I suggest two guiding rules: rigor and accommodation.

Be rigorous. Social science is like a superpower that lets you see what others take for granted and imagine alternate configurations of reality. Common sense comes under question and is often revealed as nonsensical. Public scholarship therefore maintains both the opportunity and responsibility to push boundaries and challenge predominant ways of thinking. The ASA panel missed this opportunity and in doing so, shirked their responsibility.

First of all, the panel, like Ansari and Klinenberg’s book, was titled “Modern Romance.” When drafting my Master’s thesis, the people supervising the work taught me that “modern” did not mean what I thought it meant. Modernism is a particular historical moment brought forth during the industrial revolution. Without going too far into it, scholars continue to debate if we have moved past modernism, and if so, what characterizes this new era, and in turn, what we should call it. Labeling the contemporary era “modern” is therefore an argument in and of itself, one that reveals a set of underlying assumptions that differ from those of postmodernism, poststructuralism, liquid modernity, etc. My thesis advisors told me to use “contemporary” instead. It means “now” and is a far less value-laden way of representing the current time period. I got no indication that the ASA panelists held strong to the theoretical underpinnings of modernism vis-à-vis other historical designations. Modernism, therefore, was misused, just as I once misused it in the first draft of my Master’s thesis. This seems like a nitpicky point, and admittedly it is, but it matters. Public engagement entails opening dialogue between those with different kinds and levels of intellectual capital. This means that discourse can operate at multiple levels. The public scholar can communicate something broad to the larger citizenry, while communicating a more nuanced point to insiders. Moreover, how scholars speak becomes a form of training. If we say modern, then the citizens engaged in discourse with us will also say modern, thereby cultivating imprecision and perhaps even generating confusion.

The second (and larger) issue was with the tenor of the panel as a whole. About halfway through, I shot off this tweet:

[Tweet screenshot]

These panelists had an opportunity to offer new ways of thinking about love, romance, and family. Instead, they maintained heterosexual, monogamous, procreative, marriage relationships at the center of their discussion. Leaving aside a few cringe-worthy statements from Ansari, the panel as a whole presumed that marriage was the ultimate goal for those using dating apps, even if users wished to employ them for casual hookups in the meanwhile. The biological anthropologist made evolutionary arguments about procreation, and concluded that changes in romantic connections represented “slow love” in which marriage was the “finale” rather than the beginning. In this vein, they all talked about increases in cohabitation through the lens of declining marriage rates, rather than a reconfiguration of kinship ties and life course trajectories. In an exciting historical moment of dynamic cultural change, the panelists’ take on romance was painfully linear.

Rather than rigorous, lazy language choices and linear heteronormative logic kept the panel safely inside mainstream ways of thinking.

The flip side of rigor is accommodation. To engage the public is not to mansplain things to them, but to offer the fruits of academic training in an accessible way while taking seriously the counterpoints, hesitations, and misunderstandings this may entail. Tangibly, this means intellectuals should use language that is as simple as possible while remaining precise; it means exercising patience when lay-publics espouse ideas or use language that seems outdated or even offensive; it means remaining open to viewpoints rooted in lived experience rather than scientific study or high theory; it means remaining flexible while maintaining intellectual integrity. As an audience, members of the crowd at ASA failed to strike this balance. Instead, it became a weird dichotomy between fanenbying[1] and hyper-pretentious pushback. As I noted earlier, the panelists were heteronormative to a fault. The panel itself was therefore something of an intellectual sacrifice, as were the wholesale endorsements coming from the crowd. Those who engaged the panel critically, however, often did so without accommodation. They censured panelists in the pretentious language of insiders complete with conference-tropes such as “troubled by,” “problematic,” and “this isn’t so much a question as it is a comment.”

This all came to a head when the first person to ask a question took about five minutes to use all of the conference tropes I just mentioned. Ansari replied: “It’s clear that you have some issues, and I also have an issue. You just said ‘this isn’t really a question it’s a comment,’ but you’re standing in the Q&A line!!” The crowd erupted. Ansari said something we have all wanted to say to long-winded commentators. He identified and called out a truly poor habit within the academy. However, the person whom Ansari shut down was making a valid point, only she did so in a way that was unaccommodating. Because of this, the cheering felt uncomfortable to me. The cheers invalidated the commentator’s points and in doing so, endorsed the panelists’ message, a message which, really, deserved a harsh critique.

I appreciate that ASA made the move towards public scholarship, and I appreciate that public scholarship is difficult. This is why I’m pushing them—pushing us—to think about how public scholarship can/should look in practice. A simple starting place is to engage with rigor and accommodation. Maintain intellectual standards while meeting publics where they are.

I’m interested to hear other people’s thoughts on the panel and/or public scholarship more generally.

 

Jenny Davis practices public scholarship regularly on Twitter @Jenny_L_Davis

[1] Google tells me fanenby is the gender neutral way to say “fangirl/fanboy” (enby for the NB of non-binary)

 

Disclaimer: Nothing I say in this post is new or theoretically novel. The story to which I’ll refer already peaked over the weekend, and what I have to say about it–that trolling is sometimes productive–is a point well made by many others (like on this blog last month by Nathan Jurgenson). But seriously, can we all please just take a moment and bask in appreciation of trolling at its best?

For those who missed it, Target recently announced that they would do away with gender designations for kids’ toys and bedding. The retailer’s move toward gender neutrality, unsurprisingly, drew ire from bigoted jerks who apparently fear that mixing dolls with trucks will hasten the unraveling of American society (if David Banks can give himself one more “calls it as I sees it” moment, I can too).

Sensing “comedy gold,” Mike Melgaard went to Target’s Facebook page. He quickly created a fake Facebook account under the name “Ask ForHelp,” with a red bullseye as the profile picture. Using this account to pose as the voice of Target’s customer service, he then proceeded to respond with sarcastic mockery to customer complaints. And hit gold, Mike did!! For 16 hilarious hours, transphobic commenters provided a rich well of comedic fodder. Ultimately, Facebook stopped the fun by removing Melgaard’s Ask ForHelp account. Although Target never officially endorsed Melgaard, they made their support clear in this Facebook post on Thursday evening:

[Screenshot of Target’s Facebook post]

While you enjoy a short selection of my personal favorite Ask ForHelp moments, keep in mind a larger point: trolling can be a good thing, and trolls can do important work. The act of trolling refers to intentionally disrupting discourse. Often, this is destructive and shuts dialogue down. Sometimes, though, trolling is productive and brings a dialogue to new depths, heights, and/or new directions. Melgaard’s Ask ForHelp account is a beautiful example of trolling gone wonderfully right. The troll managed to co-opt a corporate site (Facebook) for purposes of co-opting a corporate identity (Target) for purposes of discrediting those who espouse hate and endorse exclusionary policies/practices. And he was funny about it. THIS is how you troll.

[Screenshots of Melgaard’s Ask ForHelp replies]

Jenny is on Twitter @Jenny_L_Davis

African Burial Ground Monument with Office Building Reflection.

Towards the beginning of Italo Calvino’s novel Invisible Cities, Marco Polo sits with Kublai Khan and tries to describe the city of Zaira. To do this, Marco Polo could trace the city as it exists in that moment, noting its geometries and materials. But, such a description “would be the same as telling (Kublai) nothing.” Marco Polo explains, “The city does not consist of this, but of relationships between the measurements of its space and the events of its past: the height of a lamppost and the distance from the ground of a hanged usurper’s swaying feet.” This same city exists by a different name in Teju Cole’s novel, Open City. Its protagonist, Julius, wanders through New York City, mapping his world in terms reminiscent of Marco Polo’s. One day, Julius happens upon the African Burial Ground National Monument. Here, in the heart of downtown Manhattan, Julius measures the distance between his place and the events of its past: “It was here, on the outskirts of the city at the time, north of Wall Street and so outside civilization as it was then defined, that blacks were allowed to bury their dead. Then the dead returned when, in 1991, construction of a building on Broadway and Duane brought human remains to the surface.” The lamppost and the hanged usurper, the federal buildings and the buried enslaved: these are the relationships, obscured and rarely recoverable though they are, on which our cities stand.

One morning early this spring, I stood at the intersection of Broadway and Duane and faced the African Burial Ground National Monument. It is, as Julius describes it, little more than a “patch of grass” with a “curious shape… set into the middle of it.” Inside the neighboring office building, though, is a visitor center with its own federal security guards, gift shop, history of the burial site and its rediscovery, and narrative of Africans in America. The tower in which it sits, named the Ted Weiss Federal Building, was completed in 1994, three years after the intact burials were discovered. (The “echo across centuries” Julius hears at the site of course fell on deaf developers’ ears.) In the lobby between the visitor center and the monument, employees of the Internal Revenue Service shuffle over the sacred ground as Barbara Chase-Riboud’s sculpture Africa Rising looks on, one face to the West, another to the East.

Africa Rising, photo from Library of Congress

I visited the African Burial Ground National Monument during a spring break trip for a course called Exploring Ground Zeros. We spent much of our class time during the semester visiting sites of trauma near our university in St. Louis, trying to uncover the webs of historical and contemporary claims that determine their meaning. In East St. Louis, we tracked the (now invisible) paths of the 1917 race riot/massacre. In St. Louis, we walked through the urban forest of Pruitt-Igoe, stared aghast at the Confederate Memorial in Forest Park, and visited the Ferguson QuikTrip, the now (in)famous epicenter of the Mike Brown protests in which many of us continue to take part. In New York, we did the same, moving from the Triangle Shirtwaist Factory to 23 Wall Street to Ground Zero to the African Burial Ground National Monument. At each site, we analyzed architecture, commemorations, and official literature, and wrote field notes, trying to measure the city by its relationships so that we might later recreate it in our assignments.

We also took pictures, and a few of us uploaded ours to Instagram. After visiting the African Burial Ground and the old Triangle Shirtwaist Factory (now an NYU building), I turned to Instagram, not only to preserve and catalogue my photographic evidence, but also to find more. Were these sites worthy of selfies and faux nostalgic filters? What hashtags blessed the posts? By exploring others’ photos, I hoped to learn more about how people engage with place, and how the sites in their photos exist in contemporary cultural memory. After all, though critics have focused on what Instagram and the selfie culture it enables say about our relationship to our social world and ourselves, Instagram reveals just as much about our relationships with place. In every selfie, there’s a background, a beautiful bathroom or sunset immortalized.

And so, I tagged my photos with the location at which they were taken, and looked through publicly posted photos under the same name. Quickly, though, I ran into a problem. As I mentioned, the African Burial Ground National Monument shares the same coordinates as the Ted Weiss Federal Building. Instagram, however, must recognize them as distinct locations. On its website, Instagram defines location as the “location name” added by the uploader. Simple enough. In the next paragraph, though, “place” replaces “location.” So, in the world of Instagram, “location” is “location name” is “place” is “place name”; it is also never plural. What’s more, these definitions form a curatorial practice. On Instagram, photos are organized by location name, and the newest update offers a “search places” option. This means the reality of social place, that of competing claims and layered meanings, physical or otherwise, cannot be found. Each place is neatly and securely tied to its single tag. This is no doubt an issue of convenience and loyalty to the wisdom “you can’t be in two places at once,” but the result is undeniable: on Instagram, the Ted Weiss Building has never heard of the Burial Ground.


What Instagram does offer is the possibility to “create a place,” a feature that most honestly reflects our everyday experience of place. Technically speaking, creating a place means attaching a manually entered location name to the coordinates where the photo was taken. Conceptually, though, this action has greater significance. Defining place, or place-making, anthropologist Keith Basso tells us, is “a common response to common curiosities… As roundly ubiquitous as it is seemingly unremarkable, place-making is a universal tool of the historical imagination.” Instagram does act as a tool for place-making, then, but its singular definitions prevent it from acting as a site from which to honestly make place. This is because it severely, unnecessarily limits the possibilities of the historical imagination. By equating “place” to “location name” and organizing photographs according to that name, Instagram creates a world in which places and locations are only as old as their names.

There are two more apparent obstacles in the way of Instagram’s historical imagination: oversaturation and amateurism. In the same essay, Keith Basso offers a compelling account of how historical imagination works to make place:

When accorded attention at all, places are perceived in terms of their outward aspects—as being, on their manifest surfaces, the familiar places they are—and unless something happens to dislodge these perceptions they are left, as it were, to their own enduring devices. But then something does happen. Perhaps one spots a freshly fallen tree, or a bit of flaking paint, or a house where none has stood before—any disturbance, large or small, that inscribes the passage of time—and a place presents itself as bearing on prior events. And at that precise moment, when ordinary perceptions begin to loosen their hold, a border has been crossed and the country starts to change. Awareness has shifted its footing, and the character of the place, now transfigured by thoughts of an earlier day, swiftly takes on a new and foreign look.

In photographic terms, this disturbance sounds exactly like Roland Barthes’ punctum, a term he coined in Camera Lucida: Reflections on Photography to describe a significant and hidden detail in emotionally moving photographs. There he describes the punctum as “that accident (in the photograph) which pricks me (but also bruises me, is poignant to me).” Like Basso’s fallen tree, it “rises from the scene, shoots out of it like an arrow, and pierces.” It also shifts his awareness, transfiguring reality across time and space. Noticing the dirt road in a photograph by Kertesz, Barthes “recognize(s), with my whole body, the straggling villages I passed through on my long-ago travels in Hungary and Rumania.” But are punctums possible on Instagram?

Of course, our contemporary relationship with photographs on Instagram is quite different from Barthes’. In a post called “From Memory Scarcity to Memory Abundance,” Michael Sacasas asks, “What if Barthes had not a few, but hundreds or even thousands of images of his mother?” The answer, naturally, is a dramatic decrease in meaning. Sacasas writes, “Gigabytes and terabytes of digital memories will not make us care more about those memories, they will make us care less.” Surely, it seems unreasonable to expect Instagram users to view its photos with the same attention Barthes paid the photographs in Camera Lucida. And yet, all is not lost. Perhaps Barthes’ punctum and Basso’s disturbance are merely displaced from an individual photograph to the application itself.

Imagine uploading a picture of your biology class at NYU and finding that hundreds have taken photos of the same room, but from the street, where garment workers jumped to their deaths when your lab room, formerly the Triangle Shirtwaist Factory, was on fire. Even if you haven’t visited the site, looking at photographs from both perspectives might inspire the same prick the dirt road caused in Barthes. Describing that prick, Barthes writes parenthetically, “Here, the photograph really transcends itself: is this not the sole proof of its art? To annihilate itself as medium, to be no longer a sign but the thing itself?” An Instagram grid showing photos of the Ted Weiss Federal Building and the African Burial Ground National Monument simultaneously could, for some, be the thing itself. “The thing”, in this case, is not the texture of the ground or a traveller’s weariness, but the unexpected terror and sadness of a city showing its true foundations.

This combined grid’s disturbance is not only destructive, destabilizing conceptions of place, but creative. Indeed, according to Basso, “Building and sharing place-worlds… is not only a means of reviving the former times but also of revising them, a means of exploring not merely how things might have been but also how, just possibly, they might have been different from what others have supposed.” This function is particularly important today, as communities worldwide question the names they call their spaces. In the past few months, UNC Chapel Hill changed the name of Saunders Hall to Carolina Hall (Saunders was apparently a KKK leader); Aldermen in St. Louis are trying to change the name of Confederate Drive, which cuts through Forest Park, a 1,300-acre public park; the student government at Rhodes University in South Africa voted in March to rename the university. How will Instagram incorporate contentious and changing place names? Today, there isn’t yet a geotag for Carolina Hall, but when there is, its photos will be kept separate from those taken at Saunders Hall. So, like the story of 290 Broadway, the UNC name change will be preserved and hidden. Students at UNC will create a new place, and before long it will have more photographs than the old. The archive of photos taken at Saunders Hall will continue to exist, static, and visible only to those who remember its previous name and implied owner.


Instagram makes room for these revisions, but not for places to present themselves “as bearing on prior events.” Allowing places to converse with their pasts—by allowing multiple geotags or associating geotags with time period(s) to display layers of names and meanings—would transform Instagram from a platform to share into a platform to converse and learn. Here, places would exist free of Instagram’s current architectural constraints, as palimpsests waiting to “inscribe the passage of time” and activate the historical imagination.
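To make the proposal concrete, here is a minimal sketch of what such a layered geotag might look like as a data model. This is purely illustrative: the class names, fields, coordinates, and dates below are my own assumptions, not Instagram’s actual schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PlaceLayer:
    """One name a piece of ground has carried, and when that name applied."""
    name: str
    start_year: int
    end_year: Optional[int] = None  # None means the name is still in use

@dataclass
class Location:
    """A single coordinate holding many layered place names: a palimpsest."""
    lat: float
    lon: float
    layers: List[PlaceLayer] = field(default_factory=list)

    def names(self) -> List[str]:
        """Every name this ground has carried, oldest first."""
        return [layer.name for layer in sorted(self.layers, key=lambda l: l.start_year)]

# 290 Broadway as a palimpsest (coordinates and years are illustrative).
corner = Location(lat=40.7146, lon=-74.0047)
corner.layers.append(PlaceLayer("African Burial Ground", start_year=1630, end_year=1795))
corner.layers.append(PlaceLayer("Ted Weiss Federal Building", start_year=1994))

# A photo geotagged at this coordinate would surface under both names:
print(corner.names())  # ['African Burial Ground', 'Ted Weiss Federal Building']
```

Under a model like this, a search for either name would land on the same ground, and the resulting grid could show the burial ground and the federal building at once.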


 

Jonathan Karp (@statusfro) is a recent graduate from Washington University in St. Louis, where he studied English literature and music.

Headline Pic via: Source

Autonomous Intelligence

The International Joint Conference on Artificial Intelligence (IJCAI15) met last week in Buenos Aires. AI has long captured the public imagination, and researchers are making fast advances. Conferences like IJCAI15 have high stakes, as “smart” machines become increasingly integrated into personal and public life. Because of these high stakes, it is important that we remain precise in our discourse and thought surrounding these technologies. In an effort towards precision, I offer a simple, corrective point: intelligence is never artificial.

Machine intelligence, like human intelligence, is quite real. That machine intelligence processes through hardware and code has no bearing on its authenticity. The machine does not pretend to learn, it really learns, it actually gets smarter, and it does so through interactions with the environment. Ya know, just like people.

In the case of AI, artificiality implicitly refers to that which is inorganic; an intelligence born of human design rather than within the human body. Machine intelligence is built, crafted, curated by human hands. But is human intelligence not? Like Ex Machina’s Ava and HUMANS’ Anita/Mia, we fleshy cyborgs would be lost, perhaps dead, certainly unintelligent, without the keen guidance of culture and community. And yet, human intelligence remains the unmarked category while machine intelligence is qualified as “fake.”

To distinguish human from machine intelligence through the criteria of artificiality is to misunderstand the very basis of the human mind.

Humans have poor instincts. Far from natural, even our earliest search for food (i.e., the breast) is often fraught with hardship, confusion, and many times, failure. Let alone learning to love, empathize, write, and calculate numbers. As anthropologists and neuroscientists have shown and continue to show, the mind is a process, and this process requires training. Like machines, humans have to learn how to learn. Like machines, the human brain is manufactured. Like machines, the human brain learns with the cultural logic within which it develops.

Variations in intelligence do not, therefore, hinge on artificiality. Instead, they hinge on vessel and autonomy.

The two ideal-type vessels are human and machine; one soft, wet, and vulnerable to disease, the other materially variable and vulnerable to malware. Think Jennings and Rutter vs. Watson on Jeopardy. In practice, these vessels can—and do—overlap. Ritalin affects human learning just as software upgrades affect machine learning; humans can plant chips in their bodies, learning with both organic matter and silicon; robots can feel soft, wet, and look convincingly fleshy. Both humans and machines can originate in labs. However, without going entirely down the philosophical and scientific rabbit hole of the human/machine Venn diagram, humans and machines certainly maintain distinct material makeups, existential meanings, and ontologies. One way to differentiate between human and machine intelligence is therefore quite simple: human intelligence resides in humans while machine intelligence resides in machines. Sometimes, humans and machines combine.

Both human and machine intelligence vary in their autonomy. While some humans are “free thinkers,” others remain “close-minded.” Some socio-historical conditions afford broad lines of thought and action, while others relegate human potential to narrow and linear tracks. So too, when talking about machine intelligence and its advances, what people are really referring to is the extent to which the machine can think, learn, and do on its own. For example, IJCAI15 opened with a now-viral letter from the Future of Life Institute in which AI and robotics researchers warn against the weaponization/militarization of artificial intelligence. In it, the authors explicitly differentiate between autonomous weapons and drones. The latter are under ultimate human control, while the former are not. Instead, an autonomous weapon acts of its own accord. Machine intelligence is therefore not more or less artificial, but more or less autonomous.

I should point out that ultimate autonomy is not possible for either humans or machines. Machines are coded with a logic that limits the horizon of possibilities, just as humans are socialized with implicit and explicit logical boundaries[1]. Autonomy therefore operates upon a continuum, with only machines able to achieve absolute zero autonomy, and neither human nor machine able to achieve total autonomy. In short, an autonomous machine approximates human levels of free thought and action, but neither human nor machine operate with boundless cognitive freedom.

Summarily: ‘Artificial Intelligence’ is not artificial. It is machine-based and autonomous.

 

Follow Jenny Davis on Twitter @Jenny_L_Davis

 

Pic via: Source

[1] I want to thank James Chouinard for this point.