I confess. I am a Googlephile. Right now on my desktop, I have Gmail, Google Reader, Google Docs and Google Calendar open in separate tabs on my Chrome browser.

I know that every keystroke entered into Google is saved and stored. For now, it’s all rather innocuous: mostly work e-mails, calendar entries for kids’ parties and dentist appointments, etc. Rather than worry about this, I’m willingly participating in Google’s effort to learn even more about me. I have an Android phone that tracks my whereabouts and lets me check e-mail, RSS feeds, my calendar, and more.

But link what Google knows about me to what Google knows about you and what it seeks to know about the world, and you have a massive project. As Daniel Soar points out in the London Review of Books, Google’s efforts at rolling out new ways to create data are producing an increasingly smart, more intuitive, perhaps essential, information behemoth:

Google is getting cleverer precisely because it is so big. If it’s cut down to size then what will happen to everything it knows? That’s the conundrum. It’s clearly wrong for all the information in all the world’s books to be in the sole possession of a single company. It’s clearly not ideal that only one company in the world can, with increasing accuracy, translate text between 506 different pairs of languages. On the other hand, if Google doesn’t do these things, who will?

The broader question about Google is whether private surveillance is inherently less nefarious and intrusive than state-based public surveillance. After all, Google doesn’t have an army. In addition, Google still needs to respond to customer demands. Last year, Google acquiesced to the German public’s privacy concerns by allowing users to “opt out” their home addresses from its Street View application.

The bigger issue comes from governments seeking access to Google’s repository of data. The public and the private are then in danger of becoming blurred. Google makes its interactions with government agencies public via its transparency report. But what happens when the state, with its monopoly on force, wants access to Google’s data?

The New York Times Bits blog invited a number of readers to “untether” themselves from technology for a period of time and to create a video of the experience. Reactions to this mini-exercise ran the gamut:

For Jenn Monroe, 40, giving up the Internet and phone led to a desire to purge other technologies from her life.

“I didn’t want to open my computer at all, even though that wasn’t part of the deal,” she said. “I avoided the microwave, which was also sort of strange and surprising to me.”

But for many, finding the right balance can be hard. James Cornell, 18, spent his day away from his cellphone feeling jittery, and he worried that he was annoying people by not responding to them. John Stark, 46, told his friends that he wouldn’t be responding to text messages, expecting them to call him on the phone if they needed to communicate. They sent text messages to his wife instead, asking her to relay information to him.

I know I have to make it a point to turn the computer off when I’m with my six-year-old. The instant gratification of a tweet or an e-mail is hard to resist. But then again, so are television, food, a good novel, smoking, etc. The need to distract ourselves from our daily lives does not begin and end with the Internet. The distraction might be more visceral online, but couldn’t we say the same thing about radio, print, phonographs, and the rest? I worry that this “Google is making us stupid” meme, popularized by Nick Carr’s Atlantic article, is producing a whole set of articles and books that don’t really advance our understanding of the effect of technology on our lives. Imagine an article called “Is Alcohol Making Me Drunk?” or “Is Food Making Me Fat?” You couldn’t. It’s more complicated than that. The point isn’t that the medium has no effect on humans; it’s that those effects are nuanced and contextual.

Marshall McLuhan Way, Downtown Toronto, ON, Canada

Crossposed on Rhizomicomm

McLuhan Way is just down the street from me, so perhaps it’s my inspiration.  I remember reading Marshall McLuhan’s Understanding Media over 14 years ago in a seminar on the Internet.  The hot/cool media continuum perplexed many of us, and some say technology has rendered the concept obsolete.  In terms of hot/cool, where does the Internet stand?

  • Hot media are high-definition.  Media that fully engage one sense of the audience member:: print {visual}, radio {sound}, film {visual}, & the photograph {visual}.
  • Cool media are low-definition.  Media that require more active participation from the audience member to interpret::  television {visual, with limitations in the 1960s}, the telephone {sound of relatively poor quality in the 1960s}, and comic strips {cheaply reproduced mass entertainment}.  The video game is a hyperreal construct, where the audience/player must fill in the gaps of this representation of the real.

Reading is engaging with hot media, and it is a solitary experience.  Reading, contrasted with speech, forces an isolating consciousness, perhaps one overly immersed in the individual.

How does Web 2.0 fit into all of this?  Well, new technologies trend towards the hot.  The iPod engages us, bathes us in a bubble of sound of our choosing.  What about this paradox?  New technologies are higher-definition, engaging us more and more, but also allowing us to be interactive with others {social media}.  Moreover, there is convergence of the technologies.  The smartphone {MP3 player, telephony, Internet web surfing} is a stunning example of multisensory engagement that also allows us to communicate and share with others.

What happened?  Is the singularity of media, where all media are converging, making it all lukewarm?  The continuum is shrinking to a singular point, as in the multimedia experiences of the smartphone.  Has technology sped up our communications so that time appears to have folded in upon itself?  We read text or see a video and can now immediately respond to others.  We read a tweet on Twitter and immediately respond to it.

So, bear with me as I think out loud here.  Let’s assume that media are approaching singularity.  As you go up the cone, technologies converge and the user is collapsing hot/cold, engaging both simultaneously.

McLuhan Conic:: Rough ideas for understanding trajectories for social media. ~Kambara

Let’s assume that at the circular base of the cone, along the diameter is the continuum from hot to cold.  Perpendicular to that diameter is another continuum, the institutional semistructures, rigid {controlling} versus chaotic {open}.  The base would have 4 quadrants, each with prototypical examples::

  1. Hot & Rigid- Old “big media” {print, radio, film, etc.}
  2. Hot & Chaotic- Engaging content in unstructured/uncontrolled  databases
  3. Cool & Rigid- Newsgroups
  4. Cool & Chaotic- Synchronous unmoderated chat
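The four quadrants above can be sketched as a toy classifier. This is purely my own invented encoding of the cone’s base, assuming two axes scaled from -1.0 to 1.0; the scales and origin handling are not McLuhan’s:

```python
# Toy encoding of the conic's base: two perpendicular continua,
# heat (cool -1.0 .. hot 1.0) and structure (chaotic -1.0 .. rigid 1.0).
# Axis scales and labels are invented for illustration.

def quadrant(heat, structure):
    """Classify a medium into one of the four quadrants (or the origin)."""
    if heat == 0 and structure == 0:
        return "Origin: lukewarm & semi-structured"
    temp = "Hot" if heat > 0 else "Cool"
    form = "Rigid" if structure > 0 else "Chaotic"
    return f"{temp} & {form}"

print(quadrant(0.8, 0.9))    # → Hot & Rigid (old "big media")
print(quadrant(-0.6, -0.4))  # → Cool & Chaotic (unmoderated chat)
```

Placing a platform is then just a matter of arguing for its coordinates, which is where the interesting sociological work would be.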

The origin will be “lukewarm” and semi-structured.  The origin is somewhat of a normative assumption.  Individual user experiences may vary and may not even be contiguous.  I know I need to refine these ideas and construct a better diagram.  Nevertheless, I think this concept might be valuable in thinking about how people’s use of technologies is likely to evolve.  Where would you put the following::

  • Facebook {social networking site}
  • Twitter {microblogging}
  • YouTube {video filesharing}
  • Hulu {long-form professional videos}
  • Google {all things data}

Where are they moving towards -or- how could they better provide value?  Of course, despite McLuhan being gone for quite a while, I half-expect this to happen to me::

Twitterversion:: Can #MarshallMcLuhan ‘s hot/cold continuum inform #socialmedia? #sociology #web2.0 @Prof_K

Song:: “Suspect Device” Ted Leo & the Pharmacists-lyrics

Here’s one I’ll put away in my ever-expanding “future research project file.”


Recently, Google began selling location-specific advertising on its search pages. Nancy Scola at TechPresident blogs on what she calls “ambient advertising”: the use of location-specific ads by the AFL-CIO in the debate over the Employee Free Choice Act.

Google Ads now has a location-targeting option, allowing advertisers to either drop a pin on a map or type in an address, and then set a radius within which their ads run. (Google doesn’t set a minimum circle of influence, but suggests drilling in no closer than 20 miles.)
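The mechanics behind that pin-and-radius option amount to a great-circle distance check. Here is a minimal sketch using the standard haversine formula; the coordinates are approximate and invented for illustration, and this is of course not Google’s actual implementation:

```python
# Sketch of radius-based ad targeting: serve the ad only if the user's
# location falls within the advertiser's chosen radius of the pin.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MILES = 3959.0

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in miles."""
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

def in_target_radius(pin, user, radius_miles):
    return haversine_miles(*pin, *user) <= radius_miles

# Pin dropped on Portland, ME, with a 20-mile radius: a nearby reader is
# targeted, a San Francisco reader is not.
portland_me = (43.66, -70.26)
print(in_target_radius(portland_me, (43.66, -70.26), 20))   # → True
print(in_target_radius(portland_me, (37.77, -122.42), 20))  # → False
```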

Scola notes that the ads have been targeting readers in Maine, home of Senators Olympia Snowe and Susan Collins (a.k.a. the moderate wing of the Republican Party), with passages like “78% of Americans support workers’ freedom to form unions and bargain for a better life.”

My question is whether Internet advertising of this form is a medium that lends itself to formal political appeals. The theory would be that people who searched for something related to the legislation would already be cued into wanting to know more about the bill and would thus be predisposed to click on a link to related content. That makes sense, but we know little about the political behavior of this population of “potential ad link clickers.” Is the group that would cross the divide between a legitimate link and a Google ad the same as the group that would not click a Google ad under any circumstances? Are “ad link clickers” more or less disposed to be politically active?

On its face, one would think not. But I suspect there are differences between those who reject direct political appeals (i.e., they won’t click on a banner ad) and those who see no distinction (and will happily click one). I’m in the former category. I’m not exactly sure why, other than that most of my friends in high school became car salesmen (as did I, for a short time).

Like I said…one for the “to do” file.

José’s last post made me think of visualizing processes.  Of late, I’ve been thinking about how Jim Cramer had his head handed to him by Jon Stewart on The Daily Show (link to 3/12 Cramer episode).  Stewart mentioned how markets are two-tiered: one for the insiders and one for the rest of us.  The warnings were out there that the system is broken.  One Frontline from a few years ago, Dot Con (2002), talks about how, during the dot-com boom, initial public offerings (IPOs) of stock were rigged by the powers that be.  Another, The Wall Street Fix (2003), discusses the circumstances that led to the WorldCom bubble, its eventual meltdown, and a $1.4B settlement between regulators and 10 Wall Street firms.

Sociologists often view markets as social constructions.  I tend to view markets as being like sausage: you really don’t want to know too much about the details of production.  An example of this is Google’s IPO in 2004, a novel approach that became riddled with turf wars involving the status quo.  Google sought to price its initial offering of stock in a more fair and equitable manner, eschewing the “insider” bias typical of IPOs.  A 2005 American Sociological Association presentation by Martin Barron noted Google’s intentions:

“Google’s IPO eschewed the traditional method for pricing and allocating shares for an unconventional auction format. This was an attempt to minimize the underpricing of its shares and allow a broader class of investor access to IPO shares. By pursuing this more equitable approach, however, Google threatened the enormous profits that IPOs had previously generated for entrenched Wall Street interests.” —The Google IPO

This NYTimes graphic tries to explain Google’s intentions with a modified version of what is called a Dutch auction.


Personally, for all you viz kids out there, I think the above is a horrible graphic.  I reworked it, as I do when I teach data visualization, to highlight my (ahem) mad Photoshop skillz:


In my graphic, I have price on the Y-axis and number of shares on the X-axis.  Generally speaking, investors bid for a given number of shares at a given price.  Starting at the highest price, the number of shares bid for is tallied until all the outstanding shares are allocated.  The price of the last bid needed to allocate all the shares becomes the offer price.  So, even if you’re the highest bidder, you buy shares at the offer price.
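The tallying procedure above is simple enough to sketch in a few lines. The bid figures here are invented for illustration, and this ignores real-world wrinkles like pro-rata allocation of the marginal bid:

```python
# Uniform-price ("Dutch") auction as described above: walk bids from the
# highest price down, tallying shares until the full offering is covered.
# The price of the bid that fills the book becomes the offer price, and
# every winning bidder pays that single price.

def clearing_price(bids, shares_outstanding):
    """bids: list of (price, shares) tuples. Returns the offer price."""
    allocated = 0
    for price, shares in sorted(bids, reverse=True):  # highest price first
        allocated += shares
        if allocated >= shares_outstanding:
            return price  # last bid needed to allocate all shares
    raise ValueError("offering undersubscribed")

# Hypothetical bids: 1,000 shares on offer. The $110 and $105 bids tally
# 700 shares; the $100 bid fills the book, so everyone pays $100, even
# the $110 bidder.
bids = [(110, 300), (105, 400), (100, 500), (95, 600)]
print(clearing_price(bids, 1000))  # → 100
```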

I like the idea of markets actually moving towards satisfying the underlying assumptions that economists make.  I feel the Dutch auction moves towards that by reducing the transaction costs that often go to the investment banks and by increasing fairness, since insiders can no longer buy at a deflated price only to flip (dump) the stock just after it’s offered and reap huge profits.

Of course, there are consequences for countering Wall Street, as Barron notes:

“… far from passively accepting this challenge to the status quo, these interests actively worked to ensure that Google’s IPO—and hence the auction format—would be seen as a failure.”  —The Google IPO

Lo and behold, look at how the business press framed the IPO, despite it being a “success.”  From Slate:  “Four Ways Google Failed: How the IPO didn’t change Wall Street.”  So, we arrive full circle at Jon Stewart, who quipped that maybe the business press should be more than cheerleaders for the status quo.  Perhaps business schools should consider this as well.

Those Google folks are just brilliant, aren’t they? In yet another moment of insight about human behavior, Google Trends has developed a map to track rises and falls in flu-related searches by location, working off the assumption that people suffering from the flu are more likely to search for information about it. One can track the total volume of flu searches, study a national map of flu search trends, or enter a zip code and learn about local trends.
While it is unclear how accurately such a map can track the spread of flu, it potentially offers a powerful public health tool. I can’t get over what a clever use of search data this is.
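The core idea is simple to sketch: count flu-related queries per region per time window and watch for spikes. Everything here, from the term list to the log entries, is invented for illustration; Google’s actual methodology is certainly more sophisticated:

```python
# Back-of-the-envelope version of the flu-map idea: tally flu-related
# search queries by (region, week) so spikes above a baseline can be
# flagged. Term list and query log are invented for illustration.
from collections import defaultdict

FLU_TERMS = {"flu", "influenza", "fever"}

def weekly_counts(query_log):
    """query_log: iterable of (region, week, query) tuples."""
    counts = defaultdict(int)
    for region, week, query in query_log:
        if any(term in query.lower() for term in FLU_TERMS):
            counts[(region, week)] += 1
    return counts

log = [
    ("04101", 1, "flu symptoms"),
    ("04101", 1, "weather"),
    ("04101", 2, "flu shot near me"),
    ("04101", 2, "influenza treatment"),
    ("94110", 1, "restaurants"),
]
counts = weekly_counts(log)
print(counts[("04101", 2)])  # → 2, a rise over week 1's count of 1
```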

Let’s think of the next big thing! What are some other forms of internet data that (if made public) could be of great value to the public interest?