privacy

I confess. I am a Googlephile. Right now on my desktop, I have Gmail, Google Reader, Google Docs and Google Calendar open in separate tabs on my Chrome browser.

I know that every keystroke I enter into Google is saved and stored. For now, it’s all rather innocuous: mostly work e-mails, calendar entries for kids’ parties and dentist appointments, and the like. Rather than being worried about this, I’m willingly participating in Google’s effort to learn even more about me. I have an Android phone that tracks my whereabouts and lets me check e-mail, RSS feeds, my calendar, and so on.

But link what Google knows about me to what Google knows about you and what it seeks to know about the world, and you have a massive project. As Daniel Soar points out in the London Review of Books, Google’s efforts at rolling out new ways to create data are creating an increasingly smarter, more intuitive, perhaps essential, information behemoth:

Google is getting cleverer precisely because it is so big. If it’s cut down to size then what will happen to everything it knows? That’s the conundrum. It’s clearly wrong for all the information in all the world’s books to be in the sole possession of a single company. It’s clearly not ideal that only one company in the world can, with increasing accuracy, translate text between 506 different pairs of languages. On the other hand, if Google doesn’t do these things, who will?

The broader question about Google is whether private surveillance is inherently less nefarious and intrusive than state-based public surveillance. After all, Google doesn’t have an army. In addition, Google still needs to respond to customer demands. Last year, Google acquiesced to the German public’s privacy concerns by allowing users to opt their home addresses out of its Street View application.

The bigger issue comes from government seeking access to Google’s repository of data. The public and the private are then in danger of becoming blurred. Google makes its interactions with government agencies public via its Transparency Report. But what happens when the state, with its monopoly of force, wants access to Google’s data?

Today, Facebook signed up to use Web of Trust (WOT) reputation ratings to help create a safer on-line experience for its users. The effort is intended to avoid phishing scams within Facebook. Once a Facebook user shares a link:

Facebook automatically scans the links, applying WOT’s information, to determine if the website is known to distribute spam or contain malware. If the link is identified as untrustworthy, then a warning will appear allowing the person to avoid the link, learn more about the rating or continue forward.

Assessments about the trustworthiness of a site are determined by the crowd. I’m not sure exactly how it will work, but presumably if enough people flag a site as malicious, a WOT warning appears.
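To make that guess concrete, here is a minimal sketch in Python of how a crowd-flagging scheme of the kind I’m imagining might decide when to show a warning. The names and thresholds are made up for illustration; this is not WOT’s or Facebook’s actual algorithm.

```python
# Hypothetical sketch of crowd-based link flagging -- NOT WOT's real algorithm.
from collections import defaultdict

flags = defaultdict(list)  # url -> reputation weights of users who flagged it


def report_malicious(url, reporter_reputation=1.0):
    """Record that a user flagged this URL, weighted by their own reputation."""
    flags[url].append(reporter_reputation)


def should_warn(url, min_reports=5, min_weight=3.0):
    """Warn once enough sufficiently reputable users have flagged the URL."""
    reports = flags[url]
    return len(reports) >= min_reports and sum(reports) >= min_weight


# Example: six users flag the same link, so sharing it would trigger a warning.
for _ in range(6):
    report_malicious("http://example.com/phish", reporter_reputation=0.8)
print(should_warn("http://example.com/phish"))  # True
```

Even in this toy version, notice that the flags themselves are data about the people doing the flagging, which is the point I turn to next.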

Sounds good so far. But I wonder how this crowdsourcing of malicious links on Facebook simultaneously binds us even more closely to an “architecture of publicness” (a term I’m playing with as I prepare a manuscript on Facebook’s effect on political identity). What I mean by this term is an on-line design structure that provides social incentives to reveal elements of yourself, whether it be your behavior, your likes and dislikes, or pieces of information from your past or present. All this can of course be aggregated and mined for marketing purposes, even if it won’t necessarily be used in this way.

Theoretically, WOT data would seem to be no different.  As you report which sites are unsavory, Facebook (and/or WOT, I’m not sure how this data is collected) learns more about your tastes and preferences and your browsing habits.

An appropriate retort would be that this is all happening in the name of making Facebook a more secure environment…fair enough. There is no reason why the relentless revelation of your online self has to be all bad. In fact, revelation is cathartic and desirable in many ways. However, when we start to rationalize revelation by making it mundane, it does something to us (I think). I’m not sure what that is yet, but I’m afraid there’s a part of it that’s not so savory. How much sharing is too much sharing on-line? I’m not entirely sure.

Evolution of Social & Information Connections

José has a great post on privacy, Privacy Schmivacy, which highlights how algorithms can infer information about you, rendering privacy settings in a certain context obsolete. The implications bear on the public-private divide, and as José aptly puts it::

“This poses a paradox…if people freely give this information to a web site in exchange for the pleasures of friendship/connection, then are we obliged to regulate how the information is used by others? Isn’t a central element of connection the fact that you’re ‘putting yourself out there’ in public? Being public poses risks. Can we have the pleasures of the public with the protections of the private?”

I’ve been following developments in the semantic web, Web 3.0, which is all about using the personal information and data about us that’s already out there (as in Facebook profiles) and about computers talking to computers to anticipate our needs. Ideally, it’s a benign Skynet from the Terminator movies.

While there have been discussions of a privacy ontology, this one from way back in 2002, the sticky wicket is that most users don’t understand the ramifications of using these sites as we move towards the semantic web. For example, last July, Facebook’s algorithms were tweaked to scour the contacts in your address book. You can opt out of this, but what about all the address books that you’re in? I’ve noticed for quite some time now that one’s Facebook friends list can be used to construct one’s social graph, so I’m not surprised that social networks and profiles of users under lockdown can be reconstructed so readily and relatively accurately.

That said, I think users need to be more aware of the risks of engaging with social media and not be lulled into a false sense of privacy. In terms of policy, I think more can and should be done to {a} limit what information is accessible and {b} require companies and organizations to be more up-front about what information is accessible, to whom, and with what ramifications. I firmly believe there is a knowledge gap between what users know and the reality of privacy on the web.

Should there be more regulation or stricter privacy policies from companies and organizations? I think that’s an interesting question. The benefits lie in interacting through your identity, but the risks lie in how the information that constructs that very identity gets used. My initial reaction is no, but with a twist: there needs to be more information presented to users, in lay language, on the implications of using social media as the contextual web becomes more ubiquitous.

A more interesting issue, to me, isn’t privacy per se, but how the semantic web can alter the social world and policy, which encompasses privacy and the nature of data in everyday life. One area in particular is what I see as an intrusion of the economic sphere on the personal through the use of data::

  • Should your employer be privy to your credit rating or driving record?
  • Should they be allowed to use public information about you {from databases or on social networking sites} as a condition of employment?
  • Where does one’s role as an employee end and one’s role as a private citizen begin? In other words, is speech less-than-free if you want to keep your job?

You can pose similar questions regarding the intersections of the personal and the political, the social, etc., with the main point being that these intersections are altering our everyday lives.

The semantic web is the churlish love child of Foucault’s surveillance and Derrida’s deconstruction.

Twitterversion:: Will the semantic web destroy privacy, given current policies & trends? How will it affect everyday life? #ThickCulture

Song:: Camera Obscura, ‘I Don’t Do Crowds’

Erik Hayden at Miller-McCune links to a study done by Alan Mislove of Northeastern University and his colleagues at the Max Planck Institute for Software Systems that reveals how easy it is to create a profile of you from your Facebook contacts. Using algorithmic magic, the team was able to create profiles for thousands of students at Rice University from their own profiles and the profiles of those they had “friended.”

The algorithm accurately predicted the correct dormitory, graduation year and area of study for many of the students. In fact, among these undergraduates, researchers found that “with as little as 20 percent of the users providing attributes we can often infer the attributes for the remaining users with over 80 percent accuracy.”
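To give a sense of how this kind of inference can work, here is an illustrative sketch in Python of guessing a hidden attribute from the most common value among a user’s friends who do list it. This is my own toy example with made-up data, not the Mislove team’s actual algorithm, which is considerably more sophisticated.

```python
# Illustrative sketch: inferring a hidden attribute from friends' public ones.
# NOT the algorithm from the Mislove et al. study -- just the basic idea.
from collections import Counter

# Friendship graph: user -> set of friends (toy data).
friends = {
    "alice": {"bob", "carol", "dave"},
    "bob":   {"alice", "carol"},
    "carol": {"alice", "bob", "dave"},
    "dave":  {"alice", "carol"},
}

# Publicly listed attribute (e.g., dormitory) for the minority who share it.
public_dorm = {"bob": "Wiess", "carol": "Wiess"}


def infer_attribute(user):
    """Guess a user's attribute from the most common value among friends."""
    known = [public_dorm[f] for f in friends[user] if f in public_dorm]
    if not known:
        return None
    return Counter(known).most_common(1)[0][0]


print(infer_attribute("alice"))  # 'Wiess' -- inferred though alice never listed it
```

The point is that even a crude majority-vote over the friend graph leaks information; locking down your own profile does little if your friends’ profiles are open.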

Hayden sees this as a problem:

Not to seem alarmist (“privacy” on the Web has always been overrated), but if these researchers could develop a limited algorithm that can infer rudimentary attributes off locked profiles, the possibilities seem endless for others to harness advanced software that could render current privacy controls completely useless.

This poses a paradox…if people freely give this information to a web site in exchange for the pleasures of friendship/connection, then are we obliged to regulate how the information is used by others? Isn’t a central element of connection the fact that you’re “putting yourself out there” in public? Being public poses risks. Can we have the pleasures of the public with the protections of the private?

Many of us post to Facebook, perhaps unaware of what can happen to that content and who has rights to it. All of this came to a head a few days ago, as Facebook’s new terms of service (TOS) came to light and were met with a range of reactions from dismay to outrage.

I’ve been reading Convergence Culture, and being in Jane Jacobs’s adopted home, I couldn’t help but think about how the social space of Facebook relates to the way social interactions are shaped by governance and polity in online realms, as well as the idea of a commons that is a privatized space rather than a public one.

While I’m resigned to the fact that there is no privacy online and I don’t know whether to laugh or cry when I hear that Facebook is being used by collections departments to locate unstealthy credit defaulters (true story), I do bristle at the idea of content being appropriated by companies hosting these web commons.

Why? If I’m using the private space of Facebook, why should I feel that what I post is still my intellectual property? Am I being unreasonable? After all, I push the boundaries of fair use quite a bit.

Can social network sites really be sites of democratic action, when they can ultimately be censored, not as a matter of public policy, but rather corporate TOS?  On the other side of the Web 2.0 fence, how much freedom should an organization grant users?

I feel that what any site engaging in Web 2.0 should do, if it wants to use content posted by users, is…to simply ask them for permission. It’s simple good manners and builds social capital. I do think privatized social spaces or commons can be used for civic engagement, and I find emerging technologies being developed up here in Canada that allow content to be fed from multiple sites (e.g., MySpace, Facebook, Twitter, LinkedIn) into one location rather interesting. More on this in a future post. I feel the overlap of Web 2.0 with open source will make us all rethink ownership and privacy, and force organizations to ponder what intellectual property really means, what the risks are in terms of what the courts are saying, and how to implement processes. Or not. That devil inertia.

Ben Smith at Politico has a fascinating little tidbit about Obama’s release of photos from his daughters’ first day of school.  While Smith suggests that at first glance the release of these photos might seem invasive, he links to Garance Franke-Ruta at the Washington Post who offers up this keen observation:

It may sound counterintuitive, but the best way for Barack Obama to keep any of his life private in this era of cell phone-snaps, Facebook goofs and long-lensed paparazzi is to do exactly this: reliably and regularly release pictures of newsworthy intimate family moments in a manner that he can control.

That’s because online, the only way to control your own image is to drown outsiders’ takes in a media stream of your own creation — and there is no news agency or paparazzo in the world with better access to the inner workings of Obamaland and the Obama family than Obama himself.

If Obama’s active Flickr account means the end of the paparazzi, then I’m all for it!