I Gave Up My Privacy and All I Got Was This Lousy Filter Bubble

[this is one of a series of posts that I did while a student in Nicco Mele’s class at Harvard in 2013]
what was I reading this week?

In “Hyperconnected”, chapter eight of Christakis and Fowler’s Connected: The Surprising Power of Our Social Networks and How They Shape Our Lives, the authors examine how our natural inclination to create social networks is strengthened and enhanced by the Internet. While some forms of social interaction still require actual physical presence, the authors explain how the web makes it possible both to engage in new kinds of social interaction and to extend our traditional networks more quickly and to more far-flung locations.

“Social Has a Shape: Why Networks Matter”, chapter five of Rheingold’s Net Smart: How to Thrive Online, similarly describes how social interaction has changed, making use of a number of computer and web concepts (power laws, long tails, network effects) along with social psychology to help make sense of the new ways we interact online.

Pariser’s TED Talk on “Filter Bubbles” describes one of the pitfalls of having easy access to information and conducting most of our social interaction on the web. Stray’s article, “Are we stuck in filter bubbles? Here are five potential paths out”, on the other hand, asks us first to rethink whether these filter bubbles are really harmful and, if they are, how to get out of them.

Finally, Palfrey and Gasser’s book Born Digital examines these issues through the lens of the first generation to be brought up entirely in a web-enabled world. Their third chapter, “Privacy”, explores some of the serious issues arising from the bargain we’ve made to give up privacy in exchange for convenience.

Was it any good?

The readings on social networks were very useful – particularly reading Christakis & Fowler and Rheingold together. Both pieces used a strong sense of history to inform our understanding of how the Internet has both augmented existing social structures and revolutionized how we interact as human beings. Rheingold’s mash-up of social theory, personal anecdotes, and historical trivia made for especially enlightening and, frankly, entertaining reading.

The Filter Bubble and Privacy pieces, strong as they are, just scratched the surface of the issues they raise. Pariser’s TED Talk was great in that it’s always nice to hear the person who identified a concept talk about it in his own words. Stray’s article was also very good at asking much-needed questions, and his strongest point is his refinement of the notion of a filter bubble to account for weak ties. Stray also helped remind us that, before the advent of the Internet, mass media was really just a filter bubble for middle-class white men that everyone else also happened to be stuck in.

What both these pieces miss is that perhaps the filter bubble is a feature and not a bug. Confirmation bias has been part of the human psyche since long before the Internet, and probably since before written language existed. People want their notions confirmed and are unsettled and angered by contradiction. So maybe the reason the filter bubble is so easy to see now is that, like everything else, the Internet enables us to get what we really want – not so much junk food for the mind or nutritious food for the mind, but the comfort food of the mind.

Born Digital’s chapter on privacy is a great taking-off point. Considering what’s happened in the realm of Internet privacy since that book’s publication, however, merely worrying about corporations keeping the data we freely give up isn’t enough. Apparently we now have to worry about living in a surveillance state and whether those privacy sacrifices are worth making.

So…

Taken together, the readings give us a good way to think about the web and privacy. Christakis and Fowler describe how online social networking has recognizable roots in the kinds of interpersonal connections humans have made throughout our history. To make those connections and secure the benefits of social interaction, we must give up some privacy (at the very least, reveal that we exist) and hope that revelation won’t be used maliciously. The web lets us connect easily with everyone, but doing so forces us to give up some of our privacy to everyone. Unlike our earlier, more limited social interactions, interacting with everyone virtually guarantees that, at some point, someone will abuse his access to someone else’s private information.

The concept of the “filter bubble” helps explain why we give up some of our privacy. We seek out information that confirms our preexisting notions (this is not new), but technology gives this isolation even greater force in a self-replicating cycle: our searches and clicks feed the search and social network algorithms, which then serve us more of the same to keep our eyes glued to the screen, so that the companies behind them can charge advertisers to intrude into our lives. For this to work (and we do want it to work), we have to give up information about ourselves to the companies running these algorithms. We trade our privacy, and our freedom to be let alone by advertisers, for the ease and comfort of our filter bubbles.
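To make that cycle concrete, here is a minimal, hypothetical sketch (not drawn from Pariser, Stray, or any real recommender system) of how a personalization loop narrows what a user sees when the only input is past clicks and the only goal is engagement:

```python
# Toy model of the self-replicating cycle described above: the feed only
# recommends topics close to what the user has already clicked, so the range
# of what the user sees narrows over time. All names and numbers are made up.
import random

# Each topic mapped to a position on an arbitrary 0-to-1 "viewpoint" axis.
ARTICLES = {
    "politics-left": 0.1, "science": 0.4, "sports": 0.5,
    "local-news": 0.5, "celebrity": 0.6, "politics-right": 0.9,
}

def recommend(click_history, k=3):
    """Rank topics by closeness to the average viewpoint of past clicks."""
    if not click_history:
        return random.sample(list(ARTICLES), k)
    center = sum(ARTICLES[a] for a in click_history) / len(click_history)
    return sorted(ARTICLES, key=lambda a: abs(ARTICLES[a] - center))[:k]

# Simulate a user who always clicks the top recommendation; that engagement
# signal is fed straight back into the next round of recommendations.
history = ["politics-right"]
for _ in range(5):
    feed = recommend(history)
    print(feed)
    history.append(feed[0])
# After a few rounds the feed has converged on the same few comfortable
# topics, with no mechanism for surfacing anything that challenges them.
```

Any “path out” of the bubble would have to break that feedback loop by deliberately surfacing items farther from the user’s center, rather than ever closer to it.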

If this trade were an arm’s-length one, in which everyone using the Internet understood the value of his or her privacy and could make a rational decision about how to give it away, there’d be little to complain about. However, Born Digital helps show that we really have very little idea of what, in the future, could be done with the information we so readily give up today.

If we’re going to keep any recognizable part of our privacy on the Internet, we need some kind of regulation. Private self-regulation occasionally works, especially when it’s done by the far-sighted minds who work on the Internet (see ICANN). However, there is now so much money in people’s private information that we’re unlikely to see any pro-privacy self-regulation with teeth. This seems like the ideal place for some kind of government regulation, but, considering the NSA revelations, who can say our government is up to the task? We’re seemingly left with few options: A) hope for benevolent corporate self-governance, understanding that it’s not likely to fully protect us; or B) try to get some regulation from the government, with the added burden of having to continually police the government as well.

This would probably be a good segue into Consent of the Networked, but I will get to that in a future post.

Here Comes Everybody asking “What is Web 2.0”

[this is one of a series of posts that I did while a student in Nicco Mele’s class at Harvard in 2013]
what was I reading this week?

Here Comes Everybody: The Power of Organizing Without Organizations by Clay Shirky

This is an anecdote-driven explanation of how technological innovations on the Internet (seen almost exclusively through blogs and social media) enable and hyper-extend our inherent capacity for forming social connections. Shirky describes the impact of these innovations and provides a useful framework for thinking about the kinds of group tasks the web can enable: it knocks down traditional walls and lowers the transaction costs for everything from sharing (the easiest group task) to cooperation to collective action (the most difficult). From the mass amateurization of professions like journalism, to mass control of the means of production, to the rapid deployment of social tools, Shirky helps explain how the technological innovations of the last decade are shaping our culture and society.

“What is Web 2.0” by Tim O’Reilly

O’Reilly’s article is a bit more limited in scope. He sums up much of what we’ve seen happening on the web under the moniker “Web 2.0”. His article is mostly a description of business-model competition (operating systems versus browsers-as-platform; traditional media versus the blogosphere; software-as-service versus software-as-product; apps for single devices versus apps that coordinate across devices). In the end, he describes the significant changes businesses must undertake in order to survive in a world grown comfortable with using the web.

was it any good?

Both of these pieces aim to explain how the web changes some aspects of our society. O’Reilly does a bit better with his razor-sharp focus on business models; Shirky, because he aims to show changes in the culture as a whole, is bound to fall short.

O’Reilly captures and explains most, if not all, of the things that the average reader will think of when she hears the term “Web 2.0”. My main problem with the article is its scope – by limiting himself to the business ramifications, I think he misses the greater cultural impact that “Web 2.0”-style changes are having on how we work and live. Widening that scope would have allowed for a logical progression from his fleeting but prophetic question, “[w]ho owns the data?”, to a discussion of the personal ramifications of a business model in which all data is worth something. But I’ll talk a little more about the importance of data and the implications of data-as-commodity for personal privacy below.

Shirky, on the other hand, does have a larger scope. However, for all the good he does being descriptive – giving an account of the cultural significance of new technology – he never becomes comprehensively prescriptive; he shies away from evaluating those changes and asking whether they have or have not fulfilled technology’s egalitarian promise.

I’m particularly disappointed by his failure to note, more than in passing, how these new technologies were already being co-opted to support and strengthen existing class structures. It is still clearly class that matters – just take the example of Evan in the first chapter, who harnesses the power of the web to get the police (who already look out for the interests of upper-class professionals) to gratify his need to exert power over a lower-class girl who happened to acquire the wrong phone and had the temerity to be a jerk about it to her social “betters”. This is where a prescriptive approach would have been helpful: the web does have the capacity to erase these distinctions, and I would be curious to see how a writer like Shirky would explain why that isn’t happening and what we can do to create a more egalitarian technological revolution.

so…

Because I have worked as an attorney and dealt with consumer privacy issues, some of the changes highlighted in O’Reilly’s article stood out to me. Was it naïve for such a smart writer to devote only one sentence – “[a] further point must be noted with regard to data, and that is user concerns about privacy and their rights to their own data” – to the notion that the data most coveted by businesses might very well be the data we most want to keep to ourselves? Considering that at least two post-Web 2.0 giants are earning opprobrium for their boundary-pushing pursuit of data, this may be one of the most important issues to arise from the world envisioned in O’Reilly’s article. We also now know that even something as seemingly innocuous as the business-model shift to browser-as-platform has all sorts of unintended, potentially negative consequences for personal privacy. People like to think they own their data, but we’re coming to the point where we have to figure out what to do when there’s another claimant to that ownership.

This week’s readings show that the societal shifts engendered by new and ubiquitous technologies like social media and web-based systems aren’t just business issues, or even communications or social issues. This technological revolution presents us with novel moral, ethical, legal, and political challenges. The institutions we use to deal with those challenges aren’t nearly as nimble as the firms Shirky and O’Reilly describe, but they’ll have to become so in order to minimize the harm and maximize the good (and there is a lot of good) that can come from our technological achievements.