In the collective mind, being weaned off sanctioned information and switching to collective self-service media is accompanied by withdrawal symptoms and culture shock. On a daily basis, people thirstily seek information on the Internet – they are led by it, they create it and they distribute it themselves, investing the very best they are capable of. And yet, at the same time, they don’t trust the Internet. A myth has taken hold that the Internet is a trash dump.
Is the Internet really just a supplier of rubbish?
One of the video presentations by the American media thinker Clay Shirky is entitled It’s Not Information Overload. It’s Filter Failure.[i] The problem, in other words, is not the volume or quality of information, but the quality of the filters.
Yes, in terms of how information gets onto the Internet, it could be compared with a rubbish dump. Everything gets in. But far from everything circulates and finds an end-user; in reality, nobody consumes the rubbish. There are no restrictions on entering the Internet, but there are rather serious – albeit not all that obvious – restrictions on the path information takes from the Internet to our brains. Content is now filtered not before publication, but at the point of distribution.
There are three bastions guarding our mental health – three echelons of filters that sift the contents of the Internet for us:
1) personal settings;
2) the Viral Editor;
3) algorithms of relevance.
***
Personal settings. By adding particular web links to their browser bookmarks, Internet users create an individual map of the web, subsequently relying on the routes they have already tried out.
The selection of friends in our friend feed provides almost the same result. People choose their friends on the basis of shared views and common history – that is, they choose people of whom they have personal experience. They are not equally close with all of these friends, but statistically the friends are much closer to them than a random Internet stranger. Further, users observe what the friends they have selected read and what those friends are interested in. This is how the filter of recommendation works, creating for each of us a web of our own correspondents who “know” what we are interested in, because they themselves are interested in pretty much the same things.
Our browser bookmarks and the friends in our friend feed are not always as good and reliable as we would like, but they constitute a very serious form of screening: one that decreases entropy by several orders of magnitude, restricts output and increases relevance. For each of us, browser bookmarks and friend feed settings intercept 99.99999% of Internet content – or, to be more precise, of Internet rubbish.
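How drastic this screening is can be illustrated with a minimal sketch. The code below (Python; the item fields and source names are invented for illustration) treats personal settings as a simple whitelist: anything published outside the user’s hand-picked sources never enters the feed at all.

```python
from dataclasses import dataclass

@dataclass
class Item:
    source: str  # who published the item: a friend or a bookmarked site
    text: str

def personal_filter(stream, bookmarks, friends):
    """Pass only items whose source the user has chosen in advance."""
    trusted = bookmarks | friends  # the union of the two whitelists
    return [item for item in stream if item.source in trusted]

stream = [
    Item("news-site-a", "Headline from a bookmarked site"),
    Item("stranger-42", "Random content from the open Internet"),
    Item("alice", "A friend's post"),
]
print(personal_filter(stream, bookmarks={"news-site-a"}, friends={"alice"}))
# Only the bookmarked site and the friend get through; the stranger is dropped.
```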
***
The Viral Editor. Previously it was the newspaper or television editor who defined the news for us, relying on his or her own understanding of the significance of events. Today, we get the news via a web of viral distribution on the Internet.
As a result of the cumulative efforts of users – micro-editors united in a neural network – a sort of artificial intelligence emerges, within which people themselves are the processing chips. They do not merely transmit information to each other; they “improve” the information in the process, according to their personal understanding of “what is interesting.” The sum total of individual understandings of what is interesting – not voiced by a journalist, but amalgamated directly through participants’ immediate relations – becomes the general opinion of my social group, or even public opinion.
This does not mean that each user performs his or her private “editing” well – not at all. That said, each individual tries as hard as he or she can: everyone craves response, and getting response takes effort. Some are better at this than others, some worse, but personal effort yields a huge collective result. Even a weak thirst for response, multiplied across individual cases, creates systemic positive selection. The thirst for response compels the entire multitude of users to seek out, obtain and distribute whatever might be of interest and might attract attention. Therefore, whatever is of interest and significance will inevitably be selected, reinforced by the Viral Editor and conveyed to precisely those for whom it has relevance. And they, in turn, will pass it on.
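This positive selection is easy to make visible in a toy simulation. In the sketch below (Python; the interest scores, fan-out and hop count are all invented), each user reshares an item only if it clears his or her personal interest threshold, so reach compounds for interesting items and collapses for dull ones.

```python
import random

def viral_reach(interest, followers_per_user=5, hops=4, seed=1):
    """Toy estimate of how many users an item reaches through resharing."""
    rng = random.Random(seed)
    reached, frontier = 0, 1  # start with a single author
    for _ in range(hops):
        # Each user in the current frontier reshares only if the item
        # beats his or her personal interest threshold.
        sharers = sum(1 for _ in range(frontier) if interest > rng.random())
        frontier = sharers * followers_per_user
        reached += frontier
    return reached

for interest in (0.1, 0.5, 0.9):
    print(f"interest={interest}: reached about {viral_reach(interest)} users")
# Dull items fizzle out after a hop or two; interesting ones cascade.
```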
The Viral Editor’s filter collaborates with the settings of my friend feed and my browser. Information that has not captivated my friends will not reach me. It will not pass my friends’ screening, nor the screening of the friends of my friends, who are carrying out the work of the Viral Editor specifically for me.
***
The algorithms of relevance. Algorithms in search engines (such as Google) and social networks (such as Facebook) analyze which websites a user visits, where he or she enters the Internet from, what he or she searches for, who he or she communicates with, what he or she clicks on, what he or she “likes.” From such information, the algorithm infers what might interest the user and returns the most relevant answers to his or her search queries (Google) or surfaces the messages from friends that are most interesting to him or her (Facebook).
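The principle can be sketched in a few lines (Python; the behavioural signals, weights and topic labels are invented, and real engines use far richer features): build a profile from past behaviour, then let that profile reorder identical results for different users. The outcome anticipates Pariser’s experiment described below.

```python
from collections import Counter

def build_profile(clicks, likes):
    """Weight liked topics more heavily than merely clicked ones."""
    profile = Counter()
    for topic in clicks:
        profile[topic] += 1.0
    for topic in likes:
        profile[topic] += 3.0  # a "like" is a stronger signal than a click
    return profile

def rank(items, profile):
    """Order items by how well their topics overlap with the profile."""
    return sorted(items,
                  key=lambda item: sum(profile[t] for t in item["topics"]),
                  reverse=True)

# Two users issue the same query but carry different histories.
tourist = build_profile(clicks=["travel", "hotels"], likes=["travel"])
activist = build_profile(clicks=["politics"], likes=["protest"])
results = [
    {"title": "Egypt: protests continue", "topics": ["protest", "politics"]},
    {"title": "Top 10 Nile cruises", "topics": ["travel", "hotels"]},
]
print([r["title"] for r in rank(results, tourist)])   # cruises first
print([r["title"] for r in rank(results, activist)])  # protests first
```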
Eli Pariser, the author of The Filter Bubble, tells the following story. He asked two of his friends, living in different US states, to enter the word “Egypt” into Google and send him the search results. The outcome? One friend received a collection of tourist links; the other, links about the Arab revolutions. All because the two friends had different interests and visited different websites – and so, according to Pariser, Google interpreted exactly the same query in different ways.[ii]
This is yet another prototype of artificial intelligence, of the mechanical variety. This robotic intelligence already evaluates us and makes decisions about what we need.
A similar algorithm works on Facebook. Analyzing a user’s past preferences, the algorithm sets its output priorities. For example, Pariser noticed that Facebook never showed him messages from a friend whose political views were the opposite of his own. The algorithm had decided that if Pariser never liked such posts and never commented on them, they were of no interest to him. And there is no point in littering a feed: with the superfluous removed, the result is beautiful!
The algorithm thus adjusts the output of a feed according to our interests. This is very convenient, as it truncates a huge volume of information that is irrelevant to us. At the same time, another task is resolved: this kind of algorithm increases advertising efficiency, since it allows us to be shown only the advertising we have a high chance of being interested in. Have you ever noticed that if you look up mouthwash in a search engine, websites will display advertising for dental clinics the following week? This is the Filter Bubble in action.
***
Personal browser settings, the Viral Editor and relevance algorithms create a three-layer filter that reduces the variety of Internet content to a digestible level of information that is thematically suited to you.
The settings of the web environment (bookmarks and friend feeds), as created by the user, work as a personal content filter.
The Viral Editor can be seen as an interpersonal filter of content.
And, finally, relevance algorithms are a machine content filter.
Altogether, they form a fairly harmonious and interactive system that turns the raw content of the Internet into our personal “Daily Me” (to use Nicholas Negroponte’s term[iii]) – a special medium within which information is selected personally for you, while taking its public significance into account.
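Seen as a whole, the three echelons compose like the stages of a pipeline. A self-contained sketch (Python; the sources, reshare counts and topics are invented for illustration) in which the Daily Me is simply the composition of the three filters:

```python
def daily_me(stream, trusted_sources, user_profile):
    # 1) Personal settings: drop everything from unknown sources.
    survivors = [i for i in stream if i["source"] in trusted_sources]
    # 2) The Viral Editor: drop items that nobody reshared.
    survivors = [i for i in survivors if i["reshares"] > 0]
    # 3) Relevance algorithm: rank the rest against the user's profile.
    overlap = lambda i: sum(1 for t in i["topics"] if t in user_profile)
    return sorted(survivors, key=overlap, reverse=True)

stream = [
    {"source": "alice", "reshares": 12, "topics": ["media", "internet"]},
    {"source": "alice", "reshares": 0, "topics": ["media"]},
    {"source": "spam-bot", "reshares": 99, "topics": ["ads"]},
]
print(daily_me(stream, trusted_sources={"alice"}, user_profile={"internet"}))
# One item survives all three echelons: a personal "Daily Me".
```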
A sensible balance between personalization and generality is very important: on the one hand, the huge volume of information must be reduced to satisfy personal interests and needs; on the other, a commonality of information must be preserved to guarantee individual and group social integration. At the dawn of the computer era, ideas about personalized future mass media were popular: a special machine would print newspapers tailored to your personal requirements. There were even attempts to create programmes that would select content according to your personal settings. MIT Media Lab, for example, realized this concept, based on Nicholas Negroponte’s ideas about The Daily Me. As early as the mid-90s, the Media Lab developed a programme called Fishwrap[iv] that allowed the user to set up a bespoke news feed drawing on various media. The programme asked users who they were and what they needed (the questionnaire included questions about postal code, academic interests and personal hobbies), and it subsequently selected “relevant” news from a fixed list of sources.
Now, of course, nobody asks the user. The Internet immediately gives the user what he or she wants; or, to be more precise, what he or she should want.
Hence, such programmes never came into wide use. Some companies or government structures use thematic digests prepared by people and/or machines, but outside of highly specialized interests there is little demand for extremely customized programmes of content selection. They are redundant: the Internet’s ecosystem builds itself around each person, fulfilling the very same function – the selection of relevant content. Moreover, an artificial selection system requires administrative effort, and this, once again, is redundant: you can get the same result on the web without any effort.
***
Finally, there is an important factor of which advocates of customized content are often unaware. Personalization is a marketing tool. Marketing messages must be addressed to the client with the highest possible level of personalization. The mass media have quite a different task: the mass media is an intermediary between the general and the individual.
Marketing atomizes and disconnects, while the mass media unites and generalizes. Marketing strives to reach the individual user and to remain with him or her one-on-one; the media operates on a combination of personal, group and social interests. The task of the mass media is fundamentally opposed to that of marketing: the mass media should create coherence and social gravitation in society. Media is the connective tissue of society,[v] according to Clay Shirky. Therefore, the idea of extreme personalization never actually took hold in the development of the mass media.
Designers of both new media and relevance algorithms should take this factor into account. Media should involve individuals in society, not exclude them from it. Democrats and Republicans do not need different pictures of the world; they need different perspectives on one and the same picture. Points of view may differ, but not the perceived objects themselves. Otherwise, society would become impossible.
Yes, sometimes points of view diverge so far that they appear to reflect different worlds. When this happens, society splits and loses cohesion; what ensues is disintegration and social upheaval. It is unlikely that this is the goal of customizing media mechanisms.
In other words, customization means not only content personalization for the user, but also the typification of that same user. From the media user’s point of view, customization should include individuals in society, rather than exclude them.
In this sense, the combination of Internet filters (personal settings, the Viral Editor, relevance algorithms) obviously generates a sensible balance between the personal and the common, allowing the vast volume of general information to be reduced to a quantity and quality that is of use to the individual user. In the contemporary Daily Me, which takes shape on the screen quite naturally, individuals receive information that has been personally selected for them, but from common sources.
***
The three-part filter system has its problems, which Eli Pariser discusses in his book The Filter Bubble.[vi] On the one hand, relevance algorithms create comfort, intercepting noise and selecting information that fits our interests. On the other hand, relevance algorithms create that very same Filter Bubble – the impervious cocoon that locks our future outlooks within the prison of our past preferences. If a robot judges exclusively by what we have liked previously, we lose the chance for serendipity: accidental encounters with unexpected information that may expand our perception, pouring fresh blood into our familiar world.
There is no doubt that relevance algorithms will be further refined and perfected. They will select information according to our interests more and more thoroughly, accurately and unnoticeably. Eli Pariser worries that, because of this, humans’ intellectual self-sufficiency and even democracy will suffer, because at the end of the day, with the evolution of relevance algorithms, we will lose the chance of seeing something different.
Yes, these are real risks. Thus far, Eli Pariser – like any of us – is capable of noticing that a friend with different points of view never appears in his Facebook feed, because the relevance algorithm has decided that the friend’s posts are of no interest and does not show them. But it is already fairly difficult to notice that the search engine’s output for us is nothing like its output for someone else. We can only find this out by asking a friend to perform exactly the same search and comparing the results.
***
The development of content filtration undoubtedly brings with it new risks. By their very nature, these are ecological risks. We are entering a new, digital living environment. The position of contemporary humans in the digital world can be compared with that of ancient humans in their early days in the physical world. In both instances, people are establishing themselves in an unexplored environment.
In the physical world, humankind is now so powerful that we face an ecological task: to defend the environment from human impact. But imagine the early ages of humankind: in an aggressive natural environment, it was people themselves who needed protection. Now we are in the early stages of a new environment – not a natural one, but a digital one.
We are still in the Stone Age of the Internet. Humans have no experience of digital existence and therefore perceive novelty as something worrying, often even as a threat. At the state level there is a desire to regulate the digital environment for the sake of the comfort and stability of the people, or indeed of the state itself. What is more, the digital environment is so changeable and is developing so quickly that human adaptive abilities cannot keep up with the changes.
The Internet is itself providing assistance with the process of adapting to the Internet’s challenges, a fact that seems somewhat worrying. “The Internet is no longer an alternative to real life, it’s a tool for arranging it,” writes Clay Shirky.[vii] By orchestrating the information flows, the Internet is starting to orchestrate human life in new ways, while humans have less and less influence over the process. (As Eli Pariser said, “The algorithms that orchestrate our ads are starting to orchestrate our lives.”[viii])
The problems of interaction between the environment and human beings are ecological problems. And they are now occurring not only in the natural world, but in the digital environment. This is a new dimension of ecological tasks that neither politicians nor environmentalists see.
In their fight for user convenience (and for the user’s attention on behalf of advertisers), relevance algorithms unavoidably enter into conflict with humans’ own settings for perceiving the world. The Filter Bubble will unleash a war against the Viral Editor, and, following the logic of evolutionary development, the Filter Bubble will win. What will happen to humankind next? (More on this in the third part of the book.) For now, the relevance algorithms are doing us a service that, on the whole, sustains our Internet hygiene.
And these conveniences enslave us.
Andrey Miroshnichenko, 2013
______________________________
[i] Clay Shirky. It’s Not Information Overload. It’s Filter Failure. Presentation at Web 2.0 Expo NY, 2008. Video: http://www.youtube.com/watch?v=LabqeJEOQyI
[ii] Eli Pariser. Beware online “filter bubbles.” Video on TED.com. http://www.ted.com/talks/eli_pariser_beware_online_filter_bubbles.html
[iii] Fred Hapgood. The Media Lab at 10. Wired, November 1995. http://www.wired.com/wired/archive/3.11/media.html?topic=&topic_set
[iv] Christopher Harper. The Daily Me. American Journalism Review, April 1997. http://www.ajr.org/Article.asp?id=268
[v] Clay Shirky. Cognitive Surplus: Creativity and Generosity in a Connected Age. 2010. P. 54.
[vi] Eli Pariser. The Filter Bubble: What the Internet Is Hiding from You. 2011.
[vii] Big Think interview with Clay Shirky. Big Think, May 26, 2010. http://bigthink.com/videos/big-think-interview-with-clay-shirky
[viii] Eli Pariser. The Filter Bubble: What the Internet Is Hiding from You. 2011.
___________________________
This chapter is extracted from Human as Media: The Emancipation of Authorship, available on Amazon.