Is the Internet really just a supplier of rubbish?
One of the video presentations by the American media thinker Clay Shirky is entitled It’s Not Information Overload. It’s Filter Failure.[i] The problem, in other words, is not the volume or quality of information, but the quality of the filters.
Yes, in terms of how information gets onto the Internet, it can be compared to a rubbish dump. Everything gets in. But far from everything circulates and reaches an end user. In reality, nobody uses rubbish. There are no restrictions on entering the Internet, but there are rather serious – albeit not all that obvious – restrictions on the path information takes from the Internet to our brains. Content is now filtered not before publication, but at distribution.
There are three bastions guarding our mental health – three echelons of filters that sift the contents of the Internet for us:
1) personal settings;
2) the Viral Editor;
3) algorithms of relevance.
Personal settings. In adding a particular web link to their browser bookmarks, Internet users create their individual map of the web, in future relying on the routes they’ve already tried out.
The selection of friends in our friend feed produces almost the same result. People choose their friends on the basis of shared views and common history – that is, among people they have personal experience of. They are not equally close to all of them, but these friends are statistically much closer to them than a random Internet stranger. The user then observes what the friends he or she has selected read and what they are interested in. This is how the filter of recommendation works, creating for each of us a web of our own correspondents who “know” what we are interested in, because they themselves are interested in pretty much the same things.
Our browser bookmarks and the friends in our friend feed are not always as good and secure as we would like, but this is a very serious form of screening that decreases entropy by several orders of magnitude, restricts output and increases relevance. For each of us, browser bookmarks and friend feed settings intercept 99.99999% of Internet content, or, to be more precise, Internet rubbish.
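This first echelon of screening can be illustrated with a toy sketch. All the sources, authors and items below are hypothetical; the point is only that a user’s own settings act as a whitelist through which content must pass:

```python
# Toy sketch (hypothetical data): personal settings as a first-pass filter.
# Only items arriving via a tried-out route - a bookmarked site or a
# chosen friend - ever reach the user; everything else is intercepted.
bookmarks = {"example-news.com", "media-blog.net"}
friends = {"alice", "boris"}

stream = [
    {"source": "example-news.com", "author": None, "title": "Local story"},
    {"source": "random-site.org", "author": None, "title": "Spam"},
    {"source": "social.net", "author": "alice", "title": "Alice's link"},
    {"source": "social.net", "author": "stranger", "title": "Noise"},
]

def passes_personal_filter(item):
    """An item gets through only via a bookmark or a selected friend."""
    return item["source"] in bookmarks or item["author"] in friends

visible = [item["title"] for item in stream if passes_personal_filter(item)]
print(visible)  # ['Local story', "Alice's link"]
```

Half of this hypothetical stream never reaches the user at all; at the scale of the whole Internet, the same mechanism discards almost everything.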
The Viral Editor. Previously it was the newspaper or television editor who defined the news for us, relying on his or her own understanding of the significance of events. Today, we get the news via a web of viral distribution on the Internet.
As a result of the cumulative efforts of users – micro-editors united in a neuronal web – a sort of artificial intelligence emerges, within which people themselves are the processing chips. They do not merely transmit information to each other; they “improve” it in the process, according to their personal understanding of “what is interesting.” The sum total of individual understandings of what is interesting – not as voiced by a journalist, but amalgamated directly through participants’ immediate relations – becomes the general opinion of my social group, or even public opinion.
This does not mean that each user performs his or her private “editing” well – not at all. That said, each individual tries as hard as he or she can: clearly, he or she wants a response, and getting a response takes effort. Some are better at this than others, some worse, but personal effort produces a huge collective result. The weak thirst for response in each individual case creates a systemic positive selection. The thirst for response compels an entire multitude of users to seek, obtain and distribute whatever might be of interest and might attract attention. Therefore, whatever is of interest and significance will inevitably be selected, reinforced by the Viral Editor and conveyed to precisely those for whom it has relevance. And they, in turn, will communicate it further.
The Viral Editor’s filter collaborates with the settings of my friend feed and my browser. Information that has not captivated my friends will not reach me. It will not pass my friends’ screening, nor the screening of the friends of my friends, who are carrying out the work of the Viral Editor specifically for me.
The algorithm of relevance. Algorithms in search engines (such as Google) and social networks (such as Facebook) analyze which websites a user visits, where he or she enters the Internet from, what he or she looks for, who he or she communicates with, what he or she clicks on, what he or she “likes.” Using such information, the algorithm decides what might interest users and gives them the most relevant answers to their search queries (Google) or shows messages from their friends that are most interesting to these users (Facebook).
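The core of such an algorithm can be sketched as a toy scoring function. The profiles, documents, topics and weights below are all hypothetical, and real search engines use vastly more signals; the sketch only shows how the same query can yield different answers for different users:

```python
# Toy sketch: rank results for the same query differently per user,
# using an interest profile built from that user's past behaviour.
# All documents, topics and weights here are hypothetical.
def relevance(doc_topics, user_profile):
    # Score = how strongly the document's topics overlap the user's interests.
    return sum(user_profile.get(topic, 0.0) for topic in doc_topics)

documents = {
    "Egypt travel guide": ["tourism", "egypt"],
    "Egypt protests analysis": ["politics", "egypt"],
}

tourist = {"tourism": 0.9, "egypt": 0.3}       # visits travel sites
news_reader = {"politics": 0.9, "egypt": 0.3}  # follows political news

def top_result(profile):
    """Return the highest-scoring document for this user's profile."""
    return max(documents, key=lambda d: relevance(documents[d], profile))

print(top_result(tourist))      # Egypt travel guide
print(top_result(news_reader))  # Egypt protests analysis
```

One query, two users, two different top results: exactly the effect Pariser describes in the story below.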
Eli Pariser, the author of The Filter Bubble, tells the following story. He asked two of his friends, living in different US states, to enter the word “Egypt” into Google and send him the search results. And what was the outcome? One friend was given a collection of tourist links, the other links about the Arab revolutions. All because his friends had different interests and visited different websites; according to Pariser, Google therefore interpreted exactly the same query in different ways for each of them.[ii]
This is yet another prototype of artificial intelligence, of the mechanical variety. This robotic intelligence already evaluates us and makes decisions about what we need.
A similar algorithm works on Facebook. Analyzing a user’s past preferences, the algorithm sets the priorities of its output. For example, Pariser observed that Facebook never shows him messages from a friend whose political views are the opposite of his own. The algorithm decided that if Pariser never likes such posts and never comments on them, they are of no interest to him. And there is no point in littering a feed; with the superfluous removed, the result is beautiful!
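The effect Pariser describes can be sketched in a few lines. The engagement counts, threshold and posts below are all invented for illustration; the real feed ranking is far more elaborate, but the selection principle is the same:

```python
# Toy sketch (hypothetical counts): posts from friends the user has
# never engaged with are silently dropped from the feed.
engagement = {                 # likes + comments the user gave each friend
    "like_minded_friend": 27,
    "opposite_views_friend": 0,
}

posts = [
    {"author": "like_minded_friend", "text": "Agreeable take"},
    {"author": "opposite_views_friend", "text": "Challenging take"},
]

THRESHOLD = 1  # assumed cut-off: zero past engagement means the post is hidden

feed = [p["text"] for p in posts if engagement[p["author"]] >= THRESHOLD]
print(feed)  # ['Agreeable take']
```

The dissenting friend’s post is not rejected by the user; it is removed before the user ever sees it.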
The algorithm thus adjusts the output of a feed to our interests. This is very convenient, as it cuts away a huge volume of information that is irrelevant to us. Another task is solved at the same time: such an algorithm increases advertising efficiency, as it allows us to be shown only advertising we have a high chance of being interested in. Have you ever noticed that if you look up mouthwash in a search engine, different websites display advertising for dental clinics the following week? This is the filter bubble in action.
Personal browser settings, the Viral Editor and relevance algorithms create a three-layer filter that restricts the variety of Internet content to a digestible level of information that is thematically suitable to you.
The settings of the web environment (bookmarks and friend feeds), as created by the user, work as a personal content filter.
The Viral Editor can be seen as an interpersonal filter of content.
And, finally, relevance algorithms are a machine content filter.
Altogether, they form a fairly harmonious and interactive system that turns the raw content of the Internet into our personal “The Daily Me” (to use Nicholas Negroponte’s term[iii]) – a special medium in which information is selected personally for you, while still taking its common significance into account.
[i] Clay Shirky. It’s Not Information Overload. It’s Filter Failure. Video of a presentation at Web 2.0 Expo NY, 2008. http://www.youtube.com/watch?v=LabqeJEOQyI
[ii] Eli Pariser: Beware online “filter bubbles.” Video on TED.com. http://www.ted.com/talks/eli_pariser_beware_online_filter_bubbles.html
[iii] Fred Hapgood. The Media Lab at 10. Wired, November 1995. http://www.wired.com/wired/archive/3.11/media.html?topic=&topic_set
Extracted from: Human as media. The emancipation of authorship – by Andrey Miroshnichenko