The myth about rubbish content on the Internet. How filters create a customized “The Daily Me” for everyone.

True, in terms of how information enters it, the Internet could be compared to a rubbish dump: everything gets in. But far from everything circulates and finds an end-user. In reality, nobody uses rubbish. There are no restrictions on entering the Internet, but there are rather serious – albeit not all that obvious – restrictions on the path information takes from the Internet to our brains. Content is now filtered not before publication, but at the point of distribution.

There are three bastions guarding our mental health – three echelons of filters that sift the contents of the Internet for us:

1) personal settings;

2) the Viral Editor;

3) algorithms of relevance.

Personal settings. By adding particular web links to his browser bookmarks, an Internet user creates his own map of the web, relying in future on the routes he has already tried out.

The selection of friends in our friend feed produces much the same result. A person chooses his friends based on shared views and common history – that is, people he has personal experience of. He may not be close with all of them, but statistically they are much closer to him than a random Internet stranger. Further, the user observes what the friends he has selected read and what they are interested in. This is how the filter of recommendation works, creating for each of us a web of our own correspondents who “know” what we are interested in, because they themselves are interested in pretty much the same things.

Our browser bookmarks and the friends in our friend feed are not always as good and reliable as we would like, but they are a very serious form of screening: one that decreases entropy by several orders of magnitude, restricts output and increases relevance. For each of us, browser bookmarks and friend feed settings intercept 99.99999% of Internet content – or, to be more precise, of Internet rubbish.

The Viral Editor. Previously it was the newspaper or television editor who defined the news for us, relying on his own understanding of the significance of events. Today, we get the news via a web of viral distribution on the Internet.

As a result of the cumulative efforts of users – micro-editors united in a neuronal web – a sort of artificial intelligence emerges, within which it is people themselves who are the processing chips. They do not merely transmit information to each other; they “improve” it in the process, according to their personal understanding of “what is interesting.” The sum total of individual understandings of what is interesting, not as voiced by a journalist, but directly amalgamated through participants’ immediate relations, becomes the general opinion of my social group or even public opinion.

This does not mean that each user performs his private “editing” well – not at all. That being said, each individual tries as hard as he can: he needs a response, and getting one takes effort. Some are better at this than others, some worse, but personal effort yields a huge collective result. The thirst for response, however weak in each individual case, creates systemic positive selection. It compels an entire multitude of users to seek, obtain and distribute whatever might be of interest and might attract attention. Therefore, whatever is of interest and significance will inevitably be selected, reinforced by the Viral Editor and conveyed to precisely those for whom it has relevance. And they, in turn, will communicate it further.

The Viral Editor’s filter collaborates with the settings of my friend feed and my browser. Information that has not captivated my friends will not reach me. It will not pass my friends’ screening, nor the screening of the friends of my friends, who are carrying out the work of the Viral Editor specifically for me.
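The Viral Editor’s selection mechanism can be sketched in miniature. The following is a toy illustration only – the social graph, interest tags and names are entirely hypothetical – but it shows the core idea: a message spreads through the network only along chains of users who find it interesting enough to relay, and dies wherever it meets indifference.

```python
# Toy sketch of the Viral Editor: each user relays a message to their
# friends only if its topic matches their own interests. The graph,
# users and interest tags below are hypothetical illustrations.

from collections import deque

friends = {
    "anna": ["boris", "carol"],
    "boris": ["dmitry"],
    "carol": ["dmitry", "erik"],
    "dmitry": [],
    "erik": [],
}
interests = {
    "anna": {"politics"}, "boris": {"sport"},
    "carol": {"politics"}, "dmitry": {"politics"}, "erik": {"cooking"},
}

def spread(source: str, topic: str) -> set[str]:
    """Return the set of users a message on `topic` reaches via interested relayers."""
    reached, queue = {source}, deque([source])
    while queue:
        user = queue.popleft()
        if topic not in interests[user]:
            continue  # an uninterested user receives the message but does not relay it
        for friend in friends[user]:
            if friend not in reached:
                reached.add(friend)
                queue.append(friend)
    return reached

reached = spread("anna", "politics")  # the political message travels via carol and dmitry
dead_end = spread("anna", "sport")    # a topic anna ignores never leaves her
```

In this sketch the “sport” message reaches no one, because the very first micro-editor declines to pass it on – a crude model of how interest, aggregated across many individual relays, performs the selection a newspaper editor once did.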

The algorithm of relevance. Algorithms in search engines (such as Google) and social networks (such as Facebook) analyse which websites a user visits, where he enters the Internet from, what he looks for, who he communicates with, what he clicks on, what he “likes.” Using such information, the algorithm decides what might interest a person and gives him the most relevant answers to his search queries (Google) or shows messages from his friends that are most interesting to him (Facebook).
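The logic of such personalization can be illustrated with a deliberately simplified sketch. Nothing here reflects any real engine’s ranking – the documents, interest profiles and scoring rule are all invented for illustration – but it shows the principle: the same query, ranked against two different user histories, yields two different orderings.

```python
# Toy sketch of a relevance algorithm: identical queries return different
# orderings depending on the user's interest profile. The documents,
# tags and profiles are hypothetical illustrations, not a real engine.

DOCUMENTS = {
    "Egypt travel guide: pyramids and beaches": {"travel", "tourism"},
    "Egypt protests: the Arab Spring timeline": {"politics", "news"},
    "Egyptian cuisine at home": {"cooking", "travel"},
}

def rank(query: str, user_interests: set[str]) -> list[str]:
    """Order documents matching the query by overlap with the user's interests."""
    matches = [doc for doc in DOCUMENTS if query.lower() in doc.lower()]
    # More shared interest tags -> higher rank.
    return sorted(matches,
                  key=lambda doc: len(DOCUMENTS[doc] & user_interests),
                  reverse=True)

tourist_results = rank("egypt", {"travel", "tourism"})   # travel guide ranks first
activist_results = rank("egypt", {"politics", "news"})   # protest coverage ranks first
```

The candidate set is identical in both calls; only the scoring against accumulated personal data differs – which is precisely the behaviour Pariser’s “Egypt” experiment, described below, made visible.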

Eli Pariser, author of The Filter Bubble, tells the following story. He asked two of his friends in different US states to enter the word “Egypt” in Google and send him the search results. And what was the outcome? One of his friends was given a collection of tourist links, and the other, links about the Arab revolutions. And all because his friends had different interests and visited different websites. The very same query, according to Pariser, was therefore interpreted by the algorithm in different ways.[i]

This is yet another prototype of artificial intelligence, of the mechanical variety. This robotic intelligence already evaluates us and makes decisions about what we need.

A similar algorithm works on Facebook. Analysing a user’s past preferences, the algorithm lines up its output priorities. For example, Pariser observed that Facebook never showed him messages from a friend whose political views were the opposite of his own. The algorithm decided that since Pariser never liked such posts and never commented on them, they must be of no interest to him. And there’s no point in littering a feed; with the superfluous removed, the result is beautiful!
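The filtering behaviour Pariser describes can be sketched as a simple engagement threshold. The posts, authors and threshold below are hypothetical – real feed ranking is far more elaborate – but the sketch captures the mechanism: content from authors the user never engages with silently disappears, without ever being explicitly blocked.

```python
# Toy sketch of engagement-based feed filtering: posts from authors the
# user never likes or comments on are dropped from the feed. The posts,
# engagement counts and threshold are hypothetical illustrations.

posts = [
    {"author": "ally", "text": "We won the debate!"},
    {"author": "opponent", "text": "Our side won the debate!"},
    {"author": "ally", "text": "Rally this weekend"},
]

# How often the user has engaged (liked/commented) with each author's posts.
engagement = {"ally": 12, "opponent": 0}

def build_feed(posts, engagement, threshold=1):
    """Keep only posts by authors whose past engagement meets the threshold."""
    return [p for p in posts if engagement.get(p["author"], 0) >= threshold]

feed = build_feed(posts, engagement)
# The opponent's post never appears, though it was never explicitly hidden by the user.
```

Note that the user made no decision here at all: the disappearance of the dissenting voice is a by-product of the user’s own past behaviour, fed back as a filter.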

The algorithm thus adjusts the output of a feed according to our interests. This is very convenient, as it truncates a huge volume of information that is irrelevant to us. Another task is resolved at the same time: this kind of algorithm increases advertising efficiency, as it means we are shown only advertising we have a high chance of being interested in. Have you ever noticed that if you look up mouthwash in a search engine, different websites will display advertising for dental clinics the following week? That is the filter bubble in action.

***

Personal browser settings, the Viral Editor and relevance algorithms create a three-layer filter that restricts the variety of Internet content to a digestible level of information that is thematically suitable to you.

The settings of the web environment (bookmarks and friend feeds), as created by the user, work as a personal content filter.

The Viral Editor can be seen as an interpersonal filter of content.

And, finally, relevance algorithms are a machine content filter.

All together, they form a fairly harmonious and interactive system that turns the raw content of the Internet into our personal “The Daily Me” (to use Nicholas Negroponte’s term[ii]) – a special medium within which information is selected personally for you, while taking into account its common significance.

______________________________________________

[i] Eli Pariser, “Beware online ‘filter bubbles’,” TED talk, TED.com.

[ii] Fred Hapgood, “The Media Lab at 10,” Wired, November 1995.

 

Andrey Miroshnichenko

Extracted from: Human as media. The emancipation of authorship – by Andrey Miroshnichenko
Available now on Amazon


