Coffee makes life beautiful

For someone who does not regularly drink coffee, I sure seem to praise it a lot, don't I? Well, speaking of praise, researchers now claim that coffee selectively improves processing speed for positive words. So if you had drunk coffee just before reading this post, you would have read five of its words faster. Just think how much you could accomplish in that time.

Existential risk in academia

A friend and I observed that academic machine learning people tend to dismiss the so-called "unfriendly AI" risks, with ourselves as strong examples. We are too focused on our own small incremental improvements on very domain-specific problems, she suggested, to seriously consider massive progress in general AI. Furthermore, we tend to look down on "non-professionals" who speak of AI without knowing much about it.

Even after observing this, I still find it hard to take the singularity / unfriendly AI talk seriously, despite my respect for the people involved, such as Jaan Tallinn, Nick Bostrom and others. The recent New York Times article by philosopher Huw Price, however, is one of the best popular pieces I have read on the subject, and without providing very deep arguments, it got me to take his claims seriously. Perhaps this is just another indication of my existing bias towards established academics.

In any case, I have no intention to stop doing research in probabilistic machine learning.

A summary of Eli Pariser’s "The Filter Bubble"

I recently read Eli Pariser's new book, The Filter Bubble. Unlike with most books I read, this one made me want to summarize it. Here is my summary. It is very condensed and will probably be most useful if you are already more or less familiar with the ideas in the book. The rest of this post is the summary, and it does not necessarily reflect my own opinions. Let me know if this kind of summary is useful to you.

Personalization is on the rise: Google's personalized search results marked the beginning of the era of personalization, and Google is not alone: Facebook, Yahoo, Microsoft and others all personalize as well. The big companies predict that practically all information will be personalized in the near future.

The filter bubble is the universe of information individualized for each one of us. It is completely individual, transparent and out of our control. The book aims to explore the implications of the filter bubble, mostly focusing on the risks, and to make them and the bubble itself visible.

Through decreased exposure to unfiltered information, the filter bubble reduces creativity and learning. It gives big companies sensitive information about us and the power to control what we are exposed to, thereby taking away some of our freedom. It also makes it easy to avoid involvement in global affairs.

Explicit intelligent agents have been around for over a decade but failed. They are still there, only hidden now. Amazon built the first big, successful recommendation system and showed the power of relevance. Then came Google: although it grew through its search engine, it started collecting massive amounts of personal information through free services that require logging in (which also helped it compete with Microsoft). This information is used to personalize search results as well as ads. Facebook grew with a different focus: social interactions. Both try to lock users in so they don't switch to competitors. Other, less visible but even bigger companies like Acxiom, BlueKai, TargusInfo and the Rubicon Project are silently collecting massive amounts of information about everyone. These internet giants allow more personalization than ever before, and the implications are important.

The power of classic newspapers is rapidly decreasing, and news is becoming fragmented. Traditional newspapers were supposed to inform the public neutrally; the filter bubble doesn't.

Disintermediation of news means removal of the middleman – the newspaper. Information is available in many other ways. Overpersonalization of news can exclude important stories. People trust random bloggers almost as much as they trust the New York Times, and trust is moving from centralized information sources to personalized ones. This reduces exposure to different opinions and reduces our active effort to acquire information.

With personalization, the best stories are those that appeal to the most people, and these tend to be superficial and target the lowest common denominator. Important news gets ignored because of its low appeal.

We are biased to believe what we see, and filters affect what we see. They reinforce what we already believe, and hide different opinions. Therefore they dramatically amplify our existing confirmation biases. They also reduce the confusion which is inherent in the world, and since such confusion drives us to seek new information, they make us lazily stick to our existing beliefs.

Filters promote narrow focus similar to stimulants like Adderall. They limit creativity by limiting both the exposure to new information and the motivation to seek it. The reduced variation reduces serendipity. Creativity requires some diversity. With filters we are not even aware of the existence of everything which is filtered away, and we don't know how narrow our vision becomes.

Filters wrongly assume that each of us has a single, fixed identity. Facebook forces us to present the same self to everyone.

Google personalizes based on behavior (searches, clicks). Facebook personalizes based on how we present ourselves (posts, comments). These private and public profiles are different and both are incomplete.

We have "should" interests, which we think we should hold, and "want" interests, which are what we actually consume. Good media mixes should with want, but click-based filters are biased to "want".

The personal information companies collect about us allows them to use personalized persuasion methods and exploit our weaknesses.

Filters lock us into local maxima by reinforcing even weak or arbitrary preferences through positive feedback loops ("identity cascades"). This actually affects our thinking: when the internet thinks we are dumb, we get dumber.

Overfitting algorithms can make us more discriminatory. Netflix may be an example of stereotyping. To prevent this, algorithms need to continuously test their models, but they rarely do. Algorithmic induction can lead to information determinism, where past clicks decide the future.
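To make the feedback loop concrete, here is a toy sketch of my own (not from the book) of a click-based filter with no exploration: a small, arbitrary initial preference gets reinforced until it dominates everything the user is shown. The topic names and numbers are purely illustrative.

```python
import random

# Hypothetical toy model: rank topics purely by past clicks.
topics = ["politics", "sports", "science", "celebrity"]
clicks = {t: 1 for t in topics}   # nearly uniform click history
clicks["celebrity"] = 2           # one small, arbitrary initial preference

def recommend(clicks, k=2):
    """Show the k most-clicked topics -- no exploration, no serendipity."""
    return sorted(clicks, key=clicks.get, reverse=True)[:k]

random.seed(0)
for step in range(50):
    shown = recommend(clicks)
    # The user can only click on what is shown, so shown topics gain clicks...
    clicks[random.choice(shown)] += 1

# ...and the tiny initial bias snowballs: past clicks decide the future.
print(recommend(clicks), clicks)
```

Running this, the two topics that happened to be shown first accumulate all further clicks while the rest stay frozen, which is the "information determinism" the book describes.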

Personalization gives dangerous power to a few companies, and that power can be exploited for political ends. Google tries to do no evil, but it certainly could if it wanted to.

Filters create the friendliness illusion by making unpleasant information invisible.

Personalized political ads fragment campaigns so that no one really knows what is being shown to whom, destroying public debate about important national and global affairs.

Postmaterialism refers to an emphasis on identity over basic needs. Combined with the filter bubble, it reduces overlaps in content and erodes shared experiences.

Democracy only works if citizens are capable of thinking beyond their narrow self-interest, and this requires a shared view of the world.

Programmers sometimes shun politics, preferring the world they can create in code over real-world influence. Google and Facebook rarely consider their moral role. Programmers and engineers today have a great deal of power to shape not just technology but also society. Building an informed and engaged citizenry requires engineers to go beyond not doing evil and actually do good.

The filter bubble now expands beyond our computers and into reality. Huge amounts of data can easily turn into artificial intelligence. Product ads enter blockbuster movies and bestselling books. Smart devices and augmented reality allow more personalization of reality, and filtering now extends to people too. Machine-driven systems take away some of our freedom and control, and they don't work perfectly.

We need to avoid falling into the bubble trap. Opt out of personalized ads, erase cookies, and use sites that give more control and transparency over filtering, like Twitter and unlike Facebook. Become algorithm literate. Companies should also help by making their algorithms more transparent and by encouraging political engagement and citizenship. An "important" button could be added alongside "like" so that people themselves can collaboratively filter what's important. Systems should also promote serendipity by exposing users to content outside their narrow interests.

Governments can also help through regulation that gives users more control over the data collected about them and how it is used. Personal data should be treated as personal property.

It’s easy to not notice the progress of science

Earlier today I spoke with a friend about my excitement with the progress of science, how I keep reading about amazing achievements that somehow seem like no big deal. He asked for some examples, since he did not share my feeling that scientific progress is so quick and significant. This made me decide to start posting more about cool stuff I read about, even if I have no special thoughts about it.

Here is the first example, just a few hours after that conversation. In this paper, the researchers identified human microRNAs that induce heart regeneration, and used them in living mice to achieve almost complete recovery following myocardial infarction.

Future discounting and language

An interesting paper suggests that speakers of languages in which the future is clearly separated from the present show less future-oriented behavior, as measured through health-related behaviors, saving, etc. This is a beautiful result, if it is indeed real. The language classifications were taken from the European Science Foundation's Typology of Languages in Europe project, so I do not suspect unconscious data doctoring. There could be confounders, but nothing too obvious comes to mind. The result seems legitimate to me, and it is another interesting demonstration of the strong connections between language and thought.

End of History Illusion: underestimating future change

This fun Science article suggests that people underestimate how much they are going to change in the future. The authors compared people's responses at different ages about how much they have changed in the last ten years versus how much they expect to change in the next ten years (using established personality and other measures, and comparing present responses with the predicted or postdicted ones). I found this interesting because I am always looking to keep growing and developing, and I fear the slowing down of change. This paper suggests that the fear is natural: I am indeed prone to underestimating my future change, while I probably still have a decent amount of growth ahead of me.