Existential risk in academia

A friend and I observed that academic machine learning people tend to dismiss the so-called "unfriendly AI" risks, with ourselves as strong examples. We are too focused on our own small incremental improvements on very domain-specific problems, she suggested, to seriously consider massive progress in general AI. Furthermore, we tend to look down on "non-professionals" who speak of AI without knowing much about it.

Even after observing this, I still find it hard to take the singularity / unfriendly AI talk seriously, despite my respect for the people involved, such as Jaan Tallinn, Nick Bostrom and others. The recent New York Times article by philosopher Huw Price, however, is one of the best popular pieces I have read on the subject, and without providing very deep arguments, it got me to take his claims seriously. Perhaps this is just another indication of my existing bias towards established academics.

In any case, I have no intention to stop doing research in probabilistic machine learning.
