Is AI the worst mistake in human history?

In an interesting article, John Battelle added some fuel to the fire in the ongoing discussion about the promises and dangers of Artificial Intelligence technology.

Physicists Stephen Hawking, Max Tegmark and Frank Wilczek, together with influential AI researcher Stuart Russell, have stated, in a widely cited article, that “Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.” Elon Musk, billionaire and founder of several well-known companies, including SpaceX, PayPal, and Tesla Motors, also joined the fray by stating that “we should be very careful about artificial intelligence” and that if he had to guess “what our biggest existential threat is, it’s probably that.”

Some people dismiss these worries. Andrew Ng, chief scientist at Baidu Research in Silicon Valley and a professor at Stanford, stated that “Fearing a rise of killer robots is like worrying about overpopulation on Mars”: it is not impossible, but it should not be a major worry.

Others point out that, just as the first industrial revolution gave us cheap physical labor, freeing people to do other, more interesting jobs, the AI revolution will give us cheaper intellectual labor, freeing people to do more creative jobs; anything beyond that, they argue, is either wishful thinking or paranoid worry. That may indeed be the case, but some, myself included, worry that this time it may be different.

Previous articles in this blog have also addressed this topic, including March of the Machines, a reference to a recent special edition of The Economist, and a brief review of Superintelligence, Nick Bostrom’s book about the dangers of AI.

When so many people talk about the dangers of a technology, the Valley listens. One of the most notable responses, so far, has been OpenAI, a non-profit AI company whose goal is “to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”

The idea is that, by being free from the need to generate income, OpenAI can more effectively pursue significant advances in Artificial Intelligence and make them open and usable by everyone. Also, by making sure that AI research is kept in the open, OpenAI hopes to reduce the risk of a takeover by a hostile AI.
