Will a superintelligent machine be the last thing we invent?

From the very beginning, computer scientists have aimed at creating machines as intelligent as humans. However, there is no reason to believe that machine intelligence will stop at that level. Once machines have the ability to design the next versions of artificial intelligences (AIs), they may very rapidly become much more intelligent than humans.

This idea is not new. In a 1951 lecture entitled “Intelligent Machinery, A Heretical Theory”, Alan Turing said that “...it seems probable that once the machine thinking method has started, it would not take long to outstrip our feeble powers… At some stage therefore we should have to expect the machines to take control…”


The worry that superintelligent machines may one day take charge has concerned an increasing number of researchers, and has been extensively addressed in a recent book, Superintelligence, already covered in this blog.

Luke Muehlhauser, the Executive Director of the Machine Intelligence Research Institute (MIRI), in Berkeley, recently published a paper with Nick Bostrom, from the Future of Humanity Institute at Oxford, on the need to guarantee that advanced AIs will be friendly to the human species.

Muehlhauser and Bostrom argue that “Humans will not always be the most intelligent agents on Earth, the ones steering the future” and ask “What will happen to us when we no longer play that role, and how can we prepare for this transition?”

In an interesting interview, which appeared in io9, Muehlhauser states that he was drawn to this problem when he became familiar with the work of Irving J. Good, a British mathematician who worked with Alan Turing at Bletchley Park. The authors argue that further research on this problem, both strategic and technical, is required to avoid the risk that a superintelligent system is created before we fully understand the consequences. Furthermore, they believe a much higher level of awareness is necessary, in general, in order to align research agendas with safety requirements. Their point is that a superintelligent system would be the most dangerous weapon ever developed by humanity.

All of this creates the risk that a superintelligent machine may be the last thing invented by humanity, either because humanity becomes extinct, or because our intellects would be so vastly surpassed by AIs that they would make all the significant contributions.

 
