Stuart Russell and Sam Harris on The Dawn of Artificial Intelligence

In a recent episode of his interesting podcast, Waking Up, Sam Harris discusses the future of Artificial Intelligence (AI) with Stuart Russell.

Stuart Russell is one of the foremost world authorities on AI, and co-author (with Peter Norvig) of the most widely used textbook on the subject, Artificial Intelligence: A Modern Approach. Interestingly, most of the conversation focuses not so much on the potential of AI, but on the potential dangers of the technology.

Many AI researchers have dismissed offhand the worries many people have expressed over the possibility of runaway Artificial Intelligence. In practice, active researchers spend most of their time worrying about the convergence of algorithms, the inefficiency of training methods, or the difficult search for the right architecture for some narrow problem. They spend no time at all worrying that the systems they are developing will suddenly become too intelligent and turn into a danger to humanity.

On the other hand, famous philosophers, scientists and entrepreneurs, such as Elon Musk, Richard Dawkins, Bill Gates, and Nick Bostrom have been very vocal about the possibility that man-made AI systems may one day run amok and become a danger to humanity.

From this duality one could conclude that only people outside the field really worry about the possibility of dangerous super-intelligences. People inside the field pay little or no attention to that possibility and, in many cases, consider these worries baseless and misinformed.

That is why this podcast, with the participation of Stuart Russell, is interesting and well worth hearing. Russell cannot be accused of being an outsider to the field of AI, and yet his latest interests focus on the problem of making sure that future AIs will have objectives closely aligned with those of the human race.

The Great Filter: are we rare, are we first, or are we doomed?

Fermi’s Paradox (the fact that we have never detected any sign of alien life even though, in principle, life could be relatively common in the universe) has already been discussed in this blog, as new results come in about the rarity of life-bearing planets, the discovery of new Earth-like planets, or even the detection of possible signs of aliens.

There are a number of possible explanations for Fermi’s Paradox and one of them is exactly that sufficiently advanced civilizations could retreat into their own planets, or star systems, exploring the vastness of the nano-world, becoming digital minds.

A very interesting concept related to Fermi’s Paradox is the Great Filter theory, which states, basically, that if no intelligent civilizations exist in the galaxy, then we, as a civilization, are either rare, first, or doomed. As this post very clearly describes, one of these three explanations has to be true if no other civilizations exist.

The Great Filter theory is based on Robin Hanson’s argument that the failure to find any extraterrestrial civilizations in the observable universe must be explained by an extremely unlikely event somewhere in the sequence of steps that leads from planet formation to the creation of a technological civilization, an event he called the Great Filter.
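Hanson’s argument can be illustrated with a toy calculation (every number below is invented for illustration, not a real estimate): if reaching a technological civilization requires passing a chain of roughly independent steps, the expected number of civilizations is the number of candidate planets times the product of the per-step probabilities, and a single extremely unlikely step is enough to drive that expectation close to zero.

```python
# Toy illustration of the Great Filter argument.
# All numbers are invented for illustration; none are real estimates.

candidate_planets = 1e11      # assumed number of candidate planets in the galaxy

easy_steps = [0.5] * 8        # eight fairly likely transitions on the road to civilization
great_filter = 1e-12          # one extremely unlikely step: the Great Filter

expected = candidate_planets * great_filter
for p in easy_steps:
    expected *= p

print(f"Expected technological civilizations: {expected:.6f}")
# Even with 100 billion candidate planets, a single very hard step
# leaves essentially no civilizations besides (at most) our own.
```

The point of the sketch is that it does not matter much where in the chain the hard step sits; as long as one step is sufficiently improbable, the product collapses, which is exactly why an empty sky forces the rare/first/doomed trichotomy.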

This Great Filter may be behind us, in the process that led from inorganic compounds to humans. That would mean that we, intelligent beings, are rare in the universe. Maybe the conditions that lead to life are extremely rare, either due to the instability of planetary systems, or to the low probability that life gets started in the first place, or to some other phenomenon that we were lucky enough to overcome.

It can also happen that the conditions that make life possible are relatively recent in the universe. That would mean that conditions for life only became common in the universe (or the galaxy) in the last few billion years. In that case, we may not be rare, but ours would be the first, or among the first, planets to develop intelligent life.

The final explanation is that the Great Filter is not behind us, but ahead of us. That would mean that many technological civilizations develop but, in the end, they all collapse, due to unknown factors (though we can guess at some). In this case, we are doomed, like all other civilizations that presumably existed before us.

There is, of course, another group of explanations, which states that advanced civilizations do exist in the galaxy, but we are simply too dumb to contact or to observe them. In fact, many people believe that we should not even be trying to contact them, by broadcasting radio signals into space advertising that we are here. It may simply be too dangerous.


Image by the Bureau of Land Management, available at Wikimedia Commons