Novacene: the future of humanity is digital?

As it says on the cover of the book, James Lovelock may well be “the great scientific visionary of our age”. He is probably best known for the Gaia Hypothesis, but he has made several other major contributions. While working for NASA, he was the first to propose looking for chemical biomarkers in the atmospheres of other planets as a sign of extraterrestrial life, a method that has since been used extensively and has led to a number of interesting results, some of them very recent. He has argued for climate engineering methods to fight global warming, and he is a strong supporter of nuclear energy, by far the safest and least polluting form of energy currently available.

Lovelock has been an outspoken environmentalist, a strong voice against global warming, and the creator of the Gaia Hypothesis, the idea that all organisms on Earth are part of a synergistic, self-regulating system that seeks to maintain the conditions for life. The ideas he puts forward in this book are, therefore, surprising. To him, we are leaving the Anthropocene (a proposed geological epoch, characterized by the profound effect of humans on the Earth’s environment, still not formally recognized by mainstream science) and entering the Novacene, an epoch in which digital intelligence will become the most important form of life on Earth and in near space.

Although it may seem like a position inconsistent with his previous arguments about the nature of life on Earth, I find the argument for the Novacene era convincing and coherent. Again, Lovelock appears as a visionary, extrapolating to its ultimate conclusion the trend of technological development that started with the industrial revolution.

As he says, “The intelligence that launches the age that follows the Anthropocene will not be human; it will be something wholly different from anything we can now conceive.”

To me, his argument that artificial intelligence, digital intelligence, will be our future, our offspring, is convincing. It will be as different from us as we are from the first animals that appeared hundreds of millions of years ago, which were themselves very different from the cells that started life on Earth. Four billion years after the first lifeforms appeared, life will finally create a new physical substrate, one that does not depend on DNA, water, or an Earth-like environment and is well suited to space.

Human Compatible: AI and the Problem of Control

Stuart Russell, one of the best-known researchers in Artificial Intelligence and author of the best-selling textbook Artificial Intelligence: A Modern Approach, addresses, in his most recent book, what is probably one of the most interesting open questions in science and technology: can we control the artificially intelligent systems that will be created in the decades to come?

In Human Compatible: AI and the Problem of Control, Russell formulates and answers a very important question: what are the consequences if we succeed in creating a truly intelligent machine?

The question brings with it many other questions, of course. Will intelligent machines be dangerous to humanity? Will they take over the world? Could we control machines that are more intelligent than ourselves? Many writers and scientists, like Nick Bostrom, Stephen Hawking, Elon Musk, Sam Harris, and Max Tegmark, have raised these questions, several of them claiming that superintelligent machines could be around the corner and become extremely dangerous to humanity.

However, most AI researchers have dismissed these questions as irrelevant, concentrated as they are on the development of specific techniques and well aware that Artificial General Intelligence is far away, if it is achievable at all. Andrew Ng, another famous AI researcher, said that worrying about superintelligent machines is like worrying about the overpopulation of Mars:

There could be a race of killer robots in the far future, but I don’t work on not turning AI evil today for the same reason I don’t worry about the problem of overpopulation on the planet Mars.

Another famous Machine Learning researcher, Pedro Domingos, also sets these issues aside in his bestselling book about Machine Learning, the driving force behind modern AI, The Master Algorithm, concentrating instead on concrete technologies and applications. In fact, he often says that he is more worried about dumb machines than about superintelligent ones.

Stuart Russell’s book is different, making the point that we may indeed lose control of such systems, even though he does not believe they could harm us out of malice or intention. In fact, Russell is quite dismissive of the possibility that machines could one day become truly intelligent and conscious, a position I personally find very brave, 70 years after Alan Turing argued exactly the opposite.

Yet, Russell believes we may be in trouble if sufficiently intelligent and powerful machines have objectives that are not well aligned with the real objectives of their designers. His point is that a poorly conceived AI system, one that aims to optimize a badly specified objective function, can lead to bad results and even tragedy if it controls critical facilities. One well-known example is Bostrom’s paperclip problem, in which an AI system designed to maximize the production of paperclips turns the whole planet into a paperclip factory, eliminating humanity in the process. As in the cases Russell fears, the problem comes not from a machine that wants to kill all humans, but from a machine that was designed with the wrong objectives and will not stop until it achieves them.

To avoid this risk of misalignment between human and machine objectives, Russell proposes designing provably beneficial AI systems, based on three principles that can be summarized as:

  • Aim to maximize the realization of human preferences
  • Assume uncertainty about these preferences
  • Learn these preferences from human behavior
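A toy sketch can make the flavor of these principles concrete. This is my own illustration, not Russell’s actual formalism: a hypothetical assistant that is uncertain which of two candidate reward functions its human holds (principle 2), updates a Bayesian belief from observed human choices (principle 3), and then acts to maximize the expected human preference under that belief (principle 1). All names and numbers are made up for the example.

```python
# Toy model: an assistant uncertain about which preferences the human holds.
ACTIONS = ["tea", "coffee"]

# Two hypothetical models of the human's reward function.
HYPOTHESES = {
    "likes_tea":    {"tea": 1.0, "coffee": 0.0},
    "likes_coffee": {"tea": 0.0, "coffee": 1.0},
}

def update_belief(belief, observed_choice, noise=0.1):
    """Bayesian update: assume the human picks their preferred action
    with probability 1 - noise, and the other action otherwise."""
    posterior = {}
    for hypothesis, rewards in HYPOTHESES.items():
        preferred = max(rewards, key=rewards.get)
        likelihood = (1 - noise) if observed_choice == preferred else noise
        posterior[hypothesis] = belief[hypothesis] * likelihood
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

def best_action(belief):
    """Pick the action with the highest expected reward over hypotheses."""
    def expected_reward(action):
        return sum(belief[h] * HYPOTHESES[h][action] for h in HYPOTHESES)
    return max(ACTIONS, key=expected_reward)

# Start maximally uncertain, then learn from observed human behavior.
belief = {"likes_tea": 0.5, "likes_coffee": 0.5}
for choice in ["tea", "tea", "coffee", "tea"]:
    belief = update_belief(belief, choice)

print(best_action(belief))  # the assistant defers to the inferred preference
```

The point of the sketch is the shape of the argument, not the details: because the assistant never commits to a fixed objective, new evidence about what the human actually wants keeps changing its behavior, which is exactly the property a fixed, badly specified objective lacks.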

Although I am not fully aligned with Russell in all the positions he defends in this book, it makes for interesting reading, coming from someone who is a knowledgeable AI researcher and cares about the problems of alignment and control of AI systems.

Mindscape, a must-have podcast by Sean Carroll

Sean Carroll’s Mindscape podcast addresses topics as diverse as the interests of the author, including (but not limited to) physics, biology, philosophy, complexity, intelligence, and consciousness. Carroll has interviewed, in-depth, a large number of very interesting scientists, philosophers, writers, and thinkers, who come to talk about some of the most central open topics in science and philosophy.

Among many others, Daniel Dennett discusses minds and patterns; Max Tegmark, physics, simulation, and the multiverse; António Damásio, feelings, emotions, and evolution; Patricia Churchland, conscience and morality; and David Chalmers, the hard problem of consciousness.

In all the interviews, Sean Carroll conducts the conversation in an easy, engaging manner, never imposing his own views, not even on the more controversial topics where the interviewees hold diametrically opposed opinions.

If you are into science and into podcasts, you cannot miss this one.

Enlightenment Now: The case for reason, science, humanism and progress

Steven Pinker’s latest book, Enlightenment Now, deserves high praise and careful attention, in a world where reason and science are increasingly under threat. Bill Gates called it “my new favorite book of all time”, which may be somewhat of an exaggeration. Still, the book is definitely a must-read, and should figure in the top 10 of any reader who believes that science plays an important role in the development of humanity.

Pinker’s main point is that the values of the Enlightenment, which he lists as reason, science, humanism, and progress, have not only enabled humanity to evolve immensely since they were adopted, somewhere in the 18th century, but are also our best hope for the future. He argues that these values have improved our lives immensely over the last two and a half centuries and will lead to vastly improved lives in the future. “Dare to understand”, the cry for reason also invoked by David Deutsch in The Beginning of Infinity, is the key argument Pinker makes in this book. The critical use of reason leads to understanding, and understanding leads to progress, unlike belief in myths, religions, miracles, and signs from God(s). Pinker’s demolition of all the values not based on the critical use of reason is complete and utterly convincing. Do not read this book if, at some level, you believe in things that cannot be explained by reason.

To be fair, a large part of the book is dedicated to showing that progress has indeed been remarkable since the 18th century, when reason and science took hold and replaced myths and religions as the major references for the development of nations and societies. No fewer than 17 chapters are dedicated to describing the many ways humanity has progressed in the last two and a half centuries, in fields as diverse as health, democracy, wealth, peace and, yes, even sustainability. Pinker may come across as an incorrigible optimist, describing a world so much better than that of the past, so at odds with the popular current view that everything is going to the dogs. However, the evidence he presents is compelling, well documented, and discussed at length. Counter-arguments against the idea that progress is real and unstoppable are analyzed in depth and disposed of with style and elegance.

But the book is not only about past progress. In fact, it is mostly about the importance of viewing the Enlightenment values as the only ones that will safeguard a future for humanity. If we want a future, we need to preserve them, in a world where fake news, false science, and radical politics are endangering progress, democracy, and human rights.

It is comforting to find a book that so powerfully defends science, reason, and humanistic values against the claims that only a return to the ways of the past will save humanity from certain doom. Definitely a must-read if you believe in, and care for, Humanity.

Our Final Invention: Artificial Intelligence and the end of the human era

As regards the state of the art in Artificial Intelligence, and the speed at which it will develop, James Barrat is extremely optimistic. The author of Our Final Invention is fully convinced that existing systems are much more advanced than we give them credit for, and that AI researchers will create Artificial General Intelligence (AGI) much sooner than we expect.

As regards the consequences of AGI, however, Barrat is uncompromisingly pessimistic. He believes, and argues at length, that AGI will bring with it the demise of the human race and that we should stop messing with advanced AI altogether.

I found the arguments presented for both positions rather unconvincing. His argument that AGI will most likely be developed within the next decade or so rests on rather high-level considerations and on conversations with a number of scientists, researchers, and entrepreneurs from the field, who were, needless to say, picked from among those most sympathetic to his ideas. As for the arguments that AGI will be not only dangerous but ultimately fatal for humanity, they are borrowed, with minor changes, from the standard superintelligence (Bostrom) and intelligence explosion (I. J. Good) ideas.

From Watson’s performance in Jeopardy! and from the small victories of artificial neural networks in perception tasks, Barrat concludes, without much further argument, that AGI is around the corner and that it will be very, very dangerous. The book was written before the recent successes achieved by DeepMind and others, which leads me to believe that, if written now, his conclusions would be even more drastic.

Even though there is relatively little new material here, a few stories and descriptions are interesting. Barrat makes extensive use of his conversations with the likes of Omohundro, Yudkowsky, Vassar, and Kurzweil, and some stories are very entertaining, even though they read a bit like science fiction. Altogether, the book makes for interesting, if somewhat unconvincing, reading.

Stuart Russell and Sam Harris on The Dawn of Artificial Intelligence

In one of the latest episodes of his interesting podcast, Waking Up, Sam Harris discusses the future of Artificial Intelligence (AI) with Stuart Russell.

Stuart Russell is one of the foremost world authorities on AI, and the author of the most widely used textbook on the subject, Artificial Intelligence: A Modern Approach. Interestingly, most of the (very interesting) conversation focuses not so much on the potential of AI as on the potential dangers of the technology.

Many AI researchers have dismissed offhand the worries many people have expressed over the possibility of runaway Artificial Intelligence. In fact, most active researchers know very well that most of their time is spent worrying about the convergence of algorithms, the inefficiency of training methods, or difficult searches for the right architecture for some narrow problem. AI researchers spend no time at all worrying about the possibility that the systems they are developing will suddenly become too intelligent and a danger to humanity.

On the other hand, famous philosophers, scientists and entrepreneurs, such as Elon Musk, Richard Dawkins, Bill Gates, and Nick Bostrom have been very vocal about the possibility that man-made AI systems may one day run amok and become a danger to humanity.

From this duality, one is led to believe that only people outside the field really worry about the possibility of dangerous superintelligences. People inside the field pay little or no attention to that possibility and, in many cases, consider these worries baseless and misinformed.

That is why this podcast episode, with the participation of Stuart Russell, is interesting and well worth hearing. Russell cannot be accused of being an outsider to the field of AI, and yet his latest interests focus on the problem of making sure that future AIs will have objectives closely aligned with those of the human race.