Novacene: the future of humanity is digital?

As it says on the cover of the book, James Lovelock may well be “the great scientific visionary of our age”. He is probably best known for the Gaia Hypothesis, but he has made several other major contributions. While working for NASA, he was the first to propose looking for chemical biomarkers in the atmospheres of other planets as a sign of extraterrestrial life, a method that has since been used extensively and has led to a number of interesting results, some of them very recent. He has argued for climate engineering methods to fight global warming, and he is a strong supporter of nuclear energy, by far the safest and least polluting form of energy currently available.

Lovelock has been an outspoken environmentalist, a strong voice against global warming, and the creator of the Gaia Hypothesis, the idea that all organisms on Earth are part of a synergistic, self-regulating system that seeks to maintain the conditions for life on Earth. The ideas he puts forward in this book are, therefore, surprising. To him, we are leaving the Anthropocene (a geological epoch characterized by the profound effect of humans on the Earth's environment, still not formally recognized as a separate epoch by mainstream science) and entering the Novacene, an epoch in which digital intelligence will become the most important form of life on Earth and in near space.

Although it may seem like a position inconsistent with his previous arguments about the nature of life on Earth, I find the argument for the Novacene era convincing and coherent. Again, Lovelock appears as a visionary, extrapolating to its ultimate conclusion the trend of technological development that started with the industrial revolution.

As he says, “The intelligence that launches the age that follows the Anthropocene will not be human; it will be something wholly different from anything we can now conceive.”

To me, his argument that artificial intelligence, digital intelligence, will be our future, our offspring, is convincing. It will be as different from us as we are from the first animals that appeared hundreds of millions of years ago, which were themselves very different from the cells that started life on Earth. Four billion years after the first lifeforms appeared on Earth, life will finally create a new physical substrate, one that does not depend on DNA, water, or an Earth-like environment and that is suited to space.

Human Compatible: AI and the Problem of Control

Stuart Russell, one of the best-known researchers in Artificial Intelligence and author of the best-selling textbook Artificial Intelligence: A Modern Approach, addresses, in his most recent book, what is probably one of the most interesting open questions in science and technology: can we control the artificially intelligent systems that will be created in the decades to come?

In Human Compatible: AI and the Problem of Control, Russell formulates and answers a very important question: what are the consequences if we succeed in creating a truly intelligent machine?

The question brings with it many other questions, of course. Will intelligent machines be dangerous to humanity? Will they take over the world? Could we control machines that are more intelligent than ourselves? Many writers and scientists, like Nick Bostrom, Stephen Hawking, Elon Musk, Sam Harris, and Max Tegmark, have raised these questions, several of them claiming that superintelligent machines could be around the corner and become extremely dangerous to humanity.

However, most AI researchers have dismissed these questions as irrelevant, concentrated as they are on the development of specific techniques and well aware that Artificial General Intelligence is far away, if it is at all achievable. Andrew Ng, another famous AI researcher, said that worrying about superintelligent machines is like worrying about the overpopulation of Mars:

There could be a race of killer robots in the far future, but I don’t work on not turning AI evil today for the same reason I don’t worry about the problem of overpopulation on the planet Mars

Another famous Machine Learning researcher, Pedro Domingos, in his bestselling book The Master Algorithm, about Machine Learning, the driving force behind modern AI, also ignores these issues, concentrating on concrete technologies and applications. In fact, he often says that he is more worried about dumb machines than about superintelligent ones.

Stuart Russell’s book is different, making the point that we may indeed lose control of such systems, even though he does not believe they could harm us out of malice or intention. In fact, Russell is quite dismissive of the possibility that machines could one day become truly intelligent and conscious, a position I personally find very brave, 70 years after Alan Turing argued exactly the opposite.

Yet, Russell believes we may be in trouble if sufficiently intelligent and powerful machines have objectives that are not well aligned with the real objectives of their designers. His point is that a poorly conceived AI system, one that aims at optimizing a badly specified function, can lead to bad results and even tragedy if such a system controls critical facilities. One well-known example is Bostrom’s paperclip problem, where an AI system designed to maximize the production of paperclips turns the whole planet into a paperclip production factory, eliminating humanity in the process. As in the cases that Russell fears, the problem comes not from a machine that wants to kill all humans, but from a machine that was designed with the wrong objectives in mind and will not stop until it achieves them.
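A deliberately silly toy in Python (my own illustration, not from the book) makes the failure mode concrete: if the objective the designer writes down counts only paperclips, the optimizer has no reason to spare anything else.

```python
# Toy illustration of a badly specified objective: the planner is told only
# to maximize paperclips, so it converts every resource, farmland and cities included.

RESOURCES = {"scrap metal": 10, "factories": 3, "farmland": 5, "cities": 2}

def paperclip_objective(allocation):
    # The designer wrote down "more paperclips is better" and nothing else.
    return sum(allocation.values())

def naive_planner(resources):
    # With nothing in the objective to protect, the optimum is to convert everything.
    allocation = dict(resources)
    return allocation, paperclip_objective(allocation)

allocation, score = naive_planner(RESOURCES)
print(allocation)  # everything, including farmland and cities, is converted
print(score)       # 20: the only measure of success the machine was given
```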

To avoid that risk of misalignment between human and machine objectives, Russell proposes designing provably beneficial AI systems, based on three principles that can be summarized as follows (a toy sketch appears after the list):

  • Aim to maximize the realization of human preferences
  • Assume uncertainty about these preferences
  • Learn these preferences from human behavior
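A minimal, hypothetical sketch of how these principles fit together, assuming a toy setting with two candidate preference models and three actions; this is my own illustration, not Russell’s cooperative inverse reinforcement learning formalism. The machine keeps a probability distribution over what the human wants, updates it from observed human behavior, and acts to maximize expected human preference, preferring to ask when it is unsure.

```python
# Hypothetical toy: the machine is uncertain about human preferences (principle 2),
# learns them from observed human choices (principle 3), and acts to maximize the
# expected realization of those preferences (principle 1).

# Two candidate models of what the human wants from each action.
candidate_preferences = {
    "wants_speed":  {"fast_but_risky": 1.0, "slow_but_safe": 0.0, "ask_human": 0.6},
    "wants_safety": {"fast_but_risky": 0.0, "slow_but_safe": 1.0, "ask_human": 0.6},
}
belief = {"wants_speed": 0.5, "wants_safety": 0.5}  # uncertainty about preferences

def update_belief(belief, observed_choice):
    # Bayesian update: a preference model gains weight if it rates the human's own choice highly.
    likelihood = {m: prefs[observed_choice] + 1e-6 for m, prefs in candidate_preferences.items()}
    total = sum(belief[m] * likelihood[m] for m in belief)
    return {m: belief[m] * likelihood[m] / total for m in belief}

def choose_action(belief):
    # Pick the action that maximizes expected human preference under the current belief.
    actions = next(iter(candidate_preferences.values()))
    expected = {a: sum(belief[m] * candidate_preferences[m][a] for m in belief) for a in actions}
    return max(expected, key=expected.get)

print(choose_action(belief))                     # with a 50/50 belief, asking the human is best
belief = update_belief(belief, "slow_but_safe")  # we observe the human choosing cautiously
print(choose_action(belief))                     # belief now favors safety: "slow_but_safe"
```

The point of the toy is the qualitative behavior: as long as the machine remains uncertain about our preferences, deferring to the human stays attractive, which is exactly the property Russell argues keeps such systems controllable.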

Although I am not fully aligned with Russell on all the positions he defends in this book, it makes for interesting reading, coming from someone who is a knowledgeable AI researcher and cares about the problems of alignment and control of AI systems.

Mindscape, a must-have podcast by Sean Carroll

Sean Carroll’s Mindscape podcast addresses topics as diverse as the interests of the author, including (but not limited to) physics, biology, philosophy, complexity, intelligence, and consciousness. Carroll has interviewed, in-depth, a large number of very interesting scientists, philosophers, writers, and thinkers, who come to talk about some of the most central open topics in science and philosophy.

Among many others: Daniel Dennett discusses minds and patterns; Max Tegmark, physics, simulation, and the multiverse; António Damásio, feelings, emotions, and evolution; Patricia Churchland, conscience and morality; and David Chalmers, the hard problem of consciousness.

In all the interviews, Sean Carroll conducts the conversation in an easy and interactive mode, not imposing his own views, not even on the more controversial topics where the interviewees hold diametrically opposed opinions.

If you are into science and into podcasts, you cannot miss this one.

Enlightenment Now: The Case for Reason, Science, Humanism, and Progress

Steven Pinker’s latest book, Enlightenment Now, deserves high praise and careful attention, in a world where reason and science are increasingly under threat. Bill Gates called it “my new favorite book of all time”, which may be somewhat of an exaggeration. Still, the book is definitely a must-read, and should figure in the top 10 of any reader who believes that science plays an important role in the development of humanity.

Pinker’s main point is that the values of the Enlightenment, which he lists as reason, science, humanism, and progress, have not only enabled humanity to evolve immensely since they were adopted, somewhere in the 18th century, but are also our best hope for the future: the same values that have improved our lives so much over the last two and a half centuries will lead to vastly improved lives in the centuries to come. “Dare to understand”, the cry for reason made by David Deutsch in his The Beginning of Infinity, is the key argument made by Pinker in this book. The critical use of reason leads to understanding, and understanding leads to progress, unlike beliefs in myths, religions, miracles, and signs from God(s). Pinker’s demolition of all the values not based on the critical use of reason is complete and utterly convincing. Do not read this book if, at some level, you believe in things that cannot be explained by reason.

To be fair, a large part of the book is dedicated to showing that progress has indeed been remarkable since the 18th century, when reason and science took hold and replaced myths and religions as the major references for the development of nations and societies. No less than 17 chapters are dedicated to describing the many ways humanity has progressed in the last two and a half centuries, in fields as diverse as health, democracy, wealth, peace and, yes, even sustainability. Pinker may come across as an incorrigible optimist, describing a world so much better than the one that existed in the past, so at odds with the popular current view that everything is going to the dogs. However, the evidence he presents is compelling, well documented, and discussed at length. Counter-arguments against the idea that progress is real and unstoppable are analyzed in depth and disposed of with style and elegance.

But the book is not only about past progress. In fact, it is mostly about the importance of viewing the Enlightenment values as the only ones that will safeguard a future for humanity. If we want a future, we need to preserve them, in a world where fake news, false science, and radical politics are endangering progress, democracy, and human rights.

It is comforting to find a book that so powerfully defends science, reason, and humanistic values against the claims that only a return to the ways of the past will save humanity from certain doom. Definitely a must-read if you believe in, and care for, Humanity.

Our Final Invention: Artificial Intelligence and the End of the Human Era

As regards the state of the art in Artificial Intelligence, and the speed at which it will develop, James Barrat is extremely optimistic. The author of Our Final Invention is fully convinced that existing systems are much more advanced than we give them credit for, and also that AI researchers will create Artificial General Intelligence (AGI) much sooner than we expect.

As regards the consequences of AGI, however, Barrat is uncompromisingly pessimistic. He believes, and argues at length, that AGI will bring with it the demise of the human race and that we should stop messing with advanced AI altogether.

I found the arguments presented for both positions rather unconvincing. His argument that AGI will most likely be developed in the next decade or so is based on rather high-level considerations and on conversations with a number of scientists, researchers, and entrepreneurs from the field, who were, needless to say, picked from among those most sympathetic to his ideas. As for the arguments that AGI will be not only dangerous but, ultimately, fatal for humanity, they are borrowed, with minor changes, from the standard superintelligence (Bostrom) and intelligence explosion (I. J. Good) ideas.

From Watson’s performance in Jeopardy! and from the early victories of artificial neural networks in perception tasks, Barrat concludes, without much additional argument, that AGI is around the corner and that it will be very, very dangerous. The book was written before the recent successes achieved by DeepMind and others, which leads me to believe that, if written now, his conclusions would be even more drastic.

Even though there is relatively little new material here, a few stories and descriptions are interesting. Barrat makes extensive use of his conversations with the likes of Omohundro, Yudkowsky, Vassar, and Kurzweil, and some stories are very entertaining, even if they read a bit like science fiction. Altogether, the book makes for some interesting, if somewhat unconvincing, reading.

LIFE 3.0: Being Human in the Age of Artificial Intelligence

Max Tegmark’s latest book, LIFE 3.0: Being Human in the Age of Artificial Intelligence, is an enthralling journey into the future, when the developments in artificial intelligence create a new type of lifeform on Earth.

Tegmark proposes to classify life in three stages. Life 1.0, unintelligent life, is able to change its hardware and improve itself only through the very slow and blind process of natural evolution. Single-celled organisms, plants, and simple animals are in this category. Life 2.0 is also unable to change its hardware (except through evolution, as for Life 1.0) but can change its software, stored in its brain, by using previous experience to learn new behaviors. Higher animals, and humans in particular, belong here. Humans can now, to a limited extent, change their hardware (through prosthetics, cellphones, computers, and other devices), so they could also be considered Life 2.1.

Life 3.0 is the new generation of life, which can change both its software and its hardware. The ability to change the computational support (i.e., the physical basis of computation) results from technological advances, which will only accelerate with the advent of Artificial General Intelligence (AGI). The book is really about the future of a world where AGI enables humanity to create a whole range of new technologies and to spread new forms of life through the cosmos.

The riveting prelude, The Tale of the Omega Team, the story of the group of people who “created” the first intelligence explosion on planet Earth, makes this a hard-to-put-down book. The rest of the book goes through the consequences of this intelligence explosion, a phenomenon the author believes will undoubtedly take place, sooner or later. Chapter 4 focuses on the explosion proper, and on how it could happen. Chapter 5, appropriately titled “Aftermath: The Next 10,000 Years”, is one of the most interesting ones, describing a number of long-term scenarios that could result from such an event. These scenarios range from a benevolent and enlightened dictatorship (by the AI) to the enslaved-God situation, where humanity keeps the AI in chains and uses it as a slave to develop new technologies inaccessible to unaided humanity’s simpler minds. Always present in these scenarios are the risks of a hostile takeover by a human-created AGI, a theme the book also addresses in depth, following the ideas proposed by Nick Bostrom in his book Superintelligence.

Being a cosmologist, Tegmark could not leave out the question of how life can spread through the Cosmos, a topic covered in depth, and in highly speculative fashion, in chapter 6. Tegmark’s view is, to say the least, grandiose, envisaging a future where AGI will make it possible to spread life through the reachable universe, climbing the three levels of the Kardashev scale. The final chapters address (in a necessarily more superficial manner) the complex topics of goal setting for AI systems and artificial (or natural) consciousness. These topics felt somewhat less well developed; more complete and convincing treatments can be found elsewhere. The book ends with a description of the mission of the Future of Life Institute and of the Asilomar AI Principles.

A book like this cannot leave anyone indifferent, and you will likely take one of two opposite sides: the optimists, with many famous representatives, including Elon Musk, Stuart Russell, and Nick Bostrom, who believe AGI can be developed and used to make humanity prosper; or the pessimists, whose most visible member is probably Yuval Noah Harari, who has voiced very serious concerns about technology developments in his book Homo Deus and in this review of Life 3.0.

Will a superintelligent machine be the last thing we invent?

From the very beginning, computer scientists have aimed at creating machines that are as intelligent as humans. However, there is no reason to believe that the intelligence of machines will stop at that level. They may become, very rapidly, much more intelligent than humans, once they gain the ability to design the next versions of artificial intelligences (AIs).
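A toy numerical illustration of the argument (my own, with arbitrary made-up parameters, not a model from any of the authors discussed here): once each generation of AI contributes to designing the next, the rate of improvement starts to depend on the current level of capability, and progress stops looking linear.

```python
# Toy model of I. J. Good's "intelligence explosion" idea. The feedback exponent
# is an arbitrary assumption: 0.0 means progress does not depend on current
# capability (steady, human-driven improvement); values above zero mean more
# capable systems speed up the design of their successors.

def capability_over_time(generations, feedback, start=1.0, step=0.05):
    c = start
    trajectory = [c]
    for _ in range(generations):
        c = c + step * (c ** feedback)  # each generation's gain scales with current capability
        trajectory.append(c)
    return trajectory

print(capability_over_time(40, feedback=0.0)[-1])  # steady progress: a few times the starting level
print(capability_over_time(40, feedback=1.5)[-1])  # self-improvement feedback: runaway growth
```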

This idea is not new. In a 1951 lecture entitled “Intelligent Machinery, A Heretical Theory”, Alan Turing said that “...it seems probable that once the machine thinking method has started, it would not take long to outstrip our feeble powers… At some stage therefore we should have to expect the machines to take control…”

The worry that superintelligent machines may one day take charge has been troubling an increasingly large number of researchers, and has been extensively addressed in a recent book, Superintelligence, already covered in this blog.

Luke Muehlhauser, the Executive Director of the Machine Intelligence Research Institute (MIRI), in Berkeley, recently published a paper with Nick Bostrom, of the Future of Humanity Institute, at Oxford, on the need to guarantee that advanced AIs will be friendly to the human species.

Muehlhauser and Bostrom argue that “Humans will not always be the most intelligent agents on Earth, the ones steering the future” and ask: “What will happen to us when we no longer play that role, and how can we prepare for this transition?”

In an interesting interview, which appeared in io9, Muehlhauser states that he was drawn into this problem when he became familiar with the work of Irving J. Good, a British mathematician who worked with Alan Turing at Bletchley Park. The authors’ opinion is that further research, strategic and technical, on this problem is required, to avoid the risk that a superintelligent system is created before we fully understand the consequences. Furthermore, they believe a much higher level of awareness is necessary, in general, in order to align research agendas with the safety requirements. Their point is that a superintelligent system would be the most dangerous weapon ever developed by humanity.

All of this creates the risk that a superintelligent machine may be the last thing invented by humanity, either because humanity becomes extinct, or because our intellects would be so vastly surpassed by AIs that they would make all the significant contributions.