Other Minds and Alien Intelligences

Peter Godfrey-Smith’s Other Minds makes for an interesting read on the subject of the evolution of intelligence. The book focuses on the octopus and the evolution of intelligent life. Octopuses belong to the same class of animals as squid and cuttlefish (the cephalopods), a class which separated from the evolutionary line that led to humans more than 600 million years ago. As Godfrey-Smith describes, many experiments have shown that octopuses are highly intelligent, and capable of complex behaviours that are deemed to require sophisticated forms of intelligence. They are, therefore, the closest thing to alien intelligence that we can get our hands on, since the evolution of their bodies and brains was, in the last 600 million years, independent from our own evolution.

The book explores this issue very well and dives deep into the matters of cephalopod intelligence. The nervous systems of octopuses are very different from ours and, in fact, are not even organised in the same way. Each of the eight arms of an octopus is controlled by a separate “small brain”. These small brains report to, and are coordinated by, the central brain but retain some ability to act independently, an arrangement that is, to say the least, foreign to us.

Godfrey-Smith leads us through the branches of the evolutionary tree, and argues that advanced intelligence has evolved not once, but a number of times, perhaps four times, as shown in the picture: in mammals, in birds, and in two branches of cephalopods.

If his arguments are right, this work and this book provide an important insight into the nature of the bottlenecks that may block the evolution of higher intelligence, on Earth and on other planets. If, indeed, life on Earth has evolved higher intelligence multiple times, independently, this fact provides strong evidence that the evolution of brains, from simple nervous systems to complex ones able to support higher intelligence, is not a significant bottleneck. That reduces the number of possible explanations for the fact that we have never observed technological civilisations in the Galaxy, a puzzle usually discussed in terms of the Great Filter. Whatever the reasons, it is probably not because intelligence evolves only rarely in living organisms.

The scientific components of the book are admirably intertwined with the descriptions of the author’s appreciation of cephalopods, in particular, and marine life, in general. All in all, a very interesting read for those interested in the evolution of intelligence.

Picture (not to scale) from the book, adapted to show the possible places where higher intelligence evolved.


The Second Machine Age

The Second Machine Age, by Erik Brynjolfsson and Andrew McAfee, two MIT professors and researchers, offers mostly an economist’s point of view on the consequences of the technological changes that are remaking civilisation.

Although a fair number of chapters is dedicated to the technological innovations that are shaping the first decades of the 21st century, the book is at its best when the economic issues are presented and discussed.

The book is particularly interesting in its treatment of the bounty vs. spread dilemma: will economic growth be fast enough to lift everyone’s standard of living, or will increased concentration of wealth lead to such an increase in inequality that many will be left behind?

The chapter that provides evidence on the steady increase in inequality is especially appealing and convincing. While average income, in the US, has been increasing steadily in the last decades, median income (the income of those who are exactly in the middle of the pay scale) has stagnated for several decades, and may even be decreasing in the last few years. For the ones at the bottom of the scale, the situation is much worse now than decades ago.
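The divergence between average and median income is easy to see with a toy example. The figures below are hypothetical, chosen only to illustrate the statistical effect the book documents: if gains accrue mostly at the top, the mean rises while the median stays put.

```python
# Hypothetical incomes (in thousands), chosen to illustrate the effect:
# the top earner's income grows, everyone else's stays the same.
incomes_then = [20, 30, 40, 50, 60]
incomes_now = [20, 30, 40, 50, 160]

def mean(xs):
    # Average income: total income divided by number of earners.
    return sum(xs) / len(xs)

def median(xs):
    # Income of the person exactly in the middle of the pay scale
    # (middle element of the sorted list, for an odd-length list).
    s = sorted(xs)
    return s[len(s) // 2]

print(mean(incomes_then), median(incomes_then))  # 40.0 40
print(mean(incomes_now), median(incomes_now))    # 60.0 40
```

The mean jumps from 40 to 60 while the median stays at 40: exactly the pattern of rising average income alongside a stagnant median.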

Abundant evidence of this trend also comes from the analysis of the shares of GDP that are due to wages and to corporate profits. Although these two fractions of GDP have fluctuated somewhat in the last century, there is mounting evidence that the fraction due to corporate profits is now increasing, while the fraction due to wages is decreasing.

All this evidence, put together, leads to the inevitable conclusion that society has to explicitly address the challenges posed by the fourth industrial revolution.

The last chapters are, indeed, dedicated to this issue. The authors do not advocate a universal basic income, but come out in defence of a negative income tax for those whose earnings are below a given level. The mathematics of the proposal are somewhat unclear but, in the end, one thing remains certain: society will have to address the problem of mounting inequality brought about by technology and globalisation.

The Computer and the Brain

The Computer and the Brain, first published in 1958, is a delightful little book by John von Neumann, his attempt to compare two very different information processing devices: computers and brains. Although written more than sixty years ago, it has more than historic interest, even though it addresses two topics that have developed enormously in the decades since von Neumann’s death.

John von Neumann’s genius comes through very clearly in this essay. Sixty years ago, very few people knew what a computer was, and probably even fewer had any idea how the brain performed its magic. This book, written just a few years after the invention of the transistor (by Bardeen and Brattain), and the discovery of the membrane mechanisms that explain the electrical behaviour of neurons (by Hodgkin and Huxley), nonetheless compares, in very clear terms, the relative computational power of computers and brains.

Von Neumann’s aim is to compare many of the characteristics of the processing devices used by computers (vacuum tubes and transistors) with the ones used by the brain (neurons). His goal is an objective comparison of the two technologies, in terms of their ability to process information. He addresses speed, size, memory and other characteristics of the two types of information processing devices.

One of the central and (to me) most interesting parts of the book is the comparison of artificial information processing devices (vacuum tubes and transistors) with natural information processing devices (neurons), in terms of speed and size.

Von Neumann concludes that vacuum tubes and transistors are faster, by a factor of 10,000 to 100,000, than neurons, and occupy about 1000 times more space (with the technologies of the day). Altogether, if one assumes that speed can be traded for number of devices (for instance, reusing electronic devices to perform computations that, in the brain, are performed by slower, but independent, neurons), his comparisons lead to the conclusion (not explicit in the book, I must add) that an electronic computer the size of a human brain would be one to two orders of magnitude less powerful than the human brain itself.

John von Neumann could not have predicted, in 1957, that transistors would be packed, by the billions, on integrated circuits no larger than a postage stamp. If one uses the numbers that correspond to the technologies of today, one is led to conclude that a modern CPU (such as the Intel Core i7), with a billion transistors, operating in the nanoseconds range, is a few orders of magnitude (10,000 times) more powerful than the human brain, with its hundred billion neurons operating in the milliseconds range.

Of course one has to consider, as John von Neumann also wrote, that a neuron is considerably more complex and can perform more complex computations than a transistor. But even if one takes that into account, and assumes that a transistor is roughly equivalent to a synapse, in raw computing power, one gets the final result that the human brain and an Intel Core i7 have about the same raw processing power.
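The back-of-envelope arithmetic behind these two comparisons can be made explicit. All the figures below are order-of-magnitude assumptions taken from the discussion above (the synapse count of roughly 10^15 is my own assumed figure, at the high end of common estimates), not measurements:

```python
# A modern CPU: ~10^9 transistors switching in the nanosecond range.
cpu_transistors = 1e9
cpu_switch_rate = 1e9  # switches per second (nanosecond timescale)
cpu_power = cpu_transistors * cpu_switch_rate  # ~10^18 device-ops/s

# The brain, counting neurons: ~10^11 neurons firing in the millisecond range.
neurons = 1e11
neuron_rate = 1e3  # events per second (millisecond timescale)
brain_power_neurons = neurons * neuron_rate  # ~10^14 device-ops/s

print(cpu_power / brain_power_neurons)  # 10000.0 -> the "10,000 times" above

# Counting synapses instead of neurons (transistor ~ synapse), with an
# assumed ~10^15 synapses on the same millisecond timescale:
synapses = 1e15
brain_power_synapses = synapses * neuron_rate  # ~10^18 device-ops/s

print(cpu_power / brain_power_synapses)  # 1.0 -> roughly the same raw power
```

Matching a transistor against a neuron gives the CPU a four-order-of-magnitude advantage; matching it against a synapse brings the two to roughly the same raw processing power, which is the point of the paragraph above.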

It is a sobering thought, one which von Neumann would certainly have liked to share.

LIFE 3.0: Being Human in the Age of Artificial Intelligence

Max Tegmark’s latest book, LIFE 3.0: Being Human in the Age of Artificial Intelligence, is an enthralling journey into the future, when the developments in artificial intelligence create a new type of lifeform on Earth.

Tegmark proposes to classify life in three stages. Life 1.0, unintelligent life, is able to change its hardware and improve itself only through the very slow and blind process of natural evolution. Single-cell organisms, plants and simple animals are in this category. Life 2.0 is also unable to change its hardware (except through evolution, as for Life 1.0) but can change its software, stored in its brain, by using previous experience to learn new behaviors. Higher animals and humans, in particular, belong here. Humans can now, to a limited extent, change their hardware (through prosthetics, cellphones, computers and other devices), so they might now be considered Life 2.1.

Life 3.0 is the new generation of life, which can change both its software and its hardware. The ability to change the computational support (i.e., the physical basis of computation) results from technological advances, which will only accelerate with the advent of Artificial General Intelligence (AGI). The book is really about the future of a world where AGI enables humanity to create a whole range of new technologies, and expand new forms of life through the cosmos.

The riveting prelude, The Tale of the Omega Team, the story of the group of people who “created” the first intelligence explosion on planet Earth, makes this a “hard-to-put-down” book. The rest of the book goes through the consequences of this intelligence explosion, a phenomenon the author believes will undoubtedly take place, sooner or later. Chapter 4 focuses on the explosion proper, and on how it could happen. Chapter 5, appropriately titled “Aftermath: The Next 10,000 Years”, is one of the most interesting ones, and describes a number of long term scenarios that could result from such an event. These scenarios range from a benevolent and enlightened dictatorship (by the AI) to the enslaved God situation, where humanity keeps the AI in chains and uses it as a slave to develop new technologies, inaccessible to unaided humanity’s simpler minds. Always present, in these scenarios, are the risks of a hostile takeover by a human-created AGI, a theme that this book also addresses in depth, following on the ideas proposed by Nick Bostrom, in his book Superintelligence.

Being a cosmologist, Tegmark could not leave out the question of how life can spread through the Cosmos, a topic covered in depth in chapter 6, in a highly speculative fashion. Tegmark’s view is, to say the least, grandiose, envisaging a future where AGI will make it possible to spread life through the reachable universe, climbing the three levels of the Kardashev scale. The final chapters address (in a necessarily more superficial manner) the complex topics of goal setting for AI systems and artificial (or natural) consciousness. These topics somehow felt less well developed; more complete and convincing treatments can be found elsewhere. The book ends with a description of the mission of the Future of Life Institute, and the Asilomar AI Principles.

A book like this cannot leave anyone indifferent, and you will be likely to take one of two opposite sides: the optimists, with many famous representatives, including Elon Musk, Stuart Russell and Nick Bostrom, who believe AGI can be developed and used to make humanity prosper; or the pessimists, whose most visible member is probably Yuval Noah Harari, who has voiced very serious concerns about technology developments in his book Homo Deus and in his review of Life 3.0.

Consciousness: Confessions of a Romantic Reductionist

Christof Koch, the author of “Consciousness: Confessions of a Romantic Reductionist”, is not only a renowned researcher in brain science but also the president of the Allen Institute for Brain Science, one of the foremost institutions in brain research. What he has to tell us about consciousness, and how he believes it is produced by the brain, is certainly of great interest to anyone drawn to these topics.

However, the book is more than just another philosophical treatise on the issue of consciousness, as it is also a bit of an autobiography and an open window on Koch’s own consciousness.

At less than 200 pages (in the paperback edition), this book is indeed a good start for those interested in the centuries-old problem of mind-body duality and how a physical object (the brain) creates such an ethereal thing as a mind. Koch clearly describes and addresses the central issue of why there is such a thing as consciousness in humans, and how it creates self-awareness, free will (maybe) and the qualia that characterize the subjective experiences each and (almost) every human has.

In Koch’s view, consciousness is not a thing that can be either on or off. He ascribes different levels of consciousness to animals and even to less complex creatures and systems. Consciousness, he argues, is created by the fact that very complex systems have a high dimensional state space, creating a subjective experience that corresponds to each configuration of this state space. In this view, computers and other complex systems can also exhibit some degree of consciousness, although a much smaller one than living entities, since they are much less complex.

He goes on to describe several approaches that have aimed at elucidating the complex feedback loops existing in brains, which have to exist in order to create these complex state spaces. Modern experimental techniques can analyze the differences between awake (conscious) and asleep (unconscious) brains, and learn from these differences what exactly creates consciousness in a brain.

Parts of the book are more autobiographical, however. He describes his life-long efforts to address these questions, many of them developed together with Francis Crick, who remains a reference for him, as a scientist and as a person. The final chapter is more philosophical, and addresses other questions for which we have no answer yet, and may never have, such as “Why is there something instead of nothing?” or “Did an all-powerful God create the universe, 14 billion years ago, complete with the laws of physics, matter and energy, or is this God simply a creation of man?”.

All in all, excellent reading, accessible to anyone interested in the topic but still deep and scientifically exact.

Portuguese Edition of The Digital Mind

IST Press, the publisher of Instituto Superior Técnico, just published the Portuguese edition of The Digital Mind, originally published by MIT Press.

The Portuguese edition, translated by Jorge Pereirinha Pires, follows the same organization and has been reviewed by a number of sources. The back-cover reviews are by Pedro Domingos, Srinivas Devadas, Pedro Guedes de Oliveira and Francisco Veloso.

A pre-publication was made by the Público newspaper, under the title Até que mundos digitais nos levará o efeito da Rainha Vermelha, making the first chapter of the book publicly available.

There are also some publicly available reviews and pieces about this edition, including an episode of a podcast and a review in the radio.

Stuart Russell and Sam Harris on The Dawn of Artificial Intelligence

In one of the latest episodes of his interesting podcast, Waking Up, Sam Harris discusses with Stuart Russell the future of Artificial Intelligence (AI).

Stuart Russell is one of the foremost world authorities on AI, and author of the most widely used textbook on the subject, Artificial Intelligence: A Modern Approach. Interestingly, most of the (very interesting) conversation focuses not so much on the potential of AI, but on the potential dangers of the technology.

Many AI researchers have dismissed offhand the worries many people have expressed over the possibility of runaway Artificial Intelligence. In fact, most active researchers know very well that most of their time is spent worrying about the convergence of algorithms, the lack of efficiency of training methods, or difficult searches for the right architecture for some narrow problem. AI researchers spend no time at all worrying about the possibility that the systems they are developing will, suddenly, become too intelligent and a danger to humanity.

On the other hand, famous philosophers, scientists and entrepreneurs, such as Elon Musk, Richard Dawkins, Bill Gates, and Nick Bostrom have been very vocal about the possibility that man-made AI systems may one day run amok and become a danger to humanity.

From this duality one is led to believe that only people who are away from the field really worry about the possibility of dangerous super-intelligences. People inside the field pay little or no attention to that possibility and, in many cases, consider these worries baseless and misinformed.

That is why this podcast, with the participation of Stuart Russell, is interesting and well worth hearing. Russell cannot be accused of being an outsider to the field of AI, and yet his latest interests are focused on the problem of making sure that future AIs will have their objectives closely aligned with those of the human race.