The Computer and the Brain

The Computer and the Brain, first published in 1958, is a delightful little book by John von Neumann, his attempt to compare two very different information processing devices: computers and brains. Although written more than sixty years ago, it holds more than merely historical interest, even though it addresses two topics that have developed enormously in the decades since von Neumann’s death.

John von Neumann’s genius comes through very clearly in this essay. Sixty years ago, very few people knew what a computer was, and probably even fewer had any idea how the brain performed its magic. This book, written just a few years after the invention of the transistor (by Bardeen and Brattain) and the discovery of the membrane mechanisms that explain the electrical behaviour of neurons (by Hodgkin and Huxley), nonetheless compares, in very clear terms, the relative computational power of computers and brains.

Von Neumann’s aim is to compare many of the characteristics of the processing devices used by computers (vacuum tubes and transistors) with those used by the brain (neurons). His goal is an objective comparison of the two technologies in terms of their ability to process information. He addresses speed, size, memory and other characteristics of the two types of information processing devices.

One of the central and (to me) most interesting parts of the book is the comparison of artificial information processing devices (vacuum tubes and transistors) with natural information processing devices (neurons), in terms of speed and size.

Von Neumann concludes that vacuum tubes and transistors are faster than neurons by a factor of 10,000 to 100,000, and occupy about 1,000 times more space (with the technologies of the day). Altogether, if one assumes that speed can be traded for number of devices (for instance, reusing electronic devices to perform computations that, in the brain, are performed by slower, but independent, neurons), his comparisons lead to the conclusion (not explicit in the book, I must add) that an electronic computer the size of a human brain would be one to two orders of magnitude less powerful than the human brain itself.

John von Neumann could not have predicted, in 1957, that transistors would be packed, by the billions, on integrated circuits no larger than a postage stamp. If one uses the numbers that correspond to today’s technologies, one is led to conclude that a modern CPU (such as the Intel Core i7), with a billion transistors operating in the nanosecond range, is a few orders of magnitude (roughly 10,000 times) more powerful than the human brain, with its hundred billion neurons operating in the millisecond range.

Of course one has to consider, as John von Neumann also wrote, that a neuron is considerably more complex than a transistor and can perform more complex computations. But even taking that into account, and assuming that a transistor is roughly equivalent to a synapse in raw computing power, one reaches the final result that the human brain and an Intel Core i7 have about the same raw processing power.
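For the curious, this back-of-envelope comparison can be written out explicitly. The sketch below uses only the round, order-of-magnitude figures quoted above; the synapse count (about 10^14, i.e., roughly a thousand synapses per neuron) is a commonly cited rough figure that I am assuming here, not a number taken from the book.

```python
# Back-of-envelope comparison, using round order-of-magnitude figures only.
neurons = 1e11          # ~a hundred billion neurons in the human brain
neuron_rate = 1e3       # neurons operate in the millisecond range (~1 kHz)
synapses = 1e14         # ~10^14 synapses (assumed rough figure, ~1000 per neuron)

transistors = 1e9       # ~a billion transistors in a modern CPU (e.g. a Core i7)
transistor_rate = 1e9   # transistors operate in the nanosecond range (~1 GHz)

cpu_ops = transistors * transistor_rate      # ~1e18 switching events per second
brain_neurons = neurons * neuron_rate        # ~1e14 neuron firings per second
brain_synapses = synapses * neuron_rate      # ~1e17 synaptic events per second

# Comparing transistors with neurons: the CPU comes out ~10,000 times "faster".
print(f"CPU vs brain (neurons as units):  {cpu_ops / brain_neurons:.0e}")   # 1e+04
# Comparing transistors with synapses: the two are within an order of magnitude.
print(f"CPU vs brain (synapses as units): {cpu_ops / brain_synapses:.0e}")  # 1e+01
```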

It is a sobering thought, one which von Neumann would certainly have liked to share.

LIFE 3.0: Being Human in the Age of Artificial Intelligence

Max Tegmark’s latest book, LIFE 3.0: Being Human in the Age of Artificial Intelligence, is an enthralling journey into a future in which developments in artificial intelligence create a new type of life form on Earth.

Tegmark proposes to classify life in three stages. Life 1.0, unintelligent life, is able to change its hardware and improve itself only through the very slow and blind process of natural evolution. Single-cell organisms, plants and simple animals are in this category. Life 2.0 is also unable to change its hardware (except through evolution, as for Life 1.0) but can change its software, stored in its brain, by using previous experience to learn new behaviors. Higher animals, and humans in particular, belong here. Humans can now, to a limited extent, change their hardware (through prosthetics, cellphones, computers and other devices), so they could now be considered Life 2.1.

Life 3.0 is the new generation of life, which can change both its software and its hardware. The ability to change the computational support (i.e., the physical basis of computation) results from technological advances, which will only accelerate with the advent of Artificial General Intelligence (AGI). The book is really about the future of a world where AGI enables humanity to create a whole range of new technologies and to spread new forms of life through the cosmos.

The riveting prelude, The Tale of the Omega Team, tells the story of the group of people who “created” the first intelligence explosion on planet Earth, and makes this a hard-to-put-down book. The rest of the book goes through the consequences of this intelligence explosion, a phenomenon the author believes will undoubtedly take place, sooner or later. Chapter 4 focuses on the explosion proper, and on how it could happen. Chapter 5, appropriately titled “Aftermath: The Next 10,000 Years”, is one of the most interesting ones, and describes a number of long-term scenarios that could result from such an event. These scenarios range from a benevolent and enlightened dictatorship (by the AI) to the enslaved-God situation, where humanity keeps the AI in chains and uses it as a slave to develop new technologies inaccessible to unaided humanity’s simpler minds. Always present in these scenarios are the risks of a hostile takeover by a human-created AGI, a theme that this book also addresses in depth, following the ideas proposed by Nick Bostrom in his book Superintelligence.

Being a cosmologist, Tegmark could not leave out the question of how life can spread through the Cosmos, a topic covered in depth in chapter 6, in a highly speculative fashion. Tegmark’s view is, to say the least, grandiose, envisaging a future where AGI will make it possible to spread life through the reachable universe, climbing the three levels of the Kardashev scale. The final chapters address (in a necessarily more superficial manner) the complex topics of goal setting for AI systems and artificial (or natural) consciousness. These topics felt somewhat less well developed; more complete and convincing treatments can be found elsewhere. The book ends with a description of the mission of the Future of Life Institute and the Asilomar AI Principles.

A book like this cannot leave anyone indifferent, and you are likely to take one of two opposite sides: the optimists, with many famous representatives, including Elon Musk, Stuart Russell and Nick Bostrom, who believe AGI can be developed and used to make humanity prosper; or the pessimists, whose most visible member is probably Yuval Noah Harari, who has voiced very serious concerns about these technological developments in his book Homo Deus and in his review of Life 3.0.

Consciousness: Confessions of a Romantic Reductionist

Christof Koch, the author of “Consciousness: Confessions of a Romantic Reductionist”, is not only a renowned researcher in brain science but also the president of the Allen Institute for Brain Science, one of the foremost institutions in brain research. What he has to tell us about consciousness, and about how he believes the brain produces it, is certainly of great interest to anyone curious about these topics.

However, the book is more than just another philosophical treatise on the issue of consciousness; it is also a bit of an autobiography and an open window into Koch’s own consciousness.

At fewer than 200 pages (in the paperback edition), this book is indeed a good starting point for anyone interested in the centuries-old problem of mind-body duality: how a physical object (the brain) creates something as ethereal as a mind. Koch clearly describes and addresses the central issue of why there is such a thing as consciousness in humans, and how it gives rise to self-awareness, free will (maybe) and the qualia that characterize the subjective experiences each and (almost) every human has.

In Koch’s view, consciousness is not a thing that is either on or off. He ascribes different levels of consciousness to animals and even to less complex creatures and systems. Consciousness, he argues, arises because very complex systems have a high-dimensional state space, with a subjective experience corresponding to each configuration of that state space. In this view, computers and other complex systems can also exhibit some degree of consciousness, although a much smaller one than living beings, since they are much less complex.

He goes on to describe several approaches that have aimed at elucidating the complex feedback loops in brains, which have to exist in order to create these complex state spaces. Modern experimental techniques can analyze the differences between awake (conscious) and asleep (unconscious) brains, and learn from these differences what exactly creates consciousness in a brain.

Parts of the book are more autobiographical, however. Koch describes his life-long efforts to address these questions, many of them developed together with Francis Crick, who remains a reference for him, both as a scientist and as a person. The final chapter is more philosophical, and addresses questions for which we have no answer yet, and may never have, such as “Why is there something instead of nothing?” or “Did an all-powerful God create the universe, 14 billion years ago, complete with the laws of physics, matter and energy, or is this God simply a creation of man?”.

All in all, it is excellent reading: accessible to anyone interested in the topic, yet deep and scientifically rigorous.

AlphaZero masters the game of Chess

DeepMind, a company that was acquired by Google, made headlines when the program AlphaGo Zero managed to become the best Go player in the world, without using any human knowledge, a feat reported in this blog less than two months ago.

Now, just a few weeks after that result, DeepMind reports, in an article posted on arXiv.org, that the program AlphaZero has obtained a similar result for the game of chess.

Computer programs have been the world’s best chess players for a long time now, basically since Deep Blue defeated the reigning world champion, Garry Kasparov, in 1997. Deep Blue, like almost all other top chess programs, was deeply specialized in chess, and played the game using handcrafted position evaluation functions (based on grandmaster games) coupled with deep search methods. Deep Blue evaluated more than 200 million positions per second, using a very deep search (between 6 and 8 moves ahead, sometimes more) to identify the best possible move.

Modern chess programs use a similar approach, and have attained super-human levels, with the best programs (Komodo and Stockfish) reaching an Elo rating higher than 3300. The best human players have Elo ratings between 2800 and 2900. This difference implies that they have less than a one in ten chance of beating the top chess programs, since a difference of 366 Elo points (anywhere on the scale) means a winning probability of about 90% for the higher-rated player.

In contrast, AlphaZero learned the game without using any human-generated knowledge, by simply playing against another copy of itself, the same approach used by AlphaGo Zero. As the authors describe, AlphaZero learned to play at a super-human level, systematically beating the best existing chess program (Stockfish), and in the process rediscovering centuries of human-generated knowledge, such as common openings (the Ruy Lopez, Sicilian, French and Réti, among others).

The flexibility of AlphaZero (which also learned to play Go and Shogi) provides convincing evidence that general-purpose learners are within the reach of current technology. As a side note, the author of this blog, who was a fairly decent chess player in his youth, reached an Elo rating of 2000. This means that he has less than a one in ten chance of beating someone rated 2400, who in turn has less than a one in ten chance of beating the world champion, who in turn has less than a one in ten chance of beating AlphaZero. Quite humbling…
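The 366-point figure and the chain of one-in-ten odds above both come from the Elo expected-score formula. The minimal sketch below uses the logistic form of the formula that is in common use today, under which a 366-point gap corresponds to an expected score of roughly 0.89, in line with the 90% figure quoted above; the ratings in the chain are round, illustrative numbers.

```python
# Expected score of player A against player B under the logistic Elo model.
def elo_expected_score(rating_a: float, rating_b: float) -> float:
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# A 366-point gap gives the stronger player an expected score of roughly 0.89.
print(elo_expected_score(3300, 3300 - 366))   # ~0.89

# The chain described above: club player vs master, master vs world champion,
# world champion vs a ~3300-rated engine (all ratings are illustrative).
for low, high in [(2000, 2400), (2400, 2800), (2800, 3300)]:
    print(f"{low} vs {high}: expected score {elo_expected_score(low, high):.2f}")
# prints roughly 0.09, 0.09 and 0.05
```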

Image by David Lapetina, available at Wikimedia Commons.

Portuguese Edition of The Digital Mind

IST Press, the publisher of Instituto Superior Técnico, just published the Portuguese edition of The Digital Mind, originally published by MIT Press.

The Portuguese edition, translated by Jorge Pereirinha Pires, follows the same organization and has been reviewed by a number of sources. The back-cover reviews are by Pedro Domingos, Srinivas Devadas, Pedro Guedes de Oliveira and Francisco Veloso.

A pre-publication excerpt appeared in the Público newspaper, under the title Até que mundos digitais nos levará o efeito da Rainha Vermelha (“To which digital worlds will the Red Queen effect take us”), making the first chapter of the book publicly available.

There are also some publicly available reviews and pieces about this edition, including an episode of a podcast and a review on the radio.

The last invention of humanity

Irving John Good was a British mathematician who worked with Alan Turing in the famous Hut 8 of Bletchley Park, contributing to the war effort by decrypting the messages coded by the German Enigma machines. After that, he became a professor at Virginia Tech and, later in life, was a consultant for Stanley Kubrick’s cult movie 2001: A Space Odyssey.

Irving John Good (born Isadore Jacob Gudak to a Polish Jewish family) is credited with coining the term intelligence explosion, to refer to the possibility that a super-intelligent system may, one day, be able to design an even more intelligent successor. In his own words:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

We are still very far from being able to design an artificially intelligent (AI) system that is smart enough to design and code even better AI systems. Our current efforts address very narrow fields, and produce systems that do not have the general intelligence required to create the phenomenon I. J. Good was referring to. However, in some very restricted domains, we can see at work mechanisms that resemble that very same phenomenon.

Go is a board game that is very difficult to master because of the huge number of possible games and the high number of possible moves at each position. Given the complexity of the game, branch and bound approaches could not, until recently, be used to derive good playing strategies. Until only a few years ago, it was believed that it would take decades to create a program that could master the game of Go at a level comparable with the best human players.

In January 2016, DeepMind, an AI startup (which had by then been acquired by Google for a sum reported to exceed 500 million dollars), reported in an article in Nature that it had managed to master the complex game of Go by using deep neural networks and a tree search engine. The system, called AlphaGo, was trained on databases of human games and eventually managed to soundly beat the best human players, becoming the best player in the world, as reported in this blog.

A couple of weeks ago, in October of 2017, DeepMind reported, in a second article in Nature, that it had programmed a system, AlphaGo Zero, that became even more proficient at the game and mastered it without using any human knowledge. AlphaGo Zero did not use any human games to acquire knowledge about the game. Instead, it played millions of games (close to 30 million, in fact, played over a period of 40 days) against another version of itself, eventually acquiring knowledge about tactics and strategies that the human race had slowly accumulated over more than two millennia. By simply playing against itself, the system went from child level (random moves) to novice level to world-champion level. AlphaGo Zero steamrolled the original AlphaGo by 100 games to 0, showing that it is possible to obtain super-human strength without using any human-generated knowledge.

In a way, the computer improved itself, by simply playing against itself until it reached perfection. Irving John Good, who died in 2009, would have liked to see this invention of mankind. Which will not be the last, yet…

Picture credits: Go board, picture taken by Hoge Rielen, available at Wikimedia Commons.

Bell’s Theorem, or why the universe is even stranger than we might imagine

The Einstein-Podolsky-Rosen “paradox” was at first presented as an argument against some of the basic tenets of quantum mechanics.

One of these basic tenets is that there is genuine randomness in the characteristics of particles. For instance, when one measures the spin of an electron, it is only at the instant the measurement is taken that the actual value of the spin is defined. Until then, its value is described by a probability function, which collapses when the measurement is made.

The EPR paradox uses the concept of entangled particles. Two particles are “entangled” if they were generated in such a way that a particular characteristic of theirs is totally correlated. For instance, two photons generated by a specific phenomenon (such as an electron-positron annihilation, under some circumstances) will have opposite polarizations. Once generated, these particles can travel vast distances and remain entangled.

If some particular characteristic of one of these particles is measured in one location (e.g., the polarization of a photon), the measurement will, probabilistically, result in a given value. That particular value will determine, instantaneously, the value of that same characteristic of the other particle, no matter how far apart the particles are. It is this “spooky action at a distance” that Einstein, Podolsky and Rosen believed to be impossible. It seems that the information about the state of one of the particles travels, faster than light, to the place where the other particle is.

Now, we could imagine that that particular characteristic of the particles was defined the very instant they were generated. Imagine you have a bag with one white ball and one black ball, and you separate the balls, without looking at them, and put them into separate boxes. If one of the boxes is opened in Australia, say, and the ball inside is white, we will know instantaneously the color of the other ball. There is nothing magic or strange about this. Hidden inside the boxes, all along, was the true color of the balls, a hidden variable.

Maybe this is exactly what happens with the entangled photons. When they are generated, each one already carries with it the actual value of the polarization.

It is here that Bell’s Theorem comes in to show that the universe is even stranger than we might conceive. Bell’s result, beautifully explained in this video, shows that the particles cannot carry with them any hidden variable that tells them what to do when they face a measurement. Each particle has to decide, probabilistically, at the time of the measurement, the value that should be reported. And, once this decision is made, the measurement for the other entangled particle is also defined, even if that particle is on the other side of the universe. It seems that information travels faster than light.

The fact is that hidden variables cannot be used to explain this phenomenon. As Bell concluded: “In a theory in which parameters are added to quantum mechanics to determine the results of individual measurements, without changing the statistical predictions, there must be a mechanism whereby the setting of one measuring device can influence the reading of another instrument, however remote. Moreover, the signal involved must propagate instantaneously, …”

A very easy and practical demonstration of Bell’s theorem can be done with polarizing filters, like the ones used in cameras or in some 3D glasses. If you take two filters and put them at an angle, only a fraction of the photons that go through the first one make it through the second one. The actual fraction is given by the cosine squared of the angle between the filters (so, if the angle is 90º, no photons go through the two filters). So far, so good. Now, suppose you have the two filters at an angle (say 45º, so that half the photons that pass the first filter also go through the second) and you insert an additional filter between them, at an angle of 22.5º. It turns out that roughly 85% of the photons go through the (now) second filter, and of these, roughly 85% go through the third filter (which used to be the second). That means that, with the three filters in place, roughly 73% of the photons go through, way more than with just the first two filters, which were not changed in any way. This, obviously, cannot happen if the behaviour of the photons was determined from the start.
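The percentages above are simply Malus’s law applied at each step (the transmitted fraction between successive polarizers is the cosine squared of the angle between them). A minimal sketch of the arithmetic, assuming ideal filters:

```python
import math

def transmitted_fraction(angles_deg):
    """Fraction of the photons that pass the first filter which also pass all
    subsequent filters, assuming ideal polarizers (Malus's law at each step)."""
    fraction = 1.0
    for a1, a2 in zip(angles_deg, angles_deg[1:]):
        fraction *= math.cos(math.radians(a2 - a1)) ** 2
    return fraction

# Two filters at 45 degrees: half the photons that pass the first also pass the second.
print(transmitted_fraction([0, 45]))        # 0.5

# Insert a third filter at 22.5 degrees between them: about 73% now get through,
# even though the two outer filters were not changed at all.
print(transmitted_fraction([0, 22.5, 45]))  # ~0.729
```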

Do watch the video, and try the experiment yourself.