A conversation with GPT-3 on COVID-19

GPT-3 is the most advanced language model ever created, a product of an effort by OpenAI to create a publicly available system that can be used to advance research and applications in natural language. The model itself, published less than three months ago, is an autoregressive language model with 175 billion parameters, trained on a dataset that includes almost a trillion words.
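The autoregressive framing can be illustrated with a minimal sketch: each token is sampled conditioned on the tokens generated so far. The toy vocabulary and probabilities below are invented for illustration only; GPT-3 itself uses a learned Transformer over tens of thousands of tokens, not a lookup table.

```python
import random

# Toy bigram "language model": P(next token | previous token).
# All probabilities here are invented for illustration.
MODEL = {
    "<s>":      {"the": 0.6, "a": 0.4},
    "the":      {"virus": 0.5, "model": 0.5},
    "a":        {"model": 1.0},
    "virus":    {"spreads": 1.0},
    "model":    {"predicts": 1.0},
    "spreads":  {"</s>": 1.0},
    "predicts": {"</s>": 1.0},
}

def generate(seed=0, max_len=10):
    """Autoregressive generation: sample each token conditioned on the
    sequence so far (here, only the last token, since the model is a bigram)."""
    rng = random.Random(seed)
    tokens = ["<s>"]
    while tokens[-1] != "</s>" and len(tokens) < max_len:
        dist = MODEL[tokens[-1]]
        words, probs = zip(*dist.items())
        tokens.append(rng.choices(words, weights=probs)[0])
    return " ".join(tokens[1:-1])

print(generate())
```

GPT-3's scale changes what such a model can do, but not this basic loop: generation is still one token at a time, each conditioned on everything before it.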

Impressive as that may be, it is difficult to get an intuition for what such a complex model, trained on billions of human-generated texts, can actually do. Can it be used effectively in translation tasks or in answering questions?

To get some idea of what a sufficiently high-level statistical model of human language can do, I challenge you to have a look at this conversation with GPT-3, published by Kirk Ouimet a few days ago. It relates a dialogue between him and GPT-3 on the topic of COVID-19. The most impressive thing about this conversation with an AI is not that it gets many of the responses right (and others not so much). What impressed me is that the model was trained on a dataset created before the existence of COVID-19, which provided GPT-3 no specific knowledge about this pandemic. Whatever answers GPT-3 gives to the questions related to COVID-19 are obtained with knowledge that was already available before the pandemic began.

This certainly raises some questions on whether advanced AI systems should be more widely used to define and implement policies important to the human race.

If you want more information about GPT-3, it is easy to find on a multitude of sites with tutorials and demonstrations, such as TheNextWeb, MIT Technology Review, and many, many others.

Instantiation, another great collection of Greg Egan’s short stories

Greg Egan is a master of the short story. His Axiomatic collection is one of my favorites. This new collection of short stories keeps Egan's knack for communicating deep concepts in few words and dives deeper into the concepts of virtual reality and the impacts of technology on society.

The first story, The discrete charm of the Turing machine, could hardly be more relevant these days, when discussions on the economic impacts of Artificial Intelligence are taking place everywhere. But the main through line of the book is the series of stories in which sentient humans who are, in fact, characters in virtual reality games plot to break free of their enslaved condition. To find out whether they succeed or not, you will have to read the book yourself!

PS: As a joke, I leave here a meme of unknown origin

SIMULACRON-3: are we living in a computer simulation?

Are we living in a computer simulation? And, if so, how could we tell? This question became very popular in the last few years and has led to many articles, comments, and arguments. The simulation hypothesis, which states that all of reality, including the Earth and the observable universe, could in fact be the result of a computer simulation, is a hot topic of debate among philosophers, scientists, and SF writers. Even the popular Saturday Morning Breakfast Cereal (SMBC) webcomic has helped clarify the issue, in a very popular strip. Greg Egan, the master of realistic SF, may have taken the matter to its ultimate consequences, with Permutation City and Instantiation, but the truth is that this question has been the subject of many books, including the famous Neuromancer, by William Gibson.

Still, to my knowledge, Simulacron-3, by Daniel Galouye, may have been the first SF book to tackle the issue head-on. For a book written more than half a century ago, the story is surprisingly modern and up-to-date. Not only is the presentation of the simulated-reality world very convincing and the technology very believable, but it also turns out that the reasons why the simulated-reality world (Simulacron-3) was created could be sold as a business plan for any ambitious startup today.

There is not much more that I can write about this book without depriving you of the pleasure of reading it, so let me just recommend that you get a copy from a website near you and take it with you for the summer holidays.

Human Compatible: AI and the Problem of Control

Stuart Russell, one of the better-known researchers in Artificial Intelligence and author of the best-selling textbook Artificial Intelligence: A Modern Approach, addresses, in his most recent book, what is probably one of the most interesting open questions in science and technology: can we control the artificially intelligent systems that will be created in the decades to come?

In Human Compatible: AI and the Problem of Control Russell formulates and answers the following, very important question: what are the consequences if we succeed in creating a truly intelligent machine?

The question brings with it many other questions, of course. Will intelligent machines be dangerous to humanity? Will they take over the world? Could we control machines that are more intelligent than ourselves? Many writers and scientists, like Nick Bostrom, Stephen Hawking, Elon Musk, Sam Harris, and Max Tegmark, have raised these questions, several of them claiming that superintelligent machines could be around the corner and become extremely dangerous to humanity.

However, most AI researchers have dismissed these questions as irrelevant, concentrated as they are on the development of specific techniques and well aware that Artificial General Intelligence is far away, if it is at all achievable. Andrew Ng, another famous AI researcher, said that worrying about superintelligent machines is like worrying about the overpopulation of Mars:

There could be a race of killer robots in the far future, but I don’t work on not turning AI evil today for the same reason I don’t worry about the problem of overpopulation on the planet Mars

Another famous Machine Learning researcher, Pedro Domingos, in his bestselling book, The Master Algorithm, about Machine Learning, the driving force behind modern AI, also ignores these issues, concentrating on concrete technologies and applications. In fact, he says often that he is more worried about dumb machines than about superintelligent machines.

Stuart Russell’s book is different, making the point that we may, indeed, lose control of such systems, even though he does not believe they could harm us out of malice or intention. In fact, Russell is quite dismissive of the possibility that machines could one day become truly intelligent and conscious, a position I personally find very brave, 70 years after Alan Turing said exactly the opposite.

Yet, Russell believes we may be in trouble if sufficiently intelligent and powerful machines have objectives that are not well aligned with the real objectives of their designers. His point is that a poorly conceived AI system, which aims at optimizing some badly specified function, can lead to bad results and even tragedy if such a system controls critical facilities. One well-known example is Bostrom’s paperclip problem, where an AI system designed to maximize the production of paperclips turns the whole planet into a paperclip production factory, eliminating humanity in the process. As in the cases that Russell fears, the problem comes not from a machine that wants to kill all humans, but from a machine that was designed with the wrong objectives in mind and does not stop until it achieves them.
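This failure mode, an optimizer pursuing a badly specified objective to a destructive extreme, can be sketched in a few lines. The actions, numbers, and the "habitable land" side effect below are all invented for illustration:

```python
def optimize(objective, actions):
    """A naive optimizer: pick the action that maximizes the stated
    objective, with no regard for anything left out of it."""
    return max(actions, key=objective)

# Each action: (paperclips produced, habitable land remaining), arbitrary units.
actions = {
    "modest factory":       (10, 100),
    "convert all industry": (10_000, 40),
    "convert the planet":   (1_000_000, 0),
}

# Badly specified objective: count paperclips only.
naive = optimize(lambda a: actions[a][0], actions)

# The designers' real objective also values what was left unstated.
better = optimize(lambda a: actions[a][0] + 10_000 * actions[a][1], actions)

print(naive)   # -> "convert the planet": everything outside the objective is sacrificed
print(better)  # -> "modest factory"
```

The optimizer is not malicious in either case; the catastrophe in the first case comes entirely from what the objective function omits.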

To avoid that risk of misalignment between human and machine objectives, Russell proposes designing provably beneficial AI systems, based on three principles that can be summarized as:

  • Aim to maximize the realization of human preferences
  • Assume uncertainty about these preferences
  • Learn these preferences from human behavior
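The third principle, learning preferences from observed behavior, can be illustrated with a minimal Bayesian sketch. The two candidate preference hypotheses and the observed choices below are invented for illustration:

```python
# Two hypotheses about what the human prefers, and, under each hypothesis,
# the probability that the human picks each option. Invented numbers.
hypotheses = {
    "likes tea":    {"tea": 0.9, "coffee": 0.1},
    "likes coffee": {"tea": 0.2, "coffee": 0.8},
}
prior = {"likes tea": 0.5, "likes coffee": 0.5}

def update(belief, observation):
    """Bayes' rule: reweight each preference hypothesis by how well it
    explains the human's observed choice, then renormalize."""
    posterior = {h: belief[h] * hypotheses[h][observation] for h in belief}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

belief = prior
for choice in ["tea", "tea", "coffee"]:  # observed human behavior
    belief = update(belief, choice)

print(max(belief, key=belief.get))  # -> "likes tea"
```

Note how the second principle is built in: the machine never commits to a single preference, it only maintains a posterior that noisy, even contradictory, behavior gradually sharpens.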

Although I am not fully aligned with Russell in all the positions he defends in this book, it makes for interesting reading, coming from someone who is a knowledgeable AI researcher and cares about the problems of alignment and control of AI systems.

The Origin of Consciousness in the Breakdown of the Bicameral Mind

The origin of consciousness in the breakdown of the bicameral mind, a 1976 book by Julian Jaynes, is probably one of the most intriguing and contentious works in the already unusually controversial field of consciousness studies. This book proposed bicameralism, the hypothesis that the human mind once operated in a state in which cognitive functions were divided between one half of the brain, which appears to be speaking, and another half which listens and follows instructions. Julian Jaynes’ central claim is that consciousness in humans, in the form that is familiar to us today, is a relatively recent phenomenon, whose development followed the invention of writing, the evolution of complex societies and the collapse of bicameralism. According to Jaynes, in the bicameral eras, humans attributed the origin of the inner voices (which we presumably all hear) not to themselves, but to gods. Human behavior was, therefore, not conscious but automatic. Actions followed from strict obedience to these inner voices, which represented orders from a personal god, themselves conditioned by social and cultural norms.

In Jaynes’ view, consciousness is strongly connected with human language (an assertion hard to refute but possibly an insufficiently general description) and results, in large part, from our ability to introspect, and to hold conversations and dialogues with ourselves. The change in humans’ perception of these voices, a process which, according to Jaynes, took place over a time span of only a couple of millennia, during the Babylonian, Assyrian, Greek, and Egyptian civilizations (the ones he studied), led to the creation of consciousness as we know it today. This implies that human consciousness, as it exists today, is a brand new phenomenon, on the evolutionary timescale.

Taken at face value, this theory goes totally against the very ingrained belief that humans have been fully conscious for hundreds of thousands or even millions of years, if we consider other species of hominids and other primates. It is certainly strange to think that consciousness, as we know it, is a phenomenon only a few millennia old.

And yet, Jaynes’ arguments are anything but naive. They are, in fact, very sophisticated and based on extensive analyses of historical evidence. The problem with the theory is not that it is simplistic or that there is a lack of presented evidence. The problem I have with this theory is that the evidence presented comes mostly from a very subjective and argumentative analysis of historical artifacts (books, texts, vases, ruins), which are interpreted, in a very intelligent way, to support Jaynes’ main points.

To give an example, which plays an important role in the argument, let’s consider the Iliad. In this text, which predates, according to Jaynes, conscious behavior, and has its origins in bicameral times, all human actions derive, directly, from the clear and audible instructions received from gods. In the Iliad, there is no space for reflection, autonomy, cogitation, hesitation, or doubt. Heroes and plain humans act on the voices of gods, and that’s it. The Odyssey and later texts are progressively more elaborate on human thought and motivation, and (according to Jaynes) the works of Solon are the first that can be viewed as modern, consistent with our current views of human will and human consciousness. Most significant of all, to Jaynes, is the Bible, in particular the Old Testament, which he sees as the ultimate record of the progressive evolution of humans from bicameralism to subjective, conscious behavior. Analyses of these texts, and of other evidence of the evolution of consciousness in Mesopotamia, Assyria, Greece, and Egypt, are exhaustively presented, and should not be taken lightly. At the least, Jaynes may have a point in that consciousness, today, is not the same thing as consciousness five millennia ago. This may well be true, and it is hard for us to understand human thought from that time.

And yet, I remained unconvinced of Jaynes’ main point. True, the interpretation he makes of the historical evidence is from someone who has studied the materials deeply, and I am certainly unable to counter-argue with someone who is so familiar with the topics. But, to me, the many facts (thousands, probably) that he brings to bear on his argument can all be the result of many other factors. Maybe the writers of the Iliad wanted to use the gods’ voices for stylistic effect; maybe the empty throne of the Assyrian king Tukulti-Ninurta, depicted in a famous scene, is not due to the disappearance and silence of the gods (as he argues) but to some other reason. Jaynes proposes many interesting and ingenious interpretations of historical data, but in the end I was not convinced that these interpretations are sufficient to support his main thesis.

Despite missing his main objective, however, the book makes for a great read, presenting an interpretation of ancient history that is gripping and enlightening, if not fully convincing.

The mind of a fly

Researchers from the Howard Hughes Medical Institute, Google, and other institutions have published the neuron-level connectome of a significant part of the brain of the fruit fly, what they called the hemibrain. This may become one of the most significant advances in our understanding of the detailed structure of complex brains since the 302-neuron connectome of C. elegans was published in 1986 by a team headed by Sydney Brenner, in a famous article with the somewhat whimsical subtitle of The mind of a worm. Both efforts used an approach based on cutting the brain into very thin slices, followed by scanning electron microscopy and processing of the resulting images to obtain the 3D structure of the brain.

The neuron-level connectome of C. elegans was obtained after a painstaking effort that lasted decades, of manual annotation of the images obtained from the thousands of slices imaged using electron microscopy. As the brain of Drosophila melanogaster, the fruit fly, is thousands of times more complex, such an effort would have required several centuries if done by hand. Therefore, Google’s machine learning algorithms were trained to identify sections of neurons, including axons, cell bodies, and dendritic trees, as well as synapses and other components. After extensive training, the millions of images that resulted from the serial electron microscopy procedure were automatically annotated by the machine learning algorithms, enabling the team to complete in just a few years the detailed neuron-level connectome of a significant section of the fly brain, which includes roughly 25,000 neurons and 20 million synapses.

The results, published in the first of a number of articles, can be freely analyzed by anyone interested in the way a fly thinks. A Google account can be used to log in to the neuPrint explorer and an interactive exploration of the 3D electron microscopy images is also available with neuroglancer. Extensive non-technical coverage by the media is also widely available. See, for instance, the article in The Economist or the piece in The Verge.

Image from the HHMI Janelia Research Campus site.

The Big Picture: On the Origins of Life, Meaning and the Universe Itself

Sean Carroll’s 2016 book, The Big Picture, is a rather successful attempt to cover all the topics that are listed in the subtitle of the book: life, the universe, and everything. Carroll calls himself a poetic naturalist, his term for someone who believes physics explains everything but does not eliminate the need for other levels of description of the universe, such as biology, psychology, and sociology, to name a few.

Such an ambitious list of topics requires a fast-paced book, and that is exactly what you get. Organized in no less than 50 chapters, the book brings us from the very beginning of the universe to the many open questions related to intelligence, consciousness, and free will. In the process, we get to learn about what Carroll calls the “core theory”, the complete description of all the particles and forces that make up the universe as we know it today, encompassing basically the standard model and general relativity. Along the way, he takes us through the many things we know (and a few of the ones we don’t know) about quantum field theory and the strangeness of the quantum world, including a rather good description of the different ways of addressing this strangeness: the Copenhagen interpretation, hidden-variables theories, and (the one the author advocates) Everett’s many-worlds interpretation.

Although fast-paced, the book succeeds very well in connecting these different fields and going into some depth in each. The final sections of the book, covering life, intelligence, consciousness, and morals, are a very good introduction to these complex topics, many of them also addressed in Sean Carroll’s popular podcast, Mindscape.

Mindscape, a must-have podcast by Sean Carroll

Sean Carroll’s Mindscape podcast addresses topics as diverse as the interests of the author, including (but not limited to) physics, biology, philosophy, complexity, intelligence, and consciousness. Carroll has interviewed, in-depth, a large number of very interesting scientists, philosophers, writers, and thinkers, who come to talk about some of the most central open topics in science and philosophy.

Among many others, Daniel Dennett discusses minds and patterns; Max Tegmark, physics, simulation, and the multiverse; António Damásio, feelings, emotions, and evolution; Patricia Churchland, conscience and morality; and David Chalmers, the hard problem of consciousness.

In all the interviews, Sean Carroll conducts the conversation in an easy and interactive mode, not imposing his own views, not even on the more controversial topics where the interviewees hold diametrically opposed opinions.

If you are into science and into podcasts, you cannot miss this one.

In the theater of consciousness

Bernard Baars has been one of the few neuroscientists who has dared to face the central problem of consciousness head-on. This 1997 book, which follows his first and most popular book, “A cognitive theory of consciousness”, aims at shedding some light on that most interesting of phenomena: the emergence of conscious reasoning from the workings of atoms and molecules that follow the laws of physics. This book is one of his most relevant works and supports Global Workspace Theory (GWT), one of the few existing frameworks for describing the phenomenon of consciousness (the main alternative being Integrated Information Theory, IIT).

Baars’ work is probably not as widely known as it deserves to be, even though he is a famous author and neuroscientist. Unlike several other approaches, by authors as well-known as Daniel Dennett and Douglas Hofstadter, Baars tries to connect actual neuroscience knowledge with what we know about the phenomenon of consciousness.

He does not believe consciousness is an illusion, as several other authors (Dennett and Nørretranders, for instance) have argued. Instead, he argues that specific phenomena that occur in the cortex give rise to consciousness, and provides evidence that such is indeed the case. He argues for a principled approach to studying consciousness, treating the phenomenon as a variable, and looking for pairs of situations that are similar in most respects but differ with respect to consciousness.

He proposes a theater metaphor to model the way consciousness arises and provides some evidence that this may be a workable metaphor to understand exactly what goes on in the brain when conscious behavior occurs. He presents evidence from neuroimaging and from specific dysfunctions in the brain that the theater metaphor may, indeed, serve as the basis for the creation of actual conscious, synthetic systems. This work is today more relevant than ever, as we rapidly approach the limits of what can be learned with deep neural networks, which are not only unconscious but also unaware of what they are learning. Further advances in learning and in AI may depend critically on our ability to understand what consciousness is and how it can be used to make the learning of abstract concepts possible.

Do humankind’s best days lie ahead?

This book, which transcribes one of the several Munk Debates organized by an initiative financed by Peter and Melanie Munk, addresses the question of whether the future of humanity will be better or worse than the present.

The debate, also available in video, takes place between four formidable names, the wizards Steven Pinker and Matt Ridley (apologists of the theory that technology will continue to bring progress) and the prophets Alain de Botton and Malcolm Gladwell (doubters of the idea that further technological developments will keep improving the world).


The dialogue that takes place between the Pollyannas and the Cassandras (to use an expression coined in the debate itself) is vivid, interesting and, at times, highly emotional. None of the debaters doubts that progress has immensely improved the human condition in the last few centuries, but the consensus ends there. Will we be able to use science and technology to surmount the environmental, social, and political challenges faced by humanity, or have we already reached “peak development”, so that the future will be worse than the past? Read or watch the debate, and decide for yourself.

My take is that the Pollyannas, Steven Pinker and Matt Ridley, with their optimistic take on the future, win the debate by a large margin against the Cassandras. Their arguments that the world will continue to improve, based both on historical trends and on the hope that technology will solve the significant challenges we face, do not meet coherent resistance from Alain de Botton and Malcolm Gladwell. At the least, the Cassandras did not manage to convince me that famines, cybersecurity threats, climate change, and inequality will be enough to reverse the course of human progress.