You’re not the customer, you’re the product!

The attention that each one of us pays to an item, and the time we spend on a site, article, or application, is the most valuable commodity in the world, as witnessed by the fact that the companies that sell it, wholesale, are the largest in the world. Attracting and selling our attention is, indeed, the business of Google and Facebook but also, to a lesser extent, of Amazon, Apple, Microsoft, Tencent, and Alibaba. We may believe we are the customers of these companies but, in fact, many of the services they provide serve only to attract our attention and sell it to the highest bidder, in the form of advertising or personal information. In the words of Richard Serra and Carlota Fay Schoolman, later reused by a number of people including Tom Johnson: if you are not paying, “You’re not the customer; you’re the product.”

Attracting and selling attention is an old business, well described in Tim Wu’s book The Attention Merchants. First created by newspapers, then by radio and television, the market for attention came to maturity with the Internet. Although newspapers, radio programs, and television shows have all been designed to attract our attention and use it to sell advertising, none of them had the potential of the Internet, which can attract and retain our attention by tailoring content to each and every person.

The problem is that with this excessive customization comes a significant and very prevalent side effect. As sites, social networks, and content providers fight to attract our attention, they show us exactly the things we want to see, and not things as they are. Each person lives, nowadays, in a reality that is different from anyone else’s. The creation of a separate and different reality for each person has a number of negative consequences, including the creation of paranoia-inducing rabbit holes, the radicalization of opinions, the inability to establish democratic dialogue, and the difficulty of distinguishing reality from fabricated fiction.

Wu’s book addresses this issue in no light terms, but the Netflix documentary The Social Dilemma makes an even stronger point: customized content, as shown to us by social networks and other content providers, is unraveling society and creating a host of new and serious problems. Social networks are even more worrying than other content providers because they put pressure on children and young adults to conform to a reality that is fabricated and presented to them in order to retain (and resell) their attention.

Novacene: the future of humanity is digital?

As it says on the cover of the book, James Lovelock may well be “the great scientific visionary of our age”. He is probably best known for the Gaia Hypothesis, but he has made several other major contributions. While working for NASA, he was the first to propose looking for chemical biomarkers in the atmospheres of other planets as a sign of extraterrestrial life, a method that has been extensively used and has led to a number of interesting results, some of them very recent. He has argued for climate engineering methods to fight global warming, and has been a strong supporter of nuclear energy, by far the safest and least polluting form of energy currently available.

Lovelock has been an outspoken environmentalist, a strong voice against global warming, and the creator of the Gaia Hypothesis, the idea that all organisms on Earth are part of a synergistic and self-regulating system that seeks to maintain the conditions for life on Earth. The ideas he puts forward in this book are, therefore, surprising. To him, we are leaving the Anthropocene (a geological epoch characterized by the profound effect of humans on the Earth’s environment, still not recognized as a separate epoch by mainstream science) and entering the Novacene, an epoch in which digital intelligence will become the most important form of life on Earth and in near space.

Although it may seem like a position inconsistent with his previous arguments about the nature of life on Earth, I find the argument for the Novacene era convincing and coherent. Again, Lovelock appears as a visionary, extrapolating to its ultimate conclusion the trend of technological development that started with the industrial revolution.

As he says, “The intelligence that launches the age that follows the Anthropocene will not be human; it will be something wholly different from anything we can now conceive.”

To me, his argument that artificial intelligence, digital intelligence, will be our future, our offspring, is convincing. It will be as different from us as we are from the first animals that appeared hundreds of millions of years ago, which were themselves very different from the cells that started life on Earth. Four billion years after the first lifeforms appeared on Earth, life will finally create a new physical substrate, one that does not depend on DNA, water, or an Earth-like environment and is suited to space.

The Book of Why

“Correlation is not causation” is a mantra that you may have heard many times, calling attention to the fact that no matter how strong the relations one may find between variables, they are not conclusive evidence of a cause-and-effect relationship. In fact, most modern AI and Machine Learning techniques look for relations between variables to infer useful classifiers, regressors, and decision mechanisms. Statistical studies, with either big or small data, have also generally abstained from explicitly inferring causality between phenomena, except when randomized controlled trials are used, virtually the only case where causality can be inferred with little or no risk of confounding.

In The Book of Why, Judea Pearl, in collaboration with Dana Mackenzie, ups the ante and argues not only that one should not stay away from reasoning about causes and effects, but also that the decades-old practice of avoiding causal reasoning has been one of the reasons for our limited success in many fields, including Artificial Intelligence.

Pearl’s main point is that causal reasoning is not only essential for higher-level intelligence but is also the natural way we, humans, think about the world. Pearl, a researcher world-renowned for his work in probabilistic reasoning, has made many contributions to AI and statistics, including the well-known Bayesian networks, an approach that exposes regularities in joint probability distributions. Still, he thinks that all those contributions pale in comparison with the revolution he spearheaded on the effective use of causal reasoning in statistics.

Pearl argues that statistical-based AI systems are restricted to finding associations between variables, stuck in what he calls rung 1 of the Ladder of Causation: Association. Seeing associations leads to a very superficial understanding of the world since it restricts the actor to the observation of variables and the analysis of relations between them. In rung 2 of the Ladder, Intervention, actors can intervene and change the world, which leads to an understanding of cause and effect. In rung 3, Counterfactuals, actors can imagine different worlds, namely what would have happened if the actor did this instead of that.

This may seem a bit abstract, but that is where the book becomes a very pleasant surprise. Although it is a book written for the general public, the authors go deeply into the questions, getting to the point where they explain the do-calculus, a methodology Pearl and his students developed to calculate, under a set of dependence/independence assumptions, what would happen if a specific variable were changed in a possibly complex network of interconnected variables. In fact, graphical representations of these networks, causal diagrams, are at the root of the methods presented and are extensively used in the book to illustrate many challenges, problems, and paradoxes.
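To get a flavor of what this kind of calculation looks like, consider the simplest possible causal diagram: a confounder Z that influences both a treatment X and an outcome Y (Z → X, Z → Y, X → Y). In this case the adjustment reduces to the so-called backdoor formula, P(Y | do(X)) = Σz P(Y | X, Z=z) P(Z=z). The sketch below uses made-up probabilities, purely to show how conditioning on X (observation) and setting X (intervention) give different answers:

```python
# A minimal sketch of backdoor adjustment on the simplest confounded
# causal diagram: Z -> X, Z -> Y, and X -> Y.
# All probabilities below are invented, purely for illustration.

p_z = {0: 0.5, 1: 0.5}                      # P(Z = z)
p_x_given_z = {0: 0.2, 1: 0.8}              # P(X = 1 | Z = z)
p_y_given_xz = {                            # P(Y = 1 | X = x, Z = z)
    (0, 0): 0.1, (0, 1): 0.6,
    (1, 0): 0.3, (1, 1): 0.8,
}

def p_y_observational(x):
    """P(Y=1 | X=x): conditioning on X skews Z via self-selection."""
    # Bayes: P(Z=z | X=x) is proportional to P(X=x | Z=z) * P(Z=z)
    joint = {z: (p_x_given_z[z] if x == 1 else 1 - p_x_given_z[z]) * p_z[z]
             for z in p_z}
    norm = sum(joint.values())
    return sum(p_y_given_xz[(x, z)] * joint[z] / norm for z in p_z)

def p_y_do(x):
    """P(Y=1 | do(X=x)): backdoor adjustment, weighting by P(Z), not P(Z|X)."""
    return sum(p_y_given_xz[(x, z)] * p_z[z] for z in p_z)

naive_effect = p_y_observational(1) - p_y_observational(0)   # 0.70 - 0.20 = 0.50
causal_effect = p_y_do(1) - p_y_do(0)                        # 0.55 - 0.35 = 0.20
print(f"naive: {naive_effect:.2f}, causal: {causal_effect:.2f}")
```

With these numbers the naive comparison more than doubles the true causal effect, because Z pushes the same individuals toward both the treatment and the outcome. The do-calculus generalizes this adjustment to much more complex diagrams.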

The chapter on paradoxes is particularly entertaining, covering the Monty Hall, Berkson, and Simpson paradoxes, all of them quite puzzling. My favorite instance of Simpson’s paradox is the Berkeley admissions puzzle, the subject of a famous 1975 Science article. The paradox comes from the fact that, at the time, Berkeley admitted 44% of male applicants to graduate studies, but only 35% of female applicants. However, each particular department (at Berkeley, as in many other places, departments decide admissions) made decisions that were more favorable to women than to men. As it turns out, this strange state of affairs has a perfectly reasonable explanation, but you will have to read the book to find out.
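For readers who want to see the mechanics of the reversal without spoiling the book’s explanation, here is a small sketch with invented numbers (not the actual Berkeley figures) that reproduces the structure of the paradox:

```python
# Simpson's paradox with made-up numbers (NOT the actual Berkeley data):
# each department admits women at a HIGHER rate, yet the aggregate
# admission rate is higher for men, because most women applied to the
# more selective department.
admissions = {
    # department: {group: (applicants, admitted)}
    "A": {"men": (800, 500), "women": (100, 65)},   # less selective
    "B": {"men": (200, 20),  "women": (800, 96)},   # more selective
}

def rate(applicants, admitted):
    return admitted / applicants

for dept, groups in admissions.items():
    men, women = rate(*groups["men"]), rate(*groups["women"])
    print(f"Dept {dept}: men {men:.1%}, women {women:.1%}")
    assert women > men  # women do better in EVERY department

# Aggregate over departments: sum applicants and admitted per group.
totals = {g: [sum(dept[g][i] for dept in admissions.values()) for i in (0, 1)]
          for g in ("men", "women")}
men_overall = rate(*totals["men"])       # 520/1000 = 52.0%
women_overall = rate(*totals["women"])   # 161/900  ~ 17.9%
print(f"Overall: men {men_overall:.1%}, women {women_overall:.1%}")
assert men_overall > women_overall       # ...yet men do better overall
```

The reversal disappears as soon as the comparison is made within departments, which is exactly the kind of question that causal diagrams help formalize: whether department choice should be conditioned on or not.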

The book contains many fascinating stories and includes a surprising number of personal accounts, making for very entertaining and instructive reading.

Note: the ladder of causation figure is from the book itself.

A conversation with GPT-3 on COVID-19

GPT-3 is the most advanced language model ever created, a product of an effort by OpenAI to create a publicly available system that can be used to advance research and applications in natural language. The model itself, published less than three months ago, is an autoregressive language model with 175 billion parameters and was trained on a dataset that includes almost a trillion words.
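“Autoregressive” simply means that the model repeatedly predicts the next token from the tokens generated so far. The toy sketch below replaces GPT-3’s 175-billion-parameter Transformer with bigram counts over a tiny made-up corpus, an enormous simplification meant only to show the generation loop, not the actual model:

```python
import random
from collections import Counter, defaultdict

# Toy illustration of autoregressive text generation: estimate
# P(next word | current word) from bigram counts, then sample a
# continuation one token at a time. GPT-3 does the same loop, but
# conditions a huge Transformer on the WHOLE context, not one word.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

next_counts = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    next_counts[cur][nxt] += 1

def generate(start, length, seed=0):
    rng = random.Random(seed)
    tokens = [start]
    for _ in range(length):
        candidates = next_counts.get(tokens[-1])
        if not candidates:          # no observed continuation: stop
            break
        words, counts = zip(*candidates.items())
        tokens.append(rng.choices(words, weights=counts)[0])
    return " ".join(tokens)

print(generate("the", 6))
```

Scaling this idea from bigram counts to a model with 175 billion parameters, trained on nearly a trillion words, is what makes GPT-3’s completions so strikingly fluent.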

Impressive as that may be, it is difficult to get some intuition of what such a complex model, trained on billions of human-generated texts, can actually do. Can it be used effectively in translation tasks or in answering questions?

To get some idea of what a sufficiently powerful statistical model of human language can do, I challenge you to have a look at this conversation with GPT-3, published by Kirk Ouimet a few days ago. It relates a dialogue between him and GPT-3 on the topic of COVID-19. The most impressive thing about this conversation with an AI is not that it gets many of the responses right (others, not so much). What impressed me is that the model was trained on a dataset created before the existence of COVID-19, which provided GPT-3 no specific knowledge about this pandemic. Whatever answers GPT-3 gives to the questions related to COVID-19 are obtained with knowledge that was already available before the pandemic began.

This certainly raises some questions about whether advanced AI systems should be more widely used to define and implement policies important to the human race.

If you want more information about GPT-3, it is easy to find on a multitude of sites with tutorials and demonstrations, such as TheNextWeb, MIT Technology Review, and many, many others.

Instantiation, another great collection of Greg Egan’s short stories

Greg Egan is a master of short-story writing. His Axiomatic collection of short stories is one of my favorites. This new collection keeps Egan’s knack for communicating deep concepts in few words and dives deeper into the concepts of virtual reality and the impact of technology on society.

The first story, The Discrete Charm of the Turing Machine, could hardly be more relevant these days, when discussions on the economic impacts of Artificial Intelligence are taking place everywhere. But the main through-line of the book is the series of stories in which sentient humans who are, in fact, characters in virtual reality games plot to break free of their slave condition. To find out whether they succeed or not, you will have to read the book yourself!

PS: As a joke, I leave here a meme of unknown origin

Human Compatible: AI and the Problem of Control

Stuart Russell, one of the best-known researchers in Artificial Intelligence and author of the best-selling textbook Artificial Intelligence: A Modern Approach, addresses, in his most recent book, what is probably one of the most interesting open questions in science and technology: can we control the artificially intelligent systems that will be created in the decades to come?

In Human Compatible: AI and the Problem of Control, Russell formulates and answers the following, very important, question: what are the consequences if we succeed in creating a truly intelligent machine?

The question brings with it many other questions, of course. Will intelligent machines be dangerous to humanity? Will they take over the world? Could we control machines that are more intelligent than ourselves? Many writers and scientists, like Nick Bostrom, Stephen Hawking, Elon Musk, Sam Harris, and Max Tegmark, have raised these questions, several of them claiming that superintelligent machines could be around the corner and become extremely dangerous to humanity.

However, most AI researchers have dismissed these questions as irrelevant, concentrated as they are on the development of specific techniques and well aware that Artificial General Intelligence is far away, if it is achievable at all. Andrew Ng, a famous AI researcher, said that worrying about superintelligent machines is like worrying about the overpopulation of Mars:

There could be a race of killer robots in the far future, but I don’t work on not turning AI evil today for the same reason I don’t worry about the problem of overpopulation on the planet Mars.

Another famous Machine Learning researcher, Pedro Domingos, in The Master Algorithm, his bestselling book about Machine Learning, the driving force behind modern AI, also ignores these issues, concentrating on concrete technologies and applications. In fact, he often says that he is more worried about dumb machines than about superintelligent ones.

Stuart Russell’s book is different, making the point that we may, indeed, lose control of such systems, even though he does not believe they could harm us out of malice or intention. In fact, Russell is quite dismissive of the possibility that machines could one day become truly intelligent and conscious, a position I find, personally, very brave, 70 years after Alan Turing argued exactly the opposite.

Yet, Russell believes we may be in trouble if sufficiently intelligent and powerful machines have objectives that are not well aligned with the real objectives of their designers. His point is that a poorly conceived AI system, aiming to optimize some badly specified function, can lead to bad results and even tragedy if such a system controls critical facilities. One well-known example is Bostrom’s paperclip problem, in which an AI system designed to maximize the production of paperclips turns the whole planet into a paperclip factory, eliminating humanity in the process. As in the cases Russell fears, the problem comes not from a machine that wants to kill all humans, but from a machine that was designed with the wrong objectives in mind and does not stop until it achieves them.

To avoid that risk of misalignment between human and machine objectives, Russell proposes designing provably beneficial AI systems, based on three principles that can be summarized as:

  • Aim to maximize the realization of human preferences
  • Assume uncertainty about these preferences
  • Learn these preferences from human behavior

Although I am not fully aligned with Russell in all the positions he defends in this book, it makes for interesting reading, coming from someone who is a knowledgeable AI researcher and cares about the problems of alignment and control of AI systems.

The Big Picture: On the Origins of Life, Meaning and the Universe Itself

Sean Carroll’s 2016 book, The Big Picture, is a rather successful attempt to cover all the topics listed in the subtitle of the book: life, the universe, and everything. Carroll calls himself a poetic naturalist, shorthand for someone who believes physics explains everything but does not eliminate the need for other levels of description of the universe, such as biology, psychology, and sociology, to name a few.

Such an ambitious list of topics requires a fast-paced book, and that is exactly what you get. Organized in no fewer than 50 chapters, the book brings us from the very beginning of the universe to the many open questions related to intelligence, consciousness, and free will. In the process, we get to learn about what Carroll calls the “core theory”, the complete description of all the particles and forces that make up the universe as we know it today, encompassing basically the standard model and general relativity. He also takes us through the many things we know (and a few of the ones we don’t) about quantum field theory and the strangeness of the quantum world, including a rather good description of the different ways of addressing this strangeness: the Copenhagen interpretation, hidden-variable theories, and (the one the author advocates) Everett’s many-worlds interpretation.

Although fast-paced, the book succeeds very well in connecting these different fields and exploring each in some depth. The final sections, covering life, intelligence, consciousness, and morals, are a very good introduction to these complex topics, many of which are also addressed in Sean Carroll’s popular podcast, Mindscape.

Mindscape, a must-have podcast by Sean Carroll

Sean Carroll’s Mindscape podcast addresses topics as diverse as the interests of the author, including (but not limited to) physics, biology, philosophy, complexity, intelligence, and consciousness. Carroll has interviewed, in-depth, a large number of very interesting scientists, philosophers, writers, and thinkers, who come to talk about some of the most central open topics in science and philosophy.

Among many others, Daniel Dennett discusses minds and patterns; Max Tegmark, physics, simulation, and the multiverse; António Damásio, feelings, emotions, and evolution; Patricia Churchland, conscience and morality; and David Chalmers, the hard problem of consciousness.

In all the interviews, Sean Carroll conducts the conversation in an easy, engaging manner, not imposing his own views, not even on the more controversial topics where the interviewees hold opinions diametrically opposed to his.

If you are into science and into podcasts, you cannot miss this one.

In the theater of consciousness

Bernard Baars is one of the few neuroscientists who have dared to face the central problem of consciousness head-on. This 1997 book, which follows his first and most popular book, “A Cognitive Theory of Consciousness”, aims at shedding some light on that most interesting of phenomena: the emergence of conscious reasoning from the workings of atoms and molecules that follow the laws of physics. The book is one of his most relevant works and supports the Global Workspace Theory (GWT), one of the few existing frameworks for describing the phenomenon of consciousness (the main alternative being Integrated Information Theory, IIT).

Baars’ work is probably not as widely known as it deserves to be, even though he is a famous author and neuroscientist. Unlike several other approaches, by authors as well-known as Daniel Dennett and Douglas Hofstadter, Baars tries to connect actual neuroscience knowledge with what we know about the phenomenon of consciousness.

He does not believe consciousness is an illusion, as several other authors (Dennett and Nørretranders, for instance) have argued. Instead, he argues that specific phenomena that occur in the cortex give rise to consciousness, and provides evidence that such is indeed the case. He argues for a principled approach to the study of consciousness, treating the phenomenon as a variable and looking for pairs of situations that are similar in most respects but differ in whether consciousness is present.

He proposes a theater metaphor to model the way consciousness arises and provides some evidence that this may be a workable metaphor for understanding exactly what goes on in the brain when conscious behavior occurs. He presents evidence, from neuroimaging and from specific brain dysfunctions, that the theater metaphor may indeed serve as the basis for the creation of actual conscious, synthetic systems. This work is today more relevant than ever, as we rapidly approach the limits of what can be learned with deep neural networks, which are not only unconscious but also unaware of what they are learning. Further advances in learning and in AI may depend critically on our ability to understand what consciousness is and how it can be used to make the learning of abstract concepts possible.

I am a strange loop – by Douglas Hofstadter

Douglas Hofstadter has always been fond of recursion and self-referential loops, the central topic of his acclaimed “Gödel, Escher, Bach”. In his 2007 book, “I Am a Strange Loop”, Hofstadter goes even deeper into the idea that self-referential loops are the key to explaining consciousness and self-awareness. The idea that consciousness is the result of our ability to look inside ourselves, and to model ourselves in the world, is explored in this book, together with a number of related issues.

To Hofstadter, Gödel’s theorem, and the way Gödel showed that any sufficiently complex mathematical system can be used to assert things about itself, is strongly related to our ability to reflect on our own selves, the phenomenon that, according to the author, creates consciousness.

Hofstadter uses the terms “soul” and “consciousness” almost interchangeably, meaning that, to him, our soul and our consciousness – our inner light – are one and the same. Other animals, such as dogs or cats (but not mosquitoes), may also have souls, although “smaller” and less complex than ours. One of the strongest ideas of the book, much cherished by the author, is that your soul is mostly contained within your brain but is also present, at varying lower levels of fidelity, in the brains of other people who know you and have models of you inside their own brains.

In the process of describing these ideas, Hofstadter also dispatches a few “sacred cows”, such as the idea that “zombies” are possible, even in principle; the “inverted spectrum” conundrum (is your red the same as my red?); and the (to him impossible) idea of free will.