The Book of Why

"Correlation is not causation" is a mantra you may have heard many times, calling attention to the fact that no matter how strong the relations one may find between variables, they are not conclusive evidence of a cause-and-effect relationship. In fact, most modern AI and machine learning techniques look for relations between variables in order to infer useful classifiers, regressors, and decision mechanisms. Statistical studies, with either big or small data, have also generally abstained from explicitly inferring causality between phenomena, except when randomized controlled trials are used, virtually the only setting where causality can be inferred with little or no risk of confounding.

In The Book of Why, Judea Pearl, in collaboration with Dana Mackenzie, ups the ante and argues not only that one should not stay away from reasoning about causes and effects, but also that the decades-old practice of avoiding causal reasoning has been one of the reasons for our limited success in many fields, including Artificial Intelligence.

Pearl’s main point is that causal reasoning is not only essential for higher-level intelligence but is also the natural way we, humans, think about the world. Pearl, world-renowned for his work in probabilistic reasoning, has made many contributions to AI and statistics, including the well-known Bayesian networks, an approach that exposes regularities in joint probability distributions. Still, he thinks that all those contributions pale in comparison with the revolution he spearheaded: the effective use of causal reasoning in statistics.

Pearl argues that statistics-based AI systems are restricted to finding associations between variables, stuck in what he calls rung 1 of the Ladder of Causation: Association. Seeing associations leads to a very superficial understanding of the world, since it restricts the actor to the observation of variables and the analysis of relations between them. In rung 2 of the Ladder, Intervention, actors can intervene and change the world, which leads to an understanding of cause and effect. In rung 3, Counterfactuals, actors can imagine different worlds, namely what would have happened if the actor had done this instead of that.

This may seem a bit abstract, but that is where the book becomes a very pleasant surprise. Although it is written for the general public, the authors go deeply into the questions, getting to the point where they explain the do-calculus, a methodology Pearl and his students developed to calculate, under a set of dependence/independence assumptions, what would happen if a specific variable were changed in a possibly complex network of interconnected variables. In fact, graphical representations of these networks, causal diagrams, are at the root of the methods presented and are extensively used in the book to illustrate many challenges, problems, and paradoxes.
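To give a flavor of what this looks like in practice, here is a minimal sketch (my own, not from the book) of the back-door adjustment formula, one of the results the do-calculus makes rigorous. The causal diagram and all probability tables are hypothetical, chosen only to show how the interventional quantity P(Y=1 | do(X=1)) can differ from the observational P(Y=1 | X=1) when a confounder Z is present:

```python
# Toy causal diagram with a single confounder Z:  Z -> X,  Z -> Y,  X -> Y.
# All probability tables are hypothetical numbers, for illustration only.

p_z = {0: 0.7, 1: 0.3}                        # P(Z)
p_x1_given_z = {0: 0.2, 1: 0.8}               # P(X=1 | Z)
p_y1_given_xz = {(0, 0): 0.10, (0, 1): 0.40,  # P(Y=1 | X, Z)
                 (1, 0): 0.30, (1, 1): 0.70}

def p_y1_do_x(x):
    """Interventional P(Y=1 | do(X=x)) via back-door adjustment:
    sum over z of P(Y=1 | x, z) * P(z), since Z blocks the back-door path."""
    return sum(p_y1_given_xz[(x, z)] * p_z[z] for z in p_z)

def p_y1_given_x(x):
    """Observational P(Y=1 | X=x), which mixes in the confounding by Z."""
    def p_x_given_z(z):
        return p_x1_given_z[z] if x == 1 else 1 - p_x1_given_z[z]
    joint = {z: p_x_given_z(z) * p_z[z] for z in p_z}   # P(X=x, Z=z)
    total = sum(joint.values())                          # P(X=x)
    return sum(p_y1_given_xz[(x, z)] * joint[z] / total for z in p_z)

print(f"P(Y=1 | X=1)     = {p_y1_given_x(1):.3f}")   # ~0.553 (observational)
print(f"P(Y=1 | do(X=1)) = {p_y1_do_x(1):.3f}")      # 0.420 (interventional)
```

With these made-up numbers, the observational value (about 0.55) overstates the interventional one (0.42), because Z pushes X and Y in the same direction; that gap is exactly what causal diagrams help to expose and correct.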

The chapter on paradoxes is particularly entertaining, covering the Monty Hall, Berkson, and Simpson paradoxes, all of them quite puzzling. My favorite instance of Simpson's paradox is the Berkeley admissions puzzle, the subject of a famous 1975 Science article. The paradox comes from the fact that, at the time, Berkeley admitted 44% of male applicants to graduate studies but only 35% of female applicants. However, each individual department (admissions decisions at Berkeley are made by the departments, as in many other places) made decisions that were more favorable to women than to men. As it turns out, this strange state of affairs has a perfectly reasonable explanation, but you will have to read the book to find out.
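Without giving away the book's explanation, here is a toy sketch of the arithmetic behind such a reversal, with entirely made-up numbers (not the actual Berkeley data). In it, each of two hypothetical departments admits women at a higher rate, yet the aggregate rate favors men:

```python
# Hypothetical (applicants, admitted) counts per department and gender,
# constructed to reproduce the structure of Simpson's paradox.
data = {
    "Dept A": {"men": (80, 50), "women": (20, 15)},  # less selective
    "Dept B": {"men": (20, 5),  "women": (80, 24)},  # more selective
}

def rate(applicants, admitted):
    return admitted / applicants

# Per department, women are admitted at a higher rate than men
for dept, groups in data.items():
    m, w = rate(*groups["men"]), rate(*groups["women"])
    print(f"{dept}: men {m:.0%}, women {w:.0%}  (women favored: {w > m})")

# Aggregated over departments, the direction of the effect reverses
totals = {g: [sum(data[d][g][i] for d in data) for i in (0, 1)]
          for g in ("men", "women")}
for g, (apps, adm) in totals.items():
    print(f"Overall {g}: {rate(apps, adm):.0%}")
```

The reversal hinges on how applicants distribute across departments of very different selectivity, precisely the kind of structure that a causal diagram makes visible at a glance.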

The book contains many fascinating stories and includes a surprising number of personal accounts, making for a very entertaining and instructive read.


A conversation with GPT-3 on COVID-19

GPT-3 is the most advanced language model ever created, a product of an effort by OpenAI to create a publicly available system that can be used to advance research and applications in natural language. The model itself, published less than three months ago, is an autoregressive language model with 175 billion parameters, trained on a dataset that includes almost a trillion words.
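For readers unfamiliar with the jargon, "autoregressive" simply means that the model generates text one token at a time, sampling each token from a distribution conditioned on everything generated so far. The following minimal sketch illustrates the idea, with a tiny hypothetical bigram table standing in for the 175-billion-parameter model:

```python
import random

# Hypothetical next-token distributions, conditioned here on just the
# previous token; GPT-3 conditions on thousands of preceding tokens.
bigram_model = {
    "<s>":      {"the": 0.6, "a": 0.4},
    "the":      {"model": 0.5, "data": 0.5},
    "a":        {"model": 0.7, "dataset": 0.3},
    "model":    {"predicts": 1.0},
    "data":     {"helps": 1.0},
    "dataset":  {"helps": 1.0},
    "predicts": {"</s>": 1.0},
    "helps":    {"</s>": 1.0},
}

def generate(model, max_len=10):
    """Autoregressive generation: each token is sampled conditioned on
    the sequence produced so far, until an end-of-sentence token."""
    tokens = ["<s>"]
    while tokens[-1] != "</s>" and len(tokens) < max_len:
        dist = model[tokens[-1]]
        tokens.append(random.choices(list(dist), weights=list(dist.values()))[0])
    return " ".join(tokens[1:-1])

print(generate(bigram_model))  # e.g. "the model predicts"
```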

Impressive as that may be, it is difficult to get an intuition for what such a complex model, trained on billions of human-generated texts, can actually do. Can it be used effectively in translation tasks or in answering questions?

To get some idea of what a sufficiently high-level statistical model of human language can do, I challenge you to have a look at this conversation with GPT-3, published by Kirk Ouimet a few days ago. It relates a dialogue between him and GPT-3 on the topic of COVID-19. The most impressive thing about this conversation with an AI is not that it gets many of the responses right (others not so much). What impressed me is that the model was trained on a dataset created before the existence of COVID-19, so GPT-3 had no specific knowledge about this pandemic. Whatever answers GPT-3 gives to questions about COVID-19 are obtained from knowledge that was already available before the pandemic began.

This certainly raises questions about whether advanced AI systems should be more widely used to define and implement policies important to the human race.

If you want more information about GPT-3, it is easy to find on a multitude of sites with tutorials and demonstrations, such as TheNextWeb, MIT Technology Review, and many, many others.

Meet Duplex, your new assistant, courtesy of Google

Advances in natural language processing have enabled systems such as Siri, Alexa, Google Assistant, or Cortana to be at the service of anyone owning a smartphone or a computer. Still, so far, none of these systems has managed to cross the thin dividing line that would make us take them for humans. When we ask Alexa to play music or Siri to dial a telephone number, we know very well that we are talking with a computer, and the replies of these systems would remind us, were we to forget it.

It was to be expected that, with the evolution of the technology, this type of interaction would become more and more natural, possibly reaching the point where a computer could impersonate a real human, taking us closer to the vision of Alan Turing: a situation where you cannot tell a human apart from a computer simply by talking to both.

In an event widely reported in the media, at the I/O 2018 conference, Google demonstrated Duplex, a system that is able to process and execute requests in specific domains, interacting in a very human way with human operators. While Google states that the system is still under development and only able to handle very specific situations, you get the feeling that, soon enough, digital assistants will be able to interact with humans without disclosing their artificial nature. You can read the Google AI blog post here, or just listen to a couple of examples, where Duplex is scheduling a haircut or making a restaurant reservation. Both the speech recognition and speech synthesis systems, as well as the underlying knowledge base and natural language processing engines, operate flawlessly in these cases, reinforcing the widely held expectation that AI systems will soon be replacing humans in many specific tasks.


Empathy with robots – science fiction or reality?

A number of popular videos made available by Boston Dynamics (a Google company) have shown different aspects of the potential of bipedal and quadrupedal robots to move around in rough terrain and to carry out complex tasks. The behavior of the robots is strangely natural, even though they are clearly man-made mechanical contraptions.


In a recent interview given at Disrupt SF, Boston Dynamics CEO Marc Raibert put the emphasis on making the robots friendlier. Being around a 250-pound robot that can move very fast may be very dangerous for humans, and the company is creating smaller and friendlier robots that can move around safely inside people's houses.

This means that these robots can have many applications other than military ones. They may serve as butlers, servants, or even pets.

It is hard to predict what sort of emotional relationship these robots may eventually be able to create with their owners. Their animal-like behavior makes them almost likeable to us, despite their obviously mechanical appearance.

In some of these videos, humans intervene to make the jobs harder for the robots, kicking them and moving things around in a way that looks frustrating to the robots. To many viewers, this may seem to amount to acts of actual robot cruelty, since the robots seem to become sad and frustrated. You can see some of those images around minute 3 of the video made available by TechCrunch and Boston Dynamics, or in a (fake) commercial that circulated online.

Our idea that robots and machines don't have feelings may be challenged in the near future, when human- or animal-like mechanical creatures become common. After all, extensive emotional attachment to Roomba robotic vacuum cleaners is nothing new!


Would you throw the fat guy off the bridge?

The recent fatal accident involving a Tesla in Autopilot mode did not involve any difficult moral decisions by the automatic driving system, as it resulted from insufficient awareness of the conditions, both by the Tesla Autopilot and by the (late) driver.

However, it brings to the fore other cases where more difficult moral decisions may need to be made by intelligent systems in charge of driving autonomous vehicles. The famous trolley problem has been the subject of many analyses, articles, and discussions, and it remains a challenging topic in philosophy and ethics.

In this problem, which has many variants, you have to decide whether to pull a lever and divert an oncoming runaway trolley, saving five people but killing one innocent bystander. In one variant of the problem, you cannot pull a lever, but you can throw a fat man off a bridge, thus stopping the trolley but killing the man. People's responses vary widely with the specific variant under analysis.

These complex moral dilemmas have been addressed in detail many times, and a good overview is presented in Thomas Cathcart's book The Trolley Problem; or, Would You Throw the Fat Guy Off the Bridge?


In order to obtain more data about difficult moral decisions, a group of researchers at MIT created the Moral Machine website, where you can try to decide between the tough choices available yourself.

Instead of the more contrived cases involved in the trolley problem, you have to decide whether, as a driver, you should swerve or not, in the process deciding the fate of a number of passengers and bystanders.

Why don’t you try it?

Bill Gates recommends the two books to read if you want to understand Artificial Intelligence

Also at the 2016 Code Conference, Bill Gates recommended the two books you need to read if you want to understand Artificial Intelligence. By coincidence (or not), these two books are exactly the ones I have previously covered in this blog: The Master Algorithm and Superintelligence.

Given Bill Gates's strong recent interest in Artificial Intelligence, there is a fair chance that Windows 20 will have a natural language interface just like the one in the movie Her (caution, spoiler below).

If you haven’t seen the movie, maybe you should. It is about a guy who falls in love with the operating system of his computer.

So, there is no doubt that operating systems will keep evolving in order to offer more natural user interfaces. Will they ever reach the point where you can fall in love with them?

Crazy chatbots or smart personal assistants?

Well-known author, scientist, and futurologist Ray Kurzweil is reportedly working with Google to create a chatbot named Danielle. Chatbots, i.e., natural language programs that get their input from social networks and other groups on the web, have long been of interest to researchers, since they represent an easy way to test new technologies in the real world.

Very recently, a chatbot created by Microsoft, Tay, made the news because it became "a Hitler-loving sex robot" after chatting with teens on the web for less than 24 hours. Tay was an AI created to speak like a teenage girl, in an experiment intended to improve Microsoft's conversational understanding software. The chatbot was rapidly "deleted" after it started comparing Hitler, in favorable terms, with well-known contemporary politicians.

Presumably, Danielle, reportedly under development by Google with the cooperation of Ray Kurzweil, will be released later this year. According to Kurzweil, Danielle will be able to maintain relevant, meaningful conversations, but he still points to 2029 as the year when a chatbot will pass the Turing test, becoming indistinguishable from a human. Kurzweil, the author of The Singularity Is Near and many other books on the future of technology, is a firm believer in the singularity, a point in human history when society will undergo such radical change that it will become unrecognizable to contemporary humans.


In a brief video interview (which has since been removed from YouTube), Kurzweil describes the Google chatbot project and the hopes he pins on it.

While chatbots may not look very interesting, unless you have a lot of spare time on your hands, the technology can be used to create intelligent personal assistants. These assistants can take verbal instructions and act on your behalf, and may therefore become very useful, almost indispensable "tools". As Austin Okere puts it in this article, "in five or ten years, when we have got over our skepticism and become reliant upon our digital assistants, we will wonder how we ever got along without them."