The Book of Why

“Correlation is not causation” is a mantra you may have heard many times, calling attention to the fact that no matter how strong the relations one may find between variables, they are not conclusive evidence of a cause and effect relationship. In fact, most modern AI and Machine Learning techniques look for relations between variables in order to infer useful classifiers, regressors, and decision mechanisms. Statistical studies, with either big or small data, have also generally abstained from explicitly inferring causality between phenomena, except when randomized controlled trials are used, virtually the only setting where causality can be inferred with little or no risk of confounding.

In The Book of Why, Judea Pearl, in collaboration with Dana Mackenzie, ups the ante and argues not only that one should not stay away from reasoning about causes and effects, but also that the decades-old practice of avoiding causal reasoning has been one of the reasons for our limited success in many fields, including Artificial Intelligence.

Pearl’s main point is that causal reasoning is not only essential for higher-level intelligence but is also the natural way we, humans, think about the world. Pearl, world-renowned for his work in probabilistic reasoning, has made many contributions to AI and statistics, including the well-known Bayesian networks, an approach that exposes regularities in joint probability distributions. Still, he thinks that all those contributions pale in comparison with the revolution he spearheaded on the effective use of causal reasoning in statistics.

Pearl argues that statistics-based AI systems are restricted to finding associations between variables, stuck in what he calls rung 1 of the Ladder of Causation: Association. Seeing associations leads to a very superficial understanding of the world, since it restricts the actor to the observation of variables and the analysis of relations between them. In rung 2 of the Ladder, Intervention, actors can intervene and change the world, which leads to an understanding of cause and effect. In rung 3, Counterfactuals, actors can imagine different worlds, namely what would have happened if the actor had done this instead of that.

This may seem a bit abstract, but that is where the book becomes a very pleasant surprise. Although it is written for the general public, the authors go deeply into the questions, getting to the point where they explain the do-calculus, a methodology Pearl and his students developed to calculate, under a set of dependence/independence assumptions, what would happen if a specific variable were changed in a possibly complex network of interconnected variables. Graphical representations of these networks, causal diagrams, are at the root of the methods presented and are extensively used in the book to illustrate many challenges, problems, and paradoxes.
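
To make the flavor of these calculations concrete, here is a minimal sketch, in plain Python, of the backdoor adjustment formula, one of the rules that the do-calculus makes rigorous. All the probability tables below are made up for illustration, and the assumed diagram is the simplest confounded one: Z influences both X and Y, and X influences Y.

```python
# Hypothetical conditional probability tables for the diagram
# Z -> X, Z -> Y, X -> Y, with all variables binary.
p_z = {0: 0.7, 1: 0.3}                      # P(Z)
p_x_given_z = {0: 0.2, 1: 0.8}              # P(X=1 | Z=z)
p_y_given_xz = {(0, 0): 0.1, (0, 1): 0.5,   # P(Y=1 | X=x, Z=z)
                (1, 0): 0.3, (1, 1): 0.7}

# Observational quantity P(Y=1 | X=1): Z is *not* held fixed, so the
# confounder leaks information about Y through the backdoor path.
num = sum(p_y_given_xz[(1, z)] * p_x_given_z[z] * p_z[z] for z in (0, 1))
den = sum(p_x_given_z[z] * p_z[z] for z in (0, 1))
p_y_given_x1 = num / den

# Interventional quantity P(Y=1 | do(X=1)): the backdoor adjustment
# averages over Z with its *marginal* distribution, cutting the Z -> X arrow.
p_y_do_x1 = sum(p_y_given_xz[(1, z)] * p_z[z] for z in (0, 1))

print(f"P(Y=1 | X=1)     = {p_y_given_x1:.3f}")   # seeing: 0.553
print(f"P(Y=1 | do(X=1)) = {p_y_do_x1:.3f}")      # doing:  0.420
```

The gap between the two numbers is exactly the difference between rung 1 (seeing) and rung 2 (doing) of the Ladder.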

In fact, the chapter on paradoxes is particularly entertaining, covering the Monty Hall, Berkson, and Simpson paradoxes, all of them quite puzzling. My favorite instance of Simpson’s paradox is the Berkeley admissions puzzle, the subject of a famous 1975 Science article. The paradox comes from the fact that, at the time, Berkeley admitted 44% of male applicants to graduate studies, but only 35% of female applicants. However, each particular department (at Berkeley, as in many other places, admissions are decided by the departments) made decisions that were more favorable to women than to men. As it turns out, this strange state of affairs has a perfectly reasonable explanation, but you will have to read the book to find out.
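
The arithmetic of such a reversal is easy to reproduce, though. The snippet below uses made-up numbers, not the actual Berkeley data: each hypothetical department admits women at a higher rate, yet the aggregate rate favors men.

```python
# (applicants, admitted) per department and sex -- hypothetical figures.
data = {
    "Dept A": {"men": (800, 500), "women": (100, 70)},
    "Dept B": {"men": (200, 20),  "women": (900, 150)},
}

totals = {"men": [0, 0], "women": [0, 0]}
for dept, by_sex in data.items():
    for sex, (applied, admitted) in by_sex.items():
        totals[sex][0] += applied
        totals[sex][1] += admitted
        print(f"{dept}, {sex}: {admitted / applied:.0%} admitted")

for sex, (applied, admitted) in totals.items():
    print(f"Overall, {sex}: {admitted / applied:.0%} admitted")

# Dept A: men 62%, women 70%; Dept B: men 10%, women 17%.
# Overall: men 52%, women 22% -- the per-department trend reverses.
```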

The book contains many fascinating stories and includes a surprising amount of personal accounts, making for a very entertaining and instructive read.

Note: the ladder of causation figure is from the book itself.

The mind of a fly

Researchers from the Howard Hughes Medical Institute, Google and other institutions have published the neuron-level connectome of a significant part of the brain of the fruit fly, which they call the hemibrain. This may become one of the most significant advances in our understanding of the detailed structure of complex brains since the 302-neuron connectome of C. elegans was published in 1986, by a team headed by Sydney Brenner, in a famous article with the somewhat whimsical subtitle of The mind of a worm. Both projects used an approach based on cutting the brains into very thin slices, followed by scanning electron microscopy and the processing of the resulting images in order to obtain the 3D structure of the brain.

The neuron-level connectome of C. elegans was obtained after a painstaking effort, lasting decades, of manual annotation of the images obtained from the thousands of slices imaged using electron microscopy. As the brain of Drosophila melanogaster, the fruit fly, is thousands of times more complex, such an effort would have required several centuries if done by hand. Therefore, Google’s machine learning algorithms were trained to identify sections of neurons, including axons, cell bodies and dendritic trees, as well as synapses and other components. After extensive training, the millions of images that resulted from the serial electron microscopy procedure were automatically annotated by the machine learning algorithms, enabling the team to complete in just a few years the detailed neuron-level connectome of a significant section of the fly brain, which includes roughly 25,000 neurons and 20 million synapses.
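
For readers curious about what training an algorithm to label neural tissue looks like in practice, here is a minimal sketch in PyTorch of a voxel-wise classifier. It is not the team’s actual model (Google’s connectomics work is based on flood-filling networks, a more sophisticated approach), and the training data below are random stand-ins for manually annotated EM patches.

```python
import torch
import torch.nn as nn

class TinySegmenter(nn.Module):
    """A toy 3D CNN that labels each voxel as neuron vs background."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, kernel_size=1),  # one logit per voxel
        )

    def forward(self, volume):  # volume: (batch, 1, depth, height, width)
        return self.net(volume)

model = TinySegmenter()
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-ins for an annotated training patch: random voxels here.
em_patch = torch.randn(1, 1, 32, 64, 64)                  # grayscale EM sub-volume
labels = torch.randint(0, 2, (1, 1, 32, 64, 64)).float()  # voxel-wise annotation

for step in range(10):  # real training runs for far longer
    optimizer.zero_grad()
    loss = loss_fn(model(em_patch), labels)
    loss.backward()
    optimizer.step()
```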

The results, published in the first of a number of articles, can be freely analyzed by anyone interested in the way a fly thinks. A Google account can be used to log in to the neuPrint explorer, and an interactive exploration of the 3D electron microscopy images is also available with neuroglancer. Extensive non-technical coverage by the media is also widely available. See, for instance, the article in The Economist or the piece in The Verge.

Image from the HHMI Janelia Research Campus site.

Mastering StarCraft

The researchers at DeepMind keep advancing the state of the art in the use of deep learning to master ever more complex games. After recently reporting a system that learns how to play a number of different and very complex board games, including Go and chess, the company announced a system that is able to beat the best players in the world at a complex strategy game, StarCraft.

AlphaStar, the system designed to learn to play StarCraft, one of the most challenging Real-Time Strategy (RTS) games, by playing against other versions of itself, represents a significant advance in the application of machine learning. In StarCraft, a significant amount of information is hidden from the players, and each player has to balance short-term and long-term objectives, just like in the real world. Players have to master fast-paced battle techniques and, at the same time, develop their own armies and economies.
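
The core idea of learning by playing against versions of oneself can be illustrated with a toy example. The sketch below bears no resemblance to AlphaStar’s actual league training: it merely trains a mixed strategy for rock-paper-scissors against frozen snapshots of its own past selves, which is the essence of self-play.

```python
import random

# Payoff for player 1 in rock-paper-scissors: +1 win, 0 draw, -1 loss.
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]

def sample(policy):
    return random.choices(range(3), weights=policy)[0]

def normalize(policy):
    policy = [max(p, 1e-6) for p in policy]  # keep probabilities positive
    total = sum(policy)
    return [p / total for p in policy]

policy = [1 / 3, 1 / 3, 1 / 3]  # current mixed strategy
league = [policy[:]]            # frozen snapshots of past selves

for step in range(5000):
    opponent = random.choice(league)   # play against a past version of itself
    a, b = sample(policy), sample(opponent)
    reward = PAYOFF[a][b]
    policy[a] += 0.01 * reward         # crude reinforcement of the chosen action
    policy = normalize(policy)
    if step % 500 == 0:
        league.append(policy[:])       # periodically freeze a new snapshot

print("final mixed strategy:", [round(p, 2) for p in policy])
```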

This result is important because it shows that deep reinforcement learning, which has already shown remarkable results in all sorts of board games, can scale up to complex environments with multiple time scales and hidden information. It opens the way to the application of machine learning to real-world problems until now deemed too difficult to be tackled by machine learning.

Kill the baby or the grandma?

What used to be an arcane problem in philosophy and ethics, the Trolley Problem, has been taking center stage in the discussions about the way autonomous vehicles should behave in the case of an accident. As reported previously in this blog, a website created by MIT researchers, the Moral Machine, gave everyone the opportunity to confront the dilemmas that an autonomous car may have to face when deciding what action to take in the presence of an unavoidable accident.

The site became so popular that it was possible to gather more than 40 million decisions, from people in 233 countries and territories. The analysis of this massive amount of data was just published in an article in the journal Nature. On the site, you are faced with a simple choice: drive forward, possibly killing some pedestrians or vehicle occupants, or swerve left, killing a different group of people. From the choices made by millions of persons, it is possible to derive some general rules about how people believe one should act when faced with the difficult choice of whom to kill and whom to spare.

The results show some clear choices, but also that some decisions vary strongly with the culture of the person making them. In general, people decide to protect babies, youngsters and pregnant women, as well as doctors (!). At the bottom of the preference scale are old people, animals and criminals.

Images: from the original article in Nature.

MIT distances itself from Nectome, a mind uploading company

The MIT Media Lab, a unit of MIT, decided to sever its ties with Nectome, a startup that proposes to make available a technology that processes and chemically preserves a brain, down to its most minute details, in order to make it possible, at least in principle, to simulate your brain and upload your mind, sometime in the future.

According to the MIT news release, “MIT’s connection to the company came into question after MIT Technology Review detailed Nectome’s promotion of its ‘100 percent fatal’ technology” in an article posted on the MIT Technology Review site.

As reported in this blog, Nectome claims that by preserving the brain it may be possible, one day, “to digitize your preserved brain and use that information to recreate your mind”. Nectome acknowledges, however, that the technology is fatal to the brain donor and that there are no guarantees that future recovery of the memories, knowledge and personality will be possible.

Detractors have argued that the technology is not sound, since simulating a preserved brain is at least many decades in the future and may even be impossible in principle. The criticisms were, however, mostly based on the argument that the whole enterprise is profoundly unethical.

This kind of discussion between proponents of technologies aimed at performing whole brain emulation, sometime in the future, and detractors who argue that such an endeavor is fundamentally flawed has occurred in the past, most notably in a 2014 controversy concerning the objectives of the Human Brain Project. In that controversy, critics argued that the goal of a large-scale simulation of the brain is premature and unsound, and that funding should be redirected towards more conventional approaches to the understanding of brain function. Supporters of the Human Brain Project approach argued that reconstructing and simulating the human brain is an important objective in itself, which will bring many benefits and advance our knowledge of the brain and of the mind.

Picture by the author.

New technique for high resolution imaging of brain connections

MIT researchers have proposed a new technique that produces very high resolution images of the detailed connections between neurons in the human brain. Taeyun Ku, Justin Swaney and Jeong-Yoon Park were the lead researchers of the work, published in a Nature Biotechnology article. They have developed a technique for imaging brain tissue at multiple scales, yielding unprecedentedly detailed images of significant regions of the brain, which allows them to detect the presence of proteins within cells and to determine the long-range connections between neurons.

The technique physically expands the tissue under observation, increasing its size while preserving nearly all of the proteins within the cells, which can then be labeled with fluorescent molecules and imaged.

The technique floods the brain tissue with acrylamide polymers, which end up forming a dense gel. The proteins are attached to this gel and, after they are denatured, the gel can be expanded to four or five times its original size. This makes it possible to image the expanded tissue at a resolution much higher than would be possible with the original tissue.
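
A quick back-of-the-envelope calculation shows why the expansion matters. Assuming a typical light-microscopy diffraction limit of about 300 nm (a textbook figure, not one taken from the paper), features of the original tissue well below that limit become separable:

```python
# Effective resolution after physical expansion: the optical resolution
# divided by the gel's expansion factor.
optical_resolution_nm = 300
for expansion_factor in (4, 5):
    effective = optical_resolution_nm / expansion_factor
    print(f"{expansion_factor}x expansion -> ~{effective:.0f} nm effective resolution")
```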

Techniques like this one create the conditions to advance reverse engineering efforts that could lead to a better understanding of the way neurons connect with each other, creating the complex structures in the brain.

Image credit: MIT

 

Europe wants to have exascale supercomputers by 2023

On March 23rd, in Rome, seven European countries signed a joint declaration on High Performance Computing (HPC), committing to an initiative that aims at securing the required budget and developing the technologies necessary to acquire and deploy two exascale supercomputers in Europe by 2023. Other Member States will be encouraged to join this initiative.

Exascale computers, defined as machines that execute 10¹⁸ operations per second, will be roughly 10 times more powerful than the existing fastest supercomputer, the Sunway TaihuLight, which clocks in at 93 petaflop/s, or 93 × 10¹⁵ floating point operations per second. No country in Europe has, at the moment, any machine among the 10 most powerful in the world. The declaration, and related documents, do not strictly require that these machines clock in at more than one exaflop/s, given that the requirements for supercomputers are changing with the technology, and floating point operations per second may not be the right measure.
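
The “roughly 10 times” figure is easy to check:

```python
# Ratio between one exaflop/s and the Sunway TaihuLight's 93 petaflop/s.
exaflop = 1e18        # floating point operations per second
taihulight = 93e15    # 93 petaflop/s
print(f"speed-up: {exaflop / taihulight:.1f}x")   # prints: speed-up: 10.8x
```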

This renewed interest of European countries in High Performance Computing highlights the significant role this technology plays in economic competitiveness and in research and development. Machines with these characteristics are used mainly in complex system simulations in physics, chemistry, materials science and fluid dynamics, but they are also useful for storing and processing the large amounts of data required to create intelligent systems, namely by using deep learning.

Andrus Ansip, European Commission Vice-President for the Digital Single Market, remarked: “High-performance computing is moving towards its next frontier – more than 100 times faster than the fastest machines currently available in Europe. But not all EU countries have the capacity to build and maintain such infrastructure, or to develop such technologies on their own. If we stay dependent on others for this critical resource, then we risk getting technologically ‘locked’, delayed or deprived of strategic know-how. Europe needs integrated world-class capability in supercomputing to be ahead in the global race. Today’s declaration is a great step forward. I encourage even more EU countries to engage in this ambitious endeavour”.

The European Commission press release includes additional information on the next steps that will be taken in the process.

Photo of the signature event, by the European Commission. In the photo, from left to right, the signatories: Mark Bressers (Netherlands), Thierry Mandon (France), Etienne Schneider (Luxembourg), Andrus Ansip (European Commission), Valeria Fedeli (Italy), Manuel Heitor (Portugal), Carmen Vela (Spain) and Herbert Zeisel (Germany).

 

Intel buys Mobileye for $15 billion

Mobileye, a company that develops computer vision and sensor fusion technology for autonomous and computer-assisted driving, has been bought by Intel, in a deal worth 15.3 billion dollars.

The company develops a large range of technologies and services related to computer-based driving. These technologies include rear-facing and front-facing cameras, sensor fusion, and high-definition mapping. Mobileye has been working with a number of car manufacturers, including Audi and BMW.

Mobileye already sells devices that you install in your car to monitor the road and warn the driver of impending risks. A number of insurance companies in Israel have reduced the insurance premiums of drivers who have installed the devices in their cars.

This sale is another strong indication that autonomous and computer-assisted driving will be a mature technology within the next decade, profoundly changing our relationship with cars and driving.

The products of Mobileye have been extensively covered in the news recently, including in TechCrunch, The New York Times and Forbes.

Image by Ranbar, available at Wikimedia Commons.

IBM TrueNorth neuromorphic chip does deep learning

In a recent article published in the Proceedings of the National Academy of Sciences, IBM researchers demonstrated that the TrueNorth chip, designed to perform neuromorphic computing, can be trained using deep learning algorithms.


The TrueNorth chip was designed to efficiently simulate spiking neural networks, a model that closely mimics the way biological neurons work. Spiking neural networks are based on the integrate-and-fire model, inspired by the fact that actual neurons integrate the incoming ion currents caused by synaptic firing and generate an output spike only when sufficient synaptic excitation has been accumulated. Spiking neural network models tend to be less efficient than more abstract models of neurons, which simply compute a real-valued output directly from the real-valued inputs multiplied by the input weights.

As IEEE Spectrum explains: “Instead of firing every cycle, the neurons in spiking neural networks must gradually build up their potential before they fire. To achieve precision on deep-learning tasks, spiking neural networks typically have to go through multiple cycles to see how the results average out. That effectively slows down the overall computation on tasks such as image recognition or language processing.”
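
The integrate-and-fire dynamics described above can be captured in a few lines. Here is a minimal sketch of a leaky integrate-and-fire neuron; all parameters and inputs are illustrative stand-ins, not TrueNorth’s actual neuron model:

```python
import random

v = 0.0            # membrane potential
threshold = 1.0    # potential at which the neuron fires
leak = 0.95        # fraction of the potential retained each time step
spike_times = []

for t in range(200):
    input_current = random.uniform(0.0, 0.1)  # stand-in for synaptic input
    v = v * leak + input_current              # integrate incoming current
    if v >= threshold:                        # enough excitation accumulated:
        spike_times.append(t)                 # ...emit a spike
        v = 0.0                               # ...and reset the potential

print("spike times:", spike_times)
```

Note how the neuron's output is a sparse train of spike times rather than a value computed at every step, which is where the potential energy savings come from.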

In the article just published, IBM researchers adapted deep learning algorithms to run on their TrueNorth architecture and achieved precision comparable to that of conventional deep networks, with lower energy dissipation. This research raises the prospect that energy-efficient neuromorphic chips may be competitive in deep learning tasks.

Image from Wikimedia Commons

Writing a Human Genome from scratch: the Genome Project-write

The Genome Project-write has released a white paper with a clear proposal of the steps and timeline required to design and assemble a human genome from scratch.


The project is a large-scale effort, involving a significant number of institutions and many well-known researchers, including George Church and Jef Boeke. According to the project web page:

“Writing DNA is the future of science and medicine, and holds the promise of pulling us forward into a better future. While reading DNA code has continued to advance, our capability to write DNA code remains limited, which in turn restricts our ability to understand and manipulate biological systems. GP-write will enable scientists to move beyond observation to action, and facilitate the use of biological engineering to address many of the global problems facing humanity.”

The idea is to use existing DNA synthesis technologies to accelerate research across a wide spectrum of the life sciences. The synthesis of human genomes may make it possible to understand the phenotypic effects of specific genome sequences and will contribute to improving the quality of synthetic biology tools.

Special attention will be paid to the complex ethical, legal and social issues that are a consequence of the project.

The project has received wide coverage in a number of news sources, including the popular science site Statnews and the journal Science.