Extraterrestrial: The First Sign of Alien Life?

Avi Loeb is hardly someone one would call an outsider to the scientific community. As the longest-serving chair of Harvard’s Department of Astronomy, he is a well-known and respected physicist, with many years of experience in astrophysics and cosmology. It is therefore somewhat surprising that in this book he strongly supports a hypothesis that is anything but widely accepted in the scientific community: that ʻOumuamua, the first interstellar object ever detected in our solar system, may be an artifact created by an alien civilization.

We are not talking here about alien conspiracies, UFOs or little green men from Mars. Loeb’s idea, admirably explained, is that there are enough strange things about ʻOumuamua to raise the real possibility that it is not simply a strange rock and that it may be an artificial construct, maybe a lightsail or a beacon.

There are, indeed, several strange things about this object, discovered by a telescope in Hawaii in October 2017. It was the first object ever observed near the Sun that does not orbit our star; its luminosity changed radically, by a factor of about 10; it is very bright for its size; and, perhaps most strangely, it exhibited non‑gravitational acceleration: its trajectory did not exactly match that of an ordinary rock subject to no external force other than the Sun’s gravity.

None of these anomalies, per se, would be enough to raise eyebrows. But, all combined, they do indeed make for a strange object. And Loeb’s point is precisely that the possibility that ʻOumuamua is an artifact of alien origin should be taken seriously by the scientific community. And yet, he argues, anything that has to do with extraterrestrial life is not considered serious science, leading to a negative bias and to a lack of investment in what should be one of the most important scientific questions: are we alone in the Universe? As a result, SETI, the Search for Extraterrestrial Intelligence, does not get the recognition and the funding it deserves. Paradoxically, other fields whose theories may never be confirmed by experiment nor have any real impact on us, such as multiverse-based interpretations of quantum mechanics or string theory, are considered serious science, attract much more funding, and are viewed more favorably by young researchers.

The book makes for very interesting reading, both for the author’s positions about ‘Oumuamua itself and for his opinions about today’s scientific establishment.

Possible minds

John Brockman’s project of bringing together 25 pioneers of Artificial Intelligence to discuss the promises and perils of the field makes for some interesting reading. This collection of short essays lets you peer inside the minds of such luminaries as Judea Pearl, Stuart Russell, Daniel Dennett, Frank Wilczek, Max Tegmark, Steven Pinker, or David Deutsch, to name only a few. The fact that each of them contributed an essay only a dozen pages long does not hinder the transmission of the messages and ideas they support. On the contrary, it is nice to read about Pearl’s ideas on causality or Tegmark’s thoughts on the future of intelligence in a short essay. Although the essays do not replace longer and more elaborate texts, they certainly give the reader the gist of the central arguments that, in many cases, made the authors well known. Although the organization of the essays varies from author to author, all contributions are relevant and entertaining, whether they come from lesser-known contributors or from famous scientists such as George Church, Seth Lloyd, or Rodney Brooks.

The texts in this book did not appear out of thin air. In fact, the invited contributors were all given the same starting point: Norbert Wiener’s influential book “The Human Use of Human Beings”, a prescient text written more than 70 years ago by one of the most influential researchers in the field he named cybernetics, which ultimately led to digital computers and Artificial Intelligence. First published in 1950, Wiener’s book serves as the starting point for 25 interesting takes on the future of computation, artificial intelligence, and humanity. Whether you believe that the future of humanity will be digital or are concerned that we are losing our humanity, there will be something in this book for you.

Is the Universe a mathematical structure?

In his latest book, Our Mathematical Universe: My Quest for the Ultimate Nature of Reality, Max Tegmark tries to answer what is maybe the most fundamental question in science and philosophy: what is the nature of reality?

Our understanding of reality has certainly undergone deep change in the last few centuries. From Galileo and Newton to Maxwell, Einstein, Bohr, and Heisenberg, physics has evolved by leaps and bounds, and so has our understanding of the place of humans in the Universe. And yet, in some respects, we know little more than the ancient Greeks. Is the visible Universe all that exists? Could other universes, with different laws of physics, exist? Does the universe split into several universes every time a quantum observation takes place? Why is mathematics such a good model for physics (an old question), and could there exist other universes that obey different mathematical structures? These questions are not arbitrary ones, as their answers take us into the four levels of the multiverse proposed by Tegmark.

As you dive into it, the book takes you into an ever-expanding model of reality. Tegmark defines four levels of multiverse: the first consisting of all the (possibly infinite) spacetime, of which we see only a ball with a radius of 14 billion light-years, since light from the rest has not had time to reach us; the second, which possibly holds other parts of spacetime that obey different laws of physics; the third, implied by the many-worlds interpretation of quantum physics; and the fourth, where other mathematical structures, different from the spacetime we know and love, define the rules of the game.

It is certainly a lot to take in for a book of fewer than 400 pages, and the reader may feel dizzy at times. But, in the process, Tegmark does his best to explain what inflation is and why it plays such an important role in cosmology; how the laws of quantum physics can be viewed simply as an equation (the Schrödinger equation) describing the evolution of a point in Hilbert space, doing away with all the difficult-to-explain consequences of the Copenhagen interpretation; the difficulties caused by the measure problem; why space is so flat; and many, many other fascinating topics in modern physics.
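For readers curious about that single equation: the state of a quantum system is a vector |ψ⟩ in Hilbert space, and the Schrödinger equation describes its deterministic evolution under the system’s Hamiltonian operator Ĥ, which is all the many-worlds view Tegmark favors requires:

```latex
i\hbar \, \frac{\partial}{\partial t} \, \lvert \psi(t) \rangle \;=\; \hat{H} \, \lvert \psi(t) \rangle
```

In this picture there is no separate “collapse” postulate; measurement is just the entanglement of the observer with the system, evolving under the same equation.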

Since the main point of the book is to help us understand our place in this not only enormous Universe but unthinkably enormous multiverse, he brings us back to Earth (literally) with a few disturbing questions, such as:

  • What is the role of intelligence and consciousness in this humongous multiverse?
  • Why is this Universe we see amenable to life, in some places, and why have we been so lucky to be born exactly here?
  • Should one view oneself as a random sample of an intelligent being existing in the universe (the SSA, or Self-Sampling Assumption, proposed by Bostrom in his book Anthropic Bias)?
  • If the SSA is valid, does it imply the Doomsday Argument, that humans are unlikely to last very long, because a long future for humanity would make it highly unlikely that I would have been born so early?

All in all, a fascinating read, even if at times it reads more like sci-fi than science!

Chinese translation of The Digital Mind

The Chinese translation of my book, The Digital Mind, is now available. For those who want to dust off their (simplified) Chinese, it can be found in the usual physical and online bookstores, including Amazon and Books.com. Regrettably, I cannot directly assess the quality of the translation; you will have to decide for yourself. Or maybe you’d rather go for the more mundane English version, published by MIT Press, or the Portuguese one, published by IST Press.

You’re not the customer, you’re the product!

The attention that each of us pays to an item, and the time we spend on a site, article, or application, is the most valuable commodity in the world, as witnessed by the fact that the companies that sell it, wholesale, are the largest in the world. Attracting and selling our attention is, indeed, the core business of Google and Facebook but also, to a lesser extent, of Amazon, Apple, Microsoft, Tencent, or Alibaba. We may believe we are the customers of these companies but, in fact, many of the services they provide serve only to attract our attention and sell it to the highest bidder, in the form of advertising or personal information. In the words of Richard Serra and Carlota Fay Schoolman, later reused by a number of people including Tom Johnson, if you are not paying, “You’re not the customer; you’re the product.”

Attracting and selling attention is an old business, well described in Tim Wu’s book The Attention Merchants. First created by newspapers, then by radio and television, the market for attention came to maturity with the Internet. Although newspapers, radio programs, and television shows have all been designed to attract our attention and use it to sell advertising, none of them had the potential of the Internet, which can attract and retain our attention by tailoring contents to each individual’s preferences.

With excessive customization, however, comes a significant and very prevalent problem. As sites, social networks, and content providers fight to attract our attention, they show us exactly the things we want to see, not things as they are. Each person lives, nowadays, in a reality that is different from everyone else’s. The creation of a separate and different reality for each person has a number of negative side effects, including paranoia-inducing rabbit holes, the radicalization of opinions, the inability to establish democratic dialogue, and the difficulty of distinguishing reality from fabricated fiction.

Wu’s book addresses this issue in no light terms, but the Netflix documentary The Social Dilemma makes an even stronger point: customized content, as shown to us by social networks and other content providers, is unraveling society and creating a host of new and serious problems. Social networks are even more worrying than other content providers because they put pressure on children and young adults to conform to a reality that is fabricated and presented to them in order to retain (and resell) their attention.

Decoding the code of life

We have known since 1953 that the DNA molecule encodes the genetic information that transmits characteristics from ancestors to descendants, in all types of lifeforms on Earth. Genes, in the DNA sequences, specify the primary structure of proteins: the sequence of amino acids that make up these cellular machines, which do the jobs required to keep a cell alive. The secondary structure of a protein describes some of the ways it folds locally, in structures like alpha helices and beta sheets. Methods that can reliably determine the secondary structure of proteins have existed for some time. However, determining how a protein folds globally in space (its tertiary structure, the overall shape it assumes) has remained mostly an open problem, outside the reach of most algorithms in the general case.

The Critical Assessment of protein Structure Prediction (CASP) competition, started in 1994, has taken place every two years since then, allowing hundreds of competing teams to test their algorithms and approaches on this difficult problem. Thousands of approaches have been tried, with some success, but the precision of the predictions remained rather low, especially for proteins not similar to other known proteins.

A number of different challenges have taken place over the years in CASP, ranging from ab initio prediction to the prediction of structure using homology information, and the field has seen steady improvements over time. However, the entrance of DeepMind into the competition upped the stakes and revolutionized the field. As DeepMind itself reports in a blog post, the program AlphaFold 2, the successor of AlphaFold, entered the 2020 edition of CASP and obtained a score of 92.4 on the Global Distance Test (GDT) scale, which ranges from 0 to 100. This value should be compared with the 58.9 obtained by AlphaFold (the previous version of this year’s winner) in 2018, and the score of about 40 obtained by the winner of the 2016 competition.
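To give some intuition for what those numbers mean, here is a minimal sketch of the GDT_TS variant of the score: the average, over distance cutoffs of 1, 2, 4, and 8 ångströms, of the percentage of residues whose predicted C-alpha atom lies within that cutoff of its experimentally determined position (after superposing the two structures). The per-residue deviations below are made-up numbers, purely for illustration, not real CASP data.

```python
def gdt_ts(distances, cutoffs=(1.0, 2.0, 4.0, 8.0)):
    """GDT_TS sketch: mean over cutoffs of the percentage of residues
    whose C-alpha deviation (in angstroms) is within the cutoff."""
    n = len(distances)
    percentages = [
        100.0 * sum(1 for d in distances if d <= c) / n for c in cutoffs
    ]
    return sum(percentages) / len(cutoffs)

# A hypothetical prediction where most residues are well placed:
deviations = [0.5, 0.8, 1.5, 2.5, 3.0, 5.0, 9.0, 0.3]
print(gdt_ts(deviations))  # → 62.5
```

A perfect prediction scores 100; scores around 90, like AlphaFold 2's, are considered competitive with experimental accuracy.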

Structure of insulin

Even though the details of the algorithm have not yet been published, the DeepMind post gives enough information to realize that this is a very significant result. Although the whole approach is complex and the system integrates information from a number of sources, it relies on an attention-based neural network, trained end-to-end to learn which amino acids are close to each other, and at what distance.

Given the importance of the problem in areas like biology, medicine, and pharmaceutics, it is to be expected that this computational approach to protein structure determination will have a significant impact in the future. Once more, rather general machine learning techniques, developed over the last decades, have shown great potential on real-world problems.

Novacene: the future of humanity is digital?

As it says on the cover of the book, James Lovelock may well be “the great scientific visionary of our age”. He is probably best known for the Gaia Hypothesis, but he has made several other major contributions. While working for NASA, he was the first to propose looking for chemical biomarkers in the atmospheres of other planets as a sign of extraterrestrial life, a method that has been extensively used and has led to a number of interesting results, some of them very recent. He has argued for climate engineering methods to fight global warming and is a strong supporter of nuclear energy, by far the safest and least polluting form of energy currently available.

Lovelock has been an outspoken environmentalist, a strong voice against global warming, and the creator of the Gaia Hypothesis, the idea that all organisms on Earth are part of a synergistic and self-regulating system that seeks to maintain the conditions for life on Earth. The ideas he puts forward in this book are, therefore, surprising. To him, we are leaving the Anthropocene (a geological epoch characterized by the profound effect of humans on the Earth’s environment, still not recognized as a separate epoch by mainstream science) and entering the Novacene, an epoch in which digital intelligence will become the most important form of life on Earth and in near space.

Although it may seem like a position inconsistent with his previous arguments about the nature of life on Earth, I find the argument for the Novacene era convincing and coherent. Again, Lovelock appears as a visionary, extrapolating to its ultimate conclusion the trend of technological development that started with the industrial revolution.

As he says, “The intelligence that launches the age that follows the Anthropocene will not be human; it will be something wholly different from anything we can now conceive.”

To me, his argument that artificial intelligence, digital intelligence, will be our future, our offspring, is convincing. It will be as different from us as we are from the first animals that appeared hundreds of millions of years ago, which were in turn very different from the cells that started life on Earth. Four billion years after the first lifeforms appeared on Earth, life will finally create a new physical substrate, one that does not depend on DNA, water, or an Earth-like environment, and that is suited to space.

Could Venus possibly harbor life?

Two recently published papers, one in Nature Astronomy (about the discovery itself) and one in Astrobiology (describing a possible life cycle), report the existence of phosphine in the upper atmosphere of Venus, a gas that cannot easily be generated by non-biological processes in the conditions believed to exist on that planet. Phosphine may, indeed, turn out to be a biosignature, an indicator of the possible existence of micro-organisms on a planet that was considered, until now, barren. The search for life in our solar system has concentrated on other bodies more likely to host micro-organisms, like Mars or the icy moons of the outer planets.

The findings have been reported by many media outlets, including the NY Times and The Economist, raising interesting questions about the prevalence of life in the universe and the possible existence of life on one of our nearest neighbor planets. If the biological origin of phosphine were to be confirmed, it would qualify as the discovery of the century, maybe the most important discovery in the history of science! We are, however, far from that point, and a number of things may yet make this finding another false alarm. Still, it is quite exciting that a possible sign of life has been found so close to us, and even a negative result would increase our knowledge about the chemical processes that can generate this compound, until now believed to be a reliable biomarker.

This turns out to be a first step, not a final result. Quoting from the Nature Astronomy paper:

Even if confirmed, we emphasize that the detection of PH3 is not robust evidence for life, only for anomalous and unexplained chemistry. There are substantial conceptual problems for the idea of life in Venus’s clouds—the environment is extremely dehydrating as well as hyperacidic. However, we have ruled out many chemical routes to PH3, with the most likely ones falling short by four to eight orders of magnitude (Extended Data Fig. 10). To further discriminate between unknown photochemical and/or geological processes as the source of Venusian PH3, or to determine whether there is life in the clouds of Venus, substantial modelling and experimentation will be important. Ultimately, a solution could come from revisiting Venus for in situ measurements or aerosol return.

The Book of Why

“Correlation is not causation” is a mantra you may have heard many times, calling attention to the fact that, no matter how strong the relations one may find between variables, they are not conclusive evidence of a cause-and-effect relationship. In fact, most modern AI and Machine Learning techniques look for relations between variables in order to infer useful classifiers, regressors, and decision mechanisms. Statistical studies, with either big or small data, have also generally abstained from explicitly inferring causality between phenomena, except when randomized controlled trials are used, virtually the only case where causality can be inferred with little or no risk of confounding.
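A tiny simulation makes the mantra concrete. In this illustrative sketch (the variables and noise levels are invented for the example), a hidden confounder Z causally drives both X and Y; X and Y end up strongly correlated even though neither causes the other:

```python
import random

random.seed(0)

# Z is the confounder; X and Y are each Z plus independent noise.
z = [random.gauss(0, 1) for _ in range(10_000)]
x = [zi + random.gauss(0, 0.5) for zi in z]   # X caused by Z
y = [zi + random.gauss(0, 0.5) for zi in z]   # Y caused by Z, not by X

def corr(a, b):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / n
    va = sum((ai - ma) ** 2 for ai in a) / n
    vb = sum((bi - mb) ** 2 for bi in b) / n
    return cov / (va * vb) ** 0.5

print(round(corr(x, y), 2))  # strong correlation, zero causation
```

With these noise levels the correlation comes out around 0.8, yet intervening on X (rung 2 of Pearl's ladder) would change Y not at all.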

In The Book of Why, Judea Pearl, in collaboration with Dana Mackenzie, ups the ante and argues not only that one should not stay away from reasoning about causes and effects, but also that the decades-old practice of avoiding causal reasoning has been one of the reasons for our limited success in many fields, including Artificial Intelligence.

Pearl’s main point is that causal reasoning is not only essential for higher-level intelligence but is also the natural way we humans think about the world. Pearl, a researcher world-renowned for his work in probabilistic reasoning, has made many contributions to AI and statistics, including the well-known Bayesian networks, an approach that exposes regularities in joint probability distributions. Still, he thinks that all those contributions pale in comparison with the revolution he spearheaded: the effective use of causal reasoning in statistics.

Pearl argues that statistics-based AI systems are restricted to finding associations between variables, stuck in what he calls rung 1 of the Ladder of Causation: Association. Seeing associations leads to a very superficial understanding of the world, since it restricts the actor to the observation of variables and the analysis of relations between them. In rung 2 of the Ladder, Intervention, actors can intervene and change the world, which leads to an understanding of cause and effect. In rung 3, Counterfactuals, actors can imagine different worlds, namely what would have happened if the actor had done this instead of that.

This may seem a bit abstract, but that is where the book becomes a very pleasant surprise. Although it is written for the general public, the authors go deep into the questions, to the point of explaining the do-calculus, a methodology Pearl and his students developed to calculate, under a set of dependence/independence assumptions, what would happen if a specific variable were changed in a possibly complex network of interconnected variables. In fact, graphical representations of these networks, causal diagrams, are at the root of the methods presented and are used extensively in the book to illustrate many challenges, problems, and paradoxes.

In fact, the chapter on paradoxes is particularly entertaining, covering the Monty Hall, Berkson, and Simpson paradoxes, all of them quite puzzling. My favorite instance of Simpson’s paradox is the Berkeley admissions puzzle, the subject of a famous 1975 Science article. The paradox comes from the fact that, at the time, Berkeley admitted 44% of male applicants to graduate studies, but only 35% of female applicants. However, each individual department (admissions at Berkeley are decided by departments, as in many other places) made decisions that were more favorable to women than to men. As it turns out, this strange state of affairs has a perfectly reasonable explanation, but you will have to read the book to find out.
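Without spoiling the book's account of the Berkeley case, the mechanics of Simpson's paradox are easy to reproduce with a toy example. The numbers below are entirely made up (not the real Berkeley data): each department admits women at a higher rate than men, yet the aggregate admission rate favors men, because women apply mostly to the more selective department.

```python
# (applicants, admitted) per group, in two hypothetical departments
data = {
    "dept_A": {"men": (80, 60), "women": (20, 16)},   # easy department
    "dept_B": {"men": (20, 4),  "women": (80, 20)},   # selective department
}

def rate(applicants, admitted):
    return admitted / applicants

# Per department, women are admitted at a higher rate than men.
for dept, groups in data.items():
    m, w = rate(*groups["men"]), rate(*groups["women"])
    print(f"{dept}: men {m:.0%}, women {w:.0%}")

# Aggregated over both departments, men come out ahead.
totals = {}
for sex in ("men", "women"):
    apps = sum(data[d][sex][0] for d in data)
    adm = sum(data[d][sex][1] for d in data)
    totals[sex] = adm / apps

print(f"overall: men {totals['men']:.0%}, women {totals['women']:.0%}")
```

The department is a confounder of the sex/admission relation, which is exactly why a causal diagram, rather than the aggregate table, tells you which numbers to trust.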

The book contains many fascinating stories and includes a surprising amount of personal accounts, making for a very entertaining and instructive reading.

Note: the ladder of causation figure is from the book itself.

A conversation with GPT-3 on COVID-19

GPT-3 is the most advanced language model ever created, the product of an effort by OpenAI to create a publicly available system that can be used to advance research and applications in natural language. The model itself, published less than three months ago, is an autoregressive language model with 175 billion parameters, trained on a dataset that includes almost a trillion words.
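“Autoregressive” here means that the model factorizes the probability of a text into a product of next-word (more precisely, next-token) conditional probabilities, each predicted from the words that came before:

```latex
p(w_1, w_2, \ldots, w_n) \;=\; \prod_{i=1}^{n} p\!\left(w_i \mid w_1, \ldots, w_{i-1}\right)
```

The 175 billion parameters are what the network uses to represent these conditional distributions; generating text is just repeatedly sampling the next word from them.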

Impressive as that may be, it is difficult to get some intuition of what such a complex model, trained on billions of human-generated texts, can actually do. Can it be used effectively in translation tasks or in answering questions?

To get some idea of what a sufficiently high-level statistical model of human language can do, I challenge you to have a look at this conversation with GPT-3, published by Kirk Ouimet a few days ago. It relates a dialogue between him and GPT-3 on the topic of COVID-19. The most impressive thing about this conversation with an AI is not that it gets many of the responses right (others not so much). What impressed me is that the model was trained with a dataset created before the existence of COVID-19, which provided GPT-3 no specific knowledge about this pandemic. Whatever answers GPT-3 gives to the questions related to COVID-19 are obtained with knowledge that was already available before the pandemic began.

This certainly raises some questions on whether advanced AI systems should be more widely used to define and implement policies important to the human race.

If you want more information about GPT-3, it is easy to find on a multitude of sites with tutorials and demonstrations, such as TheNextWeb, MIT Technology Review, and many, many others.