Extraterrestrial: The First Sign of Alien Life?

Avi Loeb is not someone one would call an outsider to the scientific community. The longest-serving chair of Harvard’s Department of Astronomy, he is a well-known and respected physicist, with many years of experience in astrophysics and cosmology. It is therefore somewhat surprising that in this book he strongly supports a hypothesis that is anything but widely accepted in the scientific community: ʻOumuamua, the first interstellar object ever detected in our solar system, may be an artifact created by an alien civilization.

We are not talking here about alien conspiracies, UFOs or little green men from Mars. Loeb’s idea, admirably explained, is that there are enough anomalies about ʻOumuamua to raise the real possibility that it is not simply an odd rock but an artificial construct, perhaps a light sail or a beacon.

There are, indeed, several strange things about this object, discovered by a telescope in Hawaii in October 2017. It was the first object ever observed near the Sun that is not gravitationally bound to our star; its brightness changed radically, by a factor of about ten; it is very bright for its size; and, perhaps most strangely, it exhibited non-gravitational acceleration: its trajectory did not quite match that of an ordinary rock subject to no external force other than the Sun’s gravity.

None of these anomalies would, by itself, be enough to raise eyebrows. All combined, however, they make for a very strange object. Loeb’s point is precisely that the possibility that ʻOumuamua is an artifact of alien origin should be taken seriously by the scientific community. And yet, he argues, anything that has to do with extraterrestrial life is not considered serious science, leading to a negative bias and a lack of investment in what should be one of the most important scientific questions: are we alone in the Universe? As a consequence, SETI, the Search for Extraterrestrial Intelligence, does not get the recognition and the funding it deserves. Paradoxically, fields whose theories may never be confirmed by experiment nor have any real impact on us, such as multiverse interpretations of quantum mechanics or string theory, are considered serious science, attract far more funding, and are viewed more favorably by young researchers.

The book makes for very interesting reading, both for the author’s positions on ʻOumuamua itself and for his opinions about today’s scientific establishment.

Chinese translation of The Digital Mind

The Chinese translation of my book, The Digital Mind, is now available. For those who want to dust off their (simplified) Chinese, it can be found in the usual physical and online bookstores, including Amazon and Books.com. Regrettably, I cannot directly assess the quality of the translation; you will have to judge for yourself. Or maybe you’d rather go for the more mundane English version, published by MIT Press, or the Portuguese one, published by IST Press.

Decoding the code of life

We have known since 1953 that the DNA molecule encodes the genetic information that transmits characteristics from ancestors to descendants, in all types of lifeforms on Earth. Genes, encoded in DNA sequences, specify the primary structure of proteins: the sequence of amino acids that make up these cellular machines, which do the jobs required to keep a cell alive. The secondary structure of proteins describes some of the ways a protein folds locally, into structures like alpha helices and beta sheets. Methods that can reliably determine the secondary structure of proteins have existed for some time. However, determining how a protein folds globally in space (its tertiary structure, the overall shape it assumes) has remained largely an open problem, outside the reach of most algorithms in the general case.
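
As a concrete illustration of that first mapping, from a gene’s DNA sequence to a protein’s primary structure, here is a toy translation routine. The codon table is deliberately truncated (the real genetic code has 64 entries), so this is only a sketch of the idea.

```python
# A tiny, deliberately incomplete codon table (the full genetic code has 64 entries).
CODON_TABLE = {
    "ATG": "Met", "TTT": "Phe", "GGC": "Gly", "GAA": "Glu",
    "AAA": "Lys", "TGC": "Cys", "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
}

def translate(dna):
    """Read a coding DNA sequence three bases at a time and return the
    corresponding amino-acid sequence (the protein's primary structure)."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE.get(dna[i:i + 3], "???")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return "-".join(protein)

print(translate("ATGTTTGGCGAATAA"))   # Met-Phe-Gly-Glu
```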

The Critical Assessment of protein Structure Prediction (CASP) competition, started in 1994, has taken place every two years since then and has allowed hundreds of competing teams to test their algorithms and approaches on this difficult problem. Thousands of approaches have been tried, with some success, but the accuracy of the predictions remained rather low, especially for proteins that are not similar to other known proteins.

A number of different challenges have taken place over the years in CASP, ranging from ab initio prediction to the prediction of structure using homology information, and the field has seen steady improvement over time. However, the entrance of DeepMind into the competition upped the stakes and revolutionized the field. As DeepMind itself reports in a blog post, the program AlphaFold 2, a successor of AlphaFold, entered the 2020 edition of CASP and obtained a score of 92.4 on the Global Distance Test (GDT) scale, which ranges from 0 to 100. This value should be compared with the 58.9 obtained by AlphaFold (the previous version of this year’s winner) in 2018, and the roughly 40 scored by the winner of the 2016 competition.
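
For readers unfamiliar with the metric, GDT_TS is essentially the percentage of residues whose predicted positions fall close to the experimental ones, averaged over several distance cutoffs. The minimal sketch below assumes the two structures have already been superimposed (the real metric also optimizes over superpositions), so it is an illustration of the idea rather than an exact implementation.

```python
import numpy as np

def gdt_ts(pred_ca, true_ca, cutoffs=(1.0, 2.0, 4.0, 8.0)):
    """Toy GDT_TS: average, over distance cutoffs, of the percentage of
    C-alpha atoms whose predicted position lies within the cutoff of the
    experimental position. Assumes the structures are already superimposed."""
    dists = np.linalg.norm(pred_ca - true_ca, axis=1)   # per-residue error, in angstroms
    return 100.0 * np.mean([(dists <= c).mean() for c in cutoffs])

# Illustrative example with random coordinates for a 100-residue chain.
rng = np.random.default_rng(0)
true_ca = rng.normal(size=(100, 3)) * 10
pred_ca = true_ca + rng.normal(scale=1.0, size=(100, 3))
print(f"GDT_TS ~ {gdt_ts(pred_ca, true_ca):.1f}")
```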

Structure of insulin

Even though the details of the algorithm have not yet been published, the information in the DeepMind post is enough to see that this is a very significant result. Although the whole approach is complex and the system integrates information from a number of sources, it relies on an attention-based neural network, trained end-to-end, that learns which amino acids are close to each other, and at what distance.
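
To make the idea concrete, here is a minimal sketch of an attention-based model that maps a residue sequence to a predicted pairwise distance map. The dimensions, layers and training target are made up for illustration; this is emphatically not the AlphaFold 2 architecture, whose details had not been published at the time of writing.

```python
import torch
import torch.nn as nn

class ToyDistancePredictor(nn.Module):
    """Toy attention-based model: residue embeddings attend to each other,
    and a pair head predicts a distance for every residue pair."""

    def __init__(self, n_amino_acids=20, dim=64, heads=4):
        super().__init__()
        self.embed = nn.Embedding(n_amino_acids, dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.pair_head = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                       nn.Linear(dim, 1))

    def forward(self, seq):                       # seq: (batch, length) residue indices
        x = self.embed(seq)                       # (batch, length, dim)
        x, _ = self.attn(x, x, x)                 # residues attend to each other
        # Build pair features by concatenating every residue pair (i, j).
        i = x.unsqueeze(2).expand(-1, -1, x.size(1), -1)
        j = x.unsqueeze(1).expand(-1, x.size(1), -1, -1)
        pairs = torch.cat([i, j], dim=-1)         # (batch, L, L, 2*dim)
        return self.pair_head(pairs).squeeze(-1)  # predicted distances, (batch, L, L)

model = ToyDistancePredictor()
fake_sequence = torch.randint(0, 20, (1, 50))     # one 50-residue protein
predicted_distances = model(fake_sequence)
print(predicted_distances.shape)                  # torch.Size([1, 50, 50])
# Training would minimize, e.g., a regression loss against distances
# measured from experimentally determined structures.
```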

Given the importance of the problem in areas like biology, medicine and pharmaceutics, this computational approach to protein structure determination can be expected to have a significant impact in the years to come. Once more, rather general machine learning techniques, developed over the last decades, have shown great potential in real-world problems.

Do humankind’s best days lie ahead?

This book, which transcribes one of the several Munk debates organized by an initiative financed by Peter and Melanie Munk, addresses the question of whether the future of humanity will be better or worse than the present.

The debate, also available in video, takes place between four formidable names: the wizards Steven Pinker and Matt Ridley (proponents of the view that technology will continue to bring progress) and the prophets Alain de Botton and Malcolm Gladwell (doubters of the idea that further technological developments will keep improving the world).


The dialogue that takes place between the Pollyannas and the Cassandras (to use an expression coined in the debate itself) is vivid, interesting and, at times, highly emotional. None of the debaters doubts that progress has immensely improved the human condition in the last few centuries, but the consensus ends there. Will we be able to use science and technology to surmount the environmental, social, and political challenges faced by humanity, or have we already reached “peak development”, so that the future will be worse than the past? Read or watch the debate, and decide for yourself.

My take is that the Pollyannas, Steven Pinker and Matt Ridley, with their optimistic view of the future, win the debate by a large margin against the Cassandras. Their arguments that the world will continue to improve, based both on historical trends and on the hope that technology will solve the significant challenges we face, do not meet coherent resistance from Alain de Botton and Malcolm Gladwell. At least, the latter did not manage to convince me that famines, cybersecurity threats, climate change, and inequality will be enough to reverse the course of human progress.

Mastering StarCraft

The researchers at DeepMind keep advancing the state of the art in the use of deep learning to master ever more complex games. After recently reporting a system that learns how to play a number of very complex board games, including Go and chess, the company announced a system that is able to beat some of the best players in the world at a complex strategy game, StarCraft.

AlphaStar, the system designed to learn StarCraft, one of the most challenging real-time strategy (RTS) games, by playing against other versions of itself, represents a significant advance in the application of machine learning. In StarCraft, a significant amount of information is hidden from the players, and each player has to balance short-term and long-term objectives, just as in the real world. Players have to master fast-paced battle tactics and, at the same time, develop their own armies and economies.
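
The backbone of this kind of training is self-play against earlier versions of the agent. The toy loop below sketches that idea; the `agent` and `make_game` objects are hypothetical stand-ins for a real learner and a real two-player environment, and AlphaStar’s actual league training is considerably more elaborate.

```python
import copy
import random

def self_play_training(agent, make_game, n_iterations=1_000, snapshot_every=50):
    """Toy self-play loop: the learning agent plays against frozen copies of
    its former selves, so its opposition improves as it does. `agent` (with
    an `update` method) and `make_game` (returning a two-player environment
    with a `play` method) are hypothetical stand-ins, not AlphaStar's API."""
    league = [copy.deepcopy(agent)]                  # pool of past versions
    for step in range(n_iterations):
        opponent = random.choice(league)             # sample a past self
        episode = make_game().play(agent, opponent)  # collect one game of experience
        agent.update(episode)                        # e.g. an actor-critic update
        if (step + 1) % snapshot_every == 0:
            league.append(copy.deepcopy(agent))      # freeze a new league member
    return agent
```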

This result is important because it shows that deep reinforcement learning, which has already produced remarkable results in all sorts of board games, can scale up to complex environments with multiple time scales and hidden information. It opens the way to the application of machine learning to real-world problems until now deemed too difficult for it.

DeepMind presents Artificial General Intelligence for board games

In a paper recently published in the journal Science, researchers from DeepMind describe AlphaZero, a system that mastered three very complex games, Go, chess, and shogi, using only self-play and reinforcement learning. What is different about this system (a preliminary version was previously covered in this blog), compared with previous ones like AlphaGo Zero, is that the same learning architecture and hyperparameters were used to learn the different games, without any customization for each particular game.

Historically, the best programs for each game were heavily customized to exploit specific characteristics of that game. AlphaGo Zero, the most impressive previous result, used the spatial symmetries of Go and a number of other game-specific optimizations. Special-purpose chess programs like Stockfish took years to develop, use enormous amounts of domain-specific knowledge and can, therefore, play only one specific game.

AlphaZero is the closest thing to a general-purpose board game player ever designed. It uses a deep neural network to estimate move probabilities and position values, and performs the search with a Monte Carlo tree search (MCTS) algorithm, which is general-purpose and not tuned to any particular game. Overall, AlphaZero gets as close as ever to the dream of artificial general intelligence, in this particular domain. As the authors say in the conclusions, “These results bring us a step closer to fulfilling a longstanding ambition of Artificial Intelligence: a general game-playing system that can master any game.”
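
The combination of a learned policy/value network with Monte Carlo tree search can be sketched quite compactly. The code below is a bare-bones, single-threaded illustration of the general technique (PUCT-style selection guided by network priors); the `net(state)` function and the `state` interface (`legal_moves`, `apply`, `is_terminal`, `result`) are hypothetical stand-ins, and the many refinements of the actual AlphaZero implementation are omitted.

```python
import math

class Node:
    def __init__(self, prior):
        self.prior = prior        # P(s, a): prior probability from the policy head
        self.visits = 0           # N(s, a)
        self.value_sum = 0.0      # W(s, a)
        self.children = {}        # move -> Node

    def value(self):              # Q(s, a), from the parent's point of view
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node, c_puct=1.5):
    """PUCT rule: trade off the value estimate Q against an exploration
    bonus proportional to the network prior."""
    total_visits = sum(child.visits for child in node.children.values())
    def puct(child):
        exploration = c_puct * child.prior * math.sqrt(total_visits + 1) / (1 + child.visits)
        return child.value() + exploration
    return max(node.children.items(), key=lambda kv: puct(kv[1]))

def run_mcts(root_state, net, n_simulations=800):
    """One search from root_state. `net(state) -> (priors, value)` and the
    state interface are hypothetical stand-ins; `value` is from the point of
    view of the player to move at `state`."""
    priors, _ = net(root_state)
    root = Node(prior=1.0)
    root.children = {m: Node(priors[m]) for m in root_state.legal_moves()}

    for _ in range(n_simulations):
        node, state, path = root, root_state, []
        # 1. Selection: descend the tree guided by the PUCT rule.
        while node.children:
            move, node = select_child(node)
            state = state.apply(move)
            path.append(node)
        # 2. Expansion/evaluation: query the network instead of doing rollouts.
        if state.is_terminal():
            value = state.result()                 # e.g. -1 if the player to move lost
        else:
            priors, value = net(state)
            node.children = {m: Node(priors[m]) for m in state.legal_moves()}
        # 3. Backup: propagate the value to the visited edges, flipping the
        #    sign at every ply (two-player, zero-sum game).
        for visited in reversed(path):
            value = -value
            visited.visits += 1
            visited.value_sum += value

    # The move to play: the most visited child of the root.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]
```
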
While mastering these ancient games, AlphaZero also taught us a few things we didn’t know about them. For instance, that in chess White has a strong upper hand when playing the Ruy Lopez opening, or when facing the French and Caro-Kann defenses, while the Sicilian defense gives Black much better chances. At least, that is what the evaluation function learned by the deep neural network suggests…

Update: The New York Times just published an interesting piece on this topic, with some additional information.

The Evolution of Everything, or the use of Universal Acid, by Matt Ridley

Matt Ridley never disappoints, but his latest book, The Evolution of Everything, is probably his most impressive yet. Daniel Dennett called evolution the universal acid, an idea that dissolves every preconception we may have about the world. Ridley uses this universal acid to show that the ideas behind evolution apply not only to living beings but to all sorts of things in the world and, particularly, to society. He uses it to deconstruct our preconceptions about history and to present his own view that centralized control does not work and that bottom-up evolution is the engine behind progress.

When Ridley says everything, he is not exaggerating. The chapters in this book cover, among many others, topics as different as the universe, life, morality, culture, technology, leadership, education, religion, and money. To all of them Ridley applies the universal acid and arrives at the conclusion that (almost) everything that is planned and directed leads to bad results, while what evolves under the pressures of competition and natural selection yields advances and improvements in society. Bottom-up mechanisms, he argues, are what creates innovation in the world, be it in nature, in culture, in technology or in any other area of society. For this view he gives explicit credit to Lucretius who, in his magnum opus De Rerum Natura, from the first century BC, proposed essentially the same idea, and to Adam Smith who, in The Wealth of Nations, argued for the central role of commerce in the development of society.

Sometimes his arguments look far-fetched, as when he argues that the state should stay out of the education business, or that the 2008 crisis was caused not by runaway private initiative but by misguided government policies. Nonetheless, even in these cases, the arguments are persuasive and always entertaining. Even someone like me, who believes there are some roles to be played by the state, ends up doubting his own convictions.

All in all, a must-read.


Uber temporarily halts self-driving cars in the wake of fatal accident

Uber decided to halt all self-driving car operations following a fatal accident involving an Uber car driving in autonomous mode in Tempe, Arizona. Although the details are sketchy, Elaine Herzberg, a 49-year-old woman, was crossing the street outside the crosswalk, with her bike, when she was fatally struck by a Volvo XC90 outfitted with the company’s sensing systems and operating in autonomous mode. She was taken to the hospital, where she later died of her injuries. A human safety driver was behind the wheel but did not intervene. The weather was clear and no special driving conditions were reported, but reports say she crossed the road suddenly, coming from a poorly lit area.

The accident raised concerns about the safety of autonomous vehicles and the danger they may pose to people. Uber’s self-driving operations will remain halted pending the investigation of the accident.

Video released by the Tempe police shows the poor lighting conditions and the sudden appearance of the woman with the bike. Judging from the camera images alone, the collision looks unavoidable; other sensors, however, might have helped.

In 2016, roughly one person died in traffic accidents for every 100 million miles travelled by cars in the United States. Uber has reportedly logged about 3 million miles in its autonomous vehicles. Since no technology will reduce the number of accidents to zero, further studies will be required to assess the comparative safety of autonomous versus non-autonomous vehicles.
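
A back-of-the-envelope calculation with the figures quoted above (and the strong assumption that the two kinds of mileage are directly comparable) shows how little statistical weight 3 million miles carries:

```python
human_fatality_rate = 1 / 100_000_000   # deaths per mile for human-driven cars (2016 figure above)
uber_autonomous_miles = 3_000_000       # miles reportedly logged by Uber's autonomous fleet

expected_deaths = human_fatality_rate * uber_autonomous_miles
print(f"Expected fatalities at the human-driver rate: {expected_deaths:.2f}")
# ~0.03, so a single fatal crash in 3 million miles tells us very little either
# way; far more mileage is needed for a statistically meaningful comparison.
```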

Photo credits: ABC-15 via Associated Press.

Nectome, a Y Combinator startup, wants to upload your mind

Y Combinator is a well-known startup accelerator that accepts and supports startups developing new ideas. Companies like Airbnb, Dropbox and Unbabel were incubated there, as were many others that went on to become successful.

Wild as the ideas pitched at Y Combinator may be, so far no proposal has been as ambitious as the one pitched by Nectome, a startup that wants to back up your mind. More precisely, Nectome wants to process and chemically preserve your brain, down to its most detailed structures, in order to make it possible to upload your mind sometime in the future. Robert McIntyre, founder and CEO of Nectome and an MIT graduate, will pitch his company at a meeting in New York next week.

Nectome is committed to the goal of archiving your mind, as the description on its website goes, by building the next generation of tools to preserve the connectome, the pattern of neuron interconnections that constitutes a brain. Nectome’s technology uses a process known as vitrifixation (also known as aldehyde-stabilized cryopreservation) to stabilize and preserve a brain, down to its finest structures.

The idea is to keep the physical structure of the brain intact for the future (even though the process destroys the actual brain), in the hope that it may one day be possible to reverse engineer and reproduce, in the memory of a computer, the working processes of that brain. This idea, that a particular brain could be simulated in a computer, a process known as mind uploading, is, of course, not novel. It was popularized by many authors, most famously by Ray Kurzweil, in his books. It has also been addressed in many non-fiction books, such as Superintelligence and The Digital Mind, both featured in this blog.

Photo by Nectome



AIs running wild at Facebook? Not yet, not even close!

Much has been written about two Artificial Intelligence systems developing their own language. Headlines like “Facebook shuts down AI after it invents its own creepy language” and “Facebook engineers panic, pull plug on AI after bots develop their own language” were all over the place, seeming to imply that we were on the verge of a significant incident in AI research.

As it happens, nothing significant really happened, and these headlines owe more to the media’s inordinate appetite for catastrophic news than to any real event. Most AI systems currently under development have narrow application domains and do not have the capability to develop their own general strategies, languages, or motivations.

To be fair, many AI systems do develop their own language. Whenever a neural network is trained to perform pattern recognition, for instance, it chooses a specific internal representation to encode features of the patterns under analysis. When everything goes smoothly, these internal representations correspond to important concepts in those patterns (the wheel of a car, say, or an eye) and are combined by the network to produce the output of interest. In fact, creating these internal representations, which in a way correspond to concepts in a language, is exactly one of the most interesting features of neural networks, and of deep neural networks in particular.
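
For readers who want to see what “internal representation” means in practice, here is a minimal sketch: in the toy network below (made-up sizes, untrained weights), the activations of the hidden layer are the network’s own encoding of the input, and nothing forces that encoding to be human-readable.

```python
import torch
import torch.nn as nn

# A toy classifier: the 16-dimensional hidden layer is the network's
# "internal representation" of its input. After training, its dimensions
# (or combinations of them) tend to encode features useful for the task,
# but the encoding is chosen by the optimizer, not by us.
model = nn.Sequential(
    nn.Linear(784, 16),   # input (e.g. a 28x28 image, flattened) -> hidden code
    nn.ReLU(),
    nn.Linear(16, 10),    # hidden code -> class scores
)

x = torch.rand(1, 784)                  # a fake input
hidden_code = model[1](model[0](x))     # the internal representation
print(hidden_code.shape)                # torch.Size([1, 16])
```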

Therefore, systems creating their own languages are nothing new, really. What happened with the Facebook agents that made the news is that two agents were being trained against each other, through self-play reinforcement learning, on a specific negotiation task. In this kind of setup, each agent tries to get the better deal out of the other, and both evolve towards becoming better at their respective tasks. As this post clearly describes, the agents communicated using English words. As training progressed, they started to use unconventional combinations of words to exchange information, leading to the seemingly strange exchanges behind the scary headlines, such as this one:

Bob: I can i i everything else

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i everything else

Alice: balls have a ball to me to me to me to me to me to me to me to me

Strange as this exchange may look, nothing out of the ordinary was really happening. The neural network training algorithms were simply finding concept representations which were used by the agents to communicate their intentions in this specific negotiation task (which involved exchanging balls and other items).

The experiment was stopped not because Facebook was afraid that some runaway intelligence explosion was underway, but because the objective was to have the agents use plain English, not a made-up language.

Image: Picture taken at the Institute for Systems and Robotics of Técnico Lisboa, courtesy of IST.