Uber temporarily halts self-driving cars in the wake of fatal accident

Uber decided to halt all self-driving car operations following a fatal accident involving an Uber car driving in autonomous mode in Tempe, Arizona. Although the details are still sketchy, Elaine Herzberg, a 49-year-old woman, was crossing the street, outside the crosswalk, with her bike, when she was fatally struck by a Volvo XC90 outfitted with the company’s sensing systems, operating in autonomous mode. She was taken to the hospital, where she later died of her injuries. A human safety driver was behind the wheel but did not intervene. The weather was clear and no special driving conditions were reported, although she appears to have crossed the road suddenly, coming from a poorly lit area.

The accident raised concerns about the safety of autonomous vehicles and the danger they may pose to people. Uber has decided to halt all self-driving car operations, pending the investigation of the accident.

Video released by the Tempe police shows the poor lighting conditions and the sudden appearance of the woman with the bike. Judging from the camera images alone, the collision looks unavoidable; other sensors, on the other hand, might have helped.

In 2016, about one person died in traffic accidents for every 100 million miles travelled by car. Uber has reportedly logged 3 million miles in its autonomous vehicles. Since no technology will reduce the number of accidents to zero, further studies will be required to assess the comparative safety of autonomous vs. non-autonomous vehicles.
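As a back-of-the-envelope illustration (and nothing more, since a single event carries no statistical weight), one can put the two figures above side by side; the numbers in this sketch are just the rough ones quoted in the text:

```python
# Naive comparison of fatality rates (illustrative only: one accident
# over 3 million miles is far too small a sample to draw conclusions).

human_rate = 1 / 100e6    # ~1 fatality per 100 million miles (2016 figure)
uber_miles = 3e6          # miles reportedly logged by Uber's fleet
observed = 1              # the Tempe accident

expected = human_rate * uber_miles
print(f"Fatalities expected at the human-driver rate: {expected:.2f}")  # 0.03
print(f"Fatalities observed: {observed}")
```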

Photo credits: ABC-15 via Associated Press.


Nectome, a Y Combinator startup, wants to upload your mind

Y Combinator is a well-known startup accelerator that accepts and supports startups developing new ideas. Well-known companies like Airbnb, Dropbox, and Unbabel were incubated there, as were many others that became successful.

Wild as the ideas pitched at Y Combinator may be, however, so far no proposal has been as ambitious as the one pitched by Nectome, a startup that wants to back up your mind. More precisely, Nectome wants to process and chemically preserve your brain, down to its most detailed structures, in order to make it possible to upload your mind sometime in the future. Robert McIntyre, founder and CEO of Nectome and an MIT graduate, will pitch his company at a meeting in New York next week.

Nectome is committed to the goal of archiving your mind, as the description on its website goes, by building the next generation of tools to preserve the connectome, the pattern of neuron interconnections that constitutes a brain. Nectome’s technology uses a process known as vitrifixation (also known as Aldehyde-Stabilized Cryopreservation) to stabilize and preserve a brain, down to its finest structures.

The idea is to keep the physical structure of the brain intact for the future (even though that will involve destroying the actual brain), in the hope that one may some day reverse engineer and reproduce, in the memory of a computer, the working processes of that brain. This idea, that one may be able to simulate a particular brain in a computer, a process known as mind uploading, is, of course, not novel. It was popularized by many authors, most famously by Ray Kurzweil in his books. It has also been addressed in many non-fiction books, such as Superintelligence and The Digital Mind, both featured in this blog.

Photo by Nectome



The Computer and the Brain

The Computer and the Brain, first published in 1958, is a delightful little book by John von Neumann, his attempt to compare two very different information processing devices: computers and brains. Although it addresses two topics that have developed enormously in the decades since von Neumann’s death, the book retains more than merely historical interest.

John von Neumann’s genius comes through very clearly in this essay. Sixty years ago, very few people knew what a computer was, and probably even fewer had any idea of how the brain performed its magic. This book, written just a few years after the invention of the transistor (by Bardeen and Brattain) and the discovery of the membrane mechanisms that explain the electrical behaviour of neurons (by Hodgkin and Huxley), nonetheless compares, in very clear terms, the relative computational power of computers and brains.

Von Neumann’s aim is to compare many of the characteristics of the processing devices used by computers (vacuum tubes and transistors) with those used by the brain (neurons). His goal is an objective comparison of the two technologies in terms of their ability to process information, addressing speed, size, memory, and other characteristics of the two types of information processing devices.

One of the central and (to me) most interesting parts of the book is the comparison of artificial information processing devices (vacuum tubes and transistors) with natural information processing devices (neurons), in terms of speed and size.

Von Neumann concludes that vacuum tubes and transistors are faster than neurons by a factor of 10,000 to 100,000, and occupy about 1,000 times more space (with the technologies of the day). Altogether, if one assumes that speed can be traded for number of devices (for instance, reusing electronic devices to perform computations that, in the brain, are performed by slower but independent neurons), his comparisons lead to the conclusion (not explicit in the book, I must add) that an electronic computer the size of a human brain would be one to two orders of magnitude less powerful than the human brain itself.

John von Neumann could not have predicted, in 1957, that transistors would be packed, by the billions, on integrated circuits no larger than a postage stamp. If one uses the numbers that correspond to the technologies of today, one is led to conclude that a modern CPU (such as the Intel Core i7), with a billion transistors operating in the nanosecond range, is a few orders of magnitude (about 10,000 times) more powerful than the human brain, with its hundred billion neurons operating in the millisecond range.

Of course, one has to consider, as John von Neumann also wrote, that a neuron is considerably more complex than a transistor and can perform more complex computations. But even if one takes that into account and assumes that a transistor is roughly equivalent to a synapse in raw computing power, one gets the final result that the human brain and an Intel Core i7 have about the same raw processing power.
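The order-of-magnitude arithmetic behind this comparison can be spelled out. A minimal sketch, assuming the ballpark figures quoted above plus a conventional estimate of about a thousand synapses per neuron (all numbers are rough assumptions, not measurements):

```python
# Order-of-magnitude comparison of raw switching rates. All figures are
# rough assumptions taken from the text, not precise measurements.

cpu_raw = 1e9 * 1e9          # ~1e9 transistors, nanosecond range -> ~1e18 ops/s

brain_neurons = 1e11 * 1e3   # ~1e11 neurons, millisecond range   -> ~1e14 ops/s
brain_synapses = 1e14 * 1e3  # ~1e14 synapses (assuming ~1,000 per neuron)
                             #                                    -> ~1e17 ops/s

print(f"CPU vs brain, counting neurons:  {cpu_raw / brain_neurons:.0e}x")   # ~1e4
print(f"CPU vs brain, counting synapses: {cpu_raw / brain_synapses:.0e}x")  # ~1e1
```

Counting synapses instead of neurons closes most of the 10,000x gap, which is the sense in which the brain and a modern CPU come out roughly even.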

It is a sobering thought, one which von Neumann would certainly have liked to share.

LIFE 3.0: Being Human in the Age of Artificial Intelligence

Max Tegmark’s latest book, LIFE 3.0: Being Human in the Age of Artificial Intelligence, is an enthralling journey into a future in which developments in artificial intelligence create a new type of lifeform on Earth.

Tegmark proposes to classify life in three stages. Life 1.0, unintelligent life, is able to change its hardware and improve itself only through the very slow and blind process of natural evolution. Single-cell organisms, plants, and simple animals are in this category. Life 2.0 is also unable to change its hardware (except through evolution, as for Life 1.0) but can change its software, stored in its brain, using previous experience to learn new behaviors. Higher animals, and humans in particular, belong here. Humans can now, to a limited extent, change their hardware (through prosthetics, cellphones, computers, and other devices), so they might now be considered Life 2.1.

Life 3.0 is the new generation of life, which can change both its software and its hardware. The ability to change the computational support (i.e., the physical basis of computation) results from technological advances, which will only accelerate with the advent of Artificial General Intelligence (AGI). The book is really about the future of a world where AGI enables humanity to create a whole range of new technologies and to spread new forms of life through the cosmos.

The riveting prelude, The Tale of the Omega Team, the story of the group of people who “created” the first intelligence explosion on planet Earth, makes this a hard-to-put-down book. The rest of the book goes through the consequences of this intelligence explosion, a phenomenon the author believes will undoubtedly take place, sooner or later. Chapter 4 focuses on the explosion proper, and on how it could happen. Chapter 5, appropriately titled “Aftermath: The Next 10,000 Years”, is one of the most interesting, describing a number of long-term scenarios that could result from such an event. These scenarios range from a benevolent and enlightened dictatorship (by the AI) to the “enslaved god” situation, where humanity keeps the AI in chains and uses it as a slave to develop new technologies, inaccessible to unaided humanity’s simpler minds. Always present in these scenarios are the risks of a hostile takeover by a human-created AGI, a theme the book also addresses in depth, following on the ideas proposed by Nick Bostrom in his book Superintelligence.

Being a cosmologist, Tegmark could not leave out the question of how life can spread through the Cosmos, a topic covered in depth, in a highly speculative fashion, in chapter 6. Tegmark’s view is, to say the least, grandiose, envisaging a future where AGI makes it possible to spread life through the reachable universe, climbing the three levels of the Kardashev scale. The final chapters address (in a necessarily more superficial manner) the complex topics of goal setting for AI systems and artificial (or natural) consciousness. These topics somehow felt less well developed; more complete and convincing treatments can be found elsewhere. The book ends with a description of the mission of the Future of Life Institute and the Asilomar AI Principles.

A book like this cannot leave anyone indifferent, and you will likely take one of two opposing sides: that of the optimists, with many famous representatives, including Elon Musk, Stuart Russell, and Nick Bostrom, who believe AGI can be developed and used to make humanity prosper; or that of the pessimists, whose most visible member is probably Yuval Noah Harari, who has voiced very serious concerns about these technological developments in his book Homo Deus and in this review of Life 3.0.

Consciousness: Confessions of a Romantic Reductionist

Christof Koch, the author of “Consciousness: Confessions of a Romantic Reductionist”, is not only a renowned researcher in brain science but also the president of the Allen Institute for Brain Science, one of the foremost institutions in brain research. What he has to tell us about consciousness, and how he believes it is produced by the brain, is certainly of great interest to anyone interested in these topics.

However, the book is more than just another philosophical treatise on the issue of consciousness: it is also a bit of an autobiography and an open window into Koch’s own consciousness.

At less than 200 pages (in the paperback edition), this book is a good starting point for those interested in the centuries-old problem of mind-body duality and in how a physical object (the brain) creates such an ethereal thing as a mind. Koch clearly describes and addresses the central issue of why there is such a thing as consciousness in humans, and how it creates self-awareness, free will (maybe), and the qualia that characterize the subjective experiences each and (almost) every human has.

In Koch’s view, consciousness is not a thing that is either on or off. He ascribes different levels of consciousness to animals and even to less complex creatures and systems. Consciousness, he argues, arises because very complex systems have a high-dimensional state space, with a subjective experience corresponding to each configuration of this state space, a view in line with Giulio Tononi’s Integrated Information Theory, which Koch champions. In this view, computers and other complex systems can also exhibit some degree of consciousness, although a much smaller one than living entities, since they are much less complex.

He goes on to describe several approaches that have aimed at elucidating the complex feedback loops that must exist in brains in order to create these complex state spaces. Modern experimental techniques can analyze the differences between awake (conscious) and asleep (unconscious) brains, and learn from these differences what exactly creates consciousness in a brain.

Other parts of the book are more autobiographical. Koch describes his life-long efforts to address these questions, many of them developed together with Francis Crick, who remains a reference for him, both as a scientist and as a person. The final chapter is more philosophical, addressing questions for which we have no answer yet, and may never have, such as “Why is there something instead of nothing?” or “Did an all-powerful God create the universe, 14 billion years ago, complete with the laws of physics, matter and energy, or is this God simply a creation of man?”.

All in all, this is excellent reading, accessible to anyone interested in the topic but still deep and scientifically rigorous.

AlphaZero masters the game of Chess

DeepMind, a company acquired by Google, made headlines when its program AlphaGo Zero managed to become the best Go player in the world without using any human knowledge, a feat reported in this blog less than two months ago.

Now, just a few weeks after that result, DeepMind reports, in an article posted on arXiv.org, that its program AlphaZero obtained a similar result for the game of chess.

Computer programs have been the world’s best chess players for a long time now, basically since Deep Blue defeated the reigning world champion, Garry Kasparov, in 1997. Deep Blue, like almost all other top chess programs, was deeply specialized in chess, playing the game with handcrafted position evaluation functions (based on grandmaster games) coupled with deep search methods. Deep Blue evaluated more than 200 million positions per second, using a very deep search (between 6 and 8 moves, sometimes more) to identify the best possible move.
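For readers unfamiliar with that recipe, here is a minimal sketch of deep search plus handcrafted evaluation, in the spirit of classic engines. This is an illustrative toy, not Deep Blue’s actual code: `legal_moves`, `apply_move`, and `evaluate` are hypothetical helpers one would have to supply.

```python
# Toy depth-limited minimax with alpha-beta pruning: the skeleton of a
# classic chess engine. legal_moves(), apply_move() and evaluate() are
# hypothetical helpers; evaluate() stands in for the handcrafted scoring
# function tuned on grandmaster games.

def alphabeta(position, depth, alpha, beta, maximizing):
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)          # static, handcrafted score
    if maximizing:
        value = float("-inf")
        for move in moves:
            child = apply_move(position, move)
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:              # prune: this line is already refuted
                break
        return value
    else:
        value = float("inf")
        for move in moves:
            child = apply_move(position, move)
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value
```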

Modern chess programs use a similar approach and have attained super-human levels, with the best (Komodo and Stockfish) reaching Elo ratings higher than 3300. The best human players have Elo ratings between 2800 and 2900. This difference implies that they have less than a one in ten chance of beating the top chess programs, since a difference of 366 points in Elo rating (anywhere on the scale) means a 90% probability of winning for the higher-rated player.
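That claim follows directly from the standard Elo expectation formula, which is easy to check; the ratings below are simply the ones quoted above:

```python
def elo_win_probability(rating_a, rating_b):
    """Standard Elo expected score of player A against player B."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

print(elo_win_probability(3300, 2934))  # 366-point gap -> ~0.89, i.e. ~90%
print(elo_win_probability(2900, 3300))  # top human vs engine -> ~0.09
```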

In contrast, AlphaZero learned the game without using any human-generated knowledge, by simply playing against another copy of itself, the same approach used by AlphaGo Zero. As the authors describe, AlphaZero learned to play at a super-human level, systematically beating the best existing chess program (Stockfish), and in the process rediscovered centuries of human-generated knowledge, such as common openings (the Ruy Lopez, Sicilian, French, and Réti, among others).

The flexibility of AlphaZero (which also learned to play Go and shogi) provides convincing evidence that general-purpose learners are within the reach of the technology. As a side note, the author of this blog, who was a fairly decent chess player in his youth, reached an Elo rating of 2000. This means he has less than a one in ten chance of beating someone with a rating of 2400, who has less than a one in ten chance of beating the world champion, who has less than a one in ten chance of beating AlphaZero. Quite humbling…
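Using the same expectation formula as above, that chain of one-in-ten chances compounds quickly; in this sketch the 3200 figure for AlphaZero is an assumed rating, used purely for illustration:

```python
def elo_win_probability(rating_a, rating_b):
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

# Ladder of ~400-point gaps: blog author -> 2400 player -> world champion
# -> AlphaZero (3200 is an assumed rating, for illustration only).
ladder = [2000, 2400, 2800, 3200]
p = 1.0
for lower, higher in zip(ladder, ladder[1:]):
    p *= elo_win_probability(lower, higher)   # ~0.09 per step
print(f"Naive chained chance of beating AlphaZero: ~{p:.1e}")  # ~7.5e-04
```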

Image by David Lapetina, available at Wikimedia Commons.

Portuguese Edition of The Digital Mind

IST Press, the publisher of Instituto Superior Técnico, just published the Portuguese edition of The Digital Mind, originally published by MIT Press.

The Portuguese edition, translated by Jorge Pereirinha Pires, follows the same organization and has been reviewed by a number of sources. The back-cover reviews are by Pedro Domingos, Srinivas Devadas, Pedro Guedes de Oliveira, and Francisco Veloso.

A pre-publication was run by the Público newspaper, under the title Até que mundos digitais nos levará o efeito da Rainha Vermelha (“To what digital worlds will the Red Queen effect take us”), making the first chapter of the book publicly available.

There are also some publicly available reviews and pieces about this edition, including an episode of a podcast and a review on the radio.