Meet Duplex, your new assistant, courtesy of Google

Advances in natural language processing have enabled systems such as Siri, Alexa, Google Assistant, and Cortana to serve anyone who owns a smartphone or a computer. Still, so far, none of these systems has managed to cross the thin line that would make us mistake them for humans. When we ask Alexa to play music or Siri to dial a telephone number, we know very well that we are talking to a computer, and the systems’ replies would remind us, were we to forget it.

It was to be expected that, as the technology evolved, this type of interaction would become more and more natural, possibly reaching a point where a computer could impersonate a real human, bringing us closer to Alan Turing’s vision: a situation where you cannot tell a human apart from a computer simply by talking to both.

In an event widely reported in the media, Google demonstrated Duplex at its I/O 2018 conference: a system able to process and execute requests in specific domains, interacting in a very human way with human operators. While Google states that the system is still under development and only able to handle very specific situations, one gets the feeling that, soon enough, digital assistants will be able to interact with humans without disclosing their artificial nature. You can read the Google AI blog post here, or just listen to a couple of examples, in which Duplex schedules a haircut or makes a restaurant reservation. The speech recognition system, the speech synthesis system, and the underlying knowledge base and natural language processing engines all operate flawlessly in these cases, lending weight to the widely held expectation that AI systems will soon replace humans in many specific tasks.

Photo by Kevin Bhagat on Unsplash


MIT distances itself from Nectome, a mind uploading company

The MIT Media Lab, a unit of MIT, decided to sever the ties that connected it with Nectome, a startup that proposes to make available a technology that processes and chemically preserves a brain, down to its most minute details, in order to make it possible, at least in principle, to simulate your brain and upload your mind, sometime in the future.

According to the MIT news release, “MIT’s connection to the company came into question after MIT Technology Review detailed Nectome’s promotion of its ‘100 percent fatal’ technology.”

As reported in this blog, Nectome claims that by preserving the brain, it may be possible, one day, “to digitize your preserved brain and use that information to recreate your mind”. Nectome acknowledges, however, that the technology is fatal to the brain donor and that there are no guarantees that future recovery of the memories, knowledge and personality will be possible.

Detractors have argued that the proposal is not sound, since simulating a preserved brain is a technology that is at least many decades in the future and may even be impossible in principle. The criticisms were, however, mostly based on the argument that the whole enterprise is profoundly unethical.

This kind of discussion between proponents of technologies aimed at performing whole brain emulation, sometime in the future, and detractors who argue that such an endeavor is fundamentally flawed has occurred in the past, most notably in a 2014 controversy concerning the objectives of the Human Brain Project. In that controversy, critics argued that the goal of a large-scale simulation of the brain is premature and unsound, and that funding should be redirected towards more conventional approaches to the understanding of brain functions. Supporters of the Human Brain Project approach argued that reconstructing and simulating the human brain is an important objective in itself, one that will bring many benefits and advance our knowledge of the brain and of the mind.

Picture by the author.

LIFE 3.0: Being Human in the Age of Artificial Intelligence

Max Tegmark’s latest book, LIFE 3.0: Being Human in the Age of Artificial Intelligence, is an enthralling journey into the future, when the developments in artificial intelligence create a new type of lifeform on Earth.

Tegmark proposes to classify life in three stages. Life 1.0, unintelligent life, can change its hardware and improve itself only through the very slow and blind process of natural evolution. Single-cell organisms, plants and simple animals are in this category. Life 2.0 is also unable to change its hardware (except through evolution, as for Life 1.0) but can change its software, stored in its brain, by using previous experience to learn new behaviors. Higher animals, and humans in particular, belong here. Since humans can now, to a limited extent, change their hardware (through prosthetics, cellphones, computers and other devices), they could also be considered Life 2.1.

Life 3.0 is the new generation of life, which can change both its software and its hardware. The ability to change the computational support (i.e., the physical basis of computation) results from technological advances, which will only accelerate with the advent of Artificial General Intelligence (AGI). The book is really about the future of a world where AGI enables humanity to create a whole range of new technologies and spread new forms of life through the cosmos.

The riveting prelude, The Tale of the Omega Team, the story of the group of people who “created” the first intelligence explosion on planet Earth, makes this a hard-to-put-down book. The rest of the book goes through the consequences of this intelligence explosion, a phenomenon the author believes will undoubtedly take place, sooner or later. Chapter 4 focuses on the explosion proper, and on how it could happen. Chapter 5, appropriately titled “Aftermath: The Next 10,000 Years”, is one of the most interesting ones, describing a number of long-term scenarios that could result from such an event. These scenarios range from a benevolent and enlightened dictatorship (by the AI) to the enslaved-god situation, where humanity keeps the AI in chains and uses it as a slave to develop new technologies inaccessible to unaided humanity’s simpler minds. Always present in these scenarios are the risks of a hostile takeover by a human-created AGI, a theme the book also addresses in depth, following the ideas proposed by Nick Bostrom in his book Superintelligence.

Being a cosmologist, Tegmark could not leave out the question of how life can spread through the Cosmos, a topic covered in depth in chapter 6, in a highly speculative fashion. Tegmark’s view is, to say the least, grandiose, envisaging a future where AGI will make it possible to spread life through the reachable universe, climbing the three levels of the Kardashev scale. The final chapters address (in a necessarily more superficial manner) the complex topics of goal setting for AI systems and artificial (or natural) consciousness. These topics felt somewhat less well developed; more complete and convincing treatments can be found elsewhere. The book ends with a description of the mission of the Future of Life Institute and of the Asilomar AI Principles.

A book like this cannot leave anyone indifferent, and you are likely to take one of two opposite sides: the optimists, with many famous representatives, including Elon Musk, Stuart Russell and Nick Bostrom, who believe AGI can be developed and used to make humanity prosper; or the pessimists, whose most visible member is probably Yuval Noah Harari, who has voiced very serious concerns about technological developments in his book Homo Deus and in this review of Life 3.0.

Portuguese Edition of The Digital Mind

IST Press, the publisher of Instituto Superior Técnico, just published the Portuguese edition of The Digital Mind, originally published by MIT Press.

The Portuguese edition, translated by Jorge Pereirinha Pires, follows the same organization and has been reviewed by a number of sources. The back-cover reviews are by Pedro Domingos, Srinivas Devadas, Pedro Guedes de Oliveira and Francisco Veloso.

A pre-publication excerpt appeared in the Público newspaper, under the title Até que mundos digitais nos levará o efeito da Rainha Vermelha, making the first chapter of the book publicly available.

There are also some publicly available reviews and pieces about this edition, including a podcast episode and a review on the radio.

The last invention of humanity

Irving John Good was a British mathematician who worked with Alan Turing in the famous Hut 8 of Bletchley Park, contributing to the war effort by decrypting messages encoded by the German Enigma machines. After the war, he became a professor at Virginia Tech and, later in life, served as a consultant on Stanley Kubrick’s cult movie 2001: A Space Odyssey.

Good (born Isadore Jacob Gudak to a Polish Jewish family) is credited with coining the term intelligence explosion, referring to the possibility that a super-intelligent system may, one day, be able to design an even more intelligent successor. In his own words:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

We are still very far from being able to design an artificially intelligent (AI) system smart enough to design and code even better AI systems. Our current efforts address very narrow fields and produce systems that do not have the general intelligence required to create the phenomenon I. J. Good was referring to. However, in some very restricted domains, we can see at work mechanisms that resemble that very phenomenon.

Go is a board game that is very difficult to master because of the huge number of possible games and the high number of possible moves at each position. Given this complexity, branch-and-bound approaches could not, until recently, be used to derive good playing strategies. Until only a few years ago, it was believed that it would take decades to create a program that could master the game of Go at a level comparable with the best human players.
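To get a feeling for the numbers involved, one can compare the game trees of chess and Go using commonly cited back-of-the-envelope averages (roughly 35 legal moves over about 80 plies for chess, versus roughly 250 legal moves over about 150 plies for Go); these figures are rough approximations, not exact counts:

```python
# Commonly cited rough averages; illustrative approximations only.
CHESS_BRANCHING, CHESS_PLIES = 35, 80
GO_BRANCHING, GO_PLIES = 250, 150

chess_tree = CHESS_BRANCHING ** CHESS_PLIES  # size of the chess game tree
go_tree = GO_BRANCHING ** GO_PLIES           # size of the Go game tree

# Express each as a power of ten for comparison.
print(f"chess ~ 10^{len(str(chess_tree)) - 1}")  # chess ~ 10^123
print(f"Go    ~ 10^{len(str(go_tree)) - 1}")     # Go    ~ 10^359
```

Brute-force search over a tree of 10^359 positions is hopeless, which is why Go resisted the search techniques that had conquered chess.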

In January 2016, DeepMind, an AI startup (which had by then been acquired by Google for a sum reported to exceed 500 million dollars), reported in an article in Nature that it had managed to master the complex game of Go using deep neural networks and a tree search engine. The system, called AlphaGo, was trained on databases of human games and eventually managed to soundly beat the best human players, becoming the best player in the world, as reported in this blog.

A couple of weeks ago, in October of 2017, DeepMind reported, in a second article in Nature, a system that mastered the game without using any human knowledge and became even more proficient. AlphaGo Zero did not use any human games to acquire knowledge about the game. Instead, it played millions of games (close to 30 million, in fact, over a period of 40 days) against another version of itself, eventually acquiring knowledge about tactics and strategies that the human race had slowly accumulated over more than two millennia. By simply playing against itself, the system went from a child’s level (random moves) to a novice level to a world-champion level. AlphaGo Zero steamrolled the original AlphaGo 100 games to 0, showing that it is possible to reach super-human strength without using any human-generated knowledge.
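The self-play idea can be illustrated on a much humbler game. The sketch below is emphatically not AlphaGo Zero’s actual algorithm (which combines deep neural networks with Monte Carlo tree search); it is a toy of my own construction: tabular self-play learning for single-pile Nim, where each player removes 1 to 3 stones and whoever takes the last stone wins. The program starts with zero knowledge and learns only from games against itself:

```python
import random

def legal_moves(stones):
    # In single-pile Nim, a move removes 1, 2, or 3 stones.
    return [m for m in (1, 2, 3) if m <= stones]

def train(n_stones=12, episodes=5000, alpha=0.5, eps=0.3, seed=0):
    """Learn Nim purely by self-play: no human games, no built-in strategy."""
    rng = random.Random(seed)
    # Q[(s, m)] = learned value of removing m stones when s remain.
    Q = {(s, m): 0.0 for s in range(1, n_stones + 1) for m in legal_moves(s)}
    for _ in range(episodes):
        s = rng.randint(1, n_stones)          # start from a random position
        while s > 0:
            moves = legal_moves(s)
            if rng.random() < eps:
                m = rng.choice(moves)         # explore
            else:
                m = max(moves, key=lambda a: Q[(s, a)])  # exploit
            if s - m == 0:
                target = 1.0                  # taking the last stone wins
            else:
                # The position passes to the opponent, whose best outcome
                # is our worst: a negamax-style bootstrap target.
                target = -max(Q[(s - m, a)] for a in legal_moves(s - m))
            Q[(s, m)] += alpha * (target - Q[(s, m)])
            s -= m                            # the other "self" moves next
    return Q

def greedy(Q, s):
    # The learned policy: pick the highest-valued move.
    return max(legal_moves(s), key=lambda m: Q[(s, m)])
```

After training, the greedy policy rediscovers the classical optimal strategy, namely always leave the opponent a multiple of 4 stones, without ever seeing a human game, which is, in miniature, the essence of what AlphaGo Zero did.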

In a way, the computer improved itself, simply by playing against itself until it reached perfection. Irving John Good, who died in 2009, would have liked to see this invention of mankind. It will not be the last one, yet…

Picture credits: Go board, picture taken by Hoge Rielen, available at Wikimedia Commons.

 

Stuart Russell and Sam Harris on The Dawn of Artificial Intelligence

In one of the latest episodes of his interesting podcast, Waking Up, Sam Harris discusses the future of Artificial Intelligence (AI) with Stuart Russell.

Stuart Russell is one of the foremost world authorities on AI, and the author of the most widely used textbook on the subject, Artificial Intelligence: A Modern Approach. Interestingly, most of the conversation focuses not so much on the potential of AI as on the potential dangers of the technology.

Many AI researchers have dismissed offhand the worries many people have expressed about the possibility of runaway Artificial Intelligence. In fact, most active researchers know very well that most of their time is spent worrying about the convergence of algorithms, the inefficiency of training methods, or difficult searches for the right architecture for some narrow problem. AI researchers spend no time at all worrying about the possibility that the systems they are developing will suddenly become too intelligent and a danger to humanity.

On the other hand, famous philosophers, scientists and entrepreneurs, such as Elon Musk, Richard Dawkins, Bill Gates, and Nick Bostrom have been very vocal about the possibility that man-made AI systems may one day run amok and become a danger to humanity.

From this duality, one is led to believe that only people who are outside the field really worry about the possibility of dangerous super-intelligences, while people inside the field pay little or no attention to that possibility and, in many cases, consider these worries baseless and misinformed.

That is why this podcast, with the participation of Stuart Russell, is interesting and well worth hearing. Russell cannot be accused of being an outsider to the field of AI, and yet his latest interests focus on the problem of making sure that the objectives of future AIs are closely aligned with those of the human race.

The Great Filter: are we rare, are we first, or are we doomed?

Fermi’s Paradox (the fact that we have never detected any sign of aliens even though, conceptually, life could be relatively common in the universe) has already been discussed in this blog, as new results come in about the rarity of life-bearing planets, the discovery of new Earth-like planets, and even the detection of possible signs of aliens.

There are a number of possible explanations for Fermi’s Paradox, and one of them is exactly that sufficiently advanced civilizations could retreat into their own planets or star systems, exploring the vastness of the nano-world and becoming digital minds.

A very interesting concept related to Fermi’s Paradox is the Great Filter theory, which states, basically, that if no intelligent civilizations exist in the galaxy, then we, as a civilization, are either rare, first, or doomed. As this post very clearly describes, one of these three explanations has to be true, if no other civilizations exist.

The Great Filter theory is based on Robin Hanson’s argument that the failure to find any extraterrestrial civilizations in the observable universe has to be explained by the fact that somewhere, in the sequence of steps that leads from planet formation to the creation of technological civilizations, there has to be an extremely unlikely event, which he called the Great Filter.
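Hanson’s argument is essentially multiplicative: the expected number of civilizations is the number of candidate planets times the probability of clearing every step in the sequence, so a single extremely unlikely step drives the whole product toward zero. The numbers below are purely illustrative placeholders, not estimates of anything:

```python
# Purely illustrative probabilities: the point is the arithmetic,
# not the specific values, which nobody actually knows.
n_planets = 1e22  # rough order of magnitude for the observable universe

steps = {
    "habitable planet":      1e-2,
    "abiogenesis":           1e-20,  # the hypothetical Great Filter step
    "complex life":          1e-1,
    "intelligence":          1e-2,
    "technological society": 1e-1,
}

expected_civilizations = n_planets
for probability in steps.values():
    expected_civilizations *= probability

print(expected_civilizations)  # ~1e-4: fewer than one civilization expected
```

With these made-up numbers, even 10^22 planets yield less than one technological civilization; moving the tiny factor from a past step to a future one is exactly what shifts the conclusion from “we are rare” to “we are doomed”.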

This Great Filter may be behind us, in the process that led from inorganic compounds to humans. That means that we, intelligent beings, are rare in the universe. Maybe the conditions that lead to life are extremely rare, either due to the instability of planetary systems, or to the low probability that life gets started in the first place, or to some other phenomenon that we were lucky enough to overcome.

It can also happen that the conditions that make the existence of life possible are relatively recent in the universe. That would mean that conditions for life only became common in the universe (or the galaxy) in the last few billion years. In that case, we may not be rare, but we would be the first, or among the first, planets to develop intelligent life.

The final explanation is that the Great Filter is not behind us, but ahead of us. That would mean that many technological civilizations develop but, in the end, they all collapse, due to unknown factors (some of which we can guess). In this case, we are doomed, like all other civilizations that, presumably, existed before us.

There is, of course, another group of explanations, which states that advanced civilizations do exist in the galaxy, but we are simply too dumb to contact or observe them. Actually, many people believe that we should not even be trying to contact them by broadcasting radio signals into space, advertising that we are here. It may simply be too dangerous.

 

Image by the Bureau of Land Management, available at Wikimedia Commons