LIFE 3.0: Being Human in the Age of Artificial Intelligence

Max Tegmark’s latest book, LIFE 3.0: Being Human in the Age of Artificial Intelligence, is an enthralling journey into a future where developments in artificial intelligence create a new type of life form on Earth.

Tegmark proposes to classify life in three stages. Life 1.0, unintelligent life, is able to change its hardware and improve itself only through the very slow and blind process of natural evolution. Single-cell organisms, plants and simple animals are in this category. Life 2.0 is also unable to change its hardware (except through evolution, as for Life 1.0) but can change its software, stored in the brain, by using previous experience to learn new behaviors. Higher animals and humans, in particular, belong here. Since humans can now, to a limited extent, change their hardware (through prosthetics, cellphones, computers and other devices), they could also be considered Life 2.1.

Life 3.0 is the new generation of life, which can change both its software and its hardware. The ability to change the computational support (i.e., the physical basis of computation) results from technological advances, which will only accelerate with the advent of Artificial General Intelligence (AGI). The book is really about the future of a world where AGI enables humanity to create a whole range of new technologies, and expand new forms of life through the cosmos.

The riveting prelude, The Tale of the Omega Team, the story of the group of people who “created” the first intelligence explosion on planet Earth, makes this a hard-to-put-down book. The rest of the book goes through the consequences of this intelligence explosion, a phenomenon the author believes will undoubtedly take place, sooner or later. Chapter 4 focuses on the explosion proper, and on how it could happen. Chapter 5, appropriately titled “Aftermath: The Next 10,000 Years”, is one of the most interesting, describing a number of long-term scenarios that could result from such an event. These scenarios range from a benevolent and enlightened dictatorship (by the AI) to the enslaved-god situation, where humanity keeps the AI in chains and uses it as a slave to develop new technologies inaccessible to unaided humanity’s simpler minds. Always present in these scenarios are the risks of a hostile takeover by a human-created AGI, a theme the book also addresses in depth, following the ideas proposed by Nick Bostrom in his book Superintelligence.

Being a cosmologist, Tegmark could not leave out the question of how life can spread through the Cosmos, a topic covered in depth in chapter 6, in a highly speculative fashion. Tegmark’s view is, to say the least, grandiose, envisaging a future where AGI will make it possible to spread life through the reachable universe, climbing the three levels of the Kardashev scale. The final chapters address (in a necessarily more superficial manner) the complex topics of goal setting for AI systems and artificial (or natural) consciousness. These topics felt less well developed here; more complete and convincing treatments can be found elsewhere. The book ends with a description of the mission of the Future of Life Institute, and the Asilomar AI Principles.

A book like this cannot leave anyone indifferent, and you will likely take one of two opposite sides: the optimists, with many famous representatives, including Elon Musk, Stuart Russell and Nick Bostrom, who believe AGI can be developed and used to make humanity prosper; or the pessimists, whose most visible member is probably Yuval Noah Harari, who has voiced very serious concerns about technology developments in his book Homo Deus and in his review of Life 3.0.

AlphaZero masters the game of Chess

DeepMind, a company that was acquired by Google, made headlines when its program AlphaGo Zero managed to become the best Go player in the world without using any human knowledge, a feat reported in this blog less than two months ago.

Now, just a few weeks after that result, DeepMind reports, in an article posted on arXiv.org, that its program AlphaZero has obtained a similar result for the game of chess.

Computer programs have been the world’s best chess players for a long time now, basically since Deep Blue defeated the reigning world champion, Garry Kasparov, in 1997. Deep Blue, like almost all other top chess programs, was deeply specialized in chess, and played the game using handcrafted position evaluation functions (based on grand-master games) coupled with deep search methods. Deep Blue evaluated more than 200 million positions per second, using a very deep search (between 6 and 8 moves, sometimes more) to identify the best possible move.

Modern computer programs use a similar approach, and have attained super-human levels, with the best programs (Komodo and Stockfish) reaching an Elo rating higher than 3300. The best human players have Elo ratings between 2800 and 2900. This difference implies that they have less than a one-in-ten chance of beating the top chess programs, since a difference of 366 points in Elo rating (anywhere on the scale) means a winning probability of roughly 90% for the higher-rated player.
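Under the standard Elo model this is easy to check, since the expected score depends only on the rating difference, not on where the two ratings sit on the scale. A minimal sketch (a 366-point gap actually gives about 89%, which the usual "90%" figure rounds up):

```python
def elo_win_probability(rating_a: float, rating_b: float) -> float:
    """Expected score of player A against player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# The probability depends only on the difference: 3300 vs 2934 and
# 2400 vs 2034 are both 366-point gaps, so both give the same value.
print(round(elo_win_probability(3300, 2934), 2))  # → 0.89
print(round(elo_win_probability(2400, 2034), 2))  # → 0.89
```

Note that an exact 90% expected score corresponds to a gap of about 382 points, since 400·log₁₀(9) ≈ 382.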

In contrast, AlphaZero learned the game without using any human generated knowledge, by simply playing against another copy of itself, the same approach used by AlphaGo Zero. As the authors describe, AlphaZero learned to play at super-human level, systematically beating the best existing chess program (Stockfish), and in the process rediscovering centuries of human-generated knowledge, such as common opening moves (Ruy Lopez, Sicilian, French and Reti, among others).

The flexibility of AlphaZero (which also learned to play Go and Shogi) provides convincing evidence that general-purpose learners are within the reach of current technology. As a side note, the author of this blog, who was a fairly decent chess player in his youth, reached an Elo rating of 2000. This means that he has less than a one-in-ten chance of beating someone with a rating of 2400, who has less than a one-in-ten chance of beating the world champion, who has less than a one-in-ten chance of beating AlphaZero. Quite humbling…

Image by David Lapetina, available at Wikimedia Commons.

Portuguese Edition of The Digital Mind

IST Press, the publisher of Instituto Superior Técnico, just published the Portuguese edition of The Digital Mind, originally published by MIT Press.

The Portuguese edition, translated by Jorge Pereirinha Pires, follows the same organization and has been reviewed by a number of sources. The back-cover reviews are by Pedro Domingos, Srinivas Devadas, Pedro Guedes de Oliveira and Francisco Veloso.

A pre-publication excerpt was run by the Público newspaper, under the title Até que mundos digitais nos levará o efeito da Rainha Vermelha, making the first chapter of the book publicly available.

There are also some publicly available reviews and pieces about this edition, including an episode of a podcast and a review on the radio.

The last invention of humanity

Irving John Good was a British mathematician who worked with Alan Turing in the famous Hut 8 of Bletchley Park, contributing to the war effort by decrypting the messages coded by the German Enigma machines. After that, he became a professor at Virginia Tech and, later in life, was a consultant for the cult movie 2001: A Space Odyssey, by Stanley Kubrick.

Irving John Good (born Isadore Jacob Gudak to a Polish Jewish family) is credited with coining the term intelligence explosion, referring to the possibility that a super-intelligent system may one day be able to design an even more intelligent successor. In his own words:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

We are still very far from being able to design an artificially intelligent (AI) system that is smart enough to design and code even better AI systems. Our current efforts address very narrow fields, and produce systems that do not have the general intelligence required to create the phenomenon I. J. Good was referring to. However, in some very restricted domains, we can see at work mechanisms that resemble that very phenomenon.

Go is a board game that is very difficult to master, because of the huge number of possible games and the high number of possible moves at each position. Given the complexity of the game, branch-and-bound approaches could not, until recently, be used to derive good playing strategies. Until only a few years ago, it was believed that it would take decades to create a program that could master the game of Go at a level comparable with the best human players.

In January 2016, DeepMind, an AI startup (which had by then been acquired by Google for a sum reported to exceed 500 million dollars), reported in an article in Nature that it had managed to master the complex game of Go by using deep neural networks and a tree search engine. The system, called AlphaGo, was trained on databases of human games and eventually managed to soundly beat the best human players, becoming the best player in the world, as reported in this blog.

A couple of weeks ago, in October of 2017, DeepMind reported, in a second article in Nature, that it had programmed a system that became even more proficient at the game, mastering it without using any human knowledge. AlphaGo Zero did not use any human games to acquire knowledge about the game. Instead, it played millions of games (close to 30 million, in fact, over a period of 40 days) against another version of itself, eventually acquiring knowledge about tactics and strategies that the human race has slowly built up over more than two millennia. By simply playing against itself, the system went from child level (random moves) to novice level to world-champion level. AlphaGo Zero steamrolled the original AlphaGo by 100 games to 0, showing that it is possible to reach super-human strength without using any human-generated knowledge.
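The self-play recipe — no human data, only games against a copy of itself — can be illustrated on a far smaller game. The sketch below learns a toy Nim variant (take 1 or 2 stones; taking the last stone wins) with a simple tabular value update. Everything here (the game, the tabular learner, the parameters) is an illustrative assumption and bears no resemblance to AlphaGo Zero's actual deep-network-plus-tree-search architecture; it only captures the spirit of improving by playing oneself:

```python
import random

def train_self_play(n_stones=10, episodes=20000, lr=0.05, eps=0.1, seed=0):
    """Learn a value table for toy Nim purely by self-play."""
    rng = random.Random(seed)
    # value[s]: estimated win probability for the player to move with s stones left
    value = {s: 0.5 for s in range(1, n_stones + 1)}
    for _ in range(episodes):
        s, history = n_stones, []
        while s > 0:
            moves = [m for m in (1, 2) if m <= s]
            if rng.random() < eps:          # occasional exploration
                m = rng.choice(moves)
            else:                            # greedy: win now, or leave the worst state
                m = max(moves, key=lambda m: 1.0 if s - m == 0 else 1.0 - value[s - m])
            history.append(s)
            s -= m
        # the player who took the last stone won; credit alternates going backwards
        result = 1.0
        for state in reversed(history):
            value[state] += lr * (result - value[state])
            result = 1.0 - result
    return value

value = train_self_play()
# After training, multiples of 3 (losing positions) get low values, the rest high.
print({s: round(v, 2) for s, v in sorted(value.items())})
```

Starting from a uniform 0.5 table (the "random moves" stage), the two copies bootstrap each other toward the known optimal strategy: avoid leaving yourself a multiple of 3.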

In a way, the computer improved itself, by simply playing against itself until it reached perfection. Irving John Good, who died in 2009, would have liked to see this invention of mankind. Which will not be the last, yet…

Picture credits: Go board, picture taken by Hoge Rielen, available at Wikimedia Commons.

 

AIs running wild at Facebook? Not yet, not even close!

Much has been written about two Artificial Intelligence systems developing their own language. Headlines like “Facebook shuts down AI after it invents its own creepy language” and “Facebook engineers panic, pull plug on AI after bots develop their own language” were all over the place, seeming to imply that we were on the verge of a significant incident in AI research.

As it happens, nothing significant really happened, and these headlines are only due to the inordinate appetite of the media for catastrophic news. Most AI systems currently under development have narrow application domains, and do not have the capabilities to develop their own general strategies, languages, or motivations.

To be fair, many AI systems do develop their own language. Whenever a neural network is trained to perform pattern recognition, for instance, the network settles on a specific internal representation to encode specific features of the pattern under analysis. When everything goes smoothly, these internal representations correspond to important concepts in the patterns under analysis (the wheel of a car, say, or an eye) and are combined by the neural network to produce the output of interest. In fact, creating these internal representations, which, in a way, correspond to concepts in a language, is exactly one of the most interesting features of neural networks, and of deep neural networks in particular.
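A hand-wired toy network makes this concrete: the hidden layer forms an internal representation (here, "OR" and "AND" concepts) that the output layer combines into the final answer (XOR). In a trained network these internal features are discovered automatically from data; wiring them by hand, as below, is purely illustrative:

```python
import numpy as np

def step(z):
    """Simple threshold activation."""
    return (z > 0).astype(float)

# 2-2-1 network computing XOR. Each column of W1 is one hidden unit:
# both units sum the two inputs, but threshold at different levels.
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([-0.5, -1.5])   # unit 0 fires on OR, unit 1 fires on AND
W2 = np.array([1.0, -1.0])    # output: OR minus AND, thresholded
b2 = -0.5

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    h = step(np.array(x) @ W1 + b1)   # the internal representation
    y = step(h @ W2 + b2)
    print(x, "hidden:", h, "xor:", int(y))
```

The hidden vector is the network's private "language" for the input: XOR is exactly "OR but not AND", and that is the concept pair the hidden layer encodes.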

Therefore, systems creating their own languages are nothing new, really. What happened with the Facebook agents that made the news was that two systems were being trained using a specific algorithm, a generative adversarial network. When this training method is used, two systems are trained against each other: system A tries to make the task of system B more difficult, and vice versa. In this way, both systems evolve towards becoming better at their respective tasks, whatever those are. As this post clearly describes, the two systems were being trained at a specific negotiation task, and they communicated using English words. As they evolved, the systems started to use unconventional combinations of words to exchange information, leading to the seemingly strange exchanges behind the scary headlines, such as this one:

Bob: I can i i everything else

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i everything else

Alice: balls have a ball to me to me to me to me to me to me to me to me

Strange as this exchange may look, nothing out of the ordinary was really happening. The neural network training algorithms were simply finding concept representations which were used by the agents to communicate their intentions in this specific negotiation task (which involved exchanging balls and other items).

The experiment was stopped not because Facebook was afraid that some runaway explosive intelligence process was underway, but because the objective was to have the agents use plain English, not a made-up language.

Image: Picture taken at the Institute for Systems and Robotics of Técnico Lisboa, courtesy of IST.

The Digital Mind: How Science is Redefining Humanity

Following its release in the US, The Digital Mind, published by MIT Press, is now available in Europe, at an Amazon store near you (and possibly in other bookstores). The book covers the evolution of technology, leading towards the expected emergence of digital minds.

Here is a short rundown of the book, kindly provided by yours truly, the author.

New technologies have been introduced into human lives at an ever-increasing rate, since the first significant advances took place with the cognitive revolution, some 70,000 years ago. Although electronic computers have been around for only a few decades, they represent just the latest way to process information and create order out of chaos. Before computers, the job of processing information was done by living organisms, which are nothing more than complex information-processing devices, created by billions of years of evolution.

Computers execute algorithms: sequences of small steps that, in the end, perform some desired computation, be it simple or complex. Algorithms are everywhere, and they have become an integral part of our lives. Evolution is, in itself, a complex and long-running algorithm that created all species on Earth. The most advanced of these species, Homo sapiens, was endowed with a brain that is the most complex information-processing device ever devised. Brains enable humans to process information in a way unparalleled by any other species, living or extinct, or by any machine. They provide humans with intelligence, consciousness and, some believe, even with a soul, a characteristic that makes humans different from all other animals and from any machine in existence.

But brains also enabled humans to develop science and technology to a point where it is possible to design computers with a power comparable to that of the human brain. Artificial intelligence will one day make it possible to create intelligent machines, and computational biology will one day enable us to model, simulate and understand biological systems, and even complete brains, with unprecedented levels of detail. From these efforts, new minds will eventually emerge: minds that will emanate from the execution of programs running on powerful computers. These digital minds may one day rival our own, become our partners and replace humans in many tasks. They may usher in a technological singularity, a revolution in human society unlike any that came before. They may make humans obsolete, even a threatened species, or they may make us super-humans or demi-gods.

How will we create these digital minds? How will they change our daily lives? Will we recognize them as equals or will they forever be our slaves? Will we ever be able to simulate truly human-like minds in computers? Will humans transcend the frontiers of biology and become immortal? Will humans remain, forever, the only known intelligence in the universe?

 

Will the fourth industrial revolution destroy or create jobs?

The impact of the fourth industrial revolution on jobs has been much discussed.

On one side, there are the traditional economists, who argue that technological advances have always created more and better jobs than the ones they destroyed. On the other side are those who believe that, with the arrival of artificial intelligence and robotics, there will simply not be enough jobs left that cannot be done by machines.

So, in this post, I try to present a balanced analysis of the subject, as deep as the space and time available allow.

Many studies have addressed the question of which jobs are more likely to be destroyed by automation.  This study, by McKinsey, provides a very comprehensive analysis.


Recently, The Economist also published a fairly balanced analysis of the topic, already posted in this blog. In this analysis, The Economist makes a reference to a number of studies on the jobs that are at high risk but, in the end, it sides with the opinion that enough jobs will be created to replace the ones technology will destroy.

A number of books and articles have been written on the topic, including “Raising the Floor“, “The Wealth of Humans: Work, Power, and Status in the Twenty-first Century“, “The Second Machine Age“, and “No More Work“, some of them already reviewed in this blog.

In most cases, the authors of these books advocate the need for significant changes in the way society is organized, and in the types of social contracts that need to be drawn. Guaranteeing everyone a universal basic income has become a very popular proposal, as a way to address the question of how humanity will live in a time when there are far fewer jobs to go around.

Further evidence that some deep change is in the cards is provided by data showing that, since the beginning of the 21st century, income has been moving away from jobs (and workers) towards capital (and large companies).


On the other side of the debate, there are many people who believe that humans will always be able to adapt and add value to society, regardless of what machines can or cannot do. David Autor, in his TED talk, makes a compelling point that many times before it was argued that “this time is different” and that it never was.

Other articles, including this one in the Washington Post, argue that the fears are overblown. The robots will not be coming in large numbers, to replace humans. Not in the near future, anyway.

Other economists, such as Richard Freeman, in an article published in Harvard Magazine, agree that the fears are unwarranted: “We should worry less about the potential displacement of human labor by robots than about how to share fairly across society the prosperity that the robots produce.”

His point is that the problem is not so much the lack of jobs as the depression of wages. Jobs may still exist, but they will not be well paid, and the existing imbalances in income distribution will only become worse.

Maybe, in the end, this opinion represents a balanced synthesis of the two competing views: jobs will still exist, for anyone who wants to take them, but there will be competition for them (from robots and intelligent agents), pushing down wages.

European Parliament committee approves proposal to give robots legal status and responsibilities

The committee on legal affairs of the European Parliament has drafted and approved a report that addresses many of the legal, social and financial consequences of the development of robots and artificial intelligence (AI).

The draft report addresses a large number of issues related to the advances of robotics, AI and related technologies, and proposes a number of European regulations to govern the use of robots and other advanced AI agents.

The report was approved with a 17-2 vote (and two abstentions) by the parliament’s legal affairs committee.


Among many other issues addressed, the report considers:

  • The question of legal status: “whereas, ultimately, robots’ autonomy raises the question of their nature in the light of the existing legal categories – of whether they should be regarded as natural persons, legal persons, animals or objects – or whether a new category should be created”, advancing with the proposal of “creating a specific legal status for robots, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons with specific rights and obligations…”
  • The impact of robotics and AI on employment and social security, concluding that “consideration should be given to the possible need to introduce corporate reporting requirements on the extent and proportion of the contribution of robotics and AI to the economic results of a company for the purpose of taxation and social security contributions; takes the view that in the light of the possible effects on the labour market of robotics and AI a general basic income should be seriously considered, and invites all Member States to do so;”
  • The need for a clear and unambiguous registration system for robots, recommending that “a system of registration of advanced robots should be introduced, and calls on the Commission to establish criteria for the classification of robots with a view to identifying the robots that would need to be registered;”

 

How to create a mind

Ray Kurzweil’s latest book, How to Create a Mind, published in 2012, is an interesting read and shows some welcome change in his views of science and technology. Unlike some of his previous (and influential) books, including The Singularity is Near, The Age of Spiritual Machines and The Age of Intelligent Machines, the main point of this book is not that exponential technological development will bring about a technological singularity in a few decades.


True, that theme is still present, but it takes second place to the main theme of the book: a concrete (although incomplete) proposal to build intelligent systems inspired by the architecture of the human neocortex.

Kurzweil’s main point in this book is to present a model of the human neocortex, which he calls the Pattern Recognition Theory of Mind (PRTM). In this theory, the neocortex is simply a very powerful pattern recognition system, built out of about 300 million (his number, not mine) similar pattern recognizers. The input to each of these recognizers can come from external inputs, through the senses, from the older (evolutionarily speaking) parts of the brain, or from the output of other pattern recognizers in the neocortex. Each recognizer is relatively simple, and can only recognize a simple pattern (say, the word APPLE) but, through complex interconnections with other recognizers above and below, it makes possible all sorts of thinking and abstract reasoning.

Each pattern consists, in essence, of a short sequence of symbols, and is connected, through bundles of axons, to the actual place in the cortex where those symbols are activated by other pattern recognizers. In most cases, the memories these recognizers represent must be accessed in a specific order. He gives the example that very few people can recite the alphabet backwards, or even their social security number backwards, which he takes as evidence of the sequential nature of operation of these pattern recognizers.
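The sequential nature of these recognizers can be caricatured in a few lines of code (a toy illustration only; the PRTM does not prescribe any particular implementation): a recognizer fires when its sub-patterns arrive in the learned order, which is exactly why the reversed sequence fails even though it contains the same symbols.

```python
class Recognizer:
    """Toy sequence-based pattern recognizer in the spirit of the PRTM."""

    def __init__(self, name, sequence):
        self.name = name
        self.sequence = sequence   # ordered sub-patterns it must observe

    def matches(self, inputs):
        """True if `inputs` contains self.sequence in order (as a subsequence)."""
        it = iter(inputs)
        return all(symbol in it for symbol in self.sequence)

apple = Recognizer("APPLE", list("APPLE"))
print(apple.matches(list("APPLE")))   # True: symbols in the learned order
print(apple.matches(list("ELPPA")))   # False: same symbols, wrong order
```

Because the iterator is consumed left to right, intervening noise is tolerated but reordering is not, mirroring the claim that these memories can only be replayed forwards.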

The key point of the book is that the actual algorithms used to build and structure a neocortex may soon become well understood, and may be used to build intelligent machines endowed with true, strong Artificial Intelligence. How to Create a Mind falls somewhat short of the promise in its subtitle, The Secret of Human Thought Revealed, but still makes for interesting reading.

Artificial Intelligence developments: the year in review

TechCrunch, a popular site dedicated to technology news, has published a list of the top Artificial Intelligence news of 2016.

2016 seems indeed to have been the year Artificial Intelligence (AI) left the confinement of university labs to come into public view.


Several of the news items selected by TechCrunch were also covered in this blog.

In March, AlphaGo, a Go-playing program developed by Google’s DeepMind, defeated 18-time world champion Lee Sedol (reference in the TechCrunch review).

Digital Art, where deep learning algorithms learn to paint in the style of a particular artist, was also the topic of one post (reference in the TechCrunch review).

In May, Digital Minds posted Moore’s law is dead, long live Moore’s law, describing how Google’s new chip can be used to run deep learning algorithms using Google’s TensorFlow (related article in the TechCrunch review).

TechCrunch has identified a number of other relevant developments that make for an interesting reading, including the Facebook-Amazon-Google-IBM-Microsoft mega partnership on AI, the Facebook strategy on AI and the news about the language invented by Google’s translation tool.

Will the AI wave gain momentum in 2017, as predicted by this article? I think the chances are good, but only the future will tell.