Virtually Human: the promise of digital immortality

Martine Rothblatt’s latest book, Virtually Human: The Promise – and the Peril – of Digital Immortality, endorsed by the likes of Craig Venter and Ray Kurzweil, is based on an interesting premise, one that looks quite reasonable in principle.

Each one of us leaves behind such a large digital trace that it could be used, at least in principle, to teach a machine to behave like the person who generated it. If you put together all the pictures, videos, emails and messages you generate in a lifetime, along with additional information such as GPS coordinates, phone conversations, and social network activity, there should be enough material for the right software to learn to behave just like you.

Rothblatt imagines that all this information will be stored in what she calls a mindfile, and that such a mindfile could be used by software (mindware) to create mindclones, software systems that would think, behave and act like the original human from whom the mindfile was created. Other systems, similar to these but not based on a copy of a human original, are called bemans, and they raise similar questions. Would such systems have rights and responsibilities, just like humans? Rothblatt argues forcefully that society will have to recognize them as persons, sooner or later. Otherwise, we would witness a return to situations that modern societies have already abandoned, like slavery and other practices that disrespect basic human rights (in this case, the rights of mindclones and bemans).

Most of the book is dedicated to the analysis of the social, ethical, and economic consequences of an environment where humans live with mindclones and bemans. This analysis is entertaining and comprehensive, ranging over subjects as diverse as the economy, human relations, families, psychology, and even religion. If one assumes that the technology to create mindclones will eventually exist, thinking through its consequences is an interesting exercise.

However, the book falls short in that it does not provide any convincing evidence that the technology will come to exist in anything like the form the author so readily assumes. We do not know how to create mindware that could interpret a mindfile and use it to create a conscious, sentient, self-aware system that is indistinguishable, in its behavior, from the original. Nor are we likely to find out soon how such mindware could be designed. And yet Rothblatt seems to think that such a technology is just around the corner, maybe just a few decades away. All in all, it sounds more like (poor) science fiction than the shape of things to come.


DeepMind presents Artificial General Intelligence for board games

In a paper recently published in the journal Science, researchers from DeepMind describe AlphaZero, a system that mastered three very complex games, Go, chess, and shogi, using only self-play and reinforcement learning. What is different about this system (a preliminary version was previously covered in this blog), compared with previous ones like AlphaGo Zero, is that the same learning architecture and hyperparameters were used to learn the different games, without any game-specific customization.
Historically, the best programs for each game were heavily customized to exploit specific characteristics of that game. AlphaGo Zero, the most impressive previous result, used the spatial symmetries of Go and a number of other game-specific optimizations. Special-purpose chess programs like Stockfish took years to develop, use enormous amounts of domain-specific knowledge and can, therefore, play only one specific game.
AlphaZero is the closest thing to a general-purpose board game player ever designed. It uses a deep neural network to estimate move probabilities and position values, and it searches with a Monte Carlo tree search algorithm, which is general-purpose and not specifically tuned to any particular game. Overall, AlphaZero gets as close as any system yet to the dream of artificial general intelligence, in this particular domain. As the authors say in the conclusions, “These results bring us a step closer to fulfilling a longstanding ambition of Artificial Intelligence: a general game-playing system that can master any game.”
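
For readers curious about the mechanics, here is a minimal sketch, in Python, of the general idea: a (here fake) policy/value network supplies move priors and a position value, and a Monte Carlo tree search uses them through the PUCT selection rule. The three-move toy game, the fake_network function and the exploration constant are invented for illustration; this is not DeepMind’s code.

```python
# Minimal sketch of the AlphaZero idea: Monte Carlo tree search guided by a
# (fake) policy/value network. Toy game, network and constants are placeholders.
import math, random

C_PUCT = 1.5  # exploration constant (assumed value)

class Node:
    def __init__(self, prior):
        self.prior = prior      # P(s, a) from the policy head
        self.visits = 0         # N(s, a)
        self.value_sum = 0.0    # W(s, a)
        self.children = {}      # action -> Node

    def q(self):
        return self.value_sum / self.visits if self.visits else 0.0

def legal_moves(state):
    return list(range(3))       # toy game: three moves are always legal

def apply_move(state, move):
    return state + (move,)      # toy game: a state is just the move history

def fake_network(state):
    """Stand-in for the deep network: uniform move priors, random value."""
    moves = legal_moves(state)
    return {m: 1.0 / len(moves) for m in moves}, random.uniform(-1, 1)

def select_child(node):
    """PUCT rule: balance exploitation (Q) and prior-weighted exploration."""
    total = math.sqrt(sum(c.visits for c in node.children.values()) + 1)
    return max(node.children.items(),
               key=lambda kv: kv[1].q() + C_PUCT * kv[1].prior * total / (1 + kv[1].visits))

def search(root_state, simulations=100):
    root = Node(prior=1.0)
    for _ in range(simulations):
        node, state, path = root, root_state, []
        while node.children:                      # 1. select down the tree
            move, node = select_child(node)
            state = apply_move(state, move)
            path.append(node)
        priors, value = fake_network(state)       # 2. expand and evaluate the leaf
        for move, p in priors.items():
            node.children[move] = Node(prior=p)
        for n in [root] + path:                   # 3. back the value up the path
            n.visits += 1
            n.value_sum += value
    # pick the most visited move at the root (greedy here for simplicity)
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

print("chosen move:", search(root_state=()))
```

In AlphaZero itself, the network is a deep residual network trained from self-play games, and the visit counts at the root provide the improved move probabilities used as training targets.
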
While mastering these ancient games, AlphaZero also taught us a few things we did not know about them. For instance, that in chess white has a strong upper hand when playing the Ruy Lopez opening, or when playing against the French and Caro-Kann defenses. The Sicilian defense, on the other hand, gives black much better chances. At least, that is what the evaluation function learned by the deep neural network suggests…
Update: The NY Times just published an interesting piece on this topic, with some additional information.

The Beginning of Infinity

David Deutsch’s newest book, The Beginning of Infinity, is a tour de force argument for the power of science to transform the world. Deutsch’s main point is that human intelligence, once it reached the point where it started to be used to construct predictive explanations about the behavior of nature, became universal. Here, “universal” means that it can be used to understand any phenomenon and that this understanding leads to the creation of new technologies, which will be used to spread human intelligence throughout the known universe.

The Beginning of Infinity is not just one more book about science and how science is transforming our world. It is an all-encompassing analysis of the way human intelligence and human societies can develop or stagnate, by adopting or refusing to adopt the stance of looking for understandable explanations. Deutsch calls “static” those societies that refuse to look for new, non-supernatural explanations and “dynamic” those that are constantly looking for new explanations, based on objective and checkable evidence. Dynamic societies, he argues, develop and propagate rational memes, while static societies hold on to non-rational memes.

In the process, Deutsch writes authoritatively about evolution, the universality of computation, quantum mechanics, the multiverse and the paradoxes of infinity. These are not disparate subjects, since they all become part of a single story about how humanity managed to understand and control the physical world.

Deutsch is at his best when arguing that science and technology are not only positive forces but also the only way to ensure the survival of Humanity in the long run. He argues convincingly against the myth of Gaia, the idea that the planet is a living being providing us with a generous and forgiving environment, as well as against the related, almost universal, concern that technological developments are destroying the planet. This is nonsense, he argues. The future survival of Humanity and the hope of spreading human intelligence throughout the Cosmos rest entirely on our ability to control nature and to bend it to our will. Otherwise, we will follow the path of the many species that became extinct because they were unable to control the natural or unnatural phenomena that led to their extinction.

Definitely the book to read if you care about the Future of Humanity.


Crystal Nights

Exactly 80 years ago, Kristallnacht (the night of the crystals) took place in Germany, on the night of the 9th to the 10th of November. Jews were persecuted and killed, and their property was destroyed, in an event that is an important marker in the rise of the antisemitic violence that characterized Nazi Germany. The name comes from the many windows of Jewish-owned stores broken during that night.

Greg Egan, one of my favorite science fiction writers, wrote a short story inspired by that same night, entitled Crystal Nights. This (very) short story is publicly available (you can find it here) and is definitely worth a read. I will not spoil the ending here, but it has to do with computers and singularities. The story was also included in a book that features other short stories by Greg Egan.

If you like this story, you may want to check out other books by Egan, such as Permutation City, Diaspora or Axiomatic (another collection of short stories).

Kill the baby or the grandma?

What used to be an arcane problem in philosophy and ethics, the Trolley Problem, has been taking center stage in discussions about how autonomous vehicles should behave in the case of an accident. As reported previously in this blog, a website created by MIT researchers, the Moral Machine, gave everyone the opportunity to confront the dilemmas that an autonomous car may have to face when deciding what action to take in an unavoidable accident.

The site became so popular that it was possible to gather more than 40 million decisions, from people in 233 countries and territories. The analysis of this massive amount of data was just published in an article in the journal Nature. On the site, you are faced with a simple choice: drive forward, possibly killing some pedestrians or vehicle occupants, or swerve left, killing a different group of people. From the choices made by millions of people, it is possible to derive some general rules about whom people think a car should spare and whom it should sacrifice, when such a difficult choice cannot be avoided.
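
As a crude illustration of how such preferences can be extracted (the Nature paper uses a far more careful conjoint analysis), the toy sketch below simply counts, for each character type, how often it is spared when it appears in a dilemma. The decision records are invented.

```python
# Toy illustration (not the paper's method): estimate how often each
# character type is spared when it appears in a dilemma.
from collections import Counter

# Hypothetical decisions: each record lists who was spared and who was sacrificed.
decisions = [
    {"spared": ["baby", "doctor"],  "sacrificed": ["elderly", "dog"]},
    {"spared": ["pregnant woman"],  "sacrificed": ["criminal"]},
    {"spared": ["baby"],            "sacrificed": ["elderly"]},
    {"spared": ["dog"],             "sacrificed": ["criminal"]},
]

spared, appeared = Counter(), Counter()
for d in decisions:
    for who in d["spared"]:
        spared[who] += 1
        appeared[who] += 1
    for who in d["sacrificed"]:
        appeared[who] += 1

# Rank character types by the fraction of appearances in which they were spared.
for who in sorted(appeared, key=lambda w: spared[w] / appeared[w], reverse=True):
    print(f"{who:15s} spared in {spared[who] / appeared[who]:.0%} of appearances")
```
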

The results show some clear preferences, but also that some decisions vary strongly with the culture of the person making the choice. In general, people decide to protect babies, youngsters and pregnant women, as well as doctors (!). At the bottom of the preference scale are old people, animals and criminals.

Images: from the original article in Nature.

Hello World: how to be human in the age of the machine

Computers, algorithms, and data are controlling our lives, powering our economy and changing our world. Unlike a few decades ago, the largest companies on the planet deal mostly with data, processed by powerful algorithms that help us decide what we buy, which songs we like, where we go and how we get there. More and more, we are becoming unwitting slaves to these algorithms, which are with us all the time, running on cell phones, computers, servers, and smart devices. And yet, few people understand what an algorithm is, what artificial intelligence really means, or what machine learning can do.

Hannah Fry’s new book opens a window onto this world of algorithms and the ways they are changing our lives and societies. Despite its name, this book is not about programming, nor is it about programs. It is about algorithms and the ways they are being used, in the most diverse areas, to process data and obtain results of economic or societal value.

While leading us through the many different areas where algorithms are used these days, Fry passes on her own views about the benefits they bring, but also about the threats they carry with them. The book starts by addressing the issue of whether we, humans, are handing too much power to algorithms and machines. This is not about the fear of intelligent machines taking over the world, the fear that a superintelligence will rule us against our will. On the contrary, the worry is that algorithms that are effective but not particularly intelligent will be trusted to make decisions on our behalf; that our privacy is being endangered by our willingness to provide personal data to companies and agencies; and that sub-optimal algorithms working on insufficient data may bring upon us serious unintended consequences.

As Fry describes, trusting algorithms to run our lives is made all the more dangerous by the fact that each one of us is handing over huge amounts of personal data to big companies and government agencies, which can use them to infer information that many of us would rather keep private. Even the data we deem most innocent, like what we buy at the grocery store, is valuable and can be used to extract useful and, sometimes, surprising information. You will learn, for instance, that pregnant women in their second trimester are more likely to buy moisturizer, effectively signaling to the data analysts at the stores that a baby is due in a few months. The book is filled with interesting, sometimes fascinating, descriptions of cases like these, where specific characteristics of the data can be used, by algorithms, to infer valuable information.
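
A toy sketch of the kind of signal involved: if an item is bought disproportionately often by one customer segment, it becomes a (weak) predictor for that segment. The baskets, the segment labels and the simple lift measure below are illustrative assumptions, not taken from the book.

```python
# Illustrative sketch: an item over-represented in one customer segment
# becomes a predictive feature for that segment. All data is invented.

baskets = [
    {"customer": 1, "segment": "expecting", "items": {"moisturizer", "vitamins"}},
    {"customer": 2, "segment": "expecting", "items": {"moisturizer", "bread"}},
    {"customer": 3, "segment": "other",     "items": {"beer", "chips"}},
    {"customer": 4, "segment": "other",     "items": {"bread", "milk"}},
    {"customer": 5, "segment": "other",     "items": {"moisturizer"}},
]

def purchase_rate(item, group):
    """Fraction of baskets in the group that contain the item."""
    return sum(item in b["items"] for b in group) / len(group)

overall   = purchase_rate("moisturizer", baskets)
expecting = purchase_rate("moisturizer", [b for b in baskets if b["segment"] == "expecting"])

# Lift > 1 means the item is over-represented in the segment and can be
# used (cautiously) to flag customers likely to belong to it.
print(f"lift of moisturizer for the 'expecting' segment: {expecting / overall:.2f}")
```
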

Several chapters are dedicated to different areas where data processing and algorithmic analysis have been extensively applied. Fry describes how algorithms are currently being used in areas as diverse as justice, transportation, medicine, and crime prevention. She explains and analyzes how algorithms can be used to drive cars, influence elections, diagnose cancers, make decisions on parole cases and court rulings, guess where crimes will be committed, recognize criminals in surveillance videos, predict the risk of Alzheimer’s from early-age linguistic ability, and many other important and realistic applications of data analysis. Most of these algorithms use what we now call artificial intelligence and machine learning, but it is clear that, to the author, these techniques are just toolboxes for algorithm designers. The many examples included in these chapters are, in themselves, very interesting and, in some cases, riveting. However, what is most important is the way the author uses these examples to make what I feel is the central point of the book: using an algorithm implies a tradeoff, and every application brings with it benefits and risks, which have to be weighed. If we use face recognition algorithms to spot criminals, we have to accept the risk of an algorithm sending an innocent person to jail. If we police more heavily the locations where crimes are more likely to take place, people in those areas may feel they are treated unfairly. If we use social data to target sales campaigns, that same data can also be used to market political candidates and manipulate elections. The list of tradeoffs goes on and on, and every one of them is complex.

As every engineer knows, there is no such thing as 100% reliability or 100% precision. Every system designed to perform a specific task has some probability of failing at it, however small. All algorithms that aim at identifying specific targets will make mistakes: they will falsely classify some non-target cases as targets (false positives) and will miss some real targets (false negatives). An autonomous car may be safer than a normal car with a human driver but will, in some rare cases, cause accidents that would not have happened otherwise. How many such accidents are we willing to tolerate in order to make roads safer for everyone? These are difficult questions, and this book does a good job of reminding us that technology will not make those choices for us. It is our responsibility to make sure that we, as a society, clearly assess the benefits and risks of each and every application of algorithms, so that the overall result is positive for the world.
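
A back-of-the-envelope example makes the tradeoff concrete: even a very accurate face recognition system, applied to a population in which genuine targets are rare, produces far more false alarms than real hits. All the numbers below are hypothetical, chosen only to make the arithmetic clear.

```python
# Base-rate effect: a rare target plus an imperfect classifier yields
# mostly false positives. Hypothetical numbers for illustration.

population  = 1_000_000   # faces scanned
prevalence  = 1 / 10_000  # fraction who really are wanted criminals
sensitivity = 0.99        # P(flagged | criminal)  -> true positive rate
specificity = 0.99        # P(not flagged | innocent)

criminals = population * prevalence
innocents = population - criminals

true_positives  = criminals * sensitivity
false_positives = innocents * (1 - specificity)

precision = true_positives / (true_positives + false_positives)
print(f"true positives : {true_positives:8.0f}")
print(f"false positives: {false_positives:8.0f}")
print(f"chance a flagged person is actually a criminal: {precision:.1%}")
```

With these assumed numbers, roughly 99 real targets are flagged alongside about 10,000 innocent people, so only about one flagged person in a hundred is actually a target.
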

The final chapter addresses a different and subtler point, which can be framed in the same terms Ada Lovelace used more than 150 years ago: can computers originate new things, can they be truly creative? Fry does not try to find a final answer to this conundrum, but she provides interesting material on the subject, for readers to decide for themselves. By analyzing the patterns in the music written by a composer, algorithms can create new pieces that, in many cases, will fool most people and even many experts. Does this mean that computers can produce novel art? And, if so, is it good art? The answer is made more difficult by the fact that there are no objective measures of the quality of a work of art. Many experiments, some of them described in this chapter, show clearly that beauty is, in many cases, in the eye of the beholder. Computer-produced art is good enough to be treated like the real thing, at least when the origin of the work is not known. But many people will argue that copying someone else’s style is not really creating art; others will disagree. Nonetheless, this final chapter provides an interesting introduction to the problem of computer creativity, and the interested reader can pick up some of the leads provided by the book to investigate the issue further.
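
A minimal sketch of the pattern-imitation idea, assuming a simple first-order Markov model over notes: learn which notes tend to follow which in an existing melody, then sample a new sequence from those statistics. The toy melody is invented, and real music-generation systems are far more sophisticated than this.

```python
# Tiny sketch of style imitation: learn note-to-note transition counts from
# an "existing piece" and sample a new sequence. Input melody is invented.
import random
from collections import defaultdict

corpus = ["C", "E", "G", "E", "C", "E", "G", "A", "G", "E", "C"]  # toy melody

# Build a first-order Markov model: which notes tend to follow which.
transitions = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    transitions[a].append(b)

def generate(start="C", length=12):
    melody, note = [start], start
    for _ in range(length - 1):
        # Sample the next note from the observed followers (fallback: any note).
        note = random.choice(transitions[note]) if transitions[note] else random.choice(corpus)
        melody.append(note)
    return melody

print(" ".join(generate()))
```
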

Overall, Hello World is definitely worth reading, for those interested in the ways computers and algorithms are changing our lives.

Note: this is an edited version of the full review that appeared in Nature Electronics.


The Ancient Origins of Consciousness

The Ancient Origins of Consciousness, by Todd Feinberg and Jon Mallatt, published by MIT Press, addresses the question of the rise of consciousness in living organisms from three different viewpoints: the philosophical, the neurobiological, and the neuroevolutionary.

From a philosophical standpoint, the question is whether consciousness, i.e., subjective experience, can even be explained by an objective scientific theory. The so-called “hard problem” of consciousness, in the words of David Chalmers, may forever remain outside the realm of science, since we may never know how physical mechanisms in the brain create the subjective experience that gives rise to consciousness. The authors disagree with this pessimistic assessment by Chalmers, and argue that there is biological and evolutionary evidence that consciousness can be studied objectively. This is the evidence they propose to present in the book.

Despite the promise of a three-pronged approach, the book is most interesting when describing and analyzing the evolutionary history of the neurological mechanisms that ended up creating consciousness in humans and, presumably, in other mammals. The authors argue that, starting with the Cambrian explosion, 540 million years ago, animals may already have exhibited some kind of conscious experience. The first vertebrates, which appeared during this period, already showed some distinctive anatomical telltales of conscious experience.

Outside the vertebrates, the question is even more complex, but the authors point to evidence that some arthropods and cephalopods may also exhibit behaviors that signal consciousness (a point poignantly made in another recent book, Other Minds and Alien Intelligences).

Overall, one is left convinced that consciousness can be studied scientifically and that there is significant evidence that graded versions of it have been present for hundreds of millions of years in our distant ancestors and long-removed cousins.