Hello World: How to Be Human in the Age of the Machine

Computers, algorithms, and data control our lives, power our economy, and are changing our world. Unlike a few decades ago, the largest companies on the planet now deal mostly in data, processed by powerful algorithms that help decide what we buy, which songs we like, where we go, and how we get there. More and more, we are becoming unwitting slaves to these algorithms, which are with us all the time, running on cell phones, computers, servers, and smart devices. And yet few people understand what an algorithm is, what artificial intelligence really means, or what machine learning can do.

Hannah Fry’s new book opens a window on this world of algorithms and on the ways they are changing our lives and societies. Despite its title, this book is not about programming, nor is it about programs. It is about algorithms, and the ways they are being used, in the most diverse areas, to process data and obtain results of economic or societal value.

While leading us through the many different areas where algorithms are used these days, Fry passes on her own views about the benefits they bring, but also about the threats they carry with them. The book starts by addressing the issue of whether we humans are handing too much power to algorithms and machines. This has nothing to do with the fear of intelligent machines taking over the world, the fear that a superintelligence will rule us against our will. On the contrary, the worry is that algorithms that are effective, but not that intelligent, will be trusted to make decisions on our behalf; that our privacy is being endangered by our willingness to provide personal data to companies and agencies; and that sub-optimal algorithms working on insufficient data may bring upon us serious unintended consequences.

As Fry describes, trusting algorithms to run our lives is made all the more dangerous by the fact that each one of us is handing over huge amounts of personal data to big companies and government agencies, which can use them to infer information that many of us would rather keep private. Even the data we deem most innocent, like what we buy at the grocery store, is valuable and can be used to extract useful and, sometimes, surprising information. You will learn, for instance, that pregnant women in their second trimester are more likely to buy moisturizer, effectively signaling to the stores’ data analysts that a baby is due in a few months. The book is filled with interesting, sometimes fascinating, descriptions of cases like these, where specific characteristics of the data can be used, by algorithms, to infer valuable information.

Several chapters are dedicated to different areas where data processing and algorithmic analysis have been extensively applied. Fry describes how algorithms are currently being used in areas as diverse as justice, transportation, medicine, and crime prevention. She explains and analyses how algorithms can be used to drive cars, influence elections, diagnose cancers, make decisions on parole cases and rulings in courts, guess where crimes will be committed, recognize criminals in surveillance videos, predict the risk of Alzheimer’s disease from early-age linguistic ability, and many other important and realistic applications of data analysis. Most of these algorithms use what we now call artificial intelligence and machine learning, but it is clear that, to the author, these techniques are just toolboxes for algorithm designers. The many examples included in these chapters are, in themselves, very interesting and, in some cases, riveting. However, what is most important is the way the author uses these examples to make what I feel is the central point of the book: using an algorithm implies a tradeoff, and every application brings with it benefits and risks, which have to be weighed. If we use face recognition algorithms to spot criminals, we have to accept the risk that an algorithm will send an innocent person to jail. If we increase policing in the locations where crimes are more likely to take place, people in those areas may feel they are being treated unfairly. If we use social data to target sales campaigns, the same data can also be used to market political candidates and manipulate elections. The list of tradeoffs goes on and on, and every one of them is complex.

As every engineer knows, there is no such thing as 100% reliability or 100% precision. Every system designed to perform a specific task will have some probability, however small, of failing at it. Any algorithm that aims at identifying specific targets will make mistakes: it will falsely classify some non-target cases as targets (false positives) and will miss some real targets (false negatives). An autonomous car may be safer than a normal car with a human driver but will, in some rare cases, cause accidents that would not have happened otherwise. How many such spurious accidents are we willing to tolerate in order to make roads safer for everyone? These are difficult questions, and this book does a good job of reminding us that technology will not make those choices for us. It is our responsibility to make sure that we, as a society, clearly assess and evaluate the benefits and risks of each and every application of algorithms, so that the overall result is positive for the world.
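The point can be made concrete with a minimal sketch (my illustration, not an example from the book): a hypothetical detector scores each case and flags everything above a threshold, and moving that threshold only trades one kind of error for the other; it never eliminates both.

```python
import random

random.seed(0)

# Toy scores: real targets tend to score higher than non-targets,
# but the two distributions overlap, so no threshold is error-free.
targets     = [random.gauss(0.7, 0.15) for _ in range(1000)]
non_targets = [random.gauss(0.4, 0.15) for _ in range(1000)]

# Raising the threshold trades false positives for false negatives.
for threshold in (0.3, 0.5, 0.7):
    false_negatives = sum(s < threshold for s in targets)       # missed targets
    false_positives = sum(s >= threshold for s in non_targets)  # wrongly flagged
    print(f"threshold={threshold:.1f}  "
          f"missed targets={false_negatives:4d}/1000  "
          f"wrongly flagged={false_positives:4d}/1000")
```

Where to put the threshold is exactly the kind of societal choice, not technical choice, that the book insists we must make for ourselves.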

The final chapter addresses a different and subtler point, which can be framed in the same terms Ada Lovelace used more than 150 years ago: can computers originate new things; can they be truly creative? Fry does not try to find a final answer to this conundrum, but she provides interesting data on the subject, for readers to decide for themselves. By analyzing the patterns in the music written by a composer, algorithms can create new pieces that, in many cases, will fool the majority of people and even many experts. Does this mean that computers can produce novel art? And, if so, is it good art? The answer is made all the more difficult by the fact that there are no objective measures of the quality of works of art. Many experiments, some of them described in this chapter, show clearly that beauty is, in many cases, in the eye of the beholder. Computer-produced art is good enough to be treated like the real thing, at least when the origin of the work is not known. But many people will argue that copying someone else’s style is not really creating art. Others will disagree. Nonetheless, this final chapter provides an interesting introduction to the problem of computer creativity, and the interested reader can pick up some of the leads provided by the book to investigate the issue further.

Overall, Hello World is definitely worth reading for those interested in the ways computers and algorithms are changing our lives.

Note: this is an edited version of the full review that appeared in Nature Electronics.


The Ancient Origins of Consciousness

The Ancient Origins of Consciousness, by Todd Feinberg and Jon Mallatt, published by MIT Press, addresses the question of the rise of consciousness in living organisms from three different viewpoints: the philosophical, the neurobiological, and the neuroevolutionary.

From a philosophical standpoint, the question is whether consciousness, i.e., subjective experience, can even be explained by an objective scientific theory. The so-called “hard problem” of consciousness, in the words of David Chalmers, may forever remain outside the realm of science, since we may never know how physical mechanisms in the brain create the subjective experience that gives rise to consciousness. The authors disagree with Chalmers’ pessimistic assessment, and argue that there is biological and evolutionary evidence that consciousness can be studied objectively. This is the evidence they propose to present in the book.

Although the book follows this three-pronged approach, it is most interesting when describing and analyzing the evolutionary history of the neurological mechanisms that ended up creating consciousness in humans and, presumably, in other mammals. The authors argue that some kind of conscious experience may go back to the very beginning, to the Cambrian explosion, 540 million years ago: the first vertebrates, which appeared during this period, already exhibited some distinctive anatomical telltales of conscious experience.

Outside the vertebrates, the question is even more complex, but the authors point to evidence that some arthropods and cephalopods may also exhibit behaviors that signal consciousness (a point poignantly made in another recent book, Other Minds).

Overall, one is left convinced that consciousness can be studied scientifically and that there is significant evidence that graded versions of it have been present for hundreds of millions of years in our distant ancestors and long-removed cousins.

The Evolution of Everything, or the Use of Universal Acid, by Matt Ridley

Matt Ridley never disappoints, but his latest book, The Evolution of Everything, is probably his most impressive one. Daniel Dennett called evolution the universal acid, an idea that dissolves every preconception we may have about the world. Ridley uses this universal acid to show that the ideas behind evolution apply not only to living beings but to all sorts of things in the world and, particularly, to society. He applies it to deconstruct our preconceptions about history and to present his own view that centralized control does not work, and that bottom-up evolution is the engine behind progress.

When Ridley says everything, he is not exaggerating. The chapters in this book cover, among many others, topics as different as the universe, life, morality, culture, technology, leadership, education, religion, and money. To all these topics Ridley applies the universal acid, arriving at the conclusion that (almost) everything that is planned and directed leads to bad results, and that everything that evolves under the pressures of competition and natural selection brings advances and improvements to society. Bottom-up mechanisms, he argues, are what creates innovation in the world, be it in the natural world, in culture, in technology, or in any other area of society. For this view, he gives explicit credit to Lucretius, who, in his magnum opus De Rerum Natura, from the first century BC, proposed essentially the same idea, and to Adam Smith, who, in The Wealth of Nations, proposed the central role of commerce in the development of society.

Sometimes his arguments look too far-fetched, as, for instance, when he argues that the state should stay out of the education business, or that the 2008 crisis was caused not by runaway private initiative but by wrong governmental policies. Nonetheless, even in these cases, the arguments are very persuasive and always entertaining. Even someone like me, who believes that there are some roles to be played by the state, ends up doubting his own convictions.

All in all, a must-read.

 

Other Minds and Alien Intelligences

Peter Godfrey-Smith’s Other Minds makes for an interesting read on the evolution of intelligence. The book focuses on the octopus and the evolution of intelligent life. Octopuses belong to the same class of animals as squid and cuttlefish (the cephalopods), a class that separated from the evolutionary line that led to humans more than 600 million years ago. As Godfrey-Smith describes, many experiments have shown that octopuses are highly intelligent and capable of complex behaviours that are deemed to require sophisticated forms of intelligence. They are, therefore, the closest thing to an alien intelligence that we can get our hands on, since the evolution of their bodies and brains has been, for the last 600 million years, independent from our own.

The book explores this issue very well and dives deep into the matter of cephalopod intelligence. The nervous systems of octopuses are very different from ours and, in fact, are not even organised in the same way. Each of the eight arms of an octopus is controlled by a separate “small brain”. These small brains report to, and are coordinated by, the central brain, but retain some ability to act independently, an arrangement that is, to say the least, foreign to us.

Godfrey-Smith leads us through the branches of the evolutionary tree and argues that advanced intelligence has evolved not once but a number of times, perhaps four, as shown in the picture: in mammals, in birds, and in two branches of cephalopods.

If his arguments are right, this work and this book provide an important insight into the nature of the bottlenecks that may block the evolution of higher intelligence, on Earth and on other planets. If, indeed, life on Earth has evolved higher intelligence multiple times, independently, this fact provides strong evidence that the evolution of brains, from simple nervous systems to complex ones able to support higher intelligence, is not a significant bottleneck. That narrows down the candidates for the Great Filter, the hypothetical barrier that would explain why we have never observed technological civilisations in the Galaxy. Whatever the reason, it is probably not that intelligence evolves only rarely in living organisms.

The scientific components of the book are admirably intertwined with the descriptions of the author’s appreciation of cephalopods, in particular, and marine life, in general. All in all, a very interesting read for those interested in the evolution of intelligence.

Picture (not to scale) from the book, adapted to show the possible places where higher intelligence evolved.

The Second Machine Age

The Second Machine Age, by Erik Brynjolfsson and Andrew McAfee, two MIT professors and researchers, offers mostly an economist’s point of view on the consequences of the technological changes that are remaking civilisation.

Although a fair number of chapters are dedicated to the technological innovations that are shaping the first decades of the 21st century, the book is at its best when the economic issues are presented and discussed.

The book is particularly interesting in its treatment of the bounty vs. spread dilemma: will economic growth be fast enough to lift everyone’s standard of living, or will increased concentration of wealth lead to such an increase in inequality that many will be left behind?

The chapter that presents the evidence for the steady increase in inequality is especially appealing and convincing. While average income in the US has been increasing steadily over the last decades, median income (the income of those exactly in the middle of the pay scale) has stagnated for several decades, and may even have decreased in the last few years. For those at the bottom of the scale, the situation is much worse now than it was decades ago.

Abundant evidence of this trend also comes from the analysis of the shares of GDP that are due to wages and to corporate profits. Although these two fractions of GDP have fluctuated somewhat in the last century, there is mounting evidence that the fraction due to corporate profits is now increasing, while the fraction due to wages is decreasing.

All this evidence, put together, leads to the inevitable conclusion that society has to explicitly address the challenges posed by the fourth industrial revolution.

The last chapters are, indeed, dedicated to this issue. The authors do not advocate a universal basic income, but come out in defence of a negative income tax for those whose earnings fall below a given level. The mathematics of the proposal are somewhat unclear but, in the end, one thing remains certain: society will have to address the problem of mounting inequality brought about by technology and globalisation.
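For readers curious about the arithmetic, here is a minimal sketch of a Friedman-style negative income tax, the classic form of the idea; the threshold and rates below are hypothetical numbers of mine, not figures from the book:

```python
# A minimal sketch of a Friedman-style negative income tax.
# All parameters are hypothetical, chosen only for illustration.
def net_income(earnings: float,
               threshold: float = 30_000,  # assumed break-even income level
               nit_rate: float = 0.5,      # subsidy rate on the shortfall
               tax_rate: float = 0.25) -> float:
    """Below the threshold, the state pays a fraction of the shortfall;
    above it, earnings are taxed normally."""
    if earnings < threshold:
        return earnings + nit_rate * (threshold - earnings)
    return earnings - tax_rate * (earnings - threshold)

for e in (0, 10_000, 30_000, 60_000):
    print(f"earnings={e:>6,}  net income={net_income(e):>9,.0f}")
```

Under these assumed parameters, someone with no earnings still receives half the threshold, and net income always grows with earnings, which is the feature that distinguishes the proposal from a simple welfare cut-off.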

The Computer and the Brain

The Computer and the Brain, first published in 1958, is a delightful little book by John von Neumann, his attempt to compare two very different information processing devices: computers and brains. Although written more than sixty years ago, it retains more than historical interest, even though it addresses two topics that have developed enormously in the decades since von Neumann’s death.

John von Neumann’s genius comes through very clearly in this essay. Sixty years ago, very few people knew what a computer was, and probably even fewer had any idea of how the brain performs its magic. This book, written just a few years after the invention of the transistor (by Bardeen and Brattain) and the discovery of the membrane mechanisms that explain the electrical behaviour of neurons (by Hodgkin and Huxley), nonetheless compares, in very clear terms, the relative computational power of computers and brains.

Von Neumann’s aim is to compare the characteristics of the processing devices used by computers (vacuum tubes and transistors) with those used by the brain (neurons): an objective comparison of the two technologies in terms of their ability to process information. He addresses the speed, size, memory, and other characteristics of the two types of information processing devices.

One of the central and (to me) most interesting parts of the book is the comparison of artificial information processing devices (vacuum tubes and transistors) with natural information processing devices (neurons), in terms of speed and size.

Von Neumann concludes that vacuum tubes and transistors are faster than neurons by a factor of 10,000 to 100,000, but occupy about 1000 times more space (with the technologies of the day). Altogether, if one assumes that speed can be traded for number of devices (for instance, by reusing electronic devices to perform computations that, in the brain, are performed by slower but independent neurons), his comparisons lead to the conclusion (not explicit in the book, I must add) that an electronic computer the size of a human brain would be one to two orders of magnitude less powerful than the human brain itself.

John von Neumann could not have predicted, in 1957, that transistors would be packed, by the billions, on integrated circuits no larger than a postage stamp. If one plugs in the numbers for today’s technologies, one is led to conclude that a modern CPU (such as the Intel Core i7), with a billion transistors operating in the nanosecond range, is a few orders of magnitude (10,000 times) more powerful than the human brain, with its hundred billion neurons operating in the millisecond range.

Of course, one has to consider, as John von Neumann also wrote, that a neuron is considerably more complex than a transistor and can perform more complex computations. But even if one takes that into account and assumes that a transistor is roughly equivalent to a synapse in raw computing power, one gets the final result that the human brain and an Intel Core i7 have about the same raw processing power.
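The back-of-the-envelope arithmetic behind these comparisons can be made explicit. In the sketch below, all figures are round order-of-magnitude numbers, and the synapses-per-neuron count is my assumption rather than a number from the book:

```python
# Order-of-magnitude comparison of a modern CPU and the human brain.
cpu_transistors    = 1e9   # ~a billion transistors (e.g., a Core i7)
cpu_ops_per_sec    = 1e9   # nanosecond-range switching
brain_neurons      = 1e11  # ~a hundred billion neurons
neuron_ops_per_sec = 1e3   # millisecond-range operation

cpu_power   = cpu_transistors * cpu_ops_per_sec   # ~1e18 device-ops/s
brain_power = brain_neurons * neuron_ops_per_sec  # ~1e14 device-ops/s
print(f"transistor vs neuron:  CPU is {cpu_power / brain_power:.0e}x")  # ~1e4

# If a transistor is instead matched to a synapse (assuming ~1e4
# synapses per neuron), the two come out roughly even.
brain_synapses  = brain_neurons * 1e4             # ~1e15 synapses
brain_power_adj = brain_synapses * neuron_ops_per_sec
print(f"transistor vs synapse: CPU is {cpu_power / brain_power_adj:.0e}x")  # ~1
```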

It is a sobering thought, one which von Neumann would certainly have liked to share.

LIFE 3.0: Being Human in the Age of Artificial Intelligence

Max Tegmark’s latest book, LIFE 3.0: Being Human in the Age of Artificial Intelligence, is an enthralling journey into a future in which developments in artificial intelligence create a new type of lifeform on Earth.

Tegmark proposes to classify life in three stages. Life 1.0, unintelligent life, can change its hardware and improve itself only through the very slow and blind process of natural evolution. Single-cell organisms, plants, and simple animals are in this category. Life 2.0 is also unable to change its hardware (except through evolution, as for Life 1.0) but can change its software, stored in the brain, by using previous experience to learn new behaviors. Higher animals, and humans in particular, belong here. Humans can now, to a limited extent, change their hardware (through prosthetics, cellphones, computers, and other devices), so they could also be considered Life 2.1.

Life 3.0 is the new generation of life, which can change both its software and its hardware. The ability to change its computational support (i.e., the physical basis of computation) results from technological advances, which will only accelerate with the advent of Artificial General Intelligence (AGI). The book is really about the future of a world where AGI enables humanity to create a whole range of new technologies and to spread new forms of life through the cosmos.

The riveting prelude, The Tale of the Omega Team, the story of the group of people who “created” the first intelligence explosion on planet Earth, makes this a hard-to-put-down book. The rest of the book goes through the consequences of this intelligence explosion, a phenomenon the author believes will undoubtedly take place, sooner or later. Chapter 4 focuses on the explosion proper, and on how it could happen. Chapter 5, appropriately titled “Aftermath: The Next 10,000 Years”, is one of the most interesting ones, describing a number of long-term scenarios that could result from such an event. These scenarios range from a benevolent and enlightened dictatorship (by the AI) to the enslaved-God situation, where humanity keeps the AI in chains and uses it as a slave to develop new technologies, inaccessible to unaided humanity’s simpler minds. Always present in these scenarios are the risks of a hostile takeover by a human-created AGI, a theme the book also addresses in depth, following the ideas proposed by Nick Bostrom in his book Superintelligence.

Being a cosmologist, Tegmark could not leave out the question of how life can spread through the Cosmos, a topic covered in depth, in a highly speculative fashion, in chapter 6. Tegmark’s view is, to say the least, grandiose, envisaging a future where AGI makes it possible to spread life through the reachable universe, climbing the three levels of the Kardashev scale. The final chapters address (in a necessarily more superficial manner) the complex topics of goal setting for AI systems and of artificial (or natural) consciousness. These topics felt somewhat less well developed, and more complete and convincing treatments can be found elsewhere. The book ends with a description of the mission of the Future of Life Institute and of the Asilomar AI Principles.

A book like this cannot leave anyone indifferent, and you are likely to take one of two opposite sides: that of the optimists, with many famous representatives, including Elon Musk, Stuart Russell, and Nick Bostrom, who believe AGI can be developed and used to make humanity prosper; or that of the pessimists, whose most visible member is probably Yuval Noah Harari, who has voiced very serious concerns about these technological developments in his book Homo Deus and in his review of Life 3.0.