The Second Machine Age

The Second Machine Age, by Erik Brynjolfsson and Andrew McAfee, two MIT professors and researchers, offers mostly an economist’s point of view on the consequences of the technological changes that are remaking civilisation.

Although a fair number of chapters are dedicated to the technological innovations shaping the first decades of the 21st century, the book is at its best when it presents and discusses the economic issues.

The book is particularly interesting in its treatment of the bounty vs. spread dilemma: will economic growth be fast enough to lift everyone’s standard of living, or will increased concentration of wealth lead to such a rise in inequality that many will be left behind?

The chapter that provides evidence on the steady increase in inequality is especially appealing and convincing. While average income in the US has increased steadily over the last decades, median income (the income of those exactly in the middle of the pay scale) has stagnated for decades, and may even have declined in recent years. For those at the bottom of the scale, the situation is much worse now than it was decades ago.
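The average/median gap is easy to see with a toy example (the numbers below are mine, purely illustrative, not the book’s data): if a decade of gains accrues only at the top, the average rises while the median stays put.

```python
from statistics import mean, median

# Illustrative incomes for five earners, in thousands of dollars.
before = [20, 35, 50, 70, 125]
after  = [20, 35, 50, 70, 225]   # only the top income grew

print(mean(before), median(before))  # 60.0 50 -> average 60k, median 50k
print(mean(after), median(after))    # 80.0 50 -> average up a third, median flat
```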

Abundant evidence of this trend also comes from the analysis of the shares of GDP that are due to wages and to corporate profits. Although these two fractions of GDP have fluctuated somewhat in the last century, there is mounting evidence that the fraction due to corporate profits is now increasing, while the fraction due to wages is decreasing.

All this evidence, put together, leads to the inevitable conclusion that society has to explicitly address the challenges posed by the fourth industrial revolution.

The last chapters are, indeed, dedicated to this issue. The authors do not advocate a universal basic income, but come out in defence of a negative income tax for those whose earnings fall below a given level. The mathematics of the proposal are somewhat unclear but, in the end, one thing remains certain: society will have to address the problem of mounting inequality brought about by technology and globalisation.
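The book does not spell the formula out, but a negative income tax is usually formulated along Milton Friedman’s lines: below a chosen income threshold, the filer receives a fraction of the shortfall instead of paying tax. A minimal sketch, with made-up parameters (the threshold and rate are mine, not the authors’):

```python
def negative_income_tax(income, threshold=30_000, rate=0.5):
    """Friedman-style negative income tax (illustrative parameters only).
    Below the threshold, the filer receives `rate` times the shortfall;
    above it, a flat `rate` tax applies to the excess."""
    if income < threshold:
        return -rate * (threshold - income)  # negative tax = payment to the filer
    return rate * (income - threshold)

print(negative_income_tax(0))       # -15000.0: no earnings -> a $15,000 payment
print(negative_income_tax(20_000))  # -5000.0: subsidy shrinks as earnings rise
print(negative_income_tax(40_000))  # 5000.0: above the threshold, tax is owed
```

The phase-out rate matters: because the subsidy shrinks only gradually as earnings rise, working always leaves the filer better off, which is the usual argument for this design.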


LIFE 3.0: Being Human in the Age of Artificial Intelligence

Max Tegmark’s latest book, LIFE 3.0: Being Human in the Age of Artificial Intelligence, is an enthralling journey into a future in which developments in artificial intelligence create a new type of life form on Earth.

Tegmark proposes to classify life in three stages. Life 1.0, unintelligent life, can change its hardware and improve itself only through the very slow and blind process of natural evolution. Single-cell organisms, plants and simple animals are in this category. Life 2.0 is also unable to change its hardware (except through evolution, as for Life 1.0) but can change its software, stored in its brain, by using previous experience to learn new behaviors. Higher animals and humans, in particular, belong here. Humans can now, up to a point, change their hardware (through prosthetics, cellphones, computers and other devices), so they might also be considered Life 2.1.

Life 3.0 is the new generation of life, which can change both its software and its hardware. The ability to change the computational support (i.e., the physical basis of computation) results from technological advances, which will only accelerate with the advent of Artificial General Intelligence (AGI). The book is really about the future of a world where AGI enables humanity to create a whole range of new technologies, and to spread new forms of life through the cosmos.

The riveting prelude, The Tale of the Omega Team, the story of the group of people who “created” the first intelligence explosion on planet Earth, makes this a hard-to-put-down book. The rest of the book goes through the consequences of this intelligence explosion, a phenomenon the author believes will undoubtedly take place, sooner or later. Chapter 4 focuses on the explosion proper, and on how it could happen. Chapter 5, appropriately titled “Aftermath: The Next 10,000 Years”, is one of the most interesting, describing a number of long-term scenarios that could result from such an event. These scenarios range from a benevolent and enlightened dictatorship (by the AI) to the enslaved-god situation, where humanity keeps the AI in chains and uses it as a slave to develop new technologies, inaccessible to unaided humanity’s simpler minds. Always present in these scenarios are the risks of a hostile takeover by a human-created AGI, a theme the book also addresses in depth, following the ideas proposed by Nick Bostrom in his book Superintelligence.

Being a cosmologist, Tegmark could not leave out the question of how life can spread through the Cosmos, a topic covered in depth in chapter 6, in a highly speculative fashion. Tegmark’s view is, to say the least, grandiose, envisaging a future where AGI makes it possible to spread life through the reachable universe, climbing the three levels of the Kardashev scale. The final chapters address (in a necessarily more superficial manner) the complex topics of goal setting for AI systems and artificial (or natural) consciousness. These topics felt less well developed; more complete and convincing treatments can be found elsewhere. The book ends with a description of the mission of the Future of Life Institute, and the Asilomar AI Principles.

A book like this cannot leave anyone indifferent, and you will likely take one of two opposite sides: the optimists, with many famous representatives, including Elon Musk, Stuart Russell and Nick Bostrom, who believe AGI can be developed and used to make humanity prosper; or the pessimists, whose most visible member is probably Yuval Noah Harari, who has voiced very serious concerns about technological developments in his book Homo Deus and in his review of Life 3.0.

The Digital Mind: How Science is Redefining Humanity

Following its release in the US, The Digital Mind, published by MIT Press, is now available in Europe, at an Amazon store near you (and possibly in other bookstores). The book covers the evolution of technology, leading towards the expected emergence of digital minds.

Here is a short rundown of the book, kindly provided by yours truly, the author.

New technologies have been introduced into human lives at an ever-increasing rate, since the first significant advances took place with the cognitive revolution, some 70,000 years ago. Although electronic computers have been around for only a few decades, they represent just the latest way to process information and create order out of chaos. Before computers, the job of processing information was done by living organisms, which are nothing more than complex information processing devices, created by billions of years of evolution.

Computers execute algorithms, sequences of small steps that, in the end, perform some desired computation, be it simple or complex. Algorithms are everywhere, and they have become an integral part of our lives. Evolution is, in itself, a complex and long-running algorithm that created all species on Earth. The most advanced of these species, Homo sapiens, was endowed with a brain that is the most complex information processing device ever devised. Brains enable humans to process information in a way unparalleled by any other species, living or extinct, or by any machine. They provide humans with intelligence, consciousness and, some believe, even with a soul, a characteristic that makes humans different from all other animals and from any machine in existence.

But brains have also enabled humans to develop science and technology to the point where it is possible to design computers with power comparable to that of the human brain. Artificial intelligence will one day make it possible to create intelligent machines, and computational biology will one day enable us to model, simulate and understand biological systems, and even complete brains, with unprecedented levels of detail. From these efforts, new minds will eventually emerge: minds that emanate from the execution of programs running on powerful computers. These digital minds may one day rival our own, become our partners, and replace humans in many tasks. They may usher in a technological singularity, a revolution in human society unlike any that came before. They may make humans obsolete, even a threatened species, or they may make us super-humans or demi-gods.

How will we create these digital minds? How will they change our daily lives? Will we recognize them as equals or will they forever be our slaves? Will we ever be able to simulate truly human-like minds in computers? Will humans transcend the frontiers of biology and become immortal? Will humans remain, forever, the only known intelligence in the universe?


How to Create a Mind

Ray Kurzweil’s latest book, How to Create a Mind, published in 2012, is an interesting read and shows some welcome change in his views of science and technology. Unlike some of his previous (and influential) books, including The Singularity is Near, The Age of Spiritual Machines and The Age of Intelligent Machines, the main point of this book is not that exponential technological development will bring about a technological singularity in a few decades.


True, that theme is still present, but it takes second place to the main theme of the book: a concrete (although incomplete) proposal to build intelligent systems inspired by the architecture of the human neocortex.

Kurzweil’s main point in this book is to present a model of the human neocortex, which he calls the Pattern Recognition Theory of Mind (PRTM). In this theory, the neocortex is simply a very powerful pattern recognition system, built out of about 300 million (his number, not mine) similar pattern recognizers. The input to each of these recognizers can come from external inputs through the senses, from the (evolutionarily speaking) older parts of the brain, or from the output of other pattern recognizers in the neocortex. Each recognizer is relatively simple, and can only recognize a simple pattern (say, the word APPLE) but, through complex interconnections with other recognizers above and below, it makes possible all sorts of thinking and abstract reasoning.

Each pattern consists, in essence, of a short sequence of symbols, and is connected, through bundles of axons, to the actual places in the cortex where those symbols are activated by other pattern recognizers. In most cases, the memories these recognizers represent must be accessed in a specific order. He gives the example that very few people can recite the alphabet backwards, or even their social security number backwards, which he takes as evidence of the sequential nature of these pattern recognizers.
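Kurzweil gives no code, but the gist of the model can be caricatured in a few lines. The sketch below is my own illustration, not the book’s algorithm: each recognizer matches a short ordered sequence of symbols and, on success, emits its own name as a symbol that recognizers higher in the hierarchy consume.

```python
# Toy PRTM-style recognizer: matches an ordered sequence of symbols,
# one symbol at a time, and fires when the whole sequence is seen.
class PatternRecognizer:
    def __init__(self, name, sequence):
        self.name = name          # symbol emitted upward when this fires
        self.sequence = sequence  # expected symbols, order matters
        self.position = 0         # progress through the sequence

    def feed(self, symbol):
        """Consume one symbol; return self.name if the sequence completes."""
        if symbol == self.sequence[self.position]:
            self.position += 1
            if self.position == len(self.sequence):
                self.position = 0
                return self.name
        else:
            # A mismatch forces a restart: like reciting the alphabet,
            # the stored sequence can only be traversed from the start.
            self.position = 1 if symbol == self.sequence[0] else 0
        return None

# Low-level recognizers detect letter sequences; a higher-level one
# consumes the words they emit, forming a two-level hierarchy.
low = [PatternRecognizer("APPLE", list("APPLE")),
       PatternRecognizer("PIE", list("PIE"))]
high = PatternRecognizer("APPLE PIE", ["APPLE", "PIE"])

for letter in "APPLEPIE":
    for recognizer in low:
        word = recognizer.feed(letter)
        if word and high.feed(word):
            print("recognized:", high.name)  # -> recognized: APPLE PIE
```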

The key point of the book is that the actual algorithms used to build and structure a neocortex may soon become well understood, and could be used to build intelligent machines endowed with true, strong Artificial Intelligence. How to Create a Mind falls somewhat short of the promise in its subtitle, The Secret of Human Thought Revealed, but still makes for interesting reading.

Artificial Intelligence developments: the year in review

TechCrunch, a popular site dedicated to technology news, has published a list of the top Artificial Intelligence news of 2016.

2016 seems indeed to have been the year Artificial Intelligence (AI) left the confinement of university labs to come into public view.


Several of the news items selected by TechCrunch were also covered in this blog.

In March, AlphaGo, a Go-playing program developed by Google’s DeepMind, defeated 18-time world champion Lee Sedol (reference in the TechCrunch review).

Digital Art, where deep learning algorithms learn to paint in the style of a particular artist, was also the topic of one post (reference in the TechCrunch review).

In May, Digital Minds posted Moore’s law is dead, long live Moore’s law, describing how Google’s new chip can be used to run deep learning algorithms using Google’s TensorFlow (related article in the TechCrunch review).

TechCrunch has identified a number of other relevant developments that make for interesting reading, including the Facebook-Amazon-Google-IBM-Microsoft mega partnership on AI, Facebook’s strategy on AI and the news about the language invented by Google’s translation tool.

Will the AI wave gain momentum in 2017, as predicted by this article? I think the chances are good, but only the future will tell.

The end of Moore’s law?

Gordon Moore, scientist and chairman of Intel, first noticed that the number of transistors that can be placed inexpensively on an integrated circuit doubles approximately every two years. This exponential growth in the number of transistors per chip amounts to an increase by a factor of more than 1,000,000 over 40 years. Moore’s law has fueled the enormous developments in computer technology that have revolutionized technology and society in the last decades.
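The factor follows directly from the doubling period: 40 years at one doubling every two years gives 20 doublings, and 2^20 is just over a million.

```python
years, doubling_period = 40, 2        # Moore's law: one doubling every ~2 years
doublings = years // doubling_period  # 20 doublings in 40 years
print(2 ** doublings)                 # 1048576 -> a factor of ~1,000,000
```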


A long-standing question is how long Moore’s law will hold, since no exponential growth can last forever. The Technology Quarterly section of this week’s edition of The Economist, summarized in this short article, analyzes the question in depth.

The conclusion is that, while the rate of increase in the number of transistors per chip will keep slowing, advances in other technologies, such as software and cloud computing, will take up the slack, providing increases in computational power that will not deviate much from what Moore’s law would have predicted.


Image of computer scientist and businessman Gordon Moore. The image is a screenshot from the Scientists You Must Know video, created by the Chemical Heritage Foundation, in which he briefly discusses Moore’s law.