The Digital Mind: How Science is Redefining Humanity

Following its release in the US, The Digital Mind, published by MIT Press, is now available in Europe, at an Amazon store near you (and possibly in other bookstores). The book covers the evolution of technology, leading towards the expected emergence of digital minds.

Here is a short rundown of the book, kindly provided by yours truly, the author.

New technologies have been introduced into human lives at an ever-increasing rate since the first significant advances took place with the cognitive revolution, some 70,000 years ago. Although electronic computers are recent and have been around for only a few decades, they represent just the latest way to process information and create order out of chaos. Before computers, the job of processing information was done by living organisms, which are nothing more than complex information processing devices, created by billions of years of evolution.

Computers execute algorithms, sequences of small steps that, in the end, perform some desired computation, be it simple or complex. Algorithms are everywhere, and they have become an integral part of our lives. Evolution is, in itself, a complex and long-running algorithm that created all species on Earth. The most advanced of these species, Homo sapiens, was endowed with a brain that is the most complex information processing device ever devised. Brains enable humans to process information in a way unparalleled by any other species, living or extinct, or by any machine. They provide humans with intelligence, consciousness and, some believe, even with a soul, a characteristic that makes humans different from all other animals and from any machine in existence.
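To make the "evolution is an algorithm" idea concrete, here is a minimal sketch (my own illustration, not taken from the book) of a toy evolutionary algorithm in Python: a population of random bit strings is repeatedly mutated and selected, and order gradually emerges from random variation.

```python
import random

TARGET = [1] * 20                 # an arbitrary "fittest" genome, chosen only for illustration
POP_SIZE, MUTATION_RATE = 30, 0.05

def fitness(genome):
    # Fitness is simply the number of bits that match the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    # Each bit flips with a small probability, mimicking random variation.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

# Start from a random population and repeat selection followed by variation.
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    survivors = population[: POP_SIZE // 2]          # keep the fittest half
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]

print("Best genome after", generation, "generations:", population[0])
```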

But brains also enabled humans to develop science and technology to a point where it is possible to design computers with a power comparable to that of the human brain. Artificial intelligence will one day make it possible to create intelligent machines, and computational biology will one day enable us to model, simulate and understand biological systems, even complete brains, with unprecedented levels of detail. From these efforts, new minds will eventually emerge, minds that will emanate from the execution of programs running in powerful computers. These digital minds may one day rival our own, become our partners and replace humans in many tasks. They may usher in a technological singularity, a revolution in human society unlike any that has happened before. They may make humans obsolete, even a threatened species, or they may turn us into super-humans or demi-gods.

How will we create these digital minds? How will they change our daily lives? Will we recognize them as equals or will they forever be our slaves? Will we ever be able to simulate truly human-like minds in computers? Will humans transcend the frontiers of biology and become immortal? Will humans remain, forever, the only known intelligence in the universe?

 

Artificial Intelligence developments: the year in review

TechCrunch, a popular site dedicated to technology news, has published a list of the top Artificial Intelligence news of 2016.

2016 seems indeed to have been the year Artificial Intelligence (AI) left the confines of university labs and came into public view.


Several of the news items selected by TechCrunch were also covered in this blog.

In March, AlphaGo, a Go-playing program developed by Google’s DeepMind, defeated 18-time world champion Lee Sedol (reference in the TechCrunch review).

Digital Art, where deep learning algorithms learn to paint in the style of a particular artist, was also the topic of one post (reference in the TechCrunch review).

In May, Digital Minds posted Moore’s law is dead, long live Moore’s law, describing how Google’s new chip can be used to run deep learning algorithms using Google’s TensorFlow (related article in the TechCrunch review).
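As a rough illustration of the kind of workload such chips accelerate, here is a minimal sketch using the modern TensorFlow Keras API (an assumption on my part; the original post referred to 2016-era tooling) that defines and trains a tiny neural network on a toy problem.

```python
import numpy as np
import tensorflow as tf  # assumes a modern TensorFlow 2.x installation

# Toy data: the XOR function, just to show the define / compile / fit workflow.
x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
y = np.array([[0], [1], [1], [0]], dtype=np.float32)

# A very small fully connected network.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=500, verbose=0)

print(model.predict(x).round())  # ideally close to [[0], [1], [1], [0]]
```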

TechCrunch has identified a number of other relevant developments that make for interesting reading, including the Facebook-Amazon-Google-IBM-Microsoft mega partnership on AI, Facebook’s AI strategy and the news about the language invented by Google’s translation tool.

Will the AI wave gain momentum in 2017, as predicted by this article? I think the chances are good, but only the future will tell.

How deep is deep learning, really?

In a recent article, Artificial Intelligence (AI) pioneer and retired Yale professor Roger Schank states that he is “concerned about … the exaggerated claims being made by IBM about their Watson program”. According to Schank, IBM Watson does not really understand the texts it processes, and IBM’s claims are baseless, since no deep understanding of the concepts takes place when Watson processes information.

Roger Schank’s argument is an important one and deserves a deeper discussion. First, I will try to summarize the central point of Schank’s argument. Schank has been one of the best-known researchers and practitioners of “Good Old-Fashioned Artificial Intelligence”, or GOFAI. GOFAI practitioners aimed at creating symbolic models of the world (or of subsets of the world) comprehensive enough to support systems able to interpret natural language. Roger Schank is indeed well known for introducing Conceptual Dependency Theory and Case-Based Reasoning, two influential GOFAI approaches to natural language understanding.
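To make the contrast with statistical methods concrete, here is a deliberately simplified, hypothetical sketch (my own illustration, not Schank’s actual formalism) of the kind of hand-built symbolic structure a Conceptual Dependency-style system would manipulate for the sentence “John gave Mary a book”.

```python
# A toy, hand-crafted structure loosely inspired by Conceptual Dependency:
# the sentence "John gave Mary a book" is reduced to an ATRANS primitive
# (transfer of an abstract relationship, here possession).
sentence = "John gave Mary a book"

representation = {
    "primitive": "ATRANS",   # transfer of possession
    "actor": "John",
    "object": "book",
    "from": "John",
    "to": "Mary",
    "tense": "past",
}

def answer_who_has(rep):
    # Symbolic "understanding": inferences are read directly off the structure.
    if rep["primitive"] == "ATRANS":
        return rep["to"]
    return None

print(f"After '{sentence}', the {representation['object']} is held by:",
      answer_who_has(representation))
```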

As Schank states, GOFAI practitioners “were making some good progress on getting computers to understand language but, in 1984, AI winter started. AI winter was a result of too many promises about things AI could do that it really could not do.” The AI winter he is referring to, a period of deep disbelief in the field of AI that lasted more than a decade, was the result of the fact that creating symbolic representations complete enough and robust enough to address real-world problems was much harder than it seemed.

The most recent advances in AI, of which IBM Watson is a good example, mostly use statistical methods, such as neural networks or support vector machines, to tackle real-world problems. Thanks to much faster computers, better algorithms, and much larger amounts of available data, systems trained with statistical learning techniques, such as deep learning, are able to address many real-world problems. In particular, they are able to process, with remarkable accuracy, natural language sentences and questions. The essence of Schank’s argument is that this statistics-based approach will never lead to true understanding, since true understanding depends on having clear-cut, symbolic representations of the concepts, and that is something statistical learning will never provide.
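As a very rough illustration of what “statistical” means here, the following sketch (a toy example of my own, far removed from Watson’s actual architecture) trains a support vector machine on a handful of labelled questions and then classifies a new one; everything it “knows” comes from word statistics, not from explicit symbolic representations.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# A tiny, made-up training set: questions labelled by topic.
texts = [
    "Who painted the Mona Lisa?", "Which artist created Guernica?",
    "What is the capital of France?", "Which city hosts the Eiffel Tower?",
]
labels = ["art", "art", "geography", "geography"]

# TF-IDF features fed into a linear SVM: purely statistical, no symbolic knowledge.
model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(texts, labels)

# Classify an unseen question based solely on learned word statistics.
print(model.predict(["Who sculpted David?"]))
```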

Schank is, I believe, mistaken. The brain is, at its essence, a statistical machine that learns, from statistics and correlations, the best way to react. Statistical learning, even if it is not the real thing, may get us very close to strong Artificial Intelligence. But I will let you make the call.

Watch this brief excerpt of Watson’s participation in the Jeopardy! competition and judge for yourself: did IBM Watson understand the questions and the riddles, or did it not?