How deep is deep learning, really?

In a recent article, Artificial Intelligence (AI) pioneer and retired Yale professor Roger Schank states that he is “concerned about … the exaggerated claims being made by IBM about their Watson program”. According to Schank, IBM Watson does not really understand the texts it processes, and IBM’s claims are baseless, since no deep understanding of the concepts takes place when Watson processes information.

Roger Schank’s argument is an important one and deserves some deeper discussion. First, let me summarize its central point. Schank has been one of the best-known researchers and practitioners of “Good Old Fashioned Artificial Intelligence”, or GOFAI. GOFAI practitioners aimed at creating symbolic models of the world (or of subsets of it) comprehensive enough to support systems able to interpret natural language. Schank himself is famous for introducing Conceptual Dependency Theory and Case-Based Reasoning, two influential GOFAI approaches to natural language understanding.

As Schank states, GOFAI practitioners “were making some good progress on getting computers to understand language but, in 1984, AI winter started. AI winter was a result of too many promises about things AI could do that it really could not do.” The AI winter he refers to, a deep disbelief in the field of AI that lasted more than a decade, was the result of the fact that creating symbolic representations complete and robust enough to address real-world problems was much harder than it had seemed.

The most recent advances in AI, of which IBM Watson is a good example, use mostly statistical methods, such as neural networks and support vector machines, to tackle real-world problems. Thanks to much faster computers, better algorithms, and much larger amounts of available data, systems trained with statistical learning techniques, such as deep learning, are able to address many real-world problems. In particular, they can process natural language sentences and questions with remarkable accuracy. The essence of Schank’s argument is that this statistics-based approach will never lead to true understanding, since true understanding depends on having clear-cut, symbolic representations of the concepts, and that is something statistical learning will never provide.
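To make the contrast concrete, here is a deliberately tiny sketch (plain Python, with made-up training sentences) of what “learning from statistics” means at its simplest: the program picks a topic for a new sentence purely from word-count correlations, with no symbolic model of what any word means.

```python
from collections import Counter

# Toy training data: one short text per topic (invented for illustration).
training = {
    "astronomy": "the planet orbits the star and the moon orbits the planet",
    "cooking":   "whisk the eggs and fold the flour into the batter",
}

# "Learning" here is just counting how often each word occurs per topic.
counts = {topic: Counter(text.split()) for topic, text in training.items()}

def classify(sentence):
    words = sentence.split()
    # Score each topic by summing the training counts of the sentence's words.
    scores = {topic: sum(c[w] for w in words) for topic, c in counts.items()}
    return max(scores, key=scores.get)

print(classify("which star does the planet orbit"))  # astronomy
print(classify("fold the eggs into the flour"))      # cooking
```

Real systems replace raw word counts with millions of learned parameters, but the principle is the same: correlations stand in for comprehension, which is precisely what Schank objects to.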

Schank is, I believe, mistaken. The brain is, at its essence, a statistical machine that learns, from statistics and correlations, the best way to react. Statistical learning, even if it is not the real thing, may get us very close to strong Artificial Intelligence. But I will let you make the call.

Watch this brief excerpt of Watson’s participation in the Jeopardy! competition, and judge for yourself: did IBM Watson, or did it not, understand the questions and the riddles?

