Crazy chatbots or smart personal assistants?

Well-known author, scientist, and futurologist Ray Kurzweil is reportedly working with Google to create a chatbot named Danielle. Chatbots, natural language programs that draw their input from social networks and other online communities, have long been of interest to researchers, since they represent an easy way to test new technologies in the real world.

Very recently, a chatbot created by Microsoft, Tay, made the news because it became “a Hitler-loving sex robot” after chatting with teens on the web for less than 24 hours. Tay was an AI designed to speak like a teenage girl, built as an experiment to improve Microsoft’s voice recognition software. The chatbot was rapidly “deleted” after it started comparing Hitler, in favorable terms, with well-known contemporary politicians.

Danielle, under development by Google with the cooperation of Ray Kurzweil, will reportedly be released later this year. According to Kurzweil, Danielle will be able to maintain relevant, meaningful conversations, but he still points to 2029 as the year when a chatbot will pass the Turing test, becoming indistinguishable from a human. Kurzweil, the author of The Singularity is Near and many other books on the future of technology, is a firm believer in the singularity, a point in human history when society will undergo such radical change that it will become unrecognizable to contemporary humans.


In a brief video interview (which was since removed from YouTube), Kurzweil describes the Google chatbot project, and the hopes he pins on this project.

While chatbots may not look very interesting, unless you have a lot of spare time on your hands, the technology can be used to create intelligent personal assistants. These assistants can take verbal instructions and act on your behalf, and may therefore become very useful, almost indispensable “tools”. As Austin Okere puts it in this article, “in five or ten years, when we have got over our skepticism and become reliant upon our digital assistants, we will wonder how we ever got along without them.”



How deep is deep learning, really?

In a recent article, Artificial Intelligence (AI) pioneer and retired Yale professor Roger Schank states that he is “concerned about … the exaggerated claims being made by IBM about their Watson program”. According to Schank, IBM Watson does not really understand the texts it processes, and IBM’s claims are baseless, since no deep understanding of the concepts takes place when Watson processes information.

Roger Schank’s argument is an important one and deserves some deeper discussion. First, I will try to summarize its central point. Schank is one of the best-known researchers and practitioners of “Good Old-Fashioned Artificial Intelligence”, or GOFAI. GOFAI practitioners aimed at creating symbolic models of the world (or of subsets of the world) comprehensive enough to support systems able to interpret natural language. Schank is indeed famous for introducing Conceptual Dependency Theory and Case-Based Reasoning, two influential GOFAI approaches to natural language understanding.

As Schank states, GOFAI practitioners “were making some good progress on getting computers to understand language but, in 1984, AI winter started. AI winter was a result of too many promises about things AI could do that it really could not do.” The AI winter he refers to, a deep disbelief in the field of AI that lasted more than a decade, was the result of the fact that creating symbolic representations complete and robust enough to address real-world problems was much harder than it seemed.

The most recent advances in AI, of which IBM Watson is a good example, use mostly statistical methods, like neural networks or support vector machines, to tackle real-world problems. Thanks to much faster computers, better algorithms, and much larger amounts of available data, systems trained using statistical learning techniques, such as deep learning, are able to address many real-world problems. In particular, they are able to process, with remarkable accuracy, natural language sentences and questions. The essence of Schank’s argument is that this statistics-based approach will never lead to true understanding, since true understanding depends on having clear-cut, symbolic representations of the concepts, and that is something statistical learning will never provide.

Schank is, I believe, mistaken. The brain is, in essence, a statistical machine, one that learns from statistics and correlations how best to react. Statistical learning, even if it is not “the real thing”, may get us very close to strong Artificial Intelligence. But I will let you make the call.

Watch this brief excerpt of Watson’s participation in the Jeopardy! competition, and answer for yourself: did IBM Watson understand the questions and the riddles, or did it not?

A new and improved tree of life brings some surprising results

In a recent article, published in the journal Nature Microbiology, a group of researchers from UC Berkeley, in collaboration with other universities and institutes, proposed a new version of the tree of life, which dramatically changes our view of the relationships between the species inhabiting planet Earth.

Many depictions of the tree of life tend to focus on the enormous and well-known diversity of eukaryotes, a group of organisms composed of complex cells that includes all animals, plants and fungi.

This newly published version of the tree of life uses metagenomic analysis of genomic data from many previously little-known organisms, together with published genomic sequences, to infer a significantly different picture. This new view reveals the dominance of bacterial diversification. A full-scale version of the proposed tree of life enables you to find our own ancestors, in the extreme bottom right of the figure, in the Opisthokont group of organisms. The Opisthokonts include both the animal and fungus kingdoms, together with other eukaryotic microorganisms. Opisthokont flagellate cells, such as the sperm of most animals and the spores of the chytrid fungi, propel themselves using a single posterior flagellum, a feature that gives the group its name. At the level of resolution used in the study, humans and mushrooms are so close that they cannot be told apart.


This version of the tree of life maintains the three great trunks that Carl Woese and his colleagues published in the first “universal tree of life”, in the seventies.

Our own trunk, the eukaryotes, includes animals, plants, fungi and protozoans. A second trunk includes many familiar bacteria like Escherichia coli. The third trunk, the Archaea, includes little-known microbes that live in extreme places like hot springs and oxygen-free wetlands.



However, this more extensive and detailed analysis, based on extensive genomic data, provides a more global view of the evolutionary process that has shaped life on Earth for the last four billion years.

Images from the article in Nature Microbiology, by Hug et al., and the work of Woese et al.


Moore’s law is dead, long live Moore’s law

Google recently announced the Tensor Processing Unit (TPU), an application-specific integrated circuit (ASIC) tailored for machine learning applications that, according to the company, delivers an order of magnitude better performance, per watt, than existing general-purpose processors.

The chip, developed specifically to speed up increasingly common machine learning applications, has already powered a number of state-of-the-art applications, including AlphaGo and StreetView. According to Google, these applications are more tolerant of reduced numerical precision and can therefore be implemented using fewer transistors per operation. Because of this, Google engineers were able to squeeze more operations per second out of each transistor.
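Google has not published the TPU’s internal design, but the trade-off it exploits, lower numerical precision in exchange for cheaper arithmetic, can be sketched with a toy quantization example. The 8-bit width and the single per-tensor scale factor below are illustrative choices of mine, not Google’s actual scheme:

```python
import numpy as np

# Illustrative sketch: map float32 weights onto signed 8-bit integers.
# Hardware that operates on int8 instead of float32 needs far fewer
# transistors (and far less energy) per multiply-accumulate.

def quantize(w, num_bits=8):
    """Quantize a float tensor to signed integers with one scale factor."""
    qmax = 2 ** (num_bits - 1) - 1          # 127 for int8
    scale = np.abs(w).max() / qmax          # one scale for the whole tensor
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximation of the original float values."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1000).astype(np.float32)
q, scale = quantize(w)
w_hat = dequantize(q, scale)
# Maximum round-trip error is about scale / 2: small enough that a
# trained network's predictions barely change.
print(np.abs(w - w_hat).max())
```

The point is not the exact scheme but the budget: an 8-bit multiplier is a small fraction of the silicon of a 32-bit floating-point one, so tolerance to reduced precision translates directly into more operations per second per transistor.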


The new chip is tailored for TensorFlow, an open source library that performs numerical computation using data flow graphs. Each node in the graph represents one mathematical operation that acts on the tensors that come in through the graph edges.
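The data-flow idea can be sketched in a few lines of plain Python. This miniature is purely illustrative and is not the TensorFlow API: each node holds an operation, edges carry values, and the graph is built first and only executed afterwards:

```python
# A miniature data-flow graph: nodes are operations, values ("tensors")
# flow along the edges between them.

class Node:
    def __init__(self, op, *inputs):
        self.op = op          # the mathematical operation at this node
        self.inputs = inputs  # incoming edges: the nodes feeding it

    def eval(self):
        # Evaluate the incoming edges first, then apply this node's op.
        return self.op(*(n.eval() for n in self.inputs))

def const(value):
    # A source node with no inputs that simply emits a constant.
    return Node(lambda: value)

# Build the graph for (2 + 3) * 4 without computing anything yet...
graph = Node(lambda a, b: a * b,
             Node(lambda a, b: a + b, const(2), const(3)),
             const(4))

# ...and only now run it, much as a TensorFlow session executes a graph.
print(graph.eval())  # 20
```

Separating graph construction from execution is what lets a system like TensorFlow hand the whole graph to specialized hardware such as the TPU, rather than interpreting one operation at a time.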

Google stated that the TPU represents a jump of ten years into the future in what regards Moore’s Law, which has recently been viewed as finally coming to a halt. Developments like this, with alternative architectures or alternative ways to perform computations, are likely to keep delivering exponential improvements in computing power for years to come, in line with Moore’s Law.

The brain is not a computer! Or is it?

In a recent article, reputed psychologist Robert Epstein, the former editor-in-chief of Psychology Today, argues that the brain is not a computer and it is not an information processing device. His main point is that there is no place in the brain where “copies of words, pictures, grammatical rules or any other kinds of environmental stimuli” are stored. He argues that we are not born with “information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers” in our brains.


His point is well taken. We now know that the brain does not store its memories in any form comparable to that of a digital computer. Early symbolic approaches to Artificial Intelligence (GOFAI: good old-fashioned artificial intelligence) failed soundly at obtaining anything similar to intelligent behavior.

In a digital computer, memories are stored linearly, in sequential locations in the computer’s digital memory. In brains, memories are stored in ways that are still mostly unknown, encoded mostly in the vast network of interconnections between the billions of neurons that constitute a human brain. Memories are not stored in individual neurons, nor in individual synapses. The idea that they are, he says, and I agree, is preposterous.

Robert Epstein, however, goes further, arguing that the human brain is not an information processing device. Here, I must disagree. Although they do it in very different ways from computers, brains are nothing more than information processing devices. He argues against the conclusion that “all entities that are capable of behaving intelligently are information processors”, which he says permeates all current research in brain and behavior. Needless to say, I disagree. Any entity capable of behaving intelligently needs to be able to process information.

Epstein concludes by arguing that we will never, ever, be able to reproduce the behavior of a human mind in a computer. Not only is the challenge of reverse engineering just too big, he argues, but the behavior of a brain, even if simulated in a computer, would not create a mind.

The jury is still out on the first argument. I agree that reverse engineering a brain may remain, forever, impossible, due to physical and technological limitations. However, if that were to be possible, one day, I do not see any reason why the behavior of a human mind could not emanate from an emulation running in a computer.


Image from the cover of the book “Eye, Brain, and Vision”, by David Hubel, available online at


Whole brain emulation in a super-computer?

The largest spiking neural network simulation performed to date modeled the behavior of a network of 1.8 billion neurons, for one second of real time, using the 83,000 processing nodes of the K computer. The simulation took 40 minutes of wall-clock time, with an average of 6,000 synapses per neuron.

This result, obtained by a team of researchers from the Jülich Research Centre and the Riken Advanced Institute for Computational Science, among other institutions, shows that it is possible to simulate networks with more than one billion neurons in fast supercomputers. Furthermore, the authors have shown that the technology scales up and can be used to simulate even larger networks of neurons, perhaps as large as a whole brain.
The simulations were performed using the NEST software package, designed to efficiently model and simulate networks of spiking neurons. If one extrapolates this technology to whole-brain emulation (with its 88 billion neurons), the simulation performed on the K supercomputer would be about 100,000 times slower than real time.
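The 100,000x figure can be checked with a quick back-of-envelope calculation, under the simplifying assumption that simulation cost scales linearly with the number of neurons (in practice, synapse counts and inter-node communication matter too):

```python
# Back-of-envelope check of the "about 100,000 times slower than real
# time" figure for whole-brain emulation on the K computer.

simulated_neurons = 1.8e9   # neurons in the reported K computer run
wall_clock_s = 40 * 60      # 40 minutes of wall-clock time...
biological_s = 1.0          # ...for one second of biological time

slowdown = wall_clock_s / biological_s     # 2400x for 1.8 billion neurons

brain_neurons = 88e9                       # whole brain, per the article
scale = brain_neurons / simulated_neurons  # about 49x more neurons

# Assuming linear scaling in neuron count:
print(round(slowdown * scale))  # 117333, i.e. on the order of 100,000x
```

So the quoted order of magnitude follows directly from the reported numbers; closing a factor of 100,000 is the real obstacle between these simulations and real-time whole-brain emulation.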

The K-computer has an estimated performance of 8 petaflops, or 8 quadrillion (10 to the 15th power) floating point operations per second and is currently the world’s fourth fastest computer.

Meet Ross, our new lawyer

Fortune reports that law firm Baker & Hostetler has hired an artificially intelligent lawyer, Ross. According to the company that created it, Ross Intelligence, the IBM Watson-powered digital attorney interacts with other workers as a normal lawyer would.

“You ask your questions in plain English, as you would a colleague, and ROSS then reads through the entire body of law and returns a cited answer and topical readings from legislation, case law and secondary sources to get you up-to-speed quickly.”


Ross will work in the law firm’s bankruptcy practice, which currently employs roughly 50 lawyers. Baker & Hostetler’s chief information officer explained that the company believes emerging technologies, like cognitive computing and other forms of machine learning, can help enhance the services delivered to its clients. There is no information on the number of lawyers to be replaced by Ross.

Going through large amounts of information stored in plain text and compiling it in usable form is one of the most interesting applications of natural language processing systems, like IBM Watson. If successful, one single system may do the work of hundreds or thousands of specialists, at least in a large fraction of the cases that do not require extensive or involved reasoning. However, as the technology evolves, even these cases may become ultimately amenable to treatment by AI agents.

Picture by Humanrobo (own work), CC BY-SA 3.0, via Wikimedia Commons


Will a superintelligent machine be the last thing we invent?

From the very beginning, computer scientists have aimed at creating machines that are as intelligent as humans. However, there is no reason to believe that the intelligence of machines will stop at that level. Machines may become, very rapidly, much more intelligent than humans, once they have the ability to design the next versions of artificial intelligences (AIs).

This idea is not new. In a 1951 lecture entitled “Intelligent Machinery, A Heretical Theory”, Alan Turing said that “it seems probable that once the machine thinking method has started, it would not take long to outstrip our feeble powers… At some stage therefore we should have to expect the machines to take control…”


The worry that superintelligent machines may one day take charge has occupied an increasingly large number of researchers, and has been extensively addressed in a recent book, Superintelligence, already covered in this blog.

Luke Muehlhauser, the Executive Director of the Machine Intelligence Research Institute (MIRI), in Berkeley, recently published a paper with Nick Bostrom, from the Future of Humanity Institute, at Oxford, on the need to guarantee that advanced AIs will be friendly to the human species.

Muehlhauser and Bostrom argue that “Humans will not always be the most intelligent agents on Earth, the ones steering the future”, and ask “What will happen to us when we no longer play that role, and how can we prepare for this transition?”

In an interesting interview, which appeared in io9, Muehlhauser states that he was drawn into this problem when he became familiar with the work of Irving J. Good, a British mathematician who worked with Alan Turing at Bletchley Park. The authors believe that further strategic and technical research on this problem is required, to avoid the risk that a superintelligent system is created before we fully understand the consequences. Furthermore, they believe a much higher level of awareness is needed, in general, in order to align research agendas with safety requirements. Their point is that a superintelligent system would be the most dangerous weapon ever developed by humanity.

All of this creates the risk that a superintelligent machine may be the last thing invented by humanity, either because humanity becomes extinct, or because our intellects would be so vastly surpassed by AIs that they would make all the significant contributions.


Jill Watson, a robotic teaching assistant, passes the Turing test?

Ashok Goel, a computer science professor at the Georgia Institute of Technology, trained a system using IBM Watson technology to behave as a teaching assistant in an artificial intelligence course. The system, named Jill Watson, answered questions, reminded students of deadlines and, in general, provided feedback to the students by email. It was, so to speak, a robotic teaching assistant.


Jill was trained using nearly 40,000 postings available on a discussion forum, and was configured to answer only when her level of confidence was very high, thus avoiding weak answers that would give “her” away. In March, she went online and began posting responses live.
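The details of Jill’s configuration were not published, but the gating idea can be sketched as a simple confidence threshold. The cutoff value and the function below are hypothetical, for illustration only:

```python
# Illustrative sketch (not the actual Jill Watson system): only answers
# whose confidence clears a high threshold are posted automatically;
# everything else falls through to a human teaching assistant.

CONFIDENCE_THRESHOLD = 0.97  # hypothetical cutoff; the real value is unknown

def route_question(question, candidate_answer, confidence):
    """Decide whether the bot replies or a human TA takes over."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return ("post", candidate_answer)   # the bot replies directly
    return ("escalate", question)           # a human TA handles it

print(route_question("When is HW3 due?", "Friday at 5pm.", 0.99))
# ('post', 'Friday at 5pm.')
print(route_question("Can I use my own dataset?", "Probably.", 0.60))
# ('escalate', 'Can I use my own dataset?')
```

A high threshold trades coverage for credibility: the bot stays silent on most questions, but the answers it does give are rarely wrong, which is exactly what kept students from noticing.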

As the Wall Street Journal reports, none of the students seemed to notice, and some of them were “flabbergasted” when they were told about the experiment. Some, however, may have harboured doubts, since Jill replied so quickly to the questions posed by the students.

Even though this falls way short of a full-fledged Turing test, it raises significant questions about how effective AI agents can be in replacing professors and teaching assistants in the task of providing feedback to students. Next year, Ashok Goel plans to tell his students that one of the TAs is a computer, but not which one. Like with the Cylons, you know. What could possibly go wrong?


Is consciousness simply the consequence of complex system organization?

The theory that consciousness is simply an emergent property of complex systems has been gaining adherents lately.

The idea may be originally due to Giulio Tononi, from the University of Wisconsin in Madison. Tononi argued that a system that exhibits consciousness must be able to store and process large amounts of information, and must have some internal structure that cannot be divided into independent parts. In other words, consciousness is a result of the intrinsic complexity of the internal organization of an information processing system, a complexity that cannot be broken into parts. A good overview of the theory was recently published in the Philosophical Transactions of the Royal Society.

The theory has been gaining adherents, such as Max Tegmark, from MIT, who argues that consciousness is simply a state of matter. Tegmark suggests that consciousness arises out of particular arrangements of matter, and that there may exist varying degrees of consciousness. Tegmark believes present-day computers may be approaching the threshold of higher consciousness.


Historically, consciousness has been extremely difficult to explain because it is essentially a totally subjective phenomenon. It is impossible to assess objectively whether an animal or an artificial agent (or even a human, for that matter) is conscious, since, ultimately, one has to rely on the word of the agent whose consciousness we are trying to assess. Tononi’s and Tegmark’s theories may, eventually, shed some light on this obscure phenomenon.