March of the Machines

The Economist dedicates this week’s special report to Artificial Intelligence and the effects it will have on the economy.


The first article in the report addresses a question first raised two centuries ago, during the industrial revolution, when it was called the machinery question: will machines replace so many human jobs that a large fraction of humanity is left unemployed? The impact on jobs is addressed in more detail in another piece of the report, “Automation and anxiety”, which includes a reference to a 2013 article by Frey and Osborne. That article estimates that 47% of jobs in America, including many white-collar ones, are at high risk of automation.


Countermeasures to these challenges are discussed in some detail, including the idea of a universal basic income but, in the end, The Economist seems to side with the traditional opinion of economists: that technology will ultimately create more jobs than it destroys.

Other pieces in the report describe the technology behind the most significant recent advances in AI, deep learning, and the complex ethical questions raised by the possibility of advanced artificial intelligences.

Images in this article are from the print edition of The Economist.

 

 

How deep is deep learning, really?

In a recent article, Artificial Intelligence (AI) pioneer and retired Yale professor Roger Schank states that he is “concerned about … the exaggerated claims being made by IBM about their Watson program“. According to Schank, IBM Watson does not really understand the texts it processes, and IBM’s claims are baseless, since no deep understanding of the concepts takes place when Watson processes information.

Roger Schank’s argument is an important one and deserves some deeper discussion. First, I will try to summarize its central point. Schank has been one of the best-known researchers and practitioners of “Good Old Fashioned Artificial Intelligence”, or GOFAI. GOFAI practitioners aimed at creating symbolic models of the world (or of subsets of it) comprehensive enough to support systems able to interpret natural language. Schank himself is known for introducing Conceptual Dependency Theory and Case-Based Reasoning, two influential GOFAI approaches to natural language understanding.

As Schank states, GOFAI practitioners “were making some good progress on getting computers to understand language but, in 1984, AI winter started. AI winter was a result of too many promises about things AI could do that it really could not do.” The AI winter he refers to, a deep disbelief in the field of AI that lasted more than a decade, stemmed from the fact that creating symbolic representations complete and robust enough to address real-world problems was much harder than it seemed.

The most recent advances in AI, of which IBM Watson is a good example, mostly use statistical methods, such as neural networks and support vector machines, to tackle real-world problems. Thanks to much faster computers, better algorithms, and much larger amounts of available data, systems trained with statistical learning techniques, such as deep learning, are able to address many real-world problems. In particular, they can process, with remarkable accuracy, natural language sentences and questions. The essence of Schank’s argument is that this statistics-based approach will never lead to true understanding, since true understanding depends on having clear-cut, symbolic representations of concepts, and that is something statistical learning will never provide.
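To make “statistical methods” concrete, here is a deliberately tiny sketch of the idea, with a made-up four-sentence training corpus: word statistics alone can classify text, with no symbolic model of meaning anywhere in sight.

```python
from collections import Counter

# Tiny labelled corpus (invented for illustration only).
train = [("great movie loved it", "pos"),
         ("loved the acting great fun", "pos"),
         ("terrible movie hated it", "neg"),
         ("boring and terrible", "neg")]

# Count how often each word appears under each label.
counts = {"pos": Counter(), "neg": Counter()}
for text, label in train:
    counts[label].update(text.split())

def classify(text):
    # Score each label by summed (smoothed) word frequencies -- no symbols,
    # no hand-built world model, just statistics of the training data.
    def score(label):
        total = sum(counts[label].values())
        return sum((counts[label][w] + 1) / (total + 1) for w in text.split())
    return max(("pos", "neg"), key=score)

print(classify("loved this great film"))   # pos
print(classify("hated it terrible"))       # neg
```

Whether this kind of pattern matching, scaled up by many orders of magnitude, amounts to understanding is exactly the question Schank raises.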

Schank is, I believe, mistaken. The brain is, in essence, a statistical machine that learns, from statistics and correlations, the best way to react. Statistical learning, even if it is not the real thing, may get us very close to strong Artificial Intelligence. But I will let you make the call.

Watch this brief excerpt of Watson’s participation in the Jeopardy! competition, and judge for yourself: did IBM Watson understand the questions and the riddles, or not?

Moore’s law is dead, long live Moore’s law

Google recently announced the Tensor Processing Unit (TPU), an application-specific integrated circuit (ASIC) tailored for machine learning applications that, according to the company, delivers an order of magnitude better performance, per watt, than existing general-purpose processors.

The chip, developed specifically to speed up increasingly common machine learning workloads, has already powered a number of state-of-the-art applications, including AlphaGo and StreetView. According to Google, these applications are more tolerant of reduced numerical precision and can therefore be implemented using fewer transistors per operation. Because of this, Google’s engineers were able to squeeze more operations per second out of each transistor.
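Google has not published the TPU’s internals, but the precision argument can be illustrated with a small sketch: quantizing values to 8 bits loses very little accuracy in a dot product, the core operation of neural networks. The quantization scheme and all numbers below are made up for illustration.

```python
import numpy as np

def quantize_int8(x):
    """Map float values onto 255 integer levels (a common 8-bit scheme)."""
    scale = np.abs(x).max() / 127.0
    return np.round(x / scale).astype(np.int8), scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

# A toy "neuron": one dot product, in float32 vs. int8 arithmetic.
rng = np.random.default_rng(0)
w = rng.normal(size=1000).astype(np.float32)
x = rng.normal(size=1000).astype(np.float32)

exact = float(w @ x)
qw, sw = quantize_int8(w)
qx, sx = quantize_int8(x)
# Accumulate in int32 and rescale once at the end -- cheap in hardware.
approx = int(qw.astype(np.int32) @ qx.astype(np.int32)) * sw * sx

print(exact, approx)  # the two results agree closely
```

Int8 multipliers need far less silicon than float32 ones, which is (roughly) where the extra operations per transistor come from.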


The new chip is tailored for TensorFlow, an open source library that performs numerical computation using data flow graphs. Each node in the graph represents one mathematical operation that acts on the tensors that come in through the graph edges.
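As an illustration of the dataflow idea (a toy sketch in plain Python, not TensorFlow’s actual API), a graph can be represented as operation nodes whose incoming edges deliver tensors:

```python
import numpy as np

class Node:
    """One operation in the graph; 'inputs' are the incoming edges."""
    def __init__(self, op, *inputs):
        self.op, self.inputs = op, inputs

    def eval(self):
        # Evaluate the upstream nodes first (edges deliver their tensors),
        # then apply this node's operation.
        return self.op(*(n.eval() for n in self.inputs))

# Leaf nodes just return a stored tensor.
const = lambda t: Node(lambda: t)

a = const(np.array([1.0, 2.0]))
b = const(np.array([3.0, 4.0]))
added = Node(np.add, a, b)             # edges from a and b into 'add'
scaled = Node(lambda t: 2.0 * t, added)

print(scaled.eval())  # [ 8. 12.]
```

Because the whole computation is described as a graph before it runs, a framework can hand it to whatever hardware is available, which is precisely what makes an accelerator like the TPU pluggable.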

Google stated that the TPU represents a jump ten years into the future in terms of Moore’s Law, which has recently been seen as finally coming to a halt. Developments like this, with alternative architectures or alternative ways to perform computations, are likely to keep delivering exponential improvements in computing power for years to come, in line with Moore’s Law.

Meet Ross, our new lawyer

Fortune reports that law firm Baker & Hostetler has hired an artificially intelligent lawyer, Ross. According to Ross Intelligence, the company that created it, the IBM Watson-powered digital attorney interacts with other workers as a normal lawyer would.

“You ask your questions in plain English, as you would a colleague, and ROSS then reads through the entire body of law and returns a cited answer and topical readings from legislation, case law and secondary sources to get you up-to-speed quickly.”


Ross will work in the law firm’s bankruptcy practice, which currently employs roughly 50 lawyers. Baker & Hostetler’s chief information officer explained that the firm believes emerging technologies, like cognitive computing and other forms of machine learning, can help enhance the services delivered to its clients. There is no information on the number of lawyers to be replaced by Ross.

Going through large amounts of information stored in plain text and compiling it into usable form is one of the most interesting applications of natural language processing systems like IBM Watson. If successful, a single system may do the work of hundreds or thousands of specialists, at least in the large fraction of cases that do not require extensive or involved reasoning. As the technology evolves, however, even those cases may ultimately become amenable to treatment by AI agents.
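As a rough illustration of the idea (not Ross’s or Watson’s actual pipeline), a minimal retrieval step might rank passages by their overlap with a plain-English question and return them with citations attached. The mini corpus and the scoring below are invented for the example.

```python
def retrieve(question, corpus, k=2):
    """Rank passages by word overlap with the question -- a crude stand-in
    for the statistical ranking a system like Watson would use."""
    q = set(question.lower().split())
    ranked = sorted(corpus,
                    key=lambda p: -len(q & set(p["text"].lower().split())))
    return [(p["citation"], p["text"]) for p in ranked[:k]]

# Made-up mini corpus of passages, each with a citation.
corpus = [
    {"citation": "11 U.S.C. § 362",
     "text": "filing a bankruptcy petition triggers an automatic stay"},
    {"citation": "11 U.S.C. § 707",
     "text": "a court may dismiss a case filed under this chapter"},
    {"citation": "FRBP 1007",
     "text": "the debtor shall file schedules of assets and liabilities"},
]

for cite, text in retrieve("what happens when a bankruptcy petition is filed",
                           corpus):
    print(cite, "->", text)
```

A real system replaces the overlap count with learned statistical models, but the shape of the task, question in, cited passages out, is the same.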

Picture by Humanrobo (Own work) [CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons

 

Will a superintelligent machine be the last thing we invent?

From the very beginning, computer scientists have aimed at creating machines as intelligent as humans. However, there is no reason to believe that machine intelligence will stop at that level. Machines may become, very rapidly, much more intelligent than humans, once they acquire the ability to design the next versions of artificial intelligences (AIs).

This idea is not new. In a 1951 lecture entitled “Intelligent Machinery, A Heretical Theory”, Alan Turing said that, “…it seems probable that once the machine thinking method has started, it would not take long to outstrip our feeble powers… At some stage therefore we should have to expect the machines to take control…”


The worry that superintelligent machines may one day take charge has occupied an increasingly large number of researchers, and was extensively addressed in a recent book, Superintelligence, already covered in this blog.

Luke Muehlhauser, the Executive Director of the Machine Intelligence Research Institute (MIRI), in Berkeley, recently published a paper with Nick Bostrom, from the Future of Humanity Institute, at Oxford, on the need to guarantee that advanced AIs will be friendly to the human species.

Muehlhauser and Bostrom argue that “Humans will not always be the most intelligent agents on Earth, the ones steering the future” and ask: “What will happen to us when we no longer play that role, and how can we prepare for this transition?”

In an interesting interview, which appeared in io9, Muehlhauser states that he was drawn to this problem when he became familiar with the work of Irving J. Good, a British mathematician who worked with Alan Turing at Bletchley Park. The authors’ opinion is that further research on this problem, both strategic and technical, is required, to avoid the risk that a superintelligent system is created before we fully understand the consequences. Furthermore, they believe a much higher level of awareness is needed, in general, in order to align research agendas with safety requirements. Their point is that a superintelligent system would be the most dangerous weapon ever developed by humanity.

All of this creates the risk that a superintelligent machine may be the last thing invented by humanity, either because humanity becomes extinct, or because our intellects would be so vastly surpassed by AIs that they would make all the significant contributions.

 

Jill Watson, a robotic teaching assistant, passes the Turing test?

Ashok Goel, a computer science professor at the Georgia Institute of Technology, trained a system based on IBM Watson technology to act as a teaching assistant in an artificial intelligence course. The system, named Jill Watson, answered questions, reminded students of deadlines and, generally, provided feedback to the students by email. It was, so to speak, a robotic teaching assistant.


Jill was trained on nearly 40,000 postings from a discussion forum, and was configured to answer only when her level of confidence was very high, thus avoiding the weak answers that would give “her” away. In March, she went online and began posting responses live.
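The answer-only-when-confident behaviour can be sketched in a few lines; the threshold, the stand-in “model”, and the memorised question below are all invented for illustration.

```python
def answer(question, model, threshold=0.97):
    """Reply only when the model is very confident; otherwise stay silent,
    so weak answers never reach the forum."""
    reply, confidence = model(question)
    return reply if confidence >= threshold else None

# A toy "model" keyed on a memorised forum question (made up here;
# the real Jill was trained on ~40,000 forum postings).
def toy_model(question):
    memory = {"when is assignment 3 due": ("It is due on Friday.", 0.99)}
    return memory.get(question.lower(), ("I am not sure.", 0.20))

print(answer("When is assignment 3 due", toy_model))  # confident -> replies
print(answer("What is consciousness?", toy_model))    # unsure -> None
```

Silence on hard questions is what kept the illusion intact: students only ever saw Jill at her best.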

As the Wall Street Journal reports, none of the students seemed to notice, and some of them were “flabbergasted” when told about the experiment. Some, however, may have harboured doubts, since Jill replied so quickly to the questions posed by the students.

Even though this falls well short of a full-fledged Turing test, it raises significant questions about how effective AI agents can be at replacing professors and teaching assistants in the task of providing feedback to students. Next year, Ashok Goel plans to tell his students that one of the TAs is a computer, but not which one. Like with the Cylons, you know. What could possibly go wrong?

 

Google raised $84,000 auctioning computer generated art

Last February, Google auctioned a number of computer-generated paintings, raising $84,000 for the Gray Area Foundation for the Arts, a San Francisco nonprofit devoted to the convergence of art and technology.

The auction took place during a two day event, which also included a symposium about the technology used to generate the paintings.

 

These paintings were generated using a technique dubbed inceptionism, which uses the internal representations of neural networks trained with deep learning to derive abstract images, with styles reminiscent of different visual art traditions. The paintings are the result of a project dubbed DeepDream, which anyone can use to make their own artworks.
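DeepDream works by gradient ascent on the input image, nudging the pixels to amplify whatever a chosen network layer responds to. The idea can be shown with a one-“neuron” toy (everything below is invented and far simpler than the real convolutional networks):

```python
import numpy as np

# A made-up "layer": one linear feature detector followed by squaring.
rng = np.random.default_rng(1)
w = rng.normal(size=64)          # stands in for learned network weights

def activation(img):
    return (w @ img) ** 2        # how strongly the 'feature' fires

def grad(img):
    return 2 * (w @ img) * w     # d(activation)/d(img), by the chain rule

img = rng.normal(size=64) * 0.1  # start from a faint random 'image'
for _ in range(50):
    g = grad(img)
    img += 0.1 * g / (np.linalg.norm(g) + 1e-8)  # normalised ascent step

# The input has been pushed towards whatever the 'layer' responds to,
# which is where DeepDream's hallucinated shapes come from.
print(activation(img))
```

In the real system the “feature” is a deep layer of an image-recognition network, so the amplified patterns look like eyes, buildings, and animals rather than a single direction in a 64-dimensional toy space.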


This kind of artwork is probably going to become more common, as more people get interested and more computers “decide” to become artists…

Both the University of London and NYU are now offering courses on computer generated art.

How to make friends, influence people and, ultimately, conquer the world

A recent report in The Economist about Facebook makes clear that the ever-present social network is now more, much more, than simply the sixth most valuable company on Earth and the (virtual) place where humanity spends a significant fraction of its time.

What started simply as a social network, doomed (many believed) to perish like so many others, turned into “one great empire with a vast population, immense wealth, a charismatic leader, and mind-boggling reach and influence“, according to The Economist.


But more relevant to the topic of this blog is the fact that Facebook has amassed immense knowledge and created the tools necessary to exploit it, in the process making enormous sums of money from targeted advertising.

As artificial intelligence, machine learning and data analytics advance, companies like Facebook and Google can exploit ever more effectively the troves of data they hold, in a process that may end up with the engines behind these companies becoming truly intelligent and, who knows, even conscious. Maybe one day Facebook will become not just the place to meet friends, but a friend. The investments made in chatbot and virtual reality technologies certainly show that we have not yet seen all the social network can do.

Artificial intelligence and machine learning in high demand

The Economist reports, in a recent article, that artificial intelligence and machine learning experts are in extremely high demand, even by the standards of already sought-after computer scientists.

The Economist reports that faculty, graduate students and practitioners of AI and machine learning are being recruited, in large numbers, by big-name companies such as Google, Microsoft, Facebook, Uber, and Tesla.


The demand for these skills is a consequence of the rise of intelligent and adaptive systems, advanced user interfaces, and autonomous agents that will mark the next decades in computing.

Machine learning conferences, such as Neural Information Processing Systems (NIPS), once tranquil backwater meetings for domain experts, are now hunting grounds for companies looking to hire the best talent in the field.

AlphaGo beats Lee Sedol, one of the best Go players in the world

AlphaGo, the Go playing program developed by Google’s DeepMind, scored its first victory in the match against Lee Sedol.


This win comes on the heels of AlphaGo’s victory over Fan Hui, the reigning three-time European Champion, but it has a deeper meaning, since Lee Sedol is one of the two top Go players in the world, together with Lee Changho. Go is viewed as one of the most difficult games for computers to master, given its high branching factor and the inherent difficulty of position evaluation. It had been believed that computers would not master this game for many decades to come.

Ongoing coverage of the match is available on the AlphaGo website, and the matches will be livestreamed on DeepMind’s YouTube channel.

AlphaGo uses deep neural networks trained by a combination of supervised learning from professional games and reinforcement learning from games played against itself. Two different networks are used: one to evaluate board positions and another to select moves. These networks are then combined with a special-purpose search algorithm.
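A crude sketch of how the two networks’ outputs might be combined when choosing a move (the moves, the numbers, and the blending constant below are all invented; the real system embeds both networks inside Monte Carlo tree search rather than scoring moves directly):

```python
# Toy position: each candidate move gets a prior from a 'policy network'
# (how plausible the move looks) and a value from a 'value network'
# (how good the resulting position is, both made up here).
moves = {
    "D4":  {"prior": 0.50, "value": 0.55},
    "Q16": {"prior": 0.30, "value": 0.62},
    "K10": {"prior": 0.20, "value": 0.40},
}

def score(m, c=0.5):
    # Blend expected outcome with the policy prior; the prior keeps the
    # search focused on plausible moves, taming Go's huge branching factor.
    return moves[m]["value"] + c * moves[m]["prior"]

best = max(moves, key=score)
print(best)  # D4
```

The key design point survives the simplification: neither exhaustive search nor raw evaluation alone is feasible in Go, but a learned prior over moves plus a learned position value makes a guided search tractable.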

The image shows the final position in the game, courtesy of Google’s DeepMind.