Moore’s law is dead, long live Moore’s law

Google recently announced the Tensor Processing Unit (TPU), an application-specific integrated circuit (ASIC) tailored for machine learning applications that, according to the company, delivers an order-of-magnitude improvement in performance per watt over existing general purpose processors.

The chip, developed specifically to speed up increasingly common machine learning workloads, has already powered a number of state-of-the-art applications, including AlphaGo and Street View. According to Google, these applications are more tolerant of reduced numerical precision and can therefore be implemented using fewer transistors per operation. Because of this, Google engineers were able to squeeze more operations per second out of each transistor.
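
As a toy illustration of that precision trade-off (not the TPU’s actual quantization scheme, which Google has not detailed), the sketch below squeezes 32-bit weights into 8-bit integers at a small cost in accuracy:

    # Toy illustration of reduced-precision arithmetic: quantizing 32-bit
    # weights to 8-bit integers, the kind of trade-off the TPU exploits.
    import numpy as np

    weights = np.random.randn(4).astype(np.float32)

    scale = np.abs(weights).max() / 127.0          # map the float range onto int8
    q = np.round(weights / scale).astype(np.int8)  # 8-bit representation
    dequantized = q.astype(np.float32) * scale     # approximate reconstruction

    print("float32:    ", weights)
    print("int8->float:", dequantized)             # close, but far cheaper per operation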


The new chip is tailored for TensorFlow, an open source library that performs numerical computation using data flow graphs. Each node in the graph represents one mathematical operation, acting on the tensors that arrive through the graph’s edges.
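
A minimal sketch of such a graph, using the TensorFlow 1.x API current at the time of writing: two constant nodes feed a matrix multiplication node, and the result only materializes when the graph is executed.

    import tensorflow as tf  # TensorFlow 1.x style API

    # Build the data flow graph: nodes are operations, edges carry tensors.
    a = tf.constant([[1.0, 2.0]])    # 1x2 tensor
    b = tf.constant([[3.0], [4.0]])  # 2x1 tensor
    c = tf.matmul(a, b)              # a matmul node fed by edges from a and b

    # Execute the graph: only now do the operations actually run.
    with tf.Session() as sess:
        print(sess.run(c))           # [[11.]]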

Google stated that the TPU represents a jump of ten years into the future in terms of Moore’s Law, which many have recently viewed as finally coming to a halt. Developments like this one, based on alternative architectures or alternative ways to perform computations, are likely to keep delivering exponential improvements in computing power for years to come, in line with Moore’s Law.


The brain is not a computer! Or is it?

In a recent article, the noted psychologist Robert Epstein, a former editor-in-chief of Psychology Today, argues that the brain is not a computer and not an information processing device. His main point is that there is no place in the brain where “copies of words, pictures, grammatical rules or any other kinds of environmental stimuli” are stored. He argues that we are not born with “information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers” in our brains.


His point is well taken. We now know that the brain does not store its memories in any form comparable to that of a digital computer. Early symbolic approaches to Artificial Intelligence (GOFAI: good old-fashioned artificial intelligence) failed soundly to produce anything resembling intelligent behavior.

In a digital computer, memories are stored at sequential locations in the machine’s memory. In brains, memories are stored in ways that are still mostly unknown, encoded largely in the vast network of interconnections between the billions of neurons that constitute a human brain. Memories are not stored in individual neurons, nor in individual synapses. That, he says, and I agree, is a preposterous idea.

Robert Epstein, however, goes further, arguing that the human brain is not an information processing device. Here, I must disagree. Although they do it in very different ways from computers, brains are nothing more than information processing devices. He argues against the conclusion that “all entities that are capable of behaving intelligently are information processors”, which he says permeates all current research on the brain and behavior. Needless to say, I disagree. Any entity capable of behaving intelligently needs to be able to process information.

Epstein concludes by arguing that we will never, ever, be able to reproduce the behavior of a human mind in a computer. Not only is the challenge of reverse engineering too big, he argues, but the behavior of a brain, even if simulated in a computer, would not create a mind.

The jury is still out on the first argument. I agree that reverse engineering a brain may remain forever impossible, due to physical and technological limitations. However, if it were to become possible one day, I do not see any reason why the behavior of a human mind could not emanate from an emulation running in a computer.

 

Image from the cover of the book “Eye, Brain, and Vision”, by David Hubel, available online at http://hubel.med.harvard.edu/.

 

Whole brain emulation in a supercomputer?

The largest spiking neural network simulation performed to date modeled the behavior of a network of 1.8 billion neurons, for one second of real time, using the 83,000 processing nodes of the K computer. The simulation took 40 minutes of wall-clock time, with an average of 6,000 synapses per neuron.

This result, obtained by a team of researchers from the Jülich Research Centre and the Riken Advanced Institute for Computational Science, among other institutions, shows that it is possible to simulate networks with more than one billion neurons in fast supercomputers. Furthermore, the authors have shown that the technology scales up and can be used to simulate even larger networks of neurons, perhaps as large as a whole brain.
The simulations were performed using the NEST software package, designed to efficiently model and simulate networks of spiking neurons. Extrapolating this technology to the emulation of a whole brain (roughly 86 billion neurons), a simulation on the K computer would run about 100,000 times slower than real time.
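
A back-of-the-envelope check of that extrapolation, assuming, as the text does, that simulation cost scales linearly with the number of neurons:

    # Back-of-the-envelope check of the whole-brain extrapolation
    # (assumes simulation cost grows linearly with neuron count).
    simulated_neurons = 1.8e9        # neurons in the K computer simulation
    wall_clock_s = 40 * 60           # 40 minutes of wall-clock time
    simulated_s = 1.0                # one second of biological time

    slowdown = wall_clock_s / simulated_s            # 2400x slower than real time
    whole_brain_neurons = 86e9                       # approximate human brain
    scale = whole_brain_neurons / simulated_neurons  # ~48x more neurons

    print(f"Current slowdown: {slowdown:.0f}x")
    print(f"Extrapolated whole-brain slowdown: {slowdown * scale:,.0f}x")
    # ~115,000x, i.e. on the order of 100,000x slower than real time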

The K computer has an estimated performance of 8 petaflops, or 8 quadrillion (8 × 10^15) floating point operations per second, and is currently the world’s fourth fastest computer.

Meet Ross, our new lawyer

Fortune reports that the law firm Baker & Hostetler has hired an artificially intelligent lawyer, Ross. According to the company that created it, Ross Intelligence, the IBM Watson-powered digital attorney interacts with other workers much as a normal lawyer would.

“You ask your questions in plain English, as you would a colleague, and ROSS then reads through the entire body of law and returns a cited answer and topical readings from legislation, case law and secondary sources to get you up-to-speed quickly.”


Ross will work in the law firm’s bankruptcy practice, which currently employs roughly 50 lawyers. Baker & Hostetler’s chief information officer explained that the firm believes emerging technologies, like cognitive computing and other forms of machine learning, can help enhance the services delivered to its clients. There is no information on the number of lawyers to be replaced by Ross.

Going through large amounts of information stored in plain text and compiling it into usable form is one of the most interesting applications of natural language processing systems like IBM Watson. If successful, a single system may do the work of hundreds or thousands of specialists, at least in the large fraction of cases that do not require extensive or involved reasoning. And as the technology evolves, even those cases may ultimately become amenable to treatment by AI agents.

Picture by Humanrobo (Own work) [CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons

 

Will a superintelligent machine be the last thing we invent?

From the very beginning, computer scientists have aimed at creating machines as intelligent as humans. However, there is no reason to believe that the intelligence of machines will stop at that level. Once machines gain the ability to design the next versions of artificial intelligences (AIs), they may very rapidly become much more intelligent than humans.

This idea is not new. In a 1951 lecture entitled “Intelligent Machinery, A Heretical Theory”, Alan Turing said that “...it seems probable that once the machine thinking method has started, it would not take long to outstrip our feeble powers… At some stage therefore we should have to expect the machines to take control…”


The worry that superintelligent machines may one day take charge has been troubling an increasingly large number of researchers, and was extensively addressed in a recent book, Superintelligence, already covered in this blog.

Luke Muehlhauser, the Executive Director of the Machine Intelligence Research Institute (MIRI), in Berkeley, recently published a paper with Nick Bostrom, from the Future of Humanity Institute at Oxford, on the need to guarantee that advanced AIs will be friendly to the human species.

Muehlhauser and Bostrom argue that “Humans will not always be the most intelligent agents on Earth, the ones steering the future,” and ask: “What will happen to us when we no longer play that role, and how can we prepare for this transition?”

In an interesting interview, which appeared in io9, Muehlhauser states that he was drawn into this problem when he became familiar with the work of Irving J. Good, a British mathematician who worked with Alan Turing at Bletchley Park. The authors’ opinion is that further research on this problem, both strategic and technical, is required to avoid the risk that a superintelligent system is created before we fully understand the consequences. Furthermore, they believe a much higher level of awareness is needed, in general, in order to align research agendas with safety requirements. Their point is that a superintelligent system would be the most dangerous weapon ever developed by humanity.

All of this creates the risk that a superintelligent machine may be the last thing invented by humanity, either because humanity becomes extinct, or because our intellects would be so vastly surpassed by AIs that they would make all the significant contributions themselves.

 

Jill Watson, a robotic teaching assistant, passes the Turing test?

Ashok Goel, a computer science professor at the Georgia Institute of Technology, trained a system using IBM Watson technology to serve as a teaching assistant in an artificial intelligence course. The system, named Jill Watson, answered questions, reminded students of deadlines and, in general, provided feedback to the students by email. It was, so to speak, a robotic teaching assistant.


Jill was trained on nearly 40,000 postings from a discussion forum, and was configured to answer only when her level of confidence was very high, thus avoiding the weak answers that would give “her” away. In March, she went online and began posting responses live.
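
The confidence-gating idea is simple to sketch. The names and the threshold below are hypothetical; the actual Jill Watson pipeline, built on IBM Watson, has not been published:

    # Hypothetical sketch of confidence-gated answering (illustrative names
    # and threshold; not the actual Jill Watson implementation).
    CONFIDENCE_THRESHOLD = 0.97  # assumed value; only near-certain answers go out

    def maybe_answer(question, qa_model):
        """Answer only when the model is very confident; otherwise stay silent."""
        answer, confidence = qa_model.answer(question)  # hypothetical model interface
        if confidence >= CONFIDENCE_THRESHOLD:
            return answer
        return None  # defer to a human TA rather than risk a weak, revealing reply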

As the Wall Street Journal reports, none of the students seemed to notice, and some of them were “flabbergasted” when they were told about the experiment. Some, however, may have harboured doubts, since Jill replied so quickly to the questions the students posed.

Even though this falls well short of a full-fledged Turing test, it raises significant questions about how effective AI agents can be in replacing professors and teaching assistants in the task of providing feedback to students. Next year, Ashok Goel plans to tell his students that one of the TAs is a computer, but not which one. Like with the Cylons, you know. What could possibly go wrong?

 

Is consciousness simply the consequence of complex system organization?

The theory that consciousness is simply an emergent property of complex systems has been gaining adherents lately.

The idea may be originally due to Giulio Tononi, from the University of Wisconsin-Madison. Tononi argued that a system that exhibits consciousness must be able to store and process large amounts of information and must have some internal structure that cannot be divided into independent parts. In other words, consciousness is a result of the intrinsic complexity of the internal organization of an information processing system, a complexity that cannot be broken into parts. A good overview of the theory was recently published in the Philosophical Transactions of the Royal Society.
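
As a very loose illustration of the “cannot be divided into independent parts” intuition, the toy below uses plain mutual information between two halves of a system as a stand-in measure of integration. This is emphatically not Tononi’s actual Φ, which is far more involved:

    # Crude toy inspired by integrated information theory: measure how far a
    # two-part system's joint behavior is from the product of its parts.
    # This is ordinary mutual information, a stand-in, NOT Tononi's phi.
    import numpy as np

    def mutual_information(joint):
        """Mutual information (bits) between the row and column variables."""
        px = joint.sum(axis=1, keepdims=True)
        py = joint.sum(axis=0, keepdims=True)
        nz = joint > 0
        return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

    # Two parts that always agree: highly "integrated" by this crude measure.
    coupled = np.array([[0.5, 0.0], [0.0, 0.5]])
    # Two independent fair coins: zero integration.
    independent = np.full((2, 2), 0.25)

    print(mutual_information(coupled))      # 1.0 bit
    print(mutual_information(independent))  # 0.0 bits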

The theory has been gaining adherents, such as Max Tegmark, from MIT, who argues that consciousness is simply a state of matter. Tegmark suggests that consciousness arises out of particular arrangements of matter, and that there may exist varying degrees of consciousness. Tegmark believes present-day computers may be approaching the threshold of higher consciousness.


Historically, consciousness has been extremely difficult to explain because it is, in essence, a totally subjective phenomenon. It is impossible to assess objectively whether an animal or artificial agent (or even a human, for that matter) is conscious, since, ultimately, one has to rely on the word of the agent whose consciousness we are trying to assess. Tononi’s and Tegmark’s theories may, eventually, shed some light on this obscure phenomenon.