Whole brain emulation in a super-computer?

The largest spiking neural network simulation performed to date modeled the behavior of a network of 1.8 billion neurons, for one second of real time, using the 83,000 processing nodes of the K computer. The simulation took 40 minutes of wall-clock time, with an average of 6,000 synapses per neuron.

This result, obtained by a team of researchers from the Jülich Research Centre and the RIKEN Advanced Institute for Computational Science, among other institutions, shows that it is possible to simulate networks with more than one billion neurons on fast supercomputers. Furthermore, the authors have shown that the technology scales up and can be used to simulate even larger networks of neurons, perhaps as large as a whole brain.
The simulations were performed using the NEST software package, designed to efficiently model and simulate networks of spiking neurons. If one extrapolates the use of this technology to whole brain emulation (with its 88 billion neurons), the simulation performed using the K super-computer would run about 100,000 times slower than real time.
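The 100,000-fold figure can be checked with back-of-the-envelope arithmetic; a minimal sketch, assuming the slowdown scales linearly with the number of neurons (which ignores communication overhead and connectivity effects):

```python
# Back-of-the-envelope extrapolation from the figures quoted in the article.
# Assumption: wall-clock cost grows roughly linearly with network size.

sim_neurons = 1.8e9          # neurons in the K computer simulation
wall_clock_s = 40 * 60       # 40 minutes of wall-clock time
simulated_s = 1.0            # one second of biological time simulated
brain_neurons = 88e9         # whole-brain figure used in the article

slowdown = wall_clock_s / simulated_s          # 2,400x slower than real time
scale_up = brain_neurons / sim_neurons         # ~49x more neurons
whole_brain_slowdown = slowdown * scale_up     # on the order of 100,000x

print(f"simulation slowdown: {slowdown:.0f}x")
print(f"whole-brain estimate: {whole_brain_slowdown:,.0f}x slower than real time")
```

The product comes out near 117,000, consistent with the "about 100,000 times slower" figure.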

The K-computer has an estimated performance of 8 petaflops, or 8 quadrillion (8×10¹⁵) floating point operations per second, and is currently the world’s fourth fastest computer.


Meet Ross, our new lawyer

Fortune reports that law firm Baker & Hostetler has hired an artificially intelligent lawyer, Ross. According to the company that created it, Ross Intelligence, the IBM Watson-powered digital attorney interacts with other workers as a normal lawyer would.

“You ask your questions in plain English, as you would a colleague, and ROSS then reads through the entire body of law and returns a cited answer and topical readings from legislation, case law and secondary sources to get you up-to-speed quickly.”


Ross will work for the law firm’s bankruptcy practice, which currently employs roughly 50 lawyers. Baker & Hostetler’s chief information officer explained that the firm believes emerging technologies, such as cognitive computing and other forms of machine learning, can help enhance the services delivered to its clients. There is no information on the number of lawyers to be replaced by Ross.

Going through large amounts of information stored in plain text and compiling it in usable form is one of the most interesting applications of natural language processing systems like IBM Watson. If successful, a single system may do the work of hundreds or thousands of specialists, at least in the large fraction of cases that do not require extensive or involved reasoning. However, as the technology evolves, even those cases may ultimately become amenable to treatment by AI agents.

Picture by Humanrobo (Own work) [CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons


Will a superintelligent machine be the last thing we invent?

From the very beginning, computer scientists have aimed at creating machines that are as intelligent as humans. However, there is no reason to believe that the level of intelligence of machines will stop there. They may become, very rapidly, much more intelligent than humans, once they have the ability to design the next versions of artificial intelligence (AI) systems.

This idea is not new. In a 1951 lecture entitled “Intelligent Machinery, A Heretical Theory”, Alan Turing said that “...it seems probable that once the machine thinking method has started, it would not take long to outstrip our feeble powers… At some stage therefore we should have to expect the machines to take control…”


The worry that superintelligent machines may one day take charge has occupied an increasingly large number of researchers, and has been extensively addressed in a recent book, Superintelligence, already covered in this blog.

Luke Muehlhauser, the Executive Director of the Machine Intelligence Research Institute (MIRI), in Berkeley, recently published a paper with Nick Bostrom, from the Future of Humanity Institute at Oxford, on the need to guarantee that advanced AIs will be friendly to the human species.

Muehlhauser and Bostrom argue that “Humans will not always be the most intelligent agents on Earth, the ones steering the future”, and ask “What will happen to us when we no longer play that role, and how can we prepare for this transition?”

In an interesting interview, which appeared in io9, Muehlhauser states that he was drawn to this problem when he became familiar with the work of Irving J. Good, a British mathematician who worked with Alan Turing at Bletchley Park. The authors’ opinion is that further strategic and technical research on this problem is required, to avoid the risk that a superintelligent system is created before we fully understand the consequences. Furthermore, they believe a much higher level of awareness is necessary, in general, in order to align research agendas with the safety requirements. Their point is that a superintelligent system would be the most dangerous weapon ever developed by humanity.

All of this creates the risk that a superintelligent machine may be the last thing invented by humanity, either because humanity becomes extinct, or because our intellects would be so vastly surpassed by AIs that they would make all the significant contributions.


Jill Watson, a robotic teaching assistant, passes the Turing test?

Ashok Goel, a computer science professor at the Georgia Institute of Technology, trained a system using IBM Watson technology to behave as a teaching assistant in an artificial intelligence course. The system, named Jill Watson, answered questions, reminded students of deadlines and, generally, provided feedback to the students by email. It was, so to speak, a robotic teaching assistant.


Jill was trained using nearly 40,000 postings available on a discussion forum, and was configured to answer only when her level of confidence was very high, thus avoiding weak answers that would give “her” away. In March, she went online and began posting responses live.
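The confidence-gating behaviour described above can be sketched in a few lines; the threshold value and the scoring model are hypothetical, since the actual configuration has not been published:

```python
# Toy sketch of answer-only-when-confident gating, as described for Jill
# Watson. The classifier scores and the threshold are hypothetical; the
# point is the gating logic, not the underlying model.

CONFIDENCE_THRESHOLD = 0.97  # hypothetical cutoff; the real value was not published

def maybe_answer(question, candidates):
    """candidates: list of (answer_text, confidence) pairs from some model."""
    best_answer, confidence = max(candidates, key=lambda c: c[1])
    if confidence >= CONFIDENCE_THRESHOLD:
        return best_answer      # post the reply on the forum
    return None                 # stay silent and let a human TA respond

# A strong match gets posted; a weak one is deferred to the human TAs.
print(maybe_answer("When is homework 3 due?",
                   [("Homework 3 is due Friday at noon.", 0.99),
                    ("See the syllabus.", 0.40)]))
print(maybe_answer("Why does my agent oscillate?",
                   [("Try lowering the learning rate.", 0.55)]))
```

Deferring low-confidence questions to humans is what kept the weak, giveaway answers off the forum.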

As the Wall Street Journal reports, none of the students seemed to notice, and some of them were “flabbergasted” when they were told about the experiment. Some, however, may have harbored doubts, since Jill replied so quickly to the questions posed by the students.

Even though this falls way short of a full-fledged Turing test, it raises significant questions about how effective AI agents can be in replacing professors and teaching assistants in the task of providing feedback to students. Next year, Ashok Goel plans to tell his students that one of the TAs is a computer, but not which one. Like with the Cylons, you know. What could possibly go wrong?


Is consciousness simply the consequence of complex system organization?

The theory that consciousness is simply an emergent property of complex systems has been gaining adherents lately.

The idea may be originally due to Giulio Tononi, from the University of Wisconsin in Madison. Tononi argued that a system that exhibits consciousness must be able to store and process large amounts of information and must have some internal structure that cannot be divided into independent parts. In other words, consciousness is a result of the intrinsic complexity of the internal organization of an information processing system, a complexity that cannot be broken into parts. A good overview of the theory has recently been published in the Philosophical Transactions of the Royal Society.
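One way to build intuition for “cannot be divided into independent parts” is total correlation (multi-information), a far simpler quantity than Tononi’s actual measure, which is zero exactly when a system’s parts are statistically independent; a toy sketch:

```python
import itertools, math

# Toy illustration of "integration": total correlation (multi-information),
#   I = sum_i H(X_i) - H(X),
# which is zero exactly when the parts are independent. This is much
# simpler than Tononi's phi, but it captures the idea that an integrated
# system cannot be split into independent pieces without losing something.

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def marginal(dist, i):
    out = {}
    for state, p in dist.items():
        out[state[i]] = out.get(state[i], 0.0) + p
    return out

def total_correlation(dist, n):
    return sum(entropy(marginal(dist, i)) for i in range(n)) - entropy(dist)

# Two independent fair bits: zero integration.
independent = {s: 0.25 for s in itertools.product([0, 1], repeat=2)}
# Two perfectly correlated bits: one bit of integration.
coupled = {(0, 0): 0.5, (1, 1): 0.5}

print(total_correlation(independent, 2))  # 0.0
print(total_correlation(coupled, 2))      # 1.0
```

The independent pair scores zero because each part already carries all of its own information; the coupled pair scores one bit because cutting it apart destroys the correlation.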

The theory has attracted supporters such as Max Tegmark, from MIT, who argues that consciousness is simply a state of matter. Tegmark suggests that consciousness arises out of particular arrangements of matter, and that there may exist varying degrees of consciousness. Tegmark believes present-day computers may be approaching the threshold of higher consciousness.


Historically, consciousness has been extremely difficult to explain because it is essentially a subjective phenomenon. It is impossible to assess objectively whether an animal or artificial agent (or even a human, for that matter) is conscious or not, since, ultimately, one has to rely on the word of the agent whose consciousness we are trying to assess. Tononi’s and Tegmark’s theories may, eventually, shed some light on this obscure phenomenon.

Google raised $84,000 auctioning computer generated art

Last February, Google auctioned a number of computer generated paintings, raising $84,000 for the Gray Area Foundation for the Arts, a San Francisco nonprofit institution devoted to the convergence of art and technology.

The auction took place during a two day event, which also included a symposium about the technology used to generate the paintings.


These paintings were generated using a technique dubbed inceptionism, which uses the internal representations of neural networks trained with deep learning to derive abstract images, in styles reminiscent of different visual art traditions. The paintings are the result of a project dubbed DeepDream, which anyone can use to make their own artworks.
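The core of the technique can be illustrated with a toy sketch: instead of adjusting a network’s weights to fit an image, gradient ascent adjusts the image itself to excite a chosen layer. Here a single random linear layer stands in for a deep convolutional network, and all names and sizes are illustrative:

```python
import random

# Minimal sketch of the gradient-ascent idea behind DeepDream: modify the
# *image* so that it maximally excites a chosen layer. The "network" is a
# single random linear layer and the "image" is a short vector; real
# DeepDream does the same thing on the layers of a deep convnet.

random.seed(0)
N_PIXELS, N_UNITS, STEP, STEPS = 8, 4, 0.01, 50

W = [[random.gauss(0, 1) for _ in range(N_PIXELS)] for _ in range(N_UNITS)]
image = [random.gauss(0, 0.1) for _ in range(N_PIXELS)]

def activations(img):
    return [sum(w * x for w, x in zip(row, img)) for row in W]

def objective(img):                       # what the ascent maximises
    return 0.5 * sum(a * a for a in activations(img))

start = objective(image)
for _ in range(STEPS):
    acts = activations(image)
    # d(objective)/d(pixel_i) = sum_j W[j][i] * a_j   (i.e. W^T a)
    grad = [sum(W[j][i] * acts[j] for j in range(N_UNITS))
            for i in range(N_PIXELS)]
    image = [x + STEP * g for x, g in zip(image, grad)]  # ascend

print(f"layer excitation: {start:.4f} -> {objective(image):.4f}")
```

In a real convnet the same loop, applied to early layers, amplifies textures, while later layers hallucinate whole objects, which is what gives DeepDream images their characteristic look.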


This kind of artwork is probably going to become more common, as more people get interested and more computers “decide” to become artists….

Both the University of London and NYU are now offering courses on computer generated art.

How to make friends, influence people and, ultimately, conquer the world

A recent report in The Economist about Facebook makes clear that the ever-present social network is now more, much more, than simply the sixth most valuable company on Earth and the (virtual) place where humanity spends a significant fraction of its time.

What started simply as a social network, doomed to perish (many believed) as so many other social networks did, turned into “one great empire with a vast population, immense wealth, a charismatic leader, and mind-boggling reach and influence”, according to The Economist.


More relevant to the topic of this blog, however, is the fact that Facebook has amassed immense knowledge and created the tools needed to exploit it, in the process making enormous sums of money from targeted advertising.

As artificial intelligence, machine learning and data analytics advance, companies like Facebook and Google can exploit ever more effectively the troves of data they hold, in a process that may end with the engines behind these companies becoming truly intelligent and, who knows, even conscious. Maybe one day Facebook will become not just the place to meet friends, but a friend. The investments made in chatbot and virtual reality technologies certainly show that we have not yet seen all that the social network can do.