The mind of a fly

Researchers from the Howard Hughes Medical Institute, Google and other institutions have published the neuron-level connectome of a significant part of the brain of the fruit fly, which they call the hemibrain. This may become one of the most significant advances in our understanding of the detailed structure of complex brains since the 302-neuron connectome of C. elegans was published in 1986 by a team headed by Sydney Brenner, in a famous article with the somewhat whimsical subtitle The mind of a worm. Both efforts used an approach based on cutting the brains into very thin slices, imaging the slices with scanning electron microscopy, and processing the resulting images to obtain the 3D structure of the brain.

The neuron-level connectome of C. elegans was obtained after a painstaking effort, lasting decades, of manual annotation of the images obtained from the thousands of slices imaged using electron microscopy. As the brain of Drosophila melanogaster, the fruit fly, is thousands of times more complex, such an effort would have required several centuries if done by hand. Therefore, Google’s machine learning algorithms were trained to identify sections of neurons, including axons, cell bodies and dendritic trees, as well as synapses and other components. After extensive training, these algorithms automatically annotated the millions of images that resulted from the serial electron microscopy procedure, enabling the team to complete, in just a few years, the detailed neuron-level connectome of a significant section of the fly brain, which includes roughly 25,000 neurons and 20 million synapses.

The results, published in the first of a number of articles, can be freely analyzed by anyone interested in the way a fly thinks. A Google account can be used to log in to the neuPrint explorer, and an interactive exploration of the 3D electron microscopy images is also available through neuroglancer. Extensive non-technical coverage in the media is also widely available; see, for instance, the article in The Economist or the piece in The Verge.
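For those who prefer code to the point-and-click explorer, the same data can also be queried programmatically with the neuprint-python client. The sketch below is only a minimal, illustrative example; the dataset version, neuron type and token are placeholders to be checked against the neuPrint server.

```python
# Minimal sketch of querying the hemibrain dataset with neuprint-python
# (pip install neuprint-python). The token, dataset version and neuron
# type below are assumptions; check neuprint.janelia.org for current values.
from neuprint import Client, fetch_neurons, NeuronCriteria as NC

client = Client('neuprint.janelia.org',
                dataset='hemibrain:v1.0',   # assumed dataset name
                token='YOUR_AUTH_TOKEN')    # issued after logging in with Google

# Fetch all neurons of one mushroom-body output neuron type, with
# their presynaptic ('pre') and postsynaptic ('post') site counts.
neuron_df, roi_counts_df = fetch_neurons(NC(type='MBON01'), client=client)
print(neuron_df[['bodyId', 'type', 'pre', 'post']])
```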

Image from the HHMI Janelia Research Campus site.

Meet Duplex, your new assistant, courtesy of Google

Advances in natural language processing have enabled systems such as Siri, Alexa, Google Assistant or Cortana to be at the service of anyone owning a smartphone or a computer. Still, so far, none of these systems has managed to cross the thin dividing line that would make us take them for humans. When we ask Alexa to play music or Siri to dial a telephone number, we know very well that we are talking with a computer, and the replies of these systems would remind us of that fact, were we to forget it.

It was to be expected that, with the evolution of the technology, this type of interaction would become more and more natural, possibly reaching a point where a computer could impersonate a real human, taking us closer to the vision of Alan Turing: a situation where you cannot tell a human apart from a computer simply by talking to both.

In an event widely reported in the media, at the I/O 2018 conference, Google demonstrated Duplex, a system that is able to process and execute requests in specific areas, interacting in a very human way with human operators. While Google states that the system is still under development, and only able to handle very specific situations, one gets the feeling that, soon enough, digital assistants will be able to interact with humans without disclosing their artificial nature. You can read the Google AI blog post here, or just listen to a couple of examples, where Duplex is scheduling a haircut or making a restaurant reservation. Both the speech recognition and speech synthesis systems, as well as the underlying knowledge base and natural language processing engines, operate flawlessly in these cases, reinforcing the widely held premonition that AI systems will soon be replacing humans in many specific tasks.

Photo by Kevin Bhagat on Unsplash

The last invention of humanity

Irving John Good was a British mathematician who worked with Alan Turing in the famous Hut 8 of Bletchley Park, contributing to the war effort by decrypting the messages encoded by the German Enigma machines. After the war, he became a professor at Virginia Tech and, later in life, served as a consultant on Stanley Kubrick’s cult movie 2001: A Space Odyssey.

Irving John Good (born Isadore Jacob Gudak to a Polish Jewish family) is credited with coining the term intelligence explosion, referring to the possibility that a super-intelligent system may, one day, be able to design an even more intelligent successor. In his own words:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

We are still very far from being able to design an artificially intelligent (AI) system that is smart enough to design and code even better AI systems. Our current efforts address very narrow fields, and produce systems that do not have the general intelligence required to create the phenomenon I. J. Good was referring to. However, in some very restricted domains, we can see at work mechanisms that resemble that very same phenomenon.

Go is a board game that is very difficult to master because of the huge number of possible games and the high number of possible moves at each position. Given this complexity, branch-and-bound approaches could not, until recently, be used to derive good playing strategies. Until only a few years ago, it was believed that it would take decades to create a program that could master the game of Go at a level comparable with the best human players.
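A back-of-the-envelope calculation makes the point, using the commonly cited ballpark figures of roughly 250 legal moves per position and games lasting roughly 150 moves:

```python
import math

# Rough size of the Go game tree: ~250 legal moves per position over
# ~150 moves per game gives on the order of 250**150 possible games,
# i.e. ~10^360, dwarfing chess (~10^120, the Shannon number).
print(f"~10^{150 * math.log10(250):.0f} possible games")
```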

In January 2016, DeepMind, an AI startup (which had by then been acquired by Google for a sum reported to exceed 500 million dollars), reported in an article in Nature that it had managed to master the complex game of Go using deep neural networks combined with tree search. The system, called AlphaGo, was trained on databases of human games and eventually managed to soundly beat the best human players, becoming the best player in the world, as reported in this blog.

A couple of weeks ago, in October of 2017, DeepMind reported, in a second article in Nature, that it had programmed a system that mastered the game without using any human knowledge, becoming even more proficient in the process. AlphaGo Zero did not use any human games to acquire knowledge about the game. Instead, it played millions of games (close to 30 million, in fact, over a period of 40 days) against versions of itself, eventually acquiring knowledge about tactics and strategies that the human race had slowly built up over more than two millennia. By simply playing against itself, the system went from a child level (random moves) to a novice level to a world-champion level. AlphaGo Zero steamrolled the original AlphaGo by 100 games to 0, showing that it is possible to obtain superhuman strength without using any human-generated knowledge.
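AlphaGo Zero combines a deep neural network with Monte Carlo tree search, and retrains the network on the games the search itself produces. As a loose, minimal illustration of the search half alone, here is a plain UCT (upper confidence bounds applied to trees) player for tic-tac-toe; it is only a sketch, with random rollouts standing in for the value and policy estimates that AlphaGo Zero gets from its network.

```python
import math
import random

# Minimal UCT Monte Carlo tree search for tic-tac-toe. Purely illustrative:
# AlphaGo Zero replaces the random rollout below with value and policy
# estimates from a deep network trained on the search's own games.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    return [i for i, v in enumerate(board) if v is None]

class Node:
    def __init__(self, board, player, parent=None):
        self.board, self.player, self.parent = board, player, parent
        self.children = {}                       # move index -> child Node
        self.untried = [] if winner(board) else moves(board)
        self.visits, self.wins = 0, 0.0          # wins for the player who just moved

def rollout(board, player):
    # Play uniformly random moves to the end; return 'X', 'O' or None (draw).
    while winner(board) is None and moves(board):
        m = random.choice(moves(board))
        board = board[:m] + [player] + board[m + 1:]
        player = 'O' if player == 'X' else 'X'
    return winner(board)

def uct_search(root_board, root_player, iterations=2000):
    root = Node(root_board, root_player)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend through fully expanded nodes via UCB1.
        while not node.untried and node.children:
            node = max(node.children.values(),
                       key=lambda c: c.wins / c.visits +
                       math.sqrt(2 * math.log(node.visits) / c.visits))
        # 2. Expansion: add one child for an untried move.
        if node.untried:
            m = node.untried.pop()
            board = node.board[:m] + [node.player] + node.board[m + 1:]
            child = Node(board, 'O' if node.player == 'X' else 'X', parent=node)
            node.children[m] = child
            node = child
        # 3. Simulation: random playout from the new position.
        result = rollout(node.board, node.player)
        # 4. Backpropagation: credit wins to the player who moved into each node.
        while node is not None:
            node.visits += 1
            if result is not None and result != node.player:
                node.wins += 1
            node = node.parent
    # Return the most-visited move at the root.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

print("UCT opens with square", uct_search([None] * 9, 'X'))
```

Replace the random rollout with a learned evaluation, and retrain that evaluation on the games the search generates, and you have the essence of the self-improvement loop.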

In a way, the computer improved itself, simply by playing against itself until it reached perfection. Irving John Good, who died in 2009, would have liked to see this invention of mankind. It will not be the last one, yet…

Picture credits: Go board, picture taken by Hoge Rielen, available at Wikimedia Commons.


Artificial Intelligence developments: the year in review

TechCrunch, a popular site dedicated to technology news, has published a list of the top Artificial Intelligence news of 2016.

2016 seems indeed to have been the year Artificial Intelligence (AI) left the confinement of university labs to come into public view.


Several of the news items selected by TechCrunch were also covered in this blog.

In March, AlphaGo, a Go-playing program developed by Google’s DeepMind, defeated 18-time world champion Lee Sedol (reference in the TechCrunch review).

Digital Art, where deep learning algorithms learn to paint in the style of a particular artist, was also the topic of one post (reference in the TechCrunch review).

In May, Digital Minds posted Moore’s law is dead, long live Moore’s law, describing how Google’s new chip can be used to run deep learning algorithms with Google’s TensorFlow (related article in the TechCrunch review).

TechCrunch has identified a number of other relevant developments that make for interesting reading, including the Facebook-Amazon-Google-IBM-Microsoft mega-partnership on AI, Facebook’s strategy on AI and the news about the language invented by Google’s translation tool.

Will the AI wave gain momentum in 2017, as predicted by this article? I think the chances are good, but only the future will tell.

Are self-driving cars like elevators or like planes?

As reported in an article in the New York Times, Google and Tesla are working on self-driving cars using radically different approaches. Google is using the “elevator” metaphor, while Tesla is using the “plane autopilot” metaphor. IEEE Spectrum, the journal of the Institute of Electrical and Electronics Engineers, published an interesting analysis of the approaches taken by different companies.


As you can gather from this interesting Planet Money podcast, Google has decided that its autonomous vehicles will be much like elevators: you push a button, and the car (like an elevator) drives to the intended destination, with no possible intervention from the driver.

The alternative approach, followed by Tesla and other car manufacturers, is the autopilot metaphor. The autopilot in a plane can be programmed to take the plane to a specific location, but the pilot can take back control of the plane at any moment. The autopilot assists, but does not replace, the pilot.

A number of experiments conducted by Google led the company to believe that it would be very risky to bet on drivers being able to take back control of the vehicle in an emergency. Google found that many drivers did not pay attention to the road while the autopilot was in charge; instead, they would work on their computers, talk on the phone or even take a nap. Based on this data, Google designed cars without brake pedals, steering wheels or accelerators. These cars may seem strange to us today, just as elevators seemed strange at first, when elevator operators were phased out and users started operating the elevators themselves.

The recent accident with a Tesla provides some additional evidence that the “plane autopilot” model may create additional risks, since drivers will not, in general, be alert enough to avoid accidents when the autopilot fails. Additionally, human drivers may become the greatest risk in a world where most cars are driven by computers, given the inherent unpredictability of human drivers.

Only the future will tell whether cars will end up more like elevators or more like planes, as far as their self-driving abilities are concerned.


Moore’s law is dead, long live Moore’s law

Google recently announced the Tensor Processing Unit (TPU), an application-specific integrated circuit (ASIC) tailored for machine learning applications that, according to the company, delivers an order-of-magnitude improvement in performance per watt over existing general-purpose processors.

The chip, developed specifically to speed up increasingly common machine learning applications, has already powered a number of state-of-the-art applications, including AlphaGo and Street View. According to Google, this type of application is more tolerant of reduced numerical precision and can therefore be implemented using fewer transistors per operation. Because of this, Google engineers were able to squeeze more operations per second out of each transistor.
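To see why reduced precision can be acceptable, consider the toy sketch below. It is not Google’s actual scheme, just an illustration of the principle: float32 weights are quantized to 8-bit integers, and the result of a matrix-vector product barely changes.

```python
import numpy as np

# Toy illustration of reduced-precision arithmetic (not the TPU's actual
# scheme): quantize float32 weights to int8 and compare a matrix-vector
# product against the full-precision result.
rng = np.random.default_rng(0)
weights = rng.normal(0, 0.5, size=(4, 4)).astype(np.float32)
x = rng.normal(0, 1, size=4).astype(np.float32)

# Symmetric linear quantization: map [-max|w|, +max|w|] onto [-127, 127].
scale = np.abs(weights).max() / 127.0
w_int8 = np.round(weights / scale).astype(np.int8)

exact = weights @ x                              # full-precision product
approx = (w_int8.astype(np.int32) @ x) * scale   # int8 weights, rescaled

print("max abs error:", np.abs(exact - approx).max())
```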


The new chip is tailored for TensorFlow, an open source library that performs numerical computation using data flow graphs. Each node in the graph represents one mathematical operation that acts on the tensors that come in through the graph edges.
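As a minimal sketch of this dataflow model, written against the TensorFlow 1.x graph-and-session API that was current at the time of this post:

```python
import tensorflow as tf

# Two placeholder nodes feed tensors into operation nodes along the
# graph's edges; nothing is computed until the graph is executed.
a = tf.placeholder(tf.float32, name="a")
b = tf.placeholder(tf.float32, name="b")
c = tf.multiply(a, b, name="c")  # node: multiplication
d = tf.add(c, b, name="d")       # node: addition, consuming c's output

with tf.Session() as sess:
    print(sess.run(d, feed_dict={a: 2.0, b: 3.0}))  # (2 * 3) + 3 = 9.0
```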

Google stated that the TPU represents a jump of roughly seven years (three generations) into the future with respect to Moore’s Law, which has recently been viewed as finally coming to a halt. Developments like this one, with alternative architectures or alternative ways to perform computations, are likely to keep delivering exponential improvements in computing power for years to come, compatible with Moore’s Law.