Artificial Intelligence developments: the year in review

TechCrunch, a popular site dedicated to technology news, has published a list of the top Artificial Intelligence news of 2016.

2016 seems indeed to have been the year Artificial Intelligence (AI) left the confinement of university labs to come into public view.


Several of the news items selected by TechCrunch were also covered in this blog.

In March a Go playing program, developed by Google’s DeepMind, AlphaGo, defeated 18-time world champion Lee Sedol (reference in the TechCrunch review).

Digital Art, where deep learning algorithms learn to paint in the style of a particular artist, was also the topic of one post (reference in the TechCrunch review).

In May, Digital Minds posted Moore's law is dead, long live Moore's law, describing how Google's new chip can be used to run deep learning algorithms using Google's TensorFlow (related article in the TechCrunch review).

TechCrunch has identified a number of other relevant developments that make for interesting reading, including the Facebook-Amazon-Google-IBM-Microsoft mega partnership on AI, the Facebook strategy on AI and the news about the language invented by Google’s translation tool.

Will the AI wave gain momentum in 2017, as predicted by this article? I think the chances are good, but only the future will tell.

Uber to try self-driving cars, sooner than expected

Later this month, customers in downtown Pittsburgh should be able to call in a driverless Uber car. As reported by many news agencies, including CNN and Bloomberg, Uber will use Volvo XC90 sport-utility vehicles, equipped with sensors, radars, lasers and GPS receivers.


Although we have been expecting driverless cars to hit the streets some time soon, few predicted that general-use autonomous vehicles would become available this year.

The partnership between Uber and Volvo makes the prospect of streets full of driverless cars less distant. Other companies, including Google, Ford and Tesla, have their own plans for autonomous vehicles, but none of them has announced concrete steps towards making their cars available to the general public.

Uber’s cars will include a human supervisor, who will be in the vehicle at all times. Still, this development raises the prospect of job displacement on a massive scale, as CNN reports. Currently, Uber has 600,000 drivers in the US alone, and 1.5 million worldwide. However, as the technology for driverless cars improves, many more jobs than these are at risk: there are 3.5 million professional truck drivers in the US alone.

Bill Gates recommends the two books to read if you want to understand Artificial Intelligence

Also at the 2016 Code Conference, Bill Gates recommended the two books you need to read if you want to understand Artificial Intelligence. By coincidence (or not), these two books are exactly the ones I have previously covered in this blog, The Master Algorithm and Superintelligence.

Given Bill Gates’s strong recent interest in Artificial Intelligence, there is a fair chance that Windows 20 will have a natural language interface just like the one in the movie Her (caution, spoiler below).

If you haven’t seen the movie, maybe you should. It is about a guy who falls in love with the operating system of his computer.

So, there is no doubt that operating systems will keep evolving in order to offer more natural user interfaces. Will they ever reach the point where you can fall in love with them?

Crazy chatbots or smart personal assistants?

Well-known author, scientist, and futurologist Ray Kurzweil is reportedly working with Google to create a chatbot, named Danielle. Chatbots, i.e., natural language programs that get their input from social networks and other groups on the web, have long been of interest to researchers, since they represent an easy way to test new technologies in the real world.

Very recently, a chatbot created by Microsoft, Tay, made the news because it became “a Hitler-loving sex robot” after chatting with teens on the web for less than 24 hours. Tay was an AI created to speak like a teenage girl, in an experiment intended to improve Microsoft’s voice recognition software. The chatbot was rapidly “deleted” after it started comparing Hitler, in favourable terms, with well-known contemporary politicians.

Presumably, Danielle, reportedly under development by Google, with the cooperation of Ray Kurzweil, will be released later this year. According to Kurzweil, Danielle will be able to maintain relevant, meaningful conversations, but he still points to 2029 as the year when a chatbot will pass the Turing test, becoming indistinguishable from a human. Kurzweil, the author of The Singularity Is Near and many other books on the future of technology, is a firm believer in the singularity, a point in human history where society will undergo such radical change that it will become unrecognizable to contemporary humans.


In a brief video interview (which was since removed from YouTube), Kurzweil describes the Google chatbot project, and the hopes he pins on this project.

While chatbots may not look very interesting, unless you have a lot of spare time on your hands, the technology can be used to create intelligent personal assistants. These assistants can take verbal instructions and act on your behalf, and may therefore become very useful, almost indispensable “tools”. As Austin Okere puts it in this article, “in five or ten years, when we have got over our skepticism and become reliant upon our digital assistants, we will wonder how we ever got along without them.”


A new and improved tree of life brings some surprising results

In a recent article, published in the journal Nature Microbiology, a group of researchers from UC Berkeley, in collaboration with other universities and institutes, proposed a new version of the tree of life, which dramatically changes our view of the relationships between the species inhabiting planet Earth.

Many depictions of the tree of life tend to focus on the enormous and well known diversity of eukaryotes, a group of organisms composed of complex cells that includes all animals, plants and fungi.

This version of the tree of life, now published, uses metagenomic analysis of genomic data from many previously little-known organisms, together with published genomic sequences, to infer a significantly different version of the tree of life. This new view reveals the dominance of bacterial diversification. A full-scale version of the proposed tree of life enables you to find our own ancestors, at the extreme bottom right of the figure, in the Opisthokont group of organisms. The Opisthokonts include both the animal and fungus kingdoms, together with other eukaryotic microorganisms. Opisthokont flagellate cells, such as the sperm of most animals and the spores of the chytrid fungi, propel themselves using a single posterior flagellum, a feature that gives the group its name. At the level of resolution used in the study, humans and mushrooms are so close that they cannot be told apart.


This version of the tree of life maintains the three great trunks that Carl Woese and his colleagues identified, starting in the late seventies, and published in the first “universal tree of life”.

Our own trunk, the eukaryotes, includes animals, plants, fungi and protozoans. A second trunk includes many familiar bacteria, like Escherichia coli. The third trunk, the Archaea, includes little-known microbes that live in extreme places like hot springs and oxygen-free wetlands.



However, this broader and more detailed analysis, based on extensive genomic data, provides a more global view of the evolutionary process that has shaped life on Earth for the last four billion years.

Images from the article in Nature Microbiology, by Hug et al., and from the work of Woese et al.


Meet Ross, our new lawyer

Fortune reports that law firm Baker & Hostetler has hired an artificially intelligent lawyer, Ross. According to the company that created it, Ross Intelligence, the IBM Watson-powered digital attorney interacts with other workers as a normal lawyer would.

“You ask your questions in plain English, as you would a colleague, and ROSS then reads through the entire body of law and returns a cited answer and topical readings from legislation, case law and secondary sources to get you up-to-speed quickly.”


Ross will work in the law firm’s bankruptcy practice, which currently employs roughly 50 lawyers. Baker & Hostetler’s chief information officer explained that the company believes emerging technologies, like cognitive computing and other forms of machine learning, can help enhance the services delivered to its clients. There is no information on the number of lawyers to be replaced by Ross.

Going through large amounts of information stored in plain text and compiling it in usable form is one of the most interesting applications of natural language processing systems, like IBM Watson. If successful, one single system may do the work of hundreds or thousands of specialists, at least in a large fraction of the cases that do not require extensive or involved reasoning. However, as the technology evolves, even these cases may become ultimately amenable to treatment by AI agents.
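Watson’s pipeline is proprietary and far more sophisticated, but the core idea of answering plain-English questions from a body of text can be sketched in a few lines. The passages below are invented examples, and the overlap-based ranking is a deliberately minimal stand-in for real passage retrieval:

```python
# A minimal sketch of question-driven passage retrieval, the idea behind
# systems like Ross: rank text passages by word overlap with a
# plain-English question. Real systems use far richer language models;
# the passages below are invented examples.
import re
from collections import Counter

PASSAGES = [
    "A debtor may file for bankruptcy under Chapter 7 to liquidate assets.",
    "Chapter 11 allows a business to reorganise while repaying creditors.",
    "Trademark protection requires use of the mark in commerce.",
]

def tokens(text):
    """Lowercase alphabetic tokens."""
    return re.findall(r"[a-z]+", text.lower())

def rank(question, passages):
    """Sort passages by how many question words each one shares."""
    q = Counter(tokens(question))
    def overlap(p):
        return sum(min(q[w], c) for w, c in Counter(tokens(p)).items())
    return sorted(passages, key=overlap, reverse=True)

best = rank("Can a business reorganise and keep repaying creditors?", PASSAGES)[0]
print(best)  # the Chapter 11 passage wins under this toy scoring
```

A production system would add synonym handling, citation extraction and ranking by legal authority, but the retrieve-and-rank skeleton is the same.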

Picture by Humanrobo (Own work) [CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons


Artificial intelligence and machine learning in high demand

The Economist reports, in a recent article, that artificial intelligence and machine learning experts are in extremely high demand, even by the standards of already sought-after computer scientists.

According to the article, faculty, graduate students and practitioners of AI and machine learning are being recruited, in large numbers, by big-name companies such as Google, Microsoft, Facebook, Uber, and Tesla.


The demand for these skills is a consequence of the rise of intelligent and adaptive systems, advanced user interfaces, and autonomous agents that will mark the next decades in computing.

Machine learning conferences, such as Neural Information Processing Systems (NIPS), once tranquil backwater meeting places for domain experts, are now hunting grounds for companies looking to hire the best talent in the field.

AlphaGo beats Lee Sedol, one of the best Go players in the world

AlphaGo, the Go playing program developed by Google’s DeepMind, scored its first victory in the match against Lee Sedol.


This win comes on the heels of AlphaGo’s victory over Fan Hui, the reigning three-time European Champion, but it has a deeper meaning, since Lee Sedol is one of the two top Go players in the world, together with Lee Changho. Go is viewed as one of the most difficult games for computers to master, given its high branching factor and the inherent difficulty of position evaluation. It had been believed that computers would not master this game for many decades to come.
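The effect of that branching factor can be made concrete with a back-of-the-envelope calculation, using the commonly cited approximate figures: chess has roughly 35 legal moves per position over a typical game of about 80 plies, while Go has roughly 250 moves over about 150 plies.

```python
# Rough comparison of the game-tree sizes of chess and Go, using the
# commonly cited approximate branching factors and game lengths.
import math

chess_tree = 35 ** 80     # ~35 moves per position, ~80 plies per game
go_tree = 250 ** 150      # ~250 moves per position, ~150 plies per game

# The numbers are astronomically large, so compare orders of magnitude.
chess_magnitude = int(math.log10(chess_tree))   # ~123
go_magnitude = int(math.log10(go_tree))         # ~359

print(f"chess: ~10^{chess_magnitude} positions")
print(f"go:    ~10^{go_magnitude} positions")
```

The Go tree is larger by more than 200 orders of magnitude, which is why brute-force search, which worked for chess, was never an option for Go.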

Ongoing coverage of the match is available in the AlphaGo website and the matches will be livestreamed on DeepMind’s YouTube channel.

AlphaGo used deep neural networks trained by a combination of supervised learning from professional games and reinforcement learning from games it played with itself. Two different networks are used, one to evaluate board positions and another one to select moves. These networks are then used inside a special purpose search algorithm.
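AlphaGo’s actual networks are deep convolutional networks trained on millions of positions; as a minimal sketch of the division of labour just described, here is a toy one-step search where simple stand-in heuristics play the role of the policy and value networks (all the functions and the 1-D toy board are invented for illustration):

```python
# Toy illustration of AlphaGo's two-network idea (not DeepMind's code):
# a "policy network" proposes promising moves, a "value network" scores
# the resulting positions, and a search procedure combines the two.
# Both "networks" here are stand-in heuristics on a 1-D board of size 9.

def policy(position, legal_moves):
    """Stand-in policy: a prior probability for each legal move."""
    # Pretend the network prefers central moves.
    scores = [1.0 / (1 + abs(m - 4)) for m in legal_moves]
    total = sum(scores)
    return {m: s / total for m, s in zip(legal_moves, scores)}

def value(position):
    """Stand-in value: estimated chance of winning from a position."""
    # Pretend positions with more central stones are better.
    return sum(1.0 / (1 + abs(m - 4)) for m in position) / (len(position) or 1)

def select_move(position, legal_moves, mix=0.5):
    """One-step lookahead mixing the policy prior with the child's value."""
    priors = policy(position, legal_moves)
    def score(m):
        return mix * priors[m] + (1 - mix) * value(position + [m])
    return max(legal_moves, key=score)

move = select_move([], list(range(9)))
print("selected move:", move)  # the centre, move 4, under these heuristics
```

In the real system the search is a Monte Carlo tree search guided by the networks rather than a single-step lookahead, but the separation of concerns is the same: one model narrows the moves worth considering, the other judges the positions they lead to.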

The image shows the final position in the game, courtesy of Google’s DeepMind.

Brain uploading in the NY Times

An article in the NY Times, by Kenneth Miller, addresses the question of whether or not we will one day be able to upload a brain, that is, to simulate in a computer the complete behaviour of a human brain.

The author, a neuroscientist at Columbia University, carefully examines the challenges involved in mind uploading and whole brain emulation.


The author’s (wild) guess is that it will take centuries to determine a connectome that is detailed enough to enable us to try brain uploading.

However, he also recognises that we may not need to reconstruct all the fine details of a brain, with its billions of neurons and trillions of synapses, whose structure varies in time and space. Still, a level of detail far beyond what existing technology can deliver would be required to even have a shot at creating a model that would reproduce actual brain behaviour.

It seems the singularity may not be just around the corner, after all…

(Image by Thomas Schultz, available at Wikimedia Commons).

Reverse engineering the brain, one slice at a time

Narayanan Kasthuri and a team of researchers from Harvard, MIT, Duke, and Johns Hopkins universities fully reconstructed all the neuron sections and many sub-cellular objects, including synapses and synaptic vesicles, in a volume of 1500 µm³ (just a little more than one millionth of a cubic millimeter), using 3×3×30 nm voxels.


The results, published in an article in the journal Cell, in July 2015, describe the experimental procedure and the conclusions. The data was obtained by collecting 2,250 brain slices, each roughly 30 nm thick, cut with a tape-collecting ultramicrotome that slices brain sections using a diamond knife. The slices were imaged using serial electron microscopy, and the images were processed in order to reconstruct a number of volumes. Within the 1500 µm³ of neural tissue, the authors reconstructed the 3D structure of hundreds of dendrites, more than 1,400 axons and 1,700 synapses, which corresponds to about one synapse per cubic micron.
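The quoted figures are easy to sanity-check with plain arithmetic; the voxel count below is derived from the stated voxel size and volume, not a number given in the article:

```python
# Sanity-check the figures quoted above (imaged volume, voxel count,
# synapse density) with plain unit conversions.
voxel_nm3 = 3 * 3 * 30            # one 3x3x30 nm voxel, in cubic nanometres
volume_um3 = 1500                 # imaged volume, in cubic microns
volume_nm3 = volume_um3 * 1e9     # 1 um^3 = 1e9 nm^3

voxels = volume_nm3 / voxel_nm3   # ~5.6 billion voxels in the volume
volume_mm3 = volume_um3 / 1e9     # 1 mm^3 = 1e9 um^3 -> 1.5e-6 mm^3

synapses = 1700
density = synapses / volume_um3   # ~1.1 synapses per cubic micron

print(f"{voxels:.2e} voxels")
print(f"{volume_mm3:.1e} mm^3")
print(f"{density:.2f} synapses/um^3")
```

The numbers agree with the text: about 1.5 millionths of a cubic millimeter, and roughly one synapse per cubic micron, captured in several billion voxels.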

(Rendering by the authors, used with permission)