Gordon Moore, scientist and co-founder of Intel, first noticed that the number of transistors that can be placed inexpensively on an integrated circuit increases exponentially over time, doubling approximately every two years. This exponential growth in the number of transistors per chip has led to an increase by a factor of more than 1,000,000 in 40 years. Moore's law has fueled the enormous developments in computer technology that have revolutionized industry and society in the last decades.
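The arithmetic behind that factor is easy to check: doubling every two years over 40 years means 20 doublings, or a growth factor of 2^20 ≈ 1,000,000. A minimal sketch (the function name is just illustrative):

```python
# Moore's law as compound doubling: the transistor count doubles
# once every doubling_period years.
def transistor_growth(years, doubling_period=2):
    """Growth factor after `years` of doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

print(transistor_growth(40))  # 2**20 = 1048576.0, i.e. a million-fold increase
```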
A long-standing question is how long Moore's law can hold, since no exponential growth can last forever. The Technology Quarterly section of this week's edition of The Economist, summarized in this short article, analyzes this question in depth.
The conclusion is that, while the rate of growth in the number of transistors per chip will become smaller and smaller, advances in other technologies, such as software and cloud computing, will take up the slack, providing us with increases in computational power that will not deviate much from what Moore's law would have predicted.
Image of computer scientist and businessman Gordon Moore. The image is a screenshot from the Scientists You Must Know video, created by the Chemical Heritage Foundation, in which he briefly discusses Moore's Law.
AlphaGo, the Go playing program developed by Google’s DeepMind, scored its first victory in the match against Lee Sedol.
This win comes on the heels of AlphaGo's victory over Fan Hui, the reigning three-time European champion, but it has a deeper meaning, since Lee Sedol is one of the two top Go players in the world, together with Lee Changho. Go is viewed as one of the most difficult games for a computer to master, given its high branching factor and the inherent difficulty of position evaluation. It had long been believed that computers would not master this game for many decades to come.
Ongoing coverage of the match is available on the AlphaGo website, and the matches will be livestreamed on DeepMind's YouTube channel.
AlphaGo uses deep neural networks trained by a combination of supervised learning from professional games and reinforcement learning from games it played against itself. Two different networks are used: a value network that evaluates board positions and a policy network that selects moves. These networks are then combined with a special-purpose Monte Carlo tree search algorithm.
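The division of labour between the two networks can be illustrated with a toy one-ply search. This is only a sketch of the idea, not DeepMind's implementation: the network functions below are stand-ins (a uniform policy and a random value), and all names are hypothetical.

```python
# Toy sketch: combining a policy network and a value network in search.
# Both "networks" below are placeholders, not AlphaGo's trained models.
import random

def policy_network(state, moves):
    """Return a prior probability for each legal move (uniform stand-in)."""
    return {m: 1.0 / len(moves) for m in moves}

def value_network(state):
    """Estimate the probability of winning from `state` (random stand-in)."""
    return random.random()

def select_move(state, legal_moves, apply_move):
    """One-ply search: score each move by its policy prior times the value
    estimate of the resulting position, and pick the best-scoring move."""
    priors = policy_network(state, legal_moves)
    scores = {m: priors[m] * value_network(apply_move(state, m))
              for m in legal_moves}
    return max(scores, key=scores.get)
```

In the real system the search is much deeper: the policy priors guide which branches of the game tree are explored, while the value estimates (together with rollouts) score positions many moves ahead.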
The image shows the final position in the game, courtesy of Google’s DeepMind.
A new book by Pedro Domingos, The Master Algorithm, describes how machine learning algorithms will become more and more essential in the development of technology. Machine learning techniques already enable many systems to behave intelligently, and are at the source of many fascinating new developments, including self-driving cars, speech recognition, automated trading systems, and intelligent digital assistants such as Siri and Cortana, among many other technologies.
Mastering machine learning is essential to anyone interested in the development of digital technologies, and this book represents the ideal stepping stone towards more technical works.
Domingos provides an excellent, non-technical introduction to this essential area, describing what he calls the five tribes of machine learning: the symbolists, the connectionists, the evolutionaries, the Bayesians, and the analogizers. He argues that the algorithms of each of these tribes can, and one day will, be combined into a single master algorithm, the mother of all learning algorithms.
There are many excellent reviews and pieces on the book, including at GoodReads, Times Higher Education, and KDnuggets. The book is now available at Amazon, at your corner bookstore, or at a FNAC near you.
Recent news about OpenWorm, a project that aims to recreate in a computer the behaviour of a complete animal, the roundworm Caenorhabditis elegans. The project's goal is to construct a complete model of this worm, covering not only its 302 neurons and 95 muscle cells, but also all the remaining cells in each worm (more exactly, 959 somatic cells plus about 2000 germ cells in hermaphrodites, and 1031 cells in males).
The one-millimeter-long worm C. elegans has a long history in science as one of the animals most extensively used as a model for the study of simple multicellular organisms. It was the first animal to have its genome sequenced, in 1998.
But well before that, in 1963, Sydney Brenner proposed it as a model organism for the investigation of neural development in animals. In an effort that lasted more than twelve years, the complete structure of the brain of C. elegans was reverse engineered, producing a diagram of the wiring of every neuron in this simple brain. The effort involved slicing several worm brains very thinly, obtaining roughly 8000 photographs of the slices with an electron microscope, and connecting, mostly by hand, each neuron section in each slice to the corresponding neuron section in the neighbouring slices. The complete wiring diagram of the 302 neurons and the roughly 7000 synapses that constitute the brain of this simple creature was described in minute detail in a 340-page article, published in 1986, entitled The Structure of the Nervous System of the Nematode Caenorhabditis elegans, with the running head The Mind of a Worm.
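A wiring diagram of this kind is naturally represented as a directed graph, with neurons as nodes and synapses as weighted edges. A minimal sketch, where the connection counts are made up for illustration (only the neuron names are real C. elegans identifiers):

```python
# Minimal sketch: a connectome as a directed graph of synaptic connections.
# The synapse counts below are illustrative, not taken from the published
# C. elegans wiring diagram.
from collections import defaultdict

connectome = defaultdict(dict)  # connectome[pre][post] -> synapse count

def add_synapse(pre, post, count=1):
    """Record `count` synapses from neuron `pre` to neuron `post`."""
    connectome[pre][post] = connectome[pre].get(post, 0) + count

add_synapse("ASEL", "AIYL", 13)  # real neuron names, toy counts
add_synapse("AIYL", "AIZL", 8)

# Total number of synapses recorded in the graph.
total = sum(sum(posts.values()) for posts in connectome.values())
```

For the full worm, the same structure would hold the 302 neurons and roughly 7000 synapses described in the 1986 article.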
In 2008, IEEE Spectrum, the flagship publication of the Institute of Electrical and Electronics Engineers, the major professional association in this area, dedicated a full issue to the question of the singularity. This issue received an award for the best single-topic magazine issue of that year.
In this special report, which is as relevant today as it was in 2008, a number of scientists, visionaries, and engineers give their opinions on whether a singularity will or will not occur. The issue covers topics related to the singularity, such as robotics, consciousness, quantum phenomena, and artificial intelligence. It is a must-read for anyone interested in the topic, and one of the best unbiased assessments of whether the singularity will or will not happen.
An article in the NY Times, by Kenneth Miller, addresses the question of whether we will one day be able to upload a brain, that is, to simulate in a computer the complete behaviour of a human brain.
The author, a neuroscientist from Columbia University, addresses carefully the challenges involved in mind uploading and whole brain emulation.
The author’s (wild) guess is that it will take centuries to determine a connectome that is detailed enough to enable us to try brain uploading.
However, he also recognises that we may not need to reconstruct all the fine details of a brain, with its billions of neurons and trillions of synapses, whose structure varies in both time and space. Still, a level of detail far beyond the reach of existing technology would be required to even have a shot at creating a model that reproduces actual brain behaviour.
It seems the singularity may not be just around the corner, after all…
(Image by Thomas Schultz, available at Wikimedia Commons.)
In a paper recently published in the journal Astrobiology, Aditya Chopra and Charles Lineweaver, from the Australian National University, argue that the reason we have not met intelligent aliens is that, in general, life does not evolve fast enough to become a regulating force on planetary ecologies.
If this explanation holds true, or is at least one of the possible explanations, then many planets may have developed life, but on few, if any, of them has life lasted long enough to regulate greenhouse gases and albedo, and thus maintain surface temperatures compatible with life. If this is true, then extinction is the default fate of most of the life that has ever emerged on planets across the galaxy and the universe. Only planets where life develops rapidly enough to become a regulating force in the planetary ecology remain habitable and may, eventually, develop intelligent life.
(Photo by Ian Norman, via Wikimedia Commons.)