To Be a Machine: Adventures Among Cyborgs, Utopians, and the Futurists Solving the Modest Problem of Death

Mark O’Connell’s witty, insightful and sometimes deeply moving account of his research into transhumanism deserves a place on the bookshelf of anyone interested in the future of humanity. Reading To Be a Machine is a delightful trip through the ideals, technologies, places and characters involved in transhumanism, the idea that science and technology will one day transform humans into immortal, computer-based lifeforms.

For reasons that are not totally clear to me, transhumanism remains mostly a fringe culture, limited to a few futurists, off-the-mainstream scientists and technology nuts. As shared fictions go (to use Yuval Harari’s term), I would imagine transhumanism is an idea whose time has come. However, it remains mostly unknown to the general public. While humanists believe that the human person, with his/her desires, choices, and fears, should be the most important value to be preserved by a society (check my review of Homo Deus), transhumanists believe that biologically based intelligence is imperfect, exists purely for historical reasons (evolution, that is) and will fade away as we move intelligence onto other computational supports, more robust than our frail bodies.

O’Connell, himself a hard-core humanist, as becomes clear from reading between the lines of this book, pursued a deep, almost forensic, investigation of what transhumanists are up to. In the process, he talks with many unusual individuals involved in the transhumanist saga, from Max More, who runs Alcor, a company that, in exchange for a couple hundred thousand dollars, will preserve your body for the future in liquid nitrogen (or 80k for just the head), to Aubrey de Grey, a reputed scientist working on life extension technologies, who argues that we should all be working on this problem. In de Grey’s words, cited by O’Connell, “aging is a human disaster on an unimaginably vast scale, a massacre, a methodical and comprehensive annihilation of every single person that ever lived“. These are just two of the dozens of fascinating characters interviewed in person by O’Connell for the book.

The narrative is gripping, hilarious at times, but also moving and compelling, not least because O’Connell himself provides deep insights into the issues the book discusses. The characters in the book are, at once, alien and deeply human, as they are only trying to overcome the limits of our bodies. Deservedly, the book has been getting excellent reviews from many sources.

In the end, one gets the idea that transhumanists are crazy, maybe, but not nearly as crazy as all the other believers in immortality, be it by divine intervention, reincarnation, or any other mechanism so ingrained in mainstream culture.

The Digital Mind: How Science is Redefining Humanity

Following its release in the US, The Digital Mind, published by MIT Press, is now available in Europe, at an Amazon store near you (and possibly in other bookstores). The book covers the evolution of technology, leading towards the expected emergence of digital minds.

Here is a short rundown of the book, kindly provided by yours truly, the author.

New technologies have been introduced into human lives at an ever-increasing rate, ever since the first significant advances took place with the cognitive revolution, some 70,000 years ago. Although electronic computers have been around for only a few decades, they represent just the latest way to process information and create order out of chaos. Before computers, the job of processing information was done by living organisms, which are nothing more than complex information-processing devices, created by billions of years of evolution.

Computers execute algorithms, sequences of small steps that, in the end, perform some desired computation, be it simple or complex. Algorithms are everywhere, and they have become an integral part of our lives. Evolution is, in itself, a complex and long-running algorithm that created all species on Earth. The most advanced of these species, Homo sapiens, was endowed with a brain that is the most complex information-processing device ever devised. Brains enable humans to process information in a way unparalleled by any other species, living or extinct, or by any machine. They provide humans with intelligence, consciousness and, some believe, even a soul, a characteristic that makes humans different from all other animals and from any machine in existence.

But brains also enabled humans to develop science and technology to a point where it is possible to design computers with a power comparable to that of the human brain. Artificial intelligence will one day make it possible to create intelligent machines, and computational biology will one day enable us to model, simulate and understand biological systems, and even complete brains, with unprecedented levels of detail. From these efforts, new minds will eventually emerge, minds that will emanate from the execution of programs running in powerful computers. These digital minds may one day rival our own, become our partners and replace humans in many tasks. They may usher in a technological singularity, a revolution in human society unlike any that happened before. They may make humans obsolete, even a threatened species, or they may make us super-humans or demi-gods.

How will we create these digital minds? How will they change our daily lives? Will we recognize them as equals or will they forever be our slaves? Will we ever be able to simulate truly human-like minds in computers? Will humans transcend the frontiers of biology and become immortal? Will humans remain, forever, the only known intelligence in the universe?


Is mind uploading nearer than you might think?

A recent article published in The Guardian, an otherwise mainstream newspaper, openly discusses the possibility that mind uploading may become real in the near future. Mind uploading is based on the idea that the behavior of a brain can be emulated completely in a computer, ultimately making it possible to transport individual brains, and individual consciousnesses, into a program that emulates the behavior of the “uploaded” mind. Mind uploading represents, in practice, the surest path to immortality, far sooner than any non-digital technology can hope to achieve in the foreseeable future.

This idea is not new, and the article makes an explicit reference to Hans Moravec’s book, Mind Children, published by Harvard University Press in 1988. In fact, the topic has already been addressed by a large number of authors, including Ray Kurzweil, in The Singularity is Near, Nick Bostrom, in Superintelligence, and even by me, in The Digital Mind.

The article contains a list of interesting sites and organizations, including CarbonCopies, an organization dedicated to making whole brain emulation possible, founded by Randal A. Koene, and a reference to the 2045 Initiative, with similar goals, created by Dmitry Itskov.

The article, definitely worth reading, goes into some detail on the idea of “substrate-independent minds”, an idea clearly reminiscent of the concept of virtualization, so in vogue in today’s business world.

Picture source: The Guardian

How to create a mind

Ray Kurzweil’s latest book, How to Create a Mind, published in 2012, is an interesting read and shows some welcome change in his views of science and technology. Unlike some of his previous (and influential) books, including The Singularity is Near, The Age of Spiritual Machines and The Age of Intelligent Machines, the main point of this book is not that exponential technological development will bring about a technological singularity in a few decades.


True, that theme is still present, but it takes second place to the main theme of the book, a concrete (although incomplete) proposal to build intelligent systems inspired by the architecture of the human neocortex.

Kurzweil’s main point in this book is to present a model of the human neocortex, which he calls the Pattern Recognition Theory of Mind (PRTM). In this theory, the neocortex is simply a very powerful pattern recognition system, built out of about 300 million (his number, not mine) similar pattern recognizers. The input to each of these recognizers can come from external inputs, through the senses, from the older (evolutionarily speaking) parts of the brain, or from the output of other pattern recognizers in the neocortex. Each recognizer is relatively simple, and can only recognize a simple pattern (say, the word APPLE) but, through complex interconnections with other recognizers above and below it, it makes possible all sorts of thinking and abstract reasoning.

Each pattern consists, in essence, of a short sequence of symbols, and is connected, through bundles of axons, to the actual places in the cortex where these symbols are activated, by other pattern recognizers. In most cases, the memories these recognizers represent must be accessed in a specific order. Kurzweil gives the example that very few people can recite the alphabet backwards, or even their social security number backwards, which he takes as evidence of the sequential nature of the operation of these pattern recognizers.
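The sequential behavior described above can be illustrated with a toy recognizer. This is a sketch of my own, not Kurzweil’s actual algorithm: a unit stores a short sequence of symbols, advances only when the next expected symbol arrives, resets on a mismatch, and fires once the whole sequence has been seen, at which point its output could serve as an input symbol to recognizers higher up in the hierarchy.

```python
class PatternRecognizer:
    """Fires when it has seen its expected sequence of symbols, in order."""

    def __init__(self, expected):
        self.expected = list(expected)  # the stored pattern, e.g. "APPLE"
        self.position = 0               # progress through the sequence

    def feed(self, symbol):
        """Feed one input symbol; return True when the full pattern fires."""
        if symbol == self.expected[self.position]:
            self.position += 1
            if self.position == len(self.expected):
                self.position = 0  # ready to recognize the pattern again
                return True        # fire: this output can feed recognizers above
        else:
            self.position = 0      # order matters: a mismatch resets progress
        return False

apple = PatternRecognizer("APPLE")
fired = [apple.feed(c) for c in "XAPPLE"]
print(fired)  # the recognizer fires only on the final letter of APPLE
```

Real cortical recognizers, in Kurzweil’s account, also handle noisy and partial inputs and send expectation signals downwards, none of which this toy captures.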

The key point of the book is that the actual algorithms used to build and structure a neocortex may soon become well understood, and may then be used to build intelligent machines endowed with true, strong Artificial Intelligence. How to Create a Mind falls somewhat short of the promise in its subtitle, The Secret of Human Thought Revealed, but it still makes for some interesting reading.

IBM TrueNorth neuromorphic chip does deep learning

In a recent article, published in the Proceedings of the National Academy of Sciences, IBM researchers demonstrated that the TrueNorth chip, designed to perform neuromorphic computing, can be trained using deep learning algorithms.


The TrueNorth chip was designed to efficiently simulate spiking neural networks, a model for neurons that closely mimics the way biological neurons work. Spiking neural networks are based on the integrate-and-fire model, inspired by the fact that actual neurons integrate the incoming ion currents caused by synaptic firing and generate an output spike only when sufficient synaptic excitation has been accumulated. Spiking neural network models tend to be less efficient than more abstract models of neurons, which simply compute a real-valued output directly from the real-valued inputs multiplied by the input weights.
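The integrate-and-fire idea can be sketched in a few lines. This is a minimal, illustrative leaky integrate-and-fire neuron; the constants are arbitrary and have nothing to do with TrueNorth’s actual design:

```python
def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Return the output spike train for a sequence of input currents."""
    v = 0.0                     # membrane potential
    spikes = []
    for current in inputs:
        v = leak * v + current  # potential leaks, then integrates the input
        if v >= threshold:      # enough excitation accumulated:
            spikes.append(1)    # emit a spike...
            v = 0.0             # ...and reset the potential
        else:
            spikes.append(0)
    return spikes

# A constant sub-threshold input still makes the neuron fire periodically,
# because charge accumulates across several time steps before each spike.
print(lif_neuron([0.4] * 10))
```

The need for several time steps per spike is exactly why, as noted below, spiking networks must run multiple cycles to match the precision of conventional deep learning models.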

As IEEE Spectrum explains: “Instead of firing every cycle, the neurons in spiking neural networks must gradually build up their potential before they fire. To achieve precision on deep-learning tasks, spiking neural networks typically have to go through multiple cycles to see how the results average out. That effectively slows down the overall computation on tasks such as image recognition or language processing.”

In the article just published, IBM researchers adapted deep learning algorithms to run on the TrueNorth architecture, and achieved comparable precision with lower energy dissipation. This research raises the prospect that energy-efficient neuromorphic chips may become competitive in deep learning tasks.

Image from Wikimedia Commons

The User Illusion: Cutting consciousness down to size

In this entertaining and ambitious book, Tor Nørretranders argues that consciousness, that hallmark of higher intelligence, is nothing more than an illusion, a picture of reality created by our brain that we mistake for the real thing. The book received good reviews and was very well received in his native Denmark and all over the world.

Using fairly objective data, Nørretranders makes his main point: that consciousness has a very limited bandwidth, probably no more than 20 bits a second. This means that we cannot consciously process more than a few bits a second, distilled from the megabytes of information processed by our senses in the same period. Furthermore, this stream of information creates a simulation of reality, which we mistake for the real thing, and the illusion that our conscious self (the “I”) is in charge, while the unconscious self (the “me”) follows the orders given by the “I”.


There is significant evidence that Nørretranders’ main point is well taken. We know (and he points it out in his book) that consciousness lags behind our actions, even conscious ones, by about half a second. As another author, Daniel Dennett, points out in his book Consciousness Explained, consciousness controls much less than we think. Consciousness is more of a module that observes what is going on and explains it in terms of “conscious decisions” and “conscious attention”. This means that consciousness is more an observer of our actions than the agent that determines them. Our feeling that we consciously control our desires, actions, and sentiments is probably far from the truth, and a lot of what we consciously observe is a simulation carefully crafted by our “consciousness” module. Nørretranders refers to the fact that some people believe consciousness is a recent phenomenon, maybe no more than a few thousand years old, as Julian Jaynes argued in his famous book, The Origin of Consciousness in the Breakdown of the Bicameral Mind.

Nørretranders builds on these arguments to suggest that we should pay less attention to conscious decisions (the “I”, as he describes it) and more to unconscious urges (the “me”, in his book), letting the unconscious “me”, who has access to vastly larger amounts of information, take control of more of our decisions.

Explaining (away) consciousness?

Consciousness is one of the hardest-to-explain phenomena created by the human brain. We are familiar with the concept of what it means to be conscious. I am conscious, and I assume that every other human being is also conscious. We become conscious when we wake up in the morning and remain conscious during waking hours, until we lose consciousness again when we go to sleep at night. There is an uninterrupted flow of consciousness that, with the exception of sleeping periods, connects who you are now with who you were many years ago.

Explaining exactly what consciousness is, however, is much more difficult. One of the best known, and most popular, explanations was given by Descartes. Even though he was a materialist, he balked when it came to consciousness, and proposed what is now known as Cartesian dualism, the idea that the mind and the brain are two different things. Descartes thought that the mind, the seat of consciousness, has no physical substance, while the body, controlled by the brain, is physical and follows the laws of physics.

Descartes’ ideas imply a Cartesian theater, a place where the brain exposes the input obtained by the senses, so that the mind (your inner “I”) can look at these inputs, make decisions, take actions, and feel emotions.


In what is probably one of the most comprehensive and convincing analyses of what consciousness is, Dennett brings out all the guns against the idea of the Cartesian theater, and argues that consciousness can be explained by what he calls a “multiple drafts” model.

Instead of a Cartesian Theater, where conscious experience occurs, there are “various events of content-fixation occurring in various places at various times in the brain“. The brain is nothing more than a “bundle of semi-independent agencies“, created by evolution, that act mostly independently and in semi-automatic mode. Creating a consistent view, a serial history of the behaviors of these different agencies, is the role of consciousness. It misleads “us” into thinking that “we” are in charge while “we” are, mostly, reporters telling a story to ourselves and others.

His arguments, supported by extensive experimental and philosophical evidence, are convincing, well structured, and discussed in depth, with the help of Otto, a non-believer in the multiple drafts model. If Dennett does not fully explain the phenomenon of consciousness, he certainly does an excellent job of explaining it away. Definitely one book to read if you care about artificial intelligence, consciousness, and artificial minds.

Inching towards an exascale supercomputer

The Sunway TaihuLight became, as of June 2016, the fastest supercomputer in the world, when the Top500 ranking was rearranged to put it ahead of Tianhe-2 (also from China). Sunway TaihuLight clocked in at 93 petaflop/s (93,000,000,000,000,000 floating point operations per second) using its 10 million cores. This performance compares with the 34 petaflop/s of the 3-million-core Tianhe-2. An exascale computer would have a performance of 1000 petaflop/s.

What is maybe even more important is that the new machine uses 14% less power than Tianhe-2 (a mere 15.3 MW), which makes it more than three times as efficient.
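The efficiency comparison is easy to check in flops per watt. One number below is not stated in this post: Tianhe-2’s 17.8 MW power draw, which is the published Top500 figure.

```python
# Back-of-envelope check of the "three times as efficient" claim.
taihulight_flops = 93e15   # 93 petaflop/s
taihulight_power = 15.3e6  # 15.3 MW
tianhe2_flops = 34e15      # 34 petaflop/s (rounded, as in the text)
tianhe2_power = 17.8e6     # 17.8 MW (published Top500 figure)

eff_taihulight = taihulight_flops / taihulight_power  # flop/s per watt
eff_tianhe2 = tianhe2_flops / tianhe2_power
efficiency_ratio = eff_taihulight / eff_tianhe2

print(round(efficiency_ratio, 1))  # just over 3: more than three times as efficient
```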


As IEEE Spectrum reports, “TaihuLight uses DDR3, an older, slower memory, to save on power“. Furthermore, it uses small amounts of local memory near each core instead of a more traditional (and power-hungry) memory hierarchy. Other architectural choices also aim at reducing power while preserving performance.

It is interesting to compare the power efficiency of this supercomputer with that of the human brain. Imagine that it were used to simulate a full human brain (with its 86 billion neurons), using a standard neuron simulation package such as NEURON.

Using some reasonable assumptions, it is possible to estimate that such a simulation would proceed about 3 million times slower than real time, and would require about three trillion times more energy than the human brain to perform equivalent calculations. In terms of speed and power efficiency, it is still hard to compete with the 20 W human brain.
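A rough version of the energy estimate follows directly from the numbers mentioned here: a 15.3 MW machine running about 3 million times slower than a 20 W brain. The detailed assumptions behind the estimate are not spelled out above, so this is only an order-of-magnitude check:

```python
# Simulating one second of brain activity takes ~3 million seconds of
# machine time at 15.3 MW, versus one second of real time at 20 W.
machine_power = 15.3e6  # watts
brain_power = 20.0      # watts
slowdown = 3e6          # simulation runs ~3 million times slower than real time

energy_machine = machine_power * slowdown  # joules per simulated second
energy_brain = brain_power * 1.0           # joules per real second
energy_ratio = energy_machine / energy_brain

print(f"{energy_ratio:.1e}")  # on the order of 1e12: trillions of times more energy
```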


A new map of the human brain


More than one hundred years ago, the German anatomist Korbinian Brodmann undertook a systematic analysis of the microscopic features of the brain cortex of humans (and several other species) and was able to create a detailed map of the cortex. Brodmann’s 52 areas (illustrated below) are still used today to refer to specific regions of the cortex.

[Figure: 3D rendering of the Brodmann areas]

Despite the fact that he numbered the cortical areas based mostly on the cellular composition of the tissues observed under the microscope, there is a remarkable correlation between specific Brodmann areas and specific functions in the cortex. For instance, area 17 is the primary visual cortex, while area 4 is the primary motor cortex.

This week, an article in Nature proposes a new map of the human cortex, much more detailed than the one developed by Brodmann. In this new map, each hemisphere of the cortex is subdivided into 180 regions.

A team led by Matthew Glasser used multiple types of imaging data collected from more than two hundred adult participants in the Human Connectome Project. The information included a number of different measurements, including cortical thickness, brain function, connectivity between regions, and topographic organization of cells in brain tissue, among others. The following video, made available by Nature, gives an idea of the process followed by the researchers and the results obtained.

Image by Mark Dow, available at Wikimedia Commons.

Could a neuroscientist understand a microprocessor?

In a recent article, which has been widely commented on (e.g., on a WordPress blog and on Marginal Revolution), Eric Jonas and Konrad Kording, from UC Berkeley and Northwestern, respectively, describe an interesting experiment.

They applied to the study of a microprocessor the same techniques neuroscientists use to analyze the brain. More specifically, they used local field potentials, correlations between the activities of different zones, and the effects of single-transistor lesions, together with other techniques inspired by state-of-the-art brain sciences.

Microprocessors are complex systems, although they are much simpler than a human brain. A modern microprocessor can have several billion transistors, a number that compares poorly with the human brain, which has close to 100 billion neurons and probably more than one quadrillion synapses. One could imagine that, by applying techniques similar to the ones used in neuroscience, one could obtain some understanding of the role of the different functional units, of how they are interconnected, and even of how they work.

[Image: chip layout of the EnCore Castle processor]

The authors conclude, not surprisingly, that no significant insight into the structure of the processor can be gained by applying neuroscience techniques. They did observe signals reminiscent of those obtained when applying NMR and other imaging techniques to live brains, and found significant correlations between these signals and the tasks the processor was performing, as in the following figure, extracted from the paper.

[Figure: observed signals in different parts of the chip]

However, the analysis of these signals did not provide any significant knowledge about the way the processor works, nor about the different functional units involved. It did, however, provide significant amounts of misleading information. For instance, the authors investigated how transistor damage affected three chip “behaviors”, specifically the execution of the games Donkey Kong, Space Invaders and Pitfall. They were able to find transistors that uniquely crash one of the games but not the others. A neuroscientist studying this chip might thus conclude that a specific transistor is uniquely responsible for a specific game, leading to the possible conclusion that there may exist a “Space Invaders” transistor and a “Pitfall” transistor.

This may be bad news for neuroscientists. Reverse engineering the brain, by observing the telltale signals left by working neurons, may remain forever an impossible task. Fortunately, that still leaves open the possibility that we may one day fully reproduce the behavior of a brain, even without ever fully understanding how it works.

First image: Chip layout of the EnCore Castle processor, by Igor Bohem, available at Wikimedia Commons.

Second image: Observed signals, in different parts of the chip.