MIT distances itself from Nectome, a mind uploading company

The MIT Media Lab, a unit of MIT, has decided to sever its ties with Nectome, a startup that proposes to offer a technology that chemically preserves a brain, down to its most minute details, in order to make it possible, at least in principle, to simulate that brain and upload the corresponding mind sometime in the future.

According to the MIT news release, “MIT’s connection to the company came into question after MIT Technology Review detailed Nectome’s promotion of its ‘100 percent fatal’ technology” in an article posted on the MIT Technology Review site.

As reported in this blog, Nectome claims that by preserving the brain it may be possible, one day, “to digitize your preserved brain and use that information to recreate your mind”. Nectome acknowledges, however, that the technology is fatal to the brain donor and that there is no guarantee that future recovery of memories, knowledge and personality will be possible.

Detractors have argued that the proposal is not sound, since simulating a preserved brain is a technology that is at least many decades in the future and may even be impossible in principle. The criticisms were, however, mostly based on the argument that the whole enterprise is profoundly unethical.

This kind of discussion, between proponents of technologies aimed at performing whole brain emulation sometime in the future and detractors who argue that such an endeavor is fundamentally flawed, has occurred in the past, most notably in a 2014 controversy concerning the objectives of the Human Brain Project. In that controversy, critics argued that the goal of a large-scale simulation of the brain was premature and unsound, and that funding should be redirected towards more conventional approaches to understanding brain function. Supporters of the Human Brain Project approach argued that reconstructing and simulating the human brain is an important objective in itself, one that will bring many benefits and advance our knowledge of the brain and of the mind.

Picture by the author.


The Computer and the Brain

The Computer and the Brain, first published in 1958, is a delightful little book by John von Neumann, his attempt to compare two very different information processing devices: computers and brains. Although written more than sixty years ago, it retains more than historical interest, even though it addresses two topics that have developed enormously in the decades since von Neumann’s death.

John von Neumann’s genius comes through very clearly in this essay. Sixty years ago, very few people knew what a computer was, and probably even fewer had any idea of how the brain performed its magic. This book, written just a few years after the invention of the transistor (by Bardeen and Brattain) and the discovery of the membrane mechanisms that explain the electrical behaviour of neurons (by Hodgkin and Huxley), nonetheless compares, in very clear terms, the relative computational power of computers and brains.

Von Neumann’s aim is to compare the characteristics of the processing devices used by computers (vacuum tubes and transistors) with those used by the brain (neurons), assessing the two technologies objectively in terms of their ability to process information. He addresses speed, size, memory and other characteristics of the two types of information processing devices.

One of the central and (to me) most interesting parts of the book is the comparison of artificial information processing devices (vacuum tubes and transistors) with natural information processing devices (neurons), in terms of speed and size.

Von Neumann concludes that vacuum tubes and transistors are faster than neurons by a factor of 10,000 to 100,000, but occupy about 1,000 times more space (with the technologies of the day). Altogether, if one assumes that speed can be traded for the number of devices (for instance, reusing electronic devices to perform computations that, in the brain, are performed by slower but independent neurons), his comparisons lead to the conclusion (not explicit in the book, I must add) that an electronic computer the size of a human brain would be one to two orders of magnitude less powerful than the human brain itself.

John von Neumann could not have predicted, in 1957, that transistors would be packed, by the billions, on integrated circuits no larger than a postage stamp. If one uses the numbers that correspond to the technologies of today, one is led to conclude that a modern CPU (such as the Intel Core i7), with a billion transistors operating in the nanosecond range, is a few orders of magnitude (10,000 times) more powerful than the human brain, with its hundred billion neurons operating in the millisecond range.

Of course one has to consider, as John von Neumann also wrote, that a neuron is considerably more complex and can perform more complex computations than a transistor. But even if one takes that into account, and assumes that a transistor is roughly equivalent to a synapse, in raw computing power, one gets the final result that the human brain and an Intel Core i7 have about the same raw processing power.
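The arithmetic behind these estimates is simple enough to write down. Here is a rough back-of-the-envelope sketch in Python, using the order-of-magnitude figures quoted above (the numbers are illustrative assumptions, not measurements):

    # Back-of-the-envelope comparison of raw "switching events" per second.
    # All figures are order-of-magnitude assumptions taken from the text above.
    cpu_transistors = 1e9          # transistors in a modern CPU
    cpu_rate = 1e9                 # switching events per transistor per second (~ns range)

    brain_neurons = 1e11           # neurons in the human brain
    brain_synapses = 1e14          # synapses, roughly a thousand per neuron
    brain_rate = 1e3               # events per second per neuron or synapse (~ms range)

    cpu_ops = cpu_transistors * cpu_rate              # ~1e18 events per second
    brain_ops_neurons = brain_neurons * brain_rate    # ~1e14, if a neuron ~ a transistor
    brain_ops_synapses = brain_synapses * brain_rate  # ~1e17, if a synapse ~ a transistor

    print(f"CPU vs neurons:  {cpu_ops / brain_ops_neurons:,.0f}x")   # ~10,000x
    print(f"CPU vs synapses: {cpu_ops / brain_ops_synapses:,.0f}x")  # ~10x, same order

Both ratios depend entirely on what one takes as the elementary computing unit of the brain, which is exactly von Neumann’s point about neurons being more complex than switching devices.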

It is a sobering thought, one which von Neumann would certainly have liked to share.

Consciousness: Confessions of a Romantic Reductionist

Christof Koch, the author of “Consciousness: Confessions of a Romantic Reductionist”, is not only a renowned researcher in brain science but also the president of the Allen Institute for Brain Science, one of the foremost institutions in brain research. What he has to tell us about consciousness, and how he believes it is produced by the brain, is certainly of great interest to anyone interested in these topics.

However, the book is more than just another philosophical treatise on the issue of consciousness, as it is also a bit of an autobiography and an open window on Koch’s own consciousness.

At fewer than 200 pages (in the paperback edition), this book is a good starting point for those interested in the centuries-old problem of mind-body duality: how a physical object (the brain) creates something as ethereal as a mind. Koch clearly addresses the central issue of why there is such a thing as consciousness in humans, and how it gives rise to self-awareness, free will (maybe) and the qualia that characterize the subjective experience each and (almost) every human has.

In Koch’s view, consciousness is not a thing that can be either on or off. He ascribes different levels of consciousness to animals and even to less complex creatures and systems. Consciousness, he argues, is created by the fact that very complex systems have a high dimensional state space, creating a subjective experience that corresponds to each configuration of this state space. In this view, computers and other complex systems can also exhibit some degree of consciousness, although much smaller than living entities, since they are much less complex.
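As a toy illustration of what a high-dimensional state space means in practice (this is my own sketch, not an example from the book), consider how quickly the number of possible configurations grows with the number of interacting binary elements:

    import math

    # Toy illustration (mine, not Koch's): a system of n binary elements has 2**n
    # possible configurations, so the state space grows exponentially with size.
    for n in (10, 100, 1_000, 100_000_000_000):   # the last is roughly the neuron count
        print(f"{n} elements -> about 10^{n * math.log10(2):.0f} configurations")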

He goes on to describe several approaches that aim to elucidate the complex feedback loops in brains, which must exist in order to create these complex state spaces. Modern experimental techniques can analyze the differences between awake (conscious) and asleep (unconscious) brains, and learn from these differences what exactly creates consciousness in a brain.

Parts of the book are more autobiographical, however. Koch describes his life-long efforts to address these questions, many of them developed together with Francis Crick, who remains a reference for him, both as a scientist and as a person. The final chapter is more philosophical, and addresses questions for which we have no answer yet, and may never have, such as “Why is there something instead of nothing?” or “Did an all-powerful God create the universe, 14 billion years ago, complete with the laws of physics, matter and energy, or is this God simply a creation of man?”.

All in all, it is excellent reading, accessible to anyone interested in the topic but still deep and scientifically rigorous.

Portuguese Edition of The Digital Mind

IST Press, the publisher of Instituto Superior Técnico, just published the Portuguese edition of The Digital Mind, originally published by MIT Press.

The Portuguese edition, translated by Jorge Pereirinha Pires, follows the same organization as the original and has been reviewed by a number of sources. The back-cover reviews are by Pedro Domingos, Srinivas Devadas, Pedro Guedes de Oliveira and Francisco Veloso.

A pre-publication excerpt appeared in the newspaper Público, under the title Até que mundos digitais nos levará o efeito da Rainha Vermelha, making the first chapter of the book publicly available.

There are also some publicly available reviews and pieces about this edition, including an episode of a podcast and a review on the radio.

New technique for high resolution imaging of brain connections

MIT researchers have developed a new technique that produces very high-resolution images of the detailed connections between neurons in the human brain. Taeyun Ku, Justin Swaney and Jeong-Yoon Park were the lead researchers of the work, published in a Nature Biotechnology article. The technique images brain tissue at multiple scales, yielding unprecedentedly high-resolution images of significant regions of the brain, which makes it possible to detect the presence of proteins within cells and to determine the long-range connections between neurons.

The technique actually blows up the tissue under observation, increasing its dimensions while preserving nearly all of the proteins within the cells, which can then be labeled with fluorescent molecules and imaged.

The technique floods the brain tissue with acrylamide polymers, which end up forming a dense gel. The proteins are attached to this gel and, after they are denatured, the gel can be expanded to four or five times its original size. This leads to the possibility of imaging the blown-up tissue with a resolution that is much higher than would be possible if the original tissue was used.
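A rough way to see why the expansion matters (the factors of four to five come from the description above; the optical resolution is a generic assumption, not a number from the paper): dividing a diffraction-limited resolution of roughly 300 nm by the expansion factor gives the effective resolution on the original tissue.

    # Rough illustration with assumed numbers (not taken from the paper): physical
    # expansion improves the effective resolution on the original tissue.
    optical_resolution_nm = 300          # assumed diffraction-limited light microscope
    for expansion_factor in (4, 5):
        effective_nm = optical_resolution_nm / expansion_factor
        print(f"{expansion_factor}x expansion -> ~{effective_nm:.0f} nm effective resolution")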

Techniques like this create the conditions for advances in reverse engineering that could lead to a better understanding of the way neurons connect with each other, creating the complex structures of the brain.

Image credit: MIT


IEEE Spectrum special issue on whether we can duplicate a brain

Maybe you have read The Digital Mind, or Ray Kurzweil’s The Singularity is Near, or other similar books, thought it all a bit far-fetched, and wondered whether the authors are bonkers or just dreamers.

Wonder no more. The latest issue of the flagship publication of the Institute of Electrical and Electronics Engineers, IEEE Spectrum, is dedicated to the interesting and timely question of whether we can copy the brain and use it as a blueprint for intelligent systems. This issue, which you can access here, includes many interesting articles, definitely worth reading.

I cannot describe here, even briefly, all the interesting articles in this special issue, but it is worth reading the introduction, on the prospect of near-future intelligent personal assistants, or the piece on how we could build an artificial brain right now, by Jennifer Hasler.

Other articles address the question of how computationally expensive it is to simulate a brain at the right level of abstraction. Karlheinz Meier’s article on this topic explains very clearly why present-day simulations are so slow:

“The big gap between the brain and today’s computers is perhaps best underscored by looking at large-scale simulations of the brain. There have been several such efforts over the years, but they have all been severely limited by two factors: energy and simulation time. As an example, consider a simulation that Markus Diesmann and his colleagues conducted several years ago using nearly 83,000 processors on the K supercomputer in Japan. Simulating 1.73 billion neurons consumed 10 billion times as much energy as an equivalent size portion of the brain, even though it used very simplified models and did not perform any learning. And these simulations generally ran at less than a thousandth of the speed of biological real time.

Why so slow? The reason is that simulating the brain on a conventional computer requires billions of differential equations coupled together to describe the dynamics of cells and networks: analog processes like the movement of charges across a cell membrane. Computers that use Boolean logic—which trades energy for precision—and that separate memory and computing, appear to be very inefficient at truly emulating a brain.”
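To get a feel for the bookkeeping Meier describes, here is a minimal sketch (my own, not from the article) of a network of leaky integrate-and-fire neurons: each neuron is one differential equation integrated in small time steps, and a brain-scale simulation would need billions of such equations coupled through trillions of synapses.

    import numpy as np

    # Minimal sketch (mine, not from the article): a leaky integrate-and-fire
    # network. Each neuron is one differential equation integrated in small time
    # steps; a brain-scale run would need ~1e11 of these, coupled by ~1e14 synapses.
    n = 1_000                                 # neurons in this toy network
    dt, tau = 1e-4, 20e-3                     # time step and membrane time constant (s)
    v_th, v_reset = 1.0, 0.0                  # firing threshold and reset potential
    weights = np.random.randn(n, n) * 0.01    # random synaptic weights
    v = np.zeros(n)                           # membrane potentials

    for step in range(1_000):                 # 100 ms of simulated time
        spikes = v >= v_th                    # which neurons fire at this step
        v[spikes] = v_reset                   # reset the neurons that fired
        external = np.random.rand(n) * 2.0    # external input current
        synaptic = weights @ spikes           # input from spiking neighbours
        v += (dt / tau) * (external - v) + synaptic   # one Euler integration step

Even this toy network performs millions of floating-point updates per simulated second; scaling the neuron count up by eight orders of magnitude makes the energy and time figures in the quote unsurprising.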

Another interesting article, by Eliza Strickland, describes some of the efforts under way to reverse engineer animal intelligence in order to build true artificial intelligence, including a part about the work of David Cox, whose team trains rats to perform specific tasks and then analyses their brains by slicing and imaging them:

“Then the brain nugget comes back to the Harvard lab of Jeff Lichtman, a professor of molecular and cellular biology and a leading expert on the brain’s connectome. Lichtman’s team takes that 1 mm³ of brain and uses the machine that resembles a deli slicer to carve 33,000 slices, each only 30 nanometers thick. These gossamer sheets are automatically collected on strips of tape and arranged on silicon wafers. Next the researchers deploy one of the world’s fastest scanning electron microscopes, which slings 61 beams of electrons at each brain sample and measures how the electrons scatter. The refrigerator-size machine runs around the clock, producing images of each slice with 4-nm resolution.”
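The data volumes implied by those numbers are staggering. A rough calculation (the voxel dimensions come from the passage above; the rest is my own arithmetic, and the one-byte-per-voxel figure is my assumption):

    # Rough arithmetic: voxel dimensions from the quoted passage, the rest assumed
    # (one byte per voxel is an assumption, for the sake of a round figure).
    side_nm = 1_000_000                  # 1 mm expressed in nanometres
    lateral_nm = 4                       # in-plane imaging resolution
    slice_nm = 30                        # slice thickness

    pixels_per_slice = (side_nm / lateral_nm) ** 2    # ~6.25e10 pixels per slice
    num_slices = side_nm / slice_nm                   # ~33,000 slices
    voxels = pixels_per_slice * num_slices            # ~2e15 voxels in total

    print(f"{num_slices:.0f} slices, {voxels:.2e} voxels, "
          f"~{voxels / 1e15:.1f} PB at one byte per voxel")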

Other approaches are even more ambitious. George Church, a well-known researcher in biology and bioinformatics, uses sequencing technologies to efficiently obtain large-scale, detailed information about brain structure:

“Church’s method isn’t affected by the length of axons or the size of the brain chunk under investigation. He uses genetically engineered mice and a technique called DNA bar coding, which tags each neuron with a unique genetic identifier that can be read out from the fringy tips of its dendrites to the terminus of its long axon. “It doesn’t matter if you have some gargantuan long axon,” he says. “With bar coding you find the two ends, and it doesn’t matter how much confusion there is along the way.” His team uses slices of brain tissue that are thicker than those used by Cox’s team—20 μm instead of 30 nm—because they don’t have to worry about losing the path of an axon from one slice to the next. DNA sequencing machines record all the bar codes present in a given slice of brain tissue, and then a program sorts through the genetic information to make a map showing which neurons connect to one another.”
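A toy sketch of the barcode idea (entirely illustrative; the barcode strings and the pairing step below are invented for the example, not Church’s actual pipeline): if every neuron carries a unique barcode, then each pair of barcodes read out together at a junction identifies a connection, no matter which slice it falls in or how long the axon is.

    from collections import defaultdict

    # Toy sketch, purely illustrative (not Church's actual pipeline): every read
    # pairs the barcode of a sending neuron with the barcode of a receiving neuron,
    # so building the map is just a matter of grouping the pairs.
    synapse_reads = [                      # hypothetical (pre, post) barcode pairs
        ("ACGT01", "TTAG07"),
        ("ACGT01", "GGCA03"),
        ("TTAG07", "GGCA03"),
        ("ACGT01", "TTAG07"),              # a duplicate read of the same connection
    ]

    connectome = defaultdict(set)
    for pre, post in synapse_reads:
        connectome[pre].add(post)

    for pre, posts in sorted(connectome.items()):
        print(pre, "->", sorted(posts))

The point of the sketch is only that the mapping reduces to sorting and matching barcodes, which, as the quoted passage notes, is exactly what sequencing machines and software are good at.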

There is also a piece on the issue of AI and consciousness, in which Christof Koch and Giulio Tononi describe their (more than dubious, in my humble opinion) theory of how Integrated Information Theory can be applied to the question of whether we can quantify machine consciousness.

The issue also includes interesting quotes and predictions by famous visionaries, such as Ray Kurzweil, Carver Mead, Nick Bostrom and Rodney Brooks, among others.

Images from the special issue of IEEE Spectrum.

To Be a Machine: Adventures Among Cyborgs, Utopians, Hackers, and the Futurists Solving the Modest Problem of Death

Mark O’Connell’s witty, insightful and sometimes deeply moving account of his research on the topic of transhumanism deserves a place on the bookshelf of anyone interested in the future of humanity. Reading To Be a Machine is a delightful trip through the ideals, technologies, places and characters involved in transhumanism, the idea that science and technology will one day transform humans into immortal, computer-based lifeforms.

For reasons that are not totally clear to me, transhumanism remains mostly a fringe culture, limited to a few futurists, off-the-mainstream scientists and technology nuts. As shared fictions go (to use Yuval Harari’s terminology), I would imagine transhumanism is one idea whose time has come. However, it remains mostly unknown to the general public. While humanists believe that the human person, with his or her desires, choices and fears, should be the most important value to be preserved by a society (check my review of Homo Deus), transhumanists believe that biologically based intelligence is imperfect, exists purely for historical reasons (evolution, that is) and will go away as soon as we move intelligence onto other computational supports, more robust than our frail bodies.

O’Connell, himself a hard-core humanist, as becomes clear from reading between the lines of this book, pursued a deep, almost forensic, investigation into what transhumanists are up to. In the process, he talks with many unusual individuals involved in the transhumanist saga, from Max More, who runs Alcor, a company that, in exchange for a couple hundred thousand dollars, will preserve your body for the future in liquid nitrogen (or 80k for just the head), to Aubrey de Grey, a well-known scientist working on life extension technologies, who argues that we should all be working on this problem. In de Grey’s words, cited by O’Connell, “aging is a human disaster on an unimaginably vast scale, a massacre, a methodical and comprehensive annihilation of every single person that ever lived”. These are just two of the dozens of fascinating characters in the book, interviewed in person by O’Connell.

The narrative is gripping, hilarious at times, but also moving and compelling, not least because O’Connell himself provides deep insights into the issues the book discusses. The characters in the book are at once alien and deeply human, as they are only trying to overcome the limits of our bodies. Deservedly, the book has been getting excellent reviews from many sources.

In the end, one gets the idea that transhumanists are crazy, maybe, but not nearly as crazy as all other believers in immortality, be it by divine intervention, by reincarnation, or by any other mechanisms so ingrained in mainstream culture.