The mind of a fly

Researchers from the Howard Hughes Medical Institute, Google, and other institutions have published the neuron-level connectome of a significant part of the brain of the fruit fly, which they call the hemibrain. This may become one of the most significant advances in our understanding of the detailed structure of complex brains since the 302-neuron connectome of C. elegans was published in 1986 by a team headed by Sydney Brenner, in a famous article with the somewhat whimsical subtitle “The mind of a worm”. Both efforts used an approach based on cutting the brains into very thin slices, followed by scanning electron microscopy and processing of the resulting images to obtain the 3D structure of the brain.

The neuron-level connectome of C. elegans was obtained after a painstaking effort, lasting decades, of manual annotation of the images obtained from the thousands of slices imaged using electron microscopy. As the brain of Drosophila melanogaster, the fruit fly, is thousands of times more complex, such an effort would have required several centuries if done by hand. Therefore, Google’s machine learning algorithms were trained to identify sections of neurons, including axons, cell bodies, and dendritic trees, as well as synapses and other components. After extensive training, the millions of images that resulted from the serial electron microscopy procedure were automatically annotated by the machine learning algorithms, enabling the team to complete, in just a few years, the detailed neuron-level connectome of a significant section of the fly brain, which includes roughly 25,000 neurons and 20 million synapses.

The results, published in the first of a number of articles, can be freely analyzed by anyone interested in the way a fly thinks. A Google account can be used to log in to the neuPrint explorer, and an interactive exploration of the 3D electron microscopy images is also available through neuroglancer. Extensive non-technical coverage by the media is also widely available. See, for instance, the article in The Economist or the piece in The Verge.
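For readers who prefer programmatic access over the web interface, the data can also be queried from Python. The sketch below is a minimal example, assuming the neuprint-python client library; the dataset version, the token placeholder, and the neuron-type pattern used in the query are illustrative assumptions, not prescriptions.

```python
# A minimal sketch of querying the hemibrain connectome, assuming the
# neuprint-python library (pip install neuprint-python). The dataset name,
# token, and neuron-type pattern below are illustrative assumptions.
from neuprint import Client, NeuronCriteria, fetch_neurons

# An API token is obtained by logging in to neuprint.janelia.org with a Google account.
client = Client('neuprint.janelia.org', dataset='hemibrain:v1.0', token='YOUR_TOKEN')

# Fetch mushroom-body output neurons, with their pre- and postsynaptic counts.
criteria = NeuronCriteria(type='MBON.*', regex=True)
neuron_df, roi_counts_df = fetch_neurons(criteria)

print(neuron_df[['bodyId', 'type', 'pre', 'post']].head())
```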

Image from the HHMI Janelia Research Campus site.

In the theater of consciousness

Bernard Baars has been one of the few neuroscientists who have dared to face the central problem of consciousness head-on. This 1997 book, which follows his first and most popular book, “A Cognitive Theory of Consciousness”, aims to shed some light on that most interesting of phenomena: the emergence of conscious reasoning from the workings of atoms and molecules that follow the laws of physics. It is one of his most relevant works and supports the Global Workspace Theory (GWT), one of the few existing theories that attempt to describe the phenomenon of consciousness (the other being Integrated Information Theory, IIT).

Baars’ work is probably not as widely known as it deserves, even though he is a famous author and neuroscientist. Unlike several other approaches, by authors as well-known as Daniel Dennett and Douglas Hofstadter, Baars tries to connect actual neuroscience knowledge with what we know about the phenomenon of consciousness.

He does not believe consciousness is an illusion, as several other authors (Dennett and Nørretranders, for instance) have argued. Instead, he argues that specific phenomena that occur in the cortex give rise to consciousness, and he provides evidence that such is indeed the case. He argues for a principled approach to the study of consciousness, treating the phenomenon as a variable and looking for pairs of situations that are similar in most respects but differ in whether consciousness is present.

He proposes a theater metaphor to model the way consciousness arises and provides some evidence that this may be a workable metaphor for understanding exactly what goes on in the brain when conscious behavior occurs. He presents evidence, from neuroimaging and from specific dysfunctions of the brain, that the theater metaphor may indeed serve as the basis for the creation of actual conscious, synthetic systems. This work is today more relevant than ever, as we rapidly approach the limits of what can be learned with deep neural networks, which are not only unconscious but also unaware of what they are learning. Further advances in learning and in AI may depend critically on our ability to understand what consciousness is and how it can be used to make the learning of abstract concepts possible.

I Am a Strange Loop – by Douglas Hofstadter

Douglas Hofstadter has always been fond of recursion and self-referential loops, the central topic of his acclaimed “Gödel, Escher, Bach”. In his 2007 book, “I Am a Strange Loop”, Hofstadter goes even deeper into the idea that self-referential loops are the key ingredient that explains consciousness and self-awareness. The idea that consciousness is the result of our ability to look inside ourselves, and to model our own selves in the world, is explored in this book, together with a number of related issues.

To Hofstadter, Gödel’s theorem, and the way Gödel showed that any sufficiently complex mathematical system can be used to assert things about itself, is strongly related to our ability to reflect on our own selves, the phenomenon that, according to the author, creates consciousness.

Hofstadter uses the terms “soul” and “consciousness” almost interchangeably, meaning that, to him, our soul and our consciousness – our inner light – are one and the same. Other animals, such as dogs or cats (but not mosquitoes), may have souls, although “smaller” and less complex than ours. One of the strongest ideas of the book, much cherished by the author, is that your soul is mostly contained within your brain but is also present, at varying and lower levels of fidelity, in the brains of other people who know you and who carry models of you inside their own brains.

In the process of describing these ideas, Hofstadter also dispatches a few “sacred cows”: the idea that “zombies” are possible, even in principle; the “inverted spectrum” conundrum (is your red the same as my red?); and the idea of free will, which to him is impossible.

The Ancient Origins of Consciousness

The Ancient Origins of Consciousness, by Todd Feinberg and Jon Mallatt, published by MIT Press, addresses the question of the rise of consciousness in living organisms from three different viewpoints: the philosophical, the neurobiological, and the neuroevolutionary.

From a philosophical standpoint, the question is whether consciousness, i.e., subjective experience, can even be explained by an objective scientific theory. The so-called “hard problem” of consciousness, in the words of David Chalmers, may forever remain outside the realm of science, since we may never know how physical mechanisms in the brain create the subjective experience that gives rise to consciousness. The authors disagree with this pessimistic assessment by Chalmers and argue that there is biological and evolutionary evidence that consciousness can be studied objectively. This is the evidence they set out to present in the book.

Although the book follows this three-pronged approach, it is most interesting when describing and analyzing the evolutionary history of the neurological mechanisms that ended up creating consciousness in humans and, presumably, in other mammals. The authors argue that, starting at the very beginning, with the Cambrian explosion 540 million years ago, animals may have exhibited some kind of conscious experience. The first vertebrates, which appeared during this period, already exhibited some distinctive anatomical telltales of conscious experience.

Outside the vertebrates, the question is even more complex, but the authors point to evidence that some arthropods and cephalopods may also exhibit behaviors that signal consciousness (a point poignantly made in another recent book, Other Minds and Alien Intelligences).

Overall, one is left convinced that consciousness can be studied scientifically and that there is significant evidence that graded versions of it have been present for hundreds of millions of years in our distant ancestors and long-removed cousins.

MIT distances itself from Nectome, a mind uploading company

The MIT Media Lab, a unit of MIT, has decided to sever its ties with Nectome, a startup that proposes to make available a technology that processes and chemically preserves a brain, down to its most minute details, in order to make it possible, at least in principle, to simulate that brain and upload your mind sometime in the future.

According to the MIT news release, “MIT’s connection to the company came into question after MIT Technology Review detailed Nectome’s promotion of its ‘100 percent fatal’ technology” in an article posted on the MIT Technology Review site.

As reported in this blog, Nectome claims that by preserving the brain it may be possible, one day, “to digitize your preserved brain and use that information to recreate your mind”. Nectome acknowledges, however, that the technology is fatal to the brain donor and that there are no guarantees that future recovery of the memories, knowledge, and personality will be possible.

Detractors have argued that the technology is not sound, since the ability to simulate a preserved brain is at least many decades in the future and may even be impossible in principle. The criticisms were, however, mostly based on the argument that the whole enterprise is profoundly unethical.

This kind of discussion, between proponents of technologies aimed at performing whole brain emulation sometime in the future and detractors who argue that such an endeavor is fundamentally flawed, has occurred in the past, most notably in the 2014 controversy concerning the objectives of the Human Brain Project. In that controversy, critics argued that the goal of a large-scale simulation of the brain was premature and unsound, and that funding should be redirected towards more conventional approaches to the understanding of brain function. Supporters of the Human Brain Project approach argued that reconstructing and simulating the human brain is an important objective in itself, which will bring many benefits and advance our knowledge of the brain and of the mind.

Picture by the author.

The Computer and the Brain

The Computer and the Brain, first published in 1958, is a delightful little book by John von Neumann, his attempt to compare two very different information processing devices: computers and brains. Although written more than sixty years ago, it retains more than historical interest, even though it addresses two topics that have developed enormously in the decades since von Neumann’s death.

John von Neumann’s genius comes through very clearly in this essay. Sixty years ago, very few people knew what a computer was, and probably even fewer had any idea of how the brain performs its magic. This book, written just a few years after the invention of the transistor (by Bardeen and Brattain) and the discovery of the membrane mechanisms that explain the electrical behavior of neurons (by Hodgkin and Huxley), nonetheless compares, in very clear terms, the relative computational power of computers and brains.

Von Neumann’s aim is to compare many of the characteristics of the processing devices used by computers (vacuum tubes and transistors) with those used by the brain (neurons). His goal is an objective comparison of the two technologies in terms of their ability to process information. He addresses speed, size, memory, and other characteristics of the two types of information processing devices.

One of the central and (to me) most interesting parts of the book is the comparison of artificial information processing devices (vacuum tubes and transistors) with natural information processing devices (neurons), in terms of speed and size.

Von Neumann concludes that vacuum tubes and transistors are faster than neurons by a factor of 10,000 to 100,000, while occupying about 1,000 times more space (with the technologies of the day). Altogether, if one assumes that speed can be traded for the number of devices (for instance, by reusing electronic devices to perform computations that, in the brain, are performed by slower but independent neurons), his comparisons lead to the conclusion (not explicit in the book, I must add) that an electronic computer the size of a human brain would be one to two orders of magnitude less powerful than the human brain itself.

John von Neumann could not have predicted, in 1957, that transistors would be packed, by the billions, on integrated circuits no larger than a postage stamp. If one uses the numbers that correspond to the technologies of today, one is led to conclude that a modern CPU (such as the Intel Core i7), with a billion transistors operating in the nanosecond range, is a few orders of magnitude (10,000 times) more powerful than the human brain, with its hundred billion neurons operating in the millisecond range.

Of course, one has to consider, as John von Neumann also wrote, that a neuron is considerably more complex and can perform more complex computations than a transistor. But even if one takes that into account and assumes that a transistor is roughly equivalent to a synapse in raw computing power, one gets the final result that the human brain and an Intel Core i7 have about the same raw processing power.
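Stated as back-of-the-envelope arithmetic, the comparison runs roughly as follows. All the device counts and rates below are order-of-magnitude assumptions of mine, not figures taken from the book.

```python
# Back-of-the-envelope arithmetic behind the comparison above. All device
# counts and switching rates are order-of-magnitude assumptions.

cpu_transistors = 1e9      # a modern CPU such as an Intel Core i7
cpu_rate_hz = 1e9          # transistors switch in the nanosecond range

brain_neurons = 1e11       # roughly a hundred billion neurons
brain_synapses = 1e14      # on the order of 10^14 synapses (assumption)
neuron_rate_hz = 1e3       # neurons operate in the millisecond range

# Counting neurons as the brain's elementary devices:
cpu_ops = cpu_transistors * cpu_rate_hz       # ~1e18 switching events per second
brain_ops = brain_neurons * neuron_rate_hz    # ~1e14 events per second
print(f"CPU vs neurons: {cpu_ops / brain_ops:,.0f}x")       # 10,000x

# Counting synapses instead, with one transistor ~ one synapse:
brain_syn_ops = brain_synapses * neuron_rate_hz             # ~1e17 events per second
print(f"CPU vs synapses: {cpu_ops / brain_syn_ops:,.0f}x")  # ~10x, the same ballpark
```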

It is a sobering thought, one which von Neumann would certainly have liked to share.

Consciousness: Confessions of a Romantic Reductionist

Christof Koch, the author of “Consciousness: Confessions of a Romantic Reductionist”, is not only a renowned researcher in brain science but also the president of the Allen Institute for Brain Science, one of the foremost institutions in brain research. What he has to tell us about consciousness, and how he believes it is produced by the brain, is certainly of great interest to anyone interested in these topics.

However, the book is more than just another philosophical treatise on the issue of consciousness, as it is also a bit of an autobiography and an open window into Koch’s own consciousness.

At less than 200 pages (in the paperback edition), this book is indeed a good starting point for those interested in the centuries-old problem of mind-body duality and in how a physical object (the brain) creates such an ethereal thing as a mind. Koch clearly describes and addresses the central issue of why there is such a thing as consciousness in humans, and how it creates self-awareness, free will (maybe), and the qualia that characterize the subjective experiences each and (almost) every human has.

In Koch’s view, consciousness is not a thing that is either on or off. He ascribes different levels of consciousness to animals and even to less complex creatures and systems. Consciousness, he argues, arises because very complex systems have a high-dimensional state space, with a subjective experience corresponding to each configuration of that state space. In this view, computers and other complex systems can also exhibit some degree of consciousness, although a much smaller one than living entities, since they are much less complex.

He goes on to describe several approaches that have aimed at elucidating the complex feedback loops that have to exist in brains in order to create these complex state spaces. Modern experimental techniques can analyze the differences between awake (conscious) and asleep (unconscious) brains, and learn from these differences what exactly creates consciousness in a brain.

Parts of the book are more autobiographical, however. Koch describes his life-long efforts to address these questions, many of them developed together with Francis Crick, who remains a reference for him, both as a scientist and as a person. The final chapter is more philosophical and addresses other questions for which we have no answer yet, and may never have, such as “Why is there something instead of nothing?” or “Did an all-powerful God create the universe, 14 billion years ago, complete with the laws of physics, matter, and energy, or is this God simply a creation of man?”.

All in all, this is excellent reading, accessible to anyone interested in the topic but still deep and scientifically rigorous.

Portuguese Edition of The Digital Mind

IST Press, the publishing house of Instituto Superior Técnico, has just published the Portuguese edition of The Digital Mind, originally published by MIT Press.

The Portuguese edition, translated by Jorge Pereirinha Pires, follows the same organization as the original and has been reviewed by a number of sources. The back-cover reviews are by Pedro Domingos, Srinivas Devadas, Pedro Guedes de Oliveira, and Francisco Veloso.

A pre-publication excerpt appeared in the Público newspaper, under the title Até que mundos digitais nos levará o efeito da Rainha Vermelha (roughly, “To which digital worlds will the Red Queen effect take us”), making the first chapter of the book publicly available.

There are also some publicly available reviews and pieces about this edition, including an episode of a podcast and a review on the radio.

New technique for high resolution imaging of brain connections

MIT researchers have proposed a new technique that produces very high resolution images of the detailed connections between neurons in the human brain. Taeyun Ku, Justin Swaney, and Jeong-Yoon Park were the lead researchers of the work, published in a Nature Biotechnology article. They have developed a new technique for imaging brain tissue at multiple scales that yields unprecedentedly high-resolution images of significant regions of the brain, allowing them to detect the presence of proteins within cells and to determine the long-range connections between neurons.

The technique physically enlarges the tissue under observation, increasing its dimensions while preserving nearly all of the proteins within the cells; these proteins can then be labeled with fluorescent molecules and imaged.

The technique floods the brain tissue with acrylamide polymers, which end up forming a dense gel. The proteins are attached to this gel and, after they are denatured, the gel can be expanded to four or five times its original size. This makes it possible to image the blown-up tissue at a resolution much higher than would be possible with the original tissue.
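The arithmetic behind the resolution gain is straightforward: physical expansion divides the effective feature size the microscope needs to resolve. A minimal sketch follows, with an assumed diffraction-limited optical resolution; the actual figures in the article may differ.

```python
# Illustrative arithmetic only: physically expanding the tissue divides the
# effective feature size the microscope must resolve. The optical resolution
# below is a typical diffraction-limited value, not a figure from the article.

optical_resolution_nm = 300.0   # typical light-microscope limit (assumption)
expansion_factor = 4.5          # the gel expands four to five times, as above

effective_resolution_nm = optical_resolution_nm / expansion_factor
print(f"Effective resolution: ~{effective_resolution_nm:.0f} nm")   # ~67 nm
```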

Techniques like this one create the conditions to advance reverse engineering efforts that could lead to a better understanding of the way neurons connect with each other, creating the complex structures in the brain.

Image credit: MIT


IEEE Spectrum special issue on whether we can duplicate a brain

Maybe you have read The Digital Mind, or The Singularity is Near, by Ray Kurzweil, or other similar books, thought it all a bit far-fetched, and wondered whether the authors are bonkers or just dreamers.

Wonder no more. The latest issue of the flagship publication of the Institute of Electrical and Electronics Engineers, IEEE Spectrum, is dedicated to the interesting and timely question of whether we can copy the brain and use it as a blueprint for intelligent systems. This issue, which you can access here, includes many interesting articles, definitely worth reading.

I cannot describe here, even briefly, the many interesting articles in this special issue, but it is worthwhile to read the introduction, on the prospect of near-future intelligent personal assistants, or the piece on how we could build an artificial brain right now, by Jennifer Hasler.

Other articles address the question of how computationally expensive it is to simulate a brain at the right level of abstraction. Karlheinz Meier’s article on this topic explains very clearly why present-day simulations are so slow:

“The big gap between the brain and today’s computers is perhaps best underscored by looking at large-scale simulations of the brain. There have been several such efforts over the years, but they have all been severely limited by two factors: energy and simulation time. As an example, consider a simulation that Markus Diesmann and his colleagues conducted several years ago using nearly 83,000 processors on the K supercomputer in Japan. Simulating 1.73 billion neurons consumed 10 billion times as much energy as an equivalent size portion of the brain, even though it used very simplified models and did not perform any learning. And these simulations generally ran at less than a thousandth of the speed of biological real time.

Why so slow? The reason is that simulating the brain on a conventional computer requires billions of differential equations coupled together to describe the dynamics of cells and networks: analog processes like the movement of charges across a cell membrane. Computers that use Boolean logic—which trades energy for precision—and that separate memory and computing, appear to be very inefficient at truly emulating a brain.”
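To make the quoted point concrete, here is a minimal sketch of such a simulation, using the leaky integrate-and-fire model, one of the simplest neuron models available. Every neuron is a differential equation that must be integrated in sub-millisecond steps; all parameters here are illustrative textbook-style values, and projects like Diesmann's use far more detailed models and vastly more neurons.

```python
import numpy as np

# A minimal sketch of why brain simulation is expensive: even the simplest
# leaky integrate-and-fire model makes every neuron a differential equation
# integrated in sub-millisecond steps. All parameter values are illustrative.

n_neurons = 1_000        # Diesmann and colleagues simulated 1.73 billion
dt = 1e-4                # 0.1 ms integration step, in seconds
tau = 0.02               # membrane time constant: 20 ms
v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0   # membrane potentials, mV

v = np.full(n_neurons, v_rest)
rng = np.random.default_rng(0)

for step in range(10_000):                        # one second of biological time
    i_syn = rng.normal(15.0, 5.0, n_neurons)      # random synaptic drive, in mV
    dv = (-(v - v_rest) + i_syn) / tau            # the membrane differential equation
    v += dv * dt                                  # forward Euler integration step
    fired = v >= v_thresh
    v[fired] = v_reset                            # reset the neurons that spiked
```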

Another interesting article, by Eliza Strickland, describes some of the efforts taking place to reverse engineer animal intelligence in order to build true artificial intelligence, including a section about the work of David Cox, whose team trains rats to perform specific tasks and then analyzes their brains by slicing and imaging them:

“Then the brain nugget comes back to the Harvard lab of Jeff Lichtman, a professor of molecular and cellular biology and a leading expert on the brain’s connectome. ­Lichtman’s team takes that 1 mm3 of brain and uses the machine that resembles a deli slicer to carve 33,000 slices, each only 30 nanometers thick. These gossamer sheets are automatically collected on strips of tape and arranged on silicon wafers. Next the researchers deploy one of the world’s fastest scanning electron microscopes, which slings 61 beams of electrons at each brain sample and measures how the electrons scatter. The refrigerator-size machine runs around the clock, producing images of each slice with 4-nm resolution.”

Other approaches are even more ambitious. George Church, a well-known researcher in biology and bioinformatics, uses sequencing technologies to efficiently obtain large-scale, detailed information about brain structure:

“Church’s method isn’t affected by the length of axons or the size of the brain chunk under investigation. He uses genetically engineered mice and a technique called DNA bar coding, which tags each neuron with a unique genetic identifier that can be read out from the fringy tips of its dendrites to the terminus of its long axon. “It doesn’t matter if you have some gargantuan long axon,” he says. “With bar coding you find the two ends, and it doesn’t matter how much confusion there is along the way.” His team uses slices of brain tissue that are thicker than those used by Cox’s team—20 μm instead of 30 nm—because they don’t have to worry about losing the path of an axon from one slice to the next. DNA sequencing machines record all the bar codes present in a given slice of brain tissue, and then a program sorts through the genetic information to make a map showing which neurons connect to one another.”
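The “sorting” step at the end of the quote is, in essence, bookkeeping over barcodes. The toy sketch below illustrates the idea under a strong simplifying assumption of mine, namely that sequencing yields (presynaptic, postsynaptic) barcode pairs; the barcodes shown are made up, and the real pipeline is far more involved.

```python
from collections import defaultdict

# A toy illustration of the sorting step described above, under the
# simplifying assumption that each sequenced junction yields a pair
# (presynaptic barcode, postsynaptic barcode). It only shows how unique
# barcodes turn connectivity into a bookkeeping problem rather than an
# image-tracing problem.

reads = [
    ("ACGTTAGC", "TTGACCAG"),   # hypothetical barcode pairs from sequencing
    ("ACGTTAGC", "GGCATTCA"),
    ("TTGACCAG", "GGCATTCA"),
    ("ACGTTAGC", "TTGACCAG"),   # repeated reads of the same connection
]

connectome = defaultdict(set)
for pre, post in reads:
    connectome[pre].add(post)   # neuron barcode -> set of target barcodes

for pre, targets in sorted(connectome.items()):
    print(f"{pre} -> {sorted(targets)}")
```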

There is also a piece on the issue of AI and consciousness, in which Christof Koch and Giulio Tononi describe their (more than dubious, in my humble opinion) theory about the application of Integrated Information Theory to the question: can we quantify machine consciousness?

The issue also includes interesting quotes and predictions by famous visionaries, such as Ray Kurzweil, Carver Mead, Nick Bostrom, and Rodney Brooks, among others.

Images from the special issue of IEEE Spectrum.