Instantiation, another great collection of Greg Egan’s short stories

Greg Egan is a master of the short story. His Axiomatic collection is one of my favorites. This new collection keeps Egan’s knack for communicating deep concepts in few words and dives deeper into the concepts of virtual reality and the impact of technology on society.

The first story, The discrete charm of the Turing machine, could hardly be more relevant these days, when discussions on the economic impact of Artificial Intelligence are taking place everywhere. But the main connecting thread of the book is the series of stories in which sentient humans who are, in fact, characters in virtual-reality games plot to break free of their enslaved condition. To find out whether they succeed, you will have to read the book yourself!

PS: As a joke, I leave here a meme of unknown origin

The mind of a fly

Researchers from the Howard Hughes Medical Institute, Google, and other institutions have published the neuron-level connectome of a significant part of the brain of the fruit fly, which they called the hemibrain. This may become one of the most significant advances in our understanding of the detailed structure of complex brains since the 302-neuron connectome of C. elegans was published in 1986 by a team headed by Sydney Brenner, in a famous article with the somewhat whimsical subtitle The mind of a worm. Both efforts used the same basic approach: cut the brain into very thin slices, image the slices with scanning electron microscopy, and process the resulting images to obtain the 3D structure of the brain.

The neuron-level connectome of C. elegans was obtained through a painstaking, decades-long effort of manually annotating the images obtained from the thousands of slices imaged with electron microscopy. As the brain of Drosophila melanogaster, the fruit fly, is thousands of times more complex, such an effort would have required several centuries if done by hand. Instead, Google’s machine-learning algorithms were trained to identify sections of neurons, including axons, cell bodies, and dendritic trees, as well as synapses and other components. After extensive training, the millions of images produced by the serial electron microscopy procedure were automatically annotated by these algorithms, enabling the team to complete, in just a few years, the detailed neuron-level connectome of a significant section of the fly brain, which includes roughly 25,000 neurons and 20 million synapses.
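At the data level, a connectome of this kind is simply a very large directed, weighted graph: neurons are nodes and synapses are directed edges. A toy sketch in Python (the neuron names and counts are invented for illustration; the real hemibrain graph has roughly 25,000 nodes and 20 million edges):

```python
# A connectome represented as a directed, weighted graph:
# neurons are nodes, synapses are directed edges with counts.
# Toy data for illustration only -- not from the hemibrain dataset.
connectome = {
    "A": {"B": 3, "C": 1},   # neuron A synapses onto B (3x) and C (1x)
    "B": {"C": 2},
    "C": {"A": 1},
}

num_neurons = len(connectome)
num_synapses = sum(sum(targets.values()) for targets in connectome.values())

print(num_neurons, num_synapses)  # -> 3 7
```

The real dataset is served in essentially this form (as a queryable graph) by the neuPrint explorer mentioned below.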

The results, published in the first of a number of articles, can be freely analyzed by anyone interested in the way a fly thinks. A Google account can be used to log in to the neuPrint explorer, and an interactive exploration of the 3D electron microscopy images is also available with neuroglancer. Extensive non-technical coverage by the media is also widely available. See, for instance, the article in The Economist or the piece in The Verge.

Image from the HHMI Janelia Research Campus site.

The Fabric of Reality

The Fabric of Reality, a 1997 book by David Deutsch, is full of great ideas, most of them surprising and intriguing. The main argument is that explanations are the centerpiece of science and that four theories play an essential role in our understanding of the world: quantum theory, the theory of evolution, the theory of computation and epistemology (the theory of knowledge).

You may raise a number of questions about these particular choices, such as why the theory of relativity is not there, why the theory of evolution is not simply a consequence of other theories in physics, or what makes epistemology so special. You will have to read the book to find out, but the short answer is that not everything is physics and that theories at many levels are required to explain the world. Still, in physics, the most fundamental idea is quantum theory, and it has profound impacts on our understanding of the universe. Perhaps the most significant impact comes from the fact that (according to Deutsch) what we know about quantum theory implies that we live in a multiverse. Each time a quantum phenomenon can lead to more than one observable outcome, the universe splits into as many universes as there are possible outcomes, universes that exist simultaneously in the multiverse.

Although the scientific establishment views the multiverse theory with reservation, to say the least, to Deutsch, the multiverse is not just a theory, but the only possible explanation for what we know about quantum physics (he dismisses the Copenhagen interpretation as nonsense). Armed with these four theories, and the resulting conclusion that we live in a multiverse, Deutsch goes on to address thought-provoking questions, such as:

  • Is life a small thing at the scale of the universe or, on the contrary, is it the most important thing in it?
  • Can we have free will, in a deterministic universe? And in the multiverse?
  • Do computers strictly more powerful than Turing machines exist, and how do they work?
  • Can mathematical proofs provide us with absolute certainties about specific mathematical statements?
  • Is time travel possible, at least in principle, either in the physical world or in a virtual reality simulator?
  • Will we (or our descendants, or some other species) eventually become gods, when we reach the Omega point?

The idea of the multiverse is required to answer most, if not all, of these questions. Deutsch is certainly not parsimonious with universes when he answers questions and solves problems. The multiverse allows you to have free will, solves the paradoxes of time travel, and makes quantum computers possible, among many other things. One example of Deutsch’s generous use of universes is the following passage:

When a quantum factorization engine is factorizing a 250-digit number, the number of interfering universes will be of the order of 10 to the 500. This staggeringly large number is the reason why Shor’s algorithm makes factorization tractable. I said that the algorithm requires only a few thousand arithmetic operations. I meant, of course, a few thousand operations in each universe that contributes to the answer. All those computations are performed in parallel, in different universes, and share their results through interference.
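Deutsch’s “10 to the 500” is easy to verify with back-of-the-envelope arithmetic: a 250-digit number N is of order 10^250, and Shor’s algorithm operates on a register holding a superposition over roughly N^2 basis states. A quick sketch (my own illustration, not from the book):

```python
# Back-of-the-envelope check of Deutsch's "10 to the 500" figure.
# A 250-digit number N is about 10**250; Shor's algorithm uses a
# register of about 2*log2(N) qubits, i.e. a superposition over
# roughly N**2 basis states.
digits = 250
N = 10 ** digits                # order of magnitude of the number
superposition_size = N ** 2    # ~number of simultaneously interfering states

print(len(str(superposition_size)) - 1)  # -> 500, i.e. 10**500
```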

The fact that Deutsch’s arguments depend so heavily on the multiverse idea makes this book much more about the multiverse than about the other topics he addresses. After all, if the multiverse theory is wrong, many of Deutsch’s explanations collapse, interesting as they may be.

Still, the book is full of great ideas, makes for some interesting reading, and presents many interesting concepts, some of them further developed in other books by Deutsch, such as The Beginning of Infinity.

Virtually Human: the promise of digital immortality

Martine Rothblatt’s latest book, Virtually Human: the promise – and the peril – of digital immortality, recommended by the likes of Craig Venter and Ray Kurzweil, is based on an interesting premise, one that looks quite reasonable in principle.

Each one of us leaves behind such a large digital trace that it could be used, at least in principle, to teach a machine to behave like the person that generated the trace. In fact, if you put together all the pictures, videos, emails and messages that you generate in a lifetime, together with additional information like GPS coordinates, phone conversations, and social network info, there should be enough information for the right software to learn to behave just like you.

Rothblatt imagines that all this information will be stored in what she calls a mindfile and that such a mindfile could be used by software (mindware) to create mindclones, software systems that would think, behave, and act like the original human whose data created the mindfile. Other, similar systems not based on a copy of a human original are called bemans, and they raise similar questions. Would such systems have rights and responsibilities, just like humans? Rothblatt argues forcefully that society will have to recognize them as persons, sooner or later. Otherwise, we would witness a return to situations that modern societies have already abandoned, such as slavery and other practices that disrespect basic human rights (in this case, mindclone and beman rights).

Most of the book is dedicated to analyzing the social, ethical, and economic consequences of an environment where humans live with mindclones and bemans. This analysis is entertaining and comprehensive, ranging over subjects as diverse as the economy, human relations, families, psychology, and even religion. If one assumes the technology to create mindclones will come to exist, thinking through its consequences is interesting and entertaining.

However, the book falls short in that it does not provide any convincing evidence that the technology will come to exist, in any form similar to the one so easily assumed by the author. We do not know how to create mindware that could interpret a mindfile and use it to create a conscious, sentient, self-aware system whose behavior is indistinguishable from the original’s. Nor are we likely to find out soon how such mindware could be designed. And yet Rothblatt seems to think that such a technology is just around the corner, maybe just a few decades away. All in all, it sounds more like (poor) science fiction than the shape of things to come.

Crystal Nights

Exactly 80 years ago, Kristallnacht (the night of broken glass) took place in Germany, on the night of the 9th to the 10th of November 1938. Jews were persecuted and killed, and their property was destroyed, in an event that marks an important point in the rise of the antisemitism that characterized Nazi Germany. The name comes from the many windows of Jewish-owned stores broken during that night.

Greg Egan, one of my favorite science fiction writers, wrote a short story inspired by that same night, entitled Crystal Nights. This (very) short story is publicly available (you can find it here) and is definitely worth reading. I will not spoil the ending, but it has to do with computers and singularities. The story was also included in a book that features other short stories by Greg Egan.

If you like this story, maybe you should check other books by Egan, such as Permutation City, Diaspora or Axiomatic (another collection of short stories).

IEEE Spectrum special issue on whether we can duplicate a brain

Maybe you have read The Digital Mind, or Ray Kurzweil’s The Singularity Is Near, or other similar books, thought it all a bit farfetched, and wondered whether the authors are bonkers or just dreamers.

Wonder no more. The latest issue of the flagship publication of the Institute of Electrical and Electronics Engineers, IEEE Spectrum, is dedicated to the interesting and timely question of whether we can copy the brain and use it as a blueprint for intelligent systems. This issue, which you can access here, includes many interesting articles, definitely worth reading.

I cannot describe here, even briefly, the many interesting articles in this special issue, but it is worth reading the introduction, on the prospect of near-future intelligent personal assistants, or the piece on how we could build an artificial brain right now, by Jennifer Hasler.

Other articles address the question of how computationally expensive it is to simulate a brain at the right level of abstraction. Karlheinz Meier’s article on this topic explains very clearly why present-day simulations are so slow:

“The big gap between the brain and today’s computers is perhaps best underscored by looking at large-scale simulations of the brain. There have been several such efforts over the years, but they have all been severely limited by two factors: energy and simulation time. As an example, consider a simulation that Markus Diesmann and his colleagues conducted several years ago using nearly 83,000 processors on the K supercomputer in Japan. Simulating 1.73 billion neurons consumed 10 billion times as much energy as an equivalent size portion of the brain, even though it used very simplified models and did not perform any learning. And these simulations generally ran at less than a thousandth of the speed of biological real time.

Why so slow? The reason is that simulating the brain on a conventional computer requires billions of differential equations coupled together to describe the dynamics of cells and networks: analog processes like the movement of charges across a cell membrane. Computers that use Boolean logic—which trades energy for precision—and that separate memory and computing, appear to be very inefficient at truly emulating a brain.”
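Meier’s figure is roughly consistent with simple arithmetic. Using assumed round numbers that are not from the article (a whole brain dissipating about 20 W across some 86 billion neurons, and the K computer drawing about 12.7 MW), the energy ratio for a thousandfold-slowed simulation of 1.73 billion neurons indeed comes out near ten billion:

```python
# Rough consistency check of the "10 billion times as much energy" claim.
# Assumed figures (not from the article): brain power ~20 W,
# ~8.6e10 neurons in a human brain, K computer power ~12.7 MW.
brain_power = 20.0       # watts, whole brain
brain_neurons = 8.6e10
k_power = 12.7e6         # watts, K supercomputer
sim_neurons = 1.73e9
slowdown = 1000          # simulation ran ~1000x slower than real time

# Power of the equivalent-size piece of brain:
brain_fraction_power = brain_power * sim_neurons / brain_neurons  # ~0.4 W

# Energy ratio: the simulator needs `slowdown` seconds of its power
# for every biological second the brain fraction runs.
energy_ratio = k_power * slowdown / brain_fraction_power

print(f"{energy_ratio:.1e}")  # ~3e10, i.e. of the order of 10 billion
```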

Another interesting article, by Eliza Strickland, describes some of the efforts under way to reverse engineer animal intelligence in order to build true artificial intelligence, including a section on the work of David Cox, whose team trains rats to perform specific tasks and then analyzes their brains by slicing and imaging them:

“Then the brain nugget comes back to the Harvard lab of Jeff Lichtman, a professor of molecular and cellular biology and a leading expert on the brain’s connectome. ­Lichtman’s team takes that 1 mm3 of brain and uses the machine that resembles a deli slicer to carve 33,000 slices, each only 30 nanometers thick. These gossamer sheets are automatically collected on strips of tape and arranged on silicon wafers. Next the researchers deploy one of the world’s fastest scanning electron microscopes, which slings 61 beams of electrons at each brain sample and measures how the electrons scatter. The refrigerator-size machine runs around the clock, producing images of each slice with 4-nm resolution.”
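The numbers in the quote are self-consistent: 33,000 slices at 30 nm each stack back up to essentially the full millimeter of tissue, as a one-line check shows (my own arithmetic, not from the article):

```python
# Sanity check: do 33,000 slices of 30 nm add back up to ~1 mm?
slices = 33_000
thickness_nm = 30
total_mm = slices * thickness_nm * 1e-6   # nm -> mm

print(total_mm)  # 0.99 mm, essentially the full 1 mm cube of tissue
```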

Other approaches are even more ambitious. George Church, a well-known researcher in biology and bioinformatics, uses sequencing technologies to efficiently obtain large-scale, detailed information about brain structure:

“Church’s method isn’t affected by the length of axons or the size of the brain chunk under investigation. He uses genetically engineered mice and a technique called DNA bar coding, which tags each neuron with a unique genetic identifier that can be read out from the fringy tips of its dendrites to the terminus of its long axon. “It doesn’t matter if you have some gargantuan long axon,” he says. “With bar coding you find the two ends, and it doesn’t matter how much confusion there is along the way.” His team uses slices of brain tissue that are thicker than those used by Cox’s team—20 μm instead of 30 nm—because they don’t have to worry about losing the path of an axon from one slice to the next. DNA sequencing machines record all the bar codes present in a given slice of brain tissue, and then a program sorts through the genetic information to make a map showing which neurons connect to one another.”
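The sorting step Church describes can be pictured as a simple grouping problem: every read of the same barcode, in whatever slice it appears, belongs to the same neuron, so grouping reads by barcode reconstructs each neuron’s extent no matter how many slices its axon crosses. A toy sketch, with invented barcodes and slice indices:

```python
from collections import defaultdict

# Toy per-slice sequencing output: (slice_index, barcode) pairs.
# Barcodes are invented; in reality each is a unique DNA sequence
# tagging a single neuron.
observations = [
    (0, "ACGT"), (1, "ACGT"), (7, "ACGT"),   # one neuron spanning slices 0-7
    (2, "TTGA"), (3, "TTGA"),
    (5, "GGCC"),
]

# Group observations by barcode: each group is one neuron, and a gap
# between slices (e.g. 1 -> 7) does not break the reconstruction.
neurons = defaultdict(list)
for slice_idx, barcode in observations:
    neurons[barcode].append(slice_idx)

print(dict(neurons))
# -> {'ACGT': [0, 1, 7], 'TTGA': [2, 3], 'GGCC': [5]}
```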

There is also a piece on the issue of AI and consciousness, where Christof Koch and Giulio Tononi describe their (more than dubious, in my humble opinion) application of Integrated Information Theory to the question: can we quantify machine consciousness?

The issue also includes interesting quotes and predictions by famous visionaries, such as Ray Kurzweil, Carver Mead, Nick Bostrom, and Rodney Brooks, among others.

Images from the special issue of IEEE Spectrum.

The Digital Mind: How Science is Redefining Humanity

Following the release in the US, The Digital Mind, published by MIT Press, is now available in Europe, at an Amazon store near you (and possibly in other bookstores). The book covers the evolution of technology, leading towards the expected emergence of digital minds.

Here is a short rundown of the book, kindly provided by yours truly, the author.

New technologies have been introduced into human lives at an ever-increasing rate since the first significant advances took place with the cognitive revolution, some 70,000 years ago. Although electronic computers are recent and have been around for only a few decades, they represent just the latest way to process information and create order out of chaos. Before computers, the job of processing information was done by living organisms, which are nothing more than complex information-processing devices, created by billions of years of evolution.

Computers execute algorithms, sequences of small steps that, in the end, perform some desired computation, be it simple or complex. Algorithms are everywhere, and they have become an integral part of our lives. Evolution is, in itself, a complex and long-running algorithm that created all species on Earth. The most advanced of these species, Homo sapiens, was endowed with a brain that is the most complex information-processing device ever devised. Brains enable humans to process information in a way unparalleled by any other species, living or extinct, or by any machine. They provide humans with intelligence, consciousness and, some believe, even with a soul, a characteristic that makes humans different from all other animals and from any machine in existence.

But brains also enabled humans to develop science and technology to a point where it is possible to design computers with a power comparable to that of the human brain. Artificial intelligence will one day make it possible to create intelligent machines, and computational biology will one day enable us to model, simulate, and understand biological systems, and even complete brains, with unprecedented levels of detail. From these efforts, new minds will eventually emerge: minds that emanate from the execution of programs running on powerful computers. These digital minds may one day rival our own, become our partners, and replace humans in many tasks. They may usher in a technological singularity, a revolution in human society unlike any that has happened before. They may make humans obsolete, even a threatened species, or they may make us super-humans or demi-gods.

How will we create these digital minds? How will they change our daily lives? Will we recognize them as equals or will they forever be our slaves? Will we ever be able to simulate truly human-like minds in computers? Will humans transcend the frontiers of biology and become immortal? Will humans remain, forever, the only known intelligence in the universe?


Is mind uploading nearer than you might think?

A recent article published in The Guardian, an otherwise mainstream newspaper, openly discusses the possibility that mind uploading may become real in the near future. Mind uploading is based on the idea that the behavior of a brain can be emulated completely in a computer, ultimately making it possible to transport individual brains, and individual consciousnesses, into a program that emulates the behavior of the “uploaded” mind. Mind uploading represents, in practice, the surest route to immortality, far faster than any non-digital technology can hope to achieve in the foreseeable future.

This idea is not new, and the article makes an explicit reference to Hans Moravec’s book Mind Children, published by Harvard University Press in 1988. In fact, the topic has already been addressed by a large number of authors, including Ray Kurzweil, in The Singularity Is Near, Nick Bostrom, in Superintelligence, and even by me, in The Digital Mind.

The article contains a list of interesting sites and organizations, including CarbonCopies, a site dedicated to making whole brain emulation possible, founded by Randal A. Koene, and a reference to the 2045 Initiative, with similar goals, created by Dmitry Itskov.

The article, definitely worth reading, goes into some detail on the idea of “substrate-independent minds”, an idea clearly reminiscent of the concept of virtualization, so in vogue in today’s business world.

Picture source: The Guardian

Research platforms of Human Brain Project released

The Human Brain Project (HBP), a flagship project of the European Union, has just released the initial versions of its six Information and Communications Technology (ICT) platforms to users worldwide.

The six HBP Platforms are:

  • Neuroinformatics
  • Brain Simulation
  • High Performance Computing
  • Medical Informatics
  • Neuromorphic Computing
  • Neurorobotics


These platforms enable researchers to use the tools developed by the Human Brain Project to search and analyze neuroscience data, simulate brain sections and run complex simulations, search real data to understand similarities and differences among brain diseases, access computer systems that emulate brain microcircuits, and test virtual models of the brain by connecting them to simulated robot bodies and environments.
All the Platforms can be accessed via the HBP Collaboratory, a web portal where users can also find additional information.