The Fabric of Reality

The Fabric of Reality, a 1997 book by David Deutsch, is full of great ideas, most of them surprising and intriguing. The main argument is that explanations are the centerpiece of science and that four theories play an essential role in our understanding of the world: quantum theory, the theory of evolution, the theory of computation, and epistemology (the theory of knowledge).

You may raise a number of questions about these particular choices, such as why the theory of relativity is not included, why the theory of evolution is not simply a consequence of other theories in physics, or what makes epistemology so special. You will have to read the book to find out, but the short answer is that not everything is physics and that theories at many levels are required to explain the world. Still, in physics, the most fundamental idea is quantum theory, and it has profound implications for our understanding of the universe. Perhaps the most significant is that, according to Deutsch, what we know about quantum theory implies that we live in a multiverse. Each time a quantum phenomenon can lead to more than one observable result, the universe splits into as many universes as there are possible results, universes that exist simultaneously in the multiverse.

Although the scientific establishment views the multiverse theory with reservations, to say the least, for Deutsch the multiverse is not just a theory but the only possible explanation for what we know about quantum physics (he dismisses the Copenhagen interpretation as nonsense). Armed with these four theories, and the resulting conclusion that we live in a multiverse, Deutsch goes on to address thought-provoking questions, such as:

  • Is life a small thing at the scale of the universe or, on the contrary, is it the most important thing in it?
  • Can we have free will in a deterministic universe? And in the multiverse?
  • Do computers strictly more powerful than Turing machines exist, and how do they work?
  • Can mathematical proofs provide us with absolute certainties about specific mathematical statements?
  • Is time travel possible, at least in principle, either in the physical world or in a virtual reality simulator?
  • Will we (or our descendants, or some other species) eventually become gods, when we reach the Omega point?

The idea of the multiverse is required to answer most, if not all, of these questions. Deutsch is certainly not parsimonious when it comes to using universes to answer questions and solve problems. The multiverse allows you to have free will, solves the paradoxes of time travel, and makes quantum computers possible, among many other things. One example of Deutsch’s generous use of universes is the following passage:

When a quantum factorization engine is factorizing a 250-digit number, the number of interfering universes will be of the order of 10^500. This staggeringly large number is the reason why Shor’s algorithm makes factorization tractable. I said that the algorithm requires only a few thousand arithmetic operations. I meant, of course, a few thousand operations in each universe that contributes to the answer. All those computations are performed in parallel, in different universes, and share their results through interference.
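Deutsch’s 10^500 figure is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below is my own, and it assumes the standard textbook choice of roughly 2n qubits for the period-finding register of Shor’s algorithm (an assumption of mine, not something stated in the book):

    import math

    digits = 250
    n = math.ceil(digits * math.log2(10))  # a 250-digit number has ~831 bits
    qubits = 2 * n                         # period-finding register: ~2n qubits
    exponent = qubits * math.log10(2)      # express 2**qubits as a power of 10
    print(f"~{n} bits, {qubits} qubits, ~10^{exponent:.0f} states")

The roughly 1,662 qubits put the register in a superposition of about 2^1662, that is, about 10^500, computational paths, which is what Deutsch counts as interfering universes.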

The fact that Deutsch’s arguments depend so heavily on the multiverse idea makes this book much more about the multiverse than about the other topics he addresses. After all, if the multiverse theory is wrong, many of Deutsch’s explanations collapse, interesting as they may be.

Still, the book is full of great ideas, makes for engaging reading, and presents many thought-provoking concepts, some of them further developed in other books by Deutsch, such as The Beginning of Infinity.

Crystal Nights

Exactly 80 years ago, on the night of 9–10 November 1938, Kristallnacht (the Night of Broken Glass) took place in Germany. Jews were persecuted and killed, and their property was destroyed, in an event that is an important marker in the rise of the antisemitism that characterized Nazi Germany. The name comes from the many windows of Jewish-owned stores broken during that night.

Greg Egan, one of my favorite science fiction writers, wrote a short story inspired by that same night, entitled Crystal Nights. This (very) short story is publicly available (you can find it here) and is definitely worth reading. I will not spoil the ending here, but it has to do with computers and singularities. The story was also included in a book that features other short stories by Greg Egan.

If you like this story, maybe you should check out other books by Egan, such as Permutation City, Diaspora, or Axiomatic (another collection of short stories).

MIT distances itself from Nectome, a mind uploading company

The MIT Media Lab, a unit of MIT, decided to sever the ties that connected it with Nectome, a startup that proposes to make available a technology that processes and chemically preserves a brain, down to its most minute details, in order to make it possible, at least in principle, to simulate your brain and upload your mind, sometime in the future.

According to the MIT news release, “MIT’s connection to the company came into question after MIT Technology Review detailed Nectome’s promotion of its ‘100 percent fatal’ technology” in an article posted on its site.

As reported in this blog, Nectome claims that by preserving the brain it may be possible, one day, “to digitize your preserved brain and use that information to recreate your mind”. Nectome acknowledges, however, that the technology is fatal to the brain donor and that there are no guarantees that future recovery of the memories, knowledge, and personality will be possible.

Detractors have argued that the proposal is not sound, since simulating a preserved brain is at least many decades in the future and may even be impossible in principle. The criticisms were, however, mostly based on the argument that the whole enterprise is profoundly unethical.

This kind of discussion between proponents of technologies aimed at performing whole brain emulation, sometime in the future, and detractors who argue that such an endeavor is fundamentally flawed has occurred in the past, most notably in a 2014 controversy concerning the objectives of the Human Brain Project. In that controversy, critics argued that the goal of a large-scale simulation of the brain was premature and unsound, and that funding should be redirected towards more conventional approaches to the understanding of brain function. Supporters of the Human Brain Project approach argued that reconstructing and simulating the human brain is an important objective in itself, one that will bring many benefits and advance our knowledge of the brain and of the mind.


IEEE Spectrum special issue on whether we can duplicate a brain

Maybe you have read The Digital Mind, or Ray Kurzweil’s The Singularity Is Near, or other similar books, thought it all a bit farfetched, and wondered whether the authors are bonkers or just dreamers.

Wonder no more. The latest issue of the flagship publication of the Institute of Electrical and Electronics Engineers, IEEE Spectrum, is dedicated to the interesting and timely question of whether we can copy the brain and use it as a blueprint for intelligent systems. This issue, which you can access here, includes many interesting articles, definitely worth reading.

I cannot even begin to describe the many interesting articles in this special issue, but it is worth reading the introduction, on the prospect of near-future intelligent personal assistants, or the piece on how we could build an artificial brain right now, by Jennifer Hasler.

Other articles address the question of how computationally expensive it is to simulate a brain at the right level of abstraction. Karlheinz Meier’s article on this topic explains very clearly why present-day simulations are so slow:

“The big gap between the brain and today’s computers is perhaps best underscored by looking at large-scale simulations of the brain. There have been several such efforts over the years, but they have all been severely limited by two factors: energy and simulation time. As an example, consider a simulation that Markus Diesmann and his colleagues conducted several years ago using nearly 83,000 processors on the K supercomputer in Japan. Simulating 1.73 billion neurons consumed 10 billion times as much energy as an equivalent size portion of the brain, even though it used very simplified models and did not perform any learning. And these simulations generally ran at less than a thousandth of the speed of biological real time.

Why so slow? The reason is that simulating the brain on a conventional computer requires billions of differential equations coupled together to describe the dynamics of cells and networks: analog processes like the movement of charges across a cell membrane. Computers that use Boolean logic—which trades energy for precision—and that separate memory and computing, appear to be very inefficient at truly emulating a brain.”
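Meier’s ten-billion figure is roughly consistent with a back-of-the-envelope estimate. In the sketch below, the brain’s power draw and neuron count and the K computer’s power consumption are my own assumed round numbers, not figures from the article:

    brain_power_w = 20.0     # rough estimate for the whole human brain
    brain_neurons = 86e9     # rough estimate of total neuron count
    sim_neurons = 1.73e9     # neurons in the Diesmann simulation
    k_power_w = 12.7e6       # approximate power draw of the K supercomputer
    slowdown = 1000          # simulation ran at <1/1000 of real time

    # power of an equivalent-size portion of brain tissue (~0.4 W)
    bio_power_w = brain_power_w * sim_neurons / brain_neurons
    # energy ratio = power ratio x extra time taken by the simulation
    energy_ratio = (k_power_w / bio_power_w) * slowdown
    print(f"~{energy_ratio:.0e} times more energy")   # ~3e+10

Under these assumptions the simulation comes out at roughly 3 x 10^10 times the energy of the biological tissue, in line with the “10 billion times” quoted above.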

Another interesting article, by Eliza Strickland, describes some of the efforts under way to reverse engineer animal intelligence in order to build true artificial intelligence, including a section on the work of David Cox, whose team trains rats to perform specific tasks and then analyzes their brains by slicing and imaging them:

“Then the brain nugget comes back to the Harvard lab of Jeff Lichtman, a professor of molecular and cellular biology and a leading expert on the brain’s connectome. ­Lichtman’s team takes that 1 mm3 of brain and uses the machine that resembles a deli slicer to carve 33,000 slices, each only 30 nanometers thick. These gossamer sheets are automatically collected on strips of tape and arranged on silicon wafers. Next the researchers deploy one of the world’s fastest scanning electron microscopes, which slings 61 beams of electrons at each brain sample and measures how the electrons scatter. The refrigerator-size machine runs around the clock, producing images of each slice with 4-nm resolution.”
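The numbers in that paragraph imply a staggering volume of raw data. Here is a quick sketch of the arithmetic (my own, assuming one byte per pixel, which the article does not specify):

    slice_side_nm = 1e6    # each slice covers 1 mm = 1,000,000 nm per side
    resolution_nm = 4      # reported imaging resolution
    n_slices = 33_000      # 30-nm slices covering the 1 mm cube

    pixels_per_slice = (slice_side_nm / resolution_nm) ** 2   # ~6.3e10 pixels
    total_pixels = pixels_per_slice * n_slices                # ~2.1e15 pixels
    print(f"~{total_pixels / 1e15:.1f} petabytes at 1 byte/pixel")

That is on the order of two petabytes of images for a single cubic millimeter of tissue.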

Other approaches are even more ambitious. George Church, a well-known researcher in biology and bioinformatics, uses sequencing technologies to efficiently obtain large-scale, detailed information about brain structure:

“Church’s method isn’t affected by the length of axons or the size of the brain chunk under investigation. He uses genetically engineered mice and a technique called DNA bar coding, which tags each neuron with a unique genetic identifier that can be read out from the fringy tips of its dendrites to the terminus of its long axon. “It doesn’t matter if you have some gargantuan long axon,” he says. “With bar coding you find the two ends, and it doesn’t matter how much confusion there is along the way.” His team uses slices of brain tissue that are thicker than those used by Cox’s team—20 μm instead of 30 nm—because they don’t have to worry about losing the path of an axon from one slice to the next. DNA sequencing machines record all the bar codes present in a given slice of brain tissue, and then a program sorts through the genetic information to make a map showing which neurons connect to one another.”
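As a toy illustration of the sorting step, here is a minimal sketch with entirely made-up barcodes and data structures; it only shows how per-slice sequencing reads can be grouped by barcode to trace a neuron across slices, while the real pipeline for inferring connectivity is, of course, far more sophisticated:

    from collections import defaultdict

    # Hypothetical per-slice sequencing results: slice index -> barcodes detected
    reads = {
        0: {"ACGT", "TTAG"},
        1: {"ACGT"},
        2: {"ACGT", "TTAG"},
    }

    # Invert the mapping: barcode -> set of slices where that neuron appears
    extent = defaultdict(set)
    for slice_idx, barcodes in reads.items():
        for barcode in barcodes:
            extent[barcode].add(slice_idx)

    for barcode, slices in sorted(extent.items()):
        print(barcode, "detected in slices", sorted(slices))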

There is also a piece on the issue of AI and consciousness, where Christof Koch and Giulio Tononi describe their (more than dubious, in my humble opinion) Integrated Information Theory and its application to the question: can we quantify machine consciousness?

The issue also includes interesting quotes and predictions by famous visionaries, such as Ray Kurzweil, Carver Mead, Nick Bostrom, and Rodney Brooks, among others.
