Crystal Nights

Exactly 80 years ago, on the night of the 9th to the 10th of November, Kristallnacht (literally “Crystal Night”, usually rendered in English as the “Night of Broken Glass”) took place in Germany. Jews were persecuted and killed, and their property was destroyed, in an event that stands as an important marker in the rise of the antisemitism that characterized Nazi Germany. The name comes from the many windows of Jewish-owned stores broken during that night.

Greg Egan, one of my favorite science fiction writers, wrote a short story inspired by that same night, entitled Crystal Nights. This (very) short story is publicly available (you can find it here) and is definitely worth reading. I will not spoil the ending here, but it has to do with computers and singularities. The story was also included in a book that features other short stories by Greg Egan.

If you like this story, you may want to check out other books by Egan, such as Permutation City, Diaspora or Axiomatic (another collection of short stories).


MIT distances itself from Nectome, a mind uploading company

The MIT Media Lab, a unit of MIT, decided to sever its ties with Nectome, a startup that proposes to offer a technology that chemically preserves a brain, down to its most minute details, in order to make it possible, at least in principle, to simulate that brain and upload its owner’s mind sometime in the future.

According to the MIT news release, “MIT’s connection to the company came into question after MIT Technology Review detailed Nectome’s promotion of its ‘100 percent fatal’ technology”, in an article posted on the MIT Technology Review site.

As reported in this blog, Nectome claims that, by preserving the brain, it may one day be possible “to digitize your preserved brain and use that information to recreate your mind”. Nectome acknowledges, however, that the technology is fatal to the brain donor and that there are no guarantees that future recovery of the donor’s memories, knowledge and personality will be possible.

Detractors have argued that the approach is not sound, since simulating a preserved brain is a technology that is at least many decades in the future and may even be impossible in principle. The criticisms were, however, mostly based on the argument that the whole enterprise is profoundly unethical.

This kind of discussion, between proponents of technologies aimed at performing whole brain emulation sometime in the future and detractors who argue that such an endeavor is fundamentally flawed, has occurred in the past, most notably in the 2014 controversy concerning the objectives of the Human Brain Project. In that controversy, critics argued that the goal of a large-scale simulation of the brain is premature and unsound, and that funding should be redirected towards more conventional approaches to the understanding of brain function. Supporters of the Human Brain Project argued that reconstructing and simulating the human brain is an important objective in itself, which will bring many benefits and advance our knowledge of the brain and of the mind.

Picture by the author.

LIFE 3.0: Being Human in the Age of Artificial Intelligence

Max Tegmark’s latest book, LIFE 3.0: Being Human in the Age of Artificial Intelligence, is an enthralling journey into a future in which developments in artificial intelligence create a new type of lifeform on Earth.

Tegmark proposes to classify life in three stages. Life 1.0, unintelligent life, can change its hardware and improve itself only through the very slow and blind process of natural evolution. Single-cell organisms, plants and simple animals are in this category. Life 2.0 is also unable to change its hardware (except through evolution, as for Life 1.0) but can change its software, stored in its brain, by using previous experience to learn new behaviors. Higher animals, and humans in particular, belong here. Humans can now, to a limited extent, also change their hardware (through prosthetics, cellphones, computers and other devices), so they might now be considered Life 2.1.

Life 3.0 is the new generation of life, which can change both its software and its hardware. The ability to change the computational support (i.e., the physical basis of computation) results from technological advances, which will only accelerate with the advent of Artificial General Intelligence (AGI). The book is really about the future of a world where AGI enables humanity to create a whole range of new technologies and to spread new forms of life through the cosmos.
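The whole classification hinges on just two capabilities: whether a lifeform can redesign its own software (what it knows and how it behaves) and whether it can redesign its own hardware (its physical substrate). A minimal sketch in Python, just to make the two axes explicit (the encoding and the flag names are mine, not Tegmark’s):

```python
from dataclasses import dataclass

@dataclass
class LifeStage:
    name: str
    redesigns_software: bool  # can it learn / rewrite its own behavior?
    redesigns_hardware: bool  # can it re-engineer its physical substrate?

# Tegmark's three stages, plus the "2.1" that humans arguably occupy today.
STAGES = [
    LifeStage("Life 1.0 (biological)",    redesigns_software=False, redesigns_hardware=False),
    LifeStage("Life 2.0 (cultural)",      redesigns_software=True,  redesigns_hardware=False),
    LifeStage("Life 2.1 (humans today)",  redesigns_software=True,  redesigns_hardware=False),  # plus limited hardware tweaks
    LifeStage("Life 3.0 (technological)", redesigns_software=True,  redesigns_hardware=True),
]
```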

The riveting prelude, The Tale of the Omega Team, the story of the group of people who “created” the first intelligence explosion on planet Earth, by itself makes this a hard-to-put-down book. The rest of the book goes through the consequences of this intelligence explosion, a phenomenon the author believes will undoubtedly take place, sooner or later. Chapter 4 focuses on the explosion proper, and on how it could happen. Chapter 5, appropriately titled “Aftermath: The Next 10,000 Years”, is one of the most interesting ones, describing a number of long-term scenarios that could result from such an event. These scenarios range from a benevolent and enlightened dictatorship (by the AI) to the “Enslaved God” scenario, where humanity keeps the AI in chains and uses it as a slave to develop new technologies inaccessible to unaided humanity’s simpler minds. Always present in these scenarios are the risks of a hostile takeover by a human-created AGI, a theme that this book also addresses in depth, following on the ideas proposed by Nick Bostrom in his book Superintelligence.

Being a cosmologist, Tegmark could not leave out the question of how life can spread through the Cosmos, a topic covered in depth, and in a highly speculative fashion, in chapter 6. Tegmark’s view is, to say the least, grandiose, envisaging a future where AGI makes it possible to spread life through the reachable universe, climbing the three levels of the Kardashev scale. The final chapters address (in a necessarily more superficial manner) the complex topics of goal setting for AI systems and of artificial (or natural) consciousness. These topics felt somewhat less well developed; more complete and convincing treatments can be found elsewhere. The book ends with a description of the mission of the Future of Life Institute and of the Asilomar AI Principles.

A book like this cannot leave anyone indifferent, and you will likely take one of two opposite sides: the optimists, with many famous representatives, including Elon Musk, Stuart Russell and Nick Bostrom, who believe AGI can be developed and used to make humanity prosper; or the pessimists, whose most visible member is probably Yuval Noah Harari, who has voiced very serious concerns about these technological developments in his book Homo Deus and in this review of Life 3.0.

Portuguese Edition of The Digital Mind

IST Press, the publisher of Instituto Superior Técnico, has just released the Portuguese edition of The Digital Mind, originally published by MIT Press.

The Portuguese edition, translated by Jorge Pereirinha Pires, follows the same organization as the original and has been reviewed by a number of sources. The back-cover reviews are by Pedro Domingos, Srinivas Devadas, Pedro Guedes de Oliveira and Francisco Veloso.

A pre-publication excerpt appeared in the Público newspaper, under the title Até que mundos digitais nos levará o efeito da Rainha Vermelha (“To what digital worlds will the Red Queen effect take us”), making the first chapter of the book publicly available.

There are also some publicly available reviews and pieces about this edition, including an episode of a podcast and a review on the radio.

The Great Filter: are we rare, are we first, or are we doomed?

Fermi’s Paradox (the fact that we have never detected any sign of aliens even though, conceptually, life could be relatively common in the universe) has already been discussed in this blog, as new results come in about the rarity of life-bearing planets, the discovery of new Earth-like planets, or even the detection of possible signs of aliens.

There are a number of possible explanations for Fermi’s Paradox, and one of them is precisely that sufficiently advanced civilizations may retreat into their own planets or star systems, exploring the vastness of the nano-world and becoming digital minds.

A very interesting concept related to Fermi’s Paradox is the Great Filter theory, which states, basically, that if intelligent civilizations do not exist in the galaxy, then we, as a civilization, are either rare, first, or doomed. As this post very clearly describes, one of these three explanations has to be true if no other civilizations exist.

The Great Filter theory is based on Robin Hanson’s argument that the failure to find any extraterrestrial civilizations in the observable universe implies that, somewhere in the sequence of steps that leads from planet formation to the creation of technological civilizations, there must be an extremely unlikely step, which he called the Great Filter.
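To make the argument concrete, here is a minimal back-of-envelope sketch, with purely illustrative step names and probabilities (my assumptions, not Hanson’s estimates): the expected number of technological civilizations is the number of candidate planets multiplied by a chain of per-step probabilities, and if that expectation is to be consistent with our seeing nobody at all, at least one factor in the chain must be astronomically small.

```python
# Toy Great Filter arithmetic: illustrative numbers only, not real estimates.
candidate_planets = 1e11  # rough order of magnitude for planets in the galaxy

# Hypothetical chain of steps from planet formation to a visible
# technological civilization, each with an assumed probability of being passed.
steps = {
    "habitable conditions":         1e-1,
    "abiogenesis":                  1e-3,
    "complex (multicellular) life": 1e-2,
    "intelligence":                 1e-2,
    "technological civilization":   1e-1,
    "long-term survival":           1e-1,
}

expected_civilizations = candidate_planets
for p in steps.values():
    expected_civilizations *= p

print(f"Expected visible civilizations: {expected_civilizations:.1f}")
# With these made-up numbers the product is ~10; to be consistent with seeing
# none at all, at least one of the factors would have to be vastly smaller.
# That improbably hard step is the Great Filter.
```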

This Great Filter may be behind us, in the process that led from inorganic compounds to humans. That would mean that we, intelligent beings, are rare in the universe. Maybe the conditions that lead to life are extremely rare, either due to the instability of planetary systems, or to the low probability that life gets started in the first place, or to some other hurdle that we were lucky enough to overcome.

It may also be that the conditions that make life possible are relatively recent, only becoming common in the universe (or in the galaxy) in the last few billion years. In that case, we may not be rare; instead, ours would be one of the first planets to develop intelligent life.

The final explanation is that the Great Filter is not behind us, but ahead of us. That would mean that many technological civilizations develop but, in the end, they all collapse, due to factors still unknown (though we can guess at some of them). In this case, we are doomed, like all the other civilizations that, presumably, existed before us.

There is, of course, another group of explanations, which holds that advanced civilizations do exist in the galaxy but we are simply too dumb to contact or observe them. In fact, many people believe that we should not even be trying to contact them by broadcasting radio signals into space, advertising that we are here. It may simply be too dangerous.


Image by the Bureau of Land Management, available at Wikimedia Commons

IEEE Spectrum special issue on whether we can duplicate a brain

Maybe you have read The Digital Mind or The Singularity Is Near, by Ray Kurzweil, or other similar books, thought it all a bit far-fetched, and wondered whether the authors are bonkers or just dreamers.

Wonder no more. The latest issue of the flagship publication of the Institute of Electrical and Electronics Engineers, IEEE Spectrum, is dedicated to the interesting and timely question of whether we can copy the brain and use it as a blueprint for intelligent systems. This issue, which you can access here, includes many interesting articles, definitely worth reading.

I cannot describe here, even briefly, all the interesting articles in this special issue, but it is worth reading the introduction, on the prospect of near-future intelligent personal assistants, as well as the piece on how we could build an artificial brain right now, by Jennifer Hasler.

Other articles address the question of how computationally expensive it is to simulate a brain at the right level of abstraction. Karlheinz Meier’s article on this topic explains very clearly why present-day simulations are so slow:

“The big gap between the brain and today’s computers is perhaps best underscored by looking at large-scale simulations of the brain. There have been several such efforts over the years, but they have all been severely limited by two factors: energy and simulation time. As an example, consider a simulation that Markus Diesmann and his colleagues conducted several years ago using nearly 83,000 processors on the K supercomputer in Japan. Simulating 1.73 billion neurons consumed 10 billion times as much energy as an equivalent size portion of the brain, even though it used very simplified models and did not perform any learning. And these simulations generally ran at less than a thousandth of the speed of biological real time.

Why so slow? The reason is that simulating the brain on a conventional computer requires billions of differential equations coupled together to describe the dynamics of cells and networks: analog processes like the movement of charges across a cell membrane. Computers that use Boolean logic—which trades energy for precision—and that separate memory and computing, appear to be very inefficient at truly emulating a brain.”
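The quoted figures are roughly self-consistent, as a quick back-of-envelope check shows. Assuming (my assumptions, not numbers from the article) that a human brain has on the order of 86 billion neurons and runs on about 20 W, and that the K supercomputer draws on the order of 10 MW, the energy spent per second of biological time comes out within a factor of a few of the quoted “10 billion times”:

```python
# Back-of-envelope check of the energy gap quoted above.
# All values are rough, order-of-magnitude assumptions.

brain_neurons    = 86e9    # ~86 billion neurons in a human brain (assumption)
brain_power_w    = 20.0    # whole brain runs on roughly 20 W (assumption)
sim_neurons      = 1.73e9  # neurons simulated in the K-computer run (from the quote)
computer_power_w = 10e6    # K supercomputer draws on the order of 10 MW (assumption)
slowdown         = 1000    # simulation ran ~1000x slower than biology (from the quote)

# Power used by an equivalent-size piece of biological brain.
brain_fraction_w = brain_power_w * (sim_neurons / brain_neurons)  # ~0.4 W

# Energy spent per second of *biological* time: the computer must run for
# 'slowdown' seconds to cover one biological second.
energy_ratio = (computer_power_w * slowdown) / brain_fraction_w

print(f"Energy ratio: ~{energy_ratio:.1e}")  # ~2.5e10, same ballpark as "10 billion"
```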

Another interesting article, by Eliza Strickland, describes some of the efforts under way to reverse engineer animal intelligence in order to build true artificial intelligence, including a section on the work of David Cox, whose team trains rats to perform specific tasks and then analyses their brains by slicing and imaging them:

“Then the brain nugget comes back to the Harvard lab of Jeff Lichtman, a professor of molecular and cellular biology and a leading expert on the brain’s connectome. Lichtman’s team takes that 1 mm³ of brain and uses the machine that resembles a deli slicer to carve 33,000 slices, each only 30 nanometers thick. These gossamer sheets are automatically collected on strips of tape and arranged on silicon wafers. Next the researchers deploy one of the world’s fastest scanning electron microscopes, which slings 61 beams of electrons at each brain sample and measures how the electrons scatter. The refrigerator-size machine runs around the clock, producing images of each slice with 4-nm resolution.”
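The numbers in the quote are easy to check: a 1 mm cube sectioned at 30 nm gives roughly 33,000 slices, and imaging each slice at 4 nm resolution implies a staggering raw data volume. A quick sanity check (the one-byte-per-pixel figure is my assumption, meant only to show the order of magnitude):

```python
# Sanity check of the connectomics numbers quoted above.
cube_side_nm       = 1e6  # 1 mm expressed in nanometers
slice_thickness_nm = 30   # each section is 30 nm thick
pixel_size_nm      = 4    # imaging resolution of the electron microscope

n_slices = cube_side_nm / slice_thickness_nm
print(f"Slices per 1 mm cube: ~{n_slices:,.0f}")        # ~33,333

pixels_per_slice = (cube_side_nm / pixel_size_nm) ** 2  # 250,000 x 250,000 pixels
total_pixels = pixels_per_slice * n_slices

# Assuming ~1 byte per pixel (an assumption, not a figure from the article):
total_bytes = total_pixels * 1
print(f"Raw data: ~{total_bytes / 1e15:.0f} petabytes")  # on the order of 2 PB
```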

Other approaches are even more ambitious. George Church, a well-known researcher in biology and bioinformatics, uses sequencing technologies to efficiently obtain large-scale, detailed information about brain structure:

“Church’s method isn’t affected by the length of axons or the size of the brain chunk under investigation. He uses genetically engineered mice and a technique called DNA bar coding, which tags each neuron with a unique genetic identifier that can be read out from the fringy tips of its dendrites to the terminus of its long axon. “It doesn’t matter if you have some gargantuan long axon,” he says. “With bar coding you find the two ends, and it doesn’t matter how much confusion there is along the way.” His team uses slices of brain tissue that are thicker than those used by Cox’s team—20 μm instead of 30 nm—because they don’t have to worry about losing the path of an axon from one slice to the next. DNA sequencing machines record all the bar codes present in a given slice of brain tissue, and then a program sorts through the genetic information to make a map showing which neurons connect to one another.”
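As a very rough illustration of what that final sorting step might look like: if each neuron carries a unique barcode and sequencing reports which barcode pairs are recovered together at the same junction, the connectivity map is essentially a dictionary built from those pairs. This is my own toy simplification of the description above, not code or a protocol from Church’s group:

```python
from collections import defaultdict

# Toy reconstruction of a connectivity map from barcode observations.
# Assume (my simplification) that sequencing yields pairs of barcodes
# recovered together at the same junction: (source neuron, target neuron).
observed_pairs = [
    ("AACGT", "TTGCA"),
    ("AACGT", "GGATC"),
    ("TTGCA", "GGATC"),
    ("AACGT", "TTGCA"),  # repeated observations strengthen confidence
]

# Build an adjacency map: which neuron (barcode) connects to which, and how often.
connectome = defaultdict(lambda: defaultdict(int))
for source, target in observed_pairs:
    connectome[source][target] += 1

for source, targets in connectome.items():
    for target, count in targets.items():
        print(f"{source} -> {target}  (seen {count} time(s))")
```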

There is also a piece on the issue of AI and consciousness, in which Christof Koch and Giulio Tononi describe their (more than dubious, in my humble opinion) theory of how Integrated Information Theory can be applied to the question: can we quantify machine consciousness?

The issue also includes interesting quotes and predictions by famous visionaries, such as Ray Kurzweil, Carver Mead, Nick Bostrom and Rodney Brooks, among others.

Images from the special issue of IEEE Spectrum.

To Be a Machine: Adventures Among Cyborgs, Utopians, and the Futurists Solving the Modest Problem of Death

Mark O’Connell’s witty, insightful and sometimes deeply moving account of his research on the topic of transhumanism deserves a place on the bookshelf of anyone interested in the future of humanity. Reading To Be a Machine is a delightful trip through the ideals, technologies, places and characters involved in transhumanism, the idea that science and technology will one day transform humans into immortal, computer-based lifeforms.

For reasons that are not totally clear to me, transhumanism remains mostly a fringe culture, limited to a few futurists, off-the-mainstream scientists and technology nuts. As shared fictions go (to use Yuval Harari’s term), I would have imagined transhumanism to be an idea whose time has come. However, it remains mostly unknown to the general public. While humanists believe that the human person, with his or her desires, choices and fears, should be the most important value preserved by a society (check my review of Homo Deus), transhumanists believe that biologically based intelligence is imperfect, exists purely for historical reasons (evolution, that is), and will go away as soon as we move intelligence onto other computational supports, more robust than our frail bodies.

O’Connell, himself a hard-core humanist, as becomes clear from reading between the lines of this book, pursued a deep, almost forensic investigation of what transhumanists are up to. In the process, he talks with many unusual individuals involved in the transhumanist saga, from Max More, who runs Alcor, a company that, in exchange for a couple of hundred thousand dollars, will preserve your body in liquid nitrogen for the future (or 80k for just the head), to Aubrey de Grey, a reputed scientist working on life-extension technologies, who argues that we should all be working on this problem. In de Grey’s words, cited by O’Connell, “aging is a human disaster on an unimaginably vast scale, a massacre, a methodical and comprehensive annihilation of every single person that ever lived”. These are just two of the dozens of fascinating characters O’Connell interviews in person in the book.

The narrative is gripping, hilarious at times, but also moving and compelling, not least because O’Connell himself provides deep insights into the issues the book discusses. The characters in the book are, at once, alien and deeply human, as they are only trying to overcome the limits of our bodies. Deservedly, the book has been getting excellent reviews from many sources.

In the end, one gets the idea that transhumanists are crazy, maybe, but not nearly as crazy as all other believers in immortality, be it by divine intervention, by reincarnation, or by any other mechanisms so ingrained in mainstream culture.