The Digital Mind: How Science is Redefining Humanity

Following its release in the US, The Digital Mind, published by MIT Press, is now available in Europe at an Amazon store near you (and possibly in other bookstores). The book covers the evolution of technology, leading towards the expected emergence of digital minds.

Here is a short rundown of the book, kindly provided by yours truly, the author.

New technologies have been introduced into human lives at an ever-increasing rate since the first significant advances took place with the cognitive revolution, some 70,000 years ago. Although electronic computers have been around for only a few decades, they represent just the latest way to process information and create order out of chaos. Before computers, the job of processing information was done by living organisms, which are themselves complex information processing devices, created by billions of years of evolution.

Computers execute algorithms: sequences of small steps that, in the end, perform some desired computation, be it simple or complex. Algorithms are everywhere, and they have become an integral part of our lives. Evolution is, in itself, a long-running algorithm that created all species on Earth. The most advanced of these species, Homo sapiens, is endowed with a brain that is the most complex information processing device ever created. Brains enable humans to process information in a way unparalleled by any other species, living or extinct, or by any machine. They provide humans with intelligence, consciousness and, some believe, even with a soul, a characteristic that makes humans different from all other animals and from any machine in existence.

But brains also enabled humans to develop science and technology to a point where it is possible to design computers with a power comparable to that of the human brain. Artificial intelligence will one day make it possible to create intelligent machines, and computational biology will one day enable us to model, simulate and understand biological systems, and even complete brains, with unprecedented levels of detail. From these efforts, new minds will eventually emerge, minds that will emanate from the execution of programs running in powerful computers. These digital minds may one day rival our own, become our partners and replace humans in many tasks. They may usher in a technological singularity, a revolution in human society unlike any that came before. They may make humans obsolete, and even a threatened species, or they may turn us into super-humans or demi-gods.

How will we create these digital minds? How will they change our daily lives? Will we recognize them as equals or will they forever be our slaves? Will we ever be able to simulate truly human-like minds in computers? Will humans transcend the frontiers of biology and become immortal? Will humans remain, forever, the only known intelligence in the universe?

 

Is mind uploading nearer than you might think?

A recent article published in The Guardian, an otherwise mainstream newspaper, openly discusses the possibility that mind uploading may become real in the near future. Mind uploading is based on the idea that the behavior of a brain can be emulated completely in a computer, ultimately making it possible to transport individual brains, and individual consciousnesses, into a program that emulates the behavior of the “uploaded” mind. Mind uploading represents, in practice, the most direct path to immortality, far faster than any non-digital technology can hope to achieve in the foreseeable future.

This idea is not new, and the article makes an explicit reference to Hans Moravec's book, Mind Children, published by Harvard University Press in 1988. In fact, the topic has already been addressed by a large number of authors, including Ray Kurzweil, in The Singularity is Near, Nick Bostrom, in Superintelligence, and even by me in The Digital Mind.

The article contains a list of interesting sites and organizations, including CarbonCopies, a site dedicated to making whole brain emulation possible, founded by Randal A. Koene, and a reference to the 2045 Initiative, with similar goals, created by Dmitry Itskov.

The article, definitely worth reading, goes into some detail on the idea of “substrate-independent minds”, an idea clearly reminiscent of the concept of virtualization, so in vogue in today’s business world.

Picture source: The Guardian

Intel buys Mobileye for $15 billion

Mobileye, a company that develops computer vision and sensor fusion technology for autonomous and computer-assisted driving, has been bought by Intel in a deal worth $15.3 billion.

The company develops a large range of technologies and services related to computer-based driving, including rear-facing and front-facing cameras, sensor fusion, and high-definition mapping. Mobileye has been working with a number of car manufacturers, including Audi and BMW.

Mobileye already sells devices that can be installed in a car to monitor the road and warn the driver of impending risks. A number of insurance companies in Israel have reduced insurance premiums for drivers who have installed the devices in their cars.

This sale is another strong indication that autonomous and computer-assisted driving will be a mature technology within the next decade, profoundly changing our relationship with cars and driving.

Mobileye's products have been extensively covered in the news recently, including by TechCrunch, The New York Times, and Forbes.

Image by Ranbar, available at Wikimedia Commons.

Are Fast Radio Bursts a sign of aliens?

In a recently published paper in The Astrophysical Journal Letters, Manasvi Lingam and Abraham Loeb, from the Harvard Center for Astrophysics, propose a rather intriguing explanation for the phenomena known as Fast Radio Bursts (FRBs). FRBs are very powerful and very short bursts of radio waves originating, as far as is known, in galaxies other than our own. FRBs last only a few milliseconds, but, during that interval, they shine with the power of millions of suns.

The origin of FRBs remains a mystery. Although they were first detected in 2007, in archived data taken in 2001, and a number of FRBs have been observed since then, no clear explanation of the phenomenon has yet been found. They could be emitted by supermassive neutron stars, or they could be the result of massive stellar flares, millions of times larger than anything observed in our Sun. All of these explanations, however, remain speculative, as they fail to fully account for the data and to explain the exact mechanisms that generate these massive bursts of energy.

The rather puzzling, and possibly far-fetched, explanation proposed by Lingam and Loeb is that these short-lived, intense pulses of radio waves could be artificial radio beams, used by advanced civilizations to power light sail starships.

Light sail starships have been discussed as one technology that could be used to send missions to other stars. A light sail, attached to a starship, deploys into space and is accelerated by a powerful light source, such as a laser, beamed from the sending planet. Existing proposals are based on the idea of using very small starships, possibly weighing only a few grams, which could be accelerated by pointing a powerful laser at them. Such a starship could be accelerated to a significant fraction of the speed of light in only a few days, using a sufficiently powerful laser, and could reach the nearest stars in only a few decades.
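The kinematics behind such proposals can be sketched with a little arithmetic. The numbers below (a 100 MW beam, a 1-gram craft, a target speed of 0.2c) are illustrative assumptions of mine, not figures from the paper, and relativistic effects are ignored:

```python
# Back-of-the-envelope light-sail kinematics (non-relativistic sketch;
# all parameter values below are illustrative assumptions).
C = 3.0e8  # speed of light, m/s

def sail_acceleration(beam_power_w, mass_kg, reflectivity=1.0):
    """Radiation-pressure force on a reflective sail: F = (1 + r) * P / c."""
    force = (1.0 + reflectivity) * beam_power_w / C
    return force / mass_kg  # m/s^2

def time_to_fraction_of_c(beam_power_w, mass_kg, fraction=0.2):
    """Seconds of constant thrust needed to reach the target speed."""
    return fraction * C / sail_acceleration(beam_power_w, mass_kg)

# A hypothetical 100 MW beam pushing a 1-gram sail craft:
t = time_to_fraction_of_c(1e8, 1e-3, fraction=0.2)
print(f"time to 0.2c: {t / 3600:.0f} hours")  # prints "time to 0.2c: 25 hours"
```

Under these assumptions a perfectly reflective one-gram sail reaches 0.2c after roughly a day of continuous thrust; scaling the same formula up to the thousand-ton starships considered by Lingam and Loeb is what drives the enormous beam powers they infer.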

In their article, Lingam and Loeb discuss the intriguing idea that FRBs could be the flashes caused by such a technology, used by other civilizations to power their light sail spaceships. By analyzing the characteristics of the bursts, they conclude that these civilizations would have to use massive amounts of energy to produce the pulses, which would power starships weighing many thousands of tons. The characteristics of the bursts are, according to computations performed by the authors, compatible with an origin on a planet approximately the size of Earth.

The authors use the available data to compute an expected number of FRB-enabled civilizations in the galaxy, under the assumption that such a technology is widespread throughout the universe. They reach the conclusion that a few thousand such civilizations in our galaxy would account for the expected frequency of observed FRBs. Needless to say, a vast number of assumptions is needed to reach such a conclusion, which is, they point out, consistent with the values one obtains from Drake’s equation with optimistic parameters.

The paper has been analyzed by many secondary sources, including The Economist and The Washington Post.

 

Image source: ESO. Available at Wikimedia Commons.

DNA as an efficient data storage medium

In an article recently published in the journal Science, Yaniv Erlich and Dina Zielinski showed that it is possible to store high-density digital information in DNA molecules and reliably retrieve it. As they report, they stored a complete operating system, a movie, and other files totaling more than 2 MB, and managed to retrieve all the information with zero errors.

One of the critical factors of success is to use the appropriate coding methods: “Biochemical constraints dictate that DNA sequences with high GC content or long homopolymer runs (e.g., AAAAAA…) are undesirable, as they are difficult to synthesize and prone to sequencing errors.” 

Using the so-called DNA Fountain strategy, they managed to overcome the limitations that arise from biochemical constraints and recovery errors. As they report in the Science article: “We devised a strategy for DNA storage, called DNA Fountain, that approaches the Shannon capacity while providing robustness against data corruption. Our strategy harnesses fountain codes, which have been developed for reliable and effective unicasting of information over channels that are subject to dropouts, such as mobile TV (20). In our design, we carefully adapted the power of fountain codes to overcome both oligo dropouts and the biochemical constraints of DNA storage.”
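The fountain-code idea itself is easy to demonstrate. The toy Luby-transform-style sketch below uses small integers in place of DNA oligos and a uniform degree distribution in place of the soliton distribution a real fountain code would use; it is not the authors' DNA Fountain implementation, only an illustration of encoding data as redundant XOR combinations and decoding by "peeling":

```python
import random

def lt_encode(blocks, n_droplets, seed=42):
    """Each droplet is the XOR of a random subset of source blocks.
    A real fountain code draws the subset size from a soliton
    distribution; a uniform draw is enough for a toy demo."""
    rng = random.Random(seed)
    k = len(blocks)
    droplets = []
    for _ in range(n_droplets):
        degree = rng.randint(1, k)
        idx = rng.sample(range(k), degree)
        payload = 0
        for i in idx:
            payload ^= blocks[i]
        droplets.append((set(idx), payload))
    return droplets

def lt_decode(droplets, k):
    """Peeling decoder: resolve degree-1 droplets, subtract each
    recovered block from every droplet that contains it, repeat."""
    pending = [[set(idx), val] for idx, val in droplets]
    recovered = {}
    progress = True
    while progress and len(recovered) < k:
        progress = False
        for d in pending:
            for i in [i for i in d[0] if i in recovered]:
                d[0].discard(i)
                d[1] ^= recovered[i]
            if len(d[0]) == 1:
                (i,) = d[0]
                if i not in recovered:
                    recovered[i] = d[1]
                    progress = True
    return [recovered.get(i) for i in range(k)]

blocks = [0x1F, 0x2A, 0x33, 0x47, 0x58, 0x6C, 0x71, 0x9E]
droplets = lt_encode(blocks, n_droplets=200)
random.shuffle(droplets)  # droplets may arrive in any order
print(lt_decode(droplets, len(blocks)) == blocks)
```

The over-generation of droplets (200 for 8 blocks) is deliberately wasteful here; with a proper soliton degree distribution, only slightly more droplets than source blocks are needed, which is what lets the real scheme approach the Shannon capacity.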

The encoded data was written using DNA synthesis and the information was retrieved by performing PCR and sequencing the resulting DNA using Illumina sequencers.

Other studies, including the pioneering one by Church, in 2012, predicted that DNA storage could theoretically achieve a maximum information density of 680 petabytes per gram of DNA. The authors managed to perfectly retrieve information stored at a physical density of 215 petabytes per gram. For comparison, a flash memory device weighing about one gram can currently hold up to 128 GB, a density about six orders of magnitude lower.
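A quick back-of-the-envelope check of that comparison, assuming a flash device weighing roughly one gram:

```python
import math

# Comparing storage densities, using the figures quoted above.
PB = 1e15  # bytes per petabyte
GB = 1e9   # bytes per gigabyte

dna_density = 215 * PB    # bytes per gram, as reported by Erlich & Zielinski
flash_density = 128 * GB  # assuming a ~1 g flash device holding 128 GB

ratio = dna_density / flash_density
print(f"DNA is ~{ratio:.1e}x denser")  # ~1.7e+06
print(round(math.log10(ratio)))        # 6
```

The gap between 215 PB/g and 128 GB/g is a factor of about 1.7 million, i.e., roughly six orders of magnitude.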

The authors report that the cost of storage and retrieval, which was $3500/Mbyte, still represents a major bottleneck.

Arrival of the Fittest: why are biological systems so robust?

In his 2014 book, Arrival of the Fittest, Andreas Wagner addresses important open questions in evolution: how are useful innovations created in biological systems, enabling natural selection to perform its magic of creating ever more complex organisms? Why do changes in these complex systems not simply produce non-working systems? What is the origin of the variation upon which natural selection acts?

Wagner’s main point is that “Natural selection can preserve innovations, but it cannot create them. Nature’s many innovations—some uncannily perfect—call for natural principles that accelerate life’s ability to innovate, its innovability.”

In fact, natural selection can apply selective pressure, favoring organisms that have useful phenotypic variations, caused by underlying genetic variations. However, for this to happen, genetic mutations and variations have to occur and, with high enough frequency, they have to lead to viable and fitter organisms.

In most man-made systems, almost all changes to the original design lead to systems that do not work, or that perform much worse than the original. Almost any random change to a plane, a computer or a program yields a system that either performs worse than the original or fails catastrophically. Biological systems seem much more resilient, though. In this book, Wagner explores several types of (conceptual) biological networks: metabolic networks, protein interaction networks and gene regulatory networks.

Each node in these networks corresponds to one specific biological function: in a metabolic network, chemical entities interact; in a protein interaction network, proteins interact to create complex functions; and in a gene regulatory network, genes regulate the expression of other genes. Two nodes in such networks are neighbors if the genotypes that encode them differ in only one DNA position.
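Wagner's neighborhood notion can be made concrete in a few lines. The sketch below is a toy of my own devising, not a model from the book: a short DNA string stands in for a genotype, and its GC-content class stands in for biological function; it only illustrates what "one-mutation neighbors" and "neutral neighbors" mean:

```python
BASES = "ACGT"

def neighbors(genotype):
    """All genotypes exactly one point mutation away (Hamming distance 1)."""
    for i, base in enumerate(genotype):
        for alt in BASES:
            if alt != base:
                yield genotype[:i] + alt + genotype[i + 1:]

def phenotype(genotype):
    """Toy stand-in for biological function: the GC-content class."""
    gc = sum(b in "GC" for b in genotype)
    return "high-GC" if gc >= len(genotype) / 2 else "low-GC"

g = "ACGT"
# Neutral neighbors: one-mutation steps that leave the (toy) function unchanged.
same = [n for n in neighbors(g) if phenotype(n) == phenotype(g)]
print(len(list(neighbors(g))), len(same))  # prints "12 8"
```

Of ACGT's twelve one-mutation neighbors, eight keep the toy phenotype unchanged; in Wagner's real networks, such neutral neighbors are what allow long walks through genotype space without loss of function.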

He concludes that these networks are robust to mutations and therefore open to innovation. In particular, he shows that you can traverse these networks, from node to neighboring node, while keeping the biological function unchanged, only slightly degraded, or even improved. Unlike man-made systems, biological systems are robust to change, and nature can experiment by tweaking them, creating innovation and increasingly complex systems in the process. This is how the amazingly complex richness of life has been created in a mere four billion years.

 

Taxing robots: a solution for unemployment or a recipe for economic disaster?

In a recent interview with Quartz, Bill Gates, who cannot exactly be called a Luddite, argued that a robot tax should be levied and used to help pay for jobs in healthcare and education, which are hard to automate and can (for now) only be done by humans. Gates pointed out that humans are taxed on the salaries they make, unlike the robots that could replace them.

Gates argued that governments must take more control of the consequences of increased technological sophistication and not rely on businesses to redistribute the income that is generated by the new generation of robots and artificial intelligence systems.

Although the idea looks appealing, it is in reality equivalent to taxing capital, as this article in The Economist explains. Taxing capital investment will slow down productivity gains and may lead, in the end, to poorer societies. Bill Gates’ point seems to be that investing in robots does improve productivity, but also causes significant negative externalities, such as long-term unemployment and increased income inequality, which might justify a specific tax on robots aimed at alleviating them. In the end, it comes down to deciding whether economic growth is more important than ensuring everyone has a job.

As The Economist puts it: “Investments in robots can make human workers more productive rather than expendable; taxing them could leave the employees affected worse off. Particular workers may suffer by being displaced by robots, but workers as a whole might be better off because prices fall. Slowing the deployment of robots in health care and herding humans into such jobs might look like a useful way to maintain social stability. But if it means that health-care costs grow rapidly, gobbling up the gains in workers’ incomes, then the victory is Pyrrhic.”

Gates’ comments have been extensively analyzed in a number of articles, including this one by Yanis Varoufakis, a former finance minister of Greece, who argues that a robot tax will not solve the problem and is, at any rate, much worse than the existing alternative, a universal basic income.

The question of whether robots should be taxed is not a purely theoretical one. On February 17th, 2017, the European Parliament approved a resolution with recommendations to the European Commission, heavily based on the draft report proposed by the committee on legal affairs, but leaving out the draft report's recommendation to consider a tax on robots. The decision to reject the robot tax was, unsurprisingly, well received by the robotics industry, as reported in this article by Reuters.


Image courtesy of NASA/Bill Stafford, James Blair and Regan Geeseman, available at Wikimedia Commons.