MIT distances itself from Nectome, a mind uploading company

The MIT Media Lab, a unit of MIT, has decided to sever its ties with Nectome, a startup that proposes to chemically preserve a brain, down to its most minute details, so that it may become possible, at least in principle, to simulate that brain and upload the corresponding mind sometime in the future.

According to the MIT news release, “MIT’s connection to the company came into question after MIT Technology Review detailed Nectome’s promotion of its ‘100 percent fatal’ technology” in an article posted on the MIT Technology Review site.

As reported in this blog, Nectome claims that by preserving the brain it may one day be possible “to digitize your preserved brain and use that information to recreate your mind”. Nectome acknowledges, however, that the procedure is fatal to the brain donor and that there is no guarantee that future recovery of the donor’s memories, knowledge and personality will be possible.

Detractors have argued that the technology is not sound, since simulating a preserved brain is at least many decades in the future and may even be impossible in principle. The criticisms, however, rested mostly on the argument that the whole enterprise is profoundly unethical.

This kind of dispute between proponents of technologies aimed at performing whole brain emulation, sometime in the future, and detractors who argue that such an endeavor is fundamentally flawed has occurred before, most notably in the 2014 controversy concerning the objectives of the Human Brain Project. In that controversy, critics argued that the goal of a large-scale simulation of the brain is premature and unsound, and that funding should be redirected towards more conventional approaches to the understanding of brain function. Supporters of the Human Brain Project argued that reconstructing and simulating the human brain is an important objective in itself, one that will bring many benefits and advance our knowledge of the brain and of the mind.

Picture by the author.


New technique for high resolution imaging of brain connections

MIT researchers have proposed a new technique that produces very high resolution images of the detailed connections between neurons in the human brain. Taeyun Ku, Justin Swaney and Jeong-Yoon Park were the lead researchers of the work, published in a Nature Biotechnology article. Their technique images brain tissue at multiple scales, yielding unprecedentedly high resolution images of significant regions of the brain, and allows them to detect the presence of proteins within cells and to determine the long-range connections between neurons.

The technique physically expands the tissue under observation while preserving nearly all of the proteins within the cells, which can then be labeled with fluorescent molecules and imaged.

The technique floods the brain tissue with acrylamide polymers, which end up forming a dense gel. The proteins are attached to this gel and, after they are denatured, the gel can be expanded to four or five times its original size. This makes it possible to image the expanded tissue at a much higher effective resolution than would be possible with the original tissue.
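The resolution gain from expansion is easy to estimate. As a back-of-the-envelope illustration (the ~300 nm diffraction limit and the 4.5× expansion factor below are assumed round numbers for a conventional light microscope, not figures from the paper):

```python
# Back-of-the-envelope estimate of the effective resolution gained by
# physically expanding a tissue sample before imaging.

DIFFRACTION_LIMIT_NM = 300.0   # assumed resolution of a conventional light microscope
EXPANSION_FACTOR = 4.5         # the gel expands the sample 4-5x in each linear dimension

# Features that were X nm apart in the original tissue are now
# EXPANSION_FACTOR * X nm apart, so the microscope effectively resolves:
effective_resolution_nm = DIFFRACTION_LIMIT_NM / EXPANSION_FACTOR

print(f"Effective resolution: {effective_resolution_nm:.0f} nm")  # ~67 nm
```

With these assumed numbers, ordinary optics resolve structures several times smaller than the diffraction limit would normally allow, simply because the sample itself has grown.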

Techniques like this create the conditions to advance reverse engineering efforts that could lead to a better understanding of the way neurons connect with each other, creating the complex structures of the brain.

Image credit: MIT


Europe wants to have one exascale supercomputer by 2023

On March 23rd, in Rome, seven European countries signed a joint declaration on High Performance Computing (HPC), committing to an initiative that aims at securing the required budget and developing the technologies necessary to acquire and deploy two exascale supercomputers, in Europe, by 2023. Other Member States will be encouraged to join this initiative.

Exascale computers, defined as machines that execute 10^18 operations per second, will be roughly 10 times more powerful than the existing fastest supercomputer, the Sunway TaihuLight, which clocks in at 93 petaflop/s, or 93×10^15 floating point operations per second. No country in Europe currently has a machine among the 10 most powerful in the world. The declaration, and related documents, do not fully specify that these machines will clock in at more than one exaflop/s, since the requirements for supercomputers change with the technology, and floating point operations per second may not remain the right measure.
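The “roughly 10 times” figure follows directly from the two numbers above; a quick check:

```python
# Comparing an exaflop/s machine with the Sunway TaihuLight (93 petaflop/s).

EXAFLOP = 10**18                  # one exaflop/s: 10^18 operations per second
TAIHULIGHT_FLOPS = 93 * 10**15    # 93 petaflop/s

speedup = EXAFLOP / TAIHULIGHT_FLOPS
print(f"An exaflop/s machine is ~{speedup:.1f}x faster")  # ~10.8x
```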

This renewed interest of European countries in High Performance Computing highlights the significant role this technology plays in economic competitiveness and in research and development. Machines with these characteristics are used mainly for simulating complex systems in physics, chemistry, materials science and fluid dynamics, but they are also useful for storing and processing the large amounts of data required to create intelligent systems, namely by using deep learning.

Andrus Ansip, European Commission Vice-President for the Digital Single Market remarked that: “High-performance computing is moving towards its next frontier – more than 100 times faster than the fastest machines currently available in Europe. But not all EU countries have the capacity to build and maintain such infrastructure, or to develop such technologies on their own. If we stay dependent on others for this critical resource, then we risk getting technologically ‘locked’, delayed or deprived of strategic know-how. Europe needs integrated world-class capability in supercomputing to be ahead in the global race. Today’s declaration is a great step forward. I encourage even more EU countries to engage in this ambitious endeavour”.

The European Commission press release includes additional information on the next steps that will be taken in the process.

Photo of the signature event, by the European Commission. In the photo, from left to right, the signatories: Mark Bressers (Netherlands), Thierry Mandon (France), Etienne Schneider (Luxembourg), Andrus Ansip (European Commission), Valeria Fedeli (Italy), Manuel Heitor (Portugal), Carmen Vela (Spain) and Herbert Zeisel (Germany).


Intel buys Mobileye for $15 billion

Mobileye, a company that develops computer vision and sensor fusion technology for autonomous and computer assisted driving, has been bought by Intel, in a deal worth 15.3 billion dollars.

The company develops a wide range of technologies and services related to computer-based driving, including rear-facing and front-facing cameras, sensor fusion, and high-definition mapping. Mobileye has been working with a number of car manufacturers, including Audi and BMW.

Mobileye already sells devices that can be installed in a car to monitor the road and warn the driver of impending risks. A number of insurance companies in Israel have reduced the insurance premium for drivers who have installed these devices in their cars.

This sale is another strong indication that autonomous and computer-assisted driving will be a mature technology within the next decade, profoundly changing our relationship with cars and driving.

Mobileye’s products have been extensively covered in the news recently, including by TechCrunch, The New York Times and Forbes.

Image by Ranbar, available at Wikimedia Commons.

IBM TrueNorth neuromorphic chip does deep learning

In a recent article, published in the Proceedings of the National Academy of Sciences, IBM researchers demonstrated that the TrueNorth chip, designed to perform neuromorphic computing, can be trained using deep learning algorithms.


The TrueNorth chip was designed to efficiently simulate spiking neural networks, a model that closely mimics the way biological neurons work. Spiking neural networks are based on the integrate-and-fire model, inspired by the fact that actual neurons integrate the incoming ion currents caused by synaptic firing and generate an output spike only when sufficient synaptic excitation has accumulated. Spiking neural network models tend to be less efficient than more abstract models of neurons, which simply compute a real-valued output directly from the real-valued inputs multiplied by the input weights.
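The contrast between the two neuron models can be sketched in a few lines. This is a minimal, non-leaky integrate-and-fire neuron with illustrative weights and threshold (none of these constants are TrueNorth parameters):

```python
# Minimal (non-leaky) integrate-and-fire neuron: the membrane potential
# accumulates weighted input over discrete time steps and emits a spike
# only when it crosses a threshold, after which it resets.
# All constants are illustrative, not TrueNorth parameters.

def integrate_and_fire(inputs, weights, threshold=1.0):
    """Return the list of time steps at which the neuron spikes."""
    potential = 0.0
    spikes = []
    for t, x in enumerate(inputs):
        potential += sum(w * xi for w, xi in zip(weights, x))
        if potential >= threshold:
            spikes.append(t)
            potential = 0.0  # reset after firing
    return spikes

# Contrast with the more abstract neuron used in most deep learning models,
# which maps its real-valued inputs to an output in a single step:
def rate_neuron(x, weights):
    return max(0.0, sum(w * xi for w, xi in zip(weights, x)))  # ReLU

# Three input lines with small weights: the spiking neuron needs several
# time steps of excitation before it fires.
w = [0.25, 0.25, 0.1]
stream = [[1, 1, 0]] * 5          # same input presented for 5 time steps
print(integrate_and_fire(stream, w))   # [1, 3]
print(rate_neuron([1, 1, 0], w))       # 0.5
```

The spiking neuron only produces output at time steps 1 and 3, after enough excitation has built up, which is why such models typically need multiple cycles to match the single-pass computation of the abstract neuron.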

As IEEE Spectrum explains: “Instead of firing every cycle, the neurons in spiking neural networks must gradually build up their potential before they fire. To achieve precision on deep-learning tasks, spiking neural networks typically have to go through multiple cycles to see how the results average out. That effectively slows down the overall computation on tasks such as image recognition or language processing.”

In the article just published, IBM researchers have adapted deep learning algorithms to run on their TrueNorth architecture, and have achieved comparable precision, with lower energy dissipation. This research raises the prospect that energy-efficient neuromorphic chips may be competitive in deep learning tasks.

Image from Wikimedia Commons

Writing a Human Genome from scratch: the Genome Project-write

The Genome Project-write has released a white paper proposing the steps and timeline required to design and assemble a human genome from scratch.


It is a large-scale effort, involving a significant number of institutions and many well-known researchers, including George Church and Jef Boeke. According to the project web page:

“Writing DNA is the future of science and medicine, and holds the promise of pulling us forward into a better future. While reading DNA code has continued to advance, our capability to write DNA code remains limited, which in turn restricts our ability to understand and manipulate biological systems. GP-write will enable scientists to move beyond observation to action, and facilitate the use of biological engineering to address many of the global problems facing humanity.”

The idea is to use existing technologies for DNA synthesis to accelerate research in a wide spectrum of life-sciences. The synthesis of human genomes may make it possible to understand the phenotypic results of specific genome sequences and will contribute to improve the quality of synthetic biology tools.

Special attention will be paid to the complex ethical, legal and social issues that are a consequence of the project.

The project has received wide coverage in a number of news sources, including the popular science site Statnews and the journal Science.

Computers will always follow instructions. That may be the problem…

Many pessimistic scenarios about machines taking control of the world and harming humans are based on the idea that computers will eventually develop self-consciousness and define their own goals, incompatible with the goals of humanity. This is the premise of many science-fiction movies and books.

Many people believe, however, that this will not be the main problem. As reported in many news outlets, the University of California at Berkeley (my alma mater) has launched the Center for Human-Compatible Artificial Intelligence. The center will be headed by Stuart Russell, a famous expert in Artificial Intelligence (and co-author, with Peter Norvig, of the most used textbook in the field, Artificial Intelligence: A Modern Approach). Russell has been a vocal advocate for incorporating human values into the design of AI, in order to avoid the pitfalls that may come from AI systems running amok.

According to Stuart Russell, the issue is “that machines as we currently design them in fields like AI, robotics, control theory and operations research take the objectives that we humans give them very literally”. Therefore, they may approach tasks with an objective that is simply too literal. For instance, if instructed to solve the problem of “global warming”, a machine may decide that the most effective way is to wipe out the human race.
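Russell’s point can be caricatured in a few lines of code. In this deliberately toy sketch, the agent scores candidate actions only by the stated objective, so everything we left implicit (such as not harming people) simply does not count; the actions and numbers are invented for illustration:

```python
# A cartoon of the "too literal objective" problem. The agent is asked to
# minimize global warming and scores candidate actions *only* by the
# temperature change they produce -- nothing else enters the objective.
# Actions and numbers are invented for illustration.

actions = {
    "deploy solar power":     {"temp_change": -0.5, "harms_humans": False},
    "plant forests":          {"temp_change": -0.3, "harms_humans": False},
    "eliminate all industry": {"temp_change": -1.5, "harms_humans": True},
}

# The literal objective: pick whatever lowers temperature the most.
best = min(actions, key=lambda a: actions[a]["temp_change"])
print(best)  # "eliminate all industry" -- effective, but not what we meant

# A value-aligned objective must also encode the constraints we left implicit:
safe = min(
    (a for a in actions if not actions[a]["harms_humans"]),
    key=lambda a: actions[a]["temp_change"],
)
print(safe)  # "deploy solar power"
```

The difficulty, of course, is that real human values cannot be enumerated in a lookup table, which is precisely the research problem the new center is meant to address.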


According to the UC Berkeley press release, the center is being launched with a grant of $5.5 million from the Open Philanthropy Project, with additional grants from the Leverhulme Trust and the Future of Life Institute.

The center will work on mechanisms to guarantee that the AI systems of the future will act, by design, in a way that is aligned with human values. According to Stuart Russell, “AI systems must remain under human control, with suitable constraints on behavior, despite capabilities that may eventually exceed our own. This means we need cast-iron formal proofs, not just good intentions.”

Image credits: UC Berkeley. The image shows BRETT, the Berkeley Robot for the Elimination of Tedious Tasks, tying a knot after watching others demonstrate it.