IEEE Spectrum special issue on whether we can duplicate a brain

Maybe you have read The Digital Mind, Ray Kurzweil's The Singularity is Near, or other similar books, thought it all a bit far-fetched, and wondered whether the authors are bonkers or just dreamers.

Wonder no more. The latest issue of the flagship publication of the Institute of Electrical and Electronics Engineers, IEEE Spectrum, is dedicated to the interesting and timely question of whether we can copy the brain and use it as a blueprint for intelligent systems. This issue, which you can access here, includes many interesting articles that are definitely worth reading.

I cannot describe here, even briefly, all the interesting articles in this special issue, but the introduction, on the prospect of near-future intelligent personal assistants, and Jennifer Hasler's piece on how we could build an artificial brain right now, are particularly worth reading.

Other articles address the question of how computationally expensive it is to simulate a brain at the right level of abstraction. Karlheinz Meier’s article on this topic explains very clearly why present-day simulations are so slow:

“The big gap between the brain and today’s computers is perhaps best underscored by looking at large-scale simulations of the brain. There have been several such efforts over the years, but they have all been severely limited by two factors: energy and simulation time. As an example, consider a simulation that Markus Diesmann and his colleagues conducted several years ago using nearly 83,000 processors on the K supercomputer in Japan. Simulating 1.73 billion neurons consumed 10 billion times as much energy as an equivalent size portion of the brain, even though it used very simplified models and did not perform any learning. And these simulations generally ran at less than a thousandth of the speed of biological real time.

Why so slow? The reason is that simulating the brain on a conventional computer requires billions of differential equations coupled together to describe the dynamics of cells and networks: analog processes like the movement of charges across a cell membrane. Computers that use Boolean logic—which trades energy for precision—and that separate memory and computing, appear to be very inefficient at truly emulating a brain.”
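The analog processes mentioned in the quote are typically captured, in simplified form, by a leaky integrate-and-fire membrane equation for each neuron, coupled to all the others through its synaptic inputs:

```latex
C_m \frac{dV_i}{dt} = -g_L \left( V_i(t) - E_L \right) + \sum_j w_{ij}\, s_j(t)
```

where V_i is the membrane potential of neuron i, and a spike is emitted (and the potential reset) whenever V_i crosses a threshold. Integrating billions of such coupled equations at sub-millisecond time steps is what makes the simulations described above so slow and energy hungry.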

Another interesting article, by Eliza Strickland, describes some of the ongoing efforts to reverse engineer animal intelligence in order to build true artificial intelligence, including a part about the work of David Cox, whose team trains rats to perform specific tasks and then analyses their brains by slicing and imaging them:

“Then the brain nugget comes back to the Harvard lab of Jeff Lichtman, a professor of molecular and cellular biology and a leading expert on the brain’s connectome. ­Lichtman’s team takes that 1 mm3 of brain and uses the machine that resembles a deli slicer to carve 33,000 slices, each only 30 nanometers thick. These gossamer sheets are automatically collected on strips of tape and arranged on silicon wafers. Next the researchers deploy one of the world’s fastest scanning electron microscopes, which slings 61 beams of electrons at each brain sample and measures how the electrons scatter. The refrigerator-size machine runs around the clock, producing images of each slice with 4-nm resolution.”

Other approaches are even more ambitious. George Church, a well-known researcher in biology and bioinformatics, uses sequencing technologies to efficiently obtain large-scale, detailed information about brain structure:

“Church’s method isn’t affected by the length of axons or the size of the brain chunk under investigation. He uses genetically engineered mice and a technique called DNA bar coding, which tags each neuron with a unique genetic identifier that can be read out from the fringy tips of its dendrites to the terminus of its long axon. “It doesn’t matter if you have some gargantuan long axon,” he says. “With bar coding you find the two ends, and it doesn’t matter how much confusion there is along the way.” His team uses slices of brain tissue that are thicker than those used by Cox’s team—20 μm instead of 30 nm—because they don’t have to worry about losing the path of an axon from one slice to the next. DNA sequencing machines record all the bar codes present in a given slice of brain tissue, and then a program sorts through the genetic information to make a map showing which neurons connect to one another.”
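The output of such a pipeline is, in essence, a list of barcode co-occurrences. As a rough illustration of how those observations become a connectivity map, here is a minimal sketch; the input format is hypothetical and this is not Church's actual analysis code:

```python
from collections import defaultdict
from itertools import combinations

def build_connectivity(contact_lists):
    """Assemble an undirected neuron-to-neuron connectivity map.

    `contact_lists` is a hypothetical input: one set of neuron barcodes per
    putative synaptic contact recovered from the sequenced tissue slices.
    """
    connections = defaultdict(set)
    for contacts in contact_lists:
        for barcode_a, barcode_b in combinations(sorted(contacts), 2):
            connections[barcode_a].add(barcode_b)
            connections[barcode_b].add(barcode_a)
    return connections

# Toy data: three contacts, each listing the barcodes detected together.
contacts = [
    {"AACGT", "GGTCA"},
    {"AACGT", "TTAGC"},
    {"GGTCA", "TTAGC"},
]
print(dict(build_connectivity(contacts)))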

There is also a piece on the issue of AI and consciousness, in which Christof Koch and Giulio Tononi describe their (more than dubious, in my humble opinion) application of Integrated Information Theory to the question: can we quantify machine consciousness?

The issue also includes interesting quotes and predictions from famous visionaries such as Ray Kurzweil, Carver Mead, Nick Bostrom, and Rodney Brooks, among others.

Images from the special issue of IEEE Spectrum.

Europe wants to have one exascale supercomputer by 2023

On March 23rd, in Rome, seven European countries signed a joint declaration on High Performance Computing (HPC), committing to an initiative that aims at securing the required budget and developing the technologies necessary to acquire and deploy two exascale supercomputers in Europe by 2023. Other Member States will be encouraged to join the initiative.

Exascale computers, defined as machines that execute on the order of 10^18 operations per second, will be roughly 10 times more powerful than the existing fastest supercomputer, the Sunway TaihuLight, which clocks in at 93 petaflop/s, or 93 × 10^15 floating point operations per second. No country in Europe has, at the moment, any machine among the 10 most powerful in the world. The declaration, and related documents, do not fully specify that these machines will clock in at more than one exaflop/s, given that the requirements for supercomputers are changing with the technology, and floating point operations per second may not remain the right measure.

This renewed interest of European countries in High Performance Computing underlines the significant role this technology plays in economic competitiveness and in research and development. Machines with these characteristics are used mainly for complex system simulations in physics, chemistry, materials science, and fluid dynamics, but they are also useful for storing and processing the large amounts of data required to create intelligent systems, namely by using deep learning.

Andrus Ansip, European Commission Vice-President for the Digital Single Market remarked that: “High-performance computing is moving towards its next frontier – more than 100 times faster than the fastest machines currently available in Europe. But not all EU countries have the capacity to build and maintain such infrastructure, or to develop such technologies on their own. If we stay dependent on others for this critical resource, then we risk getting technologically ‘locked’, delayed or deprived of strategic know-how. Europe needs integrated world-class capability in supercomputing to be ahead in the global race. Today’s declaration is a great step forward. I encourage even more EU countries to engage in this ambitious endeavour”.

The European Commission press release includes additional information on the next steps that will be taken in the process.

Photo of the signature event, by the European Commission. In the photo, from left to right, the signatories: Mark Bressers (Netherlands), Thierry Mandon (France), Etienne Schneider (Luxembourg), Andrus Ansip (European Commission), Valeria Fedeli (Italy), Manuel Heitor (Portugal), Carmen Vela (Spain) and Herbert Zeisel (Germany).

 

DNA as an efficient data storage medium

In an article recently published in the journal Science, Yaniv Erlich and Dina Zielinski showed that it is possible to store high-density digital information in DNA molecules and reliably retrieve it. As they report, they stored a complete operating system, a movie, and other files totalling more than 2 MB, and managed to retrieve all the information with zero errors.

One of the critical factors for success is using appropriate coding methods: “Biochemical constraints dictate that DNA sequences with high GC content or long homopolymer runs (e.g., AAAAAA…) are undesirable, as they are difficult to synthesize and prone to sequencing errors.”
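To make this concrete, a minimal screening routine along these lines might look as follows. This is my own sketch, not the authors' code; the 45% to 55% GC window and the maximum run length of 3 are assumed values chosen for illustration:

```python
def gc_content(seq: str) -> float:
    """Fraction of G and C bases in a DNA sequence."""
    return (seq.count("G") + seq.count("C")) / len(seq)

def longest_homopolymer(seq: str) -> int:
    """Length of the longest run of identical consecutive bases."""
    longest, run = 1, 1
    for prev, cur in zip(seq, seq[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    return longest

def is_valid_oligo(seq: str, gc_min=0.45, gc_max=0.55, max_run=3) -> bool:
    """Reject sequences with extreme GC content or long homopolymer runs."""
    return gc_min <= gc_content(seq) <= gc_max and longest_homopolymer(seq) <= max_run

print(is_valid_oligo("ACGTACGTACGT"))   # True: balanced GC, no long runs
print(is_valid_oligo("AAAAAACGTACG"))   # False: homopolymer run of 6, low GC
```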

Using the so-called DNA Fountain strategy, they managed to overcome the limitations that arise from biochemical constraints and recovery errors. As they report in the Science article: “We devised a strategy for DNA storage, called DNA Fountain, that approaches the Shannon capacity while providing robustness against data corruption. Our strategy harnesses fountain codes, which have been developed for reliable and effective unicasting of information over channels that are subject to dropouts, such as mobile TV (20). In our design, we carefully adapted the power of fountain codes to overcome both oligo dropouts and the biochemical constraints of DNA storage.”
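The core idea of a fountain code is to keep generating "droplets", each an XOR of a pseudo-random subset of the message segments, and to discard candidates that violate the biochemical constraints. A heavily simplified, toy version of that loop might look like this; it is my own sketch under assumed parameters, not the actual DNA Fountain implementation, which uses a carefully tuned degree distribution and much larger segments:

```python
import random

BASES = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}

def bytes_to_dna(data: bytes) -> str:
    """Map every pair of bits to one nucleotide (00->A, 01->C, 10->G, 11->T)."""
    return "".join(BASES[(byte >> shift) & 0b11] for byte in data for shift in (6, 4, 2, 0))

def make_droplet(segments, seed: int) -> bytes:
    """XOR a seed-determined subset of message segments into one droplet."""
    rng = random.Random(seed)
    degree = rng.randint(1, len(segments))              # toy degree distribution
    chosen = rng.sample(range(len(segments)), degree)
    payload = bytes(len(segments[0]))
    for i in chosen:
        payload = bytes(a ^ b for a, b in zip(payload, segments[i]))
    # Prepend the seed so a decoder can re-derive `chosen` and peel droplets apart.
    return seed.to_bytes(2, "big") + payload

def valid(seq: str, gc_lo=0.25, gc_hi=0.75, max_run=4) -> bool:
    """Screen an oligo for GC content and homopolymer runs (looser thresholds for this toy)."""
    gc = (seq.count("G") + seq.count("C")) / len(seq)
    longest, run = 1, 1
    for prev, cur in zip(seq, seq[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    return gc_lo <= gc <= gc_hi and longest <= max_run

segments = [b"HELLOWOR", b"LD_DNA__", b"STORAGE!"]   # toy message, fixed-size segments
droplets, seed = [], 0
while len(droplets) < 5:                              # keep generating until enough pass
    seed += 1
    oligo = bytes_to_dna(make_droplet(segments, seed))
    if valid(oligo):
        droplets.append(oligo)
print(droplets)
```

A decoder that regenerates each droplet's segment subset from its seed can then recover the original segments by the usual peeling process, even when some oligos are lost; that robustness to dropouts is the property the authors exploit.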

The encoded data was written using DNA synthesis and the information was retrieved by performing PCR and sequencing the resulting DNA using Illumina sequencers.

Other studies, including the pioneering one by Church, in 2012, predicted that DNA storage could theoretically achieve a maximum information density of 680 petabytes per gram of DNA. The authors managed to perfectly retrieve the information from a physical density of 215 petabytes per gram. For comparison, a flash memory card weighing about one gram can currently hold up to 128 GB, a density roughly six orders of magnitude lower.
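A quick back-of-the-envelope check of that comparison:

```python
dna_density_bytes_per_gram = 215e15    # 215 petabytes per gram (Erlich & Zielinski)
flash_density_bytes_per_gram = 128e9   # ~128 GB in roughly one gram of flash

ratio = dna_density_bytes_per_gram / flash_density_bytes_per_gram
print(f"DNA is ~{ratio:,.0f} times denser")   # ~1,679,688, i.e. about six orders of magnitude
```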

The authors report that the cost of storage and retrieval, about $3500 per megabyte, still represents a major bottleneck.

IBM TrueNorth neuromorphic chip does deep learning

In a recent article, published in the Proceedings of the National Academy of Sciences, IBM researchers demonstrated that the TrueNorth chip, designed to perform neuromorphic computing, can be trained using deep learning algorithms.


The TrueNorth chip was designed to efficiently model spiking neural networks, a neuron model that closely mimics the way biological neurons work. Spiking neural networks are based on the integrate-and-fire model, inspired by the fact that actual neurons integrate the incoming ion currents caused by synaptic firing and generate an output spike only when sufficient synaptic excitation has accumulated. Spiking neural network models tend to be less efficient than more abstract neuron models, which simply compute a real-valued output directly from the real-valued inputs multiplied by the input weights.
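A minimal leaky integrate-and-fire simulation illustrates the mechanism. This is a generic textbook model, not TrueNorth's actual neuron circuit; the time constant, threshold, and input rates below are made up for the example:

```python
import numpy as np

def simulate_lif(input_spikes, weights, tau=20.0, threshold=1.0, dt=1.0):
    """Leaky integrate-and-fire: accumulate weighted input, fire on threshold.

    input_spikes: array of shape (timesteps, n_inputs) with 0/1 entries.
    weights:      array of shape (n_inputs,) with synaptic weights.
    """
    v = 0.0
    output = []
    for spikes_t in input_spikes:
        v += dt * (-v / tau + weights @ spikes_t)   # leak plus synaptic input
        if v >= threshold:                          # sufficient excitation accumulated
            output.append(1)
            v = 0.0                                 # reset after the spike
        else:
            output.append(0)
    return np.array(output)

rng = np.random.default_rng(0)
spikes = rng.random((100, 5)) < 0.1     # 5 Poisson-like input trains, ~10% firing rate
w = np.full(5, 0.3)
print(simulate_lif(spikes, w).sum(), "output spikes in 100 time steps")
```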

As IEEE Spectrum explains: “Instead of firing every cycle, the neurons in spiking neural networks must gradually build up their potential before they fire. To achieve precision on deep-learning tasks, spiking neural networks typically have to go through multiple cycles to see how the results average out. That effectively slows down the overall computation on tasks such as image recognition or language processing.”

In the article just published, IBM researchers adapted deep learning algorithms to run on the TrueNorth architecture and achieved comparable accuracy with lower energy dissipation. This research raises the prospect that energy-efficient neuromorphic chips may become competitive in deep learning tasks.

Image from Wikimedia Commons

Algorithms to live by: the computer science of human decisions

This delightful book, by Brian Christian and Tom Griffiths, provides a very interesting and orthogonal view on the role of computer science in our everyday lives.

The book covers a number of algorithms, ranging from the best way to choose a bride (let the first 37% of the available candidates go by, then pick the first one who is better than all of them) to the best way to manage your email (just drop messages once you are over capacity; don't queue them for future processing, which will never happen).
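A quick simulation of the 37% rule; this is my own sketch, with random candidate scores and the classical 1/e optimal-stopping cutoff:

```python
import math
import random

def secretary_trial(n, cutoff_fraction, rng):
    """Return True if the look-then-leap rule picks the single best candidate."""
    scores = [rng.random() for _ in range(n)]
    cutoff = int(n * cutoff_fraction)
    best_seen = max(scores[:cutoff], default=float("-inf"))
    for score in scores[cutoff:]:
        if score > best_seen:
            return score == max(scores)   # we commit to the first improvement
    return False                          # the overall best was in the rejected group

rng = random.Random(1)
n, trials = 100, 20_000
wins = sum(secretary_trial(n, 1 / math.e, rng) for _ in range(trials))
print(f"picked the best candidate in {wins / trials:.1%} of trials (theory: ~37%)")
```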


The book makes for a very enjoyable and engaging read, and should be required material for any computer science student, professor, or researcher.

The chapters include advice on when to stop looking for the best person for the job (e.g., your bride); how to manage the explore vs. exploit dilemma, as in picking the best restaurant for dinner; how to sort things in your closet; how to make sure the things you need frequently are nearby (caching, as in the sketch below); how to choose the things you should do first; how to predict the future (use Bayes’ rule); how to avoid overfitting and learn from the past; how to tackle difficult problems by looking at easier versions of them (relaxations); when rolling a die is the best way to make a decision; how to handle long queues of requests that exceed your capacity; and how to avoid the tragedy of the commons that so often gets all of us into trouble, as in the prisoner’s dilemma.
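To give a flavour of one of these ideas, here is a minimal sketch of the caching principle using a least-recently-used eviction policy; it is my own illustration, not code from the book:

```python
from collections import OrderedDict

class LRUCache:
    """Keep the most recently used items nearby; evict the least recently used."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)          # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)   # drop the least recently used item

cache = LRUCache(2)
cache.put("coat", "hall closet")
cache.put("keys", "bowl by the door")
cache.get("coat")                            # touching "coat" keeps it around
cache.put("umbrella", "by the stairs")       # evicts "keys", the least recently used
print(list(cache.items))                     # ['coat', 'umbrella']: "keys" was evicted
```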

Definitely, two thumbs up!

A review of Microsoft HoloLens

By kind invitation from Microsoft, I had the opportunity to try out, from a user’s perspective, the new Microsoft HoloLens. Basically, I was able to wear the device for a while and interact with a number of applications that were spread around a room.

From the outside, the result is not very impressive, as the picture above shows. In a room that was mostly empty (except for the other guests, wearing similar devices), you can see me wearing the lenses, raising my hand to call up a menu using the pull-up gesture.

From the inside, things are considerably more interesting. During configuration, the software identifies the relevant features of the room, and creates an internal model of the space and of the furniture in it.

Applications, both 3D and 2D, can then be deployed at different spots in the room, using a number of control gestures and menus. Your view of the applications is superimposed on your view of the room, leading to a semi-realistic impression of virtual reality mixed with the “real” reality. You can move the 3D holograms around the room (in this case an elephant, a mime, and a globe, like the one below, among others).


You can also interact with them using a virtual pointing device (basically a mouse, controlled by your head movements). 2D applications, like video streaming, appear as suspended screens (or screens lying on top of desks and tables) and can be controlled using the same method. Overall, the impression is very different from the one obtained using 3D Virtual Reality goggles, like Google Cardboard or Oculus Rift. For instance, in a conversation (pictured below) you would be sitting in a chair, facing a hologram of your guest, possibly discussing some 3D object sitting between the two of you.


Overall, I was much more impressed with the possibilities of this technology than I was with Google Glass, which I tried a few years back. The quality of the holograms was quite good, and the integration with the real world quite convincing. The applications still need to be developed, though.

On the minus side, the device is somewhat heavy and less than comfortable to wear for extended periods. This limitation could probably be addressed by future developments of the device.

Microsoft HoloLens merges the real and the virtual worlds

The possibility of superimposing the virtual world created by computers on the real physical world has long been viewed as a technology looking for a killer application.

The fact is that, until now, the technology was incipient and the user experience less than perfect. Microsoft is trying to change that with its new product, Microsoft HoloLens. As of April this year, Microsoft is shipping a pre-production version of HoloLens to developers.

The basic idea is that, by using HoloLens, computer-generated objects can be superimposed on actual physical objects. Instead of using the “desktop” metaphor, users will be able to deploy applications in actual physical space. Non-holographic applications run as floating virtual screens that stick to a specific point in the physical space or move with the user. Holographic-enabled applications will let you use physical space for virtual objects as you would for physical objects. For instance, if you leave a virtual report, say, on top of a desk, it will stay there until you pick it up.

The IEEE Spectrum report on the device, by Rod Furlan, provides some interesting additional information and gives the device a clear “thumbs up”.

The HoloLens, a self-contained computer weighing 580 grams, is powered by a 32-bit Intel Atom processor and Microsoft’s custom Holographic Processing Unit (HPU).

The following YouTube video, made available by Microsoft, gives some idea of what the product may become, once sufficiently powerful applications are developed.

Image and video credits: Microsoft HoloLens website.