Inching towards an exascale supercomputer

As of June 2016, the Sunway TaihuLight is the fastest supercomputer in the world. In that month, the Top 500 ranking was rearranged to put this computer ahead of TianHe-2 (also from China). Sunway TaihuLight clocked in at 93 petaflop/s (93,000,000,000,000,000 floating point operations per second) using its 10 million cores. This performance compares with the 34 petaflop/s delivered by the 3-million-core TianHe-2. An exascale computer would have a performance of 1000 petaflop/s.

Perhaps even more important, the new machine uses 14% less power than TianHe-2 (a mere 15.3 MW), which makes it more than three times as power efficient.
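
These numbers can be checked with a few lines of arithmetic (TianHe-2's power draw of roughly 17.8 MW is an assumption taken from public Top 500 figures; it is not stated above):

```python
# Peak performance (flop/s) and power draw (W) of the two machines.
TAIHULIGHT_FLOPS, TAIHULIGHT_W = 93e15, 15.3e6
TIANHE2_FLOPS, TIANHE2_W = 34e15, 17.8e6  # 17.8 MW is an assumed figure

power_saving = 1 - TAIHULIGHT_W / TIANHE2_W   # ~0.14, i.e. "14% less power"
efficiency_ratio = (TAIHULIGHT_FLOPS / TAIHULIGHT_W) / (TIANHE2_FLOPS / TIANHE2_W)
# ~3.2, i.e. "more than three times as efficient"
```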


As IEEE Spectrum reports, “TaihuLight uses DDR3, an older, slower memory, to save on power“. Furthermore, it uses small amounts of local memory near each core instead of a more traditional (and power-hungry) memory hierarchy. Other architectural choices were also aimed at reducing power consumption while preserving performance.

It is interesting to compare the power efficiency of this supercomputer with that of the human brain. Imagine that this supercomputer is used to simulate a full human brain (with its 86 billion neurons), using a standard neuron simulator package, such as NEURON.

Using some reasonable assumptions, it is possible to estimate that such a simulation would proceed at a speed about 3 million times slower than real time, and would require about three trillion times more energy than the human brain, to perform equivalent calculations. In terms of speed and power efficiency, it is still hard to compete with the 20W human brain.
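
The orders of magnitude above can be reproduced with a quick back-of-envelope script (the per-neuron simulation cost is an assumed figure, picked to be representative of a detailed compartmental model; it is not a number from the NEURON documentation):

```python
# TaihuLight and brain figures from the text above.
MACHINE_FLOPS = 93e15        # flop/s
MACHINE_POWER_W = 15.3e6     # W
NEURONS = 86e9
BRAIN_POWER_W = 20.0

# Assumed cost of a detailed NEURON-style simulation of one neuron:
# ~3.2e12 flop per second of biological time (hypothetical figure,
# chosen only to illustrate the order of magnitude).
FLOP_PER_NEURON_PER_BIO_S = 3.2e12

flop_per_bio_second = NEURONS * FLOP_PER_NEURON_PER_BIO_S
slowdown = flop_per_bio_second / MACHINE_FLOPS             # ~3 million
energy_ratio = slowdown * MACHINE_POWER_W / BRAIN_POWER_W  # a few trillion
```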

 


A new map of the human brain


More than one hundred years ago, the German anatomist Korbinian Brodmann undertook a systematic analysis of the microscopic features of the brain cortex of humans (and several other species) and was able to create a detailed map of the cortex. Brodmann's 52 areas (illustrated below) are still used today to refer to specific regions of the cortex.

[Image: the Brodmann areas, rendered in 3D]

Despite the fact that he numbered brain cortex areas based mostly on the cellular composition of the tissue observed under the microscope, there is a remarkable correlation between specific Brodmann areas and specific functions in the cortex. For instance, area 17 is the primary visual cortex, while area 4 is the primary motor cortex.

This week, an article in Nature proposes a new map of the human cortex, much more detailed than the one developed by Brodmann. In this new map, each hemisphere of the cortex is subdivided into 180 regions.

A team led by Matthew Glasser used multiple types of imaging data collected from more than two hundred adult participants in the Human Connectome Project. The information included a number of different measurements, including cortical thickness, brain function, connectivity between regions, and topographic organization of cells in brain tissue, among others. The following video, made available by Nature, gives an idea of the process followed by the researchers and the results obtained.

Image by Mark Dow, available at Wikimedia Commons.

Pokemon Go: the first step in the path to Accelerando?

The recent release of Pokemon Go, an augmented reality mobile game, attracted much attention and made the value of Nintendo, one of the companies behind the franchise, rise by more than 14 billion dollars. Rarely has the release of a mobile game had so much impact in the media and the financial world.

In large part, this happened because the market (and the world) expects this to be the first of many applications exploring the possibilities of augmented reality, a technology that superimposes virtual perceptions on the real world.

Pokemon Go players, instead of staying at home playing with their cellphones, walk around the real world, looking for little monsters that appear in more or less random locations. More advanced players meet in specific places, called gyms, to have their monsters fight each other. Pokemon Go brought augmented reality into the mainstream, and may indeed represent the first of many applications that merge the real and the virtual world. The game still has many limitations in its use of augmented reality: physical location cannot be pinned down to better than a few feet, and the illusion is slightly less than perfect. Nonetheless, the game represents a significant usage of augmented reality, a potentially disruptive technology.

Charles Stross, in the novel Accelerando, imagines a society where the hero, Manfred Macx, is one of the first to live permanently in augmented reality, looking into the world through an always-on pair of digital glasses. The glasses integrate information from the real world and the always present web. This society provides just the starting point for the novel, which recounts the story of three generations of a family as the world goes into (and emerges out of) a technological singularity.


 

It is not difficult to imagine a future where digital glasses keep you informed of the name (and history, interests, and marital status) of anyone you meet at a party, of where to go for your next appointment, or of the latest relevant news. Such an augmented reality world does not really require much more technology than what is available today, only the right applications and the right user interfaces.

Until we have Manfred’s glasses, we can use Pokemon Go to imagine what the fusion of real and artificial worlds will look like.

Left picture: cover of Accelerando

Right picture: the author, posing with an Oddish Pokemon monster, found in a remote town in Portugal.

 

Is AI the worst mistake in human history?

In an interesting article, John Battelle added some fuel to the fire in the ongoing discussion about the promises and dangers of Artificial Intelligence technology.

Physicists Stephen Hawking, Max Tegmark and Frank Wilczek, together with influential AI researcher Stuart Russell, have stated, in a widely cited article, that “Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.” Elon Musk, billionaire and founder of several well-known companies, including SpaceX, PayPal, and Tesla Motors, also joined the fray by stating that “we should be very careful about artificial intelligence” and that if he had to guess, “what our biggest existential threat is, it’s probably that.”

Some people dismiss these worries. Andrew Ng, chief scientist at Baidu Research in Silicon Valley and a professor at Stanford, stated that “Fearing a rise of killer robots is like worrying about overpopulation on Mars“: it is not impossible, but it should not be a major worry.

Others point to the fact that, while the industrial revolution gave us cheap physical labor, freeing people to do other, more interesting jobs, the AI revolution will give us cheap intellectual labor, freeing people to do more creative jobs; in this view, anything else is either wishful thinking or paranoia. That may indeed be the case, but some, including myself, worry that this time it may be different.

Previous articles in this blog have also addressed this topic, including March of the Machines, a reference to a recent special edition of The Economist, and a brief review of Superintelligence, a book by Nick Bostrom about the dangers of AI.

 


When so many people talk about the danger of technology, the Valley listens. One of the most open responses, so far, has been OpenAI, a non-profit AI company, whose goal is “to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return“.

The idea is that, by being free from the need to generate income, OpenAI may pursue significant advances in Artificial Intelligence more effectively and make them open and usable by everyone. Also, by making sure that AI research is kept in the open, OpenAI hopes to reduce the risks of a takeover by a hostile AI.

Are self-driving cars like elevators or like planes?

As reported in an article by the New York Times, Google and Tesla are working on self-driving cars using radically different approaches. Google is using the “elevator” metaphor, while Tesla is using the “plane autopilot” metaphor. IEEE Spectrum, the journal of the Institute of Electrical and Electronics Engineers, published an interesting analysis of the approaches taken by different companies.


As you can gather from this interesting Planet Money podcast, Google has decided that their autonomous vehicles will be much like elevators: you push a button, and the car (like an elevator) drives to the intended destination, without possible intervention from the driver.

The alternative approach, followed by Tesla and other car manufacturers, is the autopilot metaphor. The autopilot in a plane can be programmed to take the plane to a specific location, but the pilot can take back control of the plane at any moment. The autopilot assists, but does not replace, the pilot.

A number of experiments conducted by Google led the company to believe that it would be very risky to bet on drivers being able to take back control of the vehicle in an emergency. Google found that many drivers were not paying attention to the road while the autopilot was in charge; instead, they would work on their computers, talk on the phone, or even take a nap. Based on these findings, Google designed cars without brake pedals, steering wheels, or accelerators. These cars may seem strange to us today, just as elevators seemed strange in the beginning, when elevator operators were discontinued and users started operating the elevators themselves.

The recent accident with a Tesla gives some additional evidence that the “plane autopilot” model may create additional risks, since drivers will not, in general, be alert enough to avoid accidents when the autopilot fails. Additionally, human drivers may become the highest risk in a world where most cars are driven by computers, given the inherent unpredictability of human drivers.
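
The difference between the two metaphors can be sketched as a simple timing argument (a toy model with hypothetical numbers, not anything from Google's or Tesla's actual systems): a handover to a human is safe only if the driver's reaction time is shorter than the time left before the hazard.

```python
def handover_safe(time_to_hazard_s: float, reaction_time_s: float) -> bool:
    """Under the 'plane autopilot' metaphor, the human is the fallback:
    the handover works only if the driver can react before the hazard."""
    return reaction_time_s < time_to_hazard_s

# Hypothetical figures: an attentive driver reacts in ~1.5 s, while a
# distracted one (phone, laptop, nap) may need many seconds to re-engage.
attentive = handover_safe(time_to_hazard_s=4.0, reaction_time_s=1.5)
distracted = handover_safe(time_to_hazard_s=4.0, reaction_time_s=10.0)
```

Under the "elevator" metaphor there is no handover at all, which is why Google chose to remove the pedals and steering wheel rather than bet on the distracted case.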

Only the future will tell whether cars will end up more like elevators or like planes, as far as their self-driving abilities are concerned.

 

Would you throw the fat guy off the bridge?

The recent fatal accident with a Tesla in autopilot mode did not involve any difficult moral decisions by the automatic driving systems, as it resulted from insufficient awareness of the conditions, both by the Tesla Autopilot and by the (late) driver.

However, it brings to the fore other cases where more difficult moral decisions may need to be made by intelligent systems in charge of driving autonomous vehicles. The famous trolley problem has been the subject of many analyses, articles and discussions and it remains a challenging topic in philosophy and ethics.

In this problem, which has many variants, you must decide whether to pull a lever and divert a runaway trolley, saving five people but killing one innocent bystander. In one variant, there is no lever, but you can push a fat man off a bridge, stopping the trolley but killing the man. People's responses vary wildly with the specific case under analysis.

These complex moral dilemmas have been addressed in detail many times, and a good overview is presented in the book by Thomas Cathcart.

[Image: the trolley problem]

In order to obtain more data about such difficult moral decisions, a group of researchers at MIT has created a website where you can try to decide between the tough choices yourself.

Instead of the more contrived cases of the trolley problem, you must decide whether, as a driver, you would swerve or not, in the process deciding the fate of a number of passengers and bystanders.
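
A purely utilitarian version of that choice is easy to state in code (a deliberately simplistic sketch; the website's scenarios weigh many more factors, and minimizing the body count is only one of several defensible ethics):

```python
def choose_action(deaths_if_stay: int, deaths_if_swerve: int) -> str:
    """Naive utilitarian rule: minimize fatalities; prefer inaction on ties."""
    return "swerve" if deaths_if_swerve < deaths_if_stay else "stay"

print(choose_action(deaths_if_stay=5, deaths_if_swerve=1))  # swerve
print(choose_action(deaths_if_stay=1, deaths_if_swerve=1))  # stay
```

The interesting cases are precisely the ones where many people reject this rule, such as the variant where stopping the trolley requires actively pushing a man off a bridge.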

Why don’t you try it?

An autonomous car has its first fatal crash! Now what?

For the first time, an autonomously driven vehicle, a model S Tesla, had a fatal crash. According to the manufacturer, the car hit a tractor trailer that crossed the highway where the car was traveling. Neither the autopilot, which was in charge, nor the driver noticed “the white side of the tractor trailer against a brightly lit sky“. In these difficult lighting conditions, the brakes were not applied and the car crashed into the trailer. The bottom of the trailer hit the windshield of the car, leading to the death of its only occupant.

This was the first fatal accident involving an autonomously driven car, and it happened after Tesla Autopilot had logged 130 million miles. On average, there is one fatality for every 94 million miles driven in the US, and one for every 60 million miles worldwide, according to Tesla.
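
Tesla's comparison amounts to a simple ratio (with the obvious caveat that a single fatality is far too small a sample from which to estimate a rate):

```python
# Miles driven per fatality, as quoted by Tesla.
AUTOPILOT_MILES_PER_FATALITY = 130e6  # Autopilot, after its first fatal crash
US_MILES_PER_FATALITY = 94e6          # US average
WORLD_MILES_PER_FATALITY = 60e6       # worldwide average

vs_us = AUTOPILOT_MILES_PER_FATALITY / US_MILES_PER_FATALITY        # ~1.4x
vs_world = AUTOPILOT_MILES_PER_FATALITY / WORLD_MILES_PER_FATALITY  # ~2.2x
```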


In its statement, Tesla makes clear that Autopilot “is an assist feature that requires you to keep your hands on the steering wheel at all times,” and that “you need to maintain control and responsibility for your vehicle” while using it.

Nonetheless, this crash is bound to be the first to raise significant questions, with very difficult answers. Who is to blame for the fact that the autopilot did not brake the car in order to avoid the impact?

The programmers, who coded the software that was driving the vehicle at the time of the accident? The driver, who did not maintain control of the vehicle? The designers of the learning algorithms used to derive significant parts of the control software? The system architects, who did not ensure that the Autopilot was just an “assist feature“?

As autonomous systems in general, and autonomous cars in particular, become more common, these questions will multiply, and we will need to find answers for them. We may be on the eve of a new golden age for trolleyology.