Bell’s Theorem, or why the universe is even stranger than we might imagine

The Einstein-Podolsky-Rosen “paradox” was at first presented as an argument against some of the basic tenets of quantum mechanics.

One of these basic tenets is that there is genuine randomness in the characteristics of particles. For instance, when one measures the spin of an electron, it is only at the instant the measurement is taken that the actual value of the spin is defined. Until then, its value is described by a probability function, which collapses when the measurement is made.

The EPR paradox uses the concept of entangled particles. Two particles are “entangled” if they were generated in such a way that some particular characteristic of theirs is perfectly correlated. For instance, two photons generated by a specific phenomenon (such as an electron-positron annihilation, under some circumstances) will have opposite polarizations. Once generated, these particles can travel vast distances, still entangled.

If some particular characteristic of one of these particles is measured (e.g., the polarization of a photon) in one location, this measurement will, probabilistically, result in a given value. That particular value determines, instantaneously, the value of the same characteristic of the other particle, no matter how far apart the particles are. It is this “spooky action at a distance” that Einstein, Podolsky and Rosen believed to be impossible. It seems that the information about the state of one particle travels, faster than light, to the place where the other particle is.

Now, we can imagine that this particular characteristic of the particles was defined the very instant they were generated. Imagine you have one bag with one white ball and one black ball, and you separate the balls, without looking at them, and put them into separate boxes. If one of the boxes is opened in Australia, say, and the ball inside is white, we will know instantaneously the color of the other ball. There is nothing magic or strange about this. Hidden inside the boxes, all along, was the true color of the balls, a hidden variable.

Maybe this is exactly what happens with the entangled photons. When they are generated, each one already carries with it the actual value of the polarization.

It is here that Bell’s Theorem comes in to show that the universe is even stranger than we might conceive. Bell’s result, beautifully explained in this video, shows that the particles cannot carry with them any hidden variable that tells them what to do when they face a measurement. Each particle has to decide, probabilistically, at the time of the measurement, the value that should be reported. And, once this decision is made, the measurement for the other entangled particle is also defined, even if that particle is on the other side of the universe. It seems that information travels faster than light.

The fact is that hidden variables cannot be used to explain this phenomenon. As Bell concluded: “In a theory in which parameters are added to quantum mechanics to determine the results of individual measurements, without changing the statistical predictions, there must be a mechanism whereby the setting of one measuring device can influence the reading of another instrument, however remote. Moreover, the signal involved must propagate instantaneously, …”

A very easy and practical demonstration of Bell’s theorem can be done with polarizing filters, like the ones used in cameras or in some 3D glasses. If you take two filters and put them at an angle, only a fraction of the photons that go through the first one make it through the second one. The actual fraction is given by the cosine squared of the angle between the filters (so, if the angle is 90º, no photons go through the two filters). So far, so good. Now, suppose you have the two filters at an angle of 45º, so that half the photons that pass the first filter also go through the second, and you insert an additional filter between them, at an angle of 22.5º to the first. It turns out that roughly 85% of the photons go through the (now) second filter and, of these, roughly 85% go through the third filter (which used to be the second). That means that, with the three filters in place, roughly 73% of the photons go through, far more than the 50% you get with just the original two filters, which were not changed in any way. This, obviously, cannot happen if the photons’ behavior was determined from the start.
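For readers who want to check these numbers, here is a minimal sketch of the arithmetic, using plain Malus’s law and ignoring the initial loss of unpolarized light at the first filter:

```python
import numpy as np

# Malus's law: the fraction of polarized photons passing a second filter at
# angle theta (in degrees) to the first is cos^2(theta).
def fraction(theta_deg):
    return np.cos(np.radians(theta_deg)) ** 2

two_filters = fraction(45)                        # ~0.50
three_filters = fraction(22.5) * fraction(22.5)   # ~0.85 * ~0.85 ~= 0.73
print(f"0º -> 45º:          {two_filters:.2f}")
print(f"0º -> 22.5º -> 45º: {three_filters:.2f}")
```

Inserting a filter in the middle increases the number of photons that get through, which is exactly the counterintuitive behavior the argument above relies on.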

Do look at the video, and do the experiment yourself.


New technique for high resolution imaging of brain connections

MIT researchers have proposed a new technique that produces very high resolution images of the detailed connections between neurons in the human brain. Taeyun Ku, Justin Swaney and Jeong-Yoon Park were the lead researchers of the work, published in a Nature Biotechnology article. The technique images brain tissue at multiple scales, yielding unprecedented high-resolution images of significant regions of the brain, which allows the researchers to detect the presence of proteins within cells and to determine the long-range connections between neurons.

The technique physically enlarges the tissue under observation while preserving nearly all of the proteins within the cells, which can then be labeled with fluorescent molecules and imaged.

The technique floods the brain tissue with acrylamide polymers, which end up forming a dense gel. The proteins are attached to this gel and, after they are denatured, the gel can be expanded to four or five times its original size. This makes it possible to image the expanded tissue at an effective resolution much higher than could be achieved with the original tissue.
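As a rough, hypothetical illustration (the 300 nm figure below is my own assumption for a typical diffraction-limited light microscope, not a number from the paper), the effective resolution in the original tissue improves roughly by the expansion factor:

```python
# Back-of-envelope sketch: physical expansion spreads structures apart, so a
# microscope with a fixed optical resolution resolves finer detail in the
# (pre-expansion) tissue coordinates.
diffraction_limit_nm = 300       # assumed optical resolution of the microscope
for expansion in (4, 5):         # expansion factors mentioned above
    effective_nm = diffraction_limit_nm / expansion
    print(f"{expansion}x expansion -> ~{effective_nm:.0f} nm effective resolution")
```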

Techniques like this create the conditions for reverse engineering efforts that could lead to a better understanding of the way neurons connect with each other, creating the complex structures in the brain.

Image credit: MIT

 

AIs running wild at Facebook? Not yet, not even close!

Much was written about two Artificial Intelligence systems developing their own language. Headlines like “Facebook shuts down AI after it invents its own creepy language” and “Facebook engineers panic, pull plug on AI after bots develop their own language” were all over the place, seeming to imply that we were on the verge of a significant incident in AI research.

As it turns out, nothing significant really happened, and these headlines are only due to the inordinate appetite of the media for catastrophic news. Most AI systems currently under development have narrow application domains, and do not have the capabilities to develop their own general strategies, languages, or motivations.

To be fair, many AI systems do develop their own language. Whenever a neural network is trained to perform pattern recognition, for instance, a specific internal representation is chosen by the network to encode specific features of the pattern under analysis. When everything goes smoothly, these internal representations correspond to important concepts in the patterns under analysis (the wheel of a car, say, or an eye) and are combined by the neural network to produce the output of interest. In fact, creating these internal representations, which, in a way, correspond to concepts in a language, is exactly one of the most interesting features of neural networks, and of deep neural networks in particular.
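As a toy illustration of such internal representations (a hypothetical, minimal example, unrelated to the Facebook work), a tiny network trained on the XOR function invents its own hidden-layer encoding of the inputs, which it then combines to produce the output:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# The four XOR cases and their targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# A 2-8-1 network; the 8 hidden units hold the "internal representation".
W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)

for _ in range(30000):                      # plain gradient descent on squared error
    h = sigmoid(X @ W1 + b1)                # hidden representation of each input
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)     # backpropagation
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

print(np.round(sigmoid(X @ W1 + b1), 2))    # the network's own "code" for each input
print(np.round(out, 2))                     # should approach 0, 1, 1, 0 (depends on the initialization)
```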

Therefore, systems creating their own languages are nothing new, really. What happened with the Facebook agents that made the news was that two systems were being trained using a specific algorithm, a generative adversarial network. When this training method is used, two systems are trained against each other: the idea is that system A tries to make the task of system B more difficult, and vice-versa. In this way, both systems evolve towards becoming better at their respective tasks, whatever those may be. As this post clearly describes, the two systems were being trained on a specific negotiation task, and they communicated using English words. As they evolved, the systems started to use non-conventional combinations of words to exchange information, producing the seemingly strange exchanges behind the scary headlines, such as this one:

Bob: I can i i everything else

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i everything else

Alice: balls have a ball to me to me to me to me to me to me to me to me

Strange as this exchange may look, nothing out of the ordinary was really happening. The neural network training algorithms were simply finding concept representations which were used by the agents to communicate their intentions in this specific negotiation task (which involved exchanging balls and other items).
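To make the adversarial idea concrete, here is a deliberately tiny, hypothetical sketch (it has nothing to do with Facebook’s actual agents): a one-parameter “generator” shifts its samples to imitate data drawn from a normal distribution centered at 4, while a logistic “discriminator” tries to tell real samples from generated ones; each side’s progress makes the other’s task harder.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

theta = 0.0               # generator parameter: fake samples are theta + noise
w, b = 0.0, 0.0           # discriminator: D(x) = sigmoid(w * x + b)
lr_d, lr_g, n = 0.2, 0.02, 64

for step in range(3000):
    real = rng.normal(4.0, 1.0, n)
    fake = theta + rng.normal(0.0, 1.0, n)

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr_d * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    b += lr_d * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator: gradient ascent on log D(fake), i.e. shift theta to fool D.
    d_fake = sigmoid(w * fake + b)
    theta += lr_g * np.mean((1 - d_fake) * w)

print(round(theta, 2))    # drifts towards, and then hovers around, 4
```

Like real adversarial training, even this toy can oscillate rather than settle neatly; the point is only that neither agent is told the answer, each simply adapts to the other.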

The experiment was stopped not because Facebook was afraid that some runaway explosive intelligence process was underway, but because the objective was to have the agents use plain English, not a made-up language.

Image: Picture taken at the Institute for Systems and Robotics of Técnico Lisboa, courtesy of IST.

Stuart Russell and Sam Harris on The Dawn of Artificial Intelligence

In one of the latest episodes of his interesting podcast, Waking Up, Sam Harris discusses the future of Artificial Intelligence (AI) with Stuart Russell.

Stuart Russell is one of the foremost world authorities on AI, and co-author of the most widely used textbook on the subject, Artificial Intelligence: A Modern Approach. Interestingly, most of the (very interesting) conversation focuses not so much on the potential of AI as on the potential dangers of the technology.

Many AI researchers have dismissed offhand the worries that have been expressed over the possibility of runaway Artificial Intelligence. In fact, most active researchers know very well that most of their time is spent worrying about the convergence of algorithms, the inefficiency of training methods, or difficult searches for the right architecture for some narrow problem. AI researchers spend no time at all worrying about the possibility that the systems they are developing will suddenly become too intelligent and a danger to humanity.

On the other hand, famous philosophers, scientists and entrepreneurs, such as Elon Musk, Richard Dawkins, Bill Gates, and Nick Bostrom have been very vocal about the possibility that man-made AI systems may one day run amok and become a danger to humanity.

From this duality, one is led to believe that only people outside the field really worry about the possibility of dangerous super-intelligences. People inside the field pay little or no attention to that possibility and, in many cases, consider these worries baseless and misinformed.

That is why this podcast, with the participation of Stuart Russell, is interesting and well worth listening to. Russell cannot be accused of being an outsider to the field of AI, and yet his latest interests are focused on the problem of making sure that future AIs will have objectives closely aligned with those of the human race.

The Great Filter: are we rare, are we first, or are we doomed?

Fermi’s Paradox (the fact that we have never detected any sign of aliens even though, conceptually, life could be relatively common in the universe) has already been discussed in this blog, as new results come in about the rarity of life-bearing planets, the discovery of new Earth-like planets, or even the detection of possible signs of aliens.

There are a number of possible explanations for Fermi’s Paradox and one of them is exactly that sufficiently advanced civilizations could retreat into their own planets, or star systems, exploring the vastness of the nano-world, becoming digital minds.

A very interesting concept related to Fermi’s Paradox is the Great Filter theory, which states, basically, that if no other intelligent civilizations exist in the galaxy, then we, as a civilization, are either rare, first, or doomed. As this post very clearly describes, one of these three explanations has to be true if no other civilizations exist.

The Great Filter theory is based on Robin Hanson’s argument that the failure to find any extraterrestrial civilizations in the observable universe has to be explained by the existence, somewhere in the sequence of steps that leads from planet formation to the creation of technological civilizations, of an extremely unlikely step, which he called the Great Filter.
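To see why a single extremely unlikely step is enough, here is a sketch with purely illustrative, made-up probabilities (none of these numbers come from Hanson): the chance of a star system producing a technological civilization is the product of the step probabilities, so one tiny factor dominates everything else.

```python
# Made-up step probabilities, for illustration only.
steps = {
    "habitable planet": 0.1,
    "abiogenesis": 1e-9,              # the candidate Great Filter in this example
    "complex cells": 0.01,
    "intelligence": 0.01,
    "technological civilization": 0.1,
}

p = 1.0
for name, prob in steps.items():
    p *= prob

stars_in_galaxy = 1e11                # rough order of magnitude
print(f"chance per star system: {p:.1e}")
print(f"expected civilizations in the galaxy: {p * stars_in_galaxy:.4f}")
```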

This Great Filter may be behind us, in the process that led from inorganic compounds to humans. That means that we, intelligent beings, are rare in the universe. Maybe the conditions that lead to life are extremely rare, either due to the instability of planetary systems, or to the low probability that life gets started in the first place, or to some other phenomenon that we were lucky enough to overcome.

It can also happen that the conditions that make the existence of life possible are relatively recent in the universe. That would mean that conditions for life only became common in the universe (or the galaxy) in the last few billion years. In that case, we may not be rare, but we would be the first, or among the first, to develop intelligent life.

The final explanation is that the Great Filter is not behind us, but ahead of us. That would mean that many technological civilizations develop but, in the end, they all collapse, due to unknown factors (though we can guess some of them). In this case, we are doomed, like all the other civilizations that, presumably, existed.

There is, of course, another group of explanations, which states that advanced civilizations do exist in the galaxy, but we are simply too dumb to contact or observe them. Actually, many people believe that we should not even be trying to contact them by broadcasting radio signals into space, advertising that we are here. It may simply be too dangerous.

 

Image by the Bureau of Land Management, available at Wikimedia Commons

The Wealth of Humans: Work and its Absence in the Twenty-first Century

The Wealth of Humans, by Ryan Avent, a senior editor at The Economist, addresses the economic and social challenges that the rapid development of digital technologies imposes on societies. Although the book analyzes the mechanisms, technologies, and effects that may lead to massive unemployment, brought about by the emergence of intelligent systems and smart robots, its focus is on the economic and social consequences of those technologies.

The main point Avent makes is that market mechanisms may be relied upon to create growth and wealth for society, and to improve the average condition of humans, but cannot be relied upon to ensure adequate redistribution of the generated wealth. Left to themselves, the markets will tend to concentrate wealth. This happened in the industrial revolution, but society adapted (unions, welfare, education) to ensure that adequate redistribution mechanisms were put in place.

To Avent, this tendency towards increased income asymmetry between the top earners and the rest, which is already so clear, will only be made worse by the inevitable glut of labor that digital technologies and artificial intelligence will create.

There are many possible redistribution mechanisms, from universal basic income to minimum wage requirements, but, as the author points out, none is guaranteed to work well in a society where a large majority of people may become unable to find work. The largest and most important asymmetry that remains is, probably, the one between developed countries and underdeveloped ones. Although this asymmetry was somewhat reduced by the recent economic development of the BRIC countries, Avent believes that was a one-time event that will not recur.

Avent points out that the strength of the developed economies is not a direct consequence of the factors most commonly thought to be decisive: more capital, adequate infrastructures, and better education. These factors do indeed play a role, but what makes the decisive difference is “social capital”, the set of rules shared by members of developed societies that makes them more effective at creating value for themselves and for society. Social capital, the unwritten set of rules that makes it possible to create value in a society, a country, or a company, cannot be easily copied, sold, or exported.

This social capital (which, interestingly, closely matches the idea of shared beliefs Yuval Harari describes in Sapiens) can be assimilated by immigrants or new hires, who can learn how to contribute to the creation of wealth and benefit from it. However, as countries and societies become averse to receiving immigrants, and companies reduce their workforces, social capital becomes more and more concentrated.

In the end, Avent concludes that no public policies and no known economic theories are guaranteed to fix the problems of inequality, mass unemployment, and lack of redistribution. It comes down to society as a whole, i.e., to each one of us, to decide to be generous and altruistic, in order to make sure that the wealth created by the invisible hand of the market benefits all of mankind.

A must-read if you care about the effects of asymmetries in income distribution on societies.

IEEE Spectrum special issue on whether we can duplicate a brain

Maybe you have read The Digital Mind, or Ray Kurzweil’s The Singularity is Near, or other similar books, thought it all a bit far-fetched, and wondered whether the authors are bonkers or just dreamers.

Wonder no more. The latest issue of the flagship publication of the Institute of Electrical and Electronics Engineers, IEEE Spectrum, is dedicated to the interesting and timely question of whether we can copy the brain and use it as a blueprint for intelligent systems. This issue, which you can access here, includes many interesting articles, definitely worth reading.

I cannot describe here, even briefly, all the interesting articles in this special issue, but it is worthwhile reading the introduction, on the prospect of near-future intelligent personal assistants, or the piece, by Jennifer Hasler, on how we could build an artificial brain right now.

Other articles address the question of how computationally expensive it is to simulate a brain at the right level of abstraction. Karlheinz Meier’s article on this topic explains very clearly why present-day simulations are so slow:

“The big gap between the brain and today’s computers is perhaps best underscored by looking at large-scale simulations of the brain. There have been several such efforts over the years, but they have all been severely limited by two factors: energy and simulation time. As an example, consider a simulation that Markus Diesmann and his colleagues conducted several years ago using nearly 83,000 processors on the K supercomputer in Japan. Simulating 1.73 billion neurons consumed 10 billion times as much energy as an equivalent size portion of the brain, even though it used very simplified models and did not perform any learning. And these simulations generally ran at less than a thousandth of the speed of biological real time.

Why so slow? The reason is that simulating the brain on a conventional computer requires billions of differential equations coupled together to describe the dynamics of cells and networks: analog processes like the movement of charges across a cell membrane. Computers that use Boolean logic—which trades energy for precision—and that separate memory and computing, appear to be very inefficient at truly emulating a brain.”
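A back-of-envelope calculation, using the numbers in the quote plus two assumptions of my own (a whole brain of roughly 86 billion neurons dissipating about 20 W), gives a feeling for what these factors mean in practice:

```python
brain_neurons = 86e9        # assumed total number of neurons in a human brain
brain_power_w = 20.0        # assumed power consumption of a human brain, in watts
sim_neurons = 1.73e9        # from the quote
energy_ratio = 1e10         # from the quote: 10 billion times more energy
slowdown = 1000             # from the quote: less than 1/1000 of real time

# Power of the equivalent-size portion of the brain, and the energy the
# simulation needs in order to cover one second of biological time.
portion_power_w = brain_power_w * sim_neurons / brain_neurons    # ~0.4 W
sim_energy_j = portion_power_w * 1.0 * energy_ratio              # ~4e9 J
# That biological second takes at least `slowdown` seconds of wall time,
# so the average power draw of the machine is roughly:
sim_power_w = sim_energy_j / slowdown                            # a few megawatts
print(f"~{sim_energy_j:.1e} J per biological second, ~{sim_power_w / 1e6:.0f} MW average draw")
```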

Another interesting article, by Eliza Strickland, describes some of the efforts under way to reverse engineer animal intelligence in order to build true artificial intelligence, including a part about the work of David Cox, whose team trains rats to perform specific tasks and then analyses their brains by slicing and imaging them:

“Then the brain nugget comes back to the Harvard lab of Jeff Lichtman, a professor of molecular and cellular biology and a leading expert on the brain’s connectome. Lichtman’s team takes that 1 mm³ of brain and uses the machine that resembles a deli slicer to carve 33,000 slices, each only 30 nanometers thick. These gossamer sheets are automatically collected on strips of tape and arranged on silicon wafers. Next the researchers deploy one of the world’s fastest scanning electron microscopes, which slings 61 beams of electrons at each brain sample and measures how the electrons scatter. The refrigerator-size machine runs around the clock, producing images of each slice with 4-nm resolution.”
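The data volumes implied by these numbers are staggering. Here is a rough estimate based on the figures in the quote (33,000 slices of a 1 mm × 1 mm face, imaged at 4 nm resolution), assuming one byte per pixel, a figure I picked purely for illustration:

```python
slices = 33_000                           # from the quote
pixels_per_side = 1e-3 / 4e-9             # 1 mm face at 4 nm per pixel = 250,000
bytes_per_pixel = 1                       # assumed, for illustration
bytes_per_slice = pixels_per_side ** 2 * bytes_per_pixel
total_bytes = slices * bytes_per_slice
print(f"~{total_bytes / 1e15:.0f} petabytes for a single cubic millimeter of brain")
```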

Other approaches are even more ambitious. George Church, a well-known researcher in biology and bioinformatics, uses sequencing technologies to efficiently obtain large-scale, detailed information about brain structure:

“Church’s method isn’t affected by the length of axons or the size of the brain chunk under investigation. He uses genetically engineered mice and a technique called DNA bar coding, which tags each neuron with a unique genetic identifier that can be read out from the fringy tips of its dendrites to the terminus of its long axon. “It doesn’t matter if you have some gargantuan long axon,” he says. “With bar coding you find the two ends, and it doesn’t matter how much confusion there is along the way.” His team uses slices of brain tissue that are thicker than those used by Cox’s team—20 μm instead of 30 nm—because they don’t have to worry about losing the path of an axon from one slice to the next. DNA sequencing machines record all the bar codes present in a given slice of brain tissue, and then a program sorts through the genetic information to make a map showing which neurons connect to one another.”

There is also a piece on the issue of AI and consciousness, where Christof Koch and Giulio Tononi describe the application of their (more than dubious, in my humble opinion) Integrated Information Theory to the question: can we quantify machine consciousness?

The issue also includes interesting quotes and predictions by famous visionaries, such as Ray Kurzweil, Carver Mead, Nick Bostrom, and Rodney Brooks, among others.

Images from the special issue of IEEE Spectrum.