AIs running wild at Facebook? Not yet, not even close!

Much was written about two artificial-intelligence systems developing their own language. Headlines like “Facebook shuts down AI after it invents its own creepy language” and “Facebook engineers panic, pull plug on AI after bots develop their own language” were all over the place, seeming to imply that we were on the verge of a significant incident in AI research.

As it turns out, nothing significant really happened, and these headlines are due only to the media's inordinate appetite for catastrophic news. Most AI systems currently under development have narrow application domains and do not have the capability to develop their own general strategies, languages, or motivations.

To be fair, many AI systems do develop their own language. Whenever a neural network is trained to perform pattern recognition, for instance, it chooses a specific internal representation to encode particular features of the patterns under analysis. When everything goes smoothly, these internal representations correspond to meaningful concepts in those patterns (a wheel of a car, say, or an eye) and are combined by the network to produce the output of interest. In fact, creating these internal representations, which in a way correspond to concepts in a language, is one of the most interesting features of neural networks, and of deep neural networks in particular.
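
As a concrete (if toy) illustration of what such an internal representation looks like, here is a minimal sketch of my own, unrelated to any Facebook system: a tiny network trained on the XOR problem in plain numpy. After training, the hidden-layer activations are the code the network invented for its inputs; the output layer merely combines these learned “concepts”.

```python
# A toy two-layer network trained on XOR; the hidden layer develops its own
# internal representation of the input patterns. Illustrative sketch only.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    h = sigmoid(X @ W1 + b1)             # hidden activations: the learned representation
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)  # backpropagate the squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

h = sigmoid(X @ W1 + b1)
print(np.round(h, 2))                    # one internal code per input pattern
print(np.round(sigmoid(h @ W2 + b2), 2)) # network output, should be close to XOR
```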

Therefore, systems creating their own languages are nothing new, really. What happened with the Facebook agents that made the news was that two systems were being trained against each other, in an adversarial setup in the spirit of generative adversarial networks. The idea is that system A tries to make the task of system B more difficult and vice versa, so both systems evolve towards becoming better at their respective tasks, whatever those are. As this post clearly describes, the two systems were being trained on a specific negotiation task, and they communicated using English words. As the systems evolved, they started to use unconventional combinations of words to exchange information, producing the seemingly strange exchanges behind the scary headlines, such as this one:

Bob: I can i i everything else

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i everything else

Alice: balls have a ball to me to me to me to me to me to me to me to me

Strange as this exchange may look, nothing out of the ordinary was really happening. The neural network training algorithms were simply finding concept representations which were used by the agents to communicate their intentions in this specific negotiation task (which involved exchanging balls and other items).
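
For readers unfamiliar with adversarial training, the sketch below is a generic toy example (my own, written for illustration; it has nothing to do with Facebook's actual code or negotiation agents). A generator network tries to produce samples that look like they come from a target distribution, while a discriminator tries to tell real samples from generated ones; each network's progress makes the other's task harder, which is the essence of the adversarial setup.

```python
# Minimal adversarial training loop in PyTorch: G learns to imitate a target
# Gaussian distribution, D learns to distinguish real from generated samples.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(5000):
    real = torch.randn(64, 1) * 1.5 + 4.0   # samples from the "true" distribution
    fake = G(torch.randn(64, 8))            # generator's attempt to imitate them

    # Discriminator step: label real samples 1 and generated samples 0
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Generator step: try to make the discriminator believe its samples are real
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()

samples = G(torch.randn(1000, 8))
print(samples.mean().item(), samples.std().item())  # should drift towards ~4.0 and ~1.5
```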

The experiment was stopped not because Facebook was afraid that some runaway explosive intelligence process was underway, but because the objective was to have the agents use plain English, not a made-up language.

Image: Picture taken at the Institute for Systems and Robotics of Técnico Lisboa, courtesy of IST.


IEEE Spectrum special issue on whether we can duplicate a brain

Maybe you have read The Digital Mind, or Ray Kurzweil’s The Singularity is Near, or other similar books, thought it all a bit far-fetched, and wondered whether the authors are bonkers or just dreamers.

Wonder no more. The latest issue of the flagship publication of the Institute of Electrical and Electronics Engineers, IEEE Spectrum, is dedicated to the interesting and timely question of whether we can copy the brain and use it as a blueprint for intelligent systems. This issue, which you can access here, includes many interesting articles, definitely worth reading.

I cannot even begin to describe here, even briefly, the many interesting articles in this special issue, but it is worth reading the introduction, on the prospect of near-future intelligent personal assistants, as well as Jennifer Hasler’s piece on how we could build an artificial brain right now.

Other articles address the question of how computationally expensive it is to simulate a brain at the right level of abstraction. Karlheinz Meier’s article on this topic explains very clearly why present-day simulations are so slow:

“The big gap between the brain and today’s computers is perhaps best underscored by looking at large-scale simulations of the brain. There have been several such efforts over the years, but they have all been severely limited by two factors: energy and simulation time. As an example, consider a simulation that Markus Diesmann and his colleagues conducted several years ago using nearly 83,000 processors on the K supercomputer in Japan. Simulating 1.73 billion neurons consumed 10 billion times as much energy as an equivalent size portion of the brain, even though it used very simplified models and did not perform any learning. And these simulations generally ran at less than a thousandth of the speed of biological real time.

Why so slow? The reason is that simulating the brain on a conventional computer requires billions of differential equations coupled together to describe the dynamics of cells and networks: analog processes like the movement of charges across a cell membrane. Computers that use Boolean logic—which trades energy for precision—and that separate memory and computing, appear to be very inefficient at truly emulating a brain.”
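
To get a feeling for where a factor like “10 billion” can come from, here is a rough back-of-the-envelope calculation. All the inputs are round numbers of my own choosing (the brain’s power draw, the supercomputer’s power, the slowdown factor), not figures from the article, so take the result only as an order-of-magnitude check.

```python
# Back-of-the-envelope check of the brain-vs-simulation energy gap.
# Every input below is an assumed round number, not data from the article.
brain_power_w = 20.0            # assumed: a whole human brain runs on roughly 20 W
total_neurons = 8.6e10          # assumed: roughly 86 billion neurons in a human brain
simulated_neurons = 1.73e9      # from the quote above
supercomputer_power_w = 1.3e7   # assumed: roughly 13 MW for the K supercomputer
slowdown = 2400.0               # assumed: ~40 minutes of wall time per biological second

brain_energy_per_s = brain_power_w * simulated_neurons / total_neurons  # ~0.4 J
sim_energy_per_s = supercomputer_power_w * slowdown                     # ~3e10 J

print(f"energy ratio ~ {sim_energy_per_s / brain_energy_per_s:.1e}")    # ~1e10 to 1e11
```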

Another interesting article, by Eliza Strickland, describes some of the efforts under way to reverse engineer animal intelligence in order to build true artificial intelligence, including a part about the work of David Cox, whose team trains rats to perform specific tasks and then analyses their brains by slicing and imaging them:

“Then the brain nugget comes back to the Harvard lab of Jeff Lichtman, a professor of molecular and cellular biology and a leading expert on the brain’s connectome. Lichtman’s team takes that 1 mm³ of brain and uses the machine that resembles a deli slicer to carve 33,000 slices, each only 30 nanometers thick. These gossamer sheets are automatically collected on strips of tape and arranged on silicon wafers. Next the researchers deploy one of the world’s fastest scanning electron microscopes, which slings 61 beams of electrons at each brain sample and measures how the electrons scatter. The refrigerator-size machine runs around the clock, producing images of each slice with 4-nm resolution.”
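
A quick check (mine, not the article’s) shows that the numbers in the quote are self-consistent: 33,000 slices of 30 nm each stack back up to roughly the original 1 mm block.

```python
# Sanity check of the slicing numbers quoted above.
slices = 33_000
slice_thickness_m = 30e-9                             # 30 nanometers
print(f"{slices * slice_thickness_m * 1e3:.2f} mm")   # ~0.99 mm, i.e. the 1 mm block
```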

Other approaches are even more ambitious. George Church, a well-known researcher in biology and bioinformatics, uses sequencing technologies to efficiently obtain large-scale, detailed information about brain structure:

“Church’s method isn’t affected by the length of axons or the size of the brain chunk under investigation. He uses genetically engineered mice and a technique called DNA bar coding, which tags each neuron with a unique genetic identifier that can be read out from the fringy tips of its dendrites to the terminus of its long axon. “It doesn’t matter if you have some gargantuan long axon,” he says. “With bar coding you find the two ends, and it doesn’t matter how much confusion there is along the way.” His team uses slices of brain tissue that are thicker than those used by Cox’s team—20 μm instead of 30 nm—because they don’t have to worry about losing the path of an axon from one slice to the next. DNA sequencing machines record all the bar codes present in a given slice of brain tissue, and then a program sorts through the genetic information to make a map showing which neurons connect to one another.”

There is also a piece on the issue of AI and consciousness, in which Christof Koch and Giulio Tononi describe their (more than dubious, in my humble opinion) theory on applying Integrated Information Theory to the question of whether we can quantify machine consciousness.

The issue also includes interesting quotes and predictions by famous visionaries, such as Ray Kurzweil, Carver Mead, Nick Bostrom, and Rodney Brooks, among others.

Images from the special issue of IEEE Spectrum.

Homo Deus: A Brief History of Tomorrow

Homo Deus, the sequel to Yuval Harari’s wildly successful Sapiens, aims to chronicle the history of tomorrow and to provide us with a unique and dispassionate view of the future of humanity. In Homo Deus, Harari develops further the strongest idea in Sapiens: that religions (or shared fictions) are the reason humanity came to dominate the world.

Many things are classified by Harari as religions, from traditional ones like Christianity, Islam, or Hinduism, to shared fictions that we tend not to view as religions, such as countries, money, capitalism, or humanism. The ability to share such fictions gave Homo sapiens the capacity to coordinate enormous numbers of individuals in vast common projects: cities, empires and, ultimately, modern technology. This is the idea, proposed in Sapiens, that Harari develops further in this book.

Harari thinks that, with the development of modern technology, humans will doggedly pursue an agenda consisting of three main goals: immortality, happiness and divinity. Humanity will try to become immortal, to live in constant happiness and to be god-like in its power to control nature.

The most interesting part of the book is in the middle, where Harari analyses, in depth, the progressive but effective replacement of ancient religions by the dominant modern religion, humanism. Humanism, the relatively recent idea that there is a unique spark in humans that makes human life sacred and every individual unique, holds that meaning should be sought in the individual choices, views, and feelings of humans. It has almost completely replaced traditional religions (some of them millennia old), which held that meaning was to be found in ancient scriptures or “divine” sayings.

True, many people still believe in traditional religions, but with the exception of a few extremist sects and states, these religions play a relatively minor role in conducting the business of modern societies. Traditional religions have almost nothing to say about the key ideas central to modern societies, the uniqueness of the individual and the importance of freedom of choice, ideas that led to our current view of democracies and ever-growing market-oriented economies. Being religious, in the traditional sense, is viewed as a personal choice, a choice that must exist because of the essential humanist value of freedom of choice.

Harari’s description of the schism of humanism into three flavors, liberal humanism, socialist humanism, and evolutionary humanism (Nazism and other similar systems), is interesting and entertaining. Liberal humanism, based on the ideals of free choice, capitalism, and democracy, gained the upper hand in the twentieth century, with occasional relapses, over socialism and enlightened dictatorships.

The last part of the book, where one expects Harari to give us a hint of what may come after humanism, once technology creates systems and machines that make humanist creeds obsolete, is rather disappointing. Instead of presenting us with the promises and threats of transhumanism, he clings to common clichés and rather mundane worries.

Harari firmly believes that there are two types of intelligent systems: biological ones, which are conscious and have, possibly, some other special properties, and the artificial ones, created by technology, which are not conscious, even though they may come to outperform humans in almost every task. According to him, artificial systems may supersede humans in many jobs and activities, and possibly even replace humans as the intelligent species on Earth, but they will never have that unique spark of consciousness that we, humans, have.

This belief leads to two rather short-sighted final chapters, which are little more than a rant against the likes of Facebook, Google, and Amazon. Harari is (and justifiably so) particularly aghast at the new fad, so common these days, of believing that every single human experience should go online, to make it shareable and give it meaning. The downside is that this fad provides data to the all-powerful algorithms that are learning all there is to know about us. I agree with him that this is a worrying trend, but viewing it as the major threat of future technologies is a mistake. There are much more important issues to deal with.

It is not that these chapters are pessimistic, even though they are. It is that, unlike in the rest of Homo Deus (and in Sapiens), in these last chapters Harari’s views seem to be locked inside a narrow and traditionalist view of intelligence, society, and, ultimately, humanity.

Other books, like Superintelligence, What Technology Wants, or The Digital Mind, provide, in my opinion, much more interesting views on what a transhumanist society may come to be.

The Digital Mind: How Science is Redefining Humanity

Following the release in the US, The Digital Mind, published by MIT Press, is now available in Europe, at an Amazon store near you (and possibly in other bookstores). The book covers the evolution of technology, leading towards the expected emergence of digital minds.

Here is a short rundown of the book, kindly provided by yours truly, the author.

New technologies have been introduced into human lives at an ever-increasing rate since the first significant advances took place with the cognitive revolution, some 70,000 years ago. Although electronic computers are recent and have been around for only a few decades, they represent just the latest way to process information and create order out of chaos. Before computers, the job of processing information was done by living organisms, which are nothing more than complex information-processing devices, created by billions of years of evolution.

Computers execute algorithms, sequences of small steps that, in the end, perform some desired computation, be it simple or complex. Algorithms are everywhere, and they have become an integral part of our lives. Evolution is, in itself, a complex and long-running algorithm that created all species on Earth. The most advanced of these species, Homo sapiens, was endowed with a brain that is the most complex information-processing device ever devised. Brains enable humans to process information in a way unparalleled by any other species, living or extinct, or by any machine. They provide humans with intelligence, consciousness and, some believe, even with a soul, a characteristic that makes humans different from all other animals and from any machine in existence.

But brains also enabled humans to develop science and technology to a point where it is possible to design computers with a power comparable to that of the human brain. Artificial intelligence will one day make it possible to create intelligent machines, and computational biology will one day enable us to model, simulate and understand biological systems, and even complete brains, with unprecedented levels of detail. From these efforts, new minds will eventually emerge, minds that will emanate from the execution of programs running in powerful computers. These digital minds may one day rival our own, become our partners and replace humans in many tasks. They may usher in a technological singularity, a revolution in human society unlike any that happened before. They may make humans obsolete, even a threatened species, or they may make us super-humans or demi-gods.

How will we create these digital minds? How will they change our daily lives? Will we recognize them as equals or will they forever be our slaves? Will we ever be able to simulate truly human-like minds in computers? Will humans transcend the frontiers of biology and become immortal? Will humans remain, forever, the only known intelligence in the universe?

 

Are Fast Radio Bursts a sign of aliens?

In a recently published paper in The Astrophysical Journal Letters, Manasvi Lingam and Abraham Loeb, from the Harvard Center for Astrophysics, propose a rather intriguing explanation for the phenomena known as Fast Radio Bursts (FRBs). FRBs are very powerful and very short bursts of radio waves, originating, as far as is known, in galaxies other than our own. FRBs last for only a few milliseconds but, during that interval, they shine with the power of millions of suns.

The origin of FRBs remains a mystery. Although they were first detected in 2007, in archived data taken in 2001, and a number of FRBs have been observed since then, no clear explanation of the phenomenon has yet been found. They could be emitted by supermassive neutron stars, or they could be the result of massive stellar flares, millions of times larger than anything observed in our Sun. All of these explanations, however, remain speculative, as they fail to fully account for the data and to explain the exact mechanisms that generate these massive bursts of energy.

The rather puzzling, and possibly far-fetched, explanation proposed by Lingam and Loeb is that these short-lived, intense pulses of radio waves could be artificial radio beams, used by advanced civilizations to power light-sail starships.

Light-sail starships have been discussed as one technology that could possibly be used to send missions to other stars. A light sail, attached to a starship, deploys into space and is accelerated by a powerful light source, such as a laser, drawing on energy generated at the sending planet. Existing proposals are based on the idea of using very small starships, possibly weighing only a few grams, which could be accelerated by pointing a powerful laser at them. Such a starship could be accelerated to a significant fraction of the speed of light in only a few days, using a sufficiently powerful laser, and could reach the nearest stars in only a few decades.
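
The physics behind such proposals is straightforward radiation pressure: a perfectly reflective sail intercepting a beam of power P feels a force of 2P/c. The sketch below uses toy numbers of my own choosing (not the parameters analysed by Lingam and Loeb) just to show the scale of the quantities involved.

```python
# Toy light-sail estimate from radiation pressure (assumed, illustrative numbers;
# relativistic effects and beam losses are ignored).
c = 3.0e8                  # speed of light, m/s
beam_power_w = 1.0e8       # assumed: a 100 MW beam stays locked on the sail
craft_mass_kg = 2.0e-3     # assumed: a 2-gram sail plus payload
target_speed = 0.2 * c     # assumed goal: 20% of the speed of light

acceleration = 2 * beam_power_w / (craft_mass_kg * c)   # F = 2P/c for a perfect reflector
time_to_target_s = target_speed / acceleration

print(f"acceleration ~ {acceleration:.0f} m/s^2, "
      f"time to 0.2c ~ {time_to_target_s / 86400:.1f} days")  # ~2 days with these numbers
```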

In their article, Lingam and Loeb discuss the rather intriguing idea that FRBs could be the flashes caused by such a technology, used by other civilizations to power their light-sail spaceships. By analyzing the characteristics of the bursts, they conclude that these civilizations would have to use massive amounts of energy to produce the pulses, which would power starships weighing many thousands of tons. The characteristics of the bursts are, according to computations performed by the authors, compatible with an origin on a planet approximately the size of Earth.

The authors use the available data to compute an expected number of FRB-enabled civilizations in the galaxy, under the assumption that such a technology is widespread throughout the universe. They reach the conclusion that a few thousand civilizations of this type in our galaxy would account for the expected frequency of observed FRBs. Needless to say, a vast number of assumptions is used to reach such a conclusion, which is, they point out, consistent with the values one obtains by using Drake’s equation with optimistic parameters.

The paper has been analyzed by many secondary sources, including The Economist and The Washington Post.

 

Image source: ESO. Available at Wikimedia Commons.