Meet Duplex, your new assistant, courtesy of Google

Advances in natural language processing have enabled systems such as Siri, Alexa, Google Assistant, or Cortana to be at the service of anyone who owns a smartphone or a computer. Still, so far, none of these systems has managed to cross the thin line that would make us take them for humans. When we ask Alexa to play music or Siri to dial a telephone number, we know very well that we are talking to a computer, and the systems' replies would remind us of that, were we to forget it.

It was to be expected that, as the technology evolved, these interactions would become more and more natural, possibly reaching the point where a computer could impersonate a real human. That would take us closer to the situation Alan Turing envisioned, in which you cannot tell a human apart from a computer simply by talking to both.

In an event widely reported in the media, Google demonstrated Duplex at its I/O 2018 conference: a system able to process and execute requests in specific domains, interacting in a very human way with human operators. While Google states that the system is still under development and can only handle very specific situations, one gets the feeling that, soon enough, digital assistants will be able to interact with humans without disclosing their artificial nature. You can read the Google AI blog post here, or just listen to a couple of examples, in which Duplex schedules a haircut and makes a restaurant reservation. The speech recognition and speech synthesis systems, as well as the underlying knowledge base and natural language processing engines, operate flawlessly in these cases, lending weight to the widely held expectation that AI systems will soon be replacing humans in many specific tasks.

Photo by Kevin Bhagat on Unsplash


European Commission releases communication on Artificial Intelligence

Today, April 25th, 2018, the European Commission released a communication entitled Artificial Intelligence for Europe, and a related press release, addressing what could become the European strategy for Artificial Intelligence.

The document states that “Like the steam engine or electricity in the past, AI is transforming our world, our society and our industry. Growth in computing power, availability of data and progress in algorithms have turned AI into one of the most strategic technologies of the 21st century.”

The communication argues that “The EU as a whole (public and private sectors combined) should aim to increase this investment [in Artificial Intelligence] to at least EUR 20 billion by the end of 2020. It should then aim for more than EUR 20 billion per year over the following decade.” These figures should be compared with the roughly EUR 4-5 billion currently spent on AI.

The communication also addresses some questions raised by the increased ability of AI systems to replace human jobs: “The first challenge is to prepare the society as a whole. This means helping all Europeans to develop basic digital skills, as well as skills which are complementary to and cannot be replaced by any machine such as critical thinking, creativity or management. Secondly, the EU needs to focus efforts to help workers in jobs which are likely to be the most transformed or to disappear due to automation, robotics and AI. This is also about ensuring access for all citizens, including workers and the self-employed, to social protection, in line with the European Pillar of Social Rights. Finally, the EU needs to train more specialists in AI, building on its long tradition of academic excellence, create the right environment for them to work in the EU and attract more talent from abroad.”

This initiative, which has already received significant press coverage, may become Europe’s answer to the strong investments China and the United States are making in Artificial Intelligence technologies. There is also a fact sheet about the communication.

The Second Machine Age

The Second Machine Age, by Erik Brynjolfsson and Andrew McAfee, two MIT professors and researchers, offers mostly an economist’s point of view on the consequences of the technological changes that are remaking civilisation.

Although a fair number of chapters are dedicated to the technological innovations shaping the first decades of the 21st century, the book is at its best when the economic issues are presented and discussed.

The book is particularly interesting in its treatment of the bounty vs. spread dilemma: will economic growth be fast enough to lift everyone’s standard of living, or will the increasing concentration of wealth lead to such a rise in inequality that many will be left behind?

The chapter that presents evidence of the steady increase in inequality is especially compelling. While average income in the US has been rising steadily over the last decades, median income (the income of those exactly in the middle of the pay scale) has stagnated for decades and may even have fallen in the last few years. For those at the bottom of the scale, the situation is much worse now than it was decades ago.
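A toy example shows how this can happen (the figures below are invented for illustration, not data from the book): if income gains accrue almost entirely at the top, the mean rises while the median stays put.

```python
# Toy illustration of mean vs. median income; the figures are invented,
# not taken from the book.
from statistics import mean, median

incomes_then = [20_000, 30_000, 40_000, 50_000, 200_000]
incomes_now  = [20_000, 30_000, 40_000, 50_000, 500_000]  # only the top income grew

print(mean(incomes_then), median(incomes_then))  # 68000 40000
print(mean(incomes_now),  median(incomes_now))   # 128000 40000  (mean up, median flat)
```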

Abundant evidence of this trend also comes from the analysis of the shares of GDP that are due to wages and to corporate profits. Although these two fractions of GDP have fluctuated somewhat in the last century, there is mounting evidence that the fraction due to corporate profits is now increasing, while the fraction due to wages is decreasing.

All this evidence, put together, leads to the inevitable conclusion that society has to explicitly address the challenges posed by the fourth industrial revolution.

The last chapters are, indeed, dedicated to this issue. The authors do not advocate a universal basic income, but come out in defence of a negative income tax for those whose earnings fall below a given level. The mathematics of the proposal are somewhat unclear but, in the end, one thing remains certain: society will have to address the problem of mounting inequality brought about by technology and globalisation.
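For readers unfamiliar with the mechanism, here is a minimal sketch of a generic negative income tax; the threshold and subsidy rate are arbitrary placeholders, not figures from the book.

```python
# Generic negative income tax: below a threshold, the earner receives a subsidy
# equal to a fraction of the shortfall. Threshold and rate are illustrative only.
def negative_income_tax(earnings: float,
                        threshold: float = 30_000.0,
                        subsidy_rate: float = 0.5) -> float:
    """Return the subsidy paid to the earner (zero at or above the threshold)."""
    if earnings >= threshold:
        return 0.0  # the normal tax schedule applies above the threshold
    return subsidy_rate * (threshold - earnings)

print(negative_income_tax(10_000))  # 10000.0 paid out to the earner
print(negative_income_tax(30_000))  # 0.0 at or above the threshold
```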

MIT distances itself from Nectome, a mind uploading company

The MIT Media Lab, a unit of MIT, has decided to sever its ties with Nectome, a startup that proposes to offer a technology that processes and chemically preserves a brain, down to its most minute details, in order to make it possible, at least in principle, to simulate your brain and upload your mind sometime in the future.

According to the MIT news release, “MIT’s connection to the company came into question after MIT Technology Review detailed Nectome’s promotion of its ‘100 percent fatal’ technology” in an article posted on the MIT Technology Review site.

As reported in this blog, Nectome claims that by preserving the brain, it may be possible, one day, “to digitize your preserved brain and use that information to recreate your mind”. Nectome acknowledges, however, that the technology is fatal to the brain donor and that there are no guarantees that future recovery of memories, knowledge and personality will be possible.

Detractors have argued that the proposal is not sound, since simulating a preserved brain is a technology that is at least many decades in the future and may even be impossible in principle. The criticisms were, however, mostly based on the argument that the whole enterprise is profoundly unethical.

This kind of discussion, between proponents of technologies aimed at performing whole brain emulation sometime in the future and detractors who argue that such an endeavour is fundamentally flawed, has occurred in the past, most notably in the 2014 controversy concerning the objectives of the Human Brain Project. In that controversy, critics argued that the goal of a large-scale simulation of the brain is premature and unsound, and that funding should be redirected towards more conventional approaches to the understanding of brain function. Supporters of the Human Brain Project approach argued that reconstructing and simulating the human brain is an important objective in itself, one that will bring many benefits and advance our knowledge of the brain and of the mind.

Picture by the author.

Uber temporarily halts self-driving cars in the wake of fatal accident

Uber decided to halt all self-driving car operations following a fatal accident involving an Uber car driving in autonomous mode in Tempe, Arizona. Although the details are sketchy, Elaine Herzberg, a 49-year-old woman, was crossing the street with her bike, outside the crosswalk, when she was fatally struck by a Volvo XC90 outfitted with the company’s sensing systems, operating in autonomous mode. She was taken to the hospital, where she later died of her injuries. A human safety driver was behind the wheel but did not intervene. The weather was clear and no special driving conditions have been reported, but reports say she crossed the road suddenly, coming from a poorly lit area.

The accident raised concerns about the safety of autonomous vehicles and the danger they may pose to people. Uber has decided to halt all self-driving car operations pending the investigation of the accident.

Video released by the Tempe police shows the poor light conditions and the sudden appearance of the woman with the bike. From the video, the collision looks unavoidable on the basis of camera images alone; other sensors, however, might have helped.

In 2016, roughly one person died in traffic accidents for every 100 million miles travelled by cars in the United States. Uber has reportedly logged about 3 million miles in its autonomous vehicles. Since no technology will reduce the number of accidents to zero, further data will be required to assess the comparative safety of autonomous and non-autonomous vehicles.
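A back-of-the-envelope calculation, using the figures above, shows why 3 million miles is far too small a sample for a meaningful comparison:

```python
# Rough comparison of the fatality rates quoted above (illustrative only).
human_rate = 1 / 100_000_000     # fatalities per mile, 2016 US figure
uber_miles = 3_000_000           # autonomous miles reportedly logged by Uber

print(human_rate * uber_miles)   # 0.03 -- at the human-driver rate, far fewer than
                                 # one fatality would be expected, so a single crash
                                 # says little, statistically, about comparative safety
```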

Photo credits: ABC-15 via Associated Press.

Nectome, a Y Combinator startup, wants to upload your mind

Y Combinator is a well-known startup accelerator that accepts and supports startups developing new ideas. Well-known companies such as Airbnb, Dropbox and Unbabel were incubated there, as were many others that became successful.

Wild as the ideas pitched at Y Combinator may be, no proposal so far has been as ambitious as the one pitched by Nectome, a startup that wants to back up your mind. More precisely, Nectome wants to process and chemically preserve your brain, down to its most detailed structures, in order to make it possible to upload your mind sometime in the future. Robert McIntyre, founder and CEO of Nectome and an MIT graduate, will pitch his company at a meeting in New York next week.

Nectome is committed to the goal of archiving your mind, as the description on its website puts it, by building the next generation of tools to preserve the connectome, the pattern of neuron interconnections that constitutes a brain. Nectome’s technology uses a process known as vitrifixation (also known as aldehyde-stabilized cryopreservation) to stabilize and preserve a brain, down to its finest structures.

The idea is to keep the physical structure of the brain intact for the future (even though the process destroys the actual brain), in the hope that one may one day reverse engineer and reproduce, in the memory of a computer, the working processes of that brain. This idea, that a particular brain might be simulated in a computer, a process known as mind uploading, is, of course, not novel. It was popularized by many authors, most famously by Ray Kurzweil in his books, and has also been addressed in non-fiction works such as Superintelligence and The Digital Mind, both featured in this blog.

Photo by Nectome

The Computer and the Brain

The Computer and the Brain, first published in 1958, is a delightful little book by John von Neumann, his attempt to compare two very different information processing devices: computers and brains. Although written more than sixty years ago, it retains more than historical interest, even though the two topics it addresses have developed enormously in the decades since von Neumann’s death.

John von Neumann’s genius comes through very clearly in this essay. Sixty years ago, very few people knew what a computer was, and probably even fewer had any idea how the brain performed its magic. This book, written just a few years after the invention of the transistor (by Bardeen and Brattain) and the discovery of the membrane mechanisms that explain the electrical behaviour of neurons (by Hodgkin and Huxley), nonetheless compares, in very clear terms, the relative computational power of computers and brains.

Von Neumann’s aim is to compare the characteristics of the processing devices used by computers (vacuum tubes and transistors) with those used by the brain (neurons), and to do so objectively, in terms of their ability to process information. He addresses the speed, size, memory and other characteristics of the two types of information processing devices.

One of the central and (to me) most interesting parts of the book is the comparison of artificial information processing devices (vacuum tubes and transistors) with natural information processing devices (neurons), in terms of speed and size.

Von Neumann concludes that vacuum tubes and transistors are faster than neurons by a factor of 10,000 to 100,000, but occupy about 1,000 times more space (with the technologies of the day). Altogether, if one assumes that speed can be traded for number of devices (for instance, by reusing electronic devices to perform computations that, in the brain, are performed by slower but independent neurons), his comparisons lead to the conclusion (not explicit in the book, I must add) that an electronic computer the size of a human brain would be one to two orders of magnitude less powerful than the human brain itself.

John von Neumann could not have predicted, in 1957, that transistors would be packed, by the billions, into integrated circuits no larger than a postage stamp. If one uses the numbers that correspond to today’s technologies, one is led to conclude that a modern CPU (such as the Intel Core i7), with a billion transistors operating in the nanosecond range, is a few orders of magnitude (about 10,000 times) more powerful than the human brain, with its hundred billion neurons operating in the millisecond range.
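The arithmetic behind that estimate is simple; the sketch below multiplies device count by switching rate for each system, using the same round numbers:

```python
# Device-operations per second, using the round figures quoted above.
cpu_ops   = 1e9  * 1e9   # a billion transistors switching in the nanosecond range
brain_ops = 1e11 * 1e3   # a hundred billion neurons firing in the millisecond range

print(cpu_ops / brain_ops)  # 10000.0 -- about four orders of magnitude in the CPU's favour
```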

Of course one has to consider, as John von Neumann also wrote, that a neuron is considerably more complex and can perform more complex computations than a transistor. But even if one takes that into account, and assumes that a transistor is roughly equivalent to a synapse, in raw computing power, one gets the final result that the human brain and an Intel Core i7 have about the same raw processing power.
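Repeating the calculation with synapses instead of neurons narrows the gap considerably; the synapse count used below (about 10^14) is a commonly cited order of magnitude and is my assumption, not a figure from the book:

```python
# Same comparison, counting synapses rather than neurons.
cpu_ops     = 1e9  * 1e9   # transistors x switching rate, as before
synapse_ops = 1e14 * 1e3   # ~1e14 synapses (assumed) with millisecond-range events

print(cpu_ops / synapse_ops)  # 10.0 -- within an order of magnitude, roughly comparable
```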

It is a sobering thought, one which von Neumann would certainly have liked to share.