The implications of AI in education and welfare

Re-Educating Rita, part of The Economist's extensive special report on Artificial Intelligence, discusses some important consequences of AI for policymakers in education and welfare.

A relatively obvious consequence is the need for lifelong learning, imposed by the rapid pace of technological change. The standard model of higher education, as currently adopted by universities and colleges, is based on condensing everything you need to know into a four- or five-year degree. What you get from those years is supposed to last for about half a century, until you retire or die. Getting an education in this way has been compared to “drinking from a fire hose”.

Clearly, this approach is no longer adequate, and schools should adapt their strategies to a world where knowledge acquired in college may become obsolete within a few years. Recent developments in MOOCs and other forms of flexible education are already changing the landscape of higher education and forcing radical changes in education policy.
[Image from the print edition of The Economist]

As The Economist points out, “… as knowledge becomes obsolete more quickly, the most important thing will be learning to relearn, rather than learning how to do one thing very well.” Focusing on reasoning, learning skills and strong fundamentals, such as math and physics, will be important, as will the ability to come back to school to learn new technologies, approaches and theories.

Another issue tackled by The Economist in this article is the idea of a universal basic income, discussed here in a previous post. First proposed during the industrial revolution by Thomas Paine and John Stuart Mill, among others, the idea that everyone should receive from the state a fixed amount that is enough to live on (and remain a consumer) has been gaining popularity lately.

According to The Economist, “The idea enjoys broad support within the technology industry: Y Combinator, a startup incubator, is even funding a study of the idea in Oakland, California. … The idea seems to appeal to techie types in part because of its simplicity and elegance (replacing existing welfare and tax systems, which are like badly written programming code, with a single line) and in part because of its Utopianism. A more cynical view is that it could help stifle complaints about technology causing disruption and inequality, allowing geeks to go on inventing the future unhindered.”

Converting existing welfare schemes (excluding health care) into a universal basic income would provide around $6,000 per person per year in America and $6,200 in Britain. A recent Freakonomics podcast also addresses the idea of a universal basic income and describes some pilot programs, which have produced limited but promising data about its impact.
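To get a feel for the scale of spending involved, here is a back-of-the-envelope check. The per-person amounts are the ones quoted above; the population figures are rough 2016 estimates added purely for illustration, not numbers from The Economist.

```python
# Rough scale implied by the per-person figures quoted above.
# Population numbers are approximate 2016 estimates (an assumption for
# illustration), not figures from The Economist's calculation.
us_population = 320e6   # ~320 million people
uk_population = 65e6    # ~65 million people

implied_us_pot = 6_000 * us_population   # total welfare spending being redirected
implied_uk_pot = 6_200 * uk_population

print(f"America: about ${implied_us_pot / 1e12:.1f} trillion per year")
print(f"Britain: about ${implied_uk_pot / 1e9:.0f} billion per year")
```

In other words, given the assumed population, the American figure corresponds to redirecting roughly $1.9 trillion of existing (non-health) welfare spending into equal per-person payments.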

Image from the print edition of The Economist.


March of the Machines

The Economist dedicates this week’s special report to Artificial Intelligence and the effects it will have on the economy.

[Image from the print edition of The Economist]

The first article in the report addresses a question that is two centuries old, known at the time as the “machinery question”. First raised during the industrial revolution, it asks whether machines will replace so many human jobs as to leave a large fraction of humanity unemployed. The impact on jobs is addressed in more detail in another piece of the report, “Automation and anxiety”, which includes a reference to a 2013 article by Frey and Osborne. That article reports that 47% of workers in America have jobs at high risk of automation, including many white-collar jobs.

[Image from the print edition of The Economist]

Countermeasures to these challenges are discussed in some detail, including the idea of a universal basic income, but in the end The Economist seems to side with the traditional opinion of economists: that technology will ultimately create more jobs than it destroys.

Other pieces in the report describe deep learning, the technology behind the most significant recent advances in AI, and the complex ethical questions raised by the possibility of advanced artificial intelligences.

Images in this article are from the print edition of The Economist.


Raising the floor: how to stop the machines from making almost everyone poor

Andy Stern is a former president of the Service Employees International Union (SEIU) and now teaches at Columbia University. In his new book, “Raising the Floor: How a Universal Basic Income Can Renew Our Economy and Rebuild the American Dream”, Stern makes the case for a universal basic income. He argues that economic growth is becoming increasingly decoupled from job creation, as more and more jobs are done by machines.

[Image: book cover of Raising the Floor]

Andy Stern believes that a guaranteed universal basic income for all citizens is a key change in social policy that is required to sustain demand in the economy.

In an interesting interview, he points out that advances in robotics, artificial intelligence and human-machine interfaces, made possible by Moore’s Law, will make more and more jobs amenable to being handled by machines. Furthermore, these advances tend to concentrate wealth in the small number of people and organizations able to conquer global markets, increasing inequality and depressing salaries for jobs in the lower tiers.

The argument concludes that the only way to stop this increasing inequality, caused mostly by technological change, is a radical overhaul of income redistribution mechanisms, one that cannot be accommodated within existing social security schemes. One such mechanism is, of course, a universal basic income.

Could a neuroscientist understand a microprocessor?

In a recent article, which has been widely commented on (e.g., on a WordPress blog and on Marginal Revolution), Eric Jonas and Konrad Kording, from UC Berkeley and Northwestern University, respectively, describe an interesting experiment.

They applied the same techniques neuroscientists use to analyze the brain to the study of a microprocessor. More specifically, they examined local field potentials, correlations between the activities of different regions, and the effects of single-transistor lesions, together with other techniques inspired by state-of-the-art brain science.

Microprocessors are complex systems, although they are much simpler than a human brain. A modern microprocessor can have several billion transistors, a number that still compares poorly with the human brain, which has close to 100 billion neurons and probably more than one quadrillion synapses. One might imagine that, by applying techniques similar to the ones used in neuroscience, it would be possible to gain some understanding of the roles of the different functional units, how they are interconnected, and even how they work.

[Image: chip layout of the EnCore Castle processor]

The authors conclude, not surprisingly, that no significant insight into the structure of the processor can be gained by applying neuroscience techniques. They did observe signals reminiscent of those obtained when applying NMR and other imaging techniques to live brains, and found significant correlations between these signals and the tasks the processor was performing, as in the following figure, extracted from the paper.

[Figure from the paper: observed signals in different parts of the chip]
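To make the flavour of such an analysis concrete, here is a minimal sketch (not the authors' code, and using random stand-in data rather than real chip recordings) of how one might correlate region-level activity traces with the task the processor was running:

```python
import numpy as np

# Minimal sketch of a correlation analysis in the spirit of the paper:
# correlate "LFP-like" activity traces from a few chip regions with an
# indicator of which behavior (game) was running at each time step.
# The data are random stand-ins, so correlations here will be near zero.
rng = np.random.default_rng(0)
n_samples, n_regions = 5_000, 4

activity = rng.normal(size=(n_samples, n_regions))   # recorded traces, one column per region
task_running = rng.integers(0, 2, size=n_samples)    # 1 while the game of interest is running

for region in range(n_regions):
    r = np.corrcoef(activity[:, region], task_running)[0, 1]
    print(f"region {region}: correlation with task = {r:+.3f}")
```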

However, the analysis of these signals did not provide any significant knowledge of the way the processor works, nor of the different functional units involved. It did, however, provide significant amounts of misleading information. For instance, the authors investigated how transistor damage affected three chip “behaviors”, namely the execution of the games Donkey Kong, Space Invaders and Pitfall. They were able to find transistors whose removal crashes one of the games but not the others. A neuroscientist studying this chip might thus conclude that a specific transistor is uniquely responsible for a specific game, as if there were a “Space Invaders” transistor and a “Pitfall” transistor.
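The pitfall is easy to reproduce in a toy model. The sketch below uses a made-up dependency table (hypothetical, not the processor's real netlist): it lesions one transistor at a time, re-runs each game, and flags transistors whose removal crashes exactly one of them. The flagged transistors merely sit on a game-specific path; none of them “implements” that game, which is precisely the misleading conclusion the authors warn about.

```python
# Toy model of the single-transistor lesion analysis described above.
# The dependency sets are invented for illustration only.
dependencies = {
    "Donkey Kong":    {1, 2, 3, 7},   # transistors each game needs in order to run
    "Space Invaders": {1, 2, 4, 8},
    "Pitfall":        {1, 2, 5, 9},
}

def runs(game, lesioned):
    """A game runs only if none of the transistors it depends on are lesioned."""
    return not (dependencies[game] & lesioned)

all_transistors = set().union(*dependencies.values())
for t in sorted(all_transistors):
    crashed = [g for g in dependencies if not runs(g, {t})]
    if len(crashed) == 1:
        # Transistors 3-9 each crash exactly one game; the shared
        # transistors 1 and 2 crash all three and are never flagged.
        print(f"lesioning transistor {t} uniquely crashes {crashed[0]}")
```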

This may be bad news for neuroscientists. Reverse engineering the brain by observing the telltale signals left by working neurons may remain forever an impossible task. Fortunately, that still leaves open the possibility that we may one day be able to fully reconstruct the behavior of a brain without ever fully understanding how it works.

First image: chip layout of the EnCore Castle processor, by Igor Bohem, available at Wikimedia Commons.

Second image: observed signals in different parts of the chip (from the paper).

Next challenge: a synthetic human?

A group of researchers is calling for the next big challenge in genetics: creating an entirely synthetic human genome. The Human Genome Project Write (HGP-write) aims to create a human genome from scratch, using the information available from the thousands of human genomes already sequenced.

Creating a DNA sequence that corresponds to a viable human being is quite an achievable challenge with existing technology. The large number of sequenced human genomes provides an excellent blueprint for what such a genome could be. Poorly understood or hard-to-sequence regions pose considerable challenges, but they should not be impossible to tackle. More difficult would be to create viable cell lines, or even viable embryos, out of the synthesised DNA.


As IEEE Spectrum reports, the subject has received considerable attention in the media, notably in the NY Times. The authors of the proposal have already said that they do not intend to create synthetic humans, but only to advance the state of the art in genetics research. Their objective is to better understand the human genome by building human (and other) genomes from scratch. However, one never knows where a road leads, only where it starts.


Bill Gates recommends the two books to read if you want to understand Artificial Intelligence

Also at the 2016 Code Conference, Bill Gates recommended the two books you need to read if you want to understand Artificial Intelligence. By coincidence (or not), these two books are exactly the ones I have previously covered in this blog: The Master Algorithm and Superintelligence.

Given Bill Gates’ strong recent interest in Artificial Intelligence, there is a fair chance that Windows 20 will have a natural language interface just like the one in the movie Her (caution, spoiler below).

If you haven’t seen the movie, maybe you should. It is about a guy who falls in love with the operating system of his computer.

So, there is no doubt that operating systems will keep evolving in order to offer more natural user interfaces. Will they ever reach the point where you can fall in love with them?

Are we living in a computer simulation?

The idea that the Earth, its inhabitants and the whole universe could be just a computer simulation is not new. Many have argued that intelligent agents simulated in a computer are not necessarily aware that they are part of a computer simulation. Nick Bostrom, author of Superintelligence and professor at Oxford, suggested in 2003 that members of an advanced civilization with enormous computing power might decide to run simulations of their ancestors.

Of course, no computer simulation created by mankind has ever been able to simulate realities as complex as our world, or beings as intelligent as humans. Current computer technology is simply not powerful enough to simulate worlds with that level of complexity. However, more advanced computer technologies could be used to simulate much more complex virtual realities, possibly as complex as our own.

A recent article in Scientific American about this topic includes opinions from many well-known scientists. Neil deGrasse Tyson, famous for the series Cosmos, put the odds at 50-50 that our entire existence is a program on someone else’s hard drive. Max Tegmark, a cosmologist at MIT, pointed out that “If I were a character in a computer game, I would also discover eventually that the rules seemed completely rigid and mathematical,” just as the rules of our universe do.

This week the topic came to the forefront at the Code Conference 2016, where Elon Musk said that “we’re probably characters in some advanced civilisation’s video game”. His argument is that “If you assume any rate of improvement at all, then the games will become indistinguishable from reality, even if that rate of advancement drops by a thousand from what it is now. Then you just say, okay, let’s imagine it’s 10,000 years in the future, which is nothing on the evolutionary scale.”

Therefore, if you assume this rate of improvement continues for a few more centuries, computer games will become indistinguishable from reality, and we may well already be inside one of them.
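Musk’s extrapolation is just compound growth. The toy calculation below compares an assumed Moore’s-law-like pace of roughly 40% improvement per year with that rate divided by a thousand, as in the quote; both rates are rough assumptions added here only to illustrate the arithmetic over a 10,000-year horizon.

```python
import math

def total_improvement_log10(annual_rate, years):
    """Return log10 of the cumulative improvement factor after compounding."""
    return years * math.log10(1.0 + annual_rate)

YEARS = 10_000
# Assumed rates, purely for illustration.
for label, rate in [("~40% per year", 0.40), ("a thousandth of that", 0.0004)]:
    exp10 = total_improvement_log10(rate, YEARS)
    print(f"{label}: about 10^{exp10:.1f}x improvement over {YEARS:,} years")
```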

[Image: the MareNostrum supercomputer]

To be fair, there are many things that could be interpreted as signs that we do, indeed, live inside a computer simulation. The strangeness of quantum mechanics, the vastness of the universe and its many inexplicable coincidences, and the unexplained start of the evolutionary process could all be easily explained by the “simulated world” hypothesis.

This topic has, of course, already been fully addressed by Zach Weiner in a brilliant SMBC strip.

Pictured: the MareNostrum supercomputer, in a photo by David Abián, available at Wikimedia Commons.