The wealth of humans: work and its absence in the twenty-first century

The Wealth of Humans, by Ryan Avent, a senior editor at The Economist, addresses the economic and social challenges that the rapid development of digital technologies imposes on societies. Although the book analyzes the mechanisms and technologies (intelligent systems, smart robots) that may lead to massive unemployment, its focus is on the economic and social consequences of that shift.

Avent's main point is that market mechanisms can be relied upon to create growth and wealth for society, and to improve the average human condition, but cannot be relied upon to ensure adequate redistribution of the wealth they generate. Left to themselves, markets tend to concentrate wealth. This happened during the Industrial Revolution, but society adapted (unions, welfare, education) to put adequate redistribution mechanisms in place.

To Avent, this tendency toward increased income asymmetry between the top earners and the rest, already so visible, will only be made worse by the inevitable glut of labor created by digital technologies and artificial intelligence.

There are many possible redistribution mechanisms, from universal basic income to minimum wage requirements, but, as the author points out, none is guaranteed to work well in a society where a large majority of people may be unable to find work. The largest and most important asymmetry is probably the one between developed and underdeveloped countries. Although it was somewhat reduced by the recent economic development of the BRIC countries, Avent believes that was a one-time event that will not recur.

Avent points out that the strength of developed economies is not a direct consequence of the factors most commonly thought to be decisive: more capital, adequate infrastructure, and better education. These factors do play a role, but what makes the decisive difference is “social capital”: the unwritten set of rules shared by the members of a society, a country, or a company that makes them more effective at creating value for themselves and for others. Social capital cannot easily be copied, sold, or exported.

This social capital (which, interestingly, closely matches the idea of shared beliefs Yuval Harari describes in Sapiens) can be assimilated by immigrants or new hires, who can learn how to contribute to the creation of wealth and benefit from it. However, as countries and societies become averse to receiving immigrants, and companies reduce their workforces, social capital becomes more and more concentrated.

In the end, Avent concludes that no public policy and no known economic theory is guaranteed to fix the problems of inequality, mass unemployment, and lack of redistribution. It comes down to society as a whole, i.e., to each one of us, to choose to be generous and altruistic, in order to make sure that the wealth created by the invisible hand of the market benefits all of mankind.

A must-read if you care about the effects of asymmetries in income distribution on societies.


IEEE Spectrum special issue on whether we can duplicate a brain

Maybe you have read The Digital Mind, or The Singularity Is Near by Ray Kurzweil, or other similar books, thought it all a bit far-fetched, and wondered whether the authors are bonkers or just dreamers.

Wonder no more. The latest issue of IEEE Spectrum, the flagship publication of the Institute of Electrical and Electronics Engineers, is dedicated to the interesting and timely question of whether we can copy the brain and use it as a blueprint for intelligent systems. The issue, which you can access here, includes many interesting articles, definitely worth reading.

I cannot describe here, even briefly, all the interesting articles in this special issue, but it is worth reading the introduction, on the prospect of near-future intelligent personal assistants, or the piece by Jennifer Hasler on how we could build an artificial brain right now.

Other articles address the question of how computationally expensive it is to simulate a brain at the right level of abstraction. Karlheinz Meier’s article on this topic explains very clearly why present-day simulations are so slow:

“The big gap between the brain and today’s computers is perhaps best underscored by looking at large-scale simulations of the brain. There have been several such efforts over the years, but they have all been severely limited by two factors: energy and simulation time. As an example, consider a simulation that Markus Diesmann and his colleagues conducted several years ago using nearly 83,000 processors on the K supercomputer in Japan. Simulating 1.73 billion neurons consumed 10 billion times as much energy as an equivalent size portion of the brain, even though it used very simplified models and did not perform any learning. And these simulations generally ran at less than a thousandth of the speed of biological real time.

Why so slow? The reason is that simulating the brain on a conventional computer requires billions of differential equations coupled together to describe the dynamics of cells and networks: analog processes like the movement of charges across a cell membrane. Computers that use Boolean logic—which trades energy for precision—and that separate memory and computing, appear to be very inefficient at truly emulating a brain.”
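To make the cost concrete, here is a minimal sketch (my own, not from Meier’s article) that integrates a toy network of leaky integrate-and-fire neurons with forward Euler; the network size, parameters, and input are purely illustrative:

```python
import numpy as np

# Minimal sketch (not from the article): forward-Euler integration of a toy
# network of leaky integrate-and-fire neurons, to show why brain-scale
# simulation on conventional hardware is so costly. All values are illustrative.

N = 1000              # neurons (the K-computer run above simulated 1.73 billion)
dt = 0.1e-3           # 0.1 ms time step
T = 0.1               # simulate 0.1 s of biological time
tau = 20e-3           # membrane time constant (s)
v_rest, v_thresh, v_reset = -65e-3, -50e-3, -65e-3   # volts

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.5e-3, size=(N, N))   # synaptic weights (volts per spike)
v = np.full(N, v_rest)                           # membrane potentials
drive = 20e-3 * rng.random(N)                    # constant external drive (volts)

for _ in range(int(T / dt)):
    spiking = v >= v_thresh                      # neurons that fire this step
    v[spiking] = v_reset
    # one coupled ODE per neuron, dv/dt = (v_rest + drive - v) / tau,
    # advanced with forward Euler, plus instantaneous synaptic kicks:
    v += dt * (v_rest + drive - v) / tau + weights @ spiking

# Each step costs ~N**2 multiply-adds for the synaptic term alone; scaling N to
# billions of neurons (each with ~10,000 synapses) is what pushes such
# simulations far below biological real time, as Meier describes.
```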

Another interesting article, by Eliza Strickland, describes some of the efforts under way to reverse engineer animal intelligence in order to build true artificial intelligence, including a section about the work of David Cox, whose team trains rats to perform specific tasks and then analyses their brains by slicing and imaging them:

“Then the brain nugget comes back to the Harvard lab of Jeff Lichtman, a professor of molecular and cellular biology and a leading expert on the brain’s connectome. ­Lichtman’s team takes that 1 mm3 of brain and uses the machine that resembles a deli slicer to carve 33,000 slices, each only 30 nanometers thick. These gossamer sheets are automatically collected on strips of tape and arranged on silicon wafers. Next the researchers deploy one of the world’s fastest scanning electron microscopes, which slings 61 beams of electrons at each brain sample and measures how the electrons scatter. The refrigerator-size machine runs around the clock, producing images of each slice with 4-nm resolution.”

Other approaches are even more ambitious. George Church, a well-known researcher in biology and bioinformatics, uses sequencing technologies to efficiently obtain large-scale, detailed information about brain structure:

“Church’s method isn’t affected by the length of axons or the size of the brain chunk under investigation. He uses genetically engineered mice and a technique called DNA bar coding, which tags each neuron with a unique genetic identifier that can be read out from the fringy tips of its dendrites to the terminus of its long axon. “It doesn’t matter if you have some gargantuan long axon,” he says. “With bar coding you find the two ends, and it doesn’t matter how much confusion there is along the way.” His team uses slices of brain tissue that are thicker than those used by Cox’s team—20 μm instead of 30 nm—because they don’t have to worry about losing the path of an axon from one slice to the next. DNA sequencing machines record all the bar codes present in a given slice of brain tissue, and then a program sorts through the genetic information to make a map showing which neurons connect to one another.”
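As a very rough illustration of that last step, here is a toy sketch of my own (not Church’s actual pipeline): it pairs axon barcodes with dendrite barcodes that co-occur in the same slice to build a connectivity map. The read format and the inference rule are assumptions made for the example:

```python
from collections import defaultdict

# Toy sketch (my own simplification, not Church's pipeline): infer a
# connectivity map from sequencing reads. Each read is assumed to report the
# barcode seen, the neuron compartment it came from ("axon" or "dendrite"),
# and the slice in which it was found.

reads = [
    # (barcode, compartment, slice_id) -- illustrative data only
    ("AAGT", "axon",     12),
    ("CCTG", "dendrite", 12),
    ("AAGT", "axon",     13),
    ("GGAC", "dendrite", 13),
    ("CCTG", "axon",     40),
    ("GGAC", "dendrite", 40),
]

# Group reads by slice, then pair axon barcodes with dendrite barcodes that
# co-occur in the same slice: the toy inference rule for "A connects to B".
axons_by_slice = defaultdict(set)
dendrites_by_slice = defaultdict(set)
for barcode, compartment, slice_id in reads:
    if compartment == "axon":
        axons_by_slice[slice_id].add(barcode)
    else:
        dendrites_by_slice[slice_id].add(barcode)

connections = set()
for slice_id, axon_codes in axons_by_slice.items():
    for pre in axon_codes:
        for post in dendrites_by_slice.get(slice_id, ()):
            if pre != post:
                connections.add((pre, post))

print(sorted(connections))
# [('AAGT', 'CCTG'), ('AAGT', 'GGAC'), ('CCTG', 'GGAC')]
```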

There is also a piece on AI and consciousness, in which Christof Koch and Giulio Tononi describe their (more than dubious, in my humble opinion) application of Integrated Information Theory to the question of whether we can quantify machine consciousness.
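For readers curious about what “quantifying” integration even means, here is a deliberately crude toy of my own (it is not Tononi’s Φ): it uses the mutual information between the two halves of a two-unit system as a stand-in for how much the whole carries beyond its parts:

```python
import numpy as np

# Toy sketch (a gross simplification, NOT the actual Phi of Integrated
# Information Theory): IIT asks how much information a system generates as a
# whole over and above its parts. As a crude stand-in, compute the mutual
# information between the two units of a tiny system: zero when they are
# independent, positive when the whole carries more than the parts.

def mutual_information(joint):
    """Mutual information (bits) of a 2-D joint probability table."""
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

# Two binary units that always agree: maximally "integrated" in this toy sense.
coupled = np.array([[0.5, 0.0],
                    [0.0, 0.5]])

# Two independent fair coins: no integration at all.
independent = np.full((2, 2), 0.25)

print(mutual_information(coupled))      # 1.0 bit
print(mutual_information(independent))  # 0.0 bits
```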

The issue also includes interesting quotes and predictions by famous visionaries, including Ray Kurzweil, Carver Mead, Nick Bostrom, and Rodney Brooks.

Images from the special issue of IEEE Spectrum.