What if jobs are the problem, and not the solution?

In a fascinating article, worth reading in its entirety, James Livingston, author of No More Work: Why Full Employment Is a Bad Idea, asks a key question about the future of work: Why do we believe that every productive adult human being should have work, and get paid for it?

As he puts it: “For centuries – since, say, 1650 – we’ve believed that it builds character (punctuality, initiative, honesty, self-discipline, and so forth). We’ve also believed that the market in labour, where we go to find work, has been relatively efficient in allocating opportunities and incomes. And we’ve believed that, even if it sucks, a job gives meaning, purpose and structure to our everyday lives – at any rate, we’re pretty sure that it gets us out of bed, pays the bills, makes us feel responsible, and keeps us away from daytime TV. These beliefs are no longer plausible. In fact, they’ve become ridiculous, because there’s not enough work to go around.”


His point is that, in the future, there will simply not be enough interesting, well-paid jobs to create full employment. Of course, I am not forgetting the favorite argument of traditional economists: that technological revolutions have always created more valuable jobs than the ones they destroyed.

James Livingston has this to say about that argument: “But, wait, isn’t our present dilemma just a passing phase of the business cycle? What about the job market of the future? Haven’t the doomsayers, those damn Malthusians, always been proved wrong by rising productivity, new fields of enterprise, new economic opportunities? Well, yeah – until now, these times. The measurable trends of the past half-century, and the plausible projections for the next half-century, are just too empirically grounded to dismiss as dismal science or ideological hokum.”

It is time to face the truth: this time is different. The fourth industrial revolution will not create enough jobs to keep everyone employed, at least not the full-time, well-paid jobs that we have come to associate with economically advanced societies. The fraction of GDP that is paid out in salaries has shown an unmistakable downward tendency since the beginning of the 21st century. Technology will only exacerbate this tendency, as more and more well-paid jobs are lost to machines.


Livingston makes the point that we can, indeed, afford a minimum guaranteed income for everyone (let’s just call it “entitlements”). In his words: “But are these transfer payments and ‘entitlements’ affordable, in either economic or moral terms? By continuing and enlarging them, do we subsidise sloth, or do we enrich a debate on the rudiments of the good life? … I know what you’re thinking – we can’t afford this! But yeah, we can, very easily. We raise the arbitrary lid on the Social Security contribution, which now stands at $127,200, and we raise taxes on corporate income, reversing the Reagan Revolution. These two steps solve a fake fiscal problem and create an economic surplus where we now can measure a moral deficit.”

Whether you want to call it a minimum guaranteed income or just an overhaul of the social security system, it is time to face this truth, and to think of the mechanisms that should be put in place to address the social challenges caused by technology.

Face it, jobs are the problem, not the solution!


IBM TrueNorth neuromorphic chip does deep learning

In a recent article, published in the Proceedings of the National Academy of Sciences, IBM researchers demonstrated that the TrueNorth chip, designed to perform neuromorphic computing, can be trained using deep learning algorithms.


The TrueNorth chip was designed to efficiently simulate spiking neural networks, a model of neurons that closely mimics the way biological neurons work. Spiking neural networks are based on the integrate-and-fire model, inspired by the fact that actual neurons integrate the incoming ion currents caused by synaptic firing and generate an output spike only when sufficient synaptic excitation has accumulated. Spiking neural network models tend to be less efficient than more abstract models of neurons, which simply compute a real-valued output directly from the real-valued inputs multiplied by the input weights.

As IEEE Spectrum explains: “Instead of firing every cycle, the neurons in spiking neural networks must gradually build up their potential before they fire. To achieve precision on deep-learning tasks, spiking neural networks typically have to go through multiple cycles to see how the results average out. That effectively slows down the overall computation on tasks such as image recognition or language processing.”
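The integrate-and-fire behavior described above can be illustrated with a minimal sketch of a leaky integrate-and-fire neuron. This is not TrueNorth’s actual neuron model (which is considerably more elaborate); the function name, the leak factor, and the threshold value are all hypothetical, chosen only to show the accumulate-then-spike dynamics:

```python
def simulate_lif(input_currents, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire sketch: accumulate input each time step,
    emit a spike (1) and reset when the potential crosses the threshold."""
    potential = 0.0
    spikes = []
    for current in input_currents:
        potential = leak * potential + current  # leaky integration
        if potential >= threshold:
            spikes.append(1)   # fire
            potential = 0.0    # reset after the spike
        else:
            spikes.append(0)   # keep accumulating
    return spikes
```

Feeding a constant sub-threshold current, e.g. `simulate_lif([0.4] * 5)`, shows why multiple cycles are needed before any output appears: the neuron stays silent for the first two steps while its potential builds up, then fires on the third.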

In the article just published, IBM researchers have adapted deep learning algorithms to run on their TrueNorth architecture, and have achieved comparable precision, with lower energy dissipation. This research raises the prospect that energy-efficient neuromorphic chips may be competitive in deep learning tasks.

Image from Wikimedia Commons

Algorithms to live by: the computer science of human decisions

This delightful book, by Brian Christian and Tom Griffiths, provides a very interesting and orthogonal view on the role of computer science in our everyday lives.

The book covers a number of algorithms, ranging from the best way to choose a bride (pass over the first 37% of the available candidates, then pick the first one who is better than all of them) to the best way to manage your email (just drop messages once you are over capacity; don’t queue them for future processing, which will never happen).
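The 37% rule can be checked with a small Monte Carlo simulation. This is only a sketch (candidate “scores” are random numbers, and the function names are my own); the theoretical success probability of the rule is about 37%, and the simulation should land near that:

```python
import random

def secretary(scores, lookahead=0.37):
    """Skip the first 37% of candidates, then take the first one better
    than everything seen so far (or the last candidate if none is)."""
    cutoff = int(len(scores) * lookahead)
    best_seen = max(scores[:cutoff]) if cutoff else float("-inf")
    for score in scores[cutoff:]:
        if score > best_seen:
            return score
    return scores[-1]  # ran out of candidates: stuck with the last one

def success_rate(n=100, trials=20000, seed=0):
    """Fraction of random trials in which the rule picks the very best."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        scores = [rng.random() for _ in range(n)]
        if secretary(scores) == max(scores):
            wins += 1
    return wins / trials
```

Note the rule can still fail: it commits to the first candidate who beats the opening sample, who need not be the global best.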


The book makes for a very enjoyable and engaging read, and should be required material for any computer science student, professor, or researcher.

The chapters include advice on:

  • when to stop looking for the best person for the job (e.g., your bride);
  • how to manage the explore vs. exploit dilemma, as in picking the best restaurant for dinner;
  • how to sort things in your closet;
  • how to make sure the things you need frequently are nearby (caching);
  • how to choose the things you should do first;
  • how to predict the future (use Bayes’ rule);
  • how to avoid overfitting and learn from the past;
  • how to tackle difficult problems by looking at easier versions of them (relaxations);
  • when rolling a die is the best way to make a decision;
  • how to handle long queues of requests that are above and beyond your capacity;
  • how to avoid the tragedy of the commons that so commonly gets all of us into trouble, as in the prisoner’s dilemma.
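The caching idea from the list above, keeping frequently used things nearby and evicting the least recently used, is built into Python’s standard library as `functools.lru_cache`. A minimal sketch (the `lookup` function and the call counter are my own illustration, not from the book):

```python
from functools import lru_cache

calls = 0  # counts how often the expensive "slow path" actually runs

@lru_cache(maxsize=2)  # room for only the two most recently used results
def lookup(key):
    global calls
    calls += 1         # a cache miss: do the real (slow) computation
    return key * key

lookup(2)   # miss: computed
lookup(3)   # miss: computed
lookup(2)   # hit: served from cache, and 2 becomes most recently used
lookup(4)   # miss: cache is full, so 3 (least recently used) is evicted
lookup(3)   # miss again: 3 must be recomputed
```

After this sequence the slow path has run four times instead of five; with a realistic access pattern, where a few items dominate, the savings are far larger.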

Definitely, two thumbs up!

Writing a Human Genome from scratch: the Genome Project-write

The Genome Project-write has released a white paper, with a clear proposal of the steps and timeline that will be required to design and assemble a human genome from scratch.


It is a large-scale project, involving a significant number of institutions and many well-known researchers, including George Church and Jef Boeke. According to the project web page:

“Writing DNA is the future of science and medicine, and holds the promise of pulling us forward into a better future. While reading DNA code has continued to advance, our capability to write DNA code remains limited, which in turn restricts our ability to understand and manipulate biological systems. GP-write will enable scientists to move beyond observation to action, and facilitate the use of biological engineering to address many of the global problems facing humanity.”

The idea is to use existing technologies for DNA synthesis to accelerate research across a wide spectrum of the life sciences. The synthesis of human genomes may make it possible to understand the phenotypic effects of specific genome sequences, and will contribute to improving the quality of synthetic biology tools.

Special attention will be paid to the complex ethical, legal and social issues that are a consequence of the project.

The project has received wide coverage, in a number of news sources, including popular science sites such as Statnews and the journal Science.

Andrew McAfee at the Lisbon Web Summit: what lies beyond the second machine age?

Andrew McAfee, Research Scientist at MIT and author of a number of works on the future of technology, spoke at the 2016 Lisbon Web Summit.

In this interesting talk he referred to the fact that, despite all the advances in science over the last millennia, neither population size nor quality of life improved significantly until the advent of the industrial revolution. The radical takeoff in quality of life and in population size, visible in one of his slides, only took place when humans managed to use steam and electrical power to replace human labor.

[Slide: population size and quality of life take off only after the industrial revolution]

McAfee, co-author (with Erik Brynjolfsson) of The Second Machine Age, pointed out that we are on the brink of a second machine revolution of comparable consequences.


Artificial intelligence and new information technology applications are already creating a new society, with different values and requirements. New technologies are already decreasing the consumption of raw materials in the US, and are helping to create a more efficient and compassionate society.

However, these changes will not all be easy to integrate into our current model of society. For instance, global warming remains a serious problem for the planet, and new technologies, such as artificial intelligence and machine learning, will lead to systems that may replace humans in the majority of cognitive tasks. This may lead to a society where full employment is no longer the rule, or even desirable, creating the need for alternative income redistribution strategies, such as a universal basic income.

YouTube features a longer talk by Andrew McAfee, which covers many of the same issues he presented at the Web Summit.


The White House report on the future of Artificial Intelligence

The US administration has released a comprehensive report on the future of Artificial Intelligence (AI). According to the White House blog, the report surveys the current state of AI, its existing and potential applications, and the questions that progress in AI raises for society and public policy.

The report includes a large number of recommendations on the need for society in general, and federal agencies in particular, to pay attention to the impact of AI on the economy, security, safety, and quality of life. These recommendations include, among others:

  • investigating further the consequences of AI and automation in the job market;
  • including ethics, security, safety, privacy and related topics in the university curricula of AI, machine learning and computer science;
  • closely monitoring and reporting the advances in this area, by the subcommittee on AI and machine learning;
  • prioritizing basic and long-term AI research by the Federal Government.

The report provides a rather comprehensive analysis of the impact of AI in fields as diverse as job growth, national security, transportation safety, and social justice, and may contribute to raising awareness in society about the impact of this important technology.