The Digital Mind: How Science is Redefining Humanity

Following its release in the US, The Digital Mind, published by MIT Press, is now available in Europe, at an Amazon store near you (and possibly in other bookstores). The book covers the evolution of technology, leading towards the expected emergence of digital minds.

Here is a short rundown of the book, kindly provided by yours truly, the author.

New technologies have been introduced into human lives at an ever-increasing rate since the first significant advances took place with the cognitive revolution, some 70,000 years ago. Although electronic computers are recent and have been around for only a few decades, they represent just the latest way to process information and create order out of chaos. Before computers, the job of processing information was done by living organisms, which are nothing more than complex information processing devices created by billions of years of evolution.

Computers execute algorithms, sequences of small steps that, in the end, perform some desired computation, be it simple or complex. Algorithms are everywhere, and they have become an integral part of our lives. Evolution is, in itself, a complex and long-running algorithm that created all species on Earth. The most advanced of these species, Homo sapiens, was endowed with a brain that is the most complex information processing device ever devised. Brains enable humans to process information in a way unparalleled by any other species, living or extinct, or by any machine. They provide humans with intelligence, consciousness and, some believe, even with a soul, a characteristic that makes humans different from all other animals and from any machine in existence.
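To make "a sequence of small steps" concrete, here is a minimal sketch, in Python, of one of the oldest algorithms known, Euclid's method for computing the greatest common divisor of two numbers (the function name and the sample numbers below are merely illustrative choices of mine):

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace the pair (a, b)
    by (b, a mod b) until the remainder is zero."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # prints 21
```

Each pass through the loop is one small, mechanical step; the desired computation emerges from repeating it until the stopping condition is met.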

But brains also enabled humans to develop science and technology to a point where it is possible to design computers with a power comparable to that of the human brain. Artificial intelligence will one day make it possible to create intelligent machines, and computational biology will one day enable us to model, simulate and understand biological systems, and even complete brains, with unprecedented levels of detail. From these efforts, new minds will eventually emerge, minds that will emanate from the execution of programs running in powerful computers. These digital minds may one day rival our own, become our partners and replace humans in many tasks. They may usher in a technological singularity, a revolution in human society unlike any that has happened before. They may make humans obsolete, and even a threatened species, or they may make us super-humans or demi-gods.

How will we create these digital minds? How will they change our daily lives? Will we recognize them as equals or will they forever be our slaves? Will we ever be able to simulate truly human-like minds in computers? Will humans transcend the frontiers of biology and become immortal? Will humans remain, forever, the only known intelligence in the universe?

 

Will the fourth industrial revolution destroy or create jobs?

The impact of the fourth industrial revolution on jobs has been much discussed.

On one side, there are the traditional economists, who argue that technological advances have always created more and better jobs than the ones they destroyed. On the other side, there are those who believe that, with the arrival of artificial intelligence and robotics, there will simply not be enough jobs left that cannot be done by machines.

So, in this post, I try to present a balanced analysis of the subject, as deeply as the available space and time allow.

Many studies have addressed the question of which jobs are most likely to be destroyed by automation. This study, by McKinsey, provides a very comprehensive analysis.


Recently, The Economist also published a fairly balanced analysis of the topic, already posted in this blog. In this analysis, The Economist makes a reference to a number of studies on the jobs that are at high risk but, in the end, it sides with the opinion that enough jobs will be created to replace the ones technology will destroy.

A number of books and articles have been written on the topic, including “Raising the Floor”, “The Wealth of Humans: Work, Power, and Status in the Twenty-first Century”, “The Second Machine Age”, and “No More Work”, some of them already reviewed in this blog.

In most cases, the authors of these books advocate the need for significant changes in the way society is organized, and in the types of social contracts that need to be drawn. Guaranteeing everyone a universal basic income is a proposal that has become very popular, as a way to address the question of how humanity will live in a time when there are far fewer jobs to go around.

Further evidence that some deep change is in the cards is provided by data showing that, since the beginning of the 21st century, income has been shifting away from jobs (and workers) towards capital (and large companies).


On the other side of the debate, there are many people who believe that humans will always be able to adapt and add value to society, regardless of what machines can or cannot do. David Autor, in his TED talk, makes a compelling point: many times before it has been argued that “this time is different”, and it never was.

Other articles, including this one in the Washington Post, argue that the fears are overblown. The robots will not be coming in large numbers to replace humans. Not in the near future, anyway.

Other economists, such as Richard Freeman, in an article published in Harvard Magazine, agree that the fears are unwarranted: “We should worry less about the potential displacement of human labor by robots than about how to share fairly across society the prosperity that the robots produce.”

His point is that the problem is not so much the lack of jobs as the depression of wages. Jobs may still exist, but they will not be well paid, and the existing imbalances in income distribution will only become worse.

Maybe, in the end, this opinion represents a balanced synthesis of the two competing views: jobs will still exist, for anyone who wants to take them, but there will be competition (from robots and intelligent agents) for them, pushing wages down.

European Parliament committee approves proposal to give robots legal status and responsibilities

The committee on legal affairs of the European Parliament has drafted and approved a report that addresses many of the legal, social and financial consequences of the development of robots and artificial intelligence (AI).

The draft report addresses a large number of issues related to the advances of robotics, AI and related technologies, and proposes a number of European regulations to govern the use of robots and other advanced AI agents.

The report was approved with a 17-2 vote (and two abstentions) by the parliament’s legal affairs committee.


Among many other issues addressed, the report considers:

  • The question of legal status: “whereas, ultimately, robots’ autonomy raises the question of their nature in the light of the existing legal categories – of whether they should be regarded as natural persons, legal persons, animals or objects – or whether a new category should be created”, advancing with the proposal of “creating a specific legal status for robots, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons with specific rights and obligations…”
  • The impact of robotics and AI on employment and social security, concluding that “consideration should be given to the possible need to introduce corporate reporting requirements on the extent and proportion of the contribution of robotics and AI to the economic results of a company for the purpose of taxation and social security contributions; takes the view that in the light of the possible effects on the labour market of robotics and AI a general basic income should be seriously considered, and invites all Member States to do so;”
  • The need for a clear and unambiguous registration system for robots, recommending that “a system of registration of advanced robots should be introduced, and calls on the Commission to establish criteria for the classification of robots with a view to identifying the robots that would need to be registered;”

 

How to create a mind

Ray Kurzweil’s latest book, How to Create a Mind, published in 2012, is an interesting read and shows some welcome change in his views of science and technology. Unlike some of his previous (and influential) books, including The Singularity is Near, The Age of Spiritual Machines and The Age of Intelligent Machines, the main point of this book is not that exponential technological development will bring about a technological singularity in a few decades.


True, that theme is still present, but it takes second place to the main theme of the book, a concrete (although incomplete) proposal to build intelligent systems inspired by the architecture of the human neocortex.

Kurzweil’s main point in this book is to present a model of the human neocortex, which he calls the Pattern Recognition Theory of Mind (PRTM). In this theory, the neocortex is simply a very powerful pattern recognition system, built out of about 300 million (his number, not mine) similar pattern recognizers. The input to each of these recognizers can come from external stimuli, through the senses, from the older (evolutionarily speaking) parts of the brain, or from the output of other pattern recognizers in the neocortex. Each recognizer is relatively simple and can only recognize a simple pattern (say, the word APPLE) but, through complex interconnections with other recognizers above and below it, it makes possible all sorts of thinking and abstract reasoning.

Each pattern consists, in essence, of a short sequence of symbols, and is connected, through bundles of axons, to the actual place in the cortex where these symbols are activated by another pattern recognizer. In most cases, the memories these recognizers represent must be accessed in a specific order. He gives the example that very few people can recite the alphabet backwards, or even their social security number, which he takes as evidence of the sequential way these pattern recognizers operate.
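Kurzweil describes the model only in prose, but a very loose toy sketch in Python may help make the idea of stacked sequence recognizers concrete. Everything below (class names, symbols, the two-level example) is my own illustrative invention, not anything taken from the book:

```python
# Very loose toy sketch of the PRTM idea (my own code, not Kurzweil's):
# each recognizer fires when its short sequence of symbols appears, in
# order, in its input stream, and its name then becomes a symbol that
# recognizers one level up can match in turn.

class PatternRecognizer:
    def __init__(self, name, sequence):
        self.name = name          # symbol emitted when the pattern is seen
        self.sequence = sequence  # the ordered sub-symbols it recognizes

    def fires_on(self, stream):
        """True if this recognizer's sequence occurs, in order, in the stream."""
        position = 0
        for symbol in stream:
            if symbol == self.sequence[position]:
                position += 1
                if position == len(self.sequence):
                    return True
        return False

# Level 1: recognizers watching the raw character stream.
word_an = PatternRecognizer("AN", list("AN"))
word_apple = PatternRecognizer("APPLE", list("APPLE"))

# Level 2: a recognizer watching the outputs of level 1.
phrase = PatternRecognizer("AN APPLE", ["AN", "APPLE"])

characters = list("AN APPLE A DAY")
level1_output = [r.name for r in (word_an, word_apple) if r.fires_on(characters)]
print(level1_output)                   # ['AN', 'APPLE']
print(phrase.fires_on(level1_output))  # True: the higher-level pattern fires
```

Real recognizers in the PRTM also fire probabilistically, send expectations back down the hierarchy, and learn their own sequences; none of that is captured in this sketch.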

The key point of the book is that the actual algorithms used to build and structure a neocortex may soon become well understood, and may be used to build intelligent machines endowed with true, strong Artificial Intelligence. How to Create a Mind falls somewhat short of the promise in its subtitle, The Secret of Human Thought Revealed, but it still makes for some interesting reading.

Artificial Intelligence developments: the year in review

TechCrunch, a popular site dedicated to technology news, has published a list of the top Artificial Intelligence news of 2016.

2016 seems indeed to have been the year Artificial Intelligence (AI) left the confinement of university labs to come into public view.


Several of the news items selected by TechCrunch were also covered in this blog.

In March, AlphaGo, a Go-playing program developed by Google’s DeepMind, defeated 18-time world champion Lee Sedol (reference in the TechCrunch review).

Digital Art, where deep learning algorithms learn to paint in the style of a particular artist, was also the topic of one post (reference in the TechCrunch review).

In May, Digital Minds posted Moore’s law is dead, long live Moore’s law, describing how Google’s new chip can be used to run deep learning algorithms using Google’s TensorFlow (related article in the TechCrunch review).

TechCrunch has identified a number of other relevant developments that make for interesting reading, including the Facebook-Amazon-Google-IBM-Microsoft mega partnership on AI, the Facebook strategy on AI, and the news about the language invented by Google’s translation tool.

Will the AI wave gain momentum in 2017, as predicted by this article? I think the chances are good, but only the future will tell.

IBM TrueNorth neuromorphic chip does deep learning

In a recent article, published in the Proceedings of the National Academy of Sciences, IBM researchers demonstrated that the TrueNorth chip, designed to perform neuromorphic computing, can be trained using deep learning algorithms.


The TrueNorth chip was designed to efficiently simulate spiking neural networks, a model of neurons that closely mimics the way biological neurons work. Spiking neural networks are based on the integrate-and-fire model, inspired by the fact that actual neurons integrate the incoming ion currents caused by synaptic firing and generate an output spike only when sufficient synaptic excitation has been accumulated. Spiking neural network models tend to be less efficient than more abstract models of neurons, which simply compute a real-valued output directly from the real-valued inputs multiplied by the input weights.

As IEEE Spectrum explains: “Instead of firing every cycle, the neurons in spiking neural networks must gradually build up their potential before they fire. To achieve precision on deep-learning tasks, spiking neural networks typically have to go through multiple cycles to see how the results average out. That effectively slows down the overall computation on tasks such as image recognition or language processing.”
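The integrate-and-fire idea itself is simple enough to sketch in a few lines of code. Below is a minimal Python simulation of a single leaky integrate-and-fire neuron; the parameters and the random input are invented purely for illustration, and TrueNorth’s actual neuron model is considerably more elaborate than this:

```python
import numpy as np

# Minimal leaky integrate-and-fire neuron (illustrative parameters only).
dt = 1.0        # time step (ms)
tau = 20.0      # membrane time constant (ms)
v_rest = 0.0    # resting potential
v_thresh = 1.0  # firing threshold
v = v_rest
spikes = []

rng = np.random.default_rng(0)
inputs = rng.uniform(0.0, 0.12, size=200)  # random synaptic input current

for t, i_in in enumerate(inputs):
    # Leak toward the resting potential and integrate the incoming current.
    v += dt * (-(v - v_rest) / tau + i_in)
    if v >= v_thresh:       # sufficient excitation accumulated...
        spikes.append(t)    # ...emit a spike
        v = v_rest          # ...and reset the membrane potential

print(len(spikes), "spikes at time steps", spikes)
```

The neuron only produces an output every few dozen time steps, which is exactly the behavior the IEEE Spectrum quote refers to: information is carried by when spikes occur, not by a value computed at every cycle.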

In the article just published, IBM researchers have adapted deep learning algorithms to run on their TrueNorth architecture, and have achieved comparable precision, with lower energy dissipation. This research raises the prospect that energy-efficient neuromorphic chips may be competitive in deep learning tasks.


Algorithms to live by: the computer science of human decisions

This delightful book, by Brian Christian and Tom Griffiths, provides a very interesting and orthogonal view on the role of computer science in our everyday lives.

The book covers a number of algorithms, ranging from the best way to choose a bride (look at the first 37% of the available candidates, then pick the first one after that who is better than all of them) to the best way to manage your email (just drop messages once you are overloaded; don’t queue them for future processing, which will never happen).
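The 37% figure comes from the classic optimal stopping (or “secretary”) problem, and it is easy to check by simulation. The short Python sketch below is my own toy code, not the authors’; it estimates how often the look-then-leap rule ends up with the single best candidate, and the answer hovers around 37%, that is, 1/e:

```python
import random

def secretary_trial(n=100, look_fraction=0.37):
    """One trial of the optimal stopping rule: observe the first
    `look_fraction` of candidates without committing, then pick the
    first later candidate who beats all of them."""
    candidates = list(range(n))
    random.shuffle(candidates)          # higher number = better candidate
    cutoff = int(n * look_fraction)
    best_seen = max(candidates[:cutoff])
    for c in candidates[cutoff:]:
        if c > best_seen:
            return c == n - 1           # did we pick the overall best?
    return candidates[-1] == n - 1      # otherwise we are stuck with the last one

trials = 100_000
wins = sum(secretary_trial() for _ in range(trials))
print(f"picked the best candidate in {wins / trials:.1%} of trials")  # ~37%
```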


The book makes for a very enjoyable and engaging read, and should be required material for any computer science student, professor, or researcher.

The chapters include advice on when to stop looking for the best person for the job (e.g., your bride); how to manage the explore vs. exploit dilemma, as in picking the best restaurant for dinner; how to sort things in your closet; how to make sure the things you need frequently are nearby (caching); how to choose the things you should do first; how to predict the future (use Bayes’ rule); how to avoid overfitting and learn from the past; how to tackle difficult problems by looking at easier versions of them (relaxations); when rolling a die is the best way to make a decision; how to handle long queues of requests that are above and beyond your capacity; and how to avoid the tragedy of the commons that so commonly gets all of us into trouble, as in the prisoner’s dilemma.
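To give one concrete taste of the book’s “computer science for everyday life” angle, the caching advice (keep the most recently used things closest at hand) corresponds to the Least Recently Used eviction policy. Here is a minimal sketch in Python, with a made-up closet example of my own:

```python
from collections import OrderedDict

class LRUCache:
    """Least Recently Used cache: when space runs out, evict the item
    that has gone unused the longest (the book's advice for closets too)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def use(self, key, value=None):
        if key in self.items:
            self.items.move_to_end(key)         # mark as most recently used
        else:
            if len(self.items) >= self.capacity:
                self.items.popitem(last=False)  # evict least recently used
            self.items[key] = value

closet = LRUCache(capacity=3)
for item in ["coat", "umbrella", "boots", "coat", "scarf"]:
    closet.use(item)
print(list(closet.items))  # ['boots', 'coat', 'scarf']: 'umbrella' was evicted
```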

Definitely, two thumbs up!