Homo Deus: A Brief History of Tomorrow

Homo Deus, Yuval Harari’s sequel to the wildly successful Sapiens, aims to chronicle the history of tomorrow and to provide us with a unique and dispassionate view of the future of humanity. In Homo Deus, Harari develops further the strongest idea in Sapiens: that religions (or shared fictions) are the reason why humanity came to dominate the world.

Many things are classified by Harari as religions, from traditional ones like Christianity, Islam, or Hinduism, to other shared fictions that we tend not to view as religions, such as countries, money, capitalism, or humanism. The ability to share such fictions gave Homo sapiens the capacity to coordinate enormous numbers of individuals in vast common projects: cities, empires and, ultimately, modern technology. This is the idea, proposed in Sapiens, that Harari develops further in this book.

Harari thinks that, with the development of modern technology, humans will doggedly pursue an agenda consisting of three main goals: immortality, happiness and divinity. Humanity will try to become immortal, to live in constant happiness and to be god-like in its power to control nature.

The most interesting part of the book is in the middle, where Harari analyses, in depth, the progressive but effective replacement of ancient religions by the dominant modern religion, humanism. Humanism is the relatively recent idea that there is a unique spark in humans, one that makes human life sacred and every individual unique, and that meaning should therefore be sought in the individual choices, views, and feelings of humans. This creed has almost completely replaced traditional religions (some of them millennia old), which held that meaning was to be found in ancient scriptures or “divine” sayings.

True, many people still believe in traditional religions but, with the exception of a few extremist sects and states, these religions play a relatively minor role in conducting the business of modern societies. Traditional religions have almost nothing to say about the key ideas that are central to modern societies, the uniqueness of the individual and the importance of the freedom of choice, ideas that led to our current view of democracies and ever-growing market-oriented economies. Being religious, in the traditional sense, is viewed as a personal choice, a choice that must exist because of the essential humanist value of freedom of choice.

Harari’s description of humanism’s schism into three flavors, liberal humanism, socialist humanism, and evolutionary humanism (Nazism and other similar systems), is interesting and entertaining. Liberal humanism, based on the ideals of free choice, capitalism, and democracy, gained the upper hand during the twentieth century, with occasional relapses, over socialism and enlightened dictatorships.

The last part of the book, where one expects Harari to give us a hint of what may come after humanism, once technology creates systems and machines that make humanist creeds obsolete, is rather disappointing. Instead of presenting us with the promises and threats of transhumanism, he clings to common clichés and rather mundane worries.

Harari firmly believes that there are two types of intelligent systems: biological ones, which are conscious and have, possibly, some other special properties, and the artificial ones, created by technology, which are not conscious, even though they may come to outperform humans in almost every task. According to him, artificial systems may supersede humans in many jobs and activities, and possibly even replace humans as the intelligent species on Earth, but they will never have that unique spark of consciousness that we, humans, have.

This belief leads to two rather short-sighted final chapters, which are little more than a rant against the likes of Facebook, Google, and Amazon. Harari is (and justifiably so) particularly aghast at the new fad, so common these days, of believing that every single human experience should go online, to make it shareable and give it meaning. The downside is that this fad provides data to the all-powerful algorithms that are learning all there is to know about us. I agree with him that this is a worrying trend, but viewing it as the major threat of future technologies is a mistake. There are much more important issues to deal with.

It is not that these chapters are pessimistic, even though they are. It is that, unlike in the rest of Homo Deus (and in Sapiens), in these last chapters Harari’s views seem to be locked inside a narrow and traditionalist view of intelligence, society, and, ultimately, humanity.

Other books, like Superintelligence, What Technology Wants, or The Digital Mind, provide, in my opinion, much more interesting views of what a transhumanist society may come to be.

The Digital Mind: How Science is Redefining Humanity

Following its release in the US, The Digital Mind, published by MIT Press, is now available in Europe, at an Amazon store near you (and possibly in other bookstores). The book covers the evolution of technology, leading towards the expected emergence of digital minds.

Here is a short rundown of the book, kindly provided by yours truly, the author.

New technologies have been introduced into human lives at an ever-increasing rate since the first significant advances took place with the cognitive revolution, some 70,000 years ago. Although electronic computers are recent and have been around for only a few decades, they represent just the latest way to process information and create order out of chaos. Before computers, the job of processing information was done by living organisms, which are nothing more than complex information processing devices, created by billions of years of evolution.

Computers execute algorithms, sequences of small steps that, in the end, perform some desired computation, be it simple or complex. Algorithms are everywhere, and they have become an integral part of our lives. Evolution is, in itself, a complex and long-running algorithm that created all species on Earth. The most advanced of these species, Homo sapiens, was endowed with a brain that is the most complex information processing device ever devised. Brains enable humans to process information in a way unparalleled by any other species, living or extinct, or by any machine. They provide humans with intelligence, consciousness and, some believe, even with a soul, a characteristic that makes humans different from all other animals and from any machine in existence.

But brains also enabled humans to develop science and technology to a point where it is possible to design computers with a power comparable to that of the human brain. Artificial intelligence will one day make it possible to create intelligent machines, and computational biology will one day enable us to model, simulate and understand biological systems, and even complete brains, with unprecedented levels of detail. From these efforts, new minds will eventually emerge, minds that will emanate from the execution of programs running in powerful computers. These digital minds may one day rival our own, become our partners and replace humans in many tasks. They may usher in a technological singularity, a revolution in human society unlike any other that happened before. They may make humans obsolete and even a threatened species, or they may make us super-humans or demi-gods.

How will we create these digital minds? How will they change our daily lives? Will we recognize them as equals or will they forever be our slaves? Will we ever be able to simulate truly human-like minds in computers? Will humans transcend the frontiers of biology and become immortal? Will humans remain, forever, the only known intelligence in the universe?


In memoriam of Raymond Smullyan: An unfortunate dualist

Mind-body Dualists believe there are two different realms that define us. One is the physical realm, well studied and understood by the laws of physics, while the other one is the non-physical realm, where our selves exist. Our essence, our soul, if you want, exists in this non-physical realm, and it interacts and controls our physical body through some as yet unexplained mechanism. Most religions are based on a dualist theory, including Christianity, Islam, and Hinduism.

On the other side of the discussion are Monists, who do not believe in the existence of dual realities.  The term monism is used to designate the position that everything is either mental (idealism) or that everything is physical (materialism).

Raymond Smullyan, who died two days ago (February 10th, 2017), had a clear view on dualism, which he expressed in the following story, published in his book This Book Needs No Title.

An Unfortunate Dualist

Once upon a time there was a dualist. He believed that mind and matter are separate substances. Just how they interacted he did not pretend to know; this was one of the “mysteries” of life. But he was sure they were quite separate substances. This dualist, unfortunately, led an unbearably painful life, not because of his philosophical beliefs, but for quite different reasons. And he had excellent empirical evidence that no respite was in sight for the rest of his life. He longed for nothing more than to die. But he was deterred from suicide by such reasons as: (1) he did not want to hurt other people by his death; (2) he was afraid suicide might be morally wrong; (3) he was afraid there might be an afterlife, and he did not want to risk the possibility of eternal punishment. So our poor dualist was quite desperate.

Then came the discovery of the miracle drug! Its effect on the taker was to annihilate the soul or mind entirely but to leave the body functioning exactly as before. Absolutely no observable change came over the taker; the body continued to act just as if it still had a soul. Not the closest friend or observer could possibly know that the taker had taken the drug, unless the taker informed him. Do you believe that such a drug is impossible in principle? Assuming you believe it possible, would you take it? Would you regard it as immoral? Is it tantamount to suicide? Is there anything in Scriptures forbidding the use of such a drug? Surely, the body of the taker can still fulfill all its responsibilities on earth. Another question: Suppose your spouse took such a drug, and you knew it. You would know that she (or he) no longer had a soul but acted just as if she did have one. Would you love your mate any less?

To return to the story, our dualist was, of course, delighted! Now he could annihilate himself (his soul, that is) in a way not subject to any of the foregoing objections. And so, for the first time in years, he went to bed with a light heart, saying: “Tomorrow morning I will go down to the drugstore and get the drug. My days of suffering are over at last!” With these thoughts, he fell peacefully asleep.

Now at this point a curious thing happened. A friend of the dualist who knew about this drug, and who knew of the sufferings of the dualist, decided to put him out of his misery. So in the middle of the night, while the dualist was fast asleep, the friend quietly stole into the house and injected the drug into his veins. The next morning the body of the dualist awoke, without any soul indeed, and the first thing it did was to go to the drugstore to get the drug. He took it home and, before taking it, said, “Now I shall be released.” So he took it and then waited the time interval in which it was supposed to work. At the end of the interval he angrily exclaimed: “Damn it, this stuff hasn’t helped at all! I still obviously have a soul and am suffering as much as ever!”

Doesn’t all this suggest that perhaps there might be something just a little wrong with dualism?

Raymond M. Smullyan

How to create a mind

Ray Kurzweil’s latest book, How to Create a Mind, published in 2012, is an interesting read and shows a welcome change in his views of science and technology. Unlike some of his previous (and influential) books, including The Singularity is Near, The Age of Spiritual Machines and The Age of Intelligent Machines, the main point of this book is not that exponential technological development will bring about a technological singularity within a few decades.


True, that theme is still present, but it takes second place to the main theme of the book, a concrete (although incomplete) proposal to build intelligent systems inspired by the architecture of the human neocortex.

Kurzweil’s main point in this book is to present a model of the human neocortex, which he calls the Pattern Recognition Theory of Mind (PRTM). In this theory, the neocortex is simply a very powerful pattern recognition system, built out of about 300 million (his number, not mine) similar pattern recognizers. The input to each of these recognizers can come from the senses, from the (evolutionarily speaking) older parts of the brain, or from the output of other pattern recognizers in the neocortex. Each recognizer is relatively simple, and can only recognize a simple pattern (say, the word APPLE) but, through complex interconnections with other recognizers above and below, it makes possible all sorts of thinking and abstract reasoning.

Each pattern consists, in essence, of a short sequence of symbols, and is connected, through bundles of axons, to the actual places in the cortex where those symbols are activated by other pattern recognizers. In most cases, the memories these recognizers represent must be accessed in a specific order. He gives the example that very few people can recite the alphabet, or even their social security number, backwards, which he takes as evidence of the sequential way these pattern recognizers operate.
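To make the idea more concrete, here is a minimal sketch, in Python, of the kind of hierarchy the PRTM describes: each recognizer matches a short sequence of symbols and, when it fires, emits its own name as a symbol for recognizers higher up. The class and function names are invented for this illustration; it is not Kurzweil’s actual model, which also involves learning, probabilities, and feedback from higher to lower levels.

```python
# A toy sketch of a hierarchy of sequential pattern recognizers.
# Illustration only; names and structure are assumptions for this example.

class PatternRecognizer:
    def __init__(self, name, pattern):
        self.name = name          # symbol emitted when the pattern is seen
        self.pattern = pattern    # the short sequence of symbols it expects

    def recognize(self, symbols):
        """Return this recognizer's name once for each place its pattern occurs."""
        n = len(self.pattern)
        hits = []
        for i in range(len(symbols) - n + 1):
            if symbols[i:i + n] == self.pattern:
                hits.append(self.name)
        return hits


def run_layer(recognizers, symbols):
    """Feed one layer of recognizers and collect the symbols they emit."""
    output = []
    for recognizer in recognizers:
        output.extend(recognizer.recognize(symbols))
    return output


if __name__ == "__main__":
    # Lower layer: recognizers for letter sequences (words).
    lower = [
        PatternRecognizer("APPLE", list("APPLE")),
        PatternRecognizer("PIE", list("PIE")),
    ]
    # Higher layer: a recognizer for a sequence of word symbols.
    higher = [PatternRecognizer("APPLE_PIE", ["APPLE", "PIE"])]

    letters = list("APPLEPIE")
    words = run_layer(lower, letters)    # ['APPLE', 'PIE']
    phrases = run_layer(higher, words)   # ['APPLE_PIE']
    print(words, phrases)
```

Running the example, the lower layer turns the stream of letters into the word symbols APPLE and PIE, and the higher layer then recognizes the two-word sequence; stacking such recognizers is, in essence, how the theory proposes that abstractions are built out of simple sequential patterns.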

The key point of the book is that the actual algorithms used to build and structure a neocortex may soon become well understood, and may then be used to build intelligent machines, endowed with true, strong Artificial Intelligence. How to Create a Mind falls somewhat short of the promise in the subtitle, The Secret of Human Thought Revealed, but still makes for some interesting reading.

The User Illusion: Cutting consciousness down to size

In this entertaining and ambitious book, Tor Nørretranders argues that consciousness, that hallmark of higher intelligence, is nothing more than an illusion, a picture of reality created by our brain that we mistake for the real thing. The book received good reviews and was very well received in his native Denmark and all over the world.

Using fairly objective data, Nørretranders makes his main point that consciousness has a very limited bandwidth, probably no more than 20 bits a second. This means that we cannot consciously process more than a few bits a second, distilled from the megabytes of information processed by our senses in the same period. Furthermore, this stream of information creates a simulation of reality, which we mistake for the real thing, and the illusion that our conscious self (the “I”) is in charge, while the unconscious self (the “me”) follows the orders given by the “I”.
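To get a feeling for the size of that gap, here is a back-of-the-envelope calculation. The sensory throughput used below (on the order of ten million bits per second) is an assumed round number for illustration, not a quote from the book; only the 20 bits per second figure comes from the paragraph above.

```python
# Rough illustration of the bandwidth gap between the senses and consciousness.
# The sensory figure is an assumed order-of-magnitude value for illustration;
# the conscious figure is the ~20 bits/s number mentioned in the review.

sensory_bits_per_second = 10_000_000   # assumed, for illustration only
conscious_bits_per_second = 20         # figure cited above

ratio = sensory_bits_per_second / conscious_bits_per_second
print(f"Consciousness sees roughly 1 bit for every {ratio:,.0f} bits sensed")
# -> Consciousness sees roughly 1 bit for every 500,000 bits sensed
```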


There is significant evidence that Nørretranders’ main point is well taken. We know (and he points it out in his book) that consciousness lags behind our actions, even conscious ones, by about half a second. As is also pointed out by another author, Daniel Dennett, in his book Consciousness Explained, consciousness controls much less than we think. Consciousness is more of a module that observes what is going on and explains it in terms of “conscious decisions” and “conscious attention”. This means that consciousness is more an observer of our actions than the agent that determines them. Our feeling that we consciously control our desires, actions, and sentiments is probably far from the truth, and a lot of what we consciously observe is a simulation carefully crafted by our “consciousness” module. Nørretranders also refers to the idea, defended by Julian Jaynes in his famous book The Origin of Consciousness in the Breakdown of the Bicameral Mind, that consciousness is a recent phenomenon, maybe no more than a few thousand years old.

Nørretranders draws on these arguments to suggest that we should pay less attention to conscious decisions (the “I”, as he describes it) and more to unconscious urges (the “me”, in his book), letting the unconscious “me”, which has access to vastly larger amounts of information, take control of more of our decisions.

Explaining (away) consciousness?

Consciousness is one of the hardest phenomena created by the human brain to explain. We are familiar with the concept of what it means to be conscious. I am conscious, and I assume that every other human being is also conscious. We become conscious when we wake up in the morning and remain conscious during waking hours, until we lose consciousness again when we go to sleep at night. There is an uninterrupted flow of consciousness that, with the exception of sleeping periods, connects who you are now with who you were many years ago.

Explaining exactly what consciousness is, however, is much more difficult. One of the best known, and most popular, explanations was given by Descartes. Even though he was a materialist, he balked when it came to consciousness, and proposed what is now known as Cartesian dualism, the idea that the mind and the brain are two different things. Descartes thought that the mind, the seat of consciousness, has no physical substance, while the body, controlled by the brain, is physical and follows the laws of physics.

Descartes’ ideas imply a Cartesian Theater, a place where the brain exposes the input obtained by the senses, so that the mind (your inner “I”) can look at these inputs, make decisions, take actions, and feel emotions.


In what is probably one of the most comprehensive and convincing analyses of what consciousness is, Dennett brings out all the guns against the idea of the Cartesian Theater, and argues that consciousness can be explained by what he calls a “multiple drafts” model.

Instead of a Cartesian Theater, where conscious experience occurs, there are “various events of content-fixation occurring in various places at various times in the brain”. The brain is nothing more than a “bundle of semi-independent agencies”, created by evolution, that act mostly independently and in semi-automatic mode. Creating a consistent view, a serial history of the behaviors of these different agencies, is the role of consciousness. It misleads “us” into thinking that “we” are in charge while “we” are, mostly, reporters telling a story to ourselves and others.

His arguments, supported by extensive experimental and philosophical evidence, are convincing, well structured, and discussed in depth, with the help of Otto, a non-believer in the multiple drafts model. If Dennett does not fully explain the phenomenon of consciousness, he certainly does an excellent job of explaining it away. Definitely one book to read if you care about artificial intelligence, consciousness, and artificial minds.

Is consciousness simply the consequence of complex system organization?

The theory that consciousness is simply an emergent property of complex systems has been gaining adherents lately.

The idea may be originally due to Giulio Tononi, from the University of Wisconsin in Madison. Tononi argued that a system that exhibits consciousness must be able to store and process large amounts of information and must have some internal structure that cannot be divided into independent parts. In other words, consciousness is a result of the intrinsic complexity of the internal organization of an information processing system, a complexity that cannot be broken into parts. A good overview of the theory was recently published in the Philosophical Transactions of the Royal Society.
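A crude way to see the intuition behind “integration” is to compare a system whose parts are statistically independent with one whose parts are tightly coupled. The sketch below uses plain mutual information between two bits as a stand-in for that idea; it is only an illustration under that assumption, not Tononi’s actual phi measure, and the specific numbers have no deeper meaning.

```python
# Toy illustration (not Tononi's phi): "integration" as information that
# exists only in the whole, measured here by mutual information between
# the two halves of a 2-bit system.

from math import log2

def mutual_information(joint):
    """Mutual information (in bits) between two binary variables,
    given their joint distribution joint[(a, b)] = probability."""
    pa = {a: sum(p for (x, _), p in joint.items() if x == a) for a in (0, 1)}
    pb = {b: sum(p for (_, y), p in joint.items() if y == b) for b in (0, 1)}
    mi = 0.0
    for (a, b), p in joint.items():
        if p > 0:
            mi += p * log2(p / (pa[a] * pb[b]))
    return mi

# Independent halves: every combination equally likely.
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
# Coupled halves: the two parts always agree.
coupled = {(0, 0): 0.5, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 0.5}

print(mutual_information(independent))  # 0.0 bits: the parts tell the whole story
print(mutual_information(coupled))      # 1.0 bit: information lives only in the whole
```

In the independent case the whole carries nothing beyond what its parts carry separately, while in the coupled case a full bit of information exists only at the level of the whole; that, very roughly, is the property Tononi argues a conscious system must have to a much greater degree.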

The theory has been gaining adherents, such as Max Tegmark, from MIT, who argues that consciousness is simply a state of matter. Tegmark suggests that consciousness arises out of particular arrangements of matter, and that there may exist varying degrees of consciousness. Tegmark believes present-day computers may be approaching the threshold of higher consciousness.


Historically, consciousness has been extremely difficult to explain because it is essentially a totally subjective phenomenon. It is impossible to assess objectively whether an animal or artificial agent (or even a human, for that matter) is conscious or not, since, ultimately, one has to rely on the word of the agent whose consciousness we are trying to assess. Tononi’s and Tegmark’s theories may, eventually, shed some light on this obscure phenomenon.