The Big Picture: On the Origins of Life, Meaning and the Universe Itself

Sean Carroll’s 2016 book, The Big Picture, is a largely successful attempt to cover all the topics listed in its subtitle: life, the universe, and everything. Carroll calls himself a poetic naturalist, his term for someone who believes physics explains everything but does not eliminate the need for other levels of description of the universe, such as biology, psychology, and sociology, to name a few.

Such an ambitious list of topics requires a fast-paced book, and that is exactly what you get. Organized in no fewer than 50 chapters, the book takes us from the very beginning of the universe to the many open questions related to intelligence, consciousness, and free will. Along the way, we learn about what Carroll calls the “core theory”, the complete description of all the particles and forces that make up the universe as we know it today, essentially encompassing the Standard Model and general relativity. He also takes us through the many things we know (and a few of the things we don’t) about quantum field theory and the strangeness of the quantum world, including a rather good description of the different ways of addressing this strangeness: the Copenhagen interpretation, hidden-variable theories, and (the one the author advocates) Everett’s many-worlds interpretation.

Although fast-paced, the book succeeds very well in connecting these different fields and exploring each in some depth. The final sections, covering life, intelligence, consciousness, and morals, are a very good introduction to these complex topics, many of which are also addressed in Sean Carroll’s popular podcast, Mindscape.

Kill the baby or the grandma?

What used to be an arcane problem in philosophy and ethics, the Trolley Problem, has been taking center stage in discussions about how autonomous vehicles should behave in the case of an accident. As reported previously in this blog, a website created by MIT researchers, The Moral Machine, gave everyone the opportunity to confront the dilemmas that an autonomous car may face when deciding what action to take in an unavoidable accident.

The site became so popular that it was possible to gather more than 40 million decisions, from people in 233 countries and territories. The analysis of this massive amount of data was just published in an article in the journal Nature. On the site, you are faced with a simple choice: drive forward, possibly killing some pedestrians or vehicle occupants, or swerve, killing a different group of people. From the choices made by millions of people, it is possible to derive some general rules about how people believe one should act when faced with the difficult choice of whom to kill and whom to spare.

The results show some clear preferences, but also that some decisions vary strongly with the culture of the person making the choice. In general, people decide to protect babies, children and pregnant women, as well as doctors (!). At the bottom of the preference scale are the elderly, animals and criminals.

Images: from the original article in Nature.

Would you throw the fat guy off the bridge?

The recent fatal accident involving a Tesla in autopilot mode did not involve any difficult moral decisions by the automatic driving systems, as it resulted from insufficient awareness of the conditions, both by the Tesla Autopilot and by the (late) driver.

However, it brings to the fore other cases where more difficult moral decisions may need to be made by the intelligent systems in charge of driving autonomous vehicles. The famous trolley problem has been the subject of many analyses, articles and discussions, and it remains a challenging topic in philosophy and ethics.

In this problem, which has many variants, you have to decide whether to pull a lever and divert an oncoming runaway trolley, saving five people but killing one innocent bystander. In one variant, there is no lever to pull, but you can push a fat man off a bridge, stopping the trolley but killing the man. People’s responses vary wildly with the specific case under analysis.

These complex moral dilemmas have been addressed in detail many times, and a good overview is presented in the book by Thomas Cathcart.


In order to obtain more data about these difficult moral decisions, a group of researchers at MIT has created a website where you can face the tough choices yourself.

Instead of the more contrived scenarios of the trolley problem, you have to decide whether, as a driver, to swerve or not, deciding in the process the fate of a number of passengers and bystanders.

Why don’t you try it?

An autonomous car has its first fatal crash! Now what?

For the first time, an autonomously driven vehicle, a Tesla Model S, had a fatal crash. According to the manufacturer, the car hit a tractor trailer that crossed the highway where the car was traveling. Neither the Autopilot, which was in charge, nor the driver noticed “the white side of the tractor trailer against a brightly lit sky”. In these difficult lighting conditions, the brakes were not applied and the car crashed into the trailer. The bottom of the trailer hit the windshield of the car, leading to the death of its only occupant.

This was the first fatal accident to happen with an autonomously driven car, and it happened after Tesla Autopilot had logged 130 million miles. On average, there is a fatality for every 94 million miles driven in the US, and for every 60 million miles worldwide, according to Tesla.


In its statement, Tesla makes clear that Autopilot “is an assist feature that requires you to keep your hands on the steering wheel at all times,” and that “you need to maintain control and responsibility for your vehicle” while using it.

Nonetheless, this crash is bound to be the first of many to raise significant questions with very difficult answers. Who is to blame for the fact that the Autopilot did not brake the car in order to avoid the impact?

The programmers, who coded the software that was driving the vehicle at the time of the accident? The driver, who did not maintain control of the vehicle? The designers of the learning algorithms used to derive significant parts of the control software? The system architects, who did not ensure that the Autopilot remained just an “assist feature”?

As autonomous systems in general, and autonomous cars in particular, become more common, these questions will multiply and we will need to find answers for them. We may be on the eve of a new golden age for trolleyology.