Empathy with robots – science fiction or reality?

A number of popular videos made available by Boston Dynamics (a Google company) have shown different aspects of the potential of bipedal and quadrupedal robots to move around in rough terrain and to carry out complex tasks. The behavior of the robots is strangely natural, even though they are clearly man-made mechanical contraptions.


In a recent interview given at Disrupt SF, Boston Dynamics CEO Marc Raibert put the emphasis on making the robots friendlier. Being around a 250-pound robot that can move very fast may be very dangerous to humans, so the company is creating smaller and friendlier robots that can move around safely inside people's houses.

This means that these robots can have many more applications beyond military ones. They may serve as butlers, servants, or even as pets.

It is hard to predict what sort of emotional relationship these robots may eventually become able to create with their owners. Their animal-like behavior makes them almost likeable to us, despite their obviously mechanical appearance.

In some of these videos, humans intervene to make the jobs harder for the robots, kicking them and moving things around in a way that looks frustrating to the robots. To many viewers, this may seem to amount to acts of actual robot cruelty, since the robots appear to become sad and frustrated. You can see some of those images around minute 3 of the video above, made available by TechCrunch and Boston Dynamics, or in the (fake) commercial below.

Our idea that robots and machines don't have feelings may be challenged in the near future, when human- or animal-like mechanical creatures become common. After all, extensive emotional attachment to Roomba robotic vacuum cleaners is nothing new!

Videos made available by TechCrunch and Boston Dynamics.


Would you throw the fat guy off the bridge?

The recent fatal accident with a Tesla in Autopilot mode did not involve any difficult moral decisions by the automated driving system, as it resulted from insufficient awareness of the road conditions, both by the Tesla Autopilot and by the (late) driver.

However, it brings to the fore other cases where more difficult moral decisions may need to be made by intelligent systems in charge of driving autonomous vehicles. The famous trolley problem has been the subject of many analyses, articles and discussions and it remains a challenging topic in philosophy and ethics.

In this problem, which has many variants, you have to decide whether to pull a lever and divert an oncoming runaway trolley, saving five people but killing one innocent bystander. In one variant of the problem, there is no lever to pull, but you can throw a fat man off a bridge, stopping the trolley but killing the man. People's responses vary widely with the specific variant under analysis.

These complex moral dilemmas have been addressed in detail many times, and a good overview is presented in the book by Thomas Cathcart.


In order to obtain more data about these difficult moral decisions, a group of researchers at MIT has created a website, the Moral Machine, where you can try to make the tough choices yourself.

Instead of the more contrived scenarios of the trolley problem, you have to decide whether, as a driver, you should swerve or not, deciding in the process the fate of a number of passengers and bystanders.

Why don’t you try it?

Bill Gates recommends the two books to read if you want to understand Artificial Intelligence

Also at the 2016 Code Conference, Bill Gates recommended the two books you need to read if you want to understand Artificial Intelligence. By coincidence (or not), these two books are exactly the ones I have previously covered in this blog: The Master Algorithm and Superintelligence.

Given Bill Gates' strong recent interest in Artificial Intelligence, there is a fair chance that Windows 20 will have a natural language interface just like the one in the movie Her (caution, spoiler below).

If you haven’t seen the movie, maybe you should. It is about a guy who falls in love with the operating system of his computer.

So, there is no doubt that operating systems will keep evolving in order to offer more natural user interfaces. Will they ever reach the point where you can fall in love with them?

Crazy chatbots or smart personal assistants?

Well-known author, scientist, and futurologist Ray Kurzweil is reportedly working with Google to create a chatbot named Danielle. Chatbots, i.e., programs that parse natural language and draw their input from social networks and other groups on the web, have been of interest to researchers because they represent an easy way to test new technologies in the real world.
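Just to illustrate the basic idea (a rough sketch only, and certainly not how Tay or Danielle actually work, since those rely on machine learning over large conversation corpora), a minimal ELIZA-style chatbot can be written in a few lines of Python: it matches the user's message against hand-written patterns and replies with a canned answer. All the patterns and replies below are invented for the example.

```python
import re
import random

# Illustrative only: a tiny rule-based chatbot in the spirit of ELIZA.
# Each rule maps a regular expression to a list of possible canned replies.
RULES = [
    (r"\bhello\b|\bhi\b", ["Hello! What would you like to talk about?"]),
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i think (.*)", ["What makes you think {0}?"]),
    (r".*", ["Tell me more.", "Interesting. Go on."]),  # fallback rule
]

def reply(message: str) -> str:
    """Return a reply for the first rule whose pattern matches the message."""
    text = message.lower()
    for pattern, answers in RULES:
        match = re.search(pattern, text)
        if match:
            # Insert any captured text into the chosen reply template.
            return random.choice(answers).format(*match.groups())
    return "I see."

if __name__ == "__main__":
    print("Chatbot ready (type 'quit' to exit).")
    while True:
        user = input("> ")
        if user.strip().lower() == "quit":
            break
        print(reply(user))
```

Even a toy like this shows why chatbots are such a convenient testbed: the conversational loop is trivial to set up, so all the research effort can go into making the replies less canned and more intelligent.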

Very recently, a chatbot created by Microsoft, Tay, made the news because it became “a Hitler-loving sex robot” after chatting for less than 24 hours with teens on the web. Tay was an AI created to speak like a teenage girl, in an experiment aimed at improving Microsoft's voice recognition software. The chatbot was rapidly “deleted” after it started comparing Hitler, in favorable terms, with well-known contemporary politicians.

Danielle, reportedly under development by Google with the cooperation of Ray Kurzweil, is expected to be released later this year. According to Kurzweil, Danielle will be able to maintain relevant, meaningful conversations, but he still points to 2029 as the year when a chatbot will pass the Turing test, becoming indistinguishable from a human. Kurzweil, the author of The Singularity Is Near and many other books on the future of technology, is a firm believer in the singularity, a point in human history where society will undergo such radical change that it will become unrecognizable to contemporary humans.


In a brief video interview (which was since removed from YouTube), Kurzweil describes the Google chatbot project, and the hopes he pins on this project.

While chatbots may not look very interesting, unless you have a lot of spare time on your hands, the technology can be used to create intelligent personal assistants. These assistants can take verbal instructions and act on your behalf, and may therefore become very useful, almost indispensable “tools”. As Austin Okere puts it in this article, “in five or ten years, when we have got over our skepticism and become reliant upon our digital assistants, we will wonder how we ever got along without them.”