Tesla announces full self-driving capability for all its cars

Tesla Motors announced that all current and future Tesla cars will be built with a ‘Full Self Driving Hardware’ package. This package is the next step in the development of Autopilot, and it will enable Model S, Model X and Model 3 cars to handle junctions, twisting rural roads and parking lots.

According to the press release, this hardware includes eight surround cameras providing 360-degree visibility around the car at up to 250 meters of range, twelve updated ultrasonic sensors, and a forward-facing radar with enhanced processing ability.


The video released by Tesla on its website shows the car driving autonomously in a number of different road conditions and parking itself after searching for a free parking space. Elon Musk tweeted: “When searching for parking, the car reads the signs to see if it is allowed to park there, which is why it skipped the disabled spot.” He added that in 2017 a driverless Tesla will travel from LA to NYC.


Is AI the worst mistake in human history?

In an interesting article, John Battelle added fuel to the fire in the ongoing discussion about the promises and dangers of artificial intelligence.

Physicists Stephen Hawking, Max Tegmark and Frank Wilczek, together with influential AI researcher Stuart Russell, stated in a widely cited article that “Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.” Elon Musk, billionaire and founder of several well-known companies, including SpaceX, PayPal, and Tesla Motors, also joined the fray by stating that “we should be very careful about artificial intelligence” and that if he had to guess “what our biggest existential threat is, it’s probably that.”

Some people dismiss these worries. Andrew Ng, chief scientist at Baidu Research in Silicon Valley and professor at Stanford, stated that “Fearing a rise of killer robots is like worrying about overpopulation on Mars”: it is not impossible, but it should not be a major worry.

Others point out that, while the first industrial revolution gave us cheap physical labor, freeing people to do other, more interesting jobs, the AI revolution will give us cheaper intellectual labor, freeing people to do more creative jobs. In this view, anything else is either wishful thinking or paranoia. That may indeed be the case, but some, including myself, worry that this time may be different.

Previous articles in this blog have also addressed this topic, including March of the Machines, a reference to a recent special edition of The Economist, and a brief review of Superintelligence, a book by Nick Bostrom about the dangers of AI.



When so many people talk about the dangers of technology, the Valley listens. One of the most open responses, so far, has been OpenAI, a non-profit AI company whose goal is “to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return”.

The idea is that, freed from the need to generate income, OpenAI can more effectively develop significant advances in artificial intelligence and make them open and usable by everyone. Also, by making sure that AI research is kept in the open, OpenAI hopes to reduce the risk of a takeover by a hostile AI.

An autonomous car has its first fatal crash! Now what?

For the first time, an autonomously driven vehicle, a Tesla Model S, had a fatal crash. According to the manufacturer, the car hit a tractor trailer that crossed the highway where the car was traveling. Neither the Autopilot, which was in charge, nor the driver noticed “the white side of the tractor trailer against a brightly lit sky”. In these difficult lighting conditions, the brakes were not applied and the car crashed into the trailer. The bottom of the trailer hit the windshield of the car, leading to the death of its only occupant.

This was the first fatal accident involving an autonomously driven car, and it happened after Tesla Autopilot had logged 130 million miles. On average, there is a fatality for every 94 million miles driven in the US, and for every 60 million miles worldwide, according to Tesla.
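Treating the figures quoted above as miles per fatal crash, a quick back-of-the-envelope comparison puts the three rates on a common scale. This is only a sketch of the arithmetic implied by the text, not a controlled statistical comparison: it ignores road types, driver demographics, and the tiny sample size of a single Autopilot fatality.

```python
# Miles per fatal crash, as quoted in the text above.
MILES_PER_FATALITY = {
    "Tesla Autopilot (first fatality)": 130e6,
    "US average": 94e6,
    "Worldwide average": 60e6,
}

# Invert each figure to express it as fatalities per billion miles,
# so the three numbers can be compared directly.
for label, miles in MILES_PER_FATALITY.items():
    rate = 1e9 / miles
    print(f"{label}: {rate:.1f} fatalities per billion miles")
```

On these numbers alone, Autopilot's rate (about 7.7 per billion miles) looks lower than the US and worldwide averages, but one data point says very little either way.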


In its statement, Tesla makes clear that Autopilot “is an assist feature that requires you to keep your hands on the steering wheel at all times,” and that “you need to maintain control and responsibility for your vehicle” while using it.

Nonetheless, this crash is bound to raise significant questions, with very difficult answers. Who is to blame for the fact that the autopilot did not brake the car in order to avoid the impact?

The programmers, who coded the software that was driving the vehicle at the time of the accident? The driver, who did not maintain control of the vehicle? The designers of the learning algorithms used to derive significant parts of the control software? The system architects, who did not ensure that Autopilot was just an “assist feature”?

As autonomous systems in general, and autonomous cars in particular, become more common, these questions will multiply and we will need to find answers for them. We may be on the eve of a new golden age for trolleyology.