Elon Musk is going to kill the self-driving car before it even takes off. It might seem like he’s done more for the image of the self-driving car than anyone else out there, Google included, but I believe the exact opposite is true. Yes, the Tesla Autopilot system is the most advanced commercially available driver-assistance system on the market today. However, that’s all it is: a system to assist drivers. It’s not a true autopilot system, as the name would suggest, and Tesla’s cars are by no means self-driving cars. If nothing else, the fatal crash involving Joshua David Brown, his Tesla Model S and a tractor trailer emphasises that point. Musk has gone out of his way to point out that Autopilot is still in beta and that any driver in a Tesla vehicle must have his hands on or near the wheel at all times. Musk and Tesla have also made it clear that the driver must be alert at all times.
However, Musk has not gone out of his way to actually prevent drivers from abusing the Autopilot system. There is no system in place to ensure that the driver is alert or that his hands are on or near the wheel, there is no compulsory training programme to educate drivers on the limitations of Autopilot, and nothing has been done to censure drivers for their blatant abuse of a system that isn’t fully capable yet. Autopilot does require a driver to place his hands on the wheel from time to time, but those prompts come at intervals of Autopilot’s own choosing, and again, this isn’t a real solution.

Following the crash, all Musk really said was, “Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied.” He also went on to say, “Had the Model S impacted the front or rear of the trailer, even at high speed, its advanced crash safety system would likely have prevented serious injury as it has in numerous other similar incidents.” Musk has conveniently bypassed the entire issue of responsibility and the inadequacy of Autopilot.

Let me just put things in perspective for you. The Tesla Model S was travelling down a straight road and a tractor trailer was coming down the other lane. The driver of the tractor trailer turned left at a junction. Tesla says that neither the car nor the driver noticed the white trailer against the brightly lit sky. The 62-year-old truck driver, for his part, claimed that he didn’t even see the car, and that it hit the trailer and passed underneath before he knew what had happened.

Firstly, the car itself uses only two sensors to keep an eye on the road. One is a monocular camera placed behind the rear-view mirror. The word “monocular” is very important here: it’s not a 3D camera, so it cannot sense depth. The second sensor is a forward-facing radar with limited range. There is also a slew of ultrasonic sensors all around, but they’re mostly for parking and have a range of about 5 metres. Neither of the two main sensors can paint a full picture of the road on its own.

Secondly, while it’s possible that an alert driver could have missed a tractor trailer coming down the opposite lane, possibly indicating that it was about to turn left, both the driver and Autopilot should have been able to see the trailer once it crossed the road. Neither did.
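To see how these two sensors could fail at the same time, consider a toy “brake only when both sensors confirm” policy. The Python sketch below is purely hypothetical and is in no way Tesla’s actual logic; the sensor classes, the confidence threshold and the overhead-sign filter (which the tweet below alludes to) are all assumptions made for illustration. It simply shows how a washed-out camera detection combined with a radar return filed under “overhead sign” leaves nothing to trigger the brakes.

```python
# Purely hypothetical sketch of a "brake only when both sensors confirm" policy.
# This is NOT Tesla's code; the classes, threshold and overhead-sign filter are
# assumptions made to illustrate how two imperfect sensors can fail together.

from dataclasses import dataclass
from typing import Optional

@dataclass
class CameraDetection:
    confidence: float      # how sure the vision system is that an obstacle is ahead

@dataclass
class RadarReturn:
    distance_m: float
    height_class: str      # "in_path" or "overhead" (bridges, road signs, etc.)

def should_brake(camera: Optional[CameraDetection], radar: Optional[RadarReturn]) -> bool:
    """Fire emergency braking only when both sensors agree on an in-path obstacle."""
    camera_sees_obstacle = camera is not None and camera.confidence > 0.5
    # Returns classified as overhead structures are tuned out to avoid
    # false braking events under bridges and road signs.
    radar_sees_obstacle = radar is not None and radar.height_class == "in_path"
    return camera_sees_obstacle and radar_sees_obstacle

# The crash scenario: the camera barely registers the white trailer against the
# bright sky, and the radar files the high, slab-sided trailer under "overhead".
camera = CameraDetection(confidence=0.1)                 # washed out, low confidence
radar = RadarReturn(distance_m=40.0, height_class="overhead")

print(should_brake(camera, radar))   # False -> no braking despite a real obstacle
```

The point of the sketch is not the specific numbers but the structure: when each sensor has its own blind spot and braking requires both to agree, the blind spots multiply rather than cancel.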
Radar tunes out what looks like an overhead road sign to avoid false braking events
— Elon Musk (@elonmusk) June 30, 2016
As Tesla clarified later on, the monocular camera was fooled by the white trailer and bright sky, and the radar system thought that the trailer was, of all things, an overhead sign. Why did the driver not see the trailer? He placed his trust in beta software. You do NOT play with human lives in this fashion. Musk knows that Autopilot is inadequate; he’s said it himself. Why is it still being allowed on the streets when there’s obvious proof of abuse? Would you step onto an aircraft if you knew that the pilot wasn’t adequately trained to fly it and that the software was unreliable? No competent safety board would allow that pilot to fly, let alone the aircraft.

When Google started work on their self-driving car project, the very first thing they noticed was that humans would be lulled into a false sense of security when placed in a self-driving car. They also noted that humans couldn’t snap back into “situational awareness” mode when required to handle a crisis. As the New York Times points out, this made Google’s engineers “a little nervous,” which is when Google decided to take humans out of the equation altogether. I’m certain that Musk and his engineers noticed the same thing, but rather than do something about it, they simply plastered warning stickers over everything and forgot about it.

Tesla’s electric cars are great; they’re the future. If I had the means, I’d get myself a Tesla Roadster without a second’s thought. But they’re not self-driving cars, and Autopilot isn’t a safe alternative. Musk needs to take serious steps to ensure that Autopilot isn’t abused. People need education on the limitations of Autopilot. More important than that, however, is a system to rein in the abuse of Autopilot. People are going to do stupid things, Musk included, and without sufficient safety measures (restricting the speed, monitoring driver alertness, etc.) people will die and they will blame self-driving cars. If nothing is done, Autopilot will give true self-driving cars a bad reputation even before the technology has a chance to prove itself. And Elon Musk will be entirely to blame for that.

Disclaimer: We’re not pointing fingers at the driver of the trailer, the car or Autopilot. The investigation is still ongoing. We are pointing fingers at Elon Musk for not taking adequate measures to educate drivers on the inadequacy of Autopilot.