tech2 News Staff | Jan 26, 2018 17:02:29 IST
On 22 January, a Tesla Model S slammed into the back of a stationary fire truck at over 65 mph (around 105 km/h). While it's certainly fortunate that nobody was injured, it's scary that a vehicle labelled as "semi-autonomous" can't see a big, red fire engine parked right in its path on a highway.
What's even scarier is that this behaviour isn't unusual. Most vehicles with purportedly "self-driving" capabilities will make the same mistake, and quite often at that.
Vehicles claiming to be semi-autonomous today largely rely on radar to scan their surroundings and detect obstacles. For radar to work, a pre-configured radio pulse is sent out via an antenna. These radio waves bounce off obstacles and are reflected back, and a tuned antenna picks up the reflections. An onboard computer processes the reflections and builds a map of the surroundings from the data.
This sounds amazing, and it is, but the biggest problem with radar is that everything will reflect the radio waves and the signal will echo. If you’ve ever tried shouting in a large cave, say, you’ll know how hard it is to determine the source of the sound. That’s exactly what happens when a radar detects an echo. Since the radar doesn’t “see” the world and only interprets incoming radio signals, it’s not very good at understanding the world around it.
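The core of radar ranging is just a timing calculation. Here's a minimal sketch of the idea (purely illustrative, not any automaker's actual code): the distance to an obstacle is half the round-trip path the pulse travels.

```python
# Illustrative sketch of radar ranging: distance is inferred from how
# long a pulse takes to echo back off an obstacle.

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def range_from_echo(round_trip_seconds: float) -> float:
    """Distance to an obstacle from the round-trip time of a radar pulse.

    The pulse travels out and back, so the one-way distance is half
    the total path length.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# An echo arriving 1 microsecond after transmission puts the obstacle
# roughly 150 metres away.
print(round(range_from_echo(1e-6)))  # ~150
```

The catch, as the next paragraph explains, is that the antenna picks up *every* echo, and timing alone doesn't tell you which reflection came from what.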
One solution to this problem, which automakers have adopted, is to filter out the reflections from stationary objects. Automotive radar of the type used in Tesla vehicles looks for something called Doppler-shifted signals.
Imagine you're at a train station. A train is approaching you at high speed with its horn at full blast. As it approaches, the horn sounds higher-pitched because the sound waves are being compressed. As the train moves away from you, the pitch of the horn drops, because the sound waves are now being stretched out. This is the Doppler effect.
Most radar implementations take advantage of the Doppler shift to detect only objects that are moving. Stationary objects, like a fire engine, are ignored. This technique eases the workload on the processor, but it can also result in accidents like the one that occurred on 22 January. This is not a new issue, and engineers have been struggling with it since World War II, when radar was first deployed. Even America's most effective fighter plane, the F-15 Eagle, struggled with it.
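The filtering described above can be sketched in a few lines. This is a simplified illustration under assumed numbers (a 77 GHz automotive band and an arbitrary threshold), not Tesla's actual pipeline: each return's Doppler shift is computed from its radial velocity, and anything below the threshold is discarded.

```python
# Simplified illustration of Doppler-based filtering (not any vendor's
# real pipeline): returns with near-zero radial velocity produce almost
# no Doppler shift, so a naive filter drops them -- including a parked
# fire truck.

RADAR_FREQ_HZ = 77e9            # typical automotive radar band (assumed)
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def doppler_shift_hz(radial_velocity_mps: float) -> float:
    """Two-way Doppler shift for a target closing at the given speed."""
    return 2.0 * radial_velocity_mps * RADAR_FREQ_HZ / SPEED_OF_LIGHT

def moving_targets(detections, min_shift_hz=100.0):
    """Keep only detections whose Doppler shift clears the threshold."""
    return [d for d in detections
            if abs(doppler_shift_hz(d["radial_velocity"])) >= min_shift_hz]

detections = [
    {"name": "car ahead", "radial_velocity": -5.0},  # closing at 5 m/s
    {"name": "fire truck", "radial_velocity": 0.0},  # parked: no shift
]

# The parked fire truck is silently filtered out of the track list.
print([d["name"] for d in moving_targets(detections)])  # ['car ahead']
```

The design trade-off is clear: dropping zero-shift returns removes endless clutter from guardrails, signs and bridges, but a stationary vehicle in your lane looks exactly like that clutter.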
Speaking to experts, Wired has reported that this is indeed the most likely explanation for the crash. To the radar, the truck was invisible. The report also points to LiDAR, a mechanism that uses light the same way radar uses radio waves to map the environment, as a better alternative. This technique is far more reliable than radar and isn't as easily blinded by "echoes". Google's self-driving car project uses LiDAR for exactly this reason. Unfortunately, Wired reports that LiDAR is still too expensive and the hardware too unreliable to implement at scale.
The Tesla Model S uses one camera, used primarily for lane detection and reading speed limits, a GPS unit that assists with this, and a dozen or so ultrasonic sensors for parking assistance. The only unit that's actually watching the road and monitoring traffic is a radar mounted in the nose. That's it. A radar that can, in all likelihood, only track moving objects. If the vehicle in front of you were to pull out of the lane, revealing a stationary obstacle, your car would likely just speed up.
Clearly, it is absolutely essential for the driver behind the wheel to remain alert.
While working a freeway accident this morning, Engine 42 was struck by a #Tesla traveling at 65 mph. The driver reports the vehicle was on autopilot. Amazingly there were no injuries! Please stay alert while driving! #abc7eyewitness #ktla #CulverCity #distracteddriving pic.twitter.com/RgEmd43tNe
— Culver City Firefighters (@CC_Firefighters) January 22, 2018
As usual, the bigger problem right now isn't that the technology is still immature; it's that the messaging around "self-driving" and "autonomous" vehicles is very misleading.
Yes, the technology is amazing and yes, it will save — and probably has saved — a lot of lives. However, it’s also far from perfect. People have been misusing Tesla’s semi-autonomous features from day one, and that’s not going to change any time soon.
When developing its own self-driving cars, Google discovered that humans can’t be trusted to handle semi-autonomous cars. “Human drivers can’t always be trusted to dip in and out of the task of driving when the car is encouraging them to sit back and relax”, said Google’s Chris Urmson in a Reuters interview.
Another Wired report also highlights this fact, pointing out that many automakers, including Volvo — a company with a reputation for making some of the world’s safest cars — are moving on from semi-autonomous vehicles and focussing on fully autonomous ones.
Humans either need to be completely attentive to the needs of the vehicle or completely left out of the picture. A semi-attentive state will simply not work. How attentive would you be to your semi-autonomous vehicle when you're busy playing Candy Crush on your phone?
Semi-autonomous vehicles will inevitably lull their human cargo into a false sense of security. Sure, the vehicles will work fine 99 percent of the time, but the 1 percent of the time they fail can be utterly disastrous.
For some reason Tesla, and Elon Musk by extension, are simply refusing to accept this truth.