As Tesla reportedly prepares to launch its long-awaited driverless robotaxi ride-hailing service in Austin this month, plenty of critical questions about its safety and operations remain unanswered.
With Tesla's global electric vehicle sales continuing to plunge, CEO Elon Musk is betting on self-driving taxis, which he believes will make the automaker the most valuable company on Earth, worth trillions of dollars.
But the safety record of Tesla's underlying Full Self-Driving (FSD) software has not been promising. FSD, available to Tesla owners as an $8,000 add-on, has been linked to many crashes, some of them fatal.
Now, as the launch date approaches, several scientists in the AI and autonomous vehicle fields, as well as federal safety investigators, have questions for Tesla about how it plans to make its driverless cars safe.
It starts with the National Highway Traffic Safety Administration (NHTSA), which sent Tesla a detailed letter in May requesting more operational information. Here are some (but not all) of the agency's questions, reworded for conciseness:
- How will FSD for robotaxi be different from FSD (Supervised) that’s available for Tesla owners?
- What operational restrictions will Tesla implement relating to time of day, weather, geofencing, maximum speed, and more?
- Will the restrictions be implemented primarily to ensure safe operations?
- What metrics does Tesla track for disengagements and interventions with unsupervised FSD?
- What is Tesla’s approach to crash detection and response?
Experts have long warned that Tesla's hardware and software approach is flawed. The automaker uses only cameras and artificial intelligence to train the robotaxi's brain to drive autonomously.
That's the opposite of Waymo's approach. The Alphabet-owned autonomous vehicle company uses a far more sophisticated sensor suite, with cameras, radar and lidar mounted on the vehicle's roof and sides, to make sense of the environment and drive around safely.
And that's not all. Philip Koopman, an associate professor of electrical and computer engineering at Carnegie Mellon University and the author of the book How Safe Is Safe Enough, also raised several questions in a recent blog post. Among them: Will the teleoperators actually drive the Model Y robotaxis? And how safe will the service actually be?
Finally, there’s one especially burning question here: when will the first crash occur?
Tesla has been hiring teleoperations engineers to control the vehicles remotely. That in itself is not unusual; even Waymo keeps remote operators as an emergency backup. But it's unclear whether Tesla's remote drivers will control the Model Ys at all times with pedals and steering wheels of their own, or only intervene when the vehicle's computers flag a challenging traffic situation.
As Koopman notes, there are serious concerns about communication lag, and Tesla will need system redundancies in case the cellular network drops out. It's also unclear where the remote operators would be stationed. In the event of a crash, could Tesla be sued in California (if the operators are located there) even if the crash happened in Texas?
Other scientists have taken a more critical stance on Tesla's approach. Missy Cummings, the director of the Mason Autonomy and Robotics Center at George Mason University, has called out Tesla's reliance on teleoperation.
For teleoperation to be safe, the communication latency between the remote human driver and the car on the road has to be as low as 10 milliseconds, she said, but that’s nowhere near what teleoperation can currently achieve.
“Remote driving is possible under carefully controlled conditions, but it is just a matter of time until someone is killed for a company using this approach—that victim will likely be a pedestrian or cyclist,” Cummings said in a recent LinkedIn post.

Moreover, we don't know whether the robotaxis will be classified as Level 2, Level 3 or Level 4 under the Society of Automotive Engineers (SAE) standards. If a remote driver is in charge, that would effectively place the self-driving Model Ys in the Level 2 category, alongside all the advanced driver assistance systems (ADAS) currently available on passenger cars, such as Ford's BlueCruise or General Motors' Super Cruise.
To qualify as Level 4 systems, like Alphabet's Waymo, the cars must never require occupants to take over control. And even Level 4 systems aren't foolproof, as we've seen with crashes involving Waymo and GM's now-dissolved Cruise robotaxis. Musk has said that the robotaxis will be geofenced to the safest areas of Austin, meaning they will be restricted to driving only within pre-mapped city limits.
This is where the safety question gets murky. According to the Federal Highway Administration, a fatality occurs on U.S. roads roughly every 100 million miles driven. With an initial deployment of just 10 or 20 unsupervised Model Ys, it would take Tesla years to accumulate that kind of mileage and produce statistically meaningful data.
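To put that gap in perspective, here is a rough back-of-envelope sketch in Python. The fleet sizes come from the reported 10-to-20-car launch; the 200 miles per car per day is purely an assumed figure for illustration, not anything Tesla has published:

```python
# Back-of-envelope: years for a small robotaxi fleet to log 100 million miles,
# the FHWA's rough benchmark of one fatality per 100 million miles driven.
# Fleet sizes reflect the reported 10-20 car launch; miles/day is assumed.

FATALITY_BENCHMARK_MILES = 100_000_000

def years_to_benchmark(fleet_size: int, miles_per_car_per_day: float) -> float:
    """Years needed for the whole fleet to accumulate the benchmark mileage."""
    fleet_miles_per_year = fleet_size * miles_per_car_per_day * 365
    return FATALITY_BENCHMARK_MILES / fleet_miles_per_year

for fleet in (10, 20):
    print(f"{fleet} cars at 200 mi/day: {years_to_benchmark(fleet, 200):.0f} years")

# Output:
# 10 cars at 200 mi/day: 137 years
# 20 cars at 200 mi/day: 68 years
```

Even under these generous assumptions, a fleet this small would need decades to rack up the mileage against which U.S. fatality rates are measured.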
As for the current FSD, which is already on the market, Tesla does not release its safety record the way it does for Autopilot. So even if the 10 Model Ys driving in the safest areas of Austin never crash, that doesn't mean they're fully safe; it may just mean they were exposed to far fewer road hazards.
A self-driving Model Y was spotted driving around Austin for the first time today. But it's unclear whether Tesla will launch the actual ride-hailing service on June 12, as Bloomberg reported. The automaker has never confirmed that date, pointing instead to the end of June.
What other pressing questions do you think Tesla should answer about its much-awaited autonomous taxi service? Leave your thoughts in the comments.
Have a tip? Contact the author: suvrat.kothari@insideevs.com