
'Full self-driving': 4 reasons why Elon Musk is wrong; 1 place he could be right

Next time you hear Musk make an FSD promise, take it with a grain of salt

Jay Hilotin, Senior Assistant Editor
A Tesla Cybertruck had an unfortunate run-in with a pole on February 11 while using Full Self-Driving (FSD) v13 in Reno, Nevada. The high-tech ride failed to merge lanes, bounced off a curb, and went headfirst into the pole. While FSD might be smart, perhaps in 99.99% of instances, it’s not quite pole-proof yet. Fortunately, the driver lived to tell the story.
@mariusfanu | X

Elon Musk, the billionaire mastermind behind Tesla, SpaceX, and other futuristic ventures, has the knack for making the impossible... possible. 

Electric cars? Done.

Reusable rockets? Check.

Solar and grid-scale batteries? On it.

Cell sites in the sky? Awesome.

AI-powered humanoid robots? You bet.

But when it comes to "Full Self-Driving" (FSD), well, let’s just say Musk is shooting beyond Mars.

Here’s why:

1. "Full Self-Driving" is a misleading name

Tesla’s FSD isn't actually "full" self-driving. It's close, but not quite. The reason: According to Tesla itself, FSD provides "active guidance and automatic driving under your active supervision."

That’s a fancy way of saying: "You still need to pay attention."

For years, Tesla owners have been promised that FSD would allow them to kick back, relax, or even snooze while their car drives itself.

Has that actually happened? Nope.

Sure, Tesla’s self-driving AI has improved dramatically. Countless end-user videos have showcased zero-intervention self-driving trips that are quite remarkable.

But, let’s get real: Self-driving robots on Mars? Yep, that happened.

Those rovers are impressive: Opportunity, for instance, operated for 14.5 years. But they're still only semi-autonomous, partly controlled by humans on Earth.

But lucky for them: they don’t have to dodge reckless drivers, jaywalkers, kids chasing soccer balls, stray animals, or those rogue shopping carts with a mind of their own.

Teaching a vision-based self-driving bot to navigate the chaos of the real world? Now, that’s a whole different level of tricky.

So a Tesla AI driver may eventually hit 99.99 per cent reliability on open roads, which is awesome. However, given the unpredictable "edge cases", truly 100 per cent "full" self-driving on Earth is arguably harder than sending humans to Mars.
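To see why "99.99 per cent reliable" is still a long way from "full", consider a quick back-of-the-envelope sketch. (The daily trip count below is a made-up illustration for scale, not Tesla data.)

```python
# Why "99.99% reliable" still isn't "full" self-driving at fleet scale.
# The trip count is a hypothetical illustration, not Tesla data.

trips_per_day = 1_000_000    # assumed fleet-wide daily trips
reliability = 0.9999         # 99.99% of trips need zero intervention

problem_trips = trips_per_day * (1 - reliability)
print(f"~{problem_trips:.0f} trips a day would still need a human to step in")
```

Even at that impressive reliability, a large fleet would still produce a steady stream of edge-case failures every single day.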

94%
Human error accounts for 94 per cent of all vehicular accidents (Source: US National Highway Traffic Safety Administration, NHTSA)

2. Elon time: The art of being “almost there”... forever

Musk is famous for delivering the impossible... just a little late. And that’s fine — except when people have already paid thousands of dollars for FSD and are still waiting for it to, well, fully drive.

For nearly a decade, Musk has promised that self-driving Teslas were just around the corner. Spoiler alert: they weren’t.

On January 8, 2025, during the Consumer Electronics Show (CES) in Las Vegas, Musk admitted that Tesla's approach to self-driving is "hard".

“There already are autonomous vehicles in some regions. Waymo has autonomous vehicles with no one in it. But they're limited to a few cities in the US. The Tesla solution, which is a much more difficult path to go but ultimately much more powerful, is a general solution to self-driving."

"The Tesla software is purely AI and vision. It doesn't rely on any expensive sensors, no lidars, no radars. It doesn't even require knowing the area beforehand. You could have it drive someplace it's never been before, and no Tesla has ever been before. It could even be an alien planet. And the car will still work, still drive.”

Statistically, Tesla’s AI driver is already safer than most human drivers. The company claims that its "Autopilot" system crashes only once every 5.5 million miles.

The National Highway Traffic Safety Administration (NHTSA) estimated that 18,720 people died in traffic crashes in 2024, of which 94 per cent were caused by human error.

A 2023 analysis of US federal data found that vehicles using "Autopilot" guidance have been involved in over 730 accidents since 2014.
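To put those per-mile figures side by side, here's a rough comparison. (Tesla's 5.5-million-mile claim is from the article; the human-driver baseline of one crash per 500,000 miles is an illustrative assumption, not a figure from this article.)

```python
# Rough crash-rate comparison (back-of-the-envelope, not a rigorous study).
# Tesla's claim, per the article: one Autopilot crash every 5.5 million miles.
# The human-driver baseline below is an ASSUMPTION used only for scale.

TESLA_MILES_PER_CRASH = 5_500_000   # from Tesla's claim
HUMAN_MILES_PER_CRASH = 500_000     # assumed baseline, for illustration

# How many times more often the assumed human baseline crashes per mile:
ratio = TESLA_MILES_PER_CRASH / HUMAN_MILES_PER_CRASH
print(f"Assumed human crash rate is about {ratio:.0f}x the claimed Autopilot rate")
```

Note the comparison is only as good as its inputs: Tesla's figure is self-reported, and Autopilot miles skew towards highways, where crashes are rarer per mile.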

Despite its claimed better-than-human safety record, the full FSD dream remains stuck in "beta" mode.

Another 10 years of "just wait, it's coming"? Even his most loyal fans are running out of patience.

That's why the robotaxi rollout set for June in Texas will be scrutinised at every turn.

Why Texas?
Texas law allows autonomous vehicle companies to operate freely on public roads as long as they are registered, insured, and equipped to record crash data — just like regular cars. No state agency regulates driverless taxis, and local governments are banned from making their own rules.

3. Musk admits he got it wrong

For years, Musk insisted that Teslas built since 2019 had all the necessary hardware for full autonomy.

Then came the plot twist: they don’t.

Turns out, those cars are missing key components, and software updates alone won’t magically make them fully self-driving. This means many Tesla owners who paid for FSD might never actually get the full experience. Oops.

Solving self-driving is harder than I originally thought. The sheer volume of testing and real-world training needed (for FSD) is massive.
Elon Musk, Tesla CEO

Achieving full autonomy has taken longer than Musk himself predicted.

Despite multiple timeline estimates, Musk has highlighted that, unlike a controlled environment (such as a game), self-driving must handle unpredictable human behaviour and weather conditions.

4. Vision-only fundamentalism is problematic

This is the underlying challenge. Tesla’s vision-driven AI is built to drive from general principles and learned intuition, rather than from static, pre-mapped routes.

Its FSD relies on AI trained on billions of kilometres of real-world driving data. Tesla continues to refine the system through iterations, marked by version numbers (e.g. v13.2.6).

The fact that Tesla's self-driving software can drive somewhere it has never been, with no prior map data, and still get around is phenomenal.

However, Musk's vision-only fundamentalism is problematic, and is raising concerns even among his fanboys.

FSD takes a wrong turn

In a not-so-futuristic twist, a Tesla Cybertruck had an unfortunate run-in with a pole on February 11 while using Full Self-Driving (FSD) v13. The driver, Jonathan Challinger, watched in shock as his high-tech ride failed to merge lanes, bounced off a curb, and went headfirst into the pole.

Challinger shared the mishap online as a friendly reminder: Stay alert, even with FSD engaged. While Tesla hypes up its self-driving magic, this crash proves that human drivers still need to keep their hands on the wheel (and their eyes on the road).

So, while FSD might be smart, and nice to have, it’s not quite pole-proof (yet).

How did it happen?

For one, no amount of in-car or data centre-based AI computing power can anticipate every road condition on Earth.

Musk described FSD software development as one of the "hardest" AI problems, "probably the hardest".

While he remains confident that Tesla will eventually achieve full autonomy, exact timelines are uncertain.

Tesla’s FSD system is known to struggle with dust, ash, and sun glare.

Some of his most ardent supporters question the wisdom of vision-only FSD, now that off-the-shelf lidar units sell for under $500.

Ross Gerber, CEO of Gerber Kawasaki Wealth and Investment Management, pointed out a key difference in autonomous driving: most major companies use lidar, while Tesla relies only on cameras.

Meanwhile, Gary Black, managing partner of The Future Fund LLC, believes self-driving tech is becoming a shared market, not a winner-takes-all race. He supports Tesla as the global EV leader but doesn’t expect it to dominate ride-sharing entirely.

Robotaxi debut, or yet another experiment?

Tesla's self-driving "robotaxis" will debut in Austin, Texas, in June, taking advantage of the state's favourable regulatory environment, design chief Franz von Holzhausen told Fortune.  

Is it the future of ‘ride-hailing’? And what about existing Tesla owners? Will they be able to upgrade? Will FSD finally become what its name entails, or will it remain a fascinating experiment that tests our faith, or lack of it, in technology?

FSD availability and testing:

1. US

  • Widely available as FSD (Supervised) in most states.

  • Regulators still require human oversight, meaning it's not yet fully autonomous. Some states, like California and Texas, have been key testing grounds.

2. Canada

  • Tesla started expanding FSD (Supervised) to Canadian users in 2023, with restrictions similar to those in the US, including required driver monitoring.

3. China (Limited Availability)

  • Tesla sells vehicles with Autopilot, but FSD has not been fully deployed due to local regulations and mapping restrictions.

  • China prioritises domestic AI and mapping tech.

4. Europe

  • The EU has stricter autonomous driving regulations, limiting Tesla’s FSD features.

  • Basic Autopilot is available, but FSD Beta is not widely deployed. Germany, Norway, and the Netherlands have been more open to testing autonomy.

5. Australia & New Zealand

  • Tesla has launched Autopilot, but FSD (Supervised) is not yet widely available due to regulatory challenges.

Where FSD could be 100% perfect

While vision-only FSD faces many risks in open-road conditions, road tunnels could be a perfect setting to roll it out at scale.

Here’s why:

1. Fewer edge-cases 

  • With fewer environmental variables (no pedestrians, cyclists, falling leaves, tree trunks, complex intersections, roundabouts, unexpected road crossings, etc) and no harsh weather conditions (rain, fog, snow, landslides, etc) that can interfere with vision-only AI, a fully driverless vehicle (much like a driverless train) could work with a high degree of reliability.

2. Highly-structured environment

  • Predictable, clear lane markings that remain consistent throughout the tunnel.

  • Barrier-protected lanes prevent random objects or cars from suddenly entering.

  • Limited entry/exit points, reducing the need for AI to make complex decisions.

3. No GPS dependency 

  • In a tunnel, precise camera-based lane tracking is enough to maintain lane discipline, sidestepping the problems caused by poor GPS signals.

4. Smooth traffic flow 

  • Speed is usually regulated with fewer sudden stops or erratic driving behaviour.

  • No left or right turns — just straight or gentle curves, which Tesla’s vision AI handles well.

Takeaway

So when it comes to Elon Musk’s FSD promises, it’s best to take them with a grain of salt.
