
A point I commonly hear made by both Tesla and Comma.ai, for example, is that Lidar is far too expensive (Waymo's vehicle reportedly costs a total of $200,000) and that cameras alone are sufficient for full self-driving. I do think cameras alone are probably sufficient for full self-driving, but every time I hear this point I think to myself: progress on the camera-only front is moving so slowly that Lidar might become affordable before camera-only makes substantial progress, and at that point it would have been much more efficient to just develop with Lidar from the start. Am I missing something, or am I just completely wrong? I would really appreciate any insight on this.


The conversation is complicated because LIDAR is used for many purposes in self driving. It's used for localization, it's used for object detection, it's used for classification, it's used for understanding occlusions, it's used to get accurate 3D positions and speeds of objects, etc. For each of these cases there is a way to use just cameras (or cameras augmented with radar), but often with a significant performance penalty.

One of the big challenges is that most self driving stacks have an interface between the perception and planning stages that is specified to be a 3D model of the world. LIDARs are particularly helpful at creating 3D models because that is essentially their native data product. So, for "traditional" AV stacks that use this interface, LIDARs are bound to improve performance a lot.
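To make that interface concrete, here's a minimal, hypothetical sketch (not any real stack's code): perception hands planning a list of tracked objects with metric 3D state. A LIDAR point cloud is already metric 3D, so it feeds this interface almost directly, while cameras must first infer depth:

```python
# Hypothetical perception -> planning interface; all names are illustrative.
from dataclasses import dataclass

@dataclass
class TrackedObject:
    label: str                                # e.g. "vehicle", "pedestrian"
    position_m: tuple[float, float, float]    # x, y, z in ego frame, meters
    velocity_mps: tuple[float, float, float]  # estimated velocity, m/s

def plan_speed(objects: list[TrackedObject], ego_speed_mps: float) -> float:
    """Toy planner: slow to 2 m/s if anything is within 10 m directly ahead."""
    for obj in objects:
        x, y, _ = obj.position_m
        if 0 < x < 10 and abs(y) < 2:  # in our lane, close ahead
            return min(ego_speed_mps, 2.0)
    return ego_speed_mps

scene = [TrackedObject("pedestrian", (8.0, 0.5, 0.0), (0.0, 0.0, 0.0))]
print(plan_speed(scene, 13.0))  # 2.0 -> brake for the pedestrian
```

The point is that everything downstream consumes 3D positions and velocities, so a sensor whose native output is 3D points is a natural fit.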

If you use a different approach, say pure imitation learning from sensor data, you might find that LIDARs are not as important. (Intuition: humans drive well without knowing super-accurate positions or velocities of objects.) Tesla isn't taking a pure imitation-learning approach (yet), but they are more in line with this strategy.

All this said, I don't think that having or not having LIDARs is a major factor in the progress of self driving. It's just a way to use money to improve perception performance (and reduce data labeling cost!) but neither is the major blocker for the industry. If we extrapolate from the last 10 years of progress, it seems like high-level self driving is going to take a while and I think that it's likely that LIDAR prices will have fallen dramatically by then and we'll see them as part of the overall sensor constellation on most vehicles.


I think you're probably right - the really difficult bits of self-driving cars are still really difficult even if you have perfect sensor data.

But still, we're a long way from vision-based sensing being good enough for reliable self driving, so why make it difficult for yourself?


It's a question that ultimately depends on the utilization economics pathway, which if you listen to folks like Tony Seba, is going to come from the side of robotaxi fleets, not personally owned vehicles.

If you own a taxi fleet, the cost structure will favor spending more on Lidar if it gets you to a level 4 or 5 result faster. Cars designed for personal use will tend towards seeing the feature as a consumer "add-on" and prefer cameras. Tesla's business has focused on this latter path, trying to capture high end users first and then mainstream the results.

But if Seba is correct and we take a logistic-curve pathway with both EV and self-driving tech, the cost curve will price robotaxi fleets underneath all forms of ownership within just a few years; you get much better utility from your sensor investment if the same car is making a dozen trips every day, and the consumer pays "for what they use" instead of having an unused hunk of metal taking up parking space. At that point, adoption shoots up and the consumer add-on model doesn't have a leg to stand on. Camera tech might still get better and replace the sensor package, but the race, such as it is, would be won by whoever deploys a fleet at scale first.
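The utilization argument is just amortization arithmetic. A back-of-envelope sketch with made-up numbers (the $10,000 sensor cost and four-year service life are assumptions, not figures from this thread):

```python
# Amortizing an assumed sensor package over trips shows why fleets can
# tolerate expensive LIDAR more easily than personally owned cars can.
SENSOR_COST = 10_000     # assumed LIDAR package cost, USD (hypothetical)
LIFETIME_DAYS = 4 * 365  # assumed service life of the vehicle

def cost_per_trip(trips_per_day: float) -> float:
    return SENSOR_COST / (LIFETIME_DAYS * trips_per_day)

print(f"personal car,  2 trips/day: ${cost_per_trip(2):.2f}/trip")
print(f"robotaxi,     12 trips/day: ${cost_per_trip(12):.2f}/trip")
```

Under these assumptions the robotaxi's per-trip sensor cost is six times lower than the personal car's, simply because the hardware is never idle.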


I work in a parallel industry where capturing motion at high speeds is critical, and in the last two decades, the cost of LIDAR and RADAR has barely come down - and the technical specs haven't gotten much better - while the cost of optical continues to absolutely crash through the floor and technical specs skyrocket.

Mobile phones and laptops are the obvious main reason for this. Optical tracking (cameras) has economies of scale that LIDAR and RADAR can't even come close to. LIDAR and RADAR are niche features in a specific set of high-priced luxury goods, while optical tracking is dominating everything from government/consumer surveillance to camera phones to sports motion capture and many, many more applications.

Optical tracking has so much pressure on the field to drive costs down and to increase feature sets (innovate) that RADAR/LIDAR don't. It's going to be this way for quite a long time, so I don't really see the costs for RADAR/LIDAR getting under control anytime soon.

EDIT: Optical tracking will be the dominant method of self-driving cars, I'm almost sure of it. The trained datasets have a huge advantage in this regard. Still, it's obvious that LIDAR/RADAR will have additive value down the line. It is just very hard to see how it becomes the primary technology. This is echoed in my field as well as many others - where RADAR dominated, machine learning / software + good-enough optical tracking took over at 1-3 orders of magnitude of cost savings.


> I work in a parallel industry where capturing motion at high speeds is critical, and in the last two decades, the cost of LIDAR and RADAR has barely come down

This doesn't track for me. What was the price of the cheapest Velodyne LiDAR unit 12 years ago, and what is the price of the cheapest today? My not-in-the-industry searching says a Velodyne unit cost $75,000[1][2] 12 years ago. IIRC, estimates of Google's in-house LiDAR sensor (which is not for sale) were about $10,000-20,000 per unit - this was about 5 years ago. Currently, Velodyne is selling(?) the Velarray H800[3] solid-state unit that had a $500 price-point target[4][5] during development. How does this square with your assertion that the cost of LiDAR has barely come down?

Edit: I checked, it was precisely 5 years ago and Google claimed[6] that it cut the cost of LiDAR by 90 percent! I think you were wrong to say LiDAR costs have barely come down.

1. https://www.latimes.com/business/la-fi-hy-ouster-lidar-20171...

2. https://arstechnica.com/cars/2020/10/the-technology-behind-t...

3. https://velodynelidar.com/products/velarray-h800/

4. https://www.forbes.com/sites/samabuelsamid/2020/11/13/velody...

5. https://www.reuters.com/article/velodyne-lidar-tech/velodyne...

6. https://www.businessinsider.com/googles-waymo-reduces-lidar-...


Apple also puts lidar in the iPhone Pro models.


My robot vacuum cleaner from Xiaomi has lidar. It's coming to the consumer level fast.


I would say the race isn’t over yet. One thing to consider is the training dataset. By using only vision, Tesla was able to start collecting data with their entire fleet starting many years ago. They probably have more edge case data than anyone else, in more diverse driving situations. They would not have been able to do this if they needed to use expensive lidar on company owned training vehicles.

What we will likely see is that lidar equipped vehicles gain a lot of ground at first, but have to be rolled out slowly both due to cost and how they are trained. Tesla’s vision based fleet could get an update tomorrow that theoretically was able to drive almost anywhere.

Of course, Tesla has clearly been finding out that while vision is in principle sufficient, the brains behind the cameras are not so simple. In my opinion it’s still too early to call who will win in the end.


This point about Tesla has been made frequently in this discussion. I still wonder: how does Tesla store any meaningful video data in customer vehicles and transfer it?

I know that other vendors have a hard time putting the necessary fast storage in the trunk and getting the data off with 100G cables in the garage.


Their AI system has something called “shadow mode” where it can make observations without having any effect on the car. When Tesla needs to collect a dataset of something, like short videos of cars that put on their blinker but did not change lanes, or partially obscured stop signs, they can train a net to run in shadow mode on all their cars and collect more of this edge case. I presume that they also collect a lot of data around disengagements. And then they send the data presumably over Wifi when the customer gets home, or otherwise over the car’s 3G connection. But they collect targeted data, not mass dumps of drives.

Andrej Karpathy has described this system in various talks including on Tesla AI day.
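A hedged sketch of how such a campaign trigger could work, based only on the description above; the class, frame format, and campaign logic are all illustrative, not Tesla's actual system:

```python
# Shadow-mode trigger: runs alongside the driving stack, never affects
# control, and flags short clips matching a data-collection campaign
# (here: blinker came on but the lane never changed).
from collections import deque

class ShadowTrigger:
    def __init__(self, window: int = 100):
        self.frames = deque(maxlen=window)  # ring buffer of recent frames
        self.flagged = []                   # clips queued for later upload

    def observe(self, frame: dict) -> None:
        self.frames.append(frame)
        if frame["blinker"] or len(self.frames) < self.frames.maxlen:
            return
        blinked = any(f["blinker"] for f in self.frames)
        lanes = {f["lane_id"] for f in self.frames}
        if blinked and len(lanes) == 1:     # blinked, but stayed in lane
            self.flagged.append(list(self.frames))
            self.frames.clear()             # don't flag the same clip twice
```

The key property is that uploads are targeted: only the rare clips matching the campaign leave the car, not mass dumps of raw video.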


As I understand it, it's mainly collected when the car goes in for maintenance.


> Lidar might become so affordable by the time camera-only makes substantial progress that by that point it was much more efficient to just have been developing with Lidar from the start.

Tesla and Comma are already profitable with great margins, which they badly needed when they started developing self-driving capabilities. Waymo is drawing on Google's coffers, but so far it has lost billions of dollars that it has to earn back.

Regarding Lidar, I wouldn't be surprised to see it in the robotaxi Tesla says is coming out in 1-2 years, but maybe it would be a PR nightmare at this point.


The problem with Lidar is that it doesn’t work well in bad weather. So it’s not the “endgame” technology - that’s vision. It’s a bridging technology. Whether that bridge is needed today is an open question.


> The problem with Lidar is that it doesn’t work well in bad weather

Cameras don't work well in bad weather either.



