
To me beta testing should be a long period of time where the computer is running while humans are driving, with deviations between what the computer would do if it had control versus what the human actually does being recorded for future training. The value add is that the computer can still be used to alert drivers to dangerous conditions, or potentially even to override in certain circumstances (applying the brakes when lidar sees an obstacle at night that the human driver didn't see).

The problem is that Uber needs self-driving cars in order to make money, and Tesla firmly believes that their system is safer than human drivers alone (even if a few people die who wouldn't otherwise have, others who would have died won't, and they believe those numbers make it worth it).



It's surprising that this isn't the standard right now. I'm certain the people at Tesla/Uber/Waymo have considered this - I'm curious why this approach isn't more common.


It was happening. Many companies, Tesla included, gathered a lot of data this way for training their systems.

The problem is that you can only learn so much without actual practice.


I'm sure it is the standard during testing. I doubt you can do this for the general population.




