Hacker News

AI systems should be able to do better. I don't think saying "yeah, we're done here" is acceptable.

The next step is to start building models of human minds (as in modeling human behavior, not literal mind uploads), as well as models of every self-driving car model, and do full-on global (-ish, limited to the general area) optimization of outcomes according to a publicly available and audited decision theory and utility function.

It's the only way to enable superhuman avoidance actions without making things worse by confusing others. This is why the models of humans and other robotic cars are needed.
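To make the idea concrete, here is a deliberately toy sketch of what "optimize outcomes under an auditable utility function, using behavioral models of other agents" could look like. Everything here is a hypothetical illustration, not how any real AV stack works: the agents, the constant-velocity "behavioral model", the tiny action space, and the utility weights are all made up for the example.

```python
# Toy sketch (all names and values hypothetical): choose an avoidance action
# by scoring candidate ego maneuvers against predicted trajectories of nearby
# agents under an explicit, auditable utility function.

def predict_trajectory(agent, horizon=5):
    """Constant-velocity behavioral model -- a stand-in for a learned
    model of a human driver or of another robotic car."""
    x, y = agent["pos"]
    vx, vy = agent["vel"]
    return [(x + vx * t, y + vy * t) for t in range(1, horizon + 1)]

def ego_trajectory(action, horizon=5):
    """Ego starts at the origin heading +x; 'action' is a lateral offset
    per step (swerve left/right) -- a deliberately tiny action space."""
    return [(t, action * t) for t in range(1, horizon + 1)]

def utility(ego_path, agent_paths, safe_dist=2.0):
    """A utility function that could be published and audited: heavily
    penalize near-collisions, mildly penalize leaving the lane center."""
    score = 0.0
    for t, (ex, ey) in enumerate(ego_path):
        for path in agent_paths:
            ax, ay = path[t]
            if ((ex - ax) ** 2 + (ey - ay) ** 2) ** 0.5 < safe_dist:
                score -= 100.0  # near-collision penalty
        score -= abs(ey) * 0.1  # lane-keeping penalty
    return score

def choose_action(agents, actions=(-1.0, 0.0, 1.0)):
    """Pick the candidate maneuver with the highest expected utility."""
    agent_paths = [predict_trajectory(a) for a in agents]
    return max(actions, key=lambda a: utility(ego_trajectory(a), agent_paths))
```

For example, with one agent directly ahead in the ego lane, staying on course (`action = 0.0`) accumulates near-collision penalties every step, so the planner swerves. The point of the toy is the structure: the behavioral models and the utility are separate, inspectable pieces, which is what would make auditing them meaningful.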



FWIW these companies already invest pretty heavily in predicting the behavior of other agents (cars, pedestrians, squirrels...) in the vicinity.

But clearly it is not publicly available or audited.


Isn't predicting the future actions of agents in the scene half of what Waymo already does?


I missed the "yeah, we're done here" part. Don't think Waymo is going to pause their safety research program now.


It's a general sentiment I've seen in some places, not a literal quote. The article exhibits it somewhat, by not explicitly explaining how the human average is dragged down by drivers who are partially incapacitated.




