> And it's not much of a problem if every human in a semi-autonomous vehicle followed the automakers' explicit, insistent instructions to pay attention at all times, and take back control if they see a stationary vehicle up ahead.
While their technical reasoning is fine... this statement shows a complete failure of reasoning about human beings and UX. In essence they are asking people to stay alert without being active, and this is actually really fucking hard to do as a human being. Noticing an object that needs to be avoided while you are already actively engaged in driving is easy: you are naturally alert, and there is no "intervention time" because there is nothing to intervene with. Trying to achieve the same level of response while letting a machine drive kind of defeats the purpose of letting the machine drive.
Yes I'm basically making the argument that semi-autonomous driving is not safe. Make it fully autonomous or not at all. Two modes, keep it simple, because human concentration is complex.
There are mountains of evidence to back this up, quite a bit of it stems from the problem of keeping airline pilots engaged and alert while the autopilot functions as it should.
Also called "Children of the Magenta" (the cues provided by the flight management computer on the primary flight display are typically displayed in magenta).
Really a conundrum - automation forestalls many accidents, but pilots get less hands-on experience and are less prepared to cope when the automation fails.
Excellent related talk by Capt. Warren VanderBurgh of American Airlines from 1997 (!):
And people deskill when they don't get enough practice as well. I know I got better over time as a driver and it wasn't just because I wasn't a teenager any longer. It's easy to imagine we'll end up with a lot of people on the roads with not-quite autonomous systems who actually haven't actively driven all that much.
It's not even about deskilling. I raced road bikes for years and in large part drive like I rode: highly defensive (bordering on aggressive as I've been lectured by LEOs). I commuted for years in my roadster with a manual transmission & power steering set for the track where I'm highly engaged as I can feel every crack in the road and the car can accelerate to unholy speeds in seconds. When I drive our sedan, it's boring and I find my mind drifting. If I turn on assisted cruise control, it's mind numbingly boring and on any long drive I'm guaranteed to fall asleep so I don't use it anymore because it's too dangerous.
Without being engaged in the act of driving, without the adrenaline, I switch off. This is the key problem I suspect everyone at some level suffers from, whether they want to admit it or not. Even awake, on full autopilot I suspect most people are in full-on daydream mode.
I don't race but I have noticed this with my "normal" car; I've always driven a hatchback, though, which are quite common in this country. If I force myself to drive very steadily I can almost feel the alertness drifting away; I become less aware of vehicles around me and their bearing. If I speed up to a threshold just between the bored comfortable zone and racing, it's enough to give heightened alertness: I can easily preempt drivers making careless moves that would otherwise endanger me, and I notice pedestrians and other things around the road more. It's a strange balance, but I've no doubt that sticking dead to the limit on a large road will decrease alertness in any study.
The autopilot is fully capable of landing the plane entirely automatically (once you tell it to, and if you're approaching an ILS autoland runway), but often that's reserved for extremely poor weather and the like, and pilots will land manually. They'll still tune the ILS so they have their localiser and glideslope information on their flight display, but they'll follow it manually rather than telling the autopilot to. This varies by airline.
On Boeing airliners there's also the (delivery) option for a pilot awareness monitor. If you don't make any inputs (change from weather to terrain radar or vice versa, change the range of information displayed on your navigation display, toggle whether airports or VORs are displayed, etc) for a short period of time (I believe it's 10 minutes), it will generate an EICAS warning.
This is attention-grabbing yellow text on a screen that the pilots are supposed to be monitoring. If you don't action that (by hitting the warning reset switch or making any input) you get a very loud very red alarm.
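For what it's worth, the logic described sounds like a simple inactivity watchdog with two escalation stages. A minimal sketch in Python, with placeholder timeouts rather than Boeing's actual values:

```python
import time

# Inactivity watchdog: any cockpit input resets the timer; continued silence
# escalates from an EICAS caution to an aural warning. The timeout values
# here are assumptions for illustration only.
CAUTION_AFTER_S = 10 * 60
WARNING_AFTER_S = 11 * 60

class AwarenessMonitor:
    def __init__(self):
        self._last_input = time.monotonic()

    def on_pilot_input(self):
        # Range change, radar mode toggle, warning reset... anything counts.
        self._last_input = time.monotonic()

    def state(self):
        idle = time.monotonic() - self._last_input
        if idle >= WARNING_AFTER_S:
            return "AURAL_WARNING"   # the "very loud very red" stage
        if idle >= CAUTION_AFTER_S:
            return "EICAS_CAUTION"   # attention-grabbing text on the display
        return "NORMAL"
```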
Note that alertness checks are (somewhat) effective in commercial aviation because the industry AND the pilots accept them as normal.
A big ship will have BNWAS, which is supposed to alert everybody else in crew quarters (e.g. officers asleep) if the officer on the bridge (who should be actively controlling it) doesn't do anything for a prolonged period and doesn't respond to a local warning alarm.
When there's an accident investigation with suspicious circumstances like a collision in an otherwise empty region in the middle of a calm dark night, almost invariably either the BNWAS was "mistakenly" switched off, or its mandatory logging was "mistakenly" erased or turns out never to have been enabled at all. Why? Because the alarm is annoying and there's no acceptance by owners, masters or crews that it's keeping them alive and so they must use it, so they switch it off, muffle the alarms or otherwise disable it and then, inevitably, someone falls asleep and an accident happens.
Similar tech in private cars would, I think we have to assume, end up just like BNWAS on big freighters, disabled because it was annoying. Sometimes nobody would survive the accident to cover up the fact it was disabled, but that's cold comfort.
Yep. The problem is that if I have to basically drive anyway, then, aside from some safety and convenience systems (lane departure warning, automatic braking, maybe adaptive cruise control, etc.), why would I really want a nominally autonomous driving system? If I have to drive and can't read or watch videos (and certainly can't summon a robo-taxi), what good is it?
I've noticed my driving has gotten a lot less smooth now that I've moved near a train station and take the commuter train into work every day. I drive about once a week now and my acceleration and braking have definitely gotten a bit more jerky.
And we all know how much damage that has caused the airline industry! Oh wait, there wasn't a single death on a US passenger airline last year, a record low in the history of aviation.
FWIW, I find "semi-autonomous" features like automatic cruise control & lane keeping to be a net benefit to my alertness on long trips as they reduce fatigue from constant minor adjustments in speed and direction.
Clearly it's possible to use these features in a way that increases risky distraction, but I think it's a mistake to assume such a driving mode can't be safer on net when used properly.
I have the opposite experience with lane keeping: it made me space out, to the point that when lane keeping didn't work properly I was really scared by how long it took me to react.
Definitely vouch for cruise control, as having a cramping leg stuck on the accelerator for 5+ hours of driving sucks, but personally I would say otherwise for lane keeping.
Though I use it for the same reason you mention ... I'm always paranoid when I have cruise control engaged for very long distances. I have no idea how likely this would be, but the fear that creeps in and stays in the back of my head is that I'll get comfortable, reposition my feet as I adjust for comfort, and then miss the brakes in the event of an emergency.
I rarely use even basic cruise control as I find it extremely hard to stay engaged with it enabled. Small throttle adjustments seem to be the trigger for my driving routines of checking mirrors, monitoring gaps and watching the horizon for curves. Just making steering adjustments isn't enough to keep my brain engaged; I zone out and don't maintain enough situational awareness.
I can easily imagine there are people who can use lane keeping and radar cruise control safely, but I also know I am not one of them. I'll need to stick with manual driving until the car can drive itself in all cases.
I used to use cruise control more. But now, driving in the Northeast US, there's enough traffic most of the time that I find cruise control encourages me to optimize for maintaining a steady speed rather than driving appropriately for the flow of traffic. I'm not sure the last time I used it.
Yeah, I have a Tesla with AutoPilot. Staying alert enough to make sure it doesn't crash into anything really isn't that hard. It's still really nice to not have to worry about keeping in the lane and maintaining distance yourself, but you can easily get a sense for when the car is doing something you wouldn't do. There have been many times when I'm pretty sure AutoPilot was about to kill me so I just... hit the brakes. Trust me, it's still a lot safer than if I was driving the car.
Naturally there is always going to be human error, but it doesn't make sense to say "semi-autonomous" is bad while "full autonomy" is good. Guess what, we're going to have to deal with less advanced versions of the software before we can arrive at the real deal. The trick is to know and communicate the limits of the system. For example Tesla will alert you visually and through audio if you don't have your hands on the wheel. If it has to ask you three times, it'll just disable AutoPilot for the rest of the drive. It's just like a more advanced cruise control. If I hit a fire truck because I was on cruise control would anyone write an article blaming cruise control? No, because everyone knows the limits of that system. Crashes like this will force people to learn the limits of current autonomous software as well.
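The escalation policy just described boils down to a strike counter with a lockout. A rough sketch of what that might look like; the structure and names are guesses from the observed behavior, not Tesla's actual code:

```python
# Strike counter with lockout, matching the behavior described: ignore the
# hands-on-wheel nag three times and Autopilot stays off for the rest of
# the drive. All details here are illustrative assumptions.
class HandsOnWheelNag:
    MAX_STRIKES = 3

    def __init__(self):
        self.strikes = 0
        self.locked_out = False

    def on_nag_timeout(self):
        # Driver never applied torque to the wheel before the nag expired.
        self.strikes += 1
        if self.strikes >= self.MAX_STRIKES:
            self.locked_out = True  # disabled until the car is parked again

    def on_hands_detected(self):
        self.strikes = 0  # assume a successful response clears the count

    def autopilot_available(self):
        return not self.locked_out
```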
I don't think the statement fails; it's accurate about the expectations. What has failed is the manufacturers' understanding of, as you said, human concentration. They advertise semi-autonomous driving as making driving easier, but they still demand the customer invest 90% of the same effort in staying alert. I still have to concentrate, be alert, and keep a hand on the wheel. Literally all I'm "saving" is several hundred minute muscle movements to keep the car aligned. The "semi" part so far outweighs the "autonomous" part that the term (to me) really doesn't fit.
If people can't focus or concentrate enough to pay attention to what's important, shouldn't they have the responsibility to pull over until they can do what's insisted? The same goes for someone in full control of a vehicle who is falling asleep: we'd expect them to pull over.
Perhaps Tesla, if the model has a camera, can detect where drivers are looking - and if they're not consistently looking forward, alert them?
The benefit of autonomous driving is not having to pay attention. If I have to pay attention to the highway I may as well be in my Ford Taurus with my steering wheel on my leg and my hands free. How about I save $$$$$ on huge rechargable batteries and I get laser sensors in exchange. EV and self driving should be decoupled.
Obviously Tesla's AutoPilot doesn't satisfy what you're wanting - but you're still capable of understanding if you're instructed that AutoPilot still requires your full attention then that means you need to always pay attention, right?
Yes. There's decades of research on this. Some level of automation probably does help. It's not like people don't zone out, especially for freeway driving, even when they're driving with totally manual systems. And, of course, people text, fiddle with radios, etc. But too much automation that's not totally for backup may well be harmful.
It's still illegal to text or otherwise use a handheld phone while driving, even with the automation that we have on some cars now. I imagine this will be the case until the automation is so complete that there is no longer a steering wheel.
I still see people driving around while holding phones to their ears. There needs to be a change to make it socially unacceptable to do this. You only need to be caught twice to lose your license.
It's also illegal to drive distracted. While it's technically legal to use a hands-free telephone, if you could be described as "distracted" because of it, it's illegal.
This is absurd; it's exactly the same argument as when cars were invented: people can't drive, thus it will kill more people!!!
You know what's the solution? Don't use the feature if you can't use it. There are people that drink and drive, we didn't ban cars because of that...
If you can't achieve the same level of awareness or don't want to, well, too bad, don't use it. My SO stresses when she drives; she is a good driver but she just doesn't like it. I'm pretty sure she will have enough awareness, but that thing will lower her stress considerably and will make driving much more pleasant for her.
>but its manual does warn that the system is ill-equipped to handle this exact sort of situation: “Traffic-Aware Cruise Control cannot detect all objects and may not brake/decelerate for stationary vehicles, especially in situations when you are driving over 50 mph (80 km/h) and a vehicle you are following moves out of your driving path and a stationary vehicle or object is in front of you instead.
Yeah that definitely sounds 100% autonomous... ship it!
>While their technical reasoning is fine... this statement shows a complete failure of reasoning about human beings and UX. In essence they are asking people to stay alert without being active, and this is actually really fucking hard to do as a human being.
This is the common case in pretty much all automated manufacturing plants where you have to hit the big red STOP button when stuff goes wrong. I don't quite see how it's a failure when it has been working for them. In any case, do you have an alternative proposal that works?
I think the alternate proposal is that semi-automated driving is a bad idea and shouldn't be a thing. It's not on an HN poster to solve the problem for Tesla.
It's also bizarre to identify problems, assume the mantle of a teacher, and tell someone they're doing it wrong without having any first-hand expertise.
It's not bizarre at all when you have decades of research on your side. We know that taking the human out of the equation reduces their response times and awareness dramatically. This isn't a new area of research.
> Tesla didn't confirm the car was running Autopilot at the time of the crash
That essentially means that it was. If it wasn't they would have denied it and released data.
> but its manual does warn that the system is ill-equipped to handle this exact sort of situation: “Traffic-Aware Cruise Control cannot detect all objects and may not brake/decelerate for stationary vehicles, especially in situations when you are driving over 50 mph (80 km/h) and a vehicle you are following moves out of your driving path and a stationary vehicle or object is in front of you instead.
Just for those unaware of Tesla's previous handling of other crashes, this is them waving a huge flag saying "We fucked up and we know it".
I wonder how that call went "Hi Tesla, my self-driving car just drove head-on into a stationary firetruck" "Ohh... did you read the manual? It's supposed to do that, so it's basically your fault!"
Based on past experience, Tesla PR tends to respond within 24 hours to push the blame on their customer. In the court of public opinion, a change in behavior like this is a pretty good indication.
Apparently the driver of the Tesla didn't have time to react to the firetruck as their field of view was obscured by a large pick-up truck that changed lanes seconds beforehand.
If Tesla's autopilot can't detect upcoming stationary objects, it should in the very least keep enough distance between itself and the vehicle ahead in order to allow the human at the wheel to have an unobscured view of the road ahead.
Autopilot quitting when it's too difficult, and expecting a human to jump-in at the last second is a stupid plan. Tesla needs to fix their attitude to failure and take the blame if autopilot gives up.
If only the autopilot could at the very least ring an alarm when it gives up. It seems like the autopilot in this case didn't know it had given up. The engineers had given up...
According to the article, a stopped truck is in the same class of objects as an overhead sign, so a lot of false alarms would be going off if this wasn't also fixed.
It does have an alarm when it's about to avoid an obstacle (and it's actually a pretty cool noise, if you ask me), not sure why it wouldn't have one for this scenario as well.
I was in a crash some time ago where the car in front of me jumped out of the way because the guy in front of him was at a dead stop. I can understand why this is an issue for the autopilot. I mean, it's an issue for humans too. It is simply infeasible to drive on a highway in such a way that you account for stationary objects in a lane. If we did that, we would never get above 5mph.
In the cases presented, the car in front reacted in time to make it safe for that car, but they probably had a less obstructed view. It's not the car in front's responsibility to make sure the car behind follows at a safe distance.
It is your responsibility as a driver to maintain a safe distance behind the car you are following. The car in front of you should be able to spontaneously turn into a brick wall, and you should have enough space to stop before hitting it.
Why is it unrealistic, besides the fact that you're following too close behind? If your line-of-sight is shorter than your braking distance, you are driving wrong.
Something more realistic then. Drive far enough behind the vehicle in front of you so that you can stop before falling down the same sinkhole/broken bridge/earthquake damaged overpass they just fell into.
Math seems to work out if you keep a 3 second distance to the car in front of you.
Google shows two-second rule when looking for "safe following distance" - I was taught 3 seconds and it seems more reasonable assuming slightly over a second reaction time.
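A quick back-of-envelope script for this subthread's gap math, assuming a ~1.5 s reaction time and ~0.7 g braking (both rough textbook figures, not measurements):

```python
MPH_TO_MPS = 0.44704
G = 9.81

def gap_m(speed_mph, gap_s):
    # Distance covered at constant speed during the following-time gap.
    return speed_mph * MPH_TO_MPS * gap_s

def brick_wall_stop_m(speed_mph, reaction_s=1.5, decel=0.7 * G):
    # Reaction distance plus kinematic braking distance v^2 / (2a).
    v = speed_mph * MPH_TO_MPS
    return v * reaction_s + v * v / (2 * decel)

for mph in (60, 80):
    print(f"{mph} mph: 2 s gap = {gap_m(mph, 2):.0f} m, "
          f"3 s gap = {gap_m(mph, 3):.0f} m, "
          f"stop from full speed ~ {brick_wall_stop_m(mph):.0f} m")
```

At 80 mph this gives roughly 72 m (2 s), 107 m (3 s), and ~147 m to a full stop, so even a three-second gap is well short of the "brick wall" stopping distance upthread; the rule only works when the car ahead decelerates rather than vanishes.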
This has happened to me several times over 35 years of driving: driving along at freeway speed, and the car in front of me swerves at the last second to avoid a stationary object, either a car or a mattress or very recently dropped material from a truck in the road. Usually there is a clue, because most of the time you can see how cars other than the one right in front of you are reacting, unless the car ahead of you is much taller or has super dark tinted windows. I wonder how accurate LIDAR is through the rear window and front windshield of the car ahead of you.
I cannot vote this comment up enough. There needs to be a clear view of the obstruction in order for a person or an autopilot to see it in time. This is why you get a huge long string of cones easing people out of a lane before a road work obstruction. Similarly, warning signs should be placed before a stopped vehicle obstruction - there are quite a few countries in the world where not carrying warning signs for this purpose all the time is illegal.
>Apparently the driver of the Tesla didn't have time to react to the firetruck as their field of view was obscured by a large pick-up truck that changed lanes seconds beforehand.
That changes the story a lot, do you know where you got that information?
Suggested highway following distance is two seconds. This gives you time to react to a decelerating vehicle. It is not sufficient time to stop if a stationary object appears, either because the vehicle in front dodges out of lane, drives over it with higher clearance than you have, or plows into it.
It is entirely possible for the vehicle ahead to have an escape path to another lane but you to be pinned in your lane.
Our traffic laws don’t provide for a zero accident environment. They just reduce the damage and suffering to a level society will tolerate.
> Suggested highway following distance is two seconds.
Suggested by tailgaters?
1) A two-second rule is insufficient.
2) You need more time if you can't see around or through the vehicle ahead of you (and various other reasons, like weather, road conditions, being followed by a tailgater, etc.)
California driver handbook:
"""
Most rear end collisions are caused by tailgating. To avoid tailgating, use the “3 second rule”: ...
You should allow 4 or more seconds when:...
Following large vehicles that block your view ahead. The extra space allows you to see around the vehicle.
"""
Most US States say two seconds. Some say three. German law requires 0.9 seconds, but studies show 41% of drivers there follow closer in relaxed conditions!†
Out here in the relaxed midwest most drivers in 60mph traffic seem to follow about 1.5 seconds.
It looks like highway planners expect to get 1900 vehicles per lane per hour at saturation, which averages 1.9 seconds between cars.
† https://en.wikipedia.org/wiki/Two-second_rule (wherein it proclaims sleep impaired drivers in bad conditions should leave 6 seconds). If you think you need a six second reaction time, pull over and get out of the car.
I'm curious how you've determined "most US States". I cited CA above - 3 or more, Washington wants 4 seconds above 35 mph. Florida says 4 seconds during normal weather and traffic conditions (more if hazardous). Utah does say 2 seconds. Texas wants 4 seconds above 30mph. New York said 2 seconds but "In bad weather and when following large trucks, increase the count to at least three or four seconds for additional space."
And in any case, the 2 seconds is not the "suggested follow distance" it's the legal MINIMUM. That is, "You're clearly unsafe if following at any less than this," rather than, "Close up any gaps more than this."
The rule of thumb in Germany is "half speedometer", which corresponds to 1.8 seconds. For example, if you drive at 80km/h, keep a 40m distance. The reflector posts [0] on roads outside of towns are spaced 50m apart, so that's the distance between two posts at 100km/h.
At 80mph, 2 seconds is about 240 feet (72 metres); 3 seconds is 360 feet (108 metres). With a three-second gap I'd expect to have someone merge into my safety buffer. It can still happen with a 2 second gap, but it's a lot less likely.
So you are willing to reduce your safety to stay ahead of the guy in the other lane.
Traffic flows best when drivers allow merges, not when they try to close gaps to prevent merges.
Most aggressive mergers will quickly merge out as well.
I'm not talking about closing gaps to prevent merges, but about the compromise between continuously merging traffic in front of you - requiring you to slow down continuously but randomly - vs maintaining a steady speed with a predictable buffer.
I was in a crash some time ago where the car in front of me jumped out of the way because the guy in front of him was at a dead stop. A "safe distance" for following a car that will have to decelerate to stop moving is a lot different from what you need when approaching a stationary object. It's not fair to expect either a human or a computer to meet the latter standard on a highway where you expect everything to be moving at a speed close to the speed limit. There is a reason you can get a ticket for driving too slow.
Why can't it detect a static object on the same path?
I thought Subaru has a system that can do that.. At least according to the commercials it detects when a car in front of the car in front of you has stopped or slowed.
This is different from what happened in the Tesla case. What happened here is that the vehicle in front of the car was moving and, in the view of the Tesla's radar sensor, obstructing the firetruck. That car then moved, and now a stationary object was suddenly in front of the car. From the Tesla's viewpoint, an object just magically appeared somewhere in front of the car.
I'm not familiar with radar systems, but I imagine in this scenario the system isn't precise enough to know that this stationary object that suddenly appeared isn't an overhead sign or some other similar object that pops into the field of view of the radar system. It probably has something to do with radar not being extremely directional, constantly having stationary things pop up due to reflections and various objects in the periphery, having a limited sampling rate, and needing to do filtering or other sorts of magic to make sure there aren't constant false alarms, etc. I can see how, cruising along at 50mph, the system doesn't have enough time to figure all this out and stop (e.g. not enough samples, or there being a noise floor on stationary object data). It appears the correct way to solve this is with 3d lidar systems combined with some sort of mapping technology, but those are still too expensive/difficult to put into everyday vehicles.
I said "the car in front of the car in front of you". That car is not visible either since the car in front of you is blocking it. Subaru has commercials indicating this scenario is addressed as far as I can tell.
It does detect static objects on the same path, but >99% of the environment is static objects. From the article:
>Radar knows the speed of any object it sees, and is also simple, cheap, robust, and easy to build into a front bumper. But it also detects lots of things a car rolling down the highway needn't worry about, like overhead highway signs, loose hubcaps, or speed limit signs. So engineers make a choice, telling the car to ignore these things and keep its eyes on the other cars on the road: They program the system to focus on the stuff that's moving.
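In pseudocode terms, the filtering choice the article describes might look something like this. The field names and the 2 m/s threshold are illustrative assumptions, not anyone's real implementation:

```python
from dataclasses import dataclass

@dataclass
class RadarReturn:
    range_m: float
    bearing_deg: float
    closing_speed_mps: float  # positive = closing on us

def moving_targets(returns, ego_speed_mps, threshold_mps=2.0):
    # A stationary object closes on us at exactly our own speed, so its
    # ground speed works out to ~zero and it gets dropped -- signs, bridges,
    # and unfortunately stopped fire trucks alike.
    return [r for r in returns
            if abs(ego_speed_mps - r.closing_speed_mps) > threshold_mps]

ego = 29.0  # ~65 mph in m/s
scene = [RadarReturn(80.0, 0.0, 29.0),  # stopped truck dead ahead: filtered out
         RadarReturn(60.0, 1.0, 4.0)]   # car doing ~25 m/s: tracked
print(moving_targets(scene, ego))
```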
Yes, subaru eyesight can detect stationary objects and apply the brakes. However they say that a full stop is likely (but not guaranteed) up to about 30 mph. They aren't clear on exactly what the likely result is at 65mph, but some braking to help mitigate the collision seems likely. Which is exactly what happened with the Tesla.
The Subaru system absolutely does handle this case. It uses visual cameras which watch the road ahead (just like your eyes) so they see what you see and react to objects/obstructions like you'd expect.
I'd had my Subaru slow due to a vehicle bumper sitting in the lane (from a crash that happened shortly before).
I have a 2015 Tesla S 70D with first generation autopilot (traffic aware cruise control and autosteering). The car does keep enough distance if you set it that way. The cruise control has a setting for the distance between you and the vehicle in front, the maximum distance is actually about three seconds separation. If you have it on that setting you have time to react, assuming you are paying attention in the first place. Might not be enough time to completely avoid an accident but enough to mitigate it.
That has nothing to do with any of this. There are two issues:
1. Autopilot didn't leave a big enough gap in front of the Tesla. Presumably because the radar has limited range, or maybe it was a big enough gap but the guy just wasn't paying enough attention.
2. Autopilot ignores stationary objects. This is a deliberate decision - apparently if you don't ignore stationary objects there are too many false positives from stationary objects you aren't driving into (signs, bridges, etc).
The root cause of 2. is that the radar doesn't have good enough angular resolution to say whether something is in your path or not - it can just detect things vaguely in front of you. The solution is probably LIDAR (which everyone except Tesla uses) or maybe some camera-radar data fusion, but obviously it doesn't do that at the moment.
The actual root cause of #2 is that, along with LiDAR, maps are also on Elon Musk’s “won’t do” list, so instead of comparing radar output to ground truth and detecting a stationary obstacle (which Waymo can do easily), Tesla tries to get by without any information about the world, and fails, badly and with sometimes fatal consequences.
There is no such thing as "ground truth". Maps are for navigation, not safety. IMHO the fact that companies keep using radar and lidar indicates that their vision systems are not as good as they should be.
So you have maps that record the location of stationary signs etc. and then have the car stop if it gets a radar return from a stationary object (e.g. fire truck) that's not on your map? Then when someone puts up a new sign you're slamming on the brakes until the map gets updated.
Not being able to tell if a stationary object is something you're going to hit or not (like LiDAR or stereo cameras might do) is more relevant to this case than maps.
The issue at hand is that the software filters out stationary objects. The case you describe only happens if the Tesla is tracking the two cars in front of it and the further one brakes.
That suggests the fire truck was stopped in a bad place and had not set up enough warning signs up the road that there was an obstruction in the road. I wouldn't be surprised if the fire truck was found responsible in this case. (That's not to say that auto-driving systems shouldn't be improved to better handle this situation by at least braking before impact!)
I had something like this happen to me about a year ago (October 2016); funny thing was, at the time I was enrolled in the Udacity Self-Driving Car Engineer Nanodegree. After the accident (which was completely my fault), I wondered how a self-driving car could have handled it.
Basically, I rear ended somebody - but the details of why that happened are interesting to me.
I was going down a road I usually travelled to go to work that morning; two lanes each direction with a middle "suicide" lane.
Traffic was fairly heavy, as it always is in the area during morning traffic; there's a school not too far ahead in the direction I was travelling, and traffic "bunches up" and slows in the area, thanks to a crosswalk, people dropping kids off, and a "radar speed limit sign" that tells drivers how fast they are going in relation to the speed limit when school is in session (causes people to hit their brakes). Furthermore, not too far after the school is a stoplight for an intersection with another similar road.
Where I was at just before the accident was crossing a neighborhood street at a controlled intersection; also at that intersection, the suicide lane changes to an unprotected left turn lane.
So there I am, tooling along at my normal 40-45 mph (appropriate speed for the street I was on). Coming up on the intersection, a tractor-trailer semi was in front of me; the driver decided to make a left hand turn at the intersection, and got into the suicide-turning-into-a-left-turn lane. I decided to accelerate a bit to go around him. Now - I was kinda in an "autopilot" mode, not paying as much attention as I should have.
The traffic in front of me - thanks to the conditions I mentioned - which were typical of the morning time period, should have clued me in to "slow down" (which I have since taken to heart). That area, due to the traffic and conditions - is a "stop and go" situational nightmare. Traffic moves a bit, slows, maybe stops, starts again - and people do the typical: They don't keep their foot on the brake, so while they are "going slow" or even "stopped" (idling, not enough power to move) - their brake lights aren't "on".
So there I am - going around the truck - and "traffic ahead of me" registers, but I didn't see any brake lights, so while in my "autopilot" morning haste/haze/whathaveyou, I made the incorrect assumption that traffic ahead of me was moving; accelerating, maybe not going as fast as I was, but getting there. Which was the wrong assumption, of course.
The next thing I knew, I recall that the car in my lane ahead of me didn't have his brakes applied enough to turn his brake lights on, but wasn't moving anywhere near my speed; then he hit his brakes full (causing his lights to go on), and I slammed on my brakes just a bit too late in my realization. My speed dropped instantly, but not quick enough - I was probably going 15 mph or so when I rear ended him.
I was driving my 1996 C1500 pickup, his was a small crossover-like vehicle. Not much damage on my end; his bumper was mashed, lights broken, etc. We pulled over, exchanged insurance info, no cops involved, and went on our way. I called my insurance immediately, of course, and let them know what happened.
So it was a combo of things; my "autopilot" inattention, my speed, the traffic situation, the way people were "braking", going around a semi truck that was blocking my view then "road clear" ahead - it all came together in a "perfect storm", ending up with me rear-ending someone.
I tend to think that an autonomous car, or one with automatic braking (features not to be found on my truck at the time), would have been paying "more attention" and would have seen the car ahead of me not moving as fast (never mind the lack of brake lights), and would have hit the brakes and come to a stop much sooner. Indeed, had I been paying better attention, and going a bit slower (perhaps only going 30 or 35 mph), I could have applied my brakes much sooner, and avoided the accident.
I wonder if AEB was installed and activated in this crash, despite what the cruise control was doing? Surely it must have been, otherwise I'd have expected significantly more damage, particularly to the fire truck getting a 2t lump slamming into it @ 65mph?!
Presumably when the car got within a certain distance of the firetruck the cone of the radar was narrow enough that the radar could tell that the truck was definitely in the road and triggered braking - too late to avoid a collision but early enough to slow down some.
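Rough geometry supporting that guess: a beam of angular width θ covers a cross-range footprint of about 2·R·tan(θ/2) at range R, so it can only pin a return to a single lane once the footprint shrinks below a lane width. The 10° beam here is an assumption; real sensors vary:

```python
import math

LANE_M = 3.7      # typical freeway lane width
BEAM_DEG = 10.0   # assumed radar beam width, for illustration

def footprint_m(range_m, beam_deg=BEAM_DEG):
    # Cross-range width the beam covers at a given range.
    return 2 * range_m * math.tan(math.radians(beam_deg) / 2)

def single_lane_range_m(beam_deg=BEAM_DEG, lane_m=LANE_M):
    # Range inside which the footprint fits within one lane.
    return lane_m / (2 * math.tan(math.radians(beam_deg) / 2))

print(f"{footprint_m(100):.1f} m wide at 100 m")             # ~17.5 m: spans several lanes
print(f"lane-certain inside {single_lane_range_m():.0f} m")  # ~21 m
```

With these numbers, certainty arrives around 21 m out, which at 65 mph is about 0.7 seconds of travel: consistent with braking that is too late to avoid the collision but early enough to slow down some.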
> Radar knows the speed of any object it sees, and is also simple, cheap, robust, and easy to build into a front bumper. But it also detects lots of things a car rolling down the highway needn't worry about, like overhead highway signs, loose hubcaps, or speed limit signs. So engineers make a choice, telling the car to ignore these things and keep its eyes on the other cars on the road: They program the system to focus on the stuff that's moving.
So these car-mounted radar systems can't detect how large an object is or where it is in relation to the road? I don't get how they can be used at all then. That can't be true, right? Because if you can tell how large something is, then the hubcap is out. And if you can tell where something is in relation to the vehicle and/or road, then the signs are out. I mean, hell, the most maniacal situation of climbing a steep hill with clearance under a sign that you just can't see because the sign happens to be mounted just after the rise, which probably doesn't exist anywhere in the world, would be out by other factors, like knowing that you're going up a steep hill.
So it really sounds like the only thing left is just not being able to tell how large something is, which sounds wild.
> This unsettling compromise may be better than nothing
People frequently find disappointment to be worse than nothing.
There are physical limits to how good a radar's resolution can be, set by the properties of the radio waves used. Those make it really difficult -- though not quite impossible -- to detect size differences that are small relative to the radar's wavelength.
It's why everyone else uses lidar instead, or in addition. Really, it has the same limits, but the wavelength of visible light is small enough that they're only an issue for microscopes.
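To put rough numbers on that diffraction limit (the aperture sizes below are order-of-magnitude assumptions):

```python
C = 3.0e8  # speed of light, m/s

def cross_range_cell_m(freq_hz, aperture_m, range_m):
    # Diffraction-limited angular resolution is roughly wavelength/aperture;
    # multiply by range to get the smallest separable cross-range cell.
    return (C / freq_hz / aperture_m) * range_m

print(cross_range_cell_m(77e9, 0.10, 100))        # 77 GHz radar, 10 cm antenna: ~3.9 m at 100 m
print(cross_range_cell_m(C / 905e-9, 0.02, 100))  # 905 nm lidar, 2 cm optics: ~0.005 m at 100 m
```

A ~4 m cell at 100 m blurs a fire truck into the bridge behind it; a ~5 mm cell does not.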
That's why many (most?) other vehicle manufacturers use cameras instead of radar. You physically see what the driver would physically see and react how the driver would react with the same information (just faster).
See Subaru, Toyota, Honda, etc.
I'm not sure why this article and this discussion seemingly act like the predominant technology used in automatic braking systems doesn't exist?
So I read this and cannot help but wonder if we really are 5 years away from a massive sea change. People think truck drivers could be obsolete in 5 years time, the bullish ones anyway.
The tech is still developing. We may be in a bit of a hype bubble, but it is as real and clear as it has ever been. I think even as the tech advances rapidly, public acceptance will be way further out than 5, 10 or 15 years. I would venture to say self-driving cars and truck fleets being normalized is 30 years away and in no way because of the tech, but due to all the red tape and public debate that will encase the issue.
If you think all truck drivers are going to be displaced in 5 or even 10 years, you may be living in an HN bubble. Sure jobs will be automated away, but this is going to take more time than lots of people think. For instance, one article like this could delay acceptance for months or years, in my opinion.
Even once the problem is "solved", there's still an entire industry, one of the biggest, to turn around. Right now we still don't even get software updates for our head units in our car in most cases. As technology improves, how will self-driving cars adapt?
We change out our phones every couple years, but my car is five years old and I'm just about to finish paying it off. With a decade being a reasonable replacement cycle for a car, how will the car technology keep pace? Surely people aren't going to start buying iCars as often as they replace iPhones. Even if you adopted a service-only model for the industry, it's not like driving services will want to replace their entire fleet every two years.
Will cars have some sort of "technology package" including the computer and some of the sensor modules that can be made plug-and-play removable so you can upgrade your car?
> People think truck drivers could be obsolete in 5 years time, the bullish ones anyway.
I'm not sure how many people _really_ think that; it's pure fantasy. To be honest, I wouldn't be surprised if they're not obsolete in 20 years. AI stuff has a tendency to get nearly good enough quite quickly, and then improve very little more for decades afterwards.
>AI stuff has a tendency to get nearly good enough quite quickly, and then improve very little more for decades afterwards.
What tends to happen is that some approach, deep learning in the current case, turns out to be very effective for a class of problems. But it turns out that it only gets you, say, 90 percent of the way there to being truly useful in the real world. And there isn't an obvious way to get you the other 10 percent.
My take on it as far as truck drivers is that the pay for truck drivers (as they are today) will likely drop as the skill required to drive one drops from partial automation. Similar to how airline pilots of today's advanced jets are typically only "needed" during taxi, take-off, and landing.
The fact that this flaw is known and in production gives credence to some of the AI experts who are saying level 5 is waaaaay further away than we are being led to believe.
I feel the true danger is in the "uncanny valley" of automation - it's good enough that you don't need to pay attention 98% of the time. People underestimate tail risk, ignore the manual etc.
My guess is that you're right: level 5 will take about as long as it takes to get to photorealistic rendering - never quite there.
If you apply that reliability percentage directly to time driven (inaccurate, I know, but for the sake of argument), a system which takes care of 99% of driving problems will still abandon you to chaos for 18 seconds out of every 30 minute drive. This is an eternity in a moving vehicle where milliseconds can draw the line between life & death.
The only systems safe enough to allow fully automated vehicles in the near future will have to be on dedicated lanes or roads, probably with additional safety mechanisms on the roadway itself.
It's not a problem with lidar according to the article. Everyone except for Tesla works with lidar. So the question is just when the systems are cheap and small enough for everyday use. AI doesn't appear to be the issue here, just the data input.
The article states that everyone except Tesla plans to work with lidar, not that they are currently. The technology still needs to be toughened up and made cheaper for it to be viable.
Available cars are not fully self-driving. I think this year only Audi will release one where the car takes full control on motorways. All others require you to remain vigilant. If you sell a self-driving car you can't put warnings like this in the manual since the manufacturer is on the hook if anything happens during self-driving (at least if it was the car's fault).
The very instant the “just around the corner” hype dies on lvl 5 automation, Uber is going to drop dead as a ride sharing service. Tesla will take a hit, as will all of the companies scrambling to be first up the automation ladder. The torrent of VC and corporate money will become a trickle where it doesn’t just dry up.
All of which is just to say that those voices pointing out the technical challenges are up against seriously desperate interests. This could get messy.
Tesla I'm not so sure. I expect the hype/valuation is based more on EV leadership. But Uber? Probably? Especially with new leadership, I don't really understand the logic of continuing to subsidize rides below cost if you're not trying to maintain the fantasy that some miracle is going to happen that radically alters the cost structure.
The cars around that car are still communicating. So the car next to it sees a drastic speed drop and tells everyone else.
Also, say the cars regularly ping each other for updates; then if a car that was in sight and should still be in sight disappears or doesn't respond, it gets flagged as a possible hazard.
I disagree and I have a kid. I'd be happy to let an automated car pick up my kid if it was demonstrated to be safer than I'd likely be.
Car related deaths are fairly common, I can't see any rational reason to not switch to autonomous driving as soon as they are statistically likely to save lives.
Do you really want a large number of people (35,000 a year or so in the USA) to die every year until autonomous cars are perfect? Not to mention nothing is ever perfect.
Doing your best and working towards safety is one thing. Putting 100% trust in an autonomous thing is a different thing. If you know it's 100% safe, then it's rather easy to do. But if it's 98% safe... it's pretty much gambling. You know it may fail and you have no chance to influence it. Sit back and hope you're not the (un)lucky one.
We really want to be able to have cars avoid obstacles on the road whether or not those obstacles are cars. And given how infrequently firetrucks are replaced, even if every new vehicle had this communication technology, we shouldn't expect it to have made a difference anytime soon. And given the inherent security vulnerabilities of the technology, I'd expect it to be a net negative to road safety by the time it had any substantial impact, compared to the general progress of self-driving cars getting better.
Agreed. We won't ever get autonomous systems. I don't think that should even be a goal for production cars. We'll get roads/areas retrofitted with signals to help cars, and I think some type of coordination signals/beacons locally between cars.
The problem with Autopilot is the name and the marketing. Tesla markets it as more advanced than other systems, which is not true, and the name suggests to people that it is a more or less self-driving vehicle, which they are encouraged to believe with the Summon feature as well. The responsible thing for Tesla (and other adaptive CC makers) to do is to do studies on how many people feel safe browsing the web when they enable their cruise mode. If it is significant at all, it is a sign that they should perhaps disable these modes of operation or make them require more activity from the user.
I think it's safe to say Autopilot was enabled. Tesla has never hesitated to rush out and state that the car's driver was lying and that their system didn't screw up. Presumably they will still blame the driver for not paying enough attention, but as the article says, not being able to detect a stopped emergency vehicle is a pretty giant gaping flaw.
They're all just assistive driving systems and anyone who thinks that robo-Uber is going to pick them up at a bar and deliver them to their driveway in less than a number of decades is delusional. The issue is that these systems will get good enough to work enough of the time that people think reading a book or watching a movie with the occasional glance at the road is OK when it isn't. (Of course, plenty of people don't really pay full attention today.)
Waymo overestimates their actual capabilities. They have remote drivers in a call center to pilot the car when it gets confused. And it only works in a very tiny geographic area compared to Tesla-style solutions which work nationwide.
Honestly, what makes you think that, given that multiple corps are saying that they will have autonomous cars on the road this year?
I can see how "this year" could slip into next year under unforeseen circumstances, but when your own prediction is so far away from the collective wisdom of an entire industry sector, it sure looks like a massive case of Dunning-Kruger, so you should have a very solid argument to support your prediction.
No, they're not; or if they are, it's with lots of caveats and limitations. In fact, Volvo, which has been one of the more aggressive companies in this space, has apparently moved back their plans. [1] I'm actually willing to believe that this space will develop faster than it looked a few years back. I note that one of the more skeptical AI researchers working on this, John Leonard, is taking a sabbatical from MIT to work on Toyota's self-driving initiative. But quicker to market is still probably at least 10-20 years. Perhaps just not "not in my lifetime" as Leonard said a few years back.
I think it will go much faster than people realise, because the big blocker to infrastructure changes that make self driving easier is the fact that there are no self driving cars. Once they have a foothold somewhere, with caveats and limitations, it's much easier to convince the neighboring city that with "only a few updates to these intersections" or "just a couple of beacons" or whatever, they can have self driving taxis too.
It took close to 70 years after the Wright brothers before people en masse felt safe to fly in an iron cage at 36,000ft. So yes, AI cars "work", but you have to ask yourself if you'd let one pick up your kids. I think the answer is still 'no' for most people.
Yes, totally, although their reasoning that they must ignore stationary objects to avoid too many false positives sounds pretty reasonable to me.
Also, on the other hand not being able to recognize that your car is about to slam into a stopped emergency vehicle is also a giant gaping flaw on the driver's side.
Disclaimer to put my last statement into perspective: I have absolutely no relation to Tesla except for being a potential future customer. However, I drive a decent stretch daily and get to experience first-hand many stupid things people do in their cars due to ignorance/not paying attention/whatever. Also, I often use the adaptive cruise control of my Audi, which shows me quite a few false positives and restrictions (such as not letting itself deactivate below 15mph - heck, even deactivating itself without prior notification).
"their reasoning that they must ignore stationary objects to avoid too many false positives sounds pretty reasonable to me"
I don't understand your logic. Stationary objects either are rare, and wouldn't trigger too many false positives, or they aren't, and the car should be prepared for them.
I think the problem is setting them in the correct context. Around the road (and thus in the sensors' field of view) you have A LOT of stationary objects such as houses, trees, traffic signs, roadside objects, parked cars, ...
I think there is probably a big challenge in determining which of these objects might be in the way and how to correctly react to them.
Just as an example: considering the situation in the article and filling it in a bit with guesses, it would have been the right decision to either stop or change lanes (depending on the road). However, if the car decides to change lanes and there is traffic in the other lane, we are in the same trouble of an accident due to autopilot again. Also, if autopilot decides to change lanes (or something similar in the category of "driving around") on a one-lane road (maybe even with missing lane markings), we might end up in oncoming traffic.

Or what happens if the system falsely applies the decision to brake in a turn with roadside objects (think of a 90° turn with a house in the corner)? If the car falsely applies emergency braking here, our hypothetical car might be the one smashed into. Also, how do you set the thresholds for "stationary"? If set too low, we might not stop in front of a stationary object until it is too late; if set too high, we might stop way too early, say, in a traffic jam, again being the ones provoking accidents or more traffic jams due to unforeseen heavy braking.
Yes, the problem is difficult and hasn't been solved yet. Knowing that, Tesla should have figured out how to safely disengage autopilot and transfer control back to the driver before selling this feature (an autopilot in a plane, for example, can alert the humans in the cockpit way in advance of upcoming problems, and has incredibly annoying ways to get their attention, if needed)
I saw a tv program about self-driving cars a couple of years ago where the programmers said they would have it ready for major roads in a couple of years, but their human factors engineer said "a couple of decades", precisely because they realised humans would always be needed, and that the problem of transferring control back to humans while riding on the road hadn't been solved.
I'm inferring from the article the problem is spatial/temporal resolution of the _radar_ currently being used for speed sensing for adaptive cruise control. There are lots of objects that are stationary and near to the road but not in the path of the vehicle (road signs, overhead signs).
But how should fully automated driving without lidar ever work then? To me it sounds like the first supplier of a small and affordable lidar system will make a lot of money and everyone not using it will lose out.
Of course, this needs to be fixed in one way or the other to achieve fully autonomous cars, like many other problems, such as distinguishing red traffic lights from other red lights. But that's why nobody calls it fully autonomous yet.
What always boggles me about such news is that people rely so much on these systems and apparently drift away so far that they don't even notice an f*ing big red fire truck standing in their way. I often drive with dynamic cruise control (radar based, only keeping distance, braking and speeding up, no steering), but there are many situations in which I think "I need full control here" and temporarily disable it. Also, there are many situations where I don't use it in the first place because I know the way it functions brings downsides, such as driving on multi-lane roads (read: german Autobahn), since it will hold exactly enough distance to the car in front of me that every idiot will cut in, causing cruise control to decelerate to leave a bigger gap; rinse, repeat.
> What always boggles me about such news is that people rely so much on these systems and apparently drift away so far that they don't even notice an f*ing big red fire truck standing in their way. I often drive with dynamic cruise control (radar based, only keeping distance, braking and speeding up, no steering), but there are many situations in which I think "I need full control here" and temporarily disable it.
Maybe you're not a typical case?
FWIW, I consider myself a pretty decent focussed driver, yet when I hired a car with dynamic cruise control a few months back, I was shocked how much losing responsibility for part of the driving experience --the gears, acceleration, and braking, obvs-- helped me to relax too much, and lose a little bit of focus. I was still steering, and it wasn't like there were even any near misses... but I just felt my attention wander too many times for comfort.
Might be I'm not the typical user of this, probably also because, as a software engineer, I know how much guesswork and how many unsolved questions lie in these systems.
Of course, I also notice that my focus changes a bit, but it still stays on the traffic around me, and I would consider it rather improbable that I would overlook a stopped vehicle in front of me, not to mention a fire truck on a mission.
Maybe it's also that I'm driving daily in and out of Stuttgart, one of the main traffic spots in southern Germany. There's a lot of cars and ignorant driving, especially in dense traffic or jams. Plus we have a ridiculous number of roundabouts these days, and there are a lot of people who decide to either push into the smallest gaps, shooting in from your side and forcing you to brake heavily, or approach an empty roundabout, do a needless near-full stop, accelerate, just to decelerate again, almost coming to another near-full stop to get around the corner (I never understand the thinking behind this :-| ). I don't trust adaptive cruise control in either of these situations.
I guess the output of the radar is basically a list of relative velocities for nearby large objects, paired with distance and rough direction to those objects. There is always a large object essentially all around you moving with the same relative velocity as your speed: the ground. And there are often other large fixed objects (overhead signs, overpasses, etc.) which appear roughly dead ahead.
Yes, my 2017 Subaru with EyeSight behaves this way.
A couple of other scenarios when adaptive cruise is on: cars changing lanes into the lane ahead of me at short distances do not cause it to react quickly enough; it seems there's too much lag in the time to capture the interloper. And if I'm approaching a car that is moving much more slowly than I am, and I signal a lane change but don't execute it, I'll collide with the slower car (or so it seems to me; the system did not slow down and I had to apply the brake to avoid colliding).
Another related problem with vision-based systems: these random types of road artifacts that fool the system. This image shows a false left-side lane marker (formed by a partial re-paving and the shadow cast by the center barrier). It sounded a lane departure alert. If the car were autonomous, it likely would have swerved right trying to center: https://imgur.com/a/KnsXU
That last situation gives me problems - my least favorite is that combined with jersey barriers so the old marking goes straight into the jersey barriers versus the new marking.
I have Adaptive Cruise Control on my VW Golf, and I must say that I preferred my old Honda Accord's vanilla cruise control. I found with the old system that I could relax if the road ahead was fairly clear. If a car changed lanes in front of me, I immediately touched the brake, which disabled cruise control. I could relax and yet know I could regain control immediately.
Even before reading this article, I have not been able to relax with my new automatic cruise control. The problem is that I don't have a solid intuition for it - so instead of relaxing, I remain nervous and alert while it is on. In reality, I use it way less than I used the vanilla system.
> "a stationary vehicle or object is in front of you instead"
So it isn't that Tesla and Volvo don't see a stationary vehicle: they might not see other objects either.
Evil plan: you, in the front car, drive toward a wall and swerve at the last second to avoid the collision. The autopilot behind you will accelerate into the wall.
Evil plan 2: you, in the front car, see a car/truck stopped in the lane. You wait until the last second to swerve and avoid the collision; the autopilot behind you will accelerate into it and crash.
Isn't "blinding" self-driving cars with radio waves much more dangerous? Not sure how today's systems work in that case. But for a fully self-driving car, losing radars would probably cause emergency breaking.
I'm driving a '97 Miata now. No cruise, no ABS, basically no safety features other than questionable 90's airbags. It does have an LSD which is great. I find that being in touch with the road contributes to better control and safe driving on my part than when driving bigger vehicles with automatic trans, cruise, etc. Also driving with the top down feels pretty scary at first, and being that exposed really makes me think about how I'm driving.
I understand that these systems must ignore stationary objects or they'd brake suddenly at every road sign you encounter, but is there no way to tell a road sign (albeit a big one) from a fire truck that is 20 times bigger than a road sign, and 2-3 times bigger than your car?
No one is going to pay that much attention if their car is on autopilot. That's just how people are, no matter what's written in the manual (which no one reads).
If that worked well, they would not need a radar system. Accurate binocular vision is a hard problem; we make a lot of assumptions about how the world works.
Again, here is a sample image: http://iowahighwayends.net/ends/June06/16/34exit263_99_halfm... Notice the road bleeds into the sign, but based on experience we assume the car is not about to hit the bridge at road height. It's a single image, so we are not using binocular vision to figure this out.
It is going to have to work well for their autonomous system to actually work over time. Yes, it is a hard problem, but not insurmountable (especially when they have data from thousands of cars driving the same roads).
But how can you tell they are 4 meters above ground level? Nothing in the image directly shows that the last row of signs is that high off the road.
We can say they are not close enough to be a problem yet, but all I am pointing out is that it's a harder problem than you're thinking, with a lot of potential false positives.
I'm talking about maps + GPS (where is the road? will it curve in 50 meters?), and radar (what's in front of the car? how far is it? is it in the middle of the street, or on the side? Is it at the same height as my car?).
I don't see how these two tools together can make accelerating toward a stationary object in front of you more sensible than slowing down, or at least sounding some sort of alert to wake up the driver.
Like I said, I know it's a difficult problem and very smart people have been working on this for possibly 20 years, but it still doesn't make sense to me.
You're bringing up maps, but they don't help in this case. Consider what happens if one of those signs falls down: you need to react to what the sensors detect, not what the map assumes is going on. (Maps useless)
Radar shows relative speeds, but can't distinguish between the road, bridge, and a stationary firetruck. It simply does not in any way help with this problem. (Radar useless)
LIDAR does what you're thinking of, but Tesla does not use it. (LIDAR missing)
They use images to avoid using LIDAR. So now we have to detect stuff from images, which is why I am showing you images. (Images are the only thing you have.)
At highway speeds and significant range, yeah. This gets into signal processing: when you send out a radar pulse you get a lot of junk returns. If you are slowly backing up and just want to know if a wall is there, sure, that works fine.
But for more distant objects, a single pulse gives ever more junk the further out it travels. If you send two pulses next to each other, you can adjust for the car's own motion vector and ignore the repeated data, looking only for stuff moving relative to the background.
This results in a really 'easy' automatic cruise control that ends up hitting stopped objects on the road.
PS: Things would get a little better if cars had one of those rotating arrays you see on ships, but car radar is a solid-state system.
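A toy version of that two-pulse idea (the numbers are invented; real systems do this in fancier ways): shift the second range profile by the car's own travel between pulses and subtract, and everything fixed to the ground cancels out, including, unfortunately, a stopped truck:

    import numpy as np

    ego_travel_bins = 2        # car moved 2 range bins between pulses

    pulse_1 = np.zeros(100)    # echo amplitude per 1 m range bin
    pulse_2 = np.zeros(100)

    # Stationary clutter (bridge at 80 m, sign at 60 m) appears 2 bins
    # closer on the second pulse because the car moved toward it.
    for rng in (80, 60):
        pulse_1[rng] = 1.0
        pulse_2[rng - ego_travel_bins] = 1.0

    # A lead car matching our speed keeps a constant range of 40 m.
    pulse_1[40] = 1.0
    pulse_2[40] = 1.0

    # Compensate for our own motion, then subtract: clutter vanishes,
    # only objects moving relative to the ground survive.
    aligned_2 = np.roll(pulse_2, ego_travel_bins)
    movers = np.nonzero(np.abs(pulse_1 - aligned_2) > 0.5)[0]
    print("range bins with relative motion:", movers)  # the lead car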
What is the safety record for semi-autonomous cars per mile driven? I worry that even if it's better than human piloted vehicles, progress could be short-circuited by the fact that since this is a novel technology, it makes the news every time an accident occurs.
On the other hand, while it seems that we're on the cusp of an autonomous vehicle revolution, and we'll see commercial users adopting early, consumers are going to be wary. The failure modes are very different from what drivers are used to. The systems appear "dumb" and make mistakes that humans rarely do. I see more promise in the short term for collision avoidance systems, or other systems which augment the driver's ability. For example, anti-lock brakes have been an almost complete win, almost to the point of invisible ubiquity. I'd love to have something on-board my car that can help me safely avoid hitting a deer at 75mph.
>I'd love to have something on-board my car that can help me safely avoid hitting a deer at 75mph.
I wouldn't hold my breath for that. Faster reflexes notwithstanding, automated systems don't change the laws of physics. If anything, I'd expect a human, at least during daylight hours, to be more likely to notice a deer at the side of the road and react accordingly before it ran in front of the car.
This article is fluff/FUD when it comes to the implications about autonomous driving. “Autopilot” is a marketing term for a kind of cruise control. The article repeatedly tries to make the point that if “the best system available” doesn’t stop for stationary vehicles, how will self-driving cars ever work? However, Tesla’s Autopilot is not a self-driving system, it’s a cruise control system. Emergency braking systems are also not self-driving systems. The comparison is meaningless.
I say the same thing to the comments about the irresponsible “UX” of Autopilot for not handling this case. It’s more about irresponsible marketing, capitalizing on the enthusiasm for self-driving cars, potentially lulling overzealous customers to trust a cruise control system or emergency braking system with their lives.
I don’t know the tech well, but the article doesn’t explain why it can’t differentiate between, say, a sign post on the side of the road and a large object in the direct path of the vehicle, and slow down in the latter case.
There are a ton of datasets out there for things like traffic sign identification and vehicle/pedestrian/etc. identification, but most of the vehicles in those datasets are:
1. From the rear (or oblique rear angles)
2. Of mostly automobiles and small trucks
There likely aren't many real-world datasets of vehicles seen head-on and stopped in the road, and/or of larger vehicles (buses, semi-trucks/lorries, emergency vehicles, bulldozers/tractors, etc.).
Without having such datasets, the AI you have trained won't have any way of knowing about those objects, and they may be odd enough that they can't be easily extrapolated from the datasets you do have to train on.
And by "dataset" - I mean you need massive amounts of data; 50,000 images would be a "small" dataset. Ideally you want many times that, plus you want to "generate" synthetic views for more training data (by skewing/warping the existing images, it helps the system generalize on the data).
That's a possible explanation, but I don't know if that's the reason, part of the reason, or no reason at all. Plus, it's not to say they are "at fault" - or to absolve the driver of any responsibility.
I'm just throwing it out there, based on my knowledge and experience - which isn't much, I admit (I completed the Udacity Self-Driving Car Engineer Nanodegree last year, plus Udacity's CS373 course, and what is now the Coursera Machine Learning course - but I took it as ML Class in 2011 before Coursera existed).
Hmm, I would have expected something "more simple", like a sensor every 15(?) degrees. When a medium-or-larger object is headed at high speed toward 0 degrees, the car should slow; if it's on a different vector, it ignores it. The type of object is somewhat irrelevant; you don't want to run into a road sign either.
It doesn't look like they try to explain it. They just try to put the blame on others, because the alternative would be to come out and say "We lied (about 'AutoPilot')" or "We don't know how to create good software".
Why not just have a system that produces a full 3D model of the surroundings and their velocities (using something like Doppler radar) and then do a geometric computation in 4D spacetime to find a path that doesn't intersect the trajectory of anything (assuming their speeds are unchanged)?
That should prevent all such trivial accidents and would allow you to actually mathematically prove that the software works, as should be required for such a critical system.
Machine learning can be added for object detection to allow more sophisticated trajectory prediction.
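For what it's worth, the constant-velocity core of that check really is simple geometry; here's a sketch, and the "assuming their speed is unchanged" caveat is exactly the part that fails in the real world:

    import numpy as np

    def closest_approach(p_rel, v_rel, horizon_s=10.0):
        """Time and distance of closest approach between two
        constant-velocity objects, given relative position (m) and
        relative velocity (m/s) as 2D vectors."""
        p, v = np.asarray(p_rel, float), np.asarray(v_rel, float)
        if np.dot(v, v) < 1e-9:
            return 0.0, float(np.linalg.norm(p))
        t = -np.dot(p, v) / np.dot(v, v)   # minimizes |p + t*v|
        t = min(max(t, 0.0), horizon_s)    # clamp to look-ahead window
        return t, float(np.linalg.norm(p + t * v))

    # Stopped truck 60 m ahead, ego car closing at 27 m/s (~60 mph):
    t, d = closest_approach([60.0, 0.0], [-27.0, 0.0])
    print(f"separation {d:.0f} m at t={t:.1f}s")  # 0 m in 2.2 s: brake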
Indeed, "why not just"; I can't tell if you're being sarcastic or not.
I'll assume the latter: Even if you were able to perfectly maintain an accurate 3D model of the surrounding geometry (and associated trajectory of all moving objects), it doesn't mean you have enough information to make 100% accurate predictions. For example, a frozen over road often has very nearly exactly the same physical geometry and color, but extremely different behavior to cars trying to drive on it. Other weather effects like fog or snow bring their own challenges.
You're right, though, that precise and accurate 3D models really do a lot to make driving easier and more correct. The problem is that the hardware necessary to produce those models (LIDAR) is still very expensive. Most (all?) of the advanced self-driving prototype cars use LIDAR, while Tesla has decided that they can do without it for their consumer models, sticking with just the simpler and less expensive radar and cameras.
In theory, Tesla's right, since humans get by just fine with only two "cameras", but we're clearly a long way from automating that level of performance.
Using multiple separate emitters and detectors, it should be possible to determine how every point reflects light (i.e. sample its BRDF over a limited angular range and set of wavelengths), which should make it possible to distinguish at least between a limited set of materials such as snow, solid ice, liquid water, and asphalt.
Cameras alone seem unsuitable, since it doesn't seem possible to make a camera-based system that works correctly all the time (ideally provably so), as required for a life-critical system.
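A toy version of the material-signature idea above; the reflectance numbers are completely made up, purely to illustrate nearest-signature matching:

    import numpy as np

    # Hypothetical reflectance at three wavelengths (invented values).
    SIGNATURES = {
        "asphalt":      np.array([0.10, 0.12, 0.11]),
        "snow":         np.array([0.85, 0.88, 0.90]),
        "solid ice":    np.array([0.40, 0.55, 0.70]),
        "liquid water": np.array([0.05, 0.30, 0.60]),
    }

    def classify_surface(sample: np.ndarray) -> str:
        """Nearest known material signature to a measured sample."""
        return min(SIGNATURES,
                   key=lambda m: np.linalg.norm(SIGNATURES[m] - sample))

    print(classify_surface(np.array([0.42, 0.50, 0.68])))  # solid ice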
A year or so ago there was an article about what sounds like exactly the same problem with radar: overhead traffic signs and very reflective small objects confusing it. The argument was that Tesla is in a great position to deal with it because they have so much human driver data; if humans keep driving through the giant "obstacle", it's probably not a problem. Apparently Tesla instead decided to ignore the problem...
>These systems are designed to ignore static obstacles because otherwise, they couldn't work at all.
>But it also detects lots of things a car rolling down the highway needn't worry about, like overhead highway signs, loose hubcaps, or speed limit signs. So engineers make a choice, telling the car to ignore these things and keep its eyes on the other cars on the road: They program the system to focus on the stuff that's moving.
I don't know about the sensors on today's semi-autonomous cars but it seems like there's already enough data there to prioritize collision avoidance over forward motion.
It seems like the Wired story is very incomplete on the details. It needs a more in-depth "Ars Technica" treatment, or better yet, an actual Tesla engineer to explain the self-driving computer's decision tree.
How do today's sensors that advertise their ability to "maintain distance" between cars work?
What's the difference between the following two scenarios to the front-mounted sensor?
- 60mph Tesla slamming into a slow-moving road cleaning vehicle at 20mph
- 40mph Tesla slamming into a stopped vehicle at 0 mph.
Isn't the ongoing computation of the gap's rate of change (the closing speed) to maintain a "safe" distance the same in both cases?
EDIT to summarize replies with a possible technical explanation...
The resolution of the radar sensor treats overhead signs (like highway signs[1]) as being on the "same 2-dimensional plane" as cars directly in front of you, which would generate false positives. It's not 3D radar, which would yield spherical data about impending collisions. Without 3D data, you can't write a rule that ignores objects that aren't directly (0 degrees) in front of the car.
The resolution of radar treats surface level signs (such as the double-arrow[2] at T intersections) as being the same harmless mass as a stationary car. The low resolution cannot distinguish between the shape/footprint of small signs that are supposed to be there vs (large) stationary cars that are not. This object categorization requires LIDAR instead of radar.
Therefore, programming an unambiguous algorithm to prioritize "collision avoidance" is not possible with the current radar sensors. Is that an accurate summary of the technical limitations?
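Running the numbers on the two scenarios above shows why the gap math alone can't be the discriminator: the closing speed (and hence the time to collision) is identical, and the only thing that differs is the target's ground-relative speed, which is exactly what the filter keys on:

    def mph_to_mps(x):
        return x * 0.44704

    scenarios = [
        ("60 mph Tesla vs 20 mph sweeper", 60, 20),
        ("40 mph Tesla vs stopped vehicle", 40, 0),
    ]

    GAP_M = 50.0
    for name, ego_mph, target_mph in scenarios:
        closing = mph_to_mps(ego_mph - target_mph)  # ~17.9 m/s in both
        ttc = GAP_M / closing                       # ~2.8 s in both
        filtered = target_mph == 0                  # stationary: dropped
        print(f"{name}: TTC {ttc:.1f}s, filtered out: {filtered}")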
The difference is that if all you have is a radar telling you that something is moving at 20mph then you can guess that it's probably a vehicle in the road and you should brake. But if there's something that's stationary at 0 mph it might very well be an overhead sign or otherwise something that you don't have to avoid. A radar can be very precise in detecting distance and relative velocity but very bad at detecting which direction something is in. For the sort of non-dish radars used in cars I'm not even sure they can detect direction at all.
Ignoring things that aren't moving is a standard technique in radar to prevent your returns from being swamped by, e.g., the ground instead of fast moving planes you're looking for.
To add even more detail, as there still appears to be a lot of confusion:
The radar systems in these vehicles send out a radio pulse in a broad approximate-cone forward. They get bounces back from everything that reflects radio in front of them. Distance from the object is calculated by time between pulse and response. Speed towards/away from the object is calculated from Doppler shift of the radio frequency.
There are two main things that these systems can't detect.
1. Speed of the object perpendicular to the direction of radio wave travel.
2. Location of the object within the approximate-cone the radio pulse travels in.
Note that thanks to the second, you can't calculate the first with higher-level object tracking, either.
So the data you get back is a list of (same-direction velocity component, distance) pairs. There's no way to distinguish between stationary objects in the road and stationary objects above the road, to the side of the road, or even the surface of the road itself.
Radar just doesn't provide the directional information necessary to handle obstacle detection safely.
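For reference, both measurements fall out of the pulse timing and the Doppler shift directly; back-of-the-envelope conversions (77 GHz is a typical automotive radar band):

    C = 3.0e8          # speed of light, m/s
    F_CARRIER = 77e9   # automotive radar carrier, ~77 GHz

    def range_from_round_trip(t_seconds):
        # The pulse travels out and back, so distance is c*t/2.
        return C * t_seconds / 2

    def radial_speed_from_doppler(f_shift_hz):
        # Two-way Doppler: f_shift = 2 * v * f_carrier / c.
        return f_shift_hz * C / (2 * F_CARRIER)

    print(range_from_round_trip(400e-9))      # 400 ns round trip -> 60 m
    print(radial_speed_from_doppler(13800))   # ~13.8 kHz -> ~27 m/s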
The difference is (according to the article) that there is an abundance of objects at 0mph - signs, litter, barriers - so the system filters all of these out to avoid constantly braking. There is no such abundance of ignorable items going at 20mph.
I don't think any item is "ignorable" if you are about to run directly into it. I get that the computer might not be able to track every road sign or object to the left or right of the car.
Still, do you want your car to hit a sign, litter, or barrier head-on? Why not have a second, smaller system, like the backup warning sensors most cars have, to avoid hitting immovable objects taller than 6 inches?
But in curves (or directly before them) there are plenty of stationary items that, for a few seconds, you appear to be happily driving straight into. They're dead ahead, after all.
Also note that despite all the hype surrounding those features you're still supposed to pay attention as a driver. Marketing and manuals tell very different stories here and the liability question is purely answered by the latter.
Since the car knows its own speed, it could categorize detections by their velocity relative to the ground (i.e. just subtract the car's velocity from whatever the detection indicates). This would filter out e.g. road signs while keeping slower-moving cars, regardless of relative velocity, as long as the car itself is moving fast enough.
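That categorization is essentially one subtraction; here's a sketch with a toy threshold, which also makes the failure mode obvious: a stopped fire truck lands in the same bucket as a road sign:

    def classify(closing_speed_mps, ego_speed_mps):
        """Bucket a radar detection by its ground-relative speed."""
        target_ground_speed = ego_speed_mps - closing_speed_mps
        if abs(target_ground_speed) < 1.0:
            return "stationary: sign, bridge... or a stopped fire truck"
        return f"mover at {target_ground_speed:.1f} m/s: track and follow"

    print(classify(closing_speed_mps=27.0, ego_speed_mps=27.0))
    print(classify(closing_speed_mps=5.0, ego_speed_mps=27.0))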
Humans aren't that great at this either, especially considering our tendency to follow too closely.
I witnessed an accident very similar to this. A vehicle was partially blocking the left lane of a 4 lane highway. A pickup truck was traveling at highway speeds in the left lane. A small 4-door car was following the truck too closely.
The truck saw the lane blockage fairly late and performed an emergency lane change to avoid hitting the stopped vehicle. The small car behind the truck slammed into the stopped vehicle without ever even hitting the brakes. The driver was very seriously injured (the car's battery was in her lap and most of the engine had entered the cabin) but AFAIK survived.
That was my life lesson to always allow space between me and the vehicle in front of me.
A great rule: always look ahead of the car you're following. If you can't (e.g. behind a truck or bus), either give it a fair amount of distance or overtake. If neither is possible, stay super cautious.
No, because the relative speed between two moving vehicles is different from that of stopped objects.
These two things are treated differently by a radar system. Imagine you are driving in semi-AV mode and you come around a turn, and there is a vehicle stopped on the side of the road (I've been in this situation). For a hot second the car is pointed right at that fixed object and is probably still processing the lane curvature (if it can at all; some adaptive cruise systems don't have lane monitoring). The vehicle will ignore the fixed object and keep going so as not to slam on the brakes, which can be very dangerous.
Isn't the solution some sort of collision avoidance system like the ones used in airplanes? Or perhaps, as a stop-gap, some sort of add-on (passive?) device that operators of frequently stopped vehicles can use to signal to autopilot vehicles that they are an obstruction to be avoided? It's a hack, but perhaps a necessary one?
I think this shows that autopilot won't truly be a thing until cars start talking to each other. If the firetruck were broadcasting a stationary signal, the autopilot would not have to rely on cameras and radar to figure out what is where.
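Something in the spirit of the V2X work (DSRC / C-V2X); the message layout below is hypothetical, not any real standard's schema:

    import json, time
    from dataclasses import dataclass, asdict

    @dataclass
    class StationaryHazardBroadcast:
        vehicle_id: str
        lat: float
        lon: float
        speed_mps: float     # 0.0 means "I am stopped; route around me"
        vehicle_type: str
        timestamp: float

    msg = StationaryHazardBroadcast(
        vehicle_id="FIRE-042", lat=37.7749, lon=-122.4194,
        speed_mps=0.0, vehicle_type="fire_truck", timestamp=time.time(),
    )
    print(json.dumps(asdict(msg)))  # broadcast over the local radio link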
IIRC both the Model S and the Model 3 actually carry a forward radar in addition to cameras; after the first fatal Autopilot crash (where the camera missed a white truck against a bright sky), Tesla shifted to leaning more heavily on the radar.