If you're incredulous that Tesla would let people oversee a moving car at 15mph in a parking lot, wait until you hear what happens next after the car gets to them!
I think driving is a special case for safety, because we have put humans in charge of controlling 2000+lb cars at high speed in close proximity with other humans doing the same thing, and we've either designed or retrofitted this activity into almost every dense human habitation on the planet. Obviously we would never make such a decision now, but it gives a special context to today's safety decisions. It is a major ongoing source of death and injury as well as a major source of pollution, and bringing that era to a close is a special priority. In the United States roughly 100 people per day die in car accidents, and many more are seriously injured, so you could say that every day we can accelerate the end of human driving saves a hundred lives.
I agree that having humans drive cars is inherently unsafe and that self-driving cars will eventually be safer when the technology matures. However, that doesn't justify Tesla releasing features before they're ready. That doesn't move us towards a self-driving future; in fact, it sets us back.
100 people per day out of how many people driving per day? I'm pretty sure it's a very small percentage. How does that compare to the percentage of self-driving car accidents relative to the number of self-driving cars? My gut feeling is that if you replaced every car on the road with today's self-driving cars, the accident rate would go way up.
There's probably a danger-maximum somewhere before 100% replacement, though. Some mix of highly-predictable self-driving cars and poorly-predictable human-driven cars will cause the most accidents, and a lower number of either will cause fewer accidents.
> There's probably a danger-maximum somewhere before 100% replacement, though. Some mix of highly-predictable self-driving cars and poorly-predictable human-driven cars will cause the most accidents
That's an interesting point. Since the technology will be improving at the same time adoption is increasing, I don't know if we'll be able to observe that maximum, but I think you're right. One thing to consider is that once there are a significant number of self-driving cars on the road, they'll start communicating in a much faster and richer way than human drivers are able to communicate. If self-driving cars are able to achieve human collision rates with human drivers, they will be able to achieve a much, much lower collision rate with each other. That effect might lower the percentage at which safety starts to improve.
Another variable will be the built environment. Over the years we have developed road systems that are highly enriched with features to assist human drivers, and of course we will begin to do the same for self-driving cars.
Nobody is talking about those improvements right now because they aren't relevant to getting the first cars on the road. The engineers working on self-driving cars have to tackle the hardest version of the problem that will ever exist first, and only then can they start making it easier.
Confession: for years I predicted to my friends that the first autonomous passenger vehicles to coexist with untrained members of the public would be in a theme park or a city center closed to other vehicles, with sensors and other assistance built into the environment, because I thought we'd have to start with an easier version of the problem.
Honest question: can somebody explain the use case for this feature to me? I don't get it.
I mean, if the summoned car came to me in a totally autonomous way, it would be useful in very large parking lots, like an airport, campus, etc... Or I could call it while leaving my apartment/office, and the car would be there by the time I get to the door. But if I need to have the car in my line of sight, just standing still while watching it and holding the button, with me being aware of and responsible for the car not bumping into any obstacle, then it means I'm pretty close to the vehicle, so why not just walk the very short distance to it?
Minority rule: if a minority of disabled people can get around by summoning their car, then - all costs being equal - their families will always buy a Tesla. Apple's ecosystem also benefits from the iPhone having the best accessibility features, thus drawing in all developers working on accessibility software.
Blind people for example don't care about zoom factors, but they care very much about the fact that they can completely control and use their iPhone using text-to-speech and certain gestures.
There was an issue with the graphics, and it initially booted into an assistive mode for first-time setup, and I wondered how a visually-impaired person would use the touchbar.
They won't have to. Disabled people and those caring for grandparents prone to falling will seek out a Tesla with summoning if it helps them get around or can drive them to the hospital.
My brother just had serious back surgery. We've been dropping him at the store entrance, and picking him up. He can sit fine, but walking is difficult. I can see this being a benefit for certain people with injuries or disabilities.
You still need to be in sight of the vehicle, so the car would still have to be located near the store entrance. And as this article shows, you have to have a good, unobstructed view of the car in order to not accidentally run into something.
You can tap a location on the map and have the car drive there... and it seems to generally respect the rules of driving in a parking lot (going up and down the lanes instead of cutting through parking spots) - I wonder if you could get it to park itself...
If not, that capability will probably be added in the near future.
If it's anything like the UK, plenty of non-disabled people use them regardless - because they drive a massive Q7 which doesn't fit into regular spaces, or just because they're closer to the store/school/whatever.
Pretty despicable if you ask me, but there does seem to be a large segment of the population that just doesn't give a shit.
There are, and they're usually the closest spots, but picking someone up and dropping them off right in front of the store is usually closer. (Oftentimes, doing pickups/dropoffs there is explicitly disallowed, because stopped cars obstruct the view of pedestrians for drivers of moving cars.)
Most people won't use it every day. However, maybe it's raining outside and you don't have an umbrella. Or you've got a shopping trolley full of stuff. Or the spot is so tight, it's easier to get into your car after it pulls out.
I'm replying to this comment only because there are numerous other comments talking about the use of this for people who are in some way physically impaired.
What's the state of designated parking spaces for such people? Just yesterday I crossed the parking lot at my local supermarket and passed a dozen extra-wide spaces right by the door, all marked with the generic symbol, painted in blue, and clearly marked for use by such patrons only. As it was, most of them were empty, but that's good; every shopper with that need, currently at that store, had been able to park right by the door.
A short way beyond those spaces are spaces marked for those shepherding children, which is less vital but still helpful.
So what's the state of this facility - extra wide designated spaces right by the door specifically intended for the people who need it - across the US and other countries? If it's being done, then this "smart summon" seems more of a gimmick. If it's not being done, it feels a little like an unnecessarily high-tech solution to a low-tech problem.
In the U.S., there are laws that determine how many such spaces must be designated in parking lots, based on the size of the parking lot and/or venue.
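For context, here's a rough sketch of how those U.S. requirements scale (figures from the 2010 ADA Standards, Table 208.2, quoted from memory - treat this as an approximation, not legal guidance):

    # Approximate U.S. ADA minimums for accessible parking spaces
    # (2010 ADA Standards, Table 208.2 - from memory, verify before relying on it)
    def required_accessible_spaces(total_spaces: int) -> int:
        brackets = [(25, 1), (50, 2), (75, 3), (100, 4), (150, 5),
                    (200, 6), (300, 7), (400, 8), (500, 9)]
        for limit, minimum in brackets:
            if total_spaces <= limit:
                return minimum
        if total_spaces <= 1000:
            return -(-total_spaces * 2 // 100)        # 2% of total, rounded up
        return 20 + -(-(total_spaces - 1000) // 100)  # 20, plus 1 per 100 over 1000

    print(required_accessible_spaces(120))  # -> 5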
I have a friend who is physically handicapped and very much needs to park in these spaces. The only problem he has ever told me about is finding a spot to park at work. Apparently it's somewhat common where he lives for middle-aged women (his description) to game the system and obtain handicapped placards. He says he knows they are not really handicapped because he sees them walking briskly through long corridors at work without distress. Sometimes too young people will borrow an older relative's placard to park in those spaces.
Incidentally, he would never be able to afford a Tesla. He buys three- to five-year-old Hondas and drives them till they can't be fixed.
While I can't comment on the exact instances of people using handicapped placards, I would like to bring up the existence of invisible disabilities [1]. These are disabilities that are not immediately obvious to onlookers. For example, people with xeroderma pigmentosum[2] have a very severe sensitivity to sunlight (sunburn within a few minutes, vastly increased risk of skin cancer). A handicap space would be needed to minimize time spent in the sun, but no issue would be present walking inside a building.
Also, appropriate accommodation for a handicap helps keep one more functional. So when one gets appropriate accommodation, one can appear more or less "normal" for short periods, such as time spent in public spaces. Take away that accommodation and life can come apart at the seams.
> In the U.S., there are laws that determine how many such spaces must be designated in parking lots, based on the size of the parking lot and/or venue.
> I have a friend who is physically handicapped and very much needs to park in these spaces. The only problem he has ever told me about is finding a spot to park at work. Apparently it's somewhat common where he lives for middle-aged women (his description) to game the system and obtain handicapped placards. He says he knows they are not really handicapped because he sees them walking briskly through long corridors at work without distress. Sometimes too young people will borrow an older relative's placard to park in those spaces.
Firstly, it's very possible to be handicapped at any age, so this "too young" idea is pretty disgusting.
However, if people are using the placards of others, and if this is in the US, your friend should report these people to the police. The placard also comes with an identification card with the person's information on it, which can be used to verify their disabled status.
I use summon every night to pull my car out and it works flawlessly. Based on that damage it’s not clear how that could be caused by anything in a garage.
I think parking lots make more sense than a garage.
1. A garage is full of tight spots. If something "goes wrong", you have to lift your finger off the cell phone (capacitive sensor, so it's innately laggy by a few dozen milliseconds); that signal then has to be transmitted over the internet to the car, which then applies the brakes.
2. Or... you could be in the car itself... and directly connected to the accelerator / braking system. No delays if something goes wrong.
------
At least with a parking lot, you can argue that you didn't feel like walking (even if you need to keep the car within visual range). But in a garage + visual range, it means that you're giving up control for what amounts to a party trick. It's not like electric vehicles give off fumes, so there's basically no disadvantage to getting in the car inside the garage instead.
Tesla is well aware that people will use the feature in ways that you describe, and its main concern, as with its other statements to the effect that drivers should pay as much attention to what is going on as if they were driving, is to avoid liability.
In what circumstances will you have 100%, unobstructed line of sight to:
1. Your car.
2. Your car's avenue of travel. [1]
[1] This is the important part. If you can't clearly see the entire path of travel, you could run a child over. Tesla obviously doesn't give a shit about that, because their collision detection doesn't work, and because the feature does not require this kind of line-of-sight to work.
The former is hard, but they want to be first to market, so they don't care about not building it right, and the latter would make this feature near-useless, so naturally, they don't ship this sort of fail-safe.
This is a critical part of the overall full self driving equation. Any vendor working on this problem will be spending inordinate amounts of resources on this as parking lots are one of the more challenging environments.
In certain locations (especially in winter) this will be a very nice feature. This is especially true for anyone with disabilities or extreme laziness. ;-)
This is a step in their progression toward self driving. No one should expect this to be flawless from the start, similar to Waymo's self driving work.
Tesla getting this out now should allow for the feature to begin improving at a quicker pace due to more data.
A lot of naysayers will certainly harp on the number of wonky videos to hit the web, but these will quickly decrease once the newness wears off and the feature improves.
It’s worth noting that many of us consider self driving to be essentially analogous to full AI in its complexity and difficulty, and thus believe that we are literally nowhere near anything that fits this description.
No amount of “data” is going to solve the problem. Driving a car requires making a full mental model of the immediate world you’re in and creating accurate predictions of what everything else in that world is about to do, many of those things being sentient beings with whom you’re communicating through your own actions.
Nothing that Tesla is doing is getting much closer to that. The next major landmark on this timeline is a team successfully passing the Turing test, not a car moving across a parking lot.
If you share that opinion, that makes the Tesla a lethal toy.
What’s a “full mental model of the immediate world”? I’d argue that nobody has anything approaching “full”. You have “enough to get by” and that’s all computers need as well. This isn’t a full sentient AI, it’s just sensing the world and navigating a path, at an industrial scale.
No, it’s not path finding. The DARPA races were pathfinding and getting from point A to point B hasn’t been the problem for some time.
Driving around other people requires theory of mind in order to make decisions based on what the people around us expect to happen.
I’m teaching a friend to drive right now - it’s all about eye contact, being waved through, stopping and starting predictably and without surprises. You’re always looking around for pedestrians, checking if they see you, and predicting, according to everything you’ve learned about humans so far, whether they’re about to step into the street.
Deep learning on photographs is not going to cut it.
Sure, but that's why there are armies of programmers writing rules for self driving cars. The "theory of mind" is being developed in the form of control code. It's not easy, but it is accelerated by the data collected in iterative development just like Tesla is doing.
It’s just not. How does it tell when the other car is waiting for it to move? What does it do when two people drive into an alley from opposite directions and one has to back out? What happens when there aren’t any markings in the lot, or when people ignored the markings and parked in a different pattern? And so on.
These aren’t “edge cases” that “data” will sort out. These are common everyday occurrences in parking lots that require something like consciousness to navigate and involve subtle communication with other participants.
You could also argue that humans create such models in a way that is automated by confidence in predictability and familiarity, familiarity and predictability being byproducts of how the human chooses to make certain actions and judgements routine. Given this, humans exhibit quite a lot of overconfidence when it comes to driving: see phone use. Over time a human will make many mistakes that simply paying attention would have prevented.
My wife has recently had trouble with people parking so close to her that she couldn't get into the car. If her car could have reversed itself it would have been fantastic.
That, or jerks could stop parking people in (especially when they're heavily pregnant).
After that I just told her to either park in the disabled spots or the mother/child spots. She has more right to them than the lazy boomers and BMW drivers who consider them their private parking spots.
> After that I just told her to either park in the disabled spots or the mother/child spots. She has more right to them than the lazy boomers and BMW drivers who consider them their private parking spots.
I was with you till that statement. I know several people with disabilities who have been yelled at for parking in the handicap spot when they don't "appear" disabled.
People don't need to be wheelchair-bound to be disabled, and calling boomers "lazy" who use the spots (presumably legally) is in really poor taste imo.
Oh, absolutely, and those people have a big blue badge here in Europe. Like my grandparents, for example.
There are also plenty of people who should have a badge but don't and who never park incorrectly (like my father-in-law who can hardly walk now thanks to his terminal cancer).
However my complaint is legitimate; it's disproportionately people around the 60-70 age group or the suited twit in a big car who thinks she's better than anyone else. At least with the older group they have a small excuse - the bigger spots make it easier to get out of the car which can be an issue if you're older and unfit.
Your complaint is absolutely not legitimate - you're saying it's fine for your wife to park in disabled spaces just because some other people are arseholes too.
Can't speak for elsewhere, but in the UK there are disabled badges for (partially) this reason. And for parent and child spaces, you can tell by the lack of children with the 'adult'.
We can construct all sorts of hypothetical scenarios, but when a person arrives sans children and with no booster seat (a legal requirement up until age 12, or a certain height), it's fairly obvious. Plus most of my experience with parent and child parking is at supermarkets, which tend to have drop-off points anyway.
> After that I just told her to either park in the disabled spots or the mother/child spots. She has more right to them than the lazy boomers and BMW drivers who consider them their private parking spots.
Come on, seriously? You really think it's OK to just park in disabled spaces just because some other people are arseholes too?
As someone with a disabled child, I frequently encounter this kind of self-entitled, selfish behaviour - and I just can't fathom it. How can you be that uncaring?
"You are still responsible for your car and must monitor it and its surroundings at times within your line of sight because it may not detect all obstacles. Be especially careful around quick moving people, bicycles, and cars."
Is this a joke? I have huge respect for Tesla and SpaceX, but how do they expect to release a feature that will self-drive a car (even a tiny distance) and rely on someone to oversee a moving car?
>Is this a joke? I have huge respect for Tesla and SpaceX, but how do they expect to release a feature that will self-drive a car (even a tiny distance) and rely on someone to oversee a moving car?
Think of it as a marketing/business idea (let's sell this "advanced technology" to early adopter suckers and we can try to spin any accidents as operator fault) rather than an engineering idea...
While at the same time "brings your car to you while you are dealing with a fussy child". It's like marketing wrote one part, legal the other, and neither read what the other wrote...
I think Tesla may be setting themselves up for trouble here. These sort of “using the product as clearly intended voids the warranty” disclaimers often don’t hold up.
I agree. Why even bother using a feature that can't babysit (manage) itself? Yikes. I'd rather it not be enabled at all if it's going to risk my vehicle. Thankfully I am letting the masses QA those cars. I will stick to my dumb car in the meantime.
It's not unreasonable to assume someone will have line of sight to wherever a vehicle is being summoned from, just as you pay attention to the surroundings when you drive the vehicle forward yourself.
I have seen some Summon videos where a quick horn honk would likely have prevented another driver, backing up, from bumping into the summoned Tesla with enough force to do some front bumper damage. The summoned car also moves slowly, and that could be a factor in surrounding drivers not gauging it or reacting the way they normally would.
I can have line of sight of my vehicle, but in a parking lot, there may be a small child, obscured by other vehicles, in the path of motion of my car.
What do you think is going to happen, when a Tesla runs one over?
Hint: They are going to blame the driver for using this feature in a parking lot. Their fanboys will defend the hell out of that statement. They will be oddly silent on the subject of 'If you're not going to use this feature in a parking lot, where the hell are you supposed to use it?'
A 'self-driving' car that is incapable of collision avoidance is not yet ready for the market.
Ever walk up to your car in the parking lot then realize... that's not your car? Now identify your car from across the parking lot. In California, where every other car is a ticky tacky Tesla (and they all look just the same). There are already anecdotes of people wondering why their car isn't summoning only to realize ... that's not their car.
Fair point - so certain scenarios likely would benefit from visual and perhaps audio signals: lights flashing, a lighter horn noise. Perhaps even listening for someone nearby to say "stop", and the vehicle would stop - or respond similarly to other horn noises.
New electric cars in the US must have a low-speed noise maker (required by law starting sometime this year), but most Teslas on the road now were built before this was a requirement.
I think that's one of the stupidest additions to constant noise pollution - noise pollution being one reason people zone out and stop paying attention to their environment, rather than being aware that they're about to step onto a road.
As someone who grew up before smartphones, with plenty of noise pollution, I am sure it's not the noise. People cross the street staring at their hands.
And those people are going to be much less likely to randomly, unexpectedly walk out in front of traffic where it's not a designated crosswalk. I bet their hearing is refined, heightened, to more easily notice quieter electric vehicles as well.
I live in SV, and on my commute I see an average of 30 Model 3s, about 10 Model Ss, and 5 Model Xs, mostly on 101. I also see about 5 delivery trucks packing about 5 cars each every day. That's a lot of cars.
So instead of a foot on a pedal connected mostly directly to the brakes, it is a finger on the button of a phone app over the internet? That is madness. Imagine if your car's brake pedal had to go over the public internet and, instead of a pedal, it was a button in a probably poorly coded phone app.
> So instead of a foot on a pedal connected mostly directly to the brakes, it is a finger on the button of a phone app over the internet? That is madness. Imagine if your car's brake pedal had to go over the public internet and, instead of a pedal, it was a button in a probably poorly coded phone app.
It's a dead man's switch, not a brake pedal. If you lose connection, it stops the car. If you lift your finger, it stops the car. If the car senses an object, it should stop the car (but might not, because it's not 100% yet).
What’s the latency tolerance for this? I have to imagine that, given mobile internet conditions, it’s in the range of seconds. It also raises the question - is it really a dead-man’s switch, or more like a heartbeat being sent every N milliseconds? How many heartbeat signals can be lost?
It’s not as simple as a deadman’s switch; it can’t be if it’s using a mobile phone and the internet.
This is a false problem; the engineers have coded the button the other way around: the car only moves while it is receiving the signal from the app. If that signal is missing for more than, say, 50ms, the vehicle just stops.
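For what it's worth, that inversion is easy to picture in code. A minimal sketch, assuming a heartbeat-style protocol, with hypothetical names and timeout (not Tesla's actual implementation):

    import time

    HEARTBEAT_TIMEOUT_S = 0.050  # assumed: halt if no heartbeat within 50 ms

    class SummonInterlock:
        # The car may move only while fresh "button held" heartbeats arrive.
        # A released finger, a dropped connection, or plain network lag all
        # look identical to the car: silence. Silence means stop, which is
        # what makes this a dead man's switch rather than a remote brake pedal.
        def __init__(self):
            self.last_heartbeat = float('-inf')

        def on_heartbeat(self):
            # Called whenever the app reports the button is still held down.
            self.last_heartbeat = time.monotonic()

        def drive_permitted(self) -> bool:
            # The motor controller polls this in its loop; False halts the car.
            return time.monotonic() - self.last_heartbeat <= HEARTBEAT_TIMEOUT_S

The latency question upthread still applies, though: the timeout bounds how far the car can travel after you let go, so it has to be chosen against realistic mobile network jitter.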
It's not a solvable problem in this context. Humans are not computers, and cars have no self. The car would melt a hole in the ground and not even approach par with a cat. It's understandable that people are making this mistake in the context of lifetimes of sci-fi anthropomorphizing machines.
But magic _is_ required. A reasonable field of view to assess the hazard or an approaching kid on a bike might, quite easily, be available from the car, but not 20m away looking toward the car.
Case in point: the car in the video that turns into the car park looks like it comes from behind the guy with the phone. Someone in the car, or the self-driving software, should have seen it far earlier - maybe even noticed an indicator, or deceleration.
And that scenario of a kid on a bicycle not being seen until they're almost in front of you exists with a driver in the vehicle too; collision detection will certainly be able to react faster than a human driver as well.
Fortunately the cars do have cameras facing in every direction. Clearly not sufficient to stop every fender bender but you could say the same about human drivers.
The car should stop itself for safety, but may not have the awareness to do this successfully. Bicyclists are explicitly listed in the warning about paying attention to the surroundings of your car while it is being summoned.
Is there video of such an incident - did you see a video of such an incident? Not saying it's not possible, however we have to keep in mind what's "reported" as clickbait "journalism" vs. reality.
The title of TFA is about how "People Are Already Reporting Collisions With Tesla’s Driverless Smart Summon Feature".
So clearly it's not the case that "the car will stop itself if needed for safety"...
Your point is that if it was for a kid on a bicycle it would, but for the reported situations (not being kids on bicycles) it doesn't?
Because the more obvious conclusion is that it doesn't work properly, period, and that can bite people, whether the case involves an oncoming bicycle or not. Especially since we have reported cases of Teslas hitting bicyclists in the non-summon mode:
If you think harder about it, no one really turns on autopilot except on main roads or highways, hence almost no training data for parking lots. A parking lot is like a big rectangular area with stuff moving all over. So with more training data, they should have it down, but these incidents are going to lower their rate of getting training data.
And if the pharma companies didn’t have to abide by ethical guidelines for clinical trials they could collect more data. There are good reasons to forbid that.
How is it unreasonable if the things are already having collisions? This looks like a half-baked feature not even close to ready to be pushed out. Tesla is becoming a meme at this point and it has to be coming from the erratic man at the top. An engineer worth anything wouldn’t launch something like this.
Sorry if I’m fired up about this but he just reminds me of a bad boss I had that only cared about hype and how things looked and expected the laws of mathematics and physics to bow to his delusions.
There are literally only two examples in the post, and one is an example of the car stopping before a collision. The other example is another car backing into the Tesla.
In the first video from the article, the Tesla would be found to be at fault in most jurisdictions, as the driver backing out was well into the maneuver before the Tesla even rounded the corner. The Tesla without question did not have the right of way at that point.
In most jurisdictions, the vehicle traveling in reverse is always the give-way vehicle. Practically, I expect that the insurance companies would agree to each pay their own driver’s costs and count it as an at-fault accident on both sides.
As you should know as a software developer, development of major new capabilities is an incremental process, and for something like self driving cars it’s expected that there will be a transitional period before self driving. Summon in parking lots is a good transitional step.
Remember Tesla is not just training cars here; humans also need to learn to deal and not freak out. There will be fender benders along the way. Most of them probably caused by other drivers, but bugs are also a possibility.
>As you should know as a software developer, development of major new capabilities is an incremental process
Hopefully software developers with such an idea are not allowed in medical devices, aviation, missile guidance, the space industry, industrial robotics, and automobiles...
Developments in all the areas you mention start with baby steps and incrementally progress toward greater and greater capabilities. This should not be news to anyone here.
Plenty of people die in cars every day. Cars are released products, right? You are making no sense here. And the collisions reported with smart summon are fender benders not “killings”. No need to hype up the drama level.
>Plenty of people die in cars every day. Cars are released products, right?
Yes, but current car deaths are due to operator (driver) errors, not a self-driving parking-summoner killing people.
Companies don't (or aren't supposed to) release production cars with known people-killing faults. And when some are nonetheless released (which happens sometimes, e.g. a defect found in the brake system), companies get fined and/or people go to jail for those. It's not all A-OK because "people die in cars every day" anyway...
I think you’ll find that the behavior that is unreliable here is that of other drivers, not the technology. So what you’re saying doesn’t apply in this situation.
Even if it did apply, I believe the technology is safe and reliable, because from what I have seen Tesla does extremely extensive testing. So this meets your filter as ready to be released to the public.
To that point, I believe this technology was not released willy nilly. The poster seemed to be assuming it was. I agree that a good level of safety and reliability is important.
Nothing has ever been, or will ever be, perfect in revision 1. That doesn’t mean it shouldn’t happen. The real world will always be where flaws get light shined on them.
To your point, no one is still using Model T’s, flying Wright Flyers, or still living with a Jarvik 7 heart implant (note the 7 implying incremental progress). Further, the entire history of space exploration is built on learning from failures and explosions.
As a motorcyclist, these videos are sickening. Especially the guy purposely using smart summon to cross LIVE traffic (not in a parking lot).
These kind of low speed collisions are not a big deal to car drivers. But for someone on a motorcycle, having a Tesla with no driver obliviously pulling out into live traffic in front of a motorcycle (even at low speeds) can cause significant injury.
When I ride to work I take a completely different route that is significantly longer just to avoid two intersections and a couple of blind high speed crests.
The long route also has a long twisty section, so that helps too :D
>But for someone on a motorcycle, having a Tesla with no driver obliviously pulling out into live traffic in front of a motorcycle
I don't get it. Do motorcyclists use whether someone's in the driver's seat to determine whether the car is moving or not? Shouldn't they be looking at the brake/head lights?
The point is not to needlessly increase the risk for other people using the road for the sake of showing off your car’s cool party trick.
But to directly answer your question: in fact, most people on a motorcycle will try to look at the driver, when possible, to check if the driver is looking at them to gauge whether the driver sees them or not. In the case of a Tesla, it’s impossible to know if the car’s sensors know you’re approaching...
Also, I don’t know any human beings who share a similar driving pattern to the Tesla while it’s being summoned. It seems to start and stop unpredictably and does so in a very jerky manner. (Most humans do not drive as slowly as in the videos - and yes, sometimes driving slowly is more dangerous than driving at regular speeds - and most humans also won’t drive on the wrong side of a lane, which the Tesla often does in videos I’ve watched.) It’s just very unpredictable.
A common technique for motorcycle riders, which also works for pedestrians, bicyclists and even car drivers, is to make eye contact with other drivers before pulling in front of them. The vast majority of MC collisions happen because the other driver doesn’t see them. When the other operator isn’t even in the car, you lose a key safety tool.
The feature being a worldwide premiere, with no past experience, I honestly expected that Tesla would have added more sounds and visual warnings when the Summon feature is running, such as flashing the car's headlights and emitting a regular beep (like a big truck doing a reverse manœuvre), to catch the attention of other drivers, or even warn blind people when the Tesla is near.
Then, once the feature is polished with more real world data, they can always remove the warnings later.
The video with the white car coming through especially is scary. From all the sensors and marketing materials I know from Tesla, I would expect the Tesla to have detected that white car way sooner and stopped automatically. There was really no ambiguity to what decision to take here.
Does the Model 3 have a concept of stop signs (not just recognizing them, but understanding arrival order and how long to wait) and right of way?
Or are they expecting to get away with just avoiding cars and pedestrians while breaking driving rules because it's a parking lot and blaming owners when it inevitably falls short.
Tesla's track record leads me to believe the latter, but I'd love to find out something to the contrary.
-
Actually it looks like this is still controlled by holding a button? So yeah, guessing it has no concept of those, and the almost-crash in video #2 would be expected behavior. Marvelous.
You say this as a joke, but I suspect they have been programmed to drive fairly aggressively, though probably to come to a halt before any collision that it can predict. If so, we might soon see two or more Teslas deadlocked, their fenders inches apart.
Right, the first was an at-fault incident by the non-Tesla driver for sure.
In the second, the vehicle was allowed to drive across a traffic lane. And in fact it attempted to do so, but stopped at the threshold when it saw a vehicle approaching (which also hit the brakes). I fail so see how this isn't exactly the behavior claimed and desired.
It pulls out as if it were going to continue across the traffic lane and very clearly stops right at the threshold. At no point is it in the traffic lane, nor in the path of the gray vehicle that stops.
It's true that it "looked like" a human driver who was going to continue straight out, so the other driver (correctly) hit the brakes.
But spinning this like the Tesla "stopped in the middle of traffic" is just plain wrong.
The Tesla stopped when / because the 'driver' released the button. Watching the vehicle until that point, it hadn't slowed or made any hint that it was about to stop or do anything but continue forward, oblivious, and that -would- have caused an at-fault collision.
Sorry, what's the evidence for the reason why the vehicle stopped? I mean, it stopped. That's what it's supposed to do. You have some cite showing that it wasn't a sensor detection?
And of course it hadn't "slowed": at this speed (looks like about ~1 m/s), the time taken to stop (at about the 0.7G a car on its wheels can achieve) is about 4 frames of that 30 Hz video, and it happens over a distance of about 7cm, most of which is the vehicle just rocking forward on its suspension. Be real, we're talking about parking lot maneuvering here.
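(The arithmetic there does check out, for what it's worth - a quick verification using the figures above:

    # Stopping from ~1 m/s at ~0.7 g of deceleration (the figures above)
    g = 9.81             # m/s^2
    v = 1.0              # m/s, approximate speed in the video
    a = 0.7 * g          # assumed braking deceleration

    t = v / a            # time to stop:      ~0.15 s (~4.4 frames at 30 Hz)
    d = v**2 / (2 * a)   # stopping distance: ~0.073 m (~7 cm)
    print(f"{t*1000:.0f} ms, {t*30:.1f} frames at 30 Hz, {d*100:.1f} cm")

)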
Evidence? The author of the video saying he took his hand from the 'button'?
Okay, now we need to do some math. The evidence is a 4,000lb vehicle coming to a stop, even from 5mph, in... 3 inches? "Most of which is the car rocking on its suspension". Removing distance from the equation, because g forces are related to time of deceleration, using your numbers, https://rechneronline.de/g-acceleration/ says more like -2.3G, which is far from a gentle stop.
You can also plainly see the other car decelerate too, over far more than '4 frames'.
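(Setting aside whose figures are right: under constant deceleration the relation both sides are implicitly using is a = v^2 / 2d, and the resulting g figure is extremely sensitive to the assumed speed and distance, which is largely why these two comments land on such different numbers:

    # Constant-deceleration stop: a = v^2 / (2 * d), expressed in g
    g = 9.81

    def stop_g(v_mps: float, d_m: float) -> float:
        return v_mps**2 / (2 * d_m) / g

    print(stop_g(1.0, 0.073))   # ~1 m/s over ~7 cm:   ~0.7 g (parent's figures)
    print(stop_g(2.24, 0.076))  # 5 mph over 3 inches: ~3.4 g (this comment's figures)

)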
In the twitter thread for the second incident the owner claims he commanded a stop (released the button) prior to the event but is uncertain as to whether the car stopped itself. So that makes at least three people who thought the summon feature was not going to yield to cross traffic.
I suppose that doesn't really fit the narrative you're trying to spin here.
All evidence so far suggests that self-driving cars are more dangerous than human drivers when similar conditions are taken into account.
The comparison numbers that Tesla parrots are between Teslas with modern safety features and in good weather conditions, and the entire rest of road fatalities, including motorcycles. Comparing similarly bodied cars in similar weather conditions, Tesla's self driving has more reported collisions per miles driven.
That’s not accurate. The original Tesla AP safety study was done on the Tesla fleet before vs. after AP was deployed. It’s about as close to a clean, controlled experiment as you can get. Same cars, same drivers, same safety equipment, roughly the same road conditions. They also didn’t even take into account whether AP was driving, only whether it was available on the car, so they aren’t looking at just the safest miles driven (highway miles).
"As a consequence, the overall 40 per-cent reduction in the crash rates reported by NHTSA following the installation of Autosteer is an artifact of the Agency’s treatment of mileage information that is actually missing in the underlying dataset."
Of course, the original publication of NHTSA's botched study is still the number one HN search result for both "NHTSA" and "Autopilot," and it still routinely gets referenced in comments. As the saying goes, the lie made it halfway around the world before the truth had put its pants on.
Do you have some sources on this? Every study I am finding even comparing like for like shows SDVs are about the same or safer. Also the facts I was able to find from Tesla do a lot of comparisons, not just the obscure ones you cite.
The fatality rate for human-driven cars is ~1.5 per 100 million miles driven, and that's across the entire range of driving conditions.
The autonomous car industry (level 3) totaled under 10 million miles driven as of the Uber fatality, putting it far above that rate despite benefiting from only good conditions.
Tesla just reached 1 billion miles on autopilot early this year and has 5 fatalities; that's better than the human-driven rate, but (again) one can assume this is in good conditions pretty much exclusively.
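Making those implied rates explicit (a back-of-envelope check of the numbers in this comment, with the caveat already noted: autopilot miles skew toward good conditions and divided highways, so this is not like-for-like):

    human_rate = 1.5                     # fatalities per 100M miles, all conditions
    ap_miles_100m = 1_000_000_000 / 1e8  # ~1B autopilot miles, in units of 100M
    ap_rate = 5 / ap_miles_100m          # 5 fatalities over those miles

    print(human_rate, ap_rate)           # 1.5 vs 0.5 per 100M miles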
Autonomous cars have not yet reached level 3! They are level 2 at best.
>> A Level 2 autonomous vehicle can control both the steering and the speed at the same time, essentially offering partial automation. The human must remain ready to take full control in an emergency and is ultimately in charge of and responsible for whatever the vehicle does. GM’s Super Cruise and Tesla’s Autopilot are examples of Level 2 systems.
>> Level 3 automation is controversial. The car not only manages steering and speed, but is responsible for monitoring the environment around it and detecting challenges that require human intervention. In normal conditions, a human driver would not need to pay any attention at all, but if something went wrong with the system, the person would have to be ready to take over right away.
All current autonomous cars require human safety drivers, or the full attention of the human driver (like in Teslas).
The dashcam footage released by Uber was misleading; the camera was nowhere near as good as the human eye in low light. The Tempe police concluded 85% of humans could have stopped the car in time: [0]
> In simulating conditions of the crash and consulting their textbooks, detectives found that Herzberg could have been seen 143 feet down the road by 85 percent of motorists.
And like every other level 3 car it was using LIDAR, meaning visible light conditions were irrelevant. Even if the environment had really been as dark as the misleading dashcam footage suggested, it would have had very little relevance to the car's sensors unless there had been extremely heavy fog or rain.
At least one major study has found that there is currently not enough information to decide with certainty whether self-driving cars are safer than human-driven cars:
Key Findings
• Autonomous vehicles would have to be driven hundreds of millions of miles and sometimes hundreds of billions of miles to demonstrate their reliability in terms of fatalities and injuries.
• Under even aggressive testing assumptions, existing fleets would take tens and sometimes hundreds of years to drive these miles - an impossible proposition if the aim is to demonstrate their performance prior to releasing them on the roads for consumer use.
• Therefore, at least for fatalities and injuries, test-driving alone cannot provide sufficient evidence for demonstrating autonomous vehicle safety.
• Developers of this technology and third-party testers will need to develop innovative methods of demonstrating safety and reliability.
• Even with these methods, it may not be possible to establish with certainty the safety of autonomous vehicles. Uncertainty will persist.
From: RAND Corporation, "Driving to Safety: How Many Miles of Driving Would It Take to Demonstrate Autonomous Vehicle Reliability?" by Nidhi Kalra and Susan M. Paddock.
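A sketch of the statistics behind the first finding: if fatalities are modeled as rare independent events (a Poisson process), then m failure-free miles only rule out a fatality rate r at 95% confidence when exp(-r*m) <= 0.05. Plugging in the ~1.5-per-100M-mile human rate cited upthread (my illustration, not the report's exact figures):

    import math

    human_rate = 1.5e-8   # ~1.5 fatalities per 100M miles (figure cited upthread)
    confidence = 0.95

    # Smallest m with P(zero fatalities | rate = human_rate) <= 1 - confidence
    miles_needed = -math.log(1 - confidence) / human_rate
    print(f"{miles_needed/1e6:.0f} million failure-free miles")  # ~200 million

And that only demonstrates rough parity on fatalities; tighter claims (e.g. "X% safer") or rarer subcategories push the requirement into the billions of miles, which is the report's point.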
I agree; that's the kind of comparison I think we should be making. As I said in another comment, if self-driving cars replaced humans the day they became safer than human drivers, they would become our second most deadly consumer technology next to guns. Wringing our hands over fender benders isn't a good start to the conversation. I think instead of worrying that people aren't scared enough, we should think about how hard it will be for people to accept the idea of self-driving cars killing and mutilating people every day while at the same time being a safety upgrade over the status quo they're used to.
>The comparison numbers that Tesla parrots are between Teslas with modern safety features and in good weather conditions, and the entire rest of road fatalities, including motorcycles. Comparing similarly bodied cars in similar weather conditions, Tesla's self driving has more reported collisions per miles driven.
I'm interested in learning more about this, because I'd originally taken Tesla's figures at face value. Are there any sources you'd recommend?
I co-wrote an article [1] that covers some of the flaws in Tesla’s comparisons a few years back. Not sure if their stats have gotten any better in recent years, but the errors back then were pretty basic.
And still you have a team of really gifted engineers that worked on this, an executive or a bunch of executives that pushed for this, and a bunch of Tesla owners that will give it a try.
Maybe; it's hard to say. I think the reputation of human driving is so low that it's hard to go lower, but opinion on that is pretty polarized. As a bike commuter in a quickly growing city, I'm aware that there are a lot of people who think driving is a fundamentally safe activity that is only made dangerous by unwelcome novelties such as cyclists, pedestrians, etc. But there's also a lot of awareness that it's really a shit show: humans are terrible drivers who kill people by the thousands every year. It's such an outlier that I don't think normal safety standards apply. I think the only standard that self-driving cars should need to meet is being comparable to human drivers.
If self-driving cars replaced humans the day they became safer than human drivers, they would become our second most deadly consumer technology next to guns. Crazy to think about, isn't it? As safety improves, self-driving cars can save thousands of lives per year and still be our second most deadly consumer technology. We should get used to that idea and look forward to cranking the death rate lower and lower as technology improves. Humans behind the wheel will always be a menace.
Well they are nowhere near human drivers. And yes, human drivers are terrible.
Thing is that human drivers vary a lot. So we have excellent drivers and we have awful drivers. And also we have tired drivers, bored drivers, angry drivers and we have sporadically inattentive drivers.
If we were to aim for "at least the same number of accidents as a human per mile" that would be a pretty disastrous AI. Because the AI would be much more uniform in its performance.
A human driver will send text messages and won't even slow down before hitting a stopped truck right in front of them. A human driver will mix up the brake and accelerator pedals and speed straight through an intersection.
An AI that does comparable damage (obviously wouldn't make the same mistakes) would be pretty darn scary to sit inside or be near.
There is also the very important factor that a self-driving car needs to behave like a human so as not to cause accidents just with its presence.
Consider the PR disasters that await: when people die because a self-driving car did something that a conscious human would never do, people won't react rationally. Tesla is just begging for a well-deserved ban on self-driving cars, and attaching a stigma to them, all while killing some people.
It’s straight statistics. You’re completely ignoring the harm done by human drivers; quantify that vs self driving and you’ll reconsider which is more reckless.
Yup. The following is from a previous HN Tesla discussion:
Tesla Will Have Full Autonomy in 2020, Musk Says
Tesla has an advantage here in that they
don't feel the need for their autonomy
to be particularly safe.
These problems and accidents are a direct result of Tesla's laissez-faire attitude toward autonomous driving. They do whatever the fuck they think they can get away with, consequences be damned.
So far, they've been allowed to get away with quite a lot.
I've used code formatting before, and I've tried to keep the lines short. But obviously not nearly short enough for some mobile users.
Let me turn it around. How would you format those two sentences for mobile, while still maintaining my intent of showing that the second was a response to the first?
> Quoted paragraph here. Many words are in this paragraph.
> Second paragraph here.
Reply here. Note that you’ll need to leave a blank line between paragraphs if you’re doing extended multi-paragraph quotes, which is unwieldy - but so are multi-paragraph quotes. For those, either quote less (if you’re replying), quote interleaved with replies (if it’s a multi-segment reply), or reply first and quote second^.
^ if it’s external content you’re including from an external source that you’d like to reference from your comment, as opposed to a footnote.
Here’s the relevant segment from the HN faq, for the record:
I think you really have to be oblivious to the current state of affairs on the road not to understand what he's getting at. As someone riding a bicycle on city streets, he is absolutely right. It's not safe out there right now. Even the current safety statistics are skewed by the scarcity of non-car road users in most places and the amount of advanced machinery in a car dedicated to keeping people (the ones inside the car, at least) alive in a serious accident.
In the first case the Lexus was in the wrong, but a human would have intervened with that loud auditory device affixed to the steering wheel, or moved further out of the way.
The Tesla freezes, deer-in-headlights, after it turns into the path of the car backing up. I wonder if it even signaled before turning. The fact that the owner is worried the other driver thinks they were driving leads me to believe it probably didn't do something a human driver should have; not signaling makes sense.
-
In the second case it ignored right of way and almost caused a serious crash!
In another it ran into a wall!
If anything this article is being incredibly charitable by trying to focus on the cases where it doesn't fail...
This is exactly the thing that people should !not expect! from it in the very beginning. It's the owner's responsibility to _monitor surroundings_ and make the car move when it's safe to do so. The car will navigate, but it's not full self-driving obeying all the rules of city traffic.
The real world is not a train line. A car that moves on a track while you hold a button, then stops when you let go is not safe.
Even when it is safe to move from your point of view, it doesn't exclude you from having to signal to other drivers.
It doesn't exclude you from situations that were safe but are no longer and require the car to move out of the way in a maneuver more complex than a few feet forward or backwards.
-
Not to mention all the scenarios Tesla touts in the launch are also completely contradictory to monitoring the car's surroundings.
You wouldn't drive with your hands full of groceries, or your baby crying in your lap, because you'd be driving distracted.
Why is Tesla bragging you can control the car while dealing with these distractions when they haven't taken the steps to make it safe to do so?
Related - if you like spending time on reddit: please consider visiting r/realtesla. Sometimes it's a bit too negative, but mostly I find it a nice counterbalance to that creepy "everything is always awesome" subreddit.
I own a TM3. Unless Tesla can make "smart" summon work, the idea that they will ever have full autonomous driving down is just a pipe dream. What the car can do on a long drive is nothing short of amazing compared to any other brand, but treat it like your kid with their new driving license: you watch over it like a hawk.
At most I use it to slide out of a space where someone got too close, usually on purpose, or as a party trick. Let it drive out of my sight, oh, hell no.
Tesla. I'm disappointed. You say your cars are the safest on the road. That your self-driving AI is safer than humans. You spar with regulating bodies along those lines-- arguing that you're protecting consumers. And then you roll out a fluff feature that causes accidents.
Then you shouldn't be trusting Tesla's marketing. Did you think Elon Musk had personally made breakthroughs that ML scientists at Google are struggling over?
As much as Google likes to brand itself as an AI company, that is pretty much all marketing fluff. What Google has actually deployed is all super simple modeling because of the scale they have to run at. They can't deploy expensive models because they have to run across a billion+ users. They have made some contributions on the NLP side, but mostly they just have the best consumer packaging of AI.
Maybe, if "road" means freeway. There is no way the "self-driving" AI is safer than humans in the context of busy city streets, construction areas, parking lots, you name it.
Honestly what's interesting here: people are rushing to try out the v1.0 software of a feature from a car company, potentially risking their expensive vehicle, because of FOMO from all the other Tesla owners posting about summon before them.
I love Musk, but this is a bad feature because he hasn't looked at the actuarial case against having it. TONS of car accidents happen in parking lots.
This seems like a huge opportunity for insurance fraud like you see often in Asia. Someone arranges to be struck by a car (ie flings themself in front of it), then splits the medical payout with a crooked doctor.
Most of these look like user stupidity rather than anything wrong with the car - especially the video of the guy who tries to summon his car across a road.
I imagine summon is pretty dumb, but all Tesla's talk of autopilot and self-driving cars has made people believe it is a lot smarter than it actually is.
If it's a user facing feature. And it fails catastrophically if some preconditions aren't met. And it doesn't have the ability to verify those preconditions. Then allowing users to invoke it is probably not user stupidity. It's just poor design.
The same topic is being explored in the 737 MAX clusterfuck where Boeing failed to provide sufficient safeguards OR sufficient training for non-expert users to manage catastrophic failure. People are stupid, yes. But you can't blame people being stupid as an excuse for insufficient safety engineering, just like you can't blame drag force being proportional to the square of the velocity. These are just the constraints we have to engineer around.
True! I have the impression that Tesla is trying to "innovate" itself out of the corner they find themselves in. As has already been said multiple times, there are valid reasons why the automotive sector works the way it does. Sure, as everywhere, a lot of stuff is just there because it was always there. Go ahead and break that. But understand first which rules can be ignored and which ones can't.
Kind of worked for SpaceX by ignoring all the political bullshit in aerospace. I'm slowly wondering whether that was luck or intention, though.
Just how the latest Tesla features ever became road legal is beyond my understanding, though.
> A lot of stuff is just there because it was always there. Go ahead and break that. But understand first which rules can be ignored and which ones can't.
Chesterton’s Fence:
In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”
Tesla vehicles are the safest ever made - I'd say allow Tesla to innovate away from whatever stagnancy the status quo auto manufacturers have allowed. User education is part of good design - and perhaps driver tests will need to start including proper use of such systems. They're road legal likely because it's innovative and it hasn't been discussed nor laws written yet - it takes time for the regulatory processes to work.
It sounds like the regulatory body doesn't like that Tesla is claiming they got a 5.4 rating out of 5, as they say they don't give out higher than a 5 rating - and it sounds like Tesla is using public data points to calculate that 5.4. We'll have to wait to see if this goes to court to see if Tesla's methods for stating their 5.4/5 rating is unreasonable - or if the guidelines perhaps are if they "cap" what rating a vehicle can reach based on comparable data; as Tesla states in one of those articles alone, ~40% of vehicles have a 5-star rating - so they argue it's important to be more specific to allow consumers to better differentiate.
Let’s say you make an amazing new consumer gas central heating boiler, and shout about it on Twitter to your adoring followers. And if someone switches it on without opening the water valves, it explodes and kills them. You say “oh, the manual says not to do that”. How do you think that would fly?
Consumer products have to be designed to be safe against consumers, to a large extent. You can’t just add a feature, add a disclaimer that says, in effect, “don’t use this”, and blame the user if they use the feature.
I don’t understand if this argument is sarcasm or not? I don’t have a boiler but I have a water heater and know if I don’t manually start the pilot light after having shut off the gas I could die.
Technology has always required basic understanding to avoid harm.
The car only moves forward as long as you hold a button on your phone - as soon as you let go it stops. You are supposed to observe the car moving at all times. So logically, if you see that the car is about to collide with something, you let go of the button and the car stops. What else is it if not user error then?
> if you see that the car is about to collide with something
What about when the car is about to collide with something you cannot see from a remote point of view, or when the positions and/or velocities are distorted from the longer perspective? There are many common situations where hazards that would usually be visible from the driver's seat are obscured when looking at the car from a distance.
Also, people generally have to be trained, pass a test demonstrating basic driving competency, and pass a vision test. Those tests don't cover the somewhat different skills and vision required to safely drive a car from a distance, so why assume a licensed driver is even capable of operating dangerous machinery in a way that falls outside the scope of that license?
Your eyes are a couple inches apart. There are geometric limits to our ability to perceive depth, and therefore angles.
You can't, I can't, no one can perceive angles accurately from outside the car at the distances that keep the feature safe and make it useful (50 yards out).
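The geometry backs this up: stereo depth resolution degrades with the square of distance (roughly dz ~ z² · dθ / b, where b is the eye baseline and dθ is stereo acuity). A back-of-the-envelope calculation with typical human numbers:

    import math

    # Back-of-the-envelope stereo depth resolution: dz ~ z^2 * dtheta / b.
    # Assumed numbers: ~65 mm between the eyes, ~20 arcsec stereo acuity.
    BASELINE_M = 0.065
    ACUITY_RAD = math.radians(20 / 3600)  # 20 arcseconds in radians

    def depth_uncertainty_m(z_m):
        return z_m ** 2 * ACUITY_RAD / BASELINE_M

    z = 50 * 0.9144  # 50 yards in metres
    print(f"at {z:.0f} m out, depth is only known to ~{depth_uncertainty_m(z):.1f} m")
    # -> roughly 3 m of uncertainty: hopeless for judging parking clearances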
Most parking involves one or the other end of the vehicle being close to something else. How do you stop in time if it starts out going the wrong way?
That sounds like the issue with the chap who tried it in his driveway.
(My feeling is that if the feature isn't more reliable than a 95th percentile human for the subset of functionality, it probably shouldn't be released.)
No, if you release the button it'll stop (barring some kind of fault). It will also stop before hitting anything (again, barring some kind of fault).
It's not ready, though. The Tesla might not technically have been at fault in this clip, but driving is full of situations where one party makes a mistake, and another takes corrective action to prevent an accident.
A self-driving system that is never "at fault" but regularly crashes into drivers who are at fault is not good enough. It needs to drive defensively.
The timing of this release is suspiciously close to the end of Q3. It concerns me that this was rushed out the door early to recognize some revenue.
Pretty much - people are too lazy to walk to their bloody cars and drive them themselves because they want to keep staring at the cat video they're watching.
The carelessness and dangerous way this entire thing has been "rolled out" is alarming.
People are touting this as some sort of "amazing" feature, I think this guy Elon Musk is a crazy whackjob who puts people's lives at risk.
All the videos I saw from Tesla were "ideal, well-lit" situations; well, of course! University graduate students have Legos and basic robotics that do that crap. Getting it to work in "normal" (read: edge-case) situations is the most difficult part. And they conveniently just ignored that part.
It's insane people are putting these death machines on public roads.
Before people claim "this is just a parking lot", imagine how many parents push strollers with babies in them through parking lots, or how many kids run around in them.
I think "death machines" is a bit hyperbolic, the speed is capped at 5mph, and I imagine that Tesla is better at avoiding hitting humans in parking lots than a normal driver since it has sensors all around it. Avoiding hitting people isn't the hard part, navigating in a predictable way and following the unwritten expectations of driving in a parking lot is, and that is something Tesla doesn't seem to do a great job of yet.
Yes, you have a couple thousand pounds of "self-driving" technology that does not do well in parking lots (that's like 95% of where I usually park my car). It's definitely not going to hit or kill anyone. Also, it's 100% the responsibility of pedestrians to look out for cars.
Give me a break. You can either handle what your feature advertises or you cannot. This is in the cannot bucket - the same way autopilot is in the cannot bucket.
The sad thing about Tesla is that they should probably focus on the one thing that works beautifully about the car: it's electric. Focus on making a beautiful, insanely fast electric car. People love it already - just stop with the gimmicks.
There is a big difference between not doing well in parking lots and being dangerous in parking lots. From the videos I have seen, it doesn't do well in parking lots because it is too cautious - it is safe, but not very functional. Of course we have to wait longer before determining whether it is safe, but I would imagine it is not hard for Tesla to avoid hitting people, and if it does hit someone, it is only going at walking speed anyway.
To me doing unpredictable things, in a parking lot, is dangerous. I would like to not be hit, period.
And good luck with the liability when accidents happen. You explain that it was the car, and the car manufacturer explains that it was you. Meanwhile, I'm in a wheelchair, or worse, for the rest of my life.
I am also not a big fan of experimenting with other people's lives to find out whether something is safe. I would think that if you put something out there, you are liable when it malfunctions.
You want to have really cool features that don't work in the real world? Play with them on your private estate - just don't force people to be guinea pigs.
Also, as a side rant: we live in a world where a significant chunk of people don't have clean water or access to the internet, but hey, I can remotely summon my car! (But I have to be sure it's in my line of sight...) How about we focus on giving everyone clean water and internet and stop with the gimmicks?
The feature is in beta, and it is enabled even on cars lacking the latest hardware. I think it is reasonable to expect issues, especially on those cars.
I know it's very popular to cherry-pick incidents and attack Tesla these days, but nobody else is pushing the envelope. And nothing about this feature is putting anyone at significant risk. These are inconveniences that have occurred, nothing more. In the meantime, the pursuit of self-driving cars is pushed forward greatly through the collection of this data.
In the end, this will benefit everyone.
For now, we can sit back and enjoy the cringey videos this will produce.
To add to this, there are a number of production cars today with level 2 automation equivalent to or better than Tesla's. None of them made a big fuss about it.
The only envelope Tesla is pushing with 'autopilot' is in misleading marketing that has been directly linked to drivers misunderstanding the capabilities/limitations of the technology.
The way I see it, if someone's Tesla crashes into my car in the parking lot, then as far as I am concerned it is 100% the fault of the owner and the owner should compensate me 100%.
The fact that it wasn't technically their fault but instead was bad software matters nil to me. The only person I interacted with is the Tesla owner, and that's who I will put the blame on. I don't care if the owner is able to shift the blame onto someone else. They can do that, but I want "mine" from the owner.
> 100% the fault of the owner and the owner should compensate me 100%.
The owner's fault?? Nope, I feel it's Tesla's fault... entirely. Well, them and the DOT for allowing this experiment to be carried out on public roads. Meanwhile, Google still asks me what a truck on a hill looks like via CAPTCHAs... oh yeah, this self-driving tech is ready to go!
What part or letter of the word "beta" don't you understand? Can't you just not use the functionality until the company says that it is complete? There is plenty of other functionality in a Tesla car, why not use that?
The comment you’re replying to is talking about their non-Tesla being hit by a Tesla operated by somebody else. How do we force other drivers not to use this feature? Why wasn't it held back until it was less likely to collide with other people's property?
So you are fine with beta-testing safety-relevant features in the wild, where they could harm, injure, and even kill people? You do know that cars are not some app?