dbt00's comments | Hacker News

Waymo uses remote contractors to give the Waymo driver hints when it can't figure out a path forward. The cars are not being remotely piloted.

They are, however, very cagey about how often this is necessary.


They don't necessarily even give hints. I think the car mainly asks them yes/no questions and they respond.

This is like keeping your kids inside in case something bad happens to them.

If your kids never leave the house, something bad definitely happens to them: they stay kids.


Is there some benefit to talking to weird Uber drivers I've yet to discover that's comparable to 'going outside at all'?

Interaction with the common person is great. I wouldn't have known one could trim their toenails while driving otherwise.

Or that a taxi driver in Wuhan could answer his phone while shifting his manual transmission and smoking a cigarette.

Pretty sure that's part of the taxi exam.

There are probably better places to interact with other people than rideshares, like at a public establishment. There's significantly less risk.

Yes. "Weird" people are somewhat rare opportunity to build certain social skills.

I enjoy the challenge of finding creative ways to guide the discussion and understand their headspace for a little while. I am not even trying to control the level of weirdness, but just keep them talking and comfortable.

Unfortunately, most of the time they're not even weird people and it was just a weird first impression. They vent for like 3 minutes and then it gets boring again.


I mean, I do talk to them and I do have this skill, but it's a skill that I only ever seem to employ in talking to Uber drivers, so I'm not sure it's of any great benefit.

If anything the fact that most of them are immigrants puts the conversation on easy mode if you're a native speaker. They're doing twice the mental work you are so it's easy to orchestrate the conversation.

Not really transferable to native-speaking workers, though. Speaking to a barista is very different; speaking to a construction worker, different again.


That's interesting. Cultural differences and language barriers aren't what I would consider weird.

I was thinking of those people who have wild stories and/or mountains of narcissism to overcome. They have a fascinating worldview like an artist would if they had those ambitions.

They get bonus points in my book the more genuinely unhinged and confused they seem to be. They got that way by questioning things into absurdity and I don't mind listening.


Well, there's a virtuous cycle for immigrants whereby if you integrate, you improve the language, you get a better job, and you integrate more, thus often ironing out any weirdness with respect to the host culture. Uber driving is pretty dead-end and isolated. You work constant hours, but all your interactions tend to be very surface level.

I realize it is hard to do this, but please understand that other people have different perspectives on personal safety. For example, try to imagine how things might be different if you were a woman alone in an Uber with a driver who starts saying weird things.

I would rather say they develop crippling anxiety and agoraphobia. This is happening right now even to adults working from home.

"A slack that doesn't suck" doesn't exist, and whoever thinks Anthropic of all people are going to build that has no idea how this is going to work.

Slack has massive lock-in due to cross-organization connections. The only way you're going to get people off Slack is to build a 10x better model for collaboration than river-of-shit chat, and while such models probably exist, you also have to convince people that they are better.

I wish whoever tries this the best of luck.


How Google hasn't been able to do this with its messenger is beyond me.

The external partners on our Slack are almost all logged in via Gmail or another Google Workspace account. We are on Google Workspace as well.


Google decided to build a new chat app every two years instead of keeping the good bits of the original chat app they had and evolving it. It was endlessly frustrating to me when I was at Google. Google's security team ended up banning Slack access after several teams started expensing it.

It doesn't seem like building something that works well would be that hard; we've had nearly 40 years to learn from IRC, AIM, and others. Why can't I run my own chat client that does what I want? Oh, because you gotta lock people in. Sucks.


The self-own on Google's messaging platforms is hard to believe. At one point, it seemed that all of my acquaintances used Google Talk. Then came years of shutting down perfectly working applications, sometimes without any real way to port users over. There were even identically named products existing at the same time.

However, I am sure a few Googlers got some tasty promotions out of the mess, so it was all worth poisoning the well.


If you are on Google Workspace, just use chat.google.com: it's not bad. All it takes is just a benevolent dictator (or more realistically a bean counter) at work saying they don't want the company to pay for Slack in addition to Google Workspace.


cries in google wave


+1, Google Wave might have been the best thing Google ever made.


There was a guy here plugging his Slack alternative that was heavily AI-based, and people here loved it. I don't remember the name, unfortunately.


The fact nobody wants to admit is that social is the opposite dimension of productivity. That's why Slack and Teams are terrible products that try to combine both.


I think they were already in the async world and needed message passing -- the polling code was also async Python.


What about a more general message-passing mailbox approach? This works really well in the Erlang/gen_server/gen_fsm world (and in plenty of other contexts, but Erlang's OTP is still one of the best, simplest incarnations of these things).


“The problem with most programming languages is that they implement concurrency as libraries on top of sequential languages. Erlang is a concurrent language at the core; everything else is just a poor imitation implemented in libraries.” -Joe Armstrong


I mean, the "one-queue per consumer" they eventually ended up with, is basically an inbox that the sequential process reads from.


When I was younger I was lucky enough to live somewhere rural, where I got into a couple of single-car accidents that I walked away from. Now my ADHD hyperfocus is super attentive when driving.


Who is accountable for the actions of the bot? It's not sentient, and this author is claiming zero accountability -- I just set it up and turned it loose bro, how is what it did next my fault?


I sat in the back seat of a Model Y with two other adults and it was extremely painful.


That's not surprising. The point isn't to use it to regularly drive tall adults around back there. It's for when your family of four needs to take two of your kids' friends somewhere, that kind of thing. We probably use it once every couple of months, but it's super handy at those times, and folds away out of sight and mind the rest of the time.


(Always worth noting: human depth perception is not just based on stereoscopic vision but also on focal distance, which is why so many people get simulator sickness from stereoscopic 3D VR)


> Always worth noting: human depth perception is not just based on stereoscopic vision but also on focal distance

Also subtle head and eye movements, which is something a lot of people like to ignore when discussing camera-based autonomy. Your eyes are always moving around which changes the perspective and gives a much better view of depth as we observe parallax effects. If you need a better view in a given direction you can turn or move your head. Fixed cameras mounted to a car's windshield can't do either of those things, so you need many more of them at higher resolutions to even come close to the amount of data the human eye can gather.


Easiest example I always give of this is pulling out of the alley behind my house: there is a large bush that occludes my view left to oncoming traffic, badly. I do what every human does:

1. Crane my neck forward, see if I can see around it.

2. Inch forward a bit more, keep craning my neck.

3. Recognize, no, I'm still occluded.

4. Count on the heuristic analysis of the light filtering through the bush and determine if the change in light is likely movement associated with an oncoming car.

My Tesla's perpendicular camera is... mounted behind my head on the B-pillar... fixed... and sure as hell can't read the tea leaves, so to speak, to determine if that slight shadow change increases the likelihood that a car is about to hit us.

I honestly don't trust it to pull out of the alley. I don't know how I can. I'd basically have to be nose-into-right-lane for it to be far enough ahead to see conclusively.

Waymo can beam the LIDAR above and around the bush, owing to its height and the distance it can receive from, and its camera coverage to the perpendicular is far better. Vision only misses so many weird edge cases, and I hate that Elon just keeps saying "well, humans have only TWO cameras and THEY drive fine every day! h'yuck!"


> owing to its height and the distance it can receive from,

And, importantly, the fender-mount LIDARs. It doesn't just have the one on the roof, it has one on each corner too.

I first took a Waymo as a curiosity on a recent SF trip, just a few blocks from my hotel east on Lombard to Hyde and over to the Buena Vista to try it out, and I was immediately impressed when we pulled up the hill to Larkin and it saw a pedestrian that was out of view behind a building from my perspective. Those real-time displays went a long way to allowing me to quickly trust that the vehicle's systems were aware of what's going on around it and the relevant traffic signals. Plenty of sensors plus a detailed map of a specific environment work well.

Compare that to my Ioniq5, which combines one camera with a radar and a few ultrasonic sensors and thinks a semi truck is a series of cars constantly merging into each other. I trust it to hold a lane on the highway and not much else, which is basically what they sell it as being able to do. I haven't seen anything that would make me trust a Tesla any further than my own car, and yet they sell it as if it is on the verge of being able to drive you anywhere you want on its own.


In fact there are even more depth perception cues. Maybe the most obvious is size (retinal versus assumed real-world size). Further examples include motion parallax, linear perspective, occlusion, shadows, and light gradients.

Here is a study on how these effects rank when it comes to (hand) reaching tasks in VR: https://pubmed.ncbi.nlm.nih.gov/29293512/


Actually the reason people experience vection in VR is not focal depth but the dissonance between what their eyes are telling them and what their inner ear and tactile senses are telling them.

It's possible they get headaches from the focal length issues but that's different.


I keep wondering about the focal depth problem. It feels potentially solvable, but I have no idea how. I keep wondering if it could be as simple as a Magic Eye Autostereogram sort of thing, but I don't think that's it.

There have been a few attempts at solving this, but I assume that for some optical reason actual lenses need to be adjusted and it can't just be a change in the image? Meta had "Varifocal HMDs" being shown off for a bit, which I think literally moved the screen back and forth. There were a couple of "Multifocal" attempts with multiple stacked displays, but that seemed crazy. Computer Generated Holography sounded very promising, but I don't know if a good one has ever been built. A startup called Creal claimed to be able to use "digital light fields", which basically project stuff right onto the retina, which sounds kinda hogwashy to me but maybe it works?


My understanding is that contextual clues are a big part of it too. We see the pitcher wind up and throw a baseball at us more than we stereoscopically track its progress from the mound to the plate.

More subtly, a lot of depth information comes from how big we expect things to be, since everyday life is full of things we intuitively know the sizes of: frames of reference in the form of people, vehicles, furniture, etc. This is why the forced perspective of theme park castles is so effective: our brains want to see those upper windows as full-sized, so we see the thing as 2-3x bigger than it actually is. And in the other direction, a lot of buildings in Las Vegas are further away than they look because hotels like the Bellagio have large black boxes on them that group a 2x2 block of the actual room windows.


I can't get over Scam Altman.

