teaearlgraycold's comments | Hacker News

People are rapidly learning how to improve model capabilities and lower resource requirements. The models we throw away as we go are the steps we climbed along the way.

He had a history of causing noise at Google’s weekly leadership Q&A.

For the moment it’s best practice to run it and all of your dev stuff in a VM.

People keep talking about automating software engineering and programmers losing their jobs. But I see no reason that career would be one of the first to go. We need more training data on computer use from humans, but I expect data entry and basic business processes to be the first category of office job to take a huge hit from AI. If you really can’t be employed as a software engineer then we’ve already lost most office jobs to AI.

Ah but that was before he saw the comp packages. But no judgement. The tool is still open source. Seems like a great outcome for everyone.

At this point I consider Scott to have played the Internet like a fiddle. I think he knew the whole time the agent didn’t deserve any attribution. He knew it was a human driving the thing but wanted to grab people’s attention.

They’re also great for reducing dependencies. What used to be a new dependency and 100 sub-dependencies from npm can now be 200 lines of 0-import JS.
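To make that concrete, here's a minimal sketch of the kind of thing I mean: a hypothetical debounce helper written inline instead of installing a utility package (and its transitive dependencies) from npm.

    // Zero-import, plain-JS debounce: delays calls to fn until input pauses.
    function debounce(fn, waitMs) {
      let timer = null;
      return function (...args) {
        if (timer !== null) clearTimeout(timer);
        timer = setTimeout(() => {
          timer = null;
          fn.apply(this, args);
        }, waitMs);
      };
    }

    // Usage: only run the search once typing has paused for 300 ms.
    const onInput = debounce((query) => console.log('searching for', query), 300);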

It’s kind of shocking that the OP does not consider this, the most likely scenario: a human uses AI to make a PR. The PR is rejected. The human feels insecure, because the tool they thought made them as good as any developer turns out not to. They lash out and instruct an AI to build a narrative and draft a blog post.

I have seen someone I know in person get very insecure if anyone ever doubts the quality of their work because they use so much AI and do not put in the necessary work to revise its outputs. I could see a lesser version of them going through with this blog post scheme.


Somehow, that's even worse...

But it's a much more believable scenario.

LLMs give people leverage. Including mentally ill people. Or just plain assholes.

LLMs also appear to exacerbate or create mental illness.

I've recently seen similar conduct from humans who are being glazed by LLMs into thinking their farts smell like roses, and that conspiracy-theory nuttery must be why they aren't having the impact their AI-validated high self-estimation leads them to expect.

And not just arbitrary humans, but people I have had a decade or more exposure to and have a pretty good idea of their prior range of conduct.

AI is providing, practically for free, the kind of yes-man reality distortion field that previously only the most wealthy could afford, to vulnerable people who never would have commanded wealth or power sufficient to find themselves tempted by it.


I show the stack trace on AGPL projects. Why hide what they can already see for themselves?

The reason I can see is that it might expose the values of secret keys or other sensitive variables. But if you're certain that won't happen, then yes.
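For what it's worth, here's a minimal sketch of what that looks like in practice, assuming an Express app (the framework and route are illustrative, not from the thread): send err.stack, which only reveals file paths and line numbers anyone can already read in the AGPL repo, and never serialize process.env or config objects into the response.

    const express = require('express');
    const app = express();

    // Route that throws, to demonstrate the error handler below.
    app.get('/boom', () => {
      throw new Error('example failure');
    });

    // Error-handling middleware: show the stack trace, not the environment.
    app.use((err, req, res, next) => {
      res.status(500).type('text/plain').send(err.stack);
    });

    app.listen(3000);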

Just airtag your dog? Jesus Christ.

Quoting from the press release: https://www.apple.com/newsroom/2026/01/apple-introduces-new-...

"Designed exclusively for tracking objects, and not people or pets"

(emphasis mine)


That’s just so you don’t sue them over your lost dog.

I think it’s also for practical reasons: your dog needs to be near a person with an iPhone. If the dog is in the middle of the woods it won’t show up. Most objects require a person to move them, so the chances of them being near an iPhone are much higher.

Or your dog eating the AirTag with the button battery inside it
