Hacker News | chillfox's comments

This honestly sounds like the best proposed solution I have heard.

Agreed. Putting the burden on parents is quite something:

1. You end up being the bad guy when other parents don't restrict their kids' internet usage. Some folks would argue to just not set up restrictions and trust them, but that's a slippery slope and puts kids in a weird position. They start out with innocent YouTube videos, but pretty quickly a web search or even a comment can lead them to strange places. They want to play games online, but creeps abuse that all the time. Even if you trust them not to do anything "wrong", it's a lot to put on their shoulders.

2. Even if you're an expert, the tools for putting restrictions in place are pretty wonky. You can set up a child-protection DNS, but most home routers don't make it easy (or even possible) to set a different DNS server, and it's not particularly hard to circumvent anyway. A proxy would be a more solid solution, but setting one up would be major yak shaving. The "family safety" features (especially Microsoft's) are ridiculously complicated and often quite buggy. Right now I have the problem on my plate that I need to migrate one of my kid's accounts from a local Windows account to a Microsoft account (without them losing all their stuff), because for local accounts the button to add the device seems to just be missing. Naturally, the docs don't mention that; I had to do my own research to arrive at that hypothesis. The amount of yak shaving, setup and configuration required for a reasonable setup is just nuts.

3. If you're not good with tech, I don't see how you have _any_ chance in hell of setting up meaningful restrictions.

Some countries are banning social media - sure, that's one thing. But there are a _lot_ of weird places on the internet, and kids will find something else. I for one would appreciate dedicated devices or modes for kids under 18. That would solve all this stuff in a heartbeat.


After struggling with this problem for a while, we started using Qustodio. It's not perfect by any means, but it's the most broadly effective and usable tool for parental control I've found. Loads better than the confusing iOS native screen time tools.

Isn’t this pretty much how everyone uses agents?

Feels like it’s a lot of words to say what amounts to: make the agent do the steps we already know work well for building software.


I think most of these writeups are packaging familiar engineering moves into LLM-shaped language. In my experience the real value is operational: explicit tool interfaces, idempotent steps, checkpoints and durable workflows run in Temporal or Airflow, with Playwright for browser tasks and a vector DB for state so you can replay and debug failures. The tradeoff is extra latency, token cost and engineering overhead, so expect to spend most of your time on retries, schema validation and monitoring rather than on clever prompt hacks, and use function calling or JSON schemas to keep tool outputs predictable.
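The retries-plus-schema-validation part of that is easy to sketch concretely. Below is a minimal Python sketch under stated assumptions: `call_llm`, the `REQUIRED` schema, and the simulated malformed first reply are all hypothetical stand-ins, not any real model API.

```python
import json

# Hypothetical stand-in for a model call; a real system would hit an LLM API.
# It simulates a malformed reply on the first attempt, valid JSON afterwards.
def call_llm(prompt, attempt):
    if attempt == 0:
        return "not json"
    return json.dumps({"tool": "search", "query": "cache invalidation"})

# A tiny hand-rolled "schema": required keys and their expected types.
REQUIRED = {"tool": str, "query": str}

def validate(payload):
    """Check the decoded tool call against the minimal schema above."""
    return isinstance(payload, dict) and all(
        isinstance(payload.get(k), t) for k, t in REQUIRED.items()
    )

def get_tool_call(prompt, max_retries=3):
    """Retry until the model emits output matching the schema."""
    for attempt in range(max_retries):
        raw = call_llm(prompt, attempt)
        try:
            payload = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed JSON: retry rather than crash the workflow
        if validate(payload):
            return payload
    raise ValueError("model never produced a schema-valid tool call")
```

In a production setup you'd swap the hand-rolled check for a real JSON Schema validator and log every failed attempt, but the shape of the loop - parse, validate, retry - is the part that eats the engineering time.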

> I think most of these writeups are packaging familiar engineering moves into LLM-shaped language.

They are, and that's deliberate.

Something I'm finding neat about working with coding agents is that most of the techniques that get better results out of agents are techniques that work for larger teams of humans too.

If you've already got great habits around automated testing, documentation, linting, red/green TDD, code review, clean atomic commits etc - you're going to get much better results out of coding agents as well.

My devious plan here is to teach people good software engineering while tricking them into thinking the book is about AI.


G is posting this slop so Anthropic sends him his dinner invitation this month, give him a break.

I can't believe how far down I had to scroll before someone called the OP out for not having actually read the article and just decided to make up their own topic.

Aren’t dependent types replicating the object-oriented inheritance problem in the type system?

No, unless you mean the problem of over-engineering? In which case, yes, that is a realistic concern. In the real world, tests are quite often more than good enough. And since the remaining logic needs to be tested anyway, those tests end up covering all the same cases a half-assed type system could assert, so the type system doesn't become all that important in the first place.

A half-assed type system is helpful for people writing code by hand. Then you get things like the squiggly lines in your editor and automated refactoring tools, which are quite beneficial for productivity. However, when an LLM is writing code, none of that matters. It doesn't care one bit whether the failure report comes from the compiler or the test suite. It's all the same to it.


Outside of work I don't know anyone who pays for AI.

But I have noticed that everyone seems to be using ChatGPT as the generic term for AI. They will google something and then refer to the Gemini summary as "ChatGPT says...". I tried to find out what model/version one of my friends was using when he was talking about ChatGPT and it was "the free one that comes with Android"... So Gemini.


Yeah, but PayPal is an even bigger pain.


If open source maintainers want AI code, they are fully capable of running an agent themselves. If they want to experiment, then again, they are capable of doing that themselves.

What value could a random stranger running an AI agent against some open source code possibly provide that the maintainers couldn't provide better themselves if they were interested?


Exactly! No one wants unsolicited input from an LLM; if they wanted one involved, they could just use it themselves. Pointing an "agent" at random open source projects is the code equivalent of "ChatGPT says..." answers to questions posted on the internet. It just wastes the time of everyone involved.


Having used AI to write docs before, the value is in the guidance and review.

I started out by telling the AI the common issues people get wrong and gave it the code. Then I read (not skim, not speed-read, actually read and think) the entire thing and asked for changes. Then repeat the read-everything, think, ask-for-changes loop until it’s correct, which took about 10 iterations (most of a day).

I suspect the AI would have provided zero benefit to someone who is good at technical writing, but I am bad at writing long documents for humans so likely would just not have done it without the assistance.


Bad example, you really should just write caching yourself. It’s far too little code to pull in a dependency for, and if you write it yourself in every project that needs it you will get good at it, so cache invalidation bugs won’t be an issue.
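For what it's worth, a basic version really is small. Here's a minimal sketch of a hand-rolled cache in Python; the class name, the TTL approach, and invalidating stale entries on read are my own choices for illustration, not something from the comment.

```python
import time

class TTLCache:
    """Minimal time-based cache; entries expire ttl seconds after being set."""

    def __init__(self, ttl=60.0):
        self.ttl = ttl
        self._store = {}  # key -> (value, expiry timestamp)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires = entry
        if time.monotonic() >= expires:
            del self._store[key]  # invalidate stale entries lazily, on read
            return default
        return value

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)
```

That's roughly twenty lines; add explicit invalidation or a size cap if a project needs it, and the whole design stays small enough to hold in your head.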


Poe's Law strikes again!


Looking at https://arcprize.org/leaderboard the cost/task is about the same as Opus 4.6.

