
With all the buzz about orchestration in the age of CLI agents, there doesn't seem to be much talk about vim + tmux with send-keys (a blessing). You can run as many windows and panes as you want, doing different things across multiple projects.
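If it helps, here is roughly how that looks when driven from a script rather than by hand. A minimal Python sketch; the session/pane targets and the "agent" command are made up, stand-ins for whatever CLI agent you actually run:

  import subprocess

  def send(pane, command):
      # Type the command into the given tmux pane and press Enter.
      subprocess.run(["tmux", "send-keys", "-t", pane, command, "Enter"], check=True)

  # Hypothetical layout: one window per project, one agent per pane.
  send("agents:0.0", "agent 'fix the failing tests in project-a'")
  send("agents:0.1", "agent 'write docs for project-b'")

  # Read back what a pane printed so a wrapper script can react to it.
  output = subprocess.run(
      ["tmux", "capture-pane", "-t", "agents:0.0", "-p"],
      check=True, capture_output=True, text=True,
  ).stdout
  print(output)

send-keys plus capture-pane covers most of what a fancier orchestrator UI would do for you.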

The way I see it, using tmux to orchestrate multiple agents is an intermediate step until we get a UI that can be a product offering. Assuming we get orchestration to the level it has been touted, there is a world where tmux is unnecessary for the user. You would just type something into one pane in which the "overlord" agent is running (the "mayor" if we're talking Gas Town lingo) and that agent will handle all the rest. I doubt jumping between panes is going to stick around as the product offering evolves.

If doing it directly fails (not surprising), wouldn't the next thing (maybe the first thing) to do be to have the AI write a codemod that does what needs to be done, then apply the codemod? Then all you need to do is get the codemod right and apply it to as many files as you need. Seems much more predictable and context-efficient.
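To make that concrete, a codemod can be as dumb as a script you review once and then run across the tree. A rough Python sketch; the transform itself (renaming old_client.fetch to new_client.get) is entirely made up:

  import pathlib
  import re

  # Hypothetical transform: rename old_client.fetch(...) to new_client.get(...).
  PATTERN = re.compile(r"\bold_client\.fetch\(")
  REPLACEMENT = "new_client.get("

  def apply_codemod(root="."):
      changed = []
      for path in pathlib.Path(root).rglob("*.py"):
          text = path.read_text()
          new_text = PATTERN.sub(REPLACEMENT, text)
          if new_text != text:
              path.write_text(new_text)
              changed.append(str(path))
      return changed

  if __name__ == "__main__":
      print(apply_codemod())

The AI only has to get this one file right; applying it to 500 files is then mechanical.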

This should work really well, but you still need to first ensure the agent is able to test the code (both through automated tests and "manually" poking at it) so it can verify the changes made actually work.

so like GraphQL?

My guess is business (if they are doing well on the platform)


I see a service like this as being in the IP lookup API category (like ipinfo.io), but I wanted to mention that for this (and IP lookup, captcha, etc.) I would expect that if the service is down you allow the registrations and review them later, rather than simply preventing all registrations.
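To make the fail-open idea concrete, something along these lines. A Python sketch; the endpoint, response field, and timeout are all made up:

  import requests

  def check_username(username):
      # Fail open: if the check service is down or slow, allow the signup
      # and queue it for manual review instead of blocking everyone.
      try:
          resp = requests.get(
              "https://example-username-api.test/check",
              params={"username": username},
              timeout=2,
          )
          resp.raise_for_status()
          reserved = resp.json().get("isReserved", False)
          return {"allow": not reserved, "review_later": False}
      except requests.RequestException:
          return {"allow": True, "review_later": True}

Worst case, during an outage you let a few bad names through and clean them up afterwards, which beats turning away every new user.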


Interesting. I think you're right (on the API category this falls under). Also love the approach of keeping this API async. Makes so much more sense that way.


Ok so taylorswift is reserved but taylor_swift and realtaylorswift can be used? It seems like impersonation would still be a problem.


I thought about this and decided against complicating the ways in which this can be restricted. Honestly, this is a super simple challenge to solve. Perhaps I should introduce this as an API parameter to detect variations; that way, not just taylor_swift but t_aylorswift, ta_ylorswift, etc. could also be detected and flagged.

As for realtaylorswift, I thought about that too. I don't think (and this is my personal opinion, obviously) most platforms would want to restrict this, because then it really becomes unmanageable. I could obviously be wrong though, and these could very easily be introduced to the API as well (i.e. detect obvious username patterns); I'm totally open to adding that as an API parameter too.
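To make the variation idea concrete, the parameter could just normalize before lookup. A quick Python sketch; the reserved set here is a stand-in, not the real dataset:

  RESERVED = {"taylorswift", "billgates"}  # stand-in for the real dataset

  def normalize(username):
      # Strip separators and lowercase so taylor_swift, t_aylorswift,
      # Taylor.Swift, etc. all collapse to the same key.
      return "".join(ch for ch in username.lower() if ch.isalnum())

  def is_variant_of_reserved(username):
      return normalize(username) in RESERVED

  print(is_variant_of_reserved("t_aylorswift"))    # True
  print(is_variant_of_reserved("realtaylorswift")) # False; prefix/suffix patterns need their own rule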


Friend, with respect, these "simple challenges" really start to add up very quickly, especially once you hit the edge cases.

Highly recommend you read this and similar posts: https://www.kalzumeus.com/2010/06/17/falsehoods-programmers-...


Damn! Just read the title and a few lines from the post but will definitely go through it fully and thoroughly. Thanks for sharing.

I didn't mean to downplay the complexity of the challenge. I was mostly trying to convey that the specific cases being discussed should be something I could quickly solve and incorporate into the API.

You're right about ALL the different kinds of edge cases that exist though, and really, I'm trying to have this API be the go-to solution for them. Clearly, it's still not there. But it will be. I'm now more sure than ever.


> I can safely assume that this dictionary of bad words contains no people’s names in it.

This is a big one for this kind of project, and I've never been sure how usernames for people named Kike should be handled.


Good point. Currently, I've got "kike" flagged as a Spanish dictionary word and also as a public figure. Honestly, the job of this API stops there. It tells the platform that this username needs to be handled differently than "randomusername7346783", which has absolutely no value. Now, what to do with this info is really up to admins/platform owners. They could simply do nothing, flag and monitor, charge a premium, or block outright. Totally their call, but they can now decide that programmatically.
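To make that concrete, a consuming platform could map the categories to a policy however it likes. A rough Python sketch; the category names here are illustrative, not the API's exact values:

  def decide(check_result):
      # Entirely the platform's call; the API only supplies the categories.
      cats = set(check_result.get("categories", []))
      if cats & {"offensive", "profanity"}:
          return "block"
      if cats & {"public_figure", "dictionary_word"}:
          return "flag_for_review"  # or "charge_premium"
      return "allow"

  print(decide({"username": "kike", "categories": ["dictionary_word", "public_figure"]}))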


It definitely should be in a list of offensive terms too (and offensive dictionaries by language could be even more useful; telling moderators why a name was flagged is valuable).


I see. I'll re-run through the categories and the datasets from which I've adopted the names and categories. Either I missed something or it didn't exist in the import in the first place. But noted. Also, thanks :)


Hah, no kidding. I tried just "bill_gates" --

  {
    "username": "bill_gates",
    "isReserved": false,
    "isDeleted": false,
    "categories": []
  }
what's the point of this thing...?


Why would I want billgates to be reserved in the first place, unless I'm Microsoft?

And the definition of a "public figure" is absurdly broad and inconsistent. Some very common names are flagged as reserved for what are extremely minor celebrities at best (an assistant coach of a college basketball team, or an actor with barely any formal credits, for example), and some obscure athletes are marked as reserved while others are not.


Well, to clarify, this API is really for folks who're building platforms that require usernames. For example, imagine you were building the next Twitter or anything else that requires usernames. There, you'd want to know what's happening with these kinds of usernames, which people are now prepared to pay for (premium usernames). Similarly, for cases where the names are offensive or profane, you may want to block outright.

As for the definition of specific categories (more specifically public figures), you're right. Currently, it's just me building this, so I had to decide where to draw the line. I just drew it around the entire earth, which I know is NOT the best approach, but it's the one I went with just to ensure I cover all bases. Honestly, the API will tell you if and why a username could be deemed reserved/premium. What to do with this info is really up to the platforms consuming it. They could let it slide, do nothing, just flag and monitor, block, etc.


It's odd that they focused so much on "it's better than regexes" when it doesn't handle these cases where a regex would do well.


The comment on regex was really because that's what I did when I built the internal reserved-usernames lists for 2 of my URL shortener projects. I love regex, btw. BUT, I don't think regexes cover all of what we need with usernames specifically. Shared some more insights on the thread about variations too (like underscores etc.).


An aside: it looks like there is a certificate error for https://certkit.com/ as it's for *.mscertkit.com (this was on Chromium + Linux)


wow, yea. that's foolish. Fixing.


You want it to be as close to deterministic as possible to reduce the risk of the LLM doing something crazy like deleting a feature or some functionality. Sure, the idea is for reviews to catch it, but it's easy to miss in review when there is a lot of noise. I agree that it's very similar to an offshore team that's just focused on cranking out code versus caring about what it does.


Technically writing calls is also taking the downside.


Technically, yes. But you have to own the stock first (‘cuz writing “naked calls” is not for the faint of heart). Easier and less complicated to just buy puts, especially if you’re looking up “money laundering” in the dictionary.


Not the OP, but yes, you can definitely go with a bigger quant like Q6 if it makes a difference, or you can go with a bigger-parameter model like gpt-oss 120B. A 70B would probably be great for a 128GB machine, though I don't think Qwen has one. You can search Hugging Face for the model you're interested in, often with "gguf" in the query, to get a ready-to-go version (e.g. https://huggingface.co/ggml-org/gpt-oss-120b-GGUF/tree/main). Otherwise it's not a big deal to quantize it yourself using llama.cpp.
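For reference, grabbing a prebuilt GGUF is just a download. A small sketch with huggingface_hub; the repo is the one linked above, and you'd check its file listing for the exact quant you want:

  from huggingface_hub import snapshot_download

  # Download only the GGUF files from the repo, then point llama.cpp (or
  # anything built on it) at the resulting directory.
  local_dir = snapshot_download(
      repo_id="ggml-org/gpt-oss-120b-GGUF",
      allow_patterns=["*.gguf"],
  )
  print(local_dir)

And if you do want to quantize yourself, llama.cpp ships a quantize tool (llama-quantize in recent builds) that takes an f16/bf16 GGUF and a target type like Q6_K.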

