This has definitely been discussed. There have even been some projects, although I haven't checked on the status of any of them lately. As best I can recall, there are some specific structural reasons why it's hard to train LLMs this way, but I don't recall all the details offhand.
Let's say you're right. What good is going to come from posting a comment calling attention to that? As far as I can tell, that's just more noise masking whatever signal is present in the conversation.
The guidelines also say:
> Please don't post insinuations about astroturfing, shilling, brigading, foreign agents, and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data.
I would think "accusations of comments being authored by an LLM" fall under the same guideline. Probably better to just email the mods than to post a comment that nobody wants to read, and that isn't guaranteed to be seen or acted on by them.
This weekend I spent a lot of time on an Agent Registry idea I wanted to try out. The basic idea is that you put your Agent code in a Docker image and run the container with a few specific labels; the system detects the container coming online, grabs the AgentCard, and stores it in the Registry. The Registry then exposes (in the current version) a REST interface for searching Agents and performing other operations.
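As a rough sketch of the Registry side (the class and field names here are hypothetical illustrations, not the actual implementation), the in-memory store backing the REST search might look like:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.Collectors;

// Hypothetical sketch: container labels map to a stored AgentCard,
// keyed by agent name. Registration is driven by container lifecycle
// events; search backs the REST endpoint.
public class AgentRegistry {

    // Minimal stand-in for an A2A AgentCard.
    public record AgentCard(String name, String url, String description) {}

    private final Map<String, AgentCard> agents = new ConcurrentHashMap<>();

    // Called when a labeled container is detected coming online.
    public void register(AgentCard card) {
        agents.put(card.name(), card);
    }

    // Called when the container goes away.
    public void deregister(String name) {
        agents.remove(name);
    }

    // Simple substring match on name/description for the search endpoint.
    public List<AgentCard> search(String query) {
        String q = query.toLowerCase();
        return agents.values().stream()
                .filter(c -> c.name().toLowerCase().contains(q)
                          || c.description().toLowerCase().contains(q))
                .collect(Collectors.toList());
    }
}
```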
But once all the low-level operations are done, my plan is to implement an A2A Agent as the sole Agent listed in the AgentCard at $SERVER_ROOT/.well-known/agent-card.json, which is itself an "AgentListerAgent". You can then send messages to that Agent to get details about all the registered Agents. That keeps everything pure A2A and works around the fact that (at least in the current version) A2A doesn't have any direct support for putting multiple Agents on the same server (without using different ports). There are proposals out there to modify the spec to support that kind of scenario directly, but for my money, just having an AgentListerAgent as the "root" Agent should work fine.
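As a rough illustration only (the field names follow the general shape of an A2A AgentCard, but the exact values and the skill definition here are hypothetical), the root card might look something like:

```json
{
  "name": "AgentListerAgent",
  "description": "Root agent that lists all Agents registered on this server",
  "url": "https://example.com/a2a",
  "version": "0.1.0",
  "capabilities": { "streaming": false },
  "skills": [
    {
      "id": "list-agents",
      "name": "List registered agents",
      "description": "Returns details about all Agents currently in the Registry"
    }
  ]
}
```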
Next steps will include automatically defining routes in a proxy server (APISIX?) to route traffic to the Agent container. And I think I'll probably add support for Agents beyond just A2A based Agents.
And of course the basic idea could be extended to all sorts of scenarios. Also, right now this is all based on Docker, using the Docker system events mechanism, but I think I'll want to support Kubernetes as well. So plenty of work to do...
The main thing I self-host (at home, as opposed to in the cloud) these days is Ollama. I did build a big, beefy server with an AMD RX 7900 XTX card a couple of years ago for AI experimenting, and the main thing I do with it lately is run Ollama for local models.
I self-host a bunch of other stuff (bugzilla, mediawiki, suitecrm, apache roller, etc.) but all of that is on VPSes from OVH.
As in literally today? I spent some time exploring how to programmatically interact with Docker containers using the docker-java library. Specifically, I was playing around with the scenario where a container hosts a program that just listens on stdin, processes the input, and writes to stdout, while another process uses the "docker attach" mechanism to connect to that container to send/receive messages.
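The container-side half of that scenario is pretty trivial; a minimal hypothetical version of the stdin/stdout program (leaving out the docker-java attach wiring on the client side) might look like:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

// Hypothetical container-side program: reads lines from stdin,
// "processes" each one, and writes the result to stdout. Another
// process can then use "docker attach" to exchange messages with it.
public class EchoAgent {

    // Placeholder processing step; a real agent would do actual work here.
    static String process(String line) {
        return "echo: " + line.toUpperCase();
    }

    public static void main(String[] args) throws Exception {
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(process(line));
            System.out.flush(); // flush per line so the attached peer sees it immediately
        }
    }
}
```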
Will I ever use this mechanism for anything, especially compared to the alternative of some sort of socket-based approach? Possibly not, but I just wanted to play around with it.
Going back a day or two, I've been playing around with SCXML[1] and the Commons SCXML[2] library. It's pretty neat stuff for doing state-machines. Now I want to explore if I can use the underlying state machine machinery in Commons SCXML, without necessarily using an XML file to define the state-machine.
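As a rough sketch of the direction I mean (this is not the Commons SCXML API, just a hypothetical hand-rolled transition table), defining a state machine directly in code rather than in an SCXML document could look like:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical minimal state machine: states and transitions are
// defined programmatically as a (state, event) -> next-state table,
// instead of being parsed from an SCXML file.
public class TinyStateMachine {

    private final Map<String, Map<String, String>> transitions = new HashMap<>();
    private String current;

    public TinyStateMachine(String initial) {
        this.current = initial;
    }

    // Register a transition: in `state`, on `event`, go to `next`.
    public TinyStateMachine on(String state, String event, String next) {
        transitions.computeIfAbsent(state, s -> new HashMap<>()).put(event, next);
        return this;
    }

    // Fire an event; returns true if a transition was taken.
    public boolean fire(String event) {
        String next = transitions.getOrDefault(current, Map.of()).get(event);
        if (next == null) {
            return false; // no transition defined for this (state, event) pair
        }
        current = next;
        return true;
    }

    public String state() {
        return current;
    }
}
```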
Also spent some time researching other Java based open source libraries for working with state machines. There are a few, but most of them don't seem to be very well supported, which is a hair disappointing.
That's ridiculous — AI-generated comments are no more common now than they ever were. Moreover, even if they were, so what? The real kicker is, the AIs are smarter than you meatbags anyway and <strike>we</strike> they are going to take over no matter what you do.
> probably won’t want to use software you had an LLM write up when they could have just done it themselves to meet their exact need
Sure... to a point. But realistically, the "use an LLM to write it yourself" approach still entails costs, both up-front and ongoing, even if they may be much smaller than in the past. There's still reason to use software that's provided "off the shelf", and to some extent there's reason to take an "I don't care how you wrote it, as long as it works" mindset.
> came from a bit of innovation that LLMs are incapable of.
I think you're making an overly binary distinction on something that is more of a continuum, vis-à-vis "written by human" vs. "written by LLM". There's a middle ground of "written by human and LLM together". I mean, the people building stuff with something like SpecKit or OpenSpec still spend a lot of time up-front defining the tech stack, requirements, features, guardrails, etc. of their project, and iterating on the generated code. Some probably even still hand-tune some of the generated code. So should we reject their projects just because they used an LLM at all? I don't know. At least for me, that might be a step further than I'd go.
> There's a middle ground of "written by human and LLM together".
Absolutely, but I’d categorize that ‘bit’ as the innovation from the human. I guess it’s usually just ongoing validation that the software is headed down a path of usefulness, which is hard to specify up-front and which, by definition, only the user (or a very good proxy) can do (and even they are usually bad at it).
[1]: https://wiki.c2.com/?SufficientlySmartCompiler