How about you find out for yourself? Keep a chat window or an agent open and ask it how it could help with your tasks. My git commit messages and GitLab tickets have been written by AI for a year now, and they're way better than anything I would half-heartedly do on my own; really good commit messages too. Claude even reminds me to create/update the ticket.
I find the commits written by AI often inadequate, as they mostly just describe what is already in the diff, but miss the background on why the change was needed, why this approach was chosen, and so on: the important stuff...
Then ask it to write the commit differently, or explain why in the prompt. Edit: I start by creating the ticket with Claude plus a terminal tool; the title and description give context to the LLM, then we do the task, then commit and update the ticket.
And in the time it takes to do all of that, the guy could have already written a meaningful commit message and be done with that issue for the day.
You only have to describe how you want commits written once, and then the AI will just handle it. It's not that any of us can't write good commits, but humans get tired, lose focus, get interrupted, etc.
Just in my short time using Claude Code, it generally writes pretty good commits; it often adds more detail than I normally would, not because I'm not capable, but because there's a certain amount of cognitive overhead in writing good commits, and it gets harder as our mental energy decreases.
I found this custom command [1] for Claude Code and it reminded me that there's no way a human can consistently do this every single time, perhaps a dozen times per day, unless they're doing nothing else (no meetings, no phone calls, and so on). And we know that's not possible:
# Git Status Command
Show detailed git repository status
*Command originally created by IndyDevDan (YouTube: https://www.youtube.com/@indydevdan) / DislerH (GitHub: https://github.com/disler)*
## Instructions
Analyze the current state of the git repository by performing the following steps:
1. *Run Git Status Commands*
- Execute `git status` to see current working tree state
- Run `git diff HEAD origin/main` to check differences with remote
- Execute `git branch --show-current` to display current branch
- Check for uncommitted changes and untracked files
2. *Analyze Repository State*
- Identify staged vs unstaged changes
- List any untracked files
- Check if branch is ahead/behind remote
- Review any merge conflicts if present
3. *Read Key Files*
- Review README.md for project context
- Check for any recent changes in important files
- Understand project structure if needed
4. *Provide Summary*
- Current branch and its relationship to main/master
- Number of commits ahead/behind
- List of modified files with change types
- Any action items (commits needed, pulls required, etc.)
This command helps developers quickly understand:
- What changes are pending
- The repository's sync status
- Whether any actions are needed before continuing work
Arguments: $ARGUMENTS
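For reference, this reads like one of Claude Code's Markdown-based custom slash commands. A minimal sketch of how such a file is typically wired up, assuming the usual `.claude/commands/` convention; the file name and the example argument here are just illustrative:

    # Drop the command file into the project so Claude Code picks it up as /git-status
    # (or into ~/.claude/commands to make it available in every project).
    mkdir -p .claude/commands
    cp git-status.md .claude/commands/git-status.md

    # Inside a Claude Code session, invoke it as a slash command; anything typed
    # after the command name is substituted for $ARGUMENTS in the file above:
    #   /git-status focus on files under src/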
It's not possible for a human to do what an LLM does at scale, for sure. But that's the difference: humans are not robots, so they will turn the problem around and try to find ways to avoid having to do this in the first place, e.g. minimizing pending changes left around by making small, frequent commits. A lot of invention comes from people being annoyed at doing something over and over again manually. The LLM stirs things up a bit because it provides a completely different way of doing such tasks: you don't have to invent a better process if the LLM can do it repeatedly for a reasonable price. The new pressure then comes from minimizing LLM costs, I guess.
Wishful thinking. They will often ignore your general instructions, due to the statistical nature of their output. Source: I have many such detailed general instructions that routinely get ignored.
These tools aren't magic; if the reasons for a code change live outside the diff, an LLM isn't going to magically fabricate a commit message that supplies that context.
Do you feed the LLM additional context for the commit message, or is it just summarising what’s in the commit? In the latter case, what’s the point? The reader can just get _their_ LLM to do a better job.
In the former case… I’m interested to hear how they’re better? Do you choose an agent with the full context of the changes to write the message, so it knows where you started, why certain things didn’t work? Or are you prompting a fresh context with your summary and asking it to make it into a commit message? Or something else?
Depends. I have a prompt ready for changes I made manually: it checks the diff, gets the context, and spits out a conventional commit with a summary of the changes; I check, correct if needed, and add the ticket number. It’s faster because it types really fast, with no time spent thinking about phrasing and recalling the changes, and it’s usually way more complete than what I would have written, given time constraints.
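Not the exact prompt, but a rough sketch of that manual-changes flow, assuming the Claude Code CLI's non-interactive print mode (`claude -p`); any comparable CLI tool would slot in the same way:

    # Draft a conventional commit message from the staged diff.
    git diff --staged \
      | claude -p "Write a conventional commit message for this diff. Summarize what changed and, where the diff makes it clear, why. Output only the message." \
      > /tmp/commit-msg.txt

    # Review and correct the draft, add the ticket number, then commit with it.
    git commit -e -F /tmp/commit-msg.txt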
If I’m using a CLI:
the agent already has:
- the context from the chat
- the ticket number, either from me or from when it created the ticket
- meta info via project memory or other terminal commands (API calls, etc.)
- info on the commit format from project memory
So in that case it boils down to asking it to commit and update the ticket when we’re done with the task. Having a good workflow is key; a rough sketch of the project-memory piece is below.
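As a hypothetical example of the commit-format part of that project memory (the file name CLAUDE.md and the ticket prefix are just placeholders for whatever a given project uses):

    # Append commit conventions to the project memory file so the agent
    # doesn't need them repeated in every prompt.
    cat >> CLAUDE.md <<'EOF'
    ## Commit conventions
    - Use conventional commits: type(scope): short summary
    - The body explains why the change was needed, not just what changed
    - Reference the ticket in the footer, e.g. "Refs: PROJ-123"
    EOF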
For your question: I still read and validate/correct; in the end, I’m the one committing the code! So it’s the usual requirements from there. If someone used their own LLM the results would vary; here they have an approved summary. This is why a human in the loop is essential.
Interesting approach. I'm a bit old-school: when I make a change I already have all the context and more in my head, plus all the expectations from colleagues, historical context, etc. that might be useful to remind people about. At least for me, it is easier to write the commit from that than to formulate a prompt describing what I want in the commit. I have the same with code. When it is born in my head, it's usually easier for me to write what I want than to explain it to an LLM. I find the LLM a bit lacking in precision when it comes to comprehension, a little like trying to explain something to a child (one with superpowers, but that still needs step-by-step directions).
But I find it very interesting how others find prompting more productive for their use cases. It's definitely a new skill. Over the years I've also built up my skill at writing commits, so it comes naturally to me, as opposed to prompting, which requires extra effort and thinking in a different way and context, and doesn't work well for something I already do basically automatically.
I’m from the old guard, I get where you’re coming from. The thing is when I find a prompt that works well, I can reuse it, build on it, create new rules, all in natural language.
You are saying that people need to write code so complex that an LLM that can pass the LSAT with flying colors is unable to summarize the changes in a few sentences, or else their work is not critical? That is a high bar.
I am not sure what tests LLMs are passing these days. Every day it's some other metric of no practical use. You know, we make money by delivering working code and features. What I do know is that for myself and the people working for me at my company, we hit the limits of their practical usage so often, not even counting the casual removal of entire parts of the code, that we recently decided to revert from agents back to using them only in conversational mode and only for select tasks. Whoever claims these tools are revolutionary is clearly not using them intensively enough or does not have a challenging use case. We get it, they can quickly spit out a react app for you, the frontend devs and people who were never good at maths are finally "good" at something vaguely technical. However, try using them every day for several months on production-ready products and your opinion will likely change.
>We get it, they can quickly spit out a react app for you, the frontend devs and people who were never good at maths are finally "good" at something vaguely technical
Plenty of us are using LLM/agentic coding in highly regulated production applications. If you're not getting very impressive results in backend and frontend, it's purely a skill issue on your part. "This hammer sucks because I hit my thumb every time!"
Again mate, not relevant. How about this: show me one major application that was developed mainly with LLMs and was a huge success by any measure (it does not have to be profitability). The benchmarks show what benchmarks show, but we have yet to see a killer app built by LLMs (or mostly by LLMs).
You started with insulting someone for using an LLM to write git commit messages, and in order to defend that statement you say that an LLM hasn't written a killer app by itself.
I am not really sure what to say, except that if you are simply looking for a way to insult people, just admit you are a mean person and you won't have to justify it in ways that make no sense. But if you really only hate LLMs, you can do that in ways that don't involve insulting people. To be so full of disdain for a technology that it turns you irrational should be a bit concerning.
Insulting, really? I merely made a statement about the nature of their work. That's not an insult. Please re-read and understand before conflating. Also, you fully misunderstood my comments about LLMs. If I had disdain, I would not have dished out thousands of USD for my team to use them. I am merely saying that they are not what the hype-makers would have you believe. Now show me that one killer app that someone successfully vibe-coded. All we see is theoretical bullshit, benchmarks, etc., but no real-world a-ha moment.
You just felt like coming into a thread which was bound to be populated by people talking about using LLMs for coding, to let them know that their work isn't important because they use an LLM.
It seems to me the only reason someone would feel the need to do such a thing is to validate their own experience. If everyone else seems to be finding value in a tool, but you cannot, it must be because everyone else just isn't doing important things with it.
As I said earlier, I would be concerned about such behavior if I found myself doing it.
Are you also that cocky when you forget to turn off your coding agent during coding interviews, or when you turn in commits with 300 deletions and 700 additions that some poor soul has to review? The number of people like you we reject in job applications definitely seems to be increasing.
You can't comment like this on Hacker News, no matter what you're replying to. If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.
I've replied to them too, but it takes at least two people to make a flamewar and you could have de-escalated or stopped replying at any time. Rather than pointing the finger at someone else, please make an effort to observe the guidelines and show you're sincere about using HN as intended. If you want others to be held to a high standard you need to hold yourself to a high standard.
The right thing to do if someone else is breaking the guidelines is flag their comments and email us at hn@ycombinator.com so we can take action.
By the way, you were right that the other person was inflammatory, and I should have called them out at the same time; it's just that we're often working quickly through lists of flagged comments and don't always think to look over the whole subthread.
In short, the answer to "why didn't you also call out that other user's abusive comment?" is almost always that we hadn't seen it yet, but we likely would have if you'd flagged it or emailed us.
My man, I've been paying for a GitHub Copilot Business license and some additional Pro+ accounts for my entire team for more than a year and a half, with top-tier access to models like Claude Sonnet, Opus and the rest of the bunch. We even had a generous overage policy. I may have been a bit excited about the tech in 2021, when it was not yet clear just how much of a dead end it is. I've seen a fair share of cocky morons like yourself forgetting to turn the VS Code extension or the CLI assistance off when interviewing with us and going 'let me just turn that off', then continuing to demonstrate their utter incompetence and obvious dependence on LLMs. But what do I know? I never had my production database deleted by an LLM. Although we haven't seen disasters on the scale of this buddy: https://www.theregister.com/2025/07/21/replit_saastr_vibe_co..., we did have some close calls, which is why we reverted usage to strictly conversational mode with heavy supervision requirements. Maybe also explain your excitement about LLMs to this fresh thread here https://news.ycombinator.com/item?id=44651485 . It's ok to be junior and to be excited about stuff. But you obviously lack the heavy-duty exposure that would open up your eyes a bit. Just be careful not to delete your employer's database.
> My point stands, go get a feel of what’s happening in 2025 with coding agents like Claude code or the one from this article, or you’ll be left behind. I’m done arguing with a smug man child
Junior, first re-learn to read correctly, as LLM dependency seems to have impacted your reading comprehension skills. I never said I only used them in 2021 (Claude/Anthropic did not even exist back then), as you seem to be falsely constructing in your head. I am saying I've been using them since 2021 and paying for a generous usage profile for my team for the last 18 months. Recently we decided to drop agentic usage as it is absolute crap and a net negative. I am sorry to pop your bubble, but the only person left behind is you - your arguments even sound like an LLM hallucination. Are you sure you did not ask Claude to give you those arguments to shoot back at me?