Depends. I have a prompt ready for changes I made manually: it checks the diff, gets the context, and spits out a conventional commit with a summary of the changes. I check, correct if needed, and add the ticket number. It's faster because it types really fast, no time spent thinking about phrasing or remembering the changes, and it's usually way more complete than what I would have written, given time constraints.
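For anyone curious, here's a stripped-down sketch of the shape of that flow, not my actual prompt; the wording and the TICKET-XXX placeholder are just illustrative. It grabs the staged diff, wraps it in a reusable prompt, and prints it so you can pipe it into whatever LLM tool you use:

```python
#!/usr/bin/env python3
"""Assemble a commit-message prompt from the staged diff."""
import subprocess

PROMPT = """You are writing a git commit message.
Read the diff below and produce a Conventional Commits message
(type(scope): subject, then a short bullet summary of the changes).
Leave a TICKET-XXX placeholder so I can fill in the ticket number.

Diff:
{diff}
"""

def build_prompt() -> str:
    # Staged changes only, so the message matches what actually gets committed.
    diff = subprocess.run(
        ["git", "diff", "--staged"],
        capture_output=True, text=True, check=True,
    ).stdout
    return PROMPT.format(diff=diff)

if __name__ == "__main__":
    # Print the assembled prompt; pipe it into your LLM tool of choice.
    print(build_prompt())
```

Printing instead of calling a specific model keeps it tool-agnostic.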
If I’m using a CLI:
the agent already has:
- the context from the chat
- the ticket number, either from me or because it created the ticket itself
- meta info via project memory or other terminal commands (API calls, etc.)
- info on the commit format from project memory

So in that case it boils down to asking it to commit and update the ticket when we're done with the task. Having a good workflow is key; a sketch of what that project memory can look like is below.
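To make the "project memory" part concrete, here's a rough sketch of the kind of rules file I mean. The section wording and the PROJ-123 ticket key are illustrative only; adapt them to whatever file your agent actually reads (CLAUDE.md, .cursorrules, etc.):

```
## Commit format
- Conventional Commits: type(scope): subject, max ~72 chars
- Body: short bullet list of what changed and why
- Footer: ticket reference, e.g. "Refs: PROJ-123"

## End of a task
- Commit using the format above
- Update the ticket status/comment via the tracker's API
```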
For your question: I still read and validate/correct, in the end I'm the one committing the code! So it's the usual requirements from there. If someone just used their LLM blindly, the results would vary; here they have an approved summary. This is why human in the loop is essential.
Interesting approach. I'm a bit old-school: when I make a change, I already have all the context and beyond in my head, plus all the expectations from colleagues, historical context, etc. that might be useful to remind people about. At least for me, it's easier to formulate the commit from that than to formulate a prompt describing what I want in the commit. I have the same with code. When it's born in my head, it's usually easier for me to write what I want than to explain it to an LLM. I find the LLM a bit lacking in precision when it comes to comprehension, a little like explaining something to a child (with superpowers, but still needing step-by-step directions).
But I find it very interesting how others find prompting more productive for their use cases. It's definitely a new skill. Over the years I've also built up my skill at writing commits, so it comes naturally to me, whereas prompting requires extra effort and thinking in a different way and context, and it doesn't work well for something I already do basically automatically.
I'm from the old guard too, so I get where you're coming from. The thing is, when I find a prompt that works well, I can reuse it, build on it, and create new rules, all in natural language.