
Again, how is that different from humans? I’m not going around trying to prove my code correct when I write it manually.



I write code to solve a problem. Not code that looks like it solves the problem if a non-technical client squints at it.

And if you don't prove your code, do you not design at all then? Do you never draw state diagrams?

Every design is an informal proof of the solution. I rarely write formal proofs. Most of the time I write down enough for myself to be convinced that the design solves the problem.


Yes, you can dedicate extra tokens to drawing state diagrams; the LLM can actually do that. If you don't have it generate one or more design documents before it writes any code, you are doing it wrong. I still don't get how that is different from what humans are doing.

> Most of the time I write down enough for myself to be convinced that the design solves the problem.

Again, why do you assume we aren't doing the same thing with LLMs?

1. Spec given

2. Ask LLM to write a bunch of design documents based off of spec

3. Ask LLM to identify edge cases

4. Ask LLM to turn those edge cases into a test plan involving N tests

5. Ask LLM to write tests

6. Ask LLM to write commented code

7. Ask LLM to run the tests against the code; on failures, determine whether the test or the code is wrong, and go back to the appropriate step to fix the test and/or the code (see the sketch below).
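
To make that concrete, here is a rough sketch of the loop in Python. The ask_llm() helper, the file names, and the prompts are purely illustrative assumptions (stand-ins for whatever chat-completion API and project layout you use), not any particular vendor's interface:

    import subprocess

    def ask_llm(prompt: str) -> str:
        """Hypothetical wrapper around your chat-completion API of choice."""
        raise NotImplementedError("plug in your own client here")

    def run_tests() -> tuple[bool, str]:
        # Run the project's test suite and capture the output for the model to inspect.
        result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        return result.returncode == 0, result.stdout + result.stderr

    spec = open("spec.md").read()

    # Steps 2-6: design docs, edge cases, test plan, tests, code.
    design = ask_llm(f"Write design documents for this spec:\n{spec}")
    edge_cases = ask_llm(f"List the edge cases in this design:\n{design}")
    test_plan = ask_llm(f"Turn these edge cases into a test plan:\n{edge_cases}")
    tests = ask_llm(f"Write tests implementing this plan:\n{test_plan}")
    code = ask_llm(f"Write commented code satisfying the design and these tests:\n{design}\n{tests}")

    # Step 7: iterate until the suite passes, deciding each time whether
    # the failing test or the code is at fault and regenerating the offender.
    for _ in range(10):
        open("test_feature.py", "w").write(tests)
        open("feature.py", "w").write(code)
        passed, log = run_tests()
        if passed:
            break
        verdict = ask_llm(f"These tests failed:\n{log}\nIs the test or the code wrong? Answer TEST or CODE.")
        if "TEST" in verdict:
            tests = ask_llm(f"Fix the failing tests:\n{tests}\n{log}")
        else:
            code = ask_llm(f"Fix the code:\n{code}\n{log}")

In practice each step is a longer conversation with review in between, but the shape of the loop is the same.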

Whenever I hear someone here on HN imply that the only way to code with an AI is via vibe coding, I just die a bit more inside.


You completely misunderstood what I wrote.

It was a response to you saying: "I'm not going around trying to prove my code correct when I write it manually."

How did you manage to forget what you wrote previously?

Also, in this post you are now suddenly taking the exact opposite position, contradicting your previous point.


I did not contradict my previous point. But now I'm confused about how you think we use LLMs to write code. You made it sound like we just get it to dump out code without any process in between.

You most definitely did contradict yourself. First you said you don't prove anything about the code you write, then you said you do. But that's fine. We can agree to disagree.

And I have not made any statements about how you use LLMs, only about how the LLMs produce code. All statements about how you use LLMs have been made by you, not me. I haven't discussed it since it is not related to the arguments, which are: 1) whether LLMs are goal-oriented and 2) whether humans and LLMs both merely maximize plausibility when writing/generating code.

Both claims that you made. Note, however, that if you are correct in your own points, then you should indeed be able to "just dump out code without any process in between". So if anyone is claiming this, it's you.


You are correct. However, humans sometimes do write stuff that "looks like it solves the problem". A prime example is a student who doesn't know how to answer a question, so they make up a plausible-sounding answer.

As an exam grader, you can easily tell when a student had the mindset of "solving the problem" but made a mistake, and when they had the mindset of "looks like it solves the problem" and just wrote some stuff.




