Hacker News | o_nate's comments

I don't think the author is missing this distinction. It seems that you agree with his main point, which is that companies bragging about LOCs generated by AI should be ignored by right-thinking people. It's just that you buried that substantive agreement at the end of your "rebuttal".

Between this and the debate about ideal method length with Ousterhout, my respect for Uncle Bob is plumbing new depths.


This is a cool idea. I wish something like this existed for C#.


The thing that most surprises me is that IDEs don't have a standard protocol for this, so you basically need a custom test runner if you want one-click "this snapshot failed; update it" self-modifying tests.

I wrote WoofWare.Expect for F#, which has an "update my snapshots on disk" mode, but you can't go straight from test failure to snapshot update without a fresh test run, even though I'm literally outputting a patience diff that an IDE could apply if it knew how.

Worse, Rider, for example, is really bad at knowing when files have changed underneath it, so you have to manually tell it to reload the files after running the update, or else you clobber them in the editor.
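For anyone unfamiliar with the pattern: a snapshot test compares output against a stored "golden" file and, in update mode, rewrites the file instead of failing. A minimal sketch in Python (the snapshot directory and the UPDATE_SNAPSHOTS environment variable are illustrative choices, not any particular library's API):

```python
import os
from pathlib import Path

SNAPSHOT_DIR = Path("snapshots")  # illustrative location

def check_snapshot(name: str, actual: str) -> None:
    """Compare `actual` against the stored snapshot, or rewrite it in update mode."""
    snap = SNAPSHOT_DIR / f"{name}.txt"
    if os.environ.get("UPDATE_SNAPSHOTS") == "1":
        SNAPSHOT_DIR.mkdir(exist_ok=True)
        snap.write_text(actual)  # clobber the old golden output on disk
        return
    expected = snap.read_text()
    assert actual == expected, f"snapshot {name!r} changed"

# First run in update mode to record, then plain runs compare against disk.
os.environ["UPDATE_SNAPSHOTS"] = "1"
check_snapshot("greeting", "hello, world")   # records the snapshot
del os.environ["UPDATE_SNAPSHOTS"]
check_snapshot("greeting", "hello, world")   # passes: matches stored file
```

The one-click IDE flow the comment wishes for would amount to the IDE applying the diff itself instead of asking you to re-run the whole suite with the update flag set.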


> ...if you want one-click "this snapshot failed; update it" self-modifying tests.

I am envisioning the PR arguments now when the first instinct of the junior developer is to clobber the prior gold standard outputs. Especially lovely when testing floating point functionality using tests with tolerances.

Some things should be hatefully slow so one's brain has sufficient chance to subconsciously mull over "what if I am wrong?"
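The floating-point worry is concrete: exact-match snapshots and tolerance-based assertions fail in different ways, and re-recording a snapshot silently widens the accepted behavior. A toy Python illustration (the specific numbers are chosen only to show the last-bits mismatch):

```python
import math

# A computation whose result differs in the last bits after a refactor.
old_golden = 0.1 + 0.2   # what the snapshot recorded: 0.30000000000000004
new_result = 0.3         # the refactored code now produces the "nicer" value

# Exact snapshot comparison fails, tempting a reviewer to just re-record.
exact_match = (new_result == old_golden)

# A tolerance-based assertion passes, and documents the accepted error bound.
close_enough = math.isclose(new_result, old_golden, rel_tol=1e-9)

print(exact_match, close_enough)  # exact match fails, tolerant check passes
```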



An agentic coding tool like GitHub Copilot will do this for you.


Lots of good observations in this article. I think that barring the possibility that LLMs become able to generate perfect, bug-free code, the question of how AI-generated code can be integrated with TDD is an important one. And as the author correctly points out, simply having the AI generate tests in addition to code is not the answer.


Whatever largest number you can express in your system, I can represent a larger one in only one bit, using the following specification.

0 = your largest number
1 = your largest number + 1


To be pedantic, that is an instance of the Berry paradox [1], and no, you cannot [2], as that would be a violation of Gödel's incompleteness theorems.

edit: To clarify further, you could create a new formal language L+ that axiomatically defines 0 as "the largest number according to L", but that would no longer be L; it would be L+. For any language with rules at this level of power, you cannot make that statement without creating a new language with even more powerful rules. That is, each specific set of rules is capped; you need to add more rules to raise that cap, but the result is a different language.

[1] https://en.wikipedia.org/wiki/Berry_paradox

[2] https://terrytao.wordpress.com/2010/11/02/the-no-self-defeat...


It's not a paradox, because there is nothing logically inconsistent in my definition, unlike the Berry paradox.


To be more pedantic, yes you can, but only with a meta-language.


Feels like the answer is probably uncertainty about inflation?


It depends on the specifics of what was said. As the complaint states, OpenAI has yet to release the full transcripts.


Encouraging someone to commit a crime is aiding and abetting, and is also a crime in itself.


Perhaps it would be useful to define what we mean by "commoditization" in terms of software. I would say a software product that is not commoditized is one where the brand can still command a premium, which in the world of software generally means people are willing to pay non-zero dollars for it. Once software is commoditized, it generally becomes free or ad-supported, or is bundled with another non-software product or service.

By this standard, I would say there are very few non-commoditized consumer software products. People pay for services that are delivered via software (e.g. Spotify, Netflix), but in this case the software is just the delivery mechanism, not the product. So perhaps one viable path for chatbots to avoid commoditization would be to license exclusive content, but in this scenario the AI tech itself becomes a delivery mechanism, albeit a sophisticated one.

Otherwise it seems selling ads is the only viable strategy, and precedent shows that the economics of that only work when there is a near monopoly (e.g. Meta or Google). So it seems unlikely that a lot of the current AI companies will survive.


I guess I'm lucky not to have worked at a place with a role for software architects who don't actually write code. I honestly don't know how that would work. However, I think I can appreciate the author's point. Any sufficiently complex piece of existing software is kind of like a chess game in progress. There is a place for general principles of chess strategy, but once the game is going, general strategy is much less relevant than specific insights into the current state of play, and a player would probably not appreciate advice from someone who has read a lot of chess books but hasn't looked at the current state of the board.

