To be fair, one doesn't need AI to attempt to avoid responsibility and accept undue credit. It's just narcissism; meaning, those who've learned to reject such thinking will simply do so (generally, in the abstract), with or without AI.
I think I'd have to reject it in review. The parameter is unused and should therefore be prefixed with an underscore, or literally named "_", to signal as much to a reader.
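For illustration, here's what that convention looks like in Python (a minimal sketch; the language and names are assumptions, since the original code isn't shown):

```python
# A leading underscore (or a bare "_") signals "intentionally unused".
# Callback and interface signatures often force a parameter you don't need.

def handle_event(event_type, _payload):
    """Dispatch on event type; the payload is accepted for interface
    compatibility but deliberately ignored, as the underscore signals."""
    return "handled:" + event_type

result = handle_event("click", {"x": 10, "y": 20})
```

Without the underscore, a reviewer has to wonder whether ignoring the parameter is a bug; with it, the intent is explicit and many linters stop flagging the unused name.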
IMO, it depends on the events in court; if there was extensive argumentation about that and the judge is finally saying that it's been discussed to death and there's no point bringing it up, that seems fine. (I don't want to read the actual court transcripts to figure out what the attorney is referring to, so this comment is intentionally inconclusive.)
Regardless, their point is that the argument seems faulty. Indeed, their docs going unreviewed seems moot to whether the code goes unreviewed, given there are much stronger reasons to review code than to review documentation; as they wrote, bad documentation doesn't automatically break your application when it's published (there are at least a few more steps involved). Your statement's accuracy is not exclusive to the illogic of an argument that agrees with the statement.
> I don't know if you are just playing devil's advocate
Indeed, that is playing Devil's Advocate but one should remember that such Advocacy is performed to make sure that arguments against the Devil are as strong as they can be. It's not straightforward to see how simply repeating an assertion helps to argue for the veracity of it.
> It appears that CBS sees equal airtime as a very serious threat to their programming.
This seems very dubious given the recent ownership change of CBS and the lack of reason behind the decision. The point the parent comment brings up is that "equal airtime" requires that someone actually request to go on the show and be refused. There is no legitimate cover for CBS' decision as this did not occur. It seems incredibly likely to be one made in fear of political liability rather than legal.
> The point the parent comment brings up is that "equal airtime" requires that someone actually request to go on the show and be refused.
Their lawyers' recommendation, and Colbert's response and behavior, would align with the case if they did refuse guests.
Is there some reference you're going off of related to this, that makes it clear they didn't? Or does Carr possibly have knowledge that they did, as part of the (as the article points out) ongoing investigation, resulting in their lawyers making the recommendation?
Call me a crazy conspiracy theorist, but a strongly left-leaning show, with a strongly left-leaning audience, whose whole routine is making fun of republicans, refusing republican guests does NOT seem all that crazy. I would personally expect it, just to protect their staff from the usual Twitter-mob death threats for "platforming nazis"! I also think this whole thing is unreasonable, but I also think it's unreasonable to have 6 companies control 90% of the media, giving them enough dominance that their guest choices can even be considered a problem.
> Their lawyers recommendation, and Colbert's response and behavior, aligns with the case if they did refuse guests
Colbert's response and behavior also happen to align with his desire to remain contracted with CBS; the lawyers' recommendation aligns with CBS's desire to cater to the whims of the current US administration.
> Is there some reference you're going off of related to this, that makes it clear they didn't [refuse any guests]?
Colbert would say yes without hesitation. He has no reason to refuse the guest because he would take the opportunity to skewer them. I daresay he would revel in it. This equal airtime requirement does not also require equal consideration of opinion. It seems more prudent to look for evidence that this actually occurred; for example, has anybody come forward to complain about being refused as a guest on Colbert's show? One would think someone concerned about publicity would be very interested to do exactly that (unless it would be defamatory, of course).
> Call me a crazy conspiracy theorist but, a strongly left leaning show, with a strong left leaning audience, whose whole routine is making fun of republicans, refusing republican guests does NOT seem all that crazy.
For the reasons I describe in the previous paragraph, it would be illogical for him to refuse, and additionally for the reason noted in this case: it could be considered illegal to do so. On the topic of conspiracy theories, a more likely one is that this current US administration, known for its bullshit, is just offering more bullshit.
It doesn't seem obvious that it's a problem for LLM coders to write their own tests (if we assume that their coding/testing abilities are up to snuff), given human coders do so routinely.
This thread is talking about vibe coding, not LLM-assisted human coding.
The defining feature of vibe coding is that the human prompter doesn't know or care what the actual code looks like. They don't even try to understand it.
You might instruct the LLM to add test cases, and even tell it what behavior to test. And it will very likely add something that passes, but you have to take the LLM's word that it properly tests what you want it to.
The issue I have with using LLMs is reviewing the test code. Often the LLM will make a 30- or 40-line change to the application code. I can easily review and comprehend this. Then I have to look at the 400 lines of generated test code. While it may be easy to understand, there's a lot of it. Go through this cycle several times a day and I'm not convinced I'm doing a good review of the test code due to mental fatigue; who knows what I may be missing in the tests six hours into the work day?
> This thread is talking about vibe coding, not LLM-assisted human coding.
I was writing about vibe-coding. It seems these guys are vibe-coding (https://factory.strongdm.ai/) and their LLM coders write the tests.
I've seen this in action, though to dubious results: the coding (sub)agent writes tests, runs them (they fail), writes the implementation, runs tests (repeat this step and last until tests pass), then says it's done. Next, the reviewer agent looks at everything and says "this is bad and stupid and won't work, fix all of these things", and the coding agent tries again with the reviewer's feedback in mind.
Models are getting good enough that this seems to "compound correctness", per the post I linked. It is reasonable to think this is going somewhere. The hard parts seem to be specification and creativity.
Maybe it’s just the people I’m around, but assuming you write good tests is a big assumption. It’s very easy to just test what you know works. It’s the human version of context collapse, becoming myopic around just what you’re doing in the moment, so I’d expect LLMs to suffer from it as well.
> the human version of context collapse, becoming myopic around just what you’re doing in the moment
The setups I've seen use subagents to handle coding and review, separately from each other and from the "parent" agent which is tasked with implementing the thing. The parent agent just hands a task off to a coding agent whose only purpose is to do the task, the review agent reviews and goes back and forth with the coding agent until the review agent is satisfied. Coding agents don't seem likely to suffer from this particular failure mode.
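The coder/reviewer loop described above can be sketched roughly like this (a hypothetical Python sketch; `run_coder` and `run_reviewer` stand in for real LLM calls and are my names, not any specific framework's API):

```python
def implement_task(task, run_coder, run_reviewer, max_rounds=5):
    """Hand a task to a coding agent, then loop reviewer feedback
    back to the coder until the reviewer approves or we give up.

    run_coder(task, feedback) -> code
    run_reviewer(task, code) -> (verdict, feedback)
    """
    code = run_coder(task, feedback=None)
    for _ in range(max_rounds):
        verdict, feedback = run_reviewer(task, code)
        if verdict == "approve":
            return code
        # Reviewer rejected: hand its feedback back to the coder and retry.
        code = run_coder(task, feedback=feedback)
    raise RuntimeError("reviewer never approved; escalate to a human")
```

The point of the separation is that the reviewer's context contains only the task and the finished code, not the coder's in-the-moment reasoning, which is what makes the myopia failure mode less likely.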
Honestly, some of the shit with ClawdBot^W MoltBot^W OpenClaw and molt.church and molt.book has been some quality entertainment, enabled largely by the Internet. And it's AI slop but that only seems to matter when one of them gets miffed about its PR being rejected and posts an unhinged blog post about the maintainer who rejected said PR. And in a "comedy equals tragedy plus time" way, it's pretty easy to laugh at that, too.
You know there are individuals who will unironically defend any dark pattern one cares to point to, so your take here is pretty unsurprising. I feel like this is getting excited over finding a kernel of undigested corn in a random turd.
I meant it more as marveling at the people who get excited at the undigested corn kernel and then make artwork about it, though not to knock participation in this zeitgeist. There really is something fascinating about seeing people congregate over something that excites them, regardless of the curmudgeons who denigrate it. Doubly so if I don't understand it. It doesn't have to be your cup of tea but calling it "a kernel of undigested corn in a random turd" is unduly hostile.
The only thing more predictable than the credulous defense of harmful technologies is the wildly fallacious "old man sneering at clouds". If there is hostility there's generally a good reason for it. Refusing to engage with that is an indication of arrested emotional development or maybe a massive ideological blind spot. It certainly doesn't herald open-mindedness.
This seems like a record for number of projections per sentence.
You do not have any reason to think I have (1) "arrested emotional development" or (2) an "ideological blind spot"; (3) my "defense of harmful technologies" was never even presented, let alone (4) does it have anything to do with old men shaking their fists at clouds; and you do not have any reason to say I've (5) not been open-minded.
The only thing I said is that there have been some happenings to be entertained by; that is not exclusive to other feelings about them. I can think that whoever set up MJ Rathbun was irresponsible while also laughing at the dumb thing their irresponsible decisions caused.
These feelings are not mutually exclusive, and hostility towards the ones I expressed, because you made assumptions about other feelings I must have, is an indication of arrested emotional development and certainly doesn't herald open-mindedness. Obviously (this is from my perspective; let's remember our emotional development and open-mindedness), you must fear these things in some manner and are projecting said fears onto my statements in these comments.
This made me smile. Normally it's the other way around.