Hacker News | james_marks's comments

This balanced perspective on what’s good for someone personally vs what’s good for society at large is what’s missing from the world.

Any reasonable landlord/real estate investor will have planned for various results - if your rental empire depends on "rents go up" and can't handle a flat market, let alone a downturn, you're going to be in for a bad time.

A stable market is great, as you can find good deals with some certainty and focus on where you can actually build value (rehab, etc).


If you are smart, you throttle up investments just before a boom starts and throttle them back just before a boom ends. At least you try to up your margins during good times so you can survive bad times. The trick is keeping your talent employed during the bad times so they are trained up and still in the industry for good times. Stability is obviously preferable.

Sure, but you can feel some emergent philosophies that are starting to converge and there are recognizable aesthetics.


At least some responsibility lies with the white-hat security researcher who documented the vuln in a findable repo.


*yet

Build the audience first, attack comes later


find is provided by the OS, not a terminal emulator like Ghostty. Most likely something is wrong with your paths.
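A quick way to check (a sketch, assuming a POSIX shell; the paths shown are just typical examples):

```shell
# Ask the shell which `find` binary it resolves, then confirm
# that binary's directory actually appears on your PATH.
command -v find   # e.g. /usr/bin/find on most systems
echo "$PATH"      # the directory printed above should be listed here
```

If `command -v find` prints nothing, the problem is your PATH, not the terminal emulator.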


I think they mean like CTRL+F kind of find.

This is addressed in paragraph 5 of the post they replied to.


The terminal was _always awesome_; the bar to realizing that was just a tad high for many people. Until now!


here-doc usage has probably 100x-ed in the last year
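For anyone who hasn't bumped into them, a minimal here-doc sketch (quoting the delimiter, as in `'EOF'`, disables variable expansion in the body):

```shell
# Feed a multi-line literal to a command's stdin.
# Because 'EOF' is quoted, $HOME is passed through verbatim.
cat <<'EOF'
first line
$HOME stays literal
EOF
```

With an unquoted `EOF`, the shell would expand `$HOME` before `cat` ever saw it.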


It's made things 1,000,000,000,000x easier. I've found enough annoying bugs in PowerShell's implementation of it to know that nobody was using it.


How so?


If you aren’t paying for the product, you are the product being sold. No, thank you.


This is a website full of programmers.

I expect an extension or Python script that asks it to generate 100 random complex questions and then proceeds to ask for answers in a loop until the free plan's limits are reached.


My impression is that this was never about the TOS. It was about breaking a contract with Anthropic by someone with an incentive to replace it with OpenAI.

I don’t have evidence, just using Occam’s razor.


The evidence point is Brockman's sizeable donation. You think that was for nothing... lol, come on.


Claude’s answer, which is the only one that clicked for me:

Normally when you do something like command > file.txt, you’re only capturing the normal output — errors still go to your screen.

2>&1 is how you say: “send the error pipe into the same place as the normal output pipe.” Breaking it down without jargon:

• 2 means “the error output”

• > means “send it to”

• &1 means “wherever the normal output is currently going” (the & just means “I’m referring to a pipe, not a file named 1”)
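A quick way to see the difference (a sketch; `ls /nonexistent` is just a stand-in for any command that fails):

```shell
# Without 2>&1: only normal output is captured; the error
# message still prints to the terminal, and out.txt is empty.
ls /nonexistent > out.txt

# With 2>&1: the error message is captured in out.txt as well.
ls /nonexistent > out.txt 2>&1
```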


> Claude’s answer

This response is essentially just the second answer to the linked question (the response by dbr) with a bunch of the important words taken out.

And all it cost you to get it was more water and electricity than simply clicking the link and scrolling down — to say nothing of the other costs.


FWIW, I clicked the link, scanned the SO thread, then scanned the HN thread. The "bunch of important words taken out" is exactly the service I paid AI for.

"I didn't have time to write you a short letter, so I wrote you a long one." is real.


> • 2 means “the error output”

> • > means “send it to”

> • &1 means “wherever the normal output is currently going” (the & just means “I’m referring to a pipe, not a file named 1”)

If you want it with the correct terminology:

2 means "file descriptor 2", > means "redirect the previously mentioned to the following", and &1 means "file descriptor 1" (and not a file named "1")
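Since &1 means "wherever file descriptor 1 points at that moment," the order of redirections matters. A sketch, using a hypothetical helper function named demo:

```shell
# demo writes one line to stdout and one to stderr.
demo() { echo out; echo err >&2; }

demo > both.txt 2>&1   # FD 1 is pointed at both.txt first, then FD 2 is
                       # copied from it: both lines land in both.txt.

demo 2>&1 > only.txt   # FD 2 copies FD 1's OLD target (the terminal) first:
                       # "err" goes to the terminal, only "out" lands in only.txt.
```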


I find OP's communication style abrasive and off-putting, which tracks with them saying they've been coached on this, and found that advice lacking.

Maybe it's still insufficient advice, but it hasn't worked for them at least in part because they haven't figured out how to apply it.

From the post, I see low empathy and an air of superiority (perhaps earned by genuinely being smarter than their peers -- that doesn't make it more attractive).

That's going to cause friction because a team is a _social_ construct.


> I find OP's communication style abrasive and off-putting

Your comment is hilarious on a meta-level: it's an example of exactly the sort of socially-mediated gatekeeping the author of the article (machine or human, I don't care) criticizes. It is, in fact, essential to match authority and responsibility to achieve excellence in any endeavor, and it's a truth universally acknowledged that vague consensus requirements are tools socially adept cowards use to undermine excellence.

Competent dictatorship is effective. Look at how much progress Python made under GVR. People who rail against hierarchy and authority, even when deployed correctly, are exactly the sort of people who should be nowhere near anything that requires progress.

Imagine running a military campaign by seeking consensus among the soldiers.


Consensus works in a Democracy because the best thing the government can do to help people is usually nothing.


> Look at how much progress Python made under GVR.

Or, you know, Linus Fucking Torvalds. If you were carrying the success or failure of most of the world's digital infrastructure on your shoulders, you also might be grating to some.


Ah, so not at the whims of lone men's egos?


That's because it was generated by an LLM.


I simply cannot believe people in this post are discussing this as anything other than a complete bot job. Pure clanker vomit.


I realize it's been "written" by an LLM, but the content could have been written by someone I know. It's eerie how this person thinks exactly the same way. It's never their fault, always the others', and they are always obviously right and no amount of arguing can change their mind.


"Write an essay about struggling to change a software org that doesn't want to change. Make me the hero. Post it at 1am so it looks like I was up late suffering with the burden of what I know."

This is unfortunately the world we are in now.


This is not a politically correct thing to say but there is a class of neurodiverse software developers who display these characteristics and I suspect the author belongs to this group.

Frankly, reminds me of Michael O'Church


Yeah, a lot of the examples made me think "wait, there's something else going on there, right?", which would make sense if the author has difficulty communicating or negotiating their proposals.

In the first example, for example, they suggested a new metric to track added warnings in the build, and then there was a disagreement in the team, and then as a footnote someone went and fixed the warnings anyway? That sounds like the author might be missing something from their story.


> In the first example, for example, they suggested a new metric to track added warnings in the build, and then there was a disagreement in the team, and then as a footnote someone went and fixed the warnings anyway? That sounds like the author might be missing something from their story.

I do not find anything missing here. This is how things often play out in reality, both in your retelling of it and in what was actually written in the article.

Your retelling: some people agree and some disagree with the new metric. That is completely normal. Then someone who agrees, or wants to keep the peace, or just temporarily doesn't feel like doing "real Jira" tasks, fixes the warnings. The team moves on.

Actual article: the warnings get solved when it becomes apparent that one of them caused a production issue. That is when the "this new process step matters" side wins.


I'm referencing the footnote where the author says that the discussion caused one team member to go and fix the issue. The warnings causing a production issue is, I think, a complete hypothetical.

What this story is missing is an explanation for why people were disagreeing. Like, why is someone not looking at warnings? Is it that the warnings are less important than the author understands? Is it that the warnings come from something that the team have little control over? And the solution the author suggests - would it really have changed anything if they already weren't looking at warnings? The author writes as if their proposal would have fixed things, but that's not really clear to me, because it's basically just a view into whether the problem is getting worse, which can be ignored just as easily as the problem itself.


Someone hacked his site or something, so I can't get back to it. But I thought you meant the situation in one of the first paragraphs, where the team started taking some issue seriously only after an actual problem.

And honestly, I have seen people disagree with and fight literal standard changes like "let's have a pipeline that runs tests before merge" or "database changes must go through a test environment before being sent over".

It is perfectly possible and normal for people to fight change and be wrong without there being some grave, clever missing reason. I have no problem trusting that the author was simply right in hindsight.

If you have ever tried to improve processes or a project with persistent issues, the problems the author described are entirely believable. The author does not know what to do in that situation, but he described the usual dynamic pretty accurately.


IRL I’ve seen similar discussions devolve into hour-long bike-shedding meetings about how to define thresholds for warnings, track new ones, etc.

Before the end, I had them all fixed. Zero is far easier to deal with…


The first two sentences

> Organizations don't optimize for correctness. They optimize for comfort

...do I need to say it?


> One number, never measured before. It doesn't change rules or add warnings, just makes the existing count visible.

Stopped here. That pattern.

I recognize this pattern from this AI "companion" my mate showed me over Christmas. It told a bunch of crazy stories using this "seize the day" vibe.

It had an animated, anthropomorphized animal avatar. And that animal was an f'ing RACCOON.


LLMs originally learned these patterns from LinkedIn and the “$1000 for my newsletter” SEO pillions. Both accomplish a goal. Now that's become a loop.

There is a delayed but direct association between RLHF results we see in LLM responses and volume of LinkedIn-spiration generated by humans disrupting ${trend.hireable} from coffee shops and couches.

// from my couch above a coffee shop, disrupting cowork on HN. no avatars. no stories. just skills.md


You are absolutely right!

- It is not X. It is Y.

- X [negate action] Y. X [action] Z.

The titles are giveaways too: Comfort Over Correctness, Consensus As Veto, The Nuance, Responsibility Without Authority, What Changes It. Has that bot taste.

If you want I can compile a list of cases where this doesn't happen. Do you want me to do that?


As someone who thinks very much like TFA, I often write like that. I swear I'm not a bot.


Maybe fix your writing then. This is not good writing.


Neither is Vonnegut's (which your short, choppy sentences reminded me of), but he was a very successful and beloved author. I'm in no way comparing myself to Vonnegut, but my point is just because it doesn't appeal to you, it doesn't mean it isn't good.

Writing is art. Does it get the intended point across? Does it resonate with the reader? Does it make them feel something? Then it is good.


I disagree on Vonnegut. Most human authors at least have a voice; even if you don't like it, it's recognisable and theirs, and I would rarely think to criticise that, because it makes the writing come alive. If you truly wrote like an LLM (there is little evidence here that you do), it would not be the same.

LLMs serve up a sort of bland pap with sugary highs of excitement which resembles a cross between manic advertising copy and a breathless teenager who's just discovered whatever subject they're talking about. They also sometimes confabulate and generate text which is at best tangential and at worst completely misleading.

It's exhausting and if you haven't carefully read what they generate (which most people clearly have not), you should not expect another human to read it.

Just as an interesting taste, here is my copy above rewritten to sound even more EXCITING and ENGAGING.

"They deliver a horrifying concoction – a sickly sweet, manufactured echo of thought, a grotesque blend of relentless advertising whispers and the manic, unearned enthusiasm of a teenager just discovering a world they don't understand! But the truly chilling thing is this: they fabricate. They weave elaborate lies, constructing text that’s not just tangential, but actively, dangerously misleading!

It’s a psychic assault, a draining vortex of intellectual despair! And if you haven’t wrestled with every single word, dissected it, exposed its flaws – and frankly, I suspect most haven’t – then don’t dare expect anyone else to salvage this wreckage! This is not a passive observation; it’s a desperate plea against a future where genuine thought is suffocated by the cold, sterile logic of a machine! We must guard against this, or we risk losing everything!” -- gemma3:4b


I don't disagree with your take on how LLM copy is awful; I just disagree that this was written by an LLM. For example, this paragraph at the end:

> If you're in this position (relied upon, validated, powerless), you're not imagining it. And it's not a communication problem. "Just communicate better" is the advice equivalent of "have you tried not being depressed?"

I've seen "you're not imagining it" countless times from LLMs, but always as the leading sentence in the paragraph; for something like the above, they tend to use em-dashes, not parentheses.

FWIW, Grammarly's AI Detector thinks that 17% of it resembles LLM output, and ZeroGPT thinks that 4.5% of it resembles LLM output.


Your comments don't read like LLM-slop to me.

An occasional "it's not X, it's Y", rule of three, or em-dash isn't atypical nor intrinsically bad writing. LLM-slop stands out because of the frequency of those and other subliminal cues. And LLM-slop is bad writing, at least to me, because:

- It's not unique (like how generic art is bad compared to distinct artstyles)

- It's faux-authentic ("how do you do, fellow kids?")

- It's extremely shallow in information. Phrases like "here's the kicker" and "let that sink in" are wasted words

- The meaning is "fuzzy". It's hard to describe, but connotations and figurative language are "off" (inconsistent to the larger idea? Like they were picked randomly from a subset of acceptable candidates...); so I can't get information from them, and it's hard to form in my mind what the LLM is trying to convey (perhaps because the words didn't come from a human mind)

- It doesn't always have good organization: some parts seem to go on and on, high-level ideas drift, and occasionally previous points are contradicted. But I suspect a plan+write process would significantly reduce these issues


It used to be. That's why LLMs adopted it. How do you think they got their preferences? A Magic 8 Ball?


It was okay writing in the context of marketing. A normal person never wrote like that.


Why is it bad writing?

