Hacker News | boxed's comments


I am using Claude Code with Elm, a very obscure language, and I find that it's amazing at it.

I wouldn’t call Elm obscure. It’s old, well understood, well documented, and has a useful compiler. This is nearly the perfect fit for an LLM.

I picked up a change that had broad consensus and quite a bit of excitement, even from some core devs.

That ticket now just sits there. The implementation is done, the review is done, there are no objections. But it's not merged.

I think something is deeply wrong and I have no idea what it is.


Looking at your PR, the ticket is still marked as:

Needs documentation: yes
Patch needs improvement: yes

If this is done, you should update it so it appears in the review queue.


Have you tried pinging in the Discord about it?

> It hasn't worked with nuclear waste, has it?

You mean the nuclear waste that we banned companies from using as nuclear fuel for modern reactors? I think you will find that regulation actually stopped us from solving this problem.

> It's not working for corals either.

Imagine if we had much more nuclear power so we didn't produce enormous amounts of CO2! The corals would be in a much better position.

The "environmental movement" has, sadly, been an anti-nuclear-power movement that doesn't care about the environment since the beginning. They've managed to harm the environment more than all nuclear accidents combined, by several orders of magnitude.


> You mean the nuclear waste that we banned companies from using as nuclear fuel for modern reactors?

There is no need to ban this because it (and reprocessing in general) is economically idiotic. It would be like saying government bans prevent companies from setting money on fire.

Dry cask storage is a quite acceptable and economical way to deal with nuclear waste. The demand that something permanent be done immediately reflects a desire to use waste as a lever against nuclear energy. Nuclear fans would do well not to fall into this trap and think immediate reprocessing is necessary or desirable.


Sure, it's fine, but we've ALSO effectively banned research into reactor types that use "nuclear waste" as fuel. In Sweden it's not even just "effectively": until quite recently we had laws on the books that banned nuclear research.

> effectively banned

This simply isn't true. Not as much may have been invested in said research, but that's more a reflection of the lack of a business case for such things. They are not a magical panacea to all of nuclear's woes.

Your logic reminds me of people who confuse consumer preference with boycotts.


Obviously it's an S curve, yes, but we are so far from living in an Iain M. Banks Culture novel that we don't have to worry for probably a million years. Anti-growth people are ideologues with poor imagination and some pseudo-religious hatred of humanity driving their political trend.

I think at this point comments like this are equivalent to saying "I didn't like this article, because it's written in too good English".

I would edit sentences like this:

"Erlang is the strongest form of the isolation argument, and it deserves to be taken seriously, which is why what happens next matters."

It doesn't add much, and it has this condescending and pretentious LLM tone. For me as a reader, it distracts from an otherwise interesting article.


I don't think you can get an LLM to write that, personally. I tend to write with somewhat too-long run-on sentences like that and have to edit myself carefully to make it readable. I don't think LLMs do that.

It's like when my kids say "that's AI!" about everything now. We're overreacting and thinking everything bad about writing is because of LLMs, but in fact, I think LLMs write better than 99% of humans now. A year ago that wasn't the case, but we need to update our priors.


That was the only place that made me stumble, because “what happens next” doesn’t really make sense in that context.

But mistakes like that are what makes it human! I really don't know anymore that we can have certainty about things being AI or human.

Mistakes yes, but “this obviously makes no sense” less so.

Sorry, good English is good grammatically and structurally while being unique and feeling creative, and AI-written English is not good. It’s correct but totally repetitive, formulaic and circular. It’s like expecting a pizza and finding it’s made of cardboard.

Or maybe more like expecting Italian food and getting pizza?

I liked the content of the article enough to read it to the end, but I did have a hard time due to its inflation with LLM-isms. Then again, I am not a native speaker, so how would I know if this is good English? I can only tell that to me it is hard to read, despite the interesting content.

It shows a lack of care for the reader. Use your own words.

Or just a lot of smaller hearts.

It's just a unique ID of a person, it's not a password. I don't see how you can be confused by this.

It's also "anyone's brokerage account holdings, addresses, phone numbers" according to the comment that this subthread of the conversation is about.

It only gives read permissions; making any changes requires a password.


The graph showing that "Bank teller employment has fallen off a cliff" is not zero-based. This is pretty damn bad. The graph looks like it's going down 90%, but it's actually going from 350k to 150k. That's a ~60% drop, which is a lot, but not "falling off a cliff".
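As a quick sanity check on the drop, using the endpoints read off the chart (350k and 150k are approximations from the graph, not exact figures):

```python
# Approximate endpoints read off the chart.
start, end = 350_000, 150_000

# Relative drop: (start - end) / start.
drop = (start - end) / start
print(f"{drop:.0%}")  # 57%, i.e. roughly the ~60% figure above
```

So the axis truncation exaggerates a real but smaller decline: visually ~90%, actually ~57%.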

60% is pretty well in “falling off a cliff” territory. The graph is misleading but that phrase, to me, is not.

60% job loss is not off a cliff?

That huge job loss also means no hiring. If you were a bank teller, you would seriously need to consider a job switch.


Probably a bigger sign to look for would be average age of bank tellers vs other occupations. If it's trending higher, then it's likely just people who've been doing the job for a long time and serving other older customers. I have a feeling not many young people are becoming tellers or even needing their services, but I can't verify it.

Why would you want to? It's like using a hammer for screws.

CPU compute is infinitely less expensive and much easier to work with in general.

Less expensive how? The reason GPUs are used is because they are more efficient. You CAN run matmul on CPUs for sure, but it's going to be much slower and take a ton more electricity. So to claim it's "less expensive" is weird.

In situations where you have spare CPU power but not spare GPU power, because your GPU(s) and VRAM are allocated to other tasks, you might prefer to use what you have rather than pay for an upgrade (even if that means the task will run more slowly).

If you are wanting to run this on a server to pipe the generated speech to a remote user (live, or generating it to send at some other appropriate moment) and your server resources don't have GPUs, then you either have to change your infrastructure, use CPU, or not bother.

Renting GPU access on cloud systems can be more expensive than CPU, especially if you only need GPU processing for specific, occasional tasks. Spinning up a VM to serve a request and then tearing it down is rarely as quick as cloud providers suggest in their advertising, so you end up keeping things alive longer than strictly needed, meaning the quoted spot-pricing rates are lower than what you actually end up paying.


This is far too simplistic, you can't discuss perf per watt unless you're talking about a job running at any decent level of utilisation. Numbers like that only matter for larger scale high utilisation services, meanwhile Intel boxes mastered the art of power efficient idle modes decades ago while almost any contemporary GPU still isn't even remotely close, and you can pick up 32 core boxes like that for pennies on the dollar.

Even if utilisation weren't a metric, "efficient" can be interpreted in so many ways as to be pointless to try and apply in the general case. I consider any model I can foist into a Lambda function "efficient" because of secondary concerns you simply cannot meaningfully address with GPU hardware at present (elasticity and manageability for example). That it burns more energy per unit output is almost meaningless to consider for any kind of workload where Lambda would be applicable.

It's the same for any edge-deployed software, where "does it run on CPU?" translates to "does the general-purpose user have a snowball's chance in hell of running it?". Having to depend on 4GB of CUDA libraries to run a utility fundamentally changes the nature and applicability of any piece of software.

A few years ago we had smaller cuts of Whisper running at something like 0.5x real time on CPU, and people struggled along anyway. Now we have Nvidia's speech model family comfortably exceeding 2x real time on older processors, with far better word error rates. Which would you prefer to deploy to an edge device? Which improves the total number of addressable users? Turns out we never needed GPUs for this problem in the first place; the model architecture mattered all along, as did the question, "does it run on CPU?".

It's not even clear cut when discussing raw achievable performance. With a CPU-friendly speech model living in a Lambda, no GPU configuration will come close to the achievable peak throughput for the same level of investment. Got a year-long audio recording to process once a year? Slice it up and Lambda will happily chew through it at 500 or 1000x real time.
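The "500 or 1000x real time" figure follows from simple parallelism arithmetic. A sketch, where the concurrency and per-instance speed are illustrative assumptions rather than measured numbers:

```python
# Illustrative back-of-envelope numbers, not benchmarks.
per_instance_speed = 2.0  # assume each Lambda transcribes at 2x real time
concurrency = 500         # assume 500 audio slices processed in parallel

# Aggregate throughput across all concurrent instances.
aggregate = per_instance_speed * concurrency
print(aggregate)  # 1000.0, i.e. "1000x real time" in aggregate

# Wall-clock time to chew through a year of audio at that rate.
hours_of_audio = 365 * 24
wall_clock_hours = hours_of_audio / aggregate
print(round(wall_clock_hours, 1))  # 8.8 hours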


GPUs are a near monopoly. There are at least a handful of big players in the CPU space. Competition alone makes the latter space a lot cheaper.

Also, for inference (and not training) there are other ways to efficiently do matmuls besides the GPU. You might want to look up Apple's undocumented AMX CPU ISA, and also the thing vendors call a "Neural Engine" in their marketing (its capabilities, and the term's specific meaning, vary broadly from vendor to vendor).

For small 1-3B parameter transformers like TADA, both these options are much more energy efficient, compared to GPU inference.


To maximise the VRAM available for an LLM on the same machine. That's why I asked myself the same question, anyway.

Not everyone has a GPU available that can run this.
