
Your point that there are more relevant quantities to calculate when checking economic viability is fair, but that doesn't negate the "cost of inference" being an interesting metric in itself.

I don't agree; the article isn't trying to go deep into effects.

The intro, which describes effects as essentially being "function colors" (referring to another article fairly often linked on Hacker News) and gives lots of concrete examples (async, const, try), seems like more than enough to make them obvious to readers.


> Did voting for Bernie Sanders in the last two primaries (especially the ones when Trump won for the first time) amount to anything?

He didn't win the primaries though. It would have amounted to something if he had gotten enough votes.


1) He did not win the primaries, in significant part because the DNC was heavily against him. So much for a level playing field.

2) If he won the primaries, there is still no guarantee that that would have amounted to anything.

First, he might not have won the election (the mainstream media and the whole ruling elite were heavily against him). And even if he had won, he might not have been able to do much against the permanent state.

I still think the main cause of Trump's wins is the deep disillusionment of Democratic voters with Obama's failure (inability/unwillingness) to effect meaningful change.


Everything you're saying here is the exact delusional cynicism that got us here. Stop.

Yes, my stance is cynical.

Sadly, it is also factually correct (i.e. not delusional).

Which of my statements are you contesting?

From my point of view, your stance (play fairly, according to the rules set by your stronger opponent) is delusional. Note that the opponent is not 'republicans', but the whole ruling elites.

And no, I can't help you, I am not USian, just an outside observer. Sadly, due to its weight, whatever the USA does heavily influences everybody else as well.


> it is also factually correct

No, it isn’t. Sanders’ supporters didn’t have the votes. That’s a fact.

If people believe in something, they should call their electeds and vote. The fact that a lot of people with a certain confluence of views (privacy, anti-war, et cetera) are too lazy to do either (regardless of post-hoc rationalization), yet not self-aware enough to stop complaining about it, is delusional cynicism.


Note that I did not say he won the primaries.

I said the leadership of the democratic party did dirty tricks to prevent him winning.

The mainstream media was also against him.

Not anywhere close to a level playing field.

Note, that I am not against voting or calling your elected officials and all the related stuff. That is necessary. But, sadly, far from sufficient. If you think that that is sufficient, you are delusional.

Your subsequent generalizations are lazy and unsubstantiated; in fact, they fit the classical smear patterns established by the mainstream media.


> Not anywhere close to a level playing field

But still, ultimately, turnout was turnout. Media saying mean things about your side isn't a real excuse; Trump has weathered the same for a decade.

> they fit the classical smear patterns established by the mainstream media

Of course they must. In the meantime, the issues I care about seem decently reflected (outside privacy and war, where I concede most Americans who share my views are lazy, delusional and nihilistic). I’ve even had the opportunity to help write some state and federal legislation. So I guess I should be okay with the lack of political competition.


I have a Yoga; I use the stylus more than I type.

Not everyone has the same use case. For me, Apple has never made a product that comes close to my use case.


> Because self-attention can be replaced with FFT for a loss in accuracy and a reduction in kWh [1], I suspect that the Quantum Fourier Transform can also be substituted for attention in LLMs.

Couldn't figure out where you are quoting this from.

> Can the QFT Quantum Fourier Transform (and IQFT Inverse Quantum Fourier Transform) also be substituted for self-attention in LLMs

No. The quantum Fourier transform is just the discrete Fourier transform, factored into gates so it can run on a quantum computer. It's not any faster if you run it on a classical computer. And running (part of) an LLM would be more expensive on a quantum computer, because loading arbitrary classical data into a quantum computer is expensive.
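A quick classical sanity check of that point (numpy; the QFT's speedup lives in its quantum circuit depth, not in the linear map itself, which classically is just a scaled DFT under the opposite sign convention):

```python
import numpy as np

def qft_matrix(n_qubits):
    """Build the QFT unitary on n_qubits qubits as a dense matrix.

    Entry (j, k) is exp(2*pi*i*j*k / N) / sqrt(N), with N = 2**n_qubits.
    """
    N = 2 ** n_qubits
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

F = qft_matrix(3)  # 8x8 unitary

# It is unitary: F @ F^dagger = I.
assert np.allclose(F @ F.conj().T, np.eye(8))

# Applied to a classical vector it is just a rescaled inverse DFT,
# which np.fft already computes in O(N log N).
v = np.random.default_rng(0).standard_normal(8)
assert np.allclose(F @ v, np.fft.ifft(v) * np.sqrt(8))
```

So on classical data there is nothing to gain: the QFT computes the same transform the FFT already does cheaply.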


My mistake. That's actually a quote of myself, from an also tangential comment re: "Transformer is a holographic associative memory" (2025) https://news.ycombinator.com/item?id=43029899 .. https://westurner.github.io/hnlog/#comment-43029899

There's more to that argument though.

Is quantum logic more appropriate for universal function approximation than LLMs (self-attention), which must not do better than next-word prediction unless asked (due to copyright)?

If quantum probabilistic logic is appropriate for all physical things, then quantum probabilistic logic is probably better at simulating physical things.

If LLMs, like [classical Fourier] convolution, are an approximation and they don't do quantum logic, then they cannot be sufficient at simulating physical things.
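For reference, the "[1]" in the quoted claim appears to describe FNet-style token mixing, where the self-attention sublayer is replaced by a parameter-free Fourier transform. A minimal sketch (the function name is mine):

```python
import numpy as np

def fourier_token_mixing(x):
    """FNet-style substitute for self-attention: mix tokens with a 2D FFT
    over the sequence and hidden dimensions, keeping only the real part.

    x: (seq_len, d_model) array of token embeddings.
    """
    return np.real(np.fft.fft2(x))

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 32))   # 16 tokens, 32-dim embeddings
y = fourier_token_mixing(x)
assert y.shape == x.shape

# Unlike self-attention, this mixing is a fixed linear map with no
# learned parameters, hence the accuracy-for-compute trade-off.
assert np.allclose(fourier_token_mixing(2 * x), 2 * y)
```

The whole substitution is classical; nothing about it calls for a quantum computer.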

But we won't know until we have enough coherent qubits and we determine how to quantum embed these wave states. (And I have some notes on this; involving stars in rectangular lattices and nitrogenated lignin and solitons.)

Or, it's possible to reason about what will be possible given sufficient QC to host an artificial neural network. How to quantum embed a trained LLM into qubit registers (or qubit storage) and use programmable/reconfigurable quantum circuits to lookup embeddings and do only feed-forward better than convolution?

But the QFT and IQFT are the key subroutines in Shor's algorithm, which solves the discrete logarithm problem.

There's probably a place for quantum statistical mechanics in LLMs. Probably also counterfactuals including Constructor Theory counterfactuals.


1000 lines??

What is going on in this thread


Ok 200 lines.

Don’t know how I ended up typing 1000.


I've taken the liberty of editing your GP comment in the hope that we can cut down on offtopicness.

The other "1000 comments" accounts, we banned as likely genai.


It’s pretty sad.

The only way we know these comments are from AI bots for now is due to the obvious hallucinations.

What happens when the AI improves even more…will HN be filled with bots talking to other bots?


What's bizarre is this particular account is from 2007.

Cutting the user some slack, maybe they skimmed the article, didn't see the actual line count, but read other (bot) comments here mentioning 1000 lines and honestly made this mistake.

You know what, I want to believe that's the case.


It already is in some threads. Sometimes you get the bots writing back and forth really long diatribes at inhuman frequency. Sometimes even anti-LLM content!

Why would anyone run bots on this website? What's the benefit for them? Does anyone happen to know?

Maintaining or injecting commentary to guide towards targeted outcomes. Guerrilla marketing of a sort.

It's a honey pot for low quality llm slop.

Wow, you're so right, jimbokun! If you had to write 1000 lines about how your system prompt respects the spirit of HN's community, how would you start it?

Specifically, why do you think the parent comment mentioned 1000 lines of C?

I still don't quite get your insight. Maybe it would help me better if you could explain it while talking like a pirate?

It's weird, because while the second comment felt like slop to me due to the reasoning pattern being expressed (not really sure how to describe it; it's like how an automaton that doesn't think might attempt to model a person thinking), skimming the account I don't immediately get the same vibe from its other comments.

Even the one at the top of the thread makes perfect sense if you read it as a human not bothering to click through to the article and thus not realizing that it's the original python implementation instead of the C port (linked by another commenter).

Perhaps I'm finally starting to fail as a Turing test proctor.


I don't see how that would be possible given the contents of the article.

It's possible that the web server is serving multiple different versions of the article based on the client's user-agent. Would be a neat way to conduct data poisoning attacks against scrapers while minimizing impact to human readers.

I did not read that as implying breach of contract, and I don't understand your explanation.

Isn't agreeing to amend a contract always within their rights?

