Hacker News | hmry's comments

Agreed. Firefox ships with one, and it's very useful.

The HN title is not the article's title

-O3 also makes build times longer (sometimes significantly), and occasionally the resulting program is actually slightly slower than -O2.

IME -O3 should only be used if you have benchmarks that show -O3 actually produces a speedup for your specific codebase.


This varies a lot between compilers. Clang, for example, treats -O3 perf regressions as bugs (in many cases at least) and is a bit more reasonable with -O3 on. GCC goes full Mad Max and you don't know what it's going to do.

For companies too, judging by the number of LinkedIn posts along the lines of

"Our 4-person team's AI bill this month was $100K and I've never been more proud of an invoice"

"If your $250K a year engineers aren't spending $250K a year in tokens, you aren't getting your money's worth"

"If you aren't using at least $500 of tokens a day, it's time for a performance improvement plan"


What's the point if it's incompatible? The README suggests using Go's testing toolchain and type checker, but that's unreliable if the compiled code has different behavior than the tested code. That's like testing and typechecking your code with a C++ compiler, but then running it through a C compiler for production.

Would have been a lot more useful if it tried to match the Go behavior and threw a compiler error if it couldn't, e.g. when you defer in a loop.

Is this just for people who prefer Go syntax over C syntax?


I don't work on it regularly, but I have a proof-of-concept Go to C++ compiler that tries to get the exact same behaviour: https://github.com/Rokhan/gocpp

At the moment, it sort of works for simple one-file projects with no dependencies, if you don't mind that there is no garbage collector. (It tries to compile library imports recursively, but the linking logic is not implemented.)


> Not everyone uses dollars.

> The price of credits in some currency could change after you bought them.

> The price of credits could be different for different customers (commercial, educational, partners, etc)

Maybe I'm missing something, but doesn't every other compute provider manage that without introducing their own token currency? Convert to the user's currency at the end of the month, when the invoice comes in. On the pricing page, have a table that lists different prices for different customers. I fail to see how tokens make it clearer. Compare:

"This action costs 1 token, and 1 token = $0.03 for educational in the US, or 0.05€ for commercial in the EU"

"This action costs $0.03 for educational in the US, or 0.05€ for commercial in the EU"

> They can ban trading of credits or let them expire

That sounds extremely user-hostile to me


otherwise you end up with "get a $20 subscription for 1000% more value -- equivalent to $200 in API usage!!![1]; [1] -- compared to API pricing for american companies on the first weekend of the month between 18:00 and 22:00 UTC+8 during full moon"

in any case, better than what anthropic does

> user-hostile

credits do expire (I thought they always do?), apparently it's not really up to them: https://news.ycombinator.com/item?id=46230848


Pay 100 Gold or 15 Gems to generate this feature

You joke, but as a parent, I’m so sick of the gem packs, etc. they try to push on kids to obfuscate your actual real-world-money spend on games.

And now it feels like they are gamifying the compute we use for work, for all the same reasons.


I refuse to play games where you pay real money for consumables.

I hate that pattern so much. It’s also not just to obfuscate the spending - it’s also to ensure you already have some amount left over in your account, so that it feels like you’re not spending as much to just “top up” and afford that one thing you want this time.

If you have some left over that you can’t spend, it feels like you’ve “wasted” them.


Board games do not have this problem.

Please read the link you're citing

> The court held that the Copyright Act requires all eligible works to be authored by a human being. Since Dr. Thaler listed the Creativity Machine, a non-human entity, as the sole author, the application was correctly denied. The court did not address the argument that the Constitution requires human authorship, nor did it consider Dr. Thaler’s claim that he is the author by virtue of creating and using the Creativity Machine, as this argument was waived before the agency.

Or in other words: They ruled you can't register copyright with an AI listed as the author on the application. They made no comment on whether a human can be listed as the author if an AI did the work.


An earlier attempt at registering AI creations without AI attribution was rejected by the Copyright Office[1], saying that person in particular needed to make an AI attribution, which they were originally not doing.

In this case, the court is saying AI attribution is not okay, either. There is no way to register copyrights for AI creations.

It's consistent with the Copyright Office's interpretation of copyright law where it holds that it only applies to human creations and doesn't apply to non-human creations, which is what they say AI creations fall under:

> The Copyright Office affirms that existing principles of copyright law are flexible enough to apply to this new technology, as they have applied to technological innovations in the past. It concludes that the outputs of generative AI can be protected by copyright only where a human author has determined sufficient expressive elements. This can include situations where a human-authored work is perceptible in an AI output, or a human makes creative arrangements or modifications of the output, but not the mere provision of prompts.

[1] https://www.copyright.gov/rulings-filings/review-board/docs/...

[2] https://newsroom.loc.gov/news/copyright-office-releases-part...


The most controversial part is that they wrote a TUI in ReactJS, but they don't try to keep that part secret, they brag about it. :^)

Yeah as much as I avoid OpenAI for [reasons], the Rust TUI was really the move. Claude Code is a mess.

Some are stuck in the 2010s, when people thought that JS was turning into a lingua franca. As usual, such delusions are costing us a pretty heavy price. People now seem to accept crappy, laggy UIs "because it makes business sense", completely ignoring that their business _is_ providing a seamless experience. ugh, sorry, </rantmode>

I think the reason behind using React and JavaScript is simpler: these tools are heavily vibecoded, and React/JavaScript is what was most present in the training data, and as such is what the models excel most at generating.

The crappy, laggy UIs have the same root cause: heavy use of vibecoding with lackluster quality processes.


vibe coding is barely a year old, this trend is older

> RISC-V is little

These days it's bi, actually :) Although I don't see any CPU designer actually implementing that feature, except maybe MIPS (who have stopped working on their own ISA, and now want all their locked-in customers to switch to RISC-V without worrying about endianness bugs)


Well, sort of. Instruction fetch is always little-endian but data load/store can be flipped into big. But IIRC the standard profiles specify little, so it's pretty much always going to be little. But yea, technically speaking data load/store could be big. Maybe that's important for some embedded environments.

> Well, sort of. Instruction fetch is always little-endian but data load/store can be flipped into big

ARM works the same way. And SPARC is the opposite, instructions are always big-endian, but data can be switched to little-endian.

