Hacker News — wrcwill's comments

ugh this is so amateurish. i swear since the release of o3 this has been happening on and off.


can someone explain why zig is often compared to rust?

i understand zig as a better C, but in what world is another memory-unsafe language a good idea?

i mean i kinda get it if the thing you are writing would require LOTS of unsafe — then zig provides better ergonomics.

however, most programs can definitely be written without unsafe, and in that case i don't get why we'd do this to ourselves in 2025.

why not take advantage of memory safety and data-race safety?


Memory safety is a gradient. Zig is "memory safe-er" than C just because its arrays store their length. Of course, an RCE vulnerability is not a gradient.


because rust is more than just memory safety, it's not obvious that "the rust way" is the only way to go. Fil-C is a good(?) example of an alternative strategy. seL4 is a radically different strategy that is as much of a pain in the ass as it is powerful. (disclaimer: i'm working on compile-time memory safety for zig)


The article actually addresses these questions. It's way more than just a donation announcement. It lays out the various reasons that Zig is the language of choice for TigerBeetle.


Because both are languages that could feasibly be used to solve a particular set of problems. TigerBeetle could have been written in either, they explain why they didn't choose Rust but it was a feasible alternative (in a way that Java or Python would not have been).

Conceptually C and C++ are also in that potential solution space, but I'm sure they feel that Zig's properties are superior for what they want.


i'm not sure why we need to go off rumours; the knowledge cutoff for each OpenAI model is clearly listed in the table:

https://platform.openai.com/docs/models/compare?model=gpt-5....


Unless I'm missing something, this has nothing to do with asynchronous code. The delete is just synchronous code running, same as if we called a function/closure right there.

This is just about syntax sugar hiding function calls.


I'm assuming you're referring to the Python finaliser example? If so, there's no syntax sugar hiding function calls to finalisers: you can verify that by running the code on PyPy, where the point at which the finaliser is called is different. Indeed, for this short-running program, the most likely outcome is that PyPy won't call the finaliser before the program completes!


I think it says if your async code holds locks you’re gonna have a bad time. Async and optimistic locks probably should go hand in hand.

I would think finalizers and async code magnify problems that are already there.


If you use a single-threaded executor then you don't need locks in your async code. Well, you might use external locks, but not thread synchronization primitives.

When I write async code I use a single-threaded multi-process pattern. Look ma'! No locks!

Well, that's not very fair. The best async code I've written was embarrassingly parallel, no-sync-needed, read-only stuff. If I was writing an RDBMS I would very much need locks, even if using the single-threaded/multi-processed pattern. But also then my finalizers would mainly drop locks rather than acquire them.
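The single-threaded, no-locks point can be sketched with Python's asyncio (a minimal illustration; the function names and counts are made up):

```python
import asyncio

counter = 0  # shared mutable state, but safe here: see comment below

async def bump(n):
    global counter
    for _ in range(n):
        # No lock needed: asyncio's event loop is single-threaded, and
        # control only switches between tasks at explicit await points,
        # so this read-modify-write can never be interleaved.
        counter += 1
        await asyncio.sleep(0)

async def main():
    # Ten concurrent tasks mutating the same variable, zero locks.
    await asyncio.gather(*(bump(1000) for _ in range(10)))

asyncio.run(main())
print(counter)  # 10000, deterministically
```

The same code with real OS threads would need a mutex (or would silently lose increments); with one event loop per process, the "multi-process" half of the pattern provides the parallelism instead.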


that isn’t the panacea you describe it to be. you just happen to write a lot of code where writing it that way doesn’t result in consistency problems.


You do have to be careful that all of your data updates are transitive, or you have to hold all of the updates until you can apply them in sequential order. One of my favorite tricks there is to use a throttling or limiting library, start all of the tasks, and then run a for loop to await each answer in order. You still have front-of-line issues but you can make as much forward progress as can be made.


can you ctrl-f now?


Nope, it's planned for 1.3


still a low-frequency PWM phone.. what i would give for a modern no-PWM / high-frequency PWM phone


definitely 1!

it seems to truncate your prompt even under the "maximum message length", and yeah, around 55k tokens is where it starts to happen.

extremely annoying. o1 pro worked up until 115k or so. both o3 and gpt-5 have the issue. (it happens on all models for me, not just the pro variants)

with the new 400k context length in the API, i would expect at least 128k message lengths and maybe 200k context in chat.


Do you have a workaround?

I'm putting the highest quality context into the 50k tokens, and attaching the rest for RAG. But maybe there is a better way.


i split the context and give it in two messages :/


how are you using it? codex-cli?


Cursor


ugh still fails my test prompt: https://chatgpt.com/share/689507c7-5394-8009-b836-c6281a246e...

"Assume the earth was just an ocean and you could travel by boat to any location. Your goal is to always stay in the sunlight, perpetually. Find the best strategy to keep your max speed as low as possible"

o3 pro gets it right though..


Mine "thought" for 8 minutes and its conclusion was:

>So the “best possible” plan is: sit still all summer near a pole, slow-roll around the pole through equinox, then sprint westward across the low latitudes toward the other pole — with a peak westward speed up to ~1670 km/h.

Is this to your liking?


well no, that's where it gets confused. as soon as you sail across to the other pole you are forced to go up to a speed of 1670 km/h.

when models try to be smart/creative they attempt to switch poles like that. in my example it even says that the max speed will be only a few km/h (since their strategy is to chill at the poles and then sail from the north to the south pole very slowly)

--

GPT-5 pro does get it right though! it even says this:

"Do not try to swap hemispheres to ride both polar summers. You’d have to cross the equator while staying in daylight, which momentarily forces a westward component near the equatorial rotation speed (~1668 km/h)—a much higher peak speed than the 663 km/h plan."


I don't really understand gpt-5's reasoning, though. Does its solution never cross the equator? Because if you cross, you always have to do it in daylight, so it's kind of strange to say that, no? Or does it mean you have to cross it on the boundary of daylight or something?


oh, so its solution is to stay in one hemisphere and just go in a circle following the day-night cycle, i guess. but I don't see its reasoning as rigorous enough that crossing must require this westward speed — though probably i'm being dumb


I guess one has to check that if you are spinning around at the 23.5-minus-epsilon angle and then dash down to the other side in one day, you cannot beat the speed of just staying in one hemisphere. you could dash straight down in a 12-hour timeframe, and that would need about 343 m/s, or 1233 km/h, which is much too high. and going diagonally probably doesn't help too much? But maybe at some tilt angle it's worth doing this — does GPT-5 pro know that angle?
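For what it's worth, the speeds being quoted in this thread do check out with back-of-the-envelope arithmetic (this just verifies the numbers, assuming a spherical Earth with mean radius 6371 km and 23.5° axial tilt):

```python
import math

R = 6371.0                  # mean Earth radius, km
C = 2 * math.pi * R         # circumference, ~40,030 km
lat = 90 - 23.5             # polar-circle latitude, 66.5 degrees

# Circling the pole once per day just inside the polar circle:
circle_speed = C * math.cos(math.radians(lat)) / 24   # km/h
print(round(circle_speed))  # ~665 km/h, matching the "663 km/h plan"

# Straight dash from 66.5 N to 66.5 S (133 degrees of latitude) in 12 hours:
dash_speed = (C * 133 / 360) / 12                     # km/h
print(round(dash_speed), round(dash_speed / 3.6))     # ~1232 km/h, ~342 m/s
```

So the hemisphere-swap dash needs roughly double the peak speed of the stay-in-one-hemisphere plan, which is the point GPT-5 pro's answer makes.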


you include the tilt of axis I assume? Is the best solution of yours rigorous out of curiosity?


while it doesn't take away from the article, i find it worrying that it seems mostly written with ChatGPT

"This is not just a technical glitch; it’s a deep theoretical problem that suggests we don’t really understand the beginning at all."

"The bounce is not only possible – it’s inevitable under the right conditions."

ugh


There was a physicist who made a video making fun of crackpot theories from engineers, and reading the comments, we're all happy to put forth our completely unsubstantiated opinions with zero understanding of the math and observations involved.

