
It feels a bit like saying modern cars are extremely optimized for fast acceleration, with a 0-to-60 of 3 seconds instead of 9, and that acceleration delay is therefore not negligible. I mean, I suppose you're right that it would be a 3x improvement, but surely you're not using your heavyweight general-purpose tool (a car) just to zoom down the block and back at peak acceleration all the time, right? And if you really do have a niche requirement like that, aren't there bigger fish to fry (like eliminating the car, or the need to transport people there & back, altogether)?


I'm not sure what you're trying to achieve with this metaphor. Are you suggesting that the fast path itself is "niche" in terms of allocator performance?


I'm saying it's very likely you should be looking into alternative approaches if fast-path performance is genuinely your bottleneck, and that in my experience it's overwhelmingly likely that it's not. Most likely you're doing too many dynamic memory allocations to begin with. Note that even if you invoke the fast path 20x more often than the slow path, the fast path's cost is still going to be peanuts if your slow path is 1000x slower.

The best way to support your point would be to illustrate it with realistic examples instead of hypotheticals. Otherwise this is like the JIT vs. AOT debate, which always goes nowhere because the JIT side only ever uses hypotheticals to support their case.


That's not fair, though, because it really depends on how often the fast path gets hit. In good dynamic language runtimes, for example, the fast path is extremely optimized and hit very often, so even though the slow path can be thousands or millions of times slower the application performance might track the fast path rather than the slow one.


> In good dynamic language runtimes, for example

I'm talking about C++, not dynamic languages?

Are you talking about implementing a dynamic language in C++? Sure, do whatever you want in that case. Seems pretty fair to say that >99.9(9...?)% of developers aren't doing that.

I think it's safe to say writing a dynamic language runtime is to dynamic memory allocation as NASCAR is to car acceleration. You (should) already know your metrics will necessarily differ from most other people's.


That was just an example of a place where fast-path performance defines overall runtime performance, which is why those fast paths get so heavily optimized. You can see a similar pattern when inserting keys into a hashmap, for example.



