Hacker News | ahepp's comments

I guess I don't write enough rust to say this with confidence, but isn't that the bare minimum? I find it difficult to believe the rust community would accept using a library where the API requires unsafe.

Not at all. Some things are fundamentally unsafe. mmap is inherently unsafe, but that doesn’t mean a library for it shouldn’t exist.

If you’re thinking of higher level libraries, involving http, html, more typical file operations, etc, what you’re saying may generally be true. But if you’re dealing with Direct Memory Access, MCU peripherals, device drivers, etc, some or all of those libraries have two options: accept unsafe in the public interface, or simply don’t exist.

(I guess there’s a third option: lie about the unsafety and mark things as safe when they fundamentally, inherently are not and cannot be safe)


Yeah I didn’t want to get into the weeds about inherently unsafe stuff, since the OP was about an XML parser

>I guess I don't write enough rust to say this with confidence, but isn't that the bare minimum

I have some experience, and yes, unless you're putting out a library specifically for low-level behavior like manual memory management or FFI. Trivia about the unsafe fn keyword missed the point of my comment entirely.


I don't think it makes a lot of sense to put those responsibilities on individual firms. In the USA, achieving maximum employment has been a mandate for the Federal Reserve to achieve through monetary policy. There are many advantages to allowing individual firms to optimize for productivity. There are also a lot of harms caused by forcing firms to adopt unproductive methods. Even Keynes' joking solution for unemployment was that the treasury might bury bottles of money for private industry to dig up.

> Salaries, benefits etc have all not been keeping up with inflation for decades

I don't believe that's consistent with the data

https://fred.stlouisfed.org/series/MEHOINUSA672N


I read an article in FT just a couple days ago claiming that increased productivity was becoming visible in economic data

> My own updated analysis suggests a US productivity increase of roughly 2.7 per cent for 2025. This is a near doubling from the sluggish 1.4 per cent annual average that characterised the past decade.

good for 3 clicks: https://giftarticle.ft.com/giftarticle/actions/redeem/97861f...


I think you're bringing up a great question here. If you ask a random person on the street "is your laptop fast", the answer probably has more to do with what software that person is running, than what hardware.

My Apple silicon laptop feels super fast because I just open the lid and it's running. That's not because the CPU ran instructions super fast, it's because I can just close the lid and the battery lasts forever.


My guess would be that ARM Chromebooks might run substantially more cut-down firmware, while Intel might need a more full-fat EFI stack? But I haven't used either and am just speculating.


what are you doing where you find the thermal limits noticeable?


I think in the example the OP is making, the work is not useless. They're saying if you had a system doing the same work, with maybe 60 processes, you're better off splitting that into 600 processes and a couple thousand threads, since that will allow granular classification of tasks by their latency sensitivity


But it is. He's talking about real systems with real processes in a generic way, not a singular hypothetical where suddenly all that work must be done. So you can also apply your general knowledge that some of those background processes aren't useful (but can't even be disabled due to system lockdown).


I think you're right that the article didn't provide criteria for when this type of system is better or worse than another. For example, the cost of splitting work into threads and switching between them needs to be factored in. If that cost is very high, then the multi-thread system could very well be worse. And there are other factors too.

However, given the trend in modern software engineering to break work into units and the fact that on modern hardware thread switches happen very quickly, being able to distribute that work across different compute clusters that make different optimization choices is a good thing and allows schedulers to get results closer to optimal.

So really it boils down to this: if the gains from doing the work on different compute outweigh the cost of splitting and distributing it, then it's a win. And for most modern software on most modern hardware, the win is very significant.

As always, YMMV


> (...) a singular hypothetical where suddenly all that work must be done (...)

This is far from hypothetical. It's an accurate description of your average workstation. I recommend you casually check the list of processes running at any given moment on any random desktop or laptop you find in a 5 meter radius.


I've done more than that - after noticing high CPU use I investigated what those processes do, discovered services that I never need and tried to disable them. Now try to actually prove your point


> Now try to actually prove your point

Count the processes.

Go on.


Are you on a Mac? Why are you asking me to waste time doing useless counts?


It's true, they don't "make 'em like they used to". They make them in new, more efficient ways which have contributed to improving global trends in metrics such as literacy, child mortality, life expectancy, extreme poverty, and food supply.

If you are arguing that standard of living today is lower than in the past, I think that is a very steep uphill battle to argue

If your worries are about ecology and sustainability I agree that is a concern we need to address more effectively than we have in the past. Technology will almost certainly be part of that solution via things like fusion energy. Success is not assured and we cannot just sit back and say "we live in the best of all possible worlds with a glorious manifest destiny", but I don't think that the future is particularly bleak compared to the past


Sure, it’s complicated.

I worry that humanity has a track record of diving head first into new technologies without worrying about externalities like the environment or job displacement.

I wish we were more thoughtful and focused more on minimizing the downsides of new technologies.

Instead it seems we’re headed full steam towards huge amounts of energy use and job displacement. And the main bonus is rich people get richer.

I’m not sure if having software be cheaper is beneficial. Is it good for malware to be easier to produce? I’d personally choose higher quality software over more software.

I’m not convinced cheaper mass produced clothing has been a net positive. Will AI be a positive? Time will tell. In the short term there are some obvious negatives.


> If you are arguing that standard of living today is lower than in the past, I think that is a very steep uphill battle to argue

We'd first have to agree on a definition for "standard of living". There are certainly many (important to me) aspects in which we have regressed and being able to buy cheap tech crap does not make up for it.


One could set an env var to their local bin dir which is otherwise not in the path, like L=/home/ahepp/.local/bin, and then do $L/mycommand. Doesn't meet the OP's requirement of no shift key.

Or prefix files in the local bin dir with a couple letters from your username, like /home/ahepp/.local/bin/ah-mycommand

