Hacker News | shatsky's comments

Does this mean Google just forked it to develop a proprietary XR runtime and doesn't contribute back?


https://github.com/shatsky/lightning-image-viewer

Tiny desktop (pre)viewer which displays the image in a transparent overlay without any UI, letting you inspect a specific image detail with a single hand movement (zoom with scroll and pan with drag simultaneously, like in map apps, with nothing but the display borders limiting the visible image surface) and toggle between the file manager and the image view almost instantly (close with a left click anywhere or with Enter).

Also finished the initial rewrite in Rust just hours ago :) (I originally wrote it in C and intentionally kept the Rust version close to the preceding C codebase before going further, so many things are still managed manually)


Vibes of the sunset of beautiful architecture. "It shouldn't be beautiful, it's just a tool, it should be practical; if it steals your attention [from some TikTok brainrot] to admire it, that's not a good thing." I can't say I like the most detailed icons best, but not the least detailed either.


I don't know much about Node, but Cargo has a lock file with hashes, which prevents dependency substitution unless the developer decides to update the lock file. Updating the lock file carries the same risks as the initial decision to depend on those packages.
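For illustration, a Cargo.lock entry pins both the exact version and a checksum of the downloaded package; Cargo refuses a download whose hash doesn't match. The package name and hash below are made up:

```toml
# Illustrative Cargo.lock fragment (name and checksum are hypothetical)
[[package]]
name = "some-dep"
version = "1.2.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9f8a4c1e2b..."  # SHA-256 of the .crate file
```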


And yet the Rust ecosystem practically killed runtime library sharing, didn't it? With this mentality that every program is not a building block of a larger system to be used by maintainers but a final product, statically linked with concrete dependency versions specified at development time. And then even multiple worker processes of the same app can't share common code in memory, like this lib, or a UI toolkit, multimedia decoders, etc., right?

PS. Actually, I'll risk sharing my thoughts about it (I'm new to Rust): https://shatsky.github.io/notes/2025-12-22_runtime-code-shar...


As a user and developer, runtime is my least favourite place for dependency and library errors to occur. I can't even begin to count the hours, days, I've spent satisfying runtime dependencies of programs. Cannot load library X, fix it, then cannot load library Y, fix it, then library Z is the wrong version, then a glibc mismatch for good measure, repeat.

I'd give a gig of my memory to never have to deal with that again.


If I recall correctly, Rust does not support any form of dynamic linking or library loading.

Most of the community I've interacted with is big on either embedding a scripting engine or WASM. There's a lot of momentum behind WASM-based plugins.

It's a weakness of both Rust and Go, if I recall correctly.


> if I recall correctly Rust does not support any form of dynamic linking or library loading.

Rust supports two kinds of dynamic linking:

- `dylib` crate types create dynamic libraries that use the Rust ABI. They are only useful within a single project though, since they are only guaranteed to work with the crates they were compiled alongside.

- `cdylib` crate types with exported `extern "C"` functions; this creates a typical shared library the C way, but you also need to implement the whole interface in a C-like, unsafe subset of Rust.

Neither is ideal, but if you really want to write a shared library you can do it, it's just not a great experience. This is part of the reason why it's often preferred to use scripting languages or WASM (the other reason being that scripting languages and WASM are sandboxed and hence more secure by default).
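As a minimal sketch of the `cdylib` route (the function name is made up): the exported surface is plain `extern "C"`, and in a real build you'd set `crate-type = ["cdylib"]` in Cargo.toml and there would be no `main`.

```rust
// Sketch of a cdylib-style export; a C caller would declare
// `int32_t add(int32_t, int32_t);` and link against the resulting .so.
#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}

fn main() {
    // Only here so the sketch runs standalone; a real cdylib has no main.
    println!("{}", add(2, 3)); // prints 5
}
```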

I also want to note that a common misconception seems to be that Rust should allow any crate to be compiled to a shared library. This is not possible for a series of technical reasons, and whatever solution is found will have to somehow distinguish "source-only" crates from those compilable as shared libraries, similarly to how C++ has header-only libraries.


It does support dynamic libs, but virtually all important Rust software seems to be written without any consideration for it.


Rust ABI (as opposed to C ABI) dynamic libraries are incredibly fragile with regard to compiler/build environment changes. Trying to actually swap them out between separate builds is pretty much unsupported. So most of the benefits of dynamic libraries (sharing code between different builds, updating an individual dependency) are not achieved.

They’re only really useful if you’re distributing multiple binary executables that share most of the underlying code, and you want to save some disk space in the final install. The standard Rust toolchain builds use them for this purpose last time I checked.


Yep, that's right. I've been working on a game with Bevy. The Bevy game engine supports being dynamically linked during development in order to keep compile times down. It works great.


People thinking C++ libraries magically solve this ABI issue are the other side of the coin. I've filed numerous bugs against packages shipping precompiled libraries but misusing the C ABI, so that (owned) objects cross the ABI barrier and end up causing heap corruption (with a segfault only if you're lucky) and other, much more subtle heisenbugs.


Rust does support C ABI through cdylib (as opposed to the unstable dylib ABI). This is used widely, especially for FFI. An example of this is Python modules in Rust using PyO3 [1].

[1] https://pyo3.rs/v0.15.1/#using-rust-from-python


Yeah but you can’t use the vast majority of crates that way. You have to create a separate unsafe C ABI, and then use it in the caller. Ergonomically, it’s like your dependency was written in C and you had to write a safe wrapper.


C++ has the opposite problem, where people think they can just dynamically or statically link against any API and be OK. You can't cross the ABI barrier without a) knowing it's there, and b) respecting its rules.

You get lucky when all artifacts have been compiled with the same toolchain (with the same options), but you will lose your mind when you hit issues caused by this thing neither you nor the package authors knew existed.


The Rust ABI is explicitly unstable. There are community projects to bring dynamic linking, but it's mostly not worth it.


That is not correct. Dynamic linking is natively supported in Rust. How else do you make modules for scripting languages like Python (using PyO3) [1]? It uses the stable C ABI (cdylib).

[1] https://pyo3.rs/v0.15.1/#using-rust-from-python


RAM is cheap mmmkay?

Or at least it used to be when they designed the thing…


Is it a RAM problem though? My understanding is that each process loads the shared library in its own memory space, so it's only a ROM/HDD space problem.


If you stop using shared libraries each application will have its own copy in ram…


The problem is vulnerable dependencies and having to update hundreds of binaries when a vuln is fixed.


Go supports plugins (essentially libraries), but it has a bunch of caveats.

You can also link to C libs from both. I guess you could technically make a Rust lib with a C interface and load it from Rust, but that's obviously suboptimal.


The dynamic libraries that use the unstable Rust ABI are called `dylib`s, while those that use the stable C ABI are called `cdylib`s. Suppose a stable version of the Rust ABI were defined: what would be the point of putting dynamic libraries that follow this ABI in the system? Only Rust would be able to open them, whereas system shared libraries are traditionally expected to work across languages via the C ABI and language-specific wrappers. By extension, this is a problem that affects all languages with more complex features than C. Why would this be considered a Rust flaw?


Go definitely supports dynamic libraries


I don't mean dylibs like you find on macOS; I mean loading a binary lib from an arbitrary directory and being able to use it, without compiling it into the program.

It’s been some time since I looked into this so I wanted to be clear on what I meant. I’d be elated to be wrong though


Both handle that just fine. Go does this via cgo, and has for over a decade.

You do still need to write the interfacing code, but that's true for all languages.


Then by that argument Rust also supports dynamic linking. Actually it’s even better because that approach sacrifices less performance (if done well) than cgo inherently implies.


Well, Rust does support dynamic linking. It just doesn't (yet) offer a stable ABI. So you need to either use C FFI over the dynamic linking bridge, or make sure all linked libraries are compiled with the same version of the Rust compiler.


It was built to do that, yes


In any modern OS with CoW forking/paging, multiple worker processes of the same app will share code segments by default.


COW on fork has been a given for decades.

You can't COW two different libraries, even if the libraries in question share the source code text.


Not really? You just need to define a stable ABI: you do that via `#[repr(C)]` and other FFI stuff that has been around since essentially the beginning. Then it handles it just fine, both for code using a runtime library and for writing those runtime libraries.

People writing Rust generally prefer to stay within Rust though, because FFI gives up a lot of safety (normally) and is an optimization boundary (for most purposes). And those are two major reasons people choose Rust in the first place. So yeah, most code is just statically compiled in. It's easier to build (like in all languages) and is generally preferred unless there's a reason to make it dynamic.
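A hedged sketch of that FFI surface (the type and function names are made up): `#[repr(C)]` gives the struct a defined, C-compatible layout, so it can cross a shared-library boundary both sides agree on.

```rust
// A #[repr(C)] struct has a stable, C-compatible field layout,
// unlike the default (unspecified) Rust layout.
#[repr(C)]
pub struct Point {
    pub x: f64,
    pub y: f64,
}

// An extern "C" function with C-layout arguments is callable
// across a dynamic-library boundary.
#[no_mangle]
pub extern "C" fn point_norm(p: Point) -> f64 {
    (p.x * p.x + p.y * p.y).sqrt()
}

fn main() {
    println!("{}", point_norm(Point { x: 3.0, y: 4.0 })); // prints 5
}
```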


Dynamic libraries are a dumpster fire as implemented right now, and I'd really prefer everything to be statically linked. But ideally, I'd like to see exploration of a hybrid solution, where library code is tagged inside a binary, so that if the OS detects that multiple applications are using the same version of a library, it's not duplicated in RAM. Such a design would also allow libraries to be updated if absolutely necessary, either by the runtime or by some kind of package manager.


OSes already typically look for duplicated code pages as opportunities to dedupe. This doesn't need special-casing for code pages, because it'll also find runtime heap duplicates that appear to be read-only (e.g. your JS JIT pages shared between sites).

One challenge will be that the likelihood of two random binaries having generated the same code pages for a given source library (even if pinned to the exact source) can be limited by linker and compiler options (eg dead code stripping, optimization setting differences, LTO, PGO etc).

The benefit of sharing libraries is generally limited unless you’re using a library that nearly every binary may end up linking which has decreased in probability as the software ecosystem has gotten more varied and complex.


I believe NixOS-like "build-time binding" is the answer, especially with Rust's "if it compiles, it works". Software shares code in the form of libraries, but any set of installed software built against some concrete version of a lib it depends on will use that concrete version forever (until an update replaces it with new builds built against a different concrete version).


The system you’re proposing wouldn’t work, because without additional effort in the compiler and linker (which AFAIK doesn’t exist) there won’t be perfectly identical pages for the same static library linked into the same executable. And once you can update them independently, you have all the drawbacks of dynamic libraries again.

Outside of embedded, this kind of reuse is a very marginal memory savings for the overall system to begin with. The key benefit of dynamic libraries for a system with gigabytes of RAM is that you can update a common dependency (e.g. OpenSSL) without redownloading every binary on your system.


Also, won't most of the lib be removed by dead-code elimination? And used code will be inlined where applicable, so there's nothing to dedup in reality.


I wish the standard way of using shared libraries were to ship the .so files a program wants to dynamically link against alongside the program binary (using RUNPATH), instead of expecting them to exist globally (yes, I mean all shared libraries, even glibc; first and foremost glibc, actually).

This way we'd have no portability issues, the same benefit as static linking except it works with glibc out of the box instead of requiring musl, and we could benefit from filesystem-level deduplication (with btrfs) to save disk space and memory.


What you're describing is not static linking, it's embedding a dynamically linked library in another binary.


IMHO dynamic libraries are a dumpster fire because they are often used as a method to provide external interfaces, rather than just to share common code.


> And yet Rust ecosystem practically killed runtime library sharing, didn't it?

Yes, it did. We have literally millions of times as much memory as in 1970 but far less than millions of times as many good library developers, so this is probably the right tradeoff.


C++ already killed it: templated code is only instantiated where it is used, so with C++ it is a random mix of what goes into the separate shared library and what goes into the application using it. This makes ABI compatibility incredibly fragile in practice.

And increasingly, many C++ libraries are header only, meaning they are always statically linked.

Haskell (or GHC at least) is also in a similar situation to Rust as I understand it: no stable ABI. (But I'm not an expert in Haskell, so I could be wrong.)

C is really the outlier here.


Static linking is still better than shipping a whole container for one app. (Which we also seem to do a lot these days!)

It still boggles my mind that Adobe Acrobat Reader is now larger than Encarta 95… Hell, it’s probably bigger than all of Windows 95!


A whole container, or even Chromium in Electron.


It's not just about memory. I'd like to have a stable Rust ABI to make safe plugin systems. Large binaries could also be broken down into dynamic libraries, making rebuilds much faster at the cost of leaving some optimizations on the table. This could be done today with a semi-stable, versioned ABI. New app builds would be able to load older libraries.

The main problem with dynamic libraries is when they're shared at the system level. That we can do away with. But they're still very useful at the app level.


> I'd like to have a stable Rust ABI to make safe plugin systems

A stable ABI would allow making more robust Rust-Rust plugin systems, but I wouldn't consider that "safe"; dynamic linking is just fundamentally unsafe.

> Large binaries could also be broken down into dynamic libraries and make rebuilds much faster at the cost of leaving some optimizations on the table.

This can already be done within a single project by using the dylib crate type.
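For reference, the crate-type switch lives in Cargo.toml; a hypothetical workspace member could opt in like this:

```toml
# Illustrative Cargo.toml fragment: build this crate as a Rust-ABI
# dynamic library (only guaranteed to work with the exact same
# compiler build that produced the rest of the project).
[lib]
crate-type = ["dylib"]
```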


Loading dynamic libraries can fail for many reasons but once loaded and validated it should be no more unsafe than regular crates?


You could check that mangled symbols match, and have static tables with hashes of structs/enums to make sure layouts match. That should cover low level ABI (though you would still have to trust the compiler that generated the mangling and tables).

A significantly more thorny issue is to make sure any types with generics match, e.g. if I declare a struct with some generic and some concrete functions, and this struct also has private fields/methods, those private details (that are currently irrelevant for semver) would affect the ABI stability. And the tables mentioned in the previous paragraph might not be enough to ensure compatibility: a behaviour change could break how the data is interpreted.

So at minimum this would redefine what is a semver compatible change to be much more restricted, and it would be harder to have automated checks (like cargo-semverchecks performs). As a rust developer I would not want this.


What properties are you validating? ld.so/libdl don't give you a ton more than "these symbols were present/absent."


It's really bad for security.


No, the transcription has nothing to do with the written text; it guessed a few words here and there but not even the general topic. That's a doctor's note about a patient visit, beginning with "Прием: состояние удовл., t*, но кашель" ("Visit: condition satisfactory, t* [temperature normal?], but coughing"). But unreadable doctors' handwriting is a meme...


The author puts the abstract idea of "page reclamation" in front of the things people actually want, namely performance, reliability, and controllable service degradation, because the author believes it is the one and only solution to them; and then defends swap because it is good for it.

No, this is just plain wrong. There are very specific problems which happen when there is not enough memory.

1. File-backed page reads cause more disk reads, eventually ending with "programs being executed from disk" (shared libraries are also mmapped), which feels like a system lockup. This needs neither an "egalitarian reclamation" abstraction nor swap, and swap does not solve it. It can be solved simply by reserving some minimal amount of memory for buf/cache, with which the system stays responsive.

2. Eventual failure to allocate more memory for some process. Any solution like "page reclamation" that pushes unused pages to swap can only raise the maximum amount of memory that can be used before this happens, from one finite value to a bigger finite value. When there is no memory left to free without losing data, some process must be killed. Swap does not solve this. The least bad solution would be to warn the user in advance and let them choose which processes to kill.

See also https://github.com/hakavlad/prelockd


Neither executables nor shared libraries are going to be evicted if they are in active use and have the "accessed" bit set in their page tables. This code has been present in the kernel's mm/vmscan.c since at least 2012.


I'll look into that again. If you're right about the unevictability of these pages, what is the mechanism that causes the sudden extreme degradation of performance when the system is almost out of memory because some app is gradually consuming it, going from a quite responsive system to a totally unresponsive one that can stay stuck thrashing the disk for ages until the OOM killer fires?


Once your active working set starts spilling into swap, the performance degradation goes exponential. The difference in latency between RAM and SSD is orders of magnitude. Assuming 10^3 difference, 0.1% memory excess causes 2x degradation, 1% - 10x degradation, 10% - 100x degradation.
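A tiny sketch of the arithmetic in the comment above (the linear mixing model and the 10^3 RAM-to-SSD ratio are assumptions, not measurements):

```rust
// If a fraction `excess` of memory accesses spill to swap, and swap is
// `penalty` times slower than RAM, the average access cost relative to
// an all-RAM working set is a weighted mix of the two:
fn slowdown(excess: f64, penalty: f64) -> f64 {
    (1.0 - excess) + excess * penalty
}

fn main() {
    let p = 1000.0; // assume SSD is ~10^3 slower than RAM
    println!("{:.1}x", slowdown(0.001, p)); // 0.1% excess -> ~2x
    println!("{:.1}x", slowdown(0.01, p));  // 1% excess   -> ~11x
    println!("{:.1}x", slowdown(0.1, p));   // 10% excess  -> ~101x
}
```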


Fun fact: EXIF is simultaneously a JPEG format (the EXIF spec describes a compressed image file format based on JIF, the base JPEG file format described in the JPEG spec; also, EXIF stands for "EXchangeable Image File format", suggesting its authors saw it as a new file format) and a TIFF format (EXIF metadata is actually embedded TIFF which can be parsed with tiffdump; the EXIF spec also describes uncompressed image file formats, which are TIFF with embedded EXIF metadata, which is also TIFF...)


It's a less fun fact when you have to write a parser for it.

All the various metadata formats are kind of weird. IIM (less popular now but still sometimes seen in JPEG files; it was originally for news organizations) is even weirder than EXIF. My favourite part is how you specify that it's UTF-8 by adding the ISO-2022 escape code in a field. Like, wut.


Yoshkar-Ola itself is a forgery; just google for photos of it and the name of that embankment street (no offence, just kidding; I even kind of like it, and hope they didn't destroy some valuable authentic architecture to build it).


I've happened to visit the city on a few occasions since 2000, before and after. They didn't destroy anything valuable (I think) as it was pretty generic before, but that fake embankment is bizarrely out of place indeed. They even have a bootleg Neuschwanstein castle, which is kind of ironic, considering that the original is also closer to being a forgery than to some authentic medieval castle.


Repairing it is as easy as rebooting and selecting the last generation that worked.


Indeed, that's nice

Fixing the issue ends up being rather difficult though

