If you talk to your Asian and South Asian colleagues, the broadly held view among a lot of foreign "non-aligned" nuclear countries is that Iran's regime was foolish: instead of going nuclear first, it tried to squeeze the West by merely threatening to get nukes. The shrewder states like India and China sprinted towards getting nukes before forcing the West to the negotiating table.
The Middle Eastern states are somewhat unique (and perhaps this is what inspired the "end of history" Western-convergence school of thought in late-90s geopolitical theory) in that they cannot survive without trade and exchange with the West. The Asian powers like India/Pakistan/China/DPRK are all perfectly happy to be isolationist states pursuing autarky and nuclear freedom, but all of the Middle Eastern countries (including ones like Syria and Libya) want to cosy up to and trade with the West instead of going full autarky. My theory is that they are stuck in the oil resource trap: it's just too easy to print money with oil compared to having to work and innovate.
Then again, Iran is fractured internally; there are a lot of traitors within selling out the country to foreign powers. If you have Persian colleagues, ask them about the Iranian "Mossad jokes". They have a lot of funny jokes about the regime and Israeli intelligence.
There was nothing useful about the particular formula they were teaching. It wouldn't even be useful for a bureaucrat. It only tested how well you knew the formula, how confidently you simplified inherently nuanced topics, and how lucky you got that the random underpaid SAT grader (usually a teacher looking for a pittance of extra cash) thought your essay fit the rubrics they were given.
True. Writing structures for arguments and analysis make a huge difference in effective writing.
I wish brevity and linguistic precision were taught more, as well. Miscommunication due to ambiguity is one of the biggest causes I see for confusion or heated arguments.
Yeah, I think SSD/NVMe makes all the difference here - I certainly remember XP / Vista / Win 7 boxes that became unusable and more-or-less unrecoverable (just like Linux) once a swap storm started.
What I want is to be able to drag and drop files in my remote server to and from my desktop as if it's an NFS/NAS. What's the best option for this that will fully saturate the link?
KDE/Dolphin has support for sftp built in. If you have sshd on your server you can just open/bookmark sftp://host/path/ as a folder and use it normally. I get around 0.9Gbit/s on a 1Gbit/s link.
Presumably the article is more referring to turbulence at a macro scale. If the air is so turbulent that the compressor blades stall because of it, well, we have bigger problems.
You can recognize a threat to national security without supporting the ideology behind it. It sounds like you are trying to spread FUD around stronger privacy regulations. It would be a lot less funny when the shoe is on the other foot and it's not Iranian networks being compromised. Are you perhaps a vendor of mass surveillance systems, like your username's namesake?
I will make it simpler to understand. There is only one thing that makes or breaks package resolution: whether, and when, you support diamond dependencies.
A diamond dependency is when you have package A depending on packages B and C, B depends on D@v1 while C depends on D@v2, and v1 and v2 are incompatible versions of D. This is a classic dependency conflict problem, and whether you can resolve it automatically and bundle both versions into the final codebase/binary is the most important architectural decision of the package manager.
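To make the shape concrete, here's a minimal sketch (all package names and versions are made up):

```typescript
// Hypothetical dependency graph for the diamond described above.
interface Pkg {
  name: string;
  version: string;
  deps: Pkg[];
}

const dV1: Pkg = { name: "D", version: "1.0.0", deps: [] };
const dV2: Pkg = { name: "D", version: "2.0.0", deps: [] };

const root: Pkg = {
  name: "A",
  version: "1.0.0",
  deps: [
    { name: "B", version: "1.0.0", deps: [dV1] }, // B requires D@v1
    { name: "C", version: "1.0.0", deps: [dV2] }, // C requires D@v2
  ],
};

// A resolver that must pick exactly one version per package name fails here,
// because no single version of D satisfies both B and C.
// An npm-style resolver can instead keep both copies, e.g.:
//   node_modules/B/node_modules/D -> 1.0.0
//   node_modules/C/node_modules/D -> 2.0.0
```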
Package managers/ecosystems that support diamond dependencies in most circumstances:
npm (as long as it's not a peer dep), Golang, Rust, Java/.NET (with shading enabled; it's not turned on by default).
With diamond dependency support, in most circumstances you can have arbitrary depth/complexity of dependency resolution.
If you don't support diamond dependencies (basically the rest of the world: Python, Ruby, Dart, Elixir, most Lisps in their default setup, statically linked C/C++ in default configurations, maybe Zig too, I am not sure about that one), your dependency tree size is severely limited, and optimal dependency resolution becomes a pseudo-SAT problem in some cases.
This is the core algorithmic and architectural limit on package managers. Almost everything else is just implementation and engineering details. Stuff like centralized vs non-centralized repos, package caching proxies, security hashes, chains of trust, vendoring, SLSA/SBOM etc. can all be bolted on as an afterthought, but supporting conflicting upstream dependencies simultaneously requires support at the bundler/transpiler/compiler level.
It's also why some languages lend themselves better to tools like Bazel that micromanage every single dependency you have, while others do not.
My sibling makes a great point about type errors: did you know Cargo (Rust) only supports diamond dependencies where the versions differ in major version[^0]? So you can have exactly the same problem with B depending on D@v1.1 and C depending on D@v1.2 in Cargo. I believe the reason for only supporting concurrent versions with different major versions (to use the paper's parlance) is that packages with different major versions should have incompatible APIs anyway.
[^0]: Or 0 major version and differing minor version -- Cargo has its own definition of semver-incompatible.
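A rough sketch of that rule (the function name and encoding here are mine, not Cargo's API): compatibility is decided by the leftmost non-zero component, and only incompatible versions are allowed to coexist.

```typescript
type SemVer = [major: number, minor: number, patch: number];

// Two versions are "semver compatible" in Cargo's sense if their leftmost
// non-zero components agree; Cargo only keeps concurrent copies of versions
// that are *incompatible* under this rule.
function cargoCompatible(a: SemVer, b: SemVer): boolean {
  if (a[0] !== 0 || b[0] !== 0) return a[0] === b[0]; // 1.x vs 2.x
  if (a[1] !== 0 || b[1] !== 0) return a[1] === b[1]; // 0.1.x vs 0.2.x
  return a[2] === b[2];                               // 0.0.1 vs 0.0.2
}

cargoCompatible([1, 1, 0], [1, 2, 0]); // true  -> must be unified, cannot coexist
cargoCompatible([1, 0, 0], [2, 0, 0]); // false -> can coexist
cargoCompatible([0, 1, 0], [0, 2, 0]); // false -> can coexist
```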
> ... and it becomes a pseudo SAT problem in some cases if you want optimal dependency resolution
A couple of clarifications: many dependency resolution algorithms are essentially SAT even if they support concurrent versions (see Cargo). Section 3.3 of the paper might be an interesting read -- it discusses the spectrum of complexity in the problem of dependency resolution, and why some ecosystems' approaches don't work for others. Also, it's generally a 'pseudo SAT problem' (i.e. NP-complete and reducible to SAT) to find any valid resolution, not just an optimal one.
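As a toy illustration of why "pick exactly one version per package" is a constraint-satisfaction problem (made-up packages, not how any real solver works):

```typescript
type Choice = Record<string, string>; // package name -> chosen version

// requires[pkg][version] lists the (dependency, version) pairs that must also be chosen.
const requires: Record<string, Record<string, [string, string][]>> = {
  A: { "1": [["B", "1"], ["C", "1"]] },
  B: { "1": [["D", "1"]] },
  C: { "1": [["D", "2"]] },
  D: { "1": [], "2": [] },
};

// A choice is valid when every requirement of every chosen version is satisfied.
function valid(choice: Choice): boolean {
  return Object.entries(choice).every(([pkg, ver]) =>
    requires[pkg][ver].every(([dep, depVer]) => choice[dep] === depVer)
  );
}

// Without concurrent versions you must search over assignments (exponential in
// general, hence the reduction to SAT); here no single choice of D works:
valid({ A: "1", B: "1", C: "1", D: "1" }); // false (C needs D@2)
valid({ A: "1", B: "1", C: "1", D: "2" }); // false (B needs D@1)
```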
> This is the core algorithmic and architectural limit on package managers. Almost everything else is just implementation and engineering details.
I agree, and that's why the paper focuses on the semantics of dependency expression and dependency resolution! But there's a lot more than concurrent versions in the semantics of how package managers express and resolve dependencies, e.g. features, formula, peer dependencies. The point of the paper is that there's a minimal common core that we can use to translate between package management ecosystems, which we're planning to use to build useful tooling that bridges multilingual dependency resolution.
> So you can have exactly the same problem with B depending on D@v1.1 and C depending on D@v1.2 in Cargo. I believe the reason for only supporting concurrent versions with different major versions (to use the paper's parlance) is because packages should have incompatible APIs anyway.
Presumably you mean compatible rather than incompatible there?
The Rust ecosystem standardised on semver. This means it is perfectly allowed to use 1.2 in place of 1.1. While you can specify upper bounds for the dependency ranges, that is extremely uncommon in practice. Instead the bounds are just "1.1 or newer, semver compatible", etc.
See https://semver.org/ for more on semver (but do note that Rust uses a variation, where it also applies to the leading non-zero component of 0.x).
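Concretely, a plain requirement like "1.1" in Cargo behaves like "at least 1.1, same compatibility line". A hand-rolled sketch of that check for versions >= 1.0 (not Cargo's actual matcher; for 0.x the same idea applies to the leading non-zero component instead):

```typescript
type Version = [major: number, minor: number, patch: number];

// Default ("caret"-style) requirement: stay on the same major line and be
// at least the requested version.
function satisfiesDefaultReq(req: Version, v: Version): boolean {
  if (v[0] !== req[0]) return false;           // different major line -> incompatible
  if (v[1] !== req[1]) return v[1] > req[1];   // newer minor is fine
  return v[2] >= req[2];                       // same minor: need newer-or-equal patch
}

satisfiesDefaultReq([1, 1, 0], [1, 4, 2]); // true
satisfiesDefaultReq([1, 1, 0], [2, 0, 0]); // false
satisfiesDefaultReq([1, 1, 0], [1, 0, 5]); // false
```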
How many of those are between a crate and its proc macro crate? E.g. serde and serde_derive. I believe that is a common use case for exact dependencies, as they are really the same crate but have to be split due to how proc macros work. But that is getting down in the weeds of the peculiarities of how rustc works.
Very good points. Though to be pedantic, for package managers with concurrent/diamond dependency support, there's nothing stopping you from pulling in every single dependency of every dependency (this is ~linear time with respect to the size of the dependency tree, since you are not conducting any search here but just pulling them in at face value), and maybe deduplicating in linear/constant time with a Set data structure. In this case it's very obviously not a SAT problem, but it's ridiculously inefficient since there's zero optimization on the dependency tree. The moment you apply optimizations to turn the tree into a graph and prune it, it gets closer to, yes, a SAT problem.
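A sketch of that naive "just pull everything in" pass (made-up types, assuming a runtime that tolerates concurrent versions):

```typescript
interface Node { name: string; version: string; deps: Node[] }

// Walk the whole tree once and dedupe exact (name, version) duplicates with a
// Set; no search or backtracking, so this is linear in the size of the tree.
function flatten(root: Node): Node[] {
  const seen = new Set<string>();
  const out: Node[] = [];
  const stack = [root];
  while (stack.length > 0) {
    const pkg = stack.pop()!;
    const key = `${pkg.name}@${pkg.version}`;
    if (!seen.has(key)) {
      seen.add(key);
      out.push(pkg);
    }
    stack.push(...pkg.deps);
  }
  return out;
}
```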
The paper does make this distinction under the "Concurrent Versions" property.
Allowing concurrent versions though opens you up to either really insidious runtime bugs or impossible-to-solve static type errors.
This happens e.g. when you receive a package.SomeType@v1, and then try to call some other package with it that expects a package.SomeType@v2. At that point you get undefined runtime behavior (JavaScript), or a static type error that can only be solved by allowing you to import two versions of the same package at the same time (and this gets real hairy real fast).
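A minimal TypeScript-flavored sketch of the static-type flavor of this (class names invented; in reality the two types would come from the two bundled copies of the package):

```typescript
// Stand-ins for the two copies of the same package that got bundled.
class SomeTypeV1 { kind = "v1" as const; }   // what B hands you
class SomeTypeV2 { kind = "v2" as const; }   // what C expects

// B's API produces a value of its copy of the type...
function fromB(): SomeTypeV1 { return new SomeTypeV1(); }

// ...and C's API expects a value of *its* copy of the type.
function intoC(x: SomeTypeV2): void { console.log(x.kind); }

// Even though both are "the same package's SomeType", the types don't unify:
// intoC(fromB()); // type error: 'SomeTypeV1' is not assignable to 'SomeTypeV2'
```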
Also, global state (if there is any) will be duplicated for the same package, which generally also leads to very hard-to-discover bugs and undefined behavior.
Good points. Practically speaking though, global state is rarely an issue unless it's the underlying framework (hence peer deps).
Modern languages are mostly lexically scoped, and relying primarily on global variables for state (singletons aside) has fallen out of favor outside of embedded, unless it's a one-off script.
While diamond dependencies are indeed one of the big complicating factors, the implementation and engineering details that remain matter a lot too. Section 4 covers the spectrum of quality-of-life features that do introduce subtleties: for example the order of resolution, peer dependencies, depops/features. These are all important for the ergonomics of package constraint expressions, irrespective of whether diamond dependencies are present or not.
The engineering details also flow from the practical implementation constraints: it makes a big difference whether solving can be done in linear time, or whether there's a noticeable pause, or (worse) you need a big centralised solver. The determinism also guides the implementation of chains of trust.
It's not about the package manager, it's about the runtime. Python isn't able to support this pattern with its resolution pipeline, so package managers have to do the work of deduplicating down to a single version.
By contrast Node.js has built-in capabilities that make this possible, so package managers are able to install multiple versions of the same package without that issue.
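A simplified sketch of why Node can do this (paths hypothetical; this is a rough paraphrase of the documented lookup, not the real implementation): each require is resolved against the nearest enclosing node_modules, so B and C can each find their own copy of D, whereas Python's import resolves against one shared search path (sys.path), which maps a module name to a single location.

```typescript
// Walk up from the requiring file's directory, trying node_modules at each level.
function nodeModulesCandidates(requiringDir: string, pkg: string): string[] {
  const candidates: string[] = [];
  let dir = requiringDir;
  while (dir !== "") {
    if (!dir.endsWith("/node_modules")) {
      candidates.push(`${dir}/node_modules/${pkg}`);
    }
    dir = dir.slice(0, dir.lastIndexOf("/"));
  }
  return candidates;
}

// B hits its own nested copy of "d" first, so D@v1 and D@v2 can coexist:
nodeModulesCandidates("/app/node_modules/B", "d");
// -> ["/app/node_modules/B/node_modules/d", "/app/node_modules/d"]
```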
It's not just that, it's also a filesystem layout issue. If you install everything in `/usr` or `<venv>/lib/pythonX.Y/site-packages` you cannot have two versions / variants of the same package installed concurrently.
For that you need one prefix per installation, which is what Nix, Guix, and Spack do.
The runtime can also use mount namespaces to support concurrent installations. Or, if there is a compilation step, the linker can avoid exposing symbols for clashing libraries and just resolve them within the dependency chain.
The package calculus allows all of these to be specified cleanly in a single form.