> Sentry depends on it 20 times. 14 times it's a pin for 0.0.1, once it's a pin for ^1.0.0 and 5 times for ~1.0.0.
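For anyone who doesn't read semver ranges daily: an exact pin like `0.0.1` matches only that release, `~1.0.0` accepts patch updates (`>=1.0.0 <1.1.0`), and `^1.0.0` accepts minor and patch updates (`>=1.0.0 <2.0.0`). The `^` and `~` dependents can therefore all share one hoisted copy, but the 14 exact `0.0.1` pins can never be deduplicated against it. Roughly what that looks like on disk (package names invented for illustration):

```
node_modules/
├── leftpad/              # e.g. 1.0.5 — satisfies both ^1.0.0 and ~1.0.0, hoisted and shared
├── sentry-helper-a/
│   └── node_modules/
│       └── leftpad/      # 0.0.1 — exact pin, gets its own nested copy
└── sentry-helper-b/
    └── node_modules/
        └── leftpad/      # 0.0.1 — and another one
```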
This is what I was mentioning in the other thread (and being called a troll for... sigh). I appreciate the idealism of "if we have micromodules, we don't have to reimplement common helper functions, which scales to thousands of bytes saved!". But in practice there are craptons of duplicate dependencies at different versions, which scales negatively to hundreds of kilobytes wasted: in code, in downloads, in install time, in developer time (because devs install things too, a lot more often than end users in fact), etc.
It's one of the many problems that keep what's on paper from corresponding to what we actually get.
I don't see how that is worse than the alternative, where every library rewrites the micromodules itself. With N libraries each carrying their own version of leftpad, you get N copies of leftpad. With N libraries sharing published versions of leftpad, you get <= N versions. Seems like a win to me...
Ideally you would cut down on dependencies at every level, and end up with significantly fewer than N of them.
If you have 5 similar string formatting libraries that depend on leftpad, you could collapse them into one and have a single instance of leftpad inside the combined library. Fewer licenses, fewer READMEs, less time downloading, and with tree shaking you still get a similar end result.
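A sketch of the tree-shaking point, with made-up module and function names (assumes ES module named exports and a bundler such as Rollup, webpack, or esbuild):

```js
// string-utils.mjs — the hypothetical combined library
export const capitalize = (s) => s.charAt(0).toUpperCase() + s.slice(1);
export const reverse = (s) => [...s].reverse().join("");

// app.mjs — only capitalize is imported; a tree-shaking bundler
// statically sees that reverse is unused and drops it from the bundle
import { capitalize } from "./string-utils.mjs";
console.log(capitalize("hello")); // "Hello"
```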
In practice, you need to balance the overhead of an extra dependency with the benefits from sharing versions with others. When you add a dependency, you also now have a larger attack surface. Any extra dependency adds some amount of friction. Sometimes it is negligible. But, if you look at the overhead across all of your modules, it can add up quickly.
In some cases, the benefits from a dependency outweigh the overhead costs.
In other cases, you just write your own leftpad function and move on.
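And for scale, "write your own" here really is small. A minimal sketch, not the actual left-pad package (which handles more edge cases):

```js
// Pad a string on the left with a fill character until it reaches len.
function leftpad(str, len, ch = " ") {
  str = String(str);
  while (str.length < len) str = ch + str;
  return str;
}

leftpad("42", 5, "0"); // "00042"
```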
It seems like the thing to do would be for the community to build up a single set of commonly used functions. Having one authoritative package would make that package CDN-friendly, which should cut down on the bytes-downloaded problem; and if it becomes popular enough, it could eventually be bundled with browsers or rolled into the standard for the language itself.
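As an illustration of the CDN angle (lodash here is purely a stand-in for such an authoritative package; the URL follows jsDelivr's npm pattern): if enough sites reference the same canonical URL, the file is already in the user's browser cache before they ever visit yours:

```html
<!-- one well-known URL shared across many sites; cached after the first hit -->
<script src="https://cdn.jsdelivr.net/npm/lodash@4.17.21/lodash.min.js"></script>
<script>
  // the full build exposes itself as window._
  console.log(_.padStart("42", 5, "0")); // "00042"
</script>
```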
And worse, if you're using npm for frontend code, those wasted kilobytes get downloaded again every time every single end user loads your web app.
Previously, if you wrote a frontend library and wanted to depend on another library, you had to either tell developers to install it too or bundle it into your library (not a great idea), but either way the file size it added was obvious.
Now you can just drop one line into your package.json, which is super convenient, but it obscures the cost of that dependency, and of all of its dependencies.
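If you want that cost back in view, npm can at least show you the tree that one line bought (leftpad as a stand-in name; flag spelling varies across npm versions):

```sh
# every path by which leftpad enters your tree
npm ls leftpad

# the whole resolved dependency tree (--all on npm 7+, --depth=Infinity on older npm)
npm ls --all
```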
It's about maintenance and testing. Who is going to be responsible for maintaining functionality X? With a module, the module developer is; roll your own and you're responsible for everything!
The main question is about caching. Why not just have signed versions of everything floating around?
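Browsers already have a primitive close to this in Subresource Integrity: not signatures exactly, but a content hash pinned in the page, so any cached or mirrored copy can be verified before it runs. A sketch, with a placeholder URL and hash:

```html
<!-- integrity pins the expected hash of the file; if the fetched
     bytes don't match, the browser refuses to execute the script -->
<script
  src="https://cdn.example.com/leftpad@1.0.0/index.min.js"
  integrity="sha384-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"
  crossorigin="anonymous"></script>
```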