But this is also why simply switching to OS-native APIs and compiled languages wouldn't help much for the average case. A team that doesn't care about performance in Electron also wouldn't care about performance in native applications. Performance isn't some magic pixie dust that automatically appears when choosing a different UI framework or programming language; it needs to be actively worked towards (some YMMV, of course).
Ehh... there's more than a little YMMV: every ecosystem has common, least-effort paths with certain performance characteristics and those characteristics vary greatly depending on language.
> performance isn't some magic pixie dust that automatically appears when choosing a different UI framework or programming language
That's a little misleading. If you write the same program in idiomatic C++ and Python, it's almost guaranteed that the C++ version will be much, much faster even before you have done any profiling or performance optimisation. So there is some magic pixie dust.
For a CRUD-style app, the difference in performance between Python and C++ is going to be absolutely negligible. The Python version might use more resources, and if you profile it, it might show up with a couple of hotspots, but you're still going to have a responsive desktop app if it's implemented properly.
The slowdown is architectural or design-based. Sending one HTTP request and not updating your UI until the request has completed fully is going to have _way_ more of an impact on the perceived performance and responsiveness of a native app.
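A minimal sketch of that design point, with the network call simulated by a timer (no real HTTP, and all names here are made up for illustration): the UI gets feedback immediately and reconciles once the request settles, instead of freezing until it completes.

```javascript
// Hypothetical sketch: give the user feedback immediately, reconcile later,
// instead of blocking the UI until the request completes.
const events = [];

function fakeRequest() {
  // stands in for a real HTTP call
  return new Promise(resolve => setTimeout(() => resolve('saved'), 50));
}

async function onClick() {
  events.push('ui: pending');          // user sees a spinner right away
  const result = await fakeRequest();  // the wait happens off the UI path
  events.push(`ui: ${result}`);        // reconcile once the request settles
}

onClick().then(() => console.log(events.join(' | ')));
// prints "ui: pending | ui: saved"
```

The same program with `await` before the first UI update would feel frozen for the whole round trip, even though it does identical work.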
The average Electron program has very little data to work with and not a lot of hard algorithms to run, yet still feels sluggish; that's the main complaint here.
It's because of work done on the main thread when it should be done in Workers. It's more about a lack of proficiency with regard to understanding UI applications themselves. Block the event loop in another language and you'll also get a laggy, unresponsive UI.
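The event-loop point can be shown in a few lines of plain JavaScript (runs in Node or a browser): any synchronous work on the main thread delays everything queued behind it, which in a UI means dropped frames and ignored input. The numbers are illustrative.

```javascript
// Sketch: synchronous work on the main thread delays every queued callback.
function busyWork(ms) {
  const end = Date.now() + ms;
  while (Date.now() < end) {} // monopolise the thread, as heavy app logic would
}

const scheduledAt = Date.now();
setTimeout(() => {
  // this 0 ms timer stands in for a paint or an input handler
  console.log(`fired after ${Date.now() - scheduledAt} ms`); // ~200, not 0
}, 0);

busyWork(200); // doing this in a Worker instead would keep the loop free
```

The timer is scheduled for "now" but cannot fire until the synchronous work returns; that gap is exactly the lag users perceive, regardless of language.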
There are probably a lot of different reasons why Electron apps can get laggy and unresponsive when sloppily written.
However, I notice a significant difference in responsiveness between VS Code and Sublime Text -- enough that I changed my workflow back to Sublime Text because the very slight latency difference annoyed me. So I do think there is a baseline difference between the frameworks used by those two apps that no amount of optimization can overcome. It's sort of like the acceleration difference between a truck and a sedan: sure, powerful trucks can sometimes out-accelerate anemic sedans. But if you put a similar drivetrain in both vehicles, the vehicle with less mass (or memory footprint, in the framework analogy) is going to win.
Stacking abstractions is a great way to give yourself Big O problems. Fighting the height of the stack reduces them, and "Use C++" does tend to fight the height of the stack.
C++ doesn't fight the stack more than any good JIT does. In fact, it may be less capable of noticing ways to inline functions, given that it's a statically compiled language. Its only current advantage with the stack is tail call optimization, which is coming to V8 very soon.
Funny that the C++ guy is talking about abstractions, where them V-Tables at?
They talked about the stack of libraries beneath you, not the execution stack. If an API call takes 10 microseconds because of several "premature optimization is the root of all evil" abstractions in between, then doing that just 10k times already nets you 100ms, which is a very noticeable stutter. With that limitation you are now forced to create elaborate data structures with caching etc. to try to work around the slow API call. Doing the same querying in C++ without those abstractions, where each call takes 10 nanoseconds, means you no longer have to build complex data structures to work around it: even if you do it a million times it would only take 10 milliseconds and maybe drop a frame.
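The arithmetic in that comment, spelled out (the 10 µs and 10 ns per-call figures are the comment's illustrative assumptions, not measurements):

```javascript
// Frame-budget arithmetic for the per-call costs assumed above.
const slowCall = 10e-6; // 10 µs per call through the abstraction layers
const fastCall = 10e-9; // 10 ns per call against the raw API

console.log(10_000 * slowCall * 1000);    // 100 ms: a very visible stutter
console.log(1_000_000 * fastCall * 1000); // 10 ms: under one 60 Hz frame
```

At 60 Hz the whole frame budget is about 16 ms, so the slow path blows through six frames with a tenth as many calls; that ratio is what forces the caching workarounds.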
Let me respond for him. He meant abstraction as in "stack of dependencies." Because everyone knows that's what abstraction means, right?! Because we all know NPM is a disaster as opposed to C++ dependency management which doesn't even exist in a standard form. /s
Microsoft has a C++ API you use to make programs for Windows. If you write in C++, you code directly against that. If you write in JavaScript, you code against someone else's API that they wrote to interface with the Microsoft API. If you write in JavaScript another step up, you code your plugin against the VSCode API, which then uses Electron's wrapper around Microsoft's API to do stuff (if there aren't more libraries in between). This stack of abstractions turned out to be too slow to solve the problem mentioned in the article we are talking about, so to solve it they moved the entire thing up one step in the stack.
In these situations it is common to have a problem that is really easy to solve, but the API abstractions you have to work with don't support the operations at the speeds you need to solve it. If you have never experienced that, then you aren't working on performance-intensive projects where every bit counts, and your input on this topic doesn't matter. I have worked on performance-intensive libraries crossing programming-language boundaries, and the abstraction boundaries absolutely put a limit on the amount of performance you can get. Well-designed abstractions are less obstructive, but plenty of them aren't well designed, and even the well-designed ones have overhead.
Microsoft has a COM API. C++ has a stack of macros and abstractions for communicating with a COM API. If you code in C++, you code against an abstraction over an interface to the Microsoft API, but not the API itself.
To get rid of abstractions you would almost have to write more directly in some sort of COM+ language.
(To explain the punchline for those that don't catch the joke: COM+ was the codename/early name of what became .NET.)
Microsoft has a COM API but when I used to develop, I'd just call kernel32, user32, advapi32, and other system C APIs directly. COM is a POS imo. DirectX is a decently engineered class-based API. But the rest of them have a lot of flaws.
The funny part is OP is talking all about uOPS when, if he were really hardcore (like I am), he'd know that these DLLs in turn call ntdll; many calls in ntdll are undocumented but much faster than their wrappers in the other DLLs. But no sane person is going to strictly do ntdll calls except for the most performance-critical code.
OP just doesn't know enough about C and C++; he probably grew up on C++ and forgot about the old C APIs. I used to reverse engineer and delve deep into the Windows API. I know a little bit more about performance than the average high-level programmer.
And ultimately, .NET does a fine job with performance. C++ coders crapping all over .NET should take a look at the Objective-C API of Apple. It's the default, and every Objective-C call incurs overhead and is basically a wrapper around the undocumented C API. But I don't think anyone ever complained about this abstraction, because it's such a stupid and small amount of performance to harp about. The convenience outweighs the tiny little uOPS loss.
Ah, so this is all coming from a Microsoft C++ guy. Well, better get to using those undocumented ntdll calls since you really need those uOps. It's not like the standard libraries of other languages are written in C or anything... ;-)
That is a VM running in a web browser. If I was thinking about a joke that was so comically far away from the metal, that’s pretty much what I would use as an example.
This is why it’s very difficult to have conversations. Most programmers really don’t even know how computers work.
Claims that JITs can produce code that's routinely faster than C++ have been around since Java. So far, the promised gains haven't really materialized. It appears that optimizations that can be gleaned from dynamically profiling a running program simply don't provide enough benefit to account for optimization opportunities lost because the JIT has to be "fast enough", and from language semantics that is inherently hard to compile to fast code (of which C++, quite intentionally, has little). V8 is not an exception.
Your specific example - inlining functions - is not particularly illustrative, since C++ can inline just fine across translation unit (and thus also static library) boundaries with link-time optimization. What it can't do is inline across shared library boundaries, but large C++ apps are usually mostly statically linked for redist anyway.
Except Windows loves COM, and it is all about DLLs and out-of-process IPC.
Those gains have materialized in distributed computing, where the 1% cases where C++ wins in micro-benchmarks don't really matter, when network latency, databases, load balancers and the whole lot come into play.
Java is not V8, it isn't Dalvik either. Java is a memory hungry turd that never delivered, agreed. But let's not forget that Google made their own runtime for Android and invested millions into V8.
> What it can't do is inline across shared library boundaries, but large C++ apps are usually mostly statically linked for redist anyway.
I'm guessing you've never looked inside System32 on Windows or /usr/*/lib on Unix. I don't see shared libraries as a bad thing, they are great for reducing disk and memory usage. Let's not make bullshit up about how most applications are statically compiled with all their dependencies ;-)
Java is not V8, of course, because one is a language, and the other one is a VM.
Java is also not JS. Semantics of Java are much better suited to effective compilation than that of JS, due to static typing and (in many cases) early binding. Consequently, modern Java VMs have the best JITs in the industry - faster than V8 - and they're still not on par with C++.
Most applications dynamically link to system libraries. On Linux, this is kinda fuzzy because package managers handle everything; and yes, I agree, on Linux the norm for distro-packaged software is dynamic linking. But stuff packaged as Flatpak etc is much more likely to be statically linked to anything other than libc. And idiomatic C++ tends to involve lots of templates, which are inherently "statically linked".
(Also, native code is broader than C/C++ - it includes e.g. Go, which drops even the libc dependency, and Rust.)
On Windows and macOS, though, where "app is a self-contained folder" has been the rule rather than the exception for a long time now, dependencies that don't come from the OS are often statically linked.
The JVM is hot garbage. It is slow to start up, a memory hog, and, like you mentioned previously, overabstracted; no amount of magic unboxing of primitives like int helps its performance. There are hundreds of different parameters one can set to control garbage collection and other performance characteristics. It's a job in itself dealing with the beast of the JVM. YMMV, but every time I've dealt with Java I was disappointed. I'd much rather use C++, Julia, or pretty much any other AOT or JIT than Oracle/Sun's JVM.
You keep saving face. So you admit Linux is largely dynamically linked. Well, Windows is too, even to the C stdlib; look at how many versions of the MSVC runtimes are in your System32 dir after just a few installs. Templates are expanded at compile time, so obviously they are "statically linked."
JVM is a hog, but when it comes to raw compute perf, it's a very fast hog once it starts running. Do you have any examples of anything better (post-startup)?
Yes, I'm well aware that many system DLLs on Windows in turn call into NTDLL, where the actual syscalls are. And yes, I agree that it hardly matters in practice - but it was your premise that inlining across shared object / DLL boundaries is crucial! In practice, yes, it almost never is. And yes, .NET is perfectly fine perf-wise, and even JS is fast enough for most cases. I've actually spent most of my career writing C# and Python, after writing a bunch of C++, and I very much appreciate the productivity gains those abstractions offer.
But this is a very different point. Native code is still measurably faster where it matters, and JS/V8 can't keep up for very good reasons.
We're talking about small, quick operations supporting a synchronous, interactive user interface here. When it comes to performance, asymptotics aren't everything.
Have fun implementing all the overhead just communicating with those workers. That's a serialisation pass, a deserialisation pass, and two event queues, all just so your application doesn't lock up.
Not if you use WebAssembly. You can simply pass an ArrayBuffer to be worked on; you postMessage it with the buffer in the transfer list.
> transfer (optional)
> An optional array of Transferable objects to transfer ownership of. If the ownership of an object is transferred, it becomes unusable in the context it was sent from and becomes available only to the worker it was sent to.
You can compile JS itself or pretty much any other language to WASM using Emscripten or other LLVM-based toolchains. Looking at VSCode, it appears they use this technique for some of the heavy lifting.
JS is very performant if you know how to use this hybrid architecture. The C++ guys above are shitting all over JS when an Electron app can simply use C++ transpiled to WASM if they really wanted to. Electron isn't really for JS as a coding language. It is much more for the awesome cross-platform UI you get with HTML, CSS, and JS. A lot has been done to optimize rendering engines. The same techniques that render snappy web pages can be used within Electron. All the griping above is really just ignorance.
WASM isn't JavaScript. WASM is a way to represent native code that some JavaScript runtimes can then compile and run as native code. If the JavaScript runtime hasn't implemented support for running WASM as native code, then WASM is ridiculously slow. If the VSCode team writes their code in C++, compiles it to WASM, runs the WASM, and it is fast, then it was C++ that was fast and not JavaScript.
If that is really how they got their performance then no wonder that few others actually managed to do it, because most teams wouldn't write their Electron app in C++ and then compile it to WASM. The difference is that Microsoft has a ton of C++ engineers so they could do it easily, but I doubt many Electron teams put out job postings to hire C++ people.
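The split the parent comments are arguing about is easy to see in miniature. Below is a hand-assembled WASM binary (the bytes encode a module exporting an `add(i32, i32)` function, written out by hand for illustration); the JavaScript side only loads and calls it, while the arithmetic itself runs as compiled code.

```javascript
// A minimal hand-written WASM binary:
// (module (func (export "add") (param i32 i32) (result i32) ...))
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm", version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32,i32)->i32
  0x03, 0x02, 0x01, 0x00,                               // one func of that type
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section header
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0/1, i32.add, end
]);

const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));
console.log(instance.exports.add(2, 3)); // 5
```

In practice nobody writes the bytes by hand; a compiler (Emscripten, Rust, etc.) emits them, which is exactly why the speed credit goes to the source language and its compiler rather than to JS.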
That's like saying that if you use the _asm directive in any language, your entire project is now assembly. Subtlety is not your strong suit. One can use WASM for critical paths that need manual memory management while using raw JS for other parts. And you did not hear me: you can write WASM in JS. Transpiling. Amazing, huh?
VSCode does not prove that at all. The main reason why it feels faster is that it's written from scratch to be async. VS is a legacy codebase going all the way back to 1997 (if not earlier, in places), with lots of code still running on the UI thread, and all extensions running in-process.
The big thing they don't tell you in CS class is that constant factors matter far more than asymptotic complexity most of the time. (There are a few exceptions though.)
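One way to see the point, with made-up but plausible per-operation costs (these are illustrative assumptions, not measurements): an O(n log n) routine with a small constant beats an O(n) routine with a big constant at every realistic n.

```javascript
// Illustrative (not measured) per-op costs. The crossover where the better
// asymptotics win is where 5e-9 * log2(n) > 1e-6, i.e. n = 2^200: never.
const lean = n => 5e-9 * n * Math.log2(n); // O(n log n), tight native-style code
const heavy = n => 1e-6 * n;               // O(n), but each op crosses layers

for (const n of [1e3, 1e6, 1e9]) {
  console.log(n, lean(n) < heavy(n)); // true every time
}
```

The exceptions are the cases where the exponent changes outright, e.g. an accidental O(n²) in what should be a linear pass; there the asymptotics do win, which is roughly what the sibling comment about large files is describing.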
A text editor (especially for code) is a good example of a case where both matter. It doesn't take much latency to make typing a very frustrating experience, so the common case (editing small files) needs to be very fast. But we also need to gracefully handle very large files that involve nontrivial processing (10kloc+ C++ files do unfortunately exist). I gave up on Atom several years ago when it slowed to a crawl opening a 4kloc C file. VS Code can handle many multiples of that without breaking a sweat, so it's my current first choice.
The base performance of a native app in most operating systems is higher than the base performance of your average electron app though, in my experience.
You're not wrong, but what about feature sets? The only native IDE I can think of is Xcode; the rest are either big Java ones or are lacking in features.
It absolutely would help. It's practically tautological to say that you can create bad performance in any language, but that ignores the very real fact that some languages and systems just perform better for any given coding skill level. With native apps, you have to be extremely bad at coding to get bad performance. With electron, you have to be extremely good at coding to get good performance. VSCode is the exception, not the rule.
Even then, it's a pretty poor exception. The "good" performance of VSCode doesn't scale. It's a pretty barebones editor by itself, and once you load it down with extensions, it slows down quite significantly. The Python extension, also written by Microsoft, was bad enough to make me leave VSCode for good.