Those splits are inevitable. There is no One Size Fits All concurrency primitive. Embedded devs and web devs have vastly different requirements.
Either the split is "every use-case finds the primitive that works best for it" (Rust), or the split is "here's the blessed primitive, other use-cases can pound sand" (Go)
Must it? In fact, I'd expect the reverse: choosing a model for the language would tie the ecosystem to that model.
For example, while tokio is being integrated with hyper, it does not affect the "usual" users of hyper at all, both in terms of API and in terms of what's going on under the hood, cost-wise.
Having a blessed solution for async like tokio that is not in the stdlib seems to encourage folks to integrate with it, but not in an irrevocable manner; the core library stays intact and able to support potential other models in the future.
Sure, I do hope we can focus on one concurrency primitive, but the possibility that we may end up with async/await and coroutines at the same time seems kind of unfortunate to me.
AFAICT async/await the way Rust intends to have it (plans are rough right now) is basically coroutines with extra sugar to specialize them for futures. The core language won't know about event loops; it will just know about futures, and you wire your async/await code up to the event loop yourself. Since both, in the context of concurrency, will involve futures, they will probably work well together.
My initial motivation for following Rust is that, aside from C++ (which is generally hated by the open source culture), it is the only language that can integrate into and extend the traditional Linux C world seamlessly. Glad to see the initial attempt going well, and I hope the gnome people will no longer need to construct GUIs with C in the second decade of the twenty-first century.
> hope the gnome people will no longer need to construct GUIs with C in the second decade of the twenty-first century.
It may 'compile' to C, but we've had Vala for quite a while now. Honestly, my biggest complaint is that the syntax is so close to C# that I always find myself trying to do stupid things like `using System.Collections.Generic;` instead of Gee, etc. Oh, and it would be REALLY NICE if the compiler had support for custom attributes without needing to patch the compiler itself (even just storing metadata would be hugely beneficial, but I don't see any reason why there can't be a plugin API for the codegen side of things either).
Rust is great and all, but every time I play with it I just don't see how it would work well with the complex inheritance tree that is a widget toolkit (meanwhile GObject works just fine, even if it's a little verbose at the C-level).
> Rust is great and all, but every time I play with it I just don't see how it would work well with the complex inheritance tree that is a widget toolkit (meanwhile GObject works just fine, even if it's a little verbose at the C-level).
I'm not convinced that this is the best way to build widget toolkits rather than just how it's always been done. I'd like to see one built more on traits like IClickable instead of subclassing button.
HTML/CSS is already like this (for styling only), attributes can be mixed and matched.
> Rust is great and all, but every time I play with it I just don't see how it would work well with the complex inheritance tree that is a widget toolkit
I haven't actually done anything along this route myself, but I think the most promising approach is to model all the widgets as traits (and traits can depend on other traits, which lets you model the inheritance tree).
The problems lie both in the lack of field inheritance (something talked about in Rust as "virtual structs") and the difficulties with modeling a bi-directional tree structure with the borrow checker.
> Vala has been a great tool for prototyping, I love it myself, but debugging it is a nightmare, it’s filled with security issues and even if we fixed those really difficult problems, we’d be maintaining our own language on top of everything else. I would like to see us maintaining less stuff other than a desktop and the application development framework, not more.
Sadly Vala is not safe, which is an essential defect for a high-level language, and it seems impossible for it to get wide adoption outside the GNOME world.
Lack of inheritance is a shortcoming of Rust, but maybe we can have a kind of React-like GUI library for it.
Simply comparing M:N threading with futures is not an apples-to-apples comparison. In my experience using futures in JavaScript, they don't give you ordinary control flow at all. The goroutine's real equivalent is async/await, which transforms the CPS form of asynchronous I/O back into a synchronous-looking form, but it may impose more overhead than M:N threading since the stack state gets stored in the suspended continuation.
The Windows version is not available because mozjs (the JavaScript engine, i.e. SpiderMonkey) currently fails to compile on Windows, and Servo developers don't know when it will be fixed. Once it can be built on Windows, a Windows version may come out soon.
This is not true. Servo and SpiderMonkey compile fine on Windows using the MinGW toolchain. For the MSVC toolchain we don't have that fully working, but obviously SpiderMonkey is not the problem there, since the official Firefox builds use MSVC. Mostly we have to fix the build glue.
The Windows port didn't ship yesterday because of technical glitches post-install. Servo works fine, but after installation it had some strange behavior we couldn't track down in time. It should come in a few days.
I already have gtk2, gtk3, qt4, and qt5 installed on my Linux machine, a situation I would really like to get rid of.
And now GNOME devs expect users to install gtk2, gtk3, gtk4, gtk5, and gtk6 at the same time to solve incompatibility. Nice try.
You could have a system that has broken software instead?
But seriously, multiple years between major versions gives applications a lot of time to port forward (or they can stay on the version they want), and enough time to stabilize features that take more than 6 months to write. Turns out writing whole new rendering engines takes time and testing, and can't be flipped on as fast as JavaScript frameworks materialize.
If applications want to stay on a stable version for 5 years or more, that suddenly becomes tenable, which wasn't the case before unless you wanted to stick to gtk2, which is straight out of 1998 "how to write a toolkit".
This isn't very different from other systems that bump the soname and have multiple versions installed based on which ABI the applications require. I'm sure you have several of these on your system already. The reason the soname isn't enough for GTK has to do with parallel installability of headers, bindings, etc.
There is no guarantee that the good software you are using has someone maintaining it or interested in porting it, which is really common in the open source world. For me that's Texmacs, which only gets bug fixes nowadays, and a bunch of browser plugins whose authors aren't interested in porting them.
The serious problem here is that this plan shows a total lack of concern. A rolling release is OK for a distribution, but for a fundamental library it is unacceptable. Qt has been doing much better on this.
FWIW, Qt has been bumping major versions more often than us as well.
It really comes down to ensuring that developers are shipping a known quantity. They should be shipping with a set of libraries that were actually tested.
It's unreasonable to expect the toolkit authors to be able to test every possible permutation on every release unless people stand up to run build bots, automated testing, and report back to us when things break.
Keeping your Texmacs, for example, working in 5+ years time is important to us. That is why we want to give it a stable API that only gets bug fixes after a certain time frame. Is that such a bad idea? Would you really expect to magically get touch, HiDPI support, etc on a 5 year old application when you never changed your application code?
> For a fundamental library like this, devs should at least keep the core API stable for a long time, and release unstable components separately.
This is something we'd like to get to (say external widget libraries). But it requires, guess what, an ABI break :)
> As I have stated, I really don't like a large number of similar libraries installed on my machine, each with a bunch of dependencies.
This is a long running "problem" on GNU/Linux. I've been around for a couple of decades and the problem has existed pretty much the entire time. We all have some holy grail of design in how we'd like the world to be and are disappointed it isn't what we think it should clearly be.
I'm not saying your viewpoint isn't valid, just that I'm not sure you can put the necessity to solve it on our shoulders.
> As a gentoo user I would say that the ABI break may still be worth it if it lets a lot of applications continue working with just a recompile, right?
Yes, just a recompile in all the cases we've really discussed. However, if we can break ABI in minor releases, we have contemplated the idea of installing private headers to allow developers to "do wtf they want" behind an I_KNOW_WHAT_IM_DOING #define or something.
But the important thing is to test the software!
> The controversy around this plan originates from people's inability to understand why such a fundamental library as gtk+ needs API breaks at such a high frequency, which has a huge impact on the experience of app devs and users. The world has its intrinsic complications, but we all hope to avoid the incidental ones.
That is the thing. We are a very small team of mostly part time contributors trying to build a toolkit that competes with the big players, who have teams the size of hundreds. There is a lot of work to do, with an insane cadence required.
I don't think this is really any different from choosing your target version for, say, iOS, Android, or macOS. They are likely breaking subtle things in between major releases too, but you can either 1) lock to a version or 2) upgrade and fix the world.
> No it's not. In the case of a device, only a small bunch of things break. For gtk+, everything breaks without a fix.
That's because about every other year they have the equivalent of a major ABI break (and then you explicitly target the newer version or stay locked to the old one), and the platform ships both.
As a gentoo user I would say that the ABI break may still be worth it if it lets a lot of applications continue working with just a recompile, right?
The controversy around this plan originates from people's inability to understand why such a fundamental library as gtk+ needs API breaks at such a high frequency, which has a huge impact on the experience of app devs and users. The world has its intrinsic complications, but we all hope to avoid the incidental ones.