The post I commented on was arguing that what sets Macs apart from other options with 8 GB of RAM, and what makes them more expensive, is that they are seen as a status symbol. I made a point against that by mentioning two areas in which Macs are truly superior.
> and the worst have died away (things like goto and classical inheritance)
What's so wrong with classical inheritance, and how did it die away while being well supported in most popular programming languages of today (Python, C++, Java, C#, TS, Swift)?
In a sense, it’s like global variables. Almost every complex program [1] has a few of them, so languages have to support them, but you shouldn’t have too many, and people tend to say “don’t use globals”.
[1] Some languages, such as classical Java, made it technically impossible to create them, but you can effectively create one with
class Foo {
    public static int bar;
}
If you’re opposed to that, you’ll end up making that field non-static and introducing a singleton instance of “Foo”, again effectively creating a global.
In some Java circles, programmers will also wrap access to that field in getters and setters, and then use annotations to generate those methods, but that doesn’t make such fields non-global.
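A minimal sketch of the three patterns described above (class and field names are hypothetical): a public static field, a singleton wrapping an instance field, and a static field hidden behind getters and setters. All three are a single process-wide mutable value that any code can change, i.e. effectively a global.

```java
// Assumed illustrative names; not from any real codebase.
class Config {                          // 1. plain public static field
    public static int timeout = 30;
}

class Settings {                        // 2. singleton wrapping an instance field
    private static final Settings INSTANCE = new Settings();
    public int timeout = 30;
    private Settings() {}
    public static Settings getInstance() { return INSTANCE; }
}

class Options {                         // 3. "encapsulated" via getter/setter,
    private static int timeout = 30;    //    still one shared value per process
    public static int getTimeout() { return timeout; }
    public static void setTimeout(int t) { timeout = t; }
}

public class Main {
    public static void main(String[] args) {
        // Any code, anywhere, can mutate all three.
        Config.timeout = 10;
        Settings.getInstance().timeout = 10;
        Options.setTimeout(10);
        System.out.println(Config.timeout + " "
            + Settings.getInstance().timeout + " "
            + Options.getTimeout());
    }
}
```

The getter/setter variant (often generated with Lombok-style annotations) changes the syntax of access, not the sharing semantics: every caller still reads and writes the same hidden static value.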
Who are the Wirths, Dijkstras, Hoares, McCarthys, and Kays of today? I mean: who represents the current generation of such thinkers? Genuinely asking. Most of what I see here and elsewhere is blog posts, videos, and rants by contemporary "dev influencers" and bloggers (some of them very skilled and capable, of course, often more so than I am), but I would like to be in touch with something more thoughtful and challenging.
Contemporary PL designers who have inspired my programming language design journey the most are people like Chris Granger (Eve), Jamie Brandon (Eve/Imp/others), Bret Victor (Dynamicland), Chris Lattner (Swift / Mojo), Simon Peyton Jones (GHC/Verse), Rich Hickey (Clojure), and Jonathan Edwards (Subtext). My favorite researcher is Amy J. Ko for her unique perspective on the nature of languages. Check out her language "Wordplay" which is very interesting.
There is a lot of good computer science, but the computer science community today is vastly larger than it was in the 1960s and 1970s when Dijkstra, Knuth, Wirth, and others became legends. There are so many subfields of CS, each with its own deep literature and legendary figures. It’s difficult to be a modern Dijkstra or Knuth due to these factors, though to be fair, it is an impressive feat for Dijkstra to be Dijkstra and for Knuth to be Knuth even in their heydays. It’s just easier to get famous in an upstart field compared to getting famous in a mature field.
I think there are two typical paths to widespread visibility across CS subfields: (1) publishing a widely-adopted textbook, and (2) writing commonly-used software. For example, many computer scientists know about Patterson and Hennessy due to their famous computer architecture textbooks, and many computer scientists know about people like Jeff Dean due to their software.
Reading more academically-oriented literature such as the ACM’s monthly periodical “Communications of the ACM” is also a good way to get acquainted with the latest developments of computer science.
I can't claim to be equal to the greats, but I do run a Discord server where I think and talk a lot about both the philosophy and practice of language design while building tools that I hope will change the state of the art: https://discord.gg/NfMNyYN6cX
Very hot and edgy take: theoretical CS is vastly overrated and useless. As someone who actively studied the field, worked on contemporary CPU architectures, and still does some casual PL research: aside from VERY FEW instances from theoretical CS about graphs and algorithms, it has had little to zero impact on practical developments in the overall field since the 80s. All the modern-day Dijkstras produce slop research about weaving dynamic context into Java programs, converting funding into garbage papers. Deeper CS research is totally lost in type gibberish or nonsense formalisms. IMO research and science overall are in a deep crisis, and I can see it clearly from the CS perspective.
Well, I think there is something to it. Computers were at some point newly invented so research in algorithms suddenly became much more applicable. This opened up a gold mine of research opportunities. But like real life mines at some point they get depleted and then the research becomes much less interesting unless you happen to be interested in niche topics. But, of course, the paper mill needs to keep running and so does the production of PhDs.
You made a preposterous statement, got called out, and are now making excuses.
Anybody who claims to have studied "theoretical computer science" could never make the statements that you did (and in a thread about the achievements of Niklaus Wirth, who was one of the most "practical" of "theoretical computer scientists", no less!).
I assume that you are talking about modern "theoretical CS". Among the "theoretical CS" papers from the fifties, sixties, seventies, and even some more recent ones, I have found a lot that remain very valuable. I have also seen many modern programmers who either make avoidable mistakes or implement very suboptimal solutions, just because they are no longer aware of old research results that were well known in the past.
I especially hate those who attempt to design new programming languages today but demonstrate a complete lack of awareness of the history of programming languages, introducing into their languages many design errors that had been discussed decades ago and for which good solutions had been found at the time. Those solutions, however, were implemented in languages that never reached the popularity of C and its descendants, so few know about them today.
If you had really followed the research in type systems and seen how it *factually* intersects with practical reality, you wouldn't joke about it. What they do in "research" is bizarre nonsense; the sane implementations (only slightly grounded in formalisms) are what actually get used.
I do, and I hope that one day stuff like dependent types and formal proofs will be everyday tools, alongside our AI masters, which also don't use any learnings from scientific research.
> There is hope that with AI we get to better tested, better written, better verified software.
And that is one thing we certainly won't get.
This tech, in a different world, could empower common people and take some weight off their shoulders. But in this world its purpose is quite the opposite.
But this is not something new that came with AI, even if it is the most recent and most visible symptom of the sickness. We keep buying tons of useless crap and converting it into tons of trash. We waste tremendous amounts of energy on the most trivial whims. Frugality was never the dominant idea.
As far as I can tell after a quick Google, you can't share your Qt UI with the browser version of your app. Considering that "lite" browser-based versions of apps are a very common funnel to a more featureful desktop version, it makes sense to just use the UI tools that already work and provide a common experience everywhere.
The same search incidentally turned up that Qt requires a paid license for commercial projects, which is surprising to me and obviously makes it an even less attractive choice than Electron. Being less useful and costing more isn't a great combo.
> you can't share your Qt UI with the browser version of your app
You can with WASM (but you shouldn't).
> Qt requires a paid license for commercial projects
It doesn't; it requires a paid license if you don't want to abide by the (L)GPL license, which should be a fair deal, right? You want to get paid for your closed-source product, so you should have no reservations about paying for the product that enables you to create yours, right? Or is it "money for me, but not for thee"?
> Being less useful and costing more isn't a great combo.
Very nice, but now explain why you are talking about using Qt to create apps, whereas the grandparent is talking about the experience of using apps created with Qt.
I looked up the WASM Qt target and it renders to a canvas, which hampers accessibility. The docs even call out that this approach barely works for screen readers [0], and that it provides partial support by creating hidden DOM elements. This creates a branch of differing behavior between your desktop and browser app that doesn't have to exist at all with Electron.
It should go without saying that the requirements of the LGPL license are less attractive than those of Electron's MIT license; fairness doesn't really come into it. Beyond the licensing hurdles that Qt devotes multiple pages of its website to explaining, they also gate commercial features such as "3D and graphs capabilities" [1] behind the paid license, which are more use cases that are thoroughly covered by more permissively licensed web projects that already work everywhere.
On your last point I'm completely lost; it's late here, so it might be me, but I'm not sure what distinction you're making. I interpreted dmix's comment generally to be about the process of producing software with either approach, given that my comment above was asking for details on alternatives from the perspective of a developer, not a user. I don't have any personal beef with using apps written with Qt.
I do frontend work, so I struggle to get over how bad most Qt GUIs are. They are far out of date compared to GNOME or macOS in a lot of the small widget details and menus.
Plus, I use a Mac these days, and Qt apps have just never looked right on that platform.
There is Classic Flang, there is New Flang (part of the LLVM tree), there is LFortran, there is Intel's ifx (also based on LLVM), and there is Nvidia's nvfortran (also based on LLVM, I think). And maybe even more.
The Fortran ecosystem is actually more prolific, at least in terms of toolchains, including proprietary ones, than those of most more popular languages.
Digression, but people sometimes forget that there is a whole world outside of Python and JS, and that GitHub stars or Show HN posts do not easily translate into real-world usage.
Today, there are at least 9 production-level surviving Fortran compilers (GNU Fortran, IFX, nagfor, nvfortran, XLF, Cray/HPE's ftn, Fujitsu's frt, old Flang-based Arm/AMD, and flang-new). This situation has advantages and disadvantages for our users. Their Venn diagram of equivalently implemented features is very much not a circle, and portability across compilers is really tough. The ISO standard is hardly clear and doesn't have a test suite or reference implementation, so it's been a very challenging task to make flang-new as easy to port existing codes to as possible.
People, I get it, you love Apple, but get in touch with reality.