True, it doesn't give you the bare machine. What it gives you is the thinnest of machine abstractions, with the possibility of linking in your own assembly if you have the need for it.
I am curious: what was it I said that you consider to be a myth? If I have some misunderstanding I would like to know. I looked at JOVIAL on Wikipedia quickly, but I can't see exactly how it would be thinner than C, or whether its compiler would output something vastly different from a C compiler's. Or did you mean it's as thin as C but came out earlier?
I see, you thought I meant that C was the only language with this property. No, there are plenty of others; I was fully aware of that. I, on the other hand, thought you meant that JOVIAL was somehow even thinner, or more tuned to the underlying architecture in a way that made it thinner than C.
First off, I want to congratulate you on reaching this milestone. I think this is the state where the most seasoned programmers end up. They know how to write code that works and they don't need a language to "help" or "guide" them.
If software development taught me anything, it is that everything that can go wrong will go wrong; the impossible will happen. As a result I prefer having fewer things that can go wrong in the first place.
Since I acknowledge my own fallibility and remote possibilities of bad things happening I have come to prefer reliability above everything else. I don't want a bucket that leaks from a thousand holes. I want the leaks to be visible and in places I am aware of and where I can find and fix them easily. I am unable to write C code to that standard in an economical fashion, which is why I avoid C as much as possible.
This is, perhaps surprisingly, what I consider the strength of C. It doesn't hide the issues behind some language abstraction; you are in full control of what the machine does. The bug is right there in front of you if you are able to spot it (given that it's not hiding away in some 3rd-party library, of course). That takes many years of practice, but once you have your own best practices nailed down, it doesn't happen as often as you might expect.
Also, code doesn't need to be bulletproof. When you design your program you also define a scope: this program will only work given these conditions. Programs that misbehave outside of your scope are actually totally fine.
Empirically speaking, programmers as a whole are quite bad at avoiding such bugs. Humans are fallible, which is why I personally think it's good to have tools to catch when we make mistakes. One man's "this takes control away from the programmer" is another man's "friend that looks at my work to make sure it makes sense".
None of that is written in pure C, as per the ISO C standard.
Rather, they rely on a mix of C compiler language extensions and helper functions written in inline or external assembly, which any compiled language also has available when it steps outside its standard.
When most people say "I write in C", they don't mean the abstract ISO C standard, with the possibility of CHAR_BIT=9. They mean "C for my machine" - so C with compiler extensions, assumptions about the memory model, and yes, occasional inline assembly.
That is not an argument. ANSI/ISO C standardizes the hardware-independent parts of the language, but at some point you have to meet the hardware. The concept of an "implementation platform" (i.e. CPU arch + OS + ABI) is well known for all language runtimes.
All apps using the above-mentioned are written in standard ANSI/ISO C. The implementations themselves are "system-level" code and hence have language/HW/OS-specific extensions, which is standard practice when interfacing with low-level code.
> any compiled language also has available
In theory yes, but in practice never with the ease or flexibility with which you can use C for the job. This is what people mean when they say "C is close to the metal" or "C is a high-level assembly language".
C took off because it was free and shipped alongside an operating system that initially was available for a symbolic price, as AT&T was forbidden from profiting from UNIX.
Had UNIX been a commercial operating system, with additional licenses for the C compiler, like every other operating system outside Bell Labs, we would not even be talking about C in 2026.
Being easily affordable/available in those times was the initial "hook" but C's subsequent and sustained success was due to a happy confluence of various design decisions.
Not too high-level, not too low-level; easy access to memory and the ISA; a simple abstract machine; an imperative, procedural style; spanning bare metal, OS, and applications; adoption by the free software movement, which produced free compilers and tools; becoming the de facto industry-standard ABI: all were crucial in its rise to power.
Note that its main competitor at the time, Pascal, lost out in spite of being simpler, having clean high-level features, being promoted by academia, being safety-focused, etc.
C is quirky, flawed, and an enormous success. While accidents of history surely helped, it evidently satisfied a need for a system implementation language efficient enough to displace assembly language, yet sufficiently abstract and fluent to describe algorithms and interactions in a wide variety of environments.
An LLM doesn't understand the difference between fact and fiction.
It just uses probability to choose the next word. Hopefully there are more facts in its training data that can serve as a guide. But if not, it will just as readily use fiction to produce something that sounds plausible.
Anything an LLM produces simply cannot be trusted and is a poor example of "intelligence".
If Linux previously always outperformed Windows, the result should be similar this time around as well. It could possibly be some missing feature or a bug in the Linux drivers, but that sounds unlikely to me; the architecture isn't fundamentally different. Maybe Windows ignores some thermal throttling? Something smells fishy here for sure.
1) Intel is optimising for common cases inside the most dominant desktop operating system. This is like Apple having really good floating point in their CPUs, which makes JavaScript not suck for performance, and is why MacBooks feel snappy with Electron.
2) Intel and Microsoft worked together when designing the CPU, so Windows is able to take advantage of some features that Linux is only just learning how to handle (or learning exactly how they work).
3) The way the operating systems schedule tasks happens, by accident, to be better in this generation on Windows than on Linux.
“it’s better” doesn’t really factor, Windows has been shown repeatedly over the last half-decade to be so inferior as to be beaten by Linux when Linux is emulating Windows APIs. It’s difficult to be so slow that you’re slower than someone emulating your platform.
Well, that too isn't exactly correct; Windows isn't getting beaten by Wine/Proton on Linux by any significant (nontrivial) margin. At best they're at par, and most of Linux's advantage comes from not having to bear the load of a thousand background processes (unlike Windows) when it's running a Windows app. I perfectly understand the appeal of desktop Linux, being an avid user myself, but let's be real here: it's not very likely to run a Windows-native app or game much better than a debloated/LTSC Windows setup.
I'd wonder about the performance impact of Windows Defender and the like.
I'd really wonder, if one took a game that was on both Xbox and Linux, constructed a Linux box with specs as close as possible to the Xbox's, and then benchmarked the games against each other, what we would see.
I'm not saying that Linux is better than Windows or that Windows is better than Linux, just that I think it's very hard to make an apples-to-apples benchmark comparison, and there are services constantly running on Windows that one doesn't generally have running on a Linux system that can cause problems.
You buy Windows as a product, and those subsystems are so spidered in that turning them off is not possible, and even if it were possible it would have some impact.
You buy Windows for games; that's been the consensus for years. The NT kernel could in theory run games 10x better, but it doesn't mean anything because you only get it with Windows.
So, an apples-to-apples comparison is Bazzite: the general-purpose operating system you install and play games on. No need to apologise for Microsoft's choices.
Otherwise Windows could make WSL (1) faster than Linux, but they can't, because the underlying operating system paradigms aren't similar enough.
I could give examples, but I think just comparing native python performance on both platforms is the easiest case I can make without going into details.
WSL 1 was faster and better than WSL 2, but they abandoned it for its technical complexity and switched to containers / virtual machines, which create a slew of Other Problems.
"Intel is optimising for common cases inside the most dominant desktop operating system."
- Literally the history of Intel for more than 30 years, and likely why we see this benefit now. Gaming the compiler and hoping they won't get caught bought them a decade against AMD.
"Intel and microsoft worked together when designing the CPU"
- I guess the bitterness of Itanium doesn't last forever.
That's what a good benchmark looks like.
From ancient wisdom (Linux Benchmarking Howto):
"5.3 Proprietary hardware/software

A well-known processor manufacturer once published results of benchmarks produced by a special, customized version of gcc. Ethical considerations apart, those results were meaningless, since 100% of the Linux community would go on using the standard version of gcc. The same goes for proprietary hardware. Benchmarking is much more useful when it deals with off-the-shelf hardware and free (in the GNU/GPL sense) software."
The oddity is that Windows is slower everywhere but on this one specific kind of laptop, as far as I understand. If it's not a quirk of the laptop, Windows would be better everywhere.
I just want to be a bit picky and say that bike shedding means focusing on trivial matters while ignoring or being oblivious to the complicated parts. What he described sounded more like a combination of feature creep/over-engineering.
The author could also have used the phrase "hobby horsing", which is similar to bike shedding in that the individual is focusing on things that don't really push the project forward, but which rather give them personal pleasure, instead. Bike shedding usually is explained as "working out what color to paint the bike shed before the rest of the house is done".
A command-line tool called berk, a versatile job dispatcher written in C. It is meant to replace big clunky tools like Jenkins, Ansible, etc. Its syntax is similar to git's. It works pretty well; I just need to iron out some kinks before the final release. https://github.com/jezze/berk
I think it would have been better if they had designed it so that the error code from the kernel came in a separate register. That would mean you didn't have to use a signed int for the return value. The issue is that one register is now sort of ambiguous: it holds either the thing you want or the error, but these are separate types. If you had them in separate registers you would have the natural type of the thing you are interested in without having to convert it. This would, however, force you to first check the value in the error register before using the value in the return register, but that makes more sense to me than the opposite.
That is quite expensive. Obviously you need to physically add the register to the chip.
After that the real work comes. You need to change your ISA to make the register addressable by machine code. The PDP-11 had 8 general-purpose registers, so it used 3 bits everywhere to address them. Now we sometimes need 4. Many opcodes can work on 2 registers, so we need to use 8 out of 16 bits to address both, where before we only needed 6. The PDP-11 also had a fixed 16-bit instruction encoding, so either we change it to 18-bit instructions or make more radical changes to the ISA.
This quickly spirals into significant amounts of work versus encoding results and error values into the same register.
There are quite a few registers (in all the ISAs I'm familiar with) that are defined as not preserved across calls; kernels already have to wipe them in order to avoid leaking kernel-specific data to userland, one of them could easily hold additional information.
EDIT: additionally, it's been a long time since the register names we're familiar with in an ISA actually matched the physical registers in a chip.
It is distinctly odd to watch people in the 2020s laboriously explaining how difficult all this stuff would be, when the reality was that the register scarcity that prompted this sort of double-duty in 1979 was already going away in mass-market computers in 1982.
By 1983, operating system vendors designing their APIs ab initio were already making APIs that just used separate registers for error and result returns. Sinclair QDOS was one well-known example. MS-DOS version 2 might have done things the PDP-11 way, but by the time of MS-DOS version 4 people were already inventing INT calls that used multiple registers to return things. OS/2 was always returning a separate error value in 1987. Windows NT's native API has always been returning a separate NTSTATUS, not doubled up with anything else, since the 1990s.
I was around then but never got that low level into things, so anecdotes like this always fascinate me.
Too, I'll read any post that mentions OS/2; I loved that OS so much as a user. Partially also because some of the REXX I learned in college could be put to use.
Yeah, I am not advocating creating a new separate register, even though that would be nice. Like the poster below said, there are usually some unpreserved registers to choose from, but if for some reason you can't spare a register, you could instead write the error code to some virtual address, or send a signal, a message, or anything else you could come up with. Just some way that does away with this intermix of return types and error types.
I have no experience using it so I might be wrong, but AMD has ROCm, which includes something called HIP that should be comparable to CUDA. I think there is also a way to automatically translate CUDA calls into HIP, so it should work without the need to modify your code.
To quote "The Dude"; "Well ... ummm ... that's ... ahh ... just your opinion man". There are people who are successfully running it in production, but of course depending on your code, YMMV.
Sad to hear this since I think this was the lander containing the Moonhouse art project. I would have loved to see the little red cottage on the moon with the Earth as its backdrop.
I know it didn't exactly serve any scientific purpose but an image like that could have been very inspirational to a lot of people.
I would buy one, but only if I am guaranteed to be able to compile the source code somewhat easily and flash it to the device. Does anyone know if they have made any promises around that?