As a developer of desktop applications as well, I've asked myself these same questions. While I don't have any definitive answers for you, I'll share my thoughts.
If you consider that an ecosystem evolves to the constraints of its environment, then even from their pre-history at Xerox PARC, early GUI frameworks were incredibly constrained. But I think the most important constraint on their development was object orientation. Smalltalk blazed the trail for GUI programming, and everyone who followed modeled their systems that way. Object orientation became the most expressive way to program a hierarchical widget system (which most desktop systems are). Even early Windows HWND-style programming emulates virtual dispatch in non-object-oriented C (as does GObject).
In response, a vast body of knowledge and techniques around object orientation was amassed to mitigate these constraints as the ecosystem matured through the 80s and 90s: not just patterns like MVC, Flyweight, and Command, but also structural techniques like the event loop.
> Eventually, somewhere near the end of the 90s/middle of the 2000s, most of these things were "solved" for desktop/native, or put less charitably, they stopped changing.
So at the end of the 90s/middle of the 2000s, the web browser grew out of being a document viewer into a very constrained widget kit, but the programming model was not object oriented, and so the 'solved' problems needed to be solved again for a new set of constraints. Few of the traditional techniques applied, and the old object-oriented ways were either adapted (e.g. data binding became Redux, Flyweight/Prototype became templates) or discarded. Personally I find it a bit of a waste, but I don't expend any energy in the new ecosystem, so I don't want to be too judgy.
But I don't think desktop widget kits 'stopped changing'; they are cherry-picking ideas from the browser, like 'declarative' UI (XAML, QML, etc.) and 'responsive' layout (widget containers that re-flow the viewport for phone/tablet orientation, etc.). I hope I haven't misunderstood your question and wasted everyone's time.
Interesting... How do you schedule this? If the queue is empty, do you back off and retry later, or spin the query until it returns a queue item, or some other way? It's a nice approach.
Indeed, the whole press release brings to mind a couple of points from "Engineering a Safer World" which, if you're interested in this stuff, I can't recommend enough[0].
In the section "Questioning the Foundations of Traditional Safety Engineering":
Old Assumption
Most accidents are caused by operator error. Rewarding safe behaviour and punishing unsafe behaviours will eliminate or reduce accidents significantly.
New Assumption
Operator error is a product of the environment in which it occurs. To reduce operator "error" we must change the environment in which the operator works.
And:
Old Assumption
Major accidents occur from the chance simultaneous occurrence of random events.
New Assumption
Systems will tend to migrate toward states of higher risk. Such migration is predictable and can be prevented by appropriate system design or detected during operations using leading indicators of increasing risk.
In the press release we see both the "operator error" and "random events" hand-waving. Regardless of the fiduciary duty of this man, this is just not good enough.
The same quotes in an easier to read format, and I agree with them:
"Old Assumption
- Most accidents are caused by operator error. Rewarding safe behaviour and punishing unsafe behaviours will eliminate or reduce accidents significantly.
New Assumption
- Operator error is a product of the environment in which it occurs. To reduce operator "error" we must change the environment in which the operator works.
---
Old Assumption
- Major accidents occur from the chance simultaneous occurrence of random events.
New Assumption
- Systems will tend to migrate toward states of higher risk. Such migration is predictable and can be prevented by appropriate system design or detected during operations using leading indicators of increasing risk."
Just a day ago, user Gibbon1 also posted a link to a talk by the author of that book:
Thanks so much for your comment. I've just read (and played around with) your waveguide synthesis article, which led me down the rabbit hole of Sporth, ChucK, Soundpipe and now I find myself really excited looking over the contents of the "Physical Audio Signal Processing" book. I am going to learn a lot of new stuff today, thanks!
(And this is not entirely derailing the thread, since many of these papers are full of block diagrams, and a tool to generate nice DSP diagrams from text would be pretty useful)
My thoughts on the philosophical aspects of the article don't really relate to Rust.
> These values reflect a deeper sense within me: that software can be permanent
Maybe it can be permanent, but it shouldn't be. Software is disposable, and developers are forever mistakenly fighting this aspect of its nature instead of embracing it. Great software is malleable and develops, over time, to adapt to the human using it. But it is impermanent--parts are snipped off here, fleshed out there, nothing stays the same, and it is obsolete as soon as it is released. Don't delude yourself that your software will run for a thousand years; be like Warhol and celebrate the ephemera that is pretty much every program ever written.
> I have believed (and continue to believe) that we are living in a Golden Age of software, one that will produce artifacts that will endure for generations
Museum pieces, sure, but do we still want to be using generations-old software in the years to come? Hope not. Times change, needs change, and software that doesn't change is replaced by software that does, and quick. What about such monumental artifacts as 'cc', 'awk', or even the UNIX kernel? For years they have dominated the landscape, they are the Ozymandias, the King of Kings. If we are still clinging to these titans in another 20, 50 years, is that a good thing?
> do we still want to be using generations-old software in the years to come?
*BSD, vim, Emacs, Perl, C, Apache & Linux in some form, gcc, the GNU userland, and airline & bank infrastructure come to mind, and they still do the work.
There are a couple of ways to look at this. Philosophically, to pick any one of your examples, my emacs is version 26.1. Is this the same software as emacs 15.10 released April 1985? Will a perl 6 program run on perl 5? Am I the same person I was 5 years ago?
Another take: what is the ratio of the same software still in use after, say, 10 years to all software in use? I would argue that more than 99% of all software (e.g. by version number) is no longer in use after a mere 5 years. Software is inherently disposable, let's not pretend we're building bridges that will stand for generations.
My point is that developers (I'm one) have a hard time with the qualities of software: we don't understand the nature of software change, and bicker about what bumping a semver number means, and we fight its disposable nature by engineering it to the point where it could run for a decade (it won't).
I'm glad there are Warhols doing their thing but I think the world would be worse off without Michelangelo.
awk, vim, whatever DOS 3.1 GUI the guys at B&H use to fulfill my orders: why shouldn't I expect to be using these programs in 50 years? They solve a problem and work every time.
> why shouldn't I expect to be using these programs in 50 years?
No reason at all, if they still solve the same problem, they're still the best solution. The problem with 'problems' though, is that once they're cornered they tend to change into a whole new problem. A nasty one the current solution won't work for, usually.
...like all Linux installs will eventually be replaced by BSD? I don't think the big, complex picture of technology licensing can be reduced as flippantly as that.
BSD is more liberal in what it allows other developers to do. GPL is more liberal in what it guarantees the end users of the software, which is the point of the GPL. Either is more liberal than purpose-limiting licenses.
If you look at data relating to user conversion, and users staying on and revisiting websites, fast would seem to be just about everyone's favorite feature!
You're right, but there is a lot of misunderstanding around this end of the market, mainly because advances have blurred the traditional segmentation. When talking about ARM, the 'M' in Cortex-M means 'microcontroller' whereas the 'A' in Cortex-A means 'application'. Cortex-A systems are often at the centre of 'System-on-Chips' and will run linux, and indeed the NT kernel as well, as they are bundled with enough RAM and fairly modern peripheral interfaces such as HDMI.
Cortex-Ms typically can't run linux (excepting uClinux) as they don't have the RAM and typically don't need to as they address a different need (dedicated function instead of general-purpose compute), and have far fewer peripheral interfaces. It used to be all about power profile, but the recent SoCs are getting pretty competitive there as well.
But as I said the traditional segmentation at this end of the spectrum is being re-cast seemingly every second week, and so terms such as 'microcontroller' are becoming less meaningful all the time. And who the hell can agree on just what 'embedded' means these days?
"And who the hell can agree on just what 'embedded' means these days?"
Or what it'll mean in 10 years, as power/radio/processor/sensor specs continue to improve. I'm speculating, but perhaps MS is banking on the low-end to grow, up into the Android/iOS space. So instead of our current 3-10 devices per family we'll have 30-100 devices. Hopefully, those devices will be secure. Maybe they'll be useful :-)
Interesting definition choice, as what constitutes an MMU has also fuzzily shifted over the years. From what I've seen, most of the SoC designs contain what in the microcontroller world of the 90s would be considered more than a minimal MMU, take for instance the classic M68451 [1], and the multi-stage bus pipelines and super-wide buses of these 'embedded' designs easily surpass such early MMUs.
Embedded developer here, ARM Cortex-M4 microcontrollers. I've been keeping an eye on Rust for the embedded space and although there has been a lot of movement in that area--particularly in the last couple of months--I'm not sure the value proposition fits the microcontroller market, where of course C is king.
While Rust has much to offer as a programming paradigm in general, the main value is in the borrow-checker (the linked transcript cites 'memory bugs' as the most common class of bugs). Embedded software practitioners long ago abandoned dynamic memory allocation and with it the 'use-after-free' and 'out-of-bounds access' bugs, instead re-defining the problem as one of latency (e.g. you'll need to process that static buffer before it gets re-used by your UART interrupt). Take away the borrow-checker, and Rust looks less compelling.
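The "process that static buffer before the interrupt re-uses it" discipline can be sketched as a ping-pong (double) buffer. This is a simplified, host-runnable illustration, not real interrupt code: the names and the simulated ISR are invented for the example.

```rust
// Ping-pong buffer sketch: the (simulated) UART interrupt fills one half
// while the main loop processes the other. The "memory safety" question
// becomes a latency question: swap_and_take must run before the next fill.

const BUF_LEN: usize = 8;

struct PingPong {
    bufs: [[u8; BUF_LEN]; 2],
    active: usize, // half currently owned by the (simulated) interrupt
}

impl PingPong {
    fn new() -> Self {
        PingPong { bufs: [[0; BUF_LEN]; 2], active: 0 }
    }

    // Called from the interrupt: write into the active half.
    fn isr_fill(&mut self, data: &[u8; BUF_LEN]) {
        self.bufs[self.active] = *data;
    }

    // Called from the main loop: swap halves and return the one just
    // filled. The deadline is implicit: this must happen before the
    // interrupt wraps around and overwrites it.
    fn swap_and_take(&mut self) -> &[u8; BUF_LEN] {
        let filled = self.active;
        self.active ^= 1;
        &self.bufs[filled]
    }
}

fn main() {
    let mut pp = PingPong::new();
    pp.isr_fill(&[1; BUF_LEN]);
    let frame = *pp.swap_and_take(); // copy out the filled half
    assert_eq!(frame, [1; BUF_LEN]);
    pp.isr_fill(&[2; BUF_LEN]); // ISR now fills the other half
    println!("processed frame of {} bytes", frame.len());
}
```

No allocation, no use-after-free, but a buffer that is processed too late is silently overwritten, which is exactly the latency framing described above.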
In time, Rust will find its niche in the embedded space, most likely occupying the high-level RPi/BBB/iMX SoC layer and perhaps working its way down to microcontrollers. As wiremine points out, it will require vendor support--moving away from your vendor's toolchain is a world of hurt that seasoned embedded developers just won't even consider. Pragmatism reigns: time-to-market and a cheap BoM are the main metrics, programming language a distant 10th.
Also working in the same space. I had the opportunity to evaluate Rust for our development environment back in late September. The killer feature it offered us is serde, Rust's general-purpose SERializer/DEserializer library.
So much of our code is centered around taking measurements with a bare metal system and then transmitting them to a linux box for processing. Being able to just write `#[derive(Serialize, Deserialize)]` above a struct and then be able to send/receive it across the channel via `rpmsg.send(mystruct)` or `let mystruct: MyStruct = rpmsg.recv()` is magic. Furthermore, by encapsulating each possible message type as an enum variant, match statements provide a really great way for dispatching the message to the appropriate handler after we deserialize it.
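The enum-variant-plus-match dispatch shape described above can be sketched without serde or rpmsg at all. Here is a std-only illustration with a hand-rolled byte encoding standing in for the serde backend; the message types and wire format are invented for the example, not the poster's actual protocol.

```rust
// Each message type is an enum variant; after decoding, a match routes
// the message to the appropriate handler. In the real setup serde would
// derive the (de)serialization and rpmsg would carry the bytes.

#[derive(Debug, PartialEq)]
enum Message {
    Temperature(i16), // hundredths of a degree (illustrative)
    Voltage(u16),     // millivolts (illustrative)
}

impl Message {
    // Tiny hand-rolled encoding: 1 tag byte + 2 payload bytes, big-endian.
    fn encode(&self) -> [u8; 3] {
        match self {
            Message::Temperature(t) => [0, (t >> 8) as u8, *t as u8],
            Message::Voltage(v) => [1, (v >> 8) as u8, *v as u8],
        }
    }

    fn decode(buf: &[u8; 3]) -> Option<Message> {
        match buf[0] {
            0 => Some(Message::Temperature(((buf[1] as i16) << 8) | buf[2] as i16)),
            1 => Some(Message::Voltage(((buf[1] as u16) << 8) | buf[2] as u16)),
            _ => None, // unknown tag
        }
    }
}

// Dispatch on the variant; the compiler checks the match is exhaustive.
fn dispatch(msg: &Message) -> &'static str {
    match msg {
        Message::Temperature(_) => "temperature handler",
        Message::Voltage(_) => "voltage handler",
    }
}

fn main() {
    let wire = Message::Voltage(3300).encode();
    let msg = Message::decode(&wire).unwrap();
    assert_eq!(msg, Message::Voltage(3300));
    println!("{}", dispatch(&msg));
}
```

The exhaustive match is the point: adding a new message variant makes every dispatch site that forgets to handle it a compile error.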
As for the borrow checker, I actually did find it useful in bare metal. But more for handling hardware resources. Different measurements require different sets of power supplies to be activated, and exclusive control over different I/Os, etc. The ownership model made it easier to ensure statically that we sequence the measurements in a way such that the different measurement routines can never reconfigure resources that are in use by a different routine.
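The ownership-for-hardware-resources idea can be sketched with plain move semantics. The names (PowerRail, Measurement) are invented for illustration; this is not the poster's actual code.

```rust
// A measurement routine takes the rail by value, so no other routine can
// reconfigure it until the measurement gives it back. The compiler
// enforces the sequencing statically.

struct PowerRail { id: u8 }              // stands in for a power supply
struct Measurement { rail: PowerRail }   // owns the rail while active

impl Measurement {
    fn start(rail: PowerRail) -> Measurement {
        Measurement { rail }
    }

    // Consuming self ends the measurement and releases the rail.
    fn finish(self) -> (u32, PowerRail) {
        let reading = 42; // pretend we sampled something
        (reading, self.rail)
    }
}

fn main() {
    let rail = PowerRail { id: 3 };
    let m = Measurement::start(rail);
    // `rail` has moved: starting a second routine with it here would be
    // rejected at compile time, e.g.
    // let _again = Measurement::start(rail); // error[E0382]: use of moved value
    let (reading, rail) = m.finish();
    assert_eq!(reading, 42);
    println!("rail {} free again", rail.id);
}
```

The same trick the borrow checker plays on memory works for any exclusive resource: the type system, not a runtime lock, guarantees that two routines never hold the same rail.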
Anyway, we sadly aren't using Rust yet in production, even after that. Holding off until we start the next product.
What data format do you use with Serde on embedded? JSON? I read somewhere that Serde works in no-std environments, but wasn't sure whether it does that with all possible data formats.
Serde is middleware, which really just shuttles calls between a serializable object and the serializing backend in a standard (and performant!) way. That middleware works in no_std environments, but not all backends do.
I'm not up to date on which backends support no_std, and which backends support no-alloc - some backends support no_std but require an allocator. When I looked into this in Sept, ssmarshal was the only general-purpose backend I could find that supported no_std & didn't need an allocator. There was some talk of adding no_std support to bincode - looks like it hasn't gone anywhere: https://github.com/TyOverby/bincode/issues/189
My one gripe with ssmarshal is that - in Sept - it would refuse to serialize collections whose size isn't compile-time constant. Obviously, you aren't going to be serializing Vec, Map, etc, in a no_std environment. But one could very well wish to serialize stack-allocated equivalents (e.g. arrayvec, where you have a vector that stores all data on the stack and grows up to the space allocated for it). In order to serialize an arrayvec, I had to write wrapper code that serialized the entire underlying fixed-size storage, regardless of how much was actually in use.
Things move fast in rust-land - ssmarshal might have a feature that allows serializing dynamically-sized types, or there might be new/more versatile backends since Sept.
I think the most difficult thing about deserializing JSON in a no-std environment is that strings can have escape sequences. So when you deserialize a string, you can't just pass a reference into your buffer up to the frontend - you have to decode the string. Usually one would heap-allocate in the backend for that, but if you have the ability to mutate the buffer you're deserializing from, I don't see any fundamental reason why you couldn't decode the string in place and then yield it - I'm pretty sure all encoded JSON strings are at least as long as their decoded version. The easy alternative is to deserialize the string as a [Char] sequence (i.e. pass it to the frontend character by character) and let the frontend worry about memory management, which isn't even necessarily so bad, with things like ArrayString.
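The in-place decoding idea works because the read cursor always stays ahead of the write cursor. A minimal sketch, handling only the simple escapes (no \uXXXX) and using Vec here purely so it runs on a host:

```rust
// Decode JSON-style escape sequences into the same buffer. The decoded
// form is never longer than the encoded one, so writing behind the read
// position is safe; no heap allocation is needed for the result.

fn unescape_in_place(buf: &mut Vec<u8>) -> usize {
    let (mut read, mut write) = (0, 0);
    while read < buf.len() {
        let b = buf[read];
        if b == b'\\' && read + 1 < buf.len() {
            read += 1;
            buf[write] = match buf[read] {
                b'n' => b'\n',
                b't' => b'\t',
                b'r' => b'\r',
                b'"' => b'"',
                b'\\' => b'\\',
                other => other, // unknown escape: pass through
            };
        } else {
            buf[write] = b;
        }
        read += 1;
        write += 1;
    }
    buf.truncate(write);
    write // decoded length
}

fn main() {
    let mut buf = b"line1\\nline2".to_vec();
    let n = unescape_in_place(&mut buf);
    assert_eq!(&buf[..n], b"line1\nline2");
    println!("{}", String::from_utf8(buf).unwrap());
}
```

Even \uXXXX keeps the invariant: six encoded bytes decode to at most four bytes of UTF-8.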
Just that Rust is known for its borrow checker doesn't mean that the borrow checker is the only type of safety that Rust offers. The old standbys are still valuable: bounds checks, null pointers, compile-time data race detection.
> Pragmatism reigns: time-to-market and a cheap BoM are the main metrics, programming language a distant 10th.
I find that I can develop software faster with Rust than with C, simply because of language and standard library features: closures, iterators, a real string library, a vector type, hash tables in libstd, better unit testing support, etc. Development speed can affect time to market.
As the industry matures, though, reliability generally becomes more important. And "embedded" covers everything from IoT light bulbs (correctness less important…for now) to avionics (correctness extremely important).
Null pointer is actually perfectly fine on bare metal. There’s no memory protection, so it just points to address 0x0, and if you deref it nothing bad will happen.
First, of course, there is no requirement for NULL to map to address zero.
Second, even if you do end up there, many architectures don't even have memory at 0x0. Spurious writes are spurious writes regardless of whether or not you get a fault. You are still not doing what you want to be doing.
The ones I worked with did nothing when reading from 0x0. I mentioned this because for someone who spends all their time well above bare metal this is not intuitive at all. And NULL is de facto 0 on all C compilers, even though it's not required to be. So let's not engage in hyperbole here.
But it is never what you want. So even if there are no immediate explosions, your LED's not going to blink the way you expected it. I'd say a clear, stern sign that something was wrong is better than limping along after dereferencing null.
Sure. I’m just saying that null pointer deref and read is not generally a fatal operation. Most programmers expect the program to die in this case. When it doesn’t, they are surprised.
In many ways, generalized statements like this for all possible controllers and software out there are worse than understanding that accessing address 0 from high level code can have its uses and be completely correct. ;)
In a less condescending tone, if some HW designer put control structures at address 0 and they are writeable, then you have to deal with it in software. If there is no MMU that can remap that memory range, you will end up having legitimate memory accesses to that area. They can only be distinguished from accidental null pointer dereferences by context. This context would need to come from the developer by annotating the source somehow.
If the software is unintentionally reading or writing address zero, it’s by definition not functioning properly, but because of the lack of memory protection/safety this failure mode is going undetected. Rust won’t stop you from intentionally accessing 0x0.
This seems among the hardest bugs to track down I could think of, regardless of what is mapped at address zero. I don’t think it’s condescending to say software that begins operating incorrectly in an undetectable way is always bad.
Have you had to track one of these down before? I haven't, but I have had to track down silent memory corruption issues in memory-unsafe languages in the past, and it can take days on the desktop, with good tooling. I can't imagine doing it on an embedded system.
If tracking down these kinds of errors on the desktop took you days, your tooling was maybe not good enough. I honestly cannot remember an instance where valgrind completely failed me. It is kind of my gold standard for debugging memory issues.
Also, some microcontrollers have amazing debugging support these days. Instruction tracing on Cortex M devices is a great feature, for example. The CPU will log every instruction that it executes over a serial interface for the hardware debugger to store. This allows you to go back in time after the fact, something that desktop debuggers have a really hard time with.
My point is, with a language like Rust, you can pretty much throw all this away. Why put yourself through this intentionally?
I also feel you're dodging my question. A 1-in-1000 spurious write to 0x0 is something you'll have a terrible time even identifying as the cause of your failure specifically because it is completely silent. Your embedded system just happens to stop working sometimes, where do you even think to begin? Assuming you know this is why, sure, throw on a watchpoint and call it a day, but how did you connect "heater stops heating" to 1-in-1000 write to 0x0?
You don't have to worry about that with a language that wont even let you make that invalid program in the first place.
Well, this hasn't even been an issue for us in the last couple of years, even though we use controllers without MMUs. We have a quite complex C++ codebase and our coding style catches a lot of these mistakes outright.
Rust is simply not an option for us because of a distinct lack of tooling available for it. We need an IEC 61508 qualified toolchain, including testing frameworks, and there is none in sight for Rust.
Also, out of interest: has anyone ever tried to write code in Rust that is protected against bit flips caused by radiation? Our code is able to detect this because it stores long-lived values also as bit-inverted patterns and compares them regularly. This does not allow us to recover outright, but we can at least fail gracefully and attempt to reboot the device.
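The inverted-copy technique translates to Rust directly. A minimal sketch, purely illustrative of the idea (the real code would presumably also protect the check itself and integrate with a watchdog):

```rust
// A long-lived value is stored together with its bitwise complement and
// the two copies are compared before use. A single-bit flip in either
// copy makes them disagree, so the caller can fail gracefully (reboot)
// instead of limping along with corrupted state.

struct Hardened {
    value: u32,
    inverted: u32,
}

impl Hardened {
    fn new(v: u32) -> Hardened {
        Hardened { value: v, inverted: !v }
    }

    fn set(&mut self, v: u32) {
        self.value = v;
        self.inverted = !v;
    }

    // None means the copies disagree: treat it as corruption.
    fn get(&self) -> Option<u32> {
        if self.value == !self.inverted { Some(self.value) } else { None }
    }
}

fn main() {
    let mut h = Hardened::new(0xDEAD_BEEF);
    assert_eq!(h.get(), Some(0xDEAD_BEEF));
    h.value ^= 1 << 7; // simulate a single-event upset
    assert_eq!(h.get(), None);
    println!("corruption detected");
}
```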
I take it you didn't do much development on machines without memory protection. The problem isn't reading from 0x0. It's writing to 0x0 (and beyond), clobbering system memory, memory-mapped IO, or even your own application.
I learned C on an Amiga, back in the late 80's. A bad pointer typically resulted in "Guru Meditation" error (OS crash), followed by a reboot.
You might want to check out https://blog.rust-lang.org/2015/04/10/Fearless-Concurrency.h..., it goes into other types of concurrency in Rust, some probably more interesting to embedded devs than the general dynamic allocation stuff. The key takeaway is that the borrow checker isn't only good for dynamic memory allocation in GCless environments (in general even the stack allocated variables which we have in embedded profit from the borrow checker).
The other thing is that Rust offers a lot of very useful abstractions (e.g. enums with data) and a strong type system, which is sorely lacking for C.
The embedded systems industry is one of the slowest to adopt new technologies, so I'm not holding my breath, but I think everything is there in Rust to make it a good embedded language.
You don't need dynamic memory to make a ton of memory errors. There are still lots of possibilities for dangling pointers to somewhere else on the stack, memory issues with object pools, memory issues due to race conditions in multitasked RTOS systems, etc.
On top of those, there are misunderstandings about whether a char* is actually a pointer or an array, misunderstandings about who owns what, and so on.
I've seen enough of these issues in an RTOS project to believe that Rust (and even modern C++) will be a huge step up in overall quality and productivity.
It is not just the borrow checker. The ownership system / move semantics and the powerful static type system with traits and generics help to create some interesting abstractions (with very little run-time overhead). Please check http://blog.japaric.io/brave-new-io/ as well as other articles on that blog.
The borrow checker is not inherently about dynamic memory allocation. Heck, my toy x86_64 OS doesn't even have an allocator at all yet! Rust's features, including the borrow checker, are still useful here.
>As wiremine points out, it will require vendor support--moving away from your vendor's toolchain is a world of hurt that seasoned embedded developers just won't even consider.
I won't consider a chip/microcontroller if it doesn't support open-source command-line tools.
Vendor support is technological debt that hinders every bit of testing and automation.
I used to work in a space that was once considered to be embedded (POS systems, including some based on low-cost 8-bit controllers, m68k-based stuff, and low-end ARM at the end), and got out of it just before the entire thing was taken over by fast 32-bit or even 64-bit ARM CPUs, which eventually moved to running stock Linux or Android. As ARM chips become cheaper and cheaper, the low-level embedded fields will shrink further and further, and the line will shift more and more towards running a full-blown OS below the actual applications. I've even encountered a platform marketed as being "realtime" which was running Linux.
It will always be there; microcontrollers are dirt cheap and in many cases better suited to industrial environments, and realtime will always be there, but it will become more and more specialized. And as you say, once you get down to a certain level the borrow-checker is a less useful feature, so I expect it won't be such an interesting target for Rust, and this will probably remain C's stronghold.
Your compiler adding the execution of an unpredictable number of CPU instructions at certain points, which can change with new compiler versions? That sounds like a nightmare for realtime applications. Yes, I have been in situations where I had to count instructions and clock cycles to meet deadlines.
To me, the power of it isn't just in dynamic memory but also in that Rust has clear move vs ref vs bit copy vs logical copy (clone) and use-after-free protection.
As one example of the benefit of this. This makes me feel a lot more comfortable writing state machines in Rust's type system.
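A common way to exploit this is the typestate pattern: each state of the machine is its own type, and transitions consume the old state. A minimal sketch (the states and operations are invented for illustration):

```rust
// Typestate state machine: sending before connecting, or reusing a
// disconnected handle, is a compile error rather than a runtime bug.

struct Idle;
struct Connected;

impl Idle {
    // Consumes the Idle state; the only way to get a Connected handle.
    fn connect(self) -> Connected {
        Connected
    }
}

impl Connected {
    fn send(&self, byte: u8) -> u8 {
        byte // pretend we pushed it onto the wire
    }

    // Consumes the Connected state, returning to Idle.
    fn disconnect(self) -> Idle {
        Idle
    }
}

fn main() {
    let link = Idle;
    // link.send(0x42); // compile error: no method `send` on `Idle`
    let link = link.connect();
    assert_eq!(link.send(0x42), 0x42);
    let _link = link.disconnect();
    // link.send(0x42); // compile error: `link` was moved by disconnect
    println!("session complete");
}
```

Since the states are zero-sized types, all of this checking erases at compile time; the generated code carries no state flags at all.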
C is king because the industry is currently dominated by people who have been doing this since before C++ was a thing.
Additionally, most of these people are primarily electrical engineers, and don't have as strong of a background in computer science. They've been using C for decades and it does everything they want, why would they take the time to learn the boundless complexity introduced by a language that offers them (what they perceive to be) very little?
> C is king because the industry is currently dominated by people who have been doing this since before C++ was a thing.
No. If C++ were a great language those C coders would have moved over in an instant. One of the advantages of looking at C code is that you can actually figure out in your head what the assembly will look like.
They'd move over if technical considerations were the only reason why people choose programming languages. In my experience it tends to be psychological ones.
> you can actually figure out in your head what the assembly will look like
I keep hearing this, and I don't buy it. Did you know that `gcc -O3` will turn `int add(int x, int y) { return x + y; }` into an `lea` instruction? I doubt many people do.
And it's not like the compiler will magically switch to emitting different instructions if you compile the code above as C++...
Psychological ones? No, simply the lack of reliable compilers for some platforms was my problem. I developed POS applications, where C++ would have worked fine, if we'd had a decent C++ compiler on every platform we wanted to support. Some platforms used GCC, but most used proprietary compilers, where C++ support was completely absent or very sketchy. When you can't use exceptions, the memory allocator is absolute garbage and leaks on its own, and you encounter various random compiler bugs, you quickly decide to stick with plain old C. C++ in my experience was an absolute mess when it came to embedded work (note that the last embedded work I did dates back to 2006, so I'm not sure what the current situation is).
Also, C++ uses a lot more memory, which can also be a no-go when you get as little as 32kb for code+data, luckily with in-place execution.
Depends on how you use it. "If you don't use it, you don't pay for it" is the C++ philosophy. If you use it as "C with objects", it should use no more memory than C with structs. If you use it as "C with polymorphism", it should use no more memory than C with function pointers.
I was doing C++ development on MS-DOS already in the 90's.
Never cared for C beyond using it in Turbo C 2.0 for MS-DOS, and later when required to use it for university projects and client projects that explicitly required ANSI C89.
So it wasn't 64 KB, but it was perfectly usable on 640 KB systems.
The main problem has always been fighting misconceptions.
> I keep hearing this, and I don't buy it. Did you know that `gcc -O3` will turn `int add(int x, int y) { return x + y; }` into an `lea` instruction? I doubt many people do.
Uh. That's a pretty obvious one.
Sometimes using address generation ports is preferable to ALU ports.
Also 'lea' can load the result in a different register from both operands, 'add' will always need to modify a register.
People have been using 'lea' for calculations since dawn of time, for example:
    shl ebx, 6                        ; ebx = y * 64
    lea edi, [ebx*4 + ebx + 0xa0000]  ; edi = ebx*5 + 0xa0000 = y*320 + base
    add edi, eax                      ; edi += x

== y * 320 + x + framebuffer address.
This was a common way in DOS days for calculating pixel address in mode 0x13.
> One of the advantages of looking at C code is that you can actually figure out in your head what the assembly will look like.
One can do this with most C++ too. Though admittedly, non-tree virtual inheritance hierarchies, as well as member function pointers [et al] make this harder to achieve universally. I will also admit that it's easier to do with C.
If the optimizer gets its hands on either though, you may be in for a surprise no matter your choice.
I think you're not giving engineers enough credit here.
The world moved from C++ to Java on the enterprise side back in the late 1990's. Why? Java was arguably faster and easier to develop in, even though many thought (including me) that C++ was technically a better language.
Regarding Java vs C++, yes the enterprise world has adopted Java, however as someone doing consulting across Java, .NET and C++, I am really seeing it coming back since ANSI C++ has picked up steam again.
I see it in projects related to IoT, AI, VR, big data, and so on.
They are all polyglot projects with C++ plus something else, not C plus something else.
It is very hard to get a Java or Python programmer (what those AI guys want to use) to move to C, even if they HAVE to use something native. So C++ is where they end up.
This whole thread started about embedded development.
As noted, unless we are speaking about PICs with 8KB and similar, the majority of them can easily be targeted by C++, which is what Arduino and ARM mbed do.
Already in MS-DOS, on 640KB computers, using C made little sense.
When we needed performance, Assembly was the only option, because the code any compiler generated in those days was of average quality on its better days.
When performance wasn't that critical, then the improved type system, RAII, reference types, type safe encapsulations were already better than using plain C.
We even had frameworks like Turbo Vision available.
So if something like a PCW 512 didn't have issues with C++, then a modern microcontroller can also be targeted by it, except for political reasons: developers who are against anything other than C, even if their compiler nowadays happens to be written in C++ (e.g. gcc, clang, icc, vc).
Sometimes. Usually you just write "normal" C, until you realise your single `sprintf` call took 20% of your ROM size. Or until you need some interrupt handler to take no more than N cycles. You probably don't switch to assembly at that point, but you definitely start checking the compiler output and where the bytes/cycles are wasted.
Actually writing assembly is more of a last resort.
Because of cost we use very constrained microcontrollers; every byte literally counts. In the end it really matters cost wise (in mass production embedded every cent counts as well; using a high spec mcu just costs more) but we had to rewrite from C to assembly to get a few more kb for features in the flash. C++ or Rust are generally not good for the cost of materials.
Writing assembly tends to be restricted to the bits where you need it - special function prologues for interrupt handlers, requiring a particular unusual instruction for something, time-sensitive or cycle-accurate "beam racing" code.
Reading assembly is more useful, especially if your platform's debugger isn't very good.
The standard library (with all its duplicate code resulting from hardcore templating) will blow up your flash usage significantly, to the point where you will run out of it sooner than you expect. You will spend time finding alternative standard libraries that are size-optimized, and you might end up rewriting a lot of what you take for granted in your daily C++ usage. For example, the Arduino environment is C++-based, but it's nothing like the desktop because it doesn't ship a std::.
Your typical heap-happy usage will not go down well on a microcontroller, either. Having very constrained RAM makes heap fragmentation much more of an issue.
Then don't use the standard library. Don't even link it in.
> Your typical heap-happy usage
Huh? 1990s C++ was typically heap-happy, which is part of the reason Java looks the way it does. Idiomatic modern C++ uses the stack as much as possible. And one can use custom allocators.
A lot of people do. But they might as well not; they tend to write C-style C++.
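The stack-first style mentioned above often shows up as fixed-capacity containers: the storage lives inline, on the stack or in .bss, so there's no heap and no fragmentation. A minimal sketch of the idea (real projects might reach for something like ETL; this is just illustrative):

```cpp
#include <cassert>
#include <cstddef>

// Fixed-capacity vector: storage is inline, so it can live on the stack
// or in static memory -- no heap, no fragmentation, and the capacity is
// a compile-time constant you can budget for.
template <typename T, std::size_t N>
class FixedVector {
    T data_[N];
    std::size_t size_ = 0;
public:
    bool push_back(const T &v) {       // returns false instead of allocating
        if (size_ == N) return false;
        data_[size_++] = v;
        return true;
    }
    T &operator[](std::size_t i) { return data_[i]; }
    std::size_t size() const { return size_; }
    static constexpr std::size_t capacity() { return N; }
};
```

The failure mode is explicit (a `false` return) rather than a hidden reallocation, which is exactly what you want when RAM is counted in kilobytes.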
Simply put, because you can't use the STL, or a lot of other C++ features, or only with a lot of consideration.
A whole swathe of the embedded world still cares about program size in bytes. There are some that don't, but they tend to be using Linux, and are at a higher abstraction level than many others in the industry. (Industry is kinda divided in half. Those who use tiny Linux machines, and those working with microcontrollers. It's a generalization, but generally fits.)
The stuff I work on day-to-day, usually has between 1-4kb for dynamic memory, and 8-16kb for the compiled program. That line is also usually a bit blurry, and you can move things between both at runtime, but at various costs.
With C++, you get tempted to use stuff like vector, which can blow your memory budget.
I generally work with C++, but it looks like C. I get a few things, like implicit pointers, for free, but generally still end up making most things explicit.
But, unlike twenty years ago, I no longer have to dive into assembly unless the project is pushing its limits. The compiler tends to be "good enough".
It really depends how you use it. If you are approaching from the standpoint of "I am writing code on a microcontroller", which means no exceptions, no static initializers, probably no templates, definitely no rtti, then it will be all okay. If you approach it from the standpoint of "I'm just programming, how hard could it be? Let's just use std::", you're going to have a very bad time and very quickly.
C++, when used in that way, tends to do a lot of things behind the scenes. This is perfectly okay in a place where you have an operating system and a linker that have your back. On an embedded system none of this is guaranteed.
For you, yes, since you know how they are implemented at the linker level. But get a few junior devs on your team, and you'll be wondering why hundreds of thousands of cycles run before main is called, or why some driver code is entered before it is initialized. Someone decided to make a static singleton object for a driver and called a driver method in its constructor, which runs before main(), not realizing how this stuff really works underneath.
So, C++ can be a wonderful tool in proper hands, but it is much easier to misuse than C in an embedded context.
Note: static initializers do work in embedded. With gcc you just need to execute the functions between __init_array_start and __init_array_end somewhere in your program before using any static object.
I'm well aware of that. But a giant array of static constructors makes it hard to reason about when any piece of hardware is accessed, since a lot of stuff happens before main(). Especially for people who don't play with the insides of linkers for fun on weekends.
Sorry, that sounded like I was scorching you; I didn't mean to. Yes, I agree: when using hardware-initializing code in constructors, I ended up with two kinds of 'constructors/inits': one for object initialization, and one just for the hardware, called manually :/.
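The two-phase pattern described above can be sketched: the C++ constructor only records plain object state (safe even if it runs before main as a static), while hardware touching happens in an explicit init() you call once clocks and pins are ready. Here a plain variable stands in for a real memory-mapped register, and the "enable" bit layout is hypothetical:

```cpp
#include <cassert>
#include <cstdint>

// Simulated peripheral register; on a real MCU this would be a volatile
// pointer to a memory-mapped address.
static uint32_t fake_uart_ctrl = 0;

class Uart {
    uint32_t baud_;
    bool initialized_ = false;
public:
    // Phase 1: constructor only stores configuration -- no hardware access,
    // so it is harmless even when run from the static init array.
    explicit Uart(uint32_t baud) : baud_(baud) {}

    // Phase 2: explicit hardware init, called manually once the system
    // (clocks, pin muxing) is actually ready.
    void init() {
        fake_uart_ctrl = baud_ | 0x80000000u;  // hypothetical enable bit
        initialized_ = true;
    }
    bool ready() const { return initialized_; }
};
```

The discipline is simple to audit: constructors never touch registers, and grep-ing for `.init()` calls tells you exactly when each peripheral comes up.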
Nothing. I've worked on Cortex-M4 projects in C++. It's nice in many ways. The people working on the project had a much more diverse background than the typical EE who learned C as an undergrad mentioned in another thread.
It's more difficult. I used C++ in many projects for years and enjoyed working with it. You have to be very disciplined, though, and you need to know a lot about how it works under the hood to get it to play nicely, but it has its niche. Embedded is not its niche IMHO (and I've also done embedded, where we specifically chose to use C over C++).
Really, whenever you are doing any kind of embedded or real time project you are basically doing "resource limited development". The resource can be memory, I/O, CPU or any and all of that (or more). You need to be able to control exactly how it's used. C++ is often used to abstract you away from those things -- which is exactly the opposite of what you want.
C is a high-ish level language that is close enough to the metal that you can fairly easily understand the implications of what you are doing. C++ is not, and it's incredibly easy to build a monstrosity that chews memory -- not just working memory, but application size too. Even dealing with name mangling is a surprising PITA when you are dealing with embedded -- remember, embedded means you often have to build your own tools because nobody else is using your platform ;-).
Like I said, I actually like C++ (or at least C++ of a couple of decades ago -- the language seems to have changed a lot since I last used it, so it's hard for me to say). There are a lot of times where I simply don't care about controlling resources to that level. These days there are a lot of other choices and I'm not sure that I would ever choose C++ for a project again, but definitely back in the day it was something I reached for quite a bit.
WRT Rust, I agree with the OP that the borrow checker is really nice. I recently spent some time playing with Rust to see how easy it was to implement higher level abstractions. One of the things I was really impressed with was how hard Rust slaps you when you try to do something that would explode your memory footprint. It still feels a bit immature to me, but it has tremendous promise (and if you don't mind working around the immaturity, it's probably fine to use at the moment).
About 15 years ago I did some PIC16 programming immediately after a lot of C++, so I tried working in a C++ style.
The first obstacle was that there was no C++ compiler.
So I wrote some very C++ style C: nice little structs with associated functions for manipulating them, which took a pointer to struct as first argument.
The code did not fit in the PIC.
It turns out that the PIC16 lacks certain indirect addressing modes, so every access to a structure member from a pointer turns into a long sequence of instructions to do the arithmetic.
Oh, and this particular chip only allows a maximum stack depth of 8, so you have to ration your use of utility functions. The compiler is bad at inlining, so macros are preferable.
By the time I had finished it was an extremely C program with no trace of C++ style at all.
The situation has got a lot better, but there are still limitations which will trip up the unwary. And one day someone's going to point out that they can save $0.50 on each of a million devices by using one of these tiny chips with no indirect addressing and a limited stack.
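The 8-level hardware stack constraint above is why small helpers become macros on such parts: every function call pushes a return address onto the hardware stack, while a macro expands in place and costs nothing. A contrived sketch of the trade-off (not real PIC code):

```cpp
#include <cassert>
#include <cstdint>

// As a function: each call consumes one of the chip's 8 hardware stack
// slots unless the compiler inlines it -- which, per the story above,
// PIC16 compilers often didn't.
static uint8_t clamp_fn(uint8_t v, uint8_t max) {
    return v > max ? max : v;
}

// As a macro: expands in place, guaranteed to cost no stack slot, at the
// price of the usual macro pitfalls (argument re-evaluation, no types).
#define CLAMP(v, max) ((v) > (max) ? (max) : (v))
```

Both compute the same thing; the difference only matters when call depth itself is a scarce hardware resource.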
I find it easier to use C and assembly on very constrained devices because I know what the output will be; with C++ it is less clear. If you write code that has to fit in 24kb, you need to think about what every instruction looks like after compilation, and that is far easier with C and (obviously) asm in my experience.
I’m not an embedded developer, but my guess is that if they’re not even using dynamic memory, I doubt they need or want anything that C++ has to offer.
That's more a matter of experience and attitude -- even simple things like reference types are nice. Also, templates offer a lot of abstraction power that can be used to model the hardware nicely, without sacrificing efficiency.
Many embedded programmers come from a background that doesn't expose them to those sorts of ideas though.
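The "templates can model the hardware nicely" point above is often illustrated with a pin or register type parameterized at compile time: the bit positions are baked in, with zero runtime overhead. In this sketch a plain variable stands in for a real volatile memory-mapped register so it runs on a host, and the `Led` pin mapping is made up:

```cpp
#include <cassert>
#include <cstdint>

// Stand-in for a memory-mapped GPIO output register; on real hardware
// this would be something like *(volatile uint32_t *)0x40020014.
static uint32_t fake_gpio_out = 0;

// The pin number is a template parameter: masks are compile-time
// constants, out-of-range pins fail to compile, and there is no
// per-object storage at all.
template <uint8_t Pin>
struct GpioPin {
    static_assert(Pin < 32, "pin out of range");
    static void set()    { fake_gpio_out |=  (1u << Pin); }
    static void clear()  { fake_gpio_out &= ~(1u << Pin); }
    static bool is_set() { return (fake_gpio_out >> Pin) & 1u; }
};

using Led = GpioPin<5>;   // hypothetical board mapping
```

Compared with passing pin numbers around at runtime, this typically compiles down to the same single read-modify-write a hand-written C macro would produce.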
Can confirm, C++ has some nice features which I'd like even without malloc. OTOH I'm already horrified at the code quality problems pretty much every embedded shop faces. C++ would only make matters worse.
As a software inclined embedded guy I also often think of what would be possible if we switched to C++. But then I think of what's probable.
In general, tools for constraining complexity, like namespaces and encapsulation, are a lot less important at the project sizes you typically see in embedded systems. For the rest, they mostly provide some benefit, but that's offset by the danger of moving to a much, much larger and more complicated language in an environment where many people writing code are primarily EEs, and where most C++ answers they Google will provide solutions inappropriate for embedded development. And I say this as someone who was one of those EEs when he started out.
RAII is absolutely killer. Managing real-time priorities with lock_guards, so that you can never forget to drop priority, will win over almost any grizzled old firmware engineer.
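The lock_guard idea above generalizes: wrap "raise priority" in a scope object whose destructor restores the old priority, so no exit path -- return, break, early error-out -- can forget to drop it. A sketch with a stand-in for the scheduler call (a real RTOS would use its own task-priority API here):

```cpp
#include <cassert>

// Stand-in for an RTOS priority call; real code would invoke the
// scheduler (e.g. a set-task-priority service) here.
static int current_priority = 1;
static void set_priority(int p) { current_priority = p; }

// RAII guard: raises priority on construction, restores the saved value
// on destruction, on every exit path out of the scope.
class PriorityGuard {
    int saved_;
public:
    explicit PriorityGuard(int raised) : saved_(current_priority) {
        set_priority(raised);
    }
    ~PriorityGuard() { set_priority(saved_); }
    PriorityGuard(const PriorityGuard &) = delete;
    PriorityGuard &operator=(const PriorityGuard &) = delete;
};
```

The C equivalent needs a matching "restore" call on every return path, which is exactly the bug class this eliminates.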
Many people need in-memory databases and linked lists (or binary trees) to hash information and sort it.
There's a case to be made for an OO style program where I create an object and give it a chunk of memory to manage a B-tree, so I can keep my memory from being fragmented, but using C++ for that is serious overkill.
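The "give it a chunk of memory to manage" approach above is usually a fixed-block pool: carve a static arena into equal-size blocks chained into a free list, so allocation is O(1) and fragmentation is impossible. A minimal sketch in the C-style spirit the comment describes (sizes are arbitrary):

```cpp
#include <cassert>
#include <cstddef>

// Fixed-block pool: every block is the same size, so freed blocks can
// always satisfy the next request -- no fragmentation by construction.
#define BLOCK_SIZE  32
#define BLOCK_COUNT 8

alignas(void *) static unsigned char arena[BLOCK_SIZE * BLOCK_COUNT];
static void *free_list = NULL;

static void pool_init(void) {
    free_list = NULL;
    for (int i = BLOCK_COUNT - 1; i >= 0; --i) {
        void *block = arena + i * BLOCK_SIZE;
        *(void **)block = free_list;   // next pointer lives in the block
        free_list = block;
    }
}

static void *pool_alloc(void) {
    if (!free_list) return NULL;       // exhausted: fail explicitly
    void *block = free_list;
    free_list = *(void **)block;
    return block;
}

static void pool_free(void *block) {
    *(void **)block = free_list;
    free_list = block;
}
```

This is the kind of thing the comment means: you don't need C++ for it, just a buffer and a handful of pointer operations.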
I'm not sure how much water his argument here holds anymore. C++ has changed A LOT since 2007; idiomatic C++11 is an extremely different language from C++03, and C++20 is almost unrecognizable to an early C++ developer.
I was going to honor my bet here, until I did a little googling and found this during the discussion of when they ported subsurface to Qt.
> A word of warning: Linus has very strong feelings about all the things that are wrong with C++ and at times has been known to be less diplomatic than me when explaining his point of view... :-)
> But he made a clear statement that he is interested in seeing this port happening, as long as most of the program logic that is not UI code stays in (quote) "sane C files". So please keep that in mind as we drive this further.