"The bug was easily fixed by upgrading the build server, but in the end we decided to leave assertions enabled even for live builds. The anticipated cost-savings in CPU utilization (or more correctly, the anticipated savings from being able to purchase fewer computers in the future) were lost due to the programming effort required to identify the bug, so we felt it better to avoid similar issues in future."
I'd like to ask: for anyone here who's ever worked on a large C++ codebase, were assertions ever actually observed to be a noticeable detriment to performance? I'm sort of naively assuming that a good branch predictor would make their impact negligible, but I ain't exactly a system programmer.
My primary experience with C++ involves game development, so my perspective is a bit skewed. The cheaper chipsets in consoles tend to have weaker branch prediction, so any branching can be a big hit in aggregate. And everything is in aggregate, because all your code is running in a tight frame loop at 30 or 60 Hz.
That said, the typical large C++ codebase is probably losing a lot more performance to bad algorithms than it is to problems that generally are only measurable in micro-benchmarks. There's just something about C++ that makes a lot of people obsess over performance to a degree that doesn't even affect the mindset of most C hackers. And because your brain is so preoccupied with performance in the small, you often miss opportunities for performance in the large.
Unless, of course, you're a AAA game, in which case you're fine tuning at the individual instruction and cache line levels for your most inner loops. I'm sure there are other, similar use cases for C++, but desktop software probably isn't on that list outside of a key component or two.
> Unless, of course, you're a AAA game, in which case you're fine tuning at the individual instruction and cache line levels for your most inner loops.
I think this is the most important part:
Those micro-optimizations have exactly one place: the innermost loops. Nowhere else! At larger scale, better algorithms and code readability provide more performance than micro-optimizations ever could.
> There's just something about C++ that makes a lot of
> people obsess over performance to a degree that doesn't
> even affect the mindset of most C hackers.
Interesting observation. My first guess is that it is a lot easier in C++ to hide a stupid bottleneck, for example passing an object by value into a function (which invokes the copy constructor). So in the experience of a C++ dev, there are low-hanging fruits. In pure C, on the other hand, it is a lot harder to hide this type of complexity, and therefore C optimizations tend to be a lot more subtle.
It is also often pretty evident that C++ induces people to obsess over performance bottlenecks that were relevant on 80's hardware, and while doing so introduce other (often more severe) bottlenecks relevant to modern CPUs. See for example C++ developers' affinity for inline functions and templates expanding into huge amounts of inlined code; another common belief is that there is a profound performance difference between virtual and non-virtual methods.
Do you have evidence that there is not a profound performance difference between virtual and non-virtual methods? A virtual call requires two dependent memory lookups (the vtable address, then the function address) and hence often two cache misses, compared to a non-virtual call which resolves to a static address. If you've got a loop like this (common in games):
for ( ..some list of 5000 objects.. ) {
    object.update();
}
Then those cache misses will add up. That is my experience anyway, though I don't have statistics to back it up.
Presumably it depends on the complexity of the logic that has to be evaluated to determine whether the assertion passes.
I've seen math libraries where, for example, the invert-matrix function finishes with an assertion that the input matrix multiplied with the output matrix is the identity matrix. That's a reasonable enough test, but it means when you enable assertions you see a major performance hit.
I have little experience in performance-critical code. However (in non-performance-critical code), many of the assertions I have written are not simple equality tests, but rather depend on the outcome of one or two method calls, which may require a non-trivial amount of CPU.
If the number of cases is small and can be completely enumerated, sure. But there are plenty of algorithms out there where the number of cases is effectively infinite -- the matrix inversion function someone else mentioned is a great example -- where it is prudent to routinely check results to make sure the algorithm is working. (Mind you, also having a unit test to make sure it works on a small set of carefully chosen cases is a great idea.)
Sure, but isn't it awesome if, instead of being just one unit test, you can have some critical checks running in every unit test (and when running the code as a whole)?
However, if you're going this route, it might be worth having a second assert macro that can allow you more fine-grained control of enabling/disabling fast asserts vs slow asserts.
In Guild Wars' case, another reason might have been that you should not leave too much debug info in your release build, to make it harder to reverse engineer and write cheats/bots.
It depends where they are. Some games use asserts at a very low level to check things like ensuring that vectors or matrices are valid (satisfy some expected properties) after every calculation. This is very useful for bug fixing, but it means you suddenly have asserts in the most performance-intensive of inner loops.
Now, those particular asserts often aren't turned on in the default debug build, but turning them on will have a significant effect on performance.