Although he's trying to avoid using floating point, the dirty secret of many Microsoft-derived BASICs, including Commodore's, is that everything is floating point. Even if you explicitly declare a variable as an integer, its value just gets truncated on store and expanded again on use: the native format for calculations is still the 40-bit MBF float. The only advantage integer variables have is smaller array storage. Every variable in his program is internally handled as a floating-point value even though the values are all integral.
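A minimal Python sketch of that behaviour (illustrative only: the function names are mine, and this is an approximation of what the interpreter does, not a transcription of the ROM):

```python
# Sketch of how a Microsoft-derived BASIC treats an "integer" variable.
# Declaring A% only changes *storage*: the value is truncated to a
# 16-bit signed int when stored, then expanded back to the (40-bit MBF)
# float format before any arithmetic happens.

def store_int(x):
    """Store into an A%-style variable: truncate and range-check.
    (Python's int() truncates toward zero; the real ROM's handling
    of negatives may differ -- this is only an illustration.)"""
    i = int(x)
    if not -32768 <= i <= 32767:
        raise OverflowError("?ILLEGAL QUANTITY  ERROR")
    return i

def as_float(i):
    """Every calculation converts the stored integer back to a float."""
    return float(i)

a = store_int(7.9)        # A% = 7.9 -> stored as 7
b = as_float(a) / 2       # A% / 2  -> computed in floating point: 3.5
```

So the only thing the % buys you is the two-byte storage; the arithmetic path is identical.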
I also recently bought a Honda hybrid. I turned off as many of the data sharing features as I could from the first day I drove it. They don't make it easy, of course.
Modern flash carts like EasyFlash and clones allow for absolutely cavernous cartridge images. As good examples, see the C64 ports of Prince of Persia and Eye of the Beholder, which run entirely from massive cartridge ROMs.
As always in the demoscene, we're talking about limits we impose on ourselves. If the contest is "64K game", this probably won't fit - but I'm not sure. Hence my question.
Of course everything can be put on a cartridge (fast) or a diskette (slow loading). If they decide on a cartridge - correct me if I'm wrong - it won't work on emulators, right? The characters and animations must fit in memory too. There are so many technical barriers to sort out aside from the backgrounds. That's all I'm wondering about.
Most emulators support .crt images, including large ones like these, so if this is their chosen distribution format they should work just fine on an emulator. They would also be okay on systems like the Ultimate 64, or real machines with EasyFlash or a 1541-Ultimate (which I use with my 128DCR).
You can do amazing things with only a single SID channel. One of the most impressive examples is the in-game music of Hawkeye [1], which leaves the remaining two channels free for sound effects.
OK, I'll bite. If this is a truly competitive core - I don't claim enough personal expertise to judge - does anyone fab and sell it? There should be a business case if it is.
As a counterexample, I point to another relatively boring RISC, PA-RISC. It took off not (just) because the architecture was straightforward, but because HP poured cash into making it fast, and PA-RISC continued to be a very competitive architecture until the mass insanity of Itanic arrived. I don't see RISC-V vendors making that level of investment, either because they won't (selling to cheap markets) or can't (no capacity or funding), and a cynical take would say they hide their cores behind NDAs so no one can look behind the curtain.
I know this is a very negative take. I don't try to hide my pro-Power ISA bias, but that doesn't mean I wouldn't like another choice. So far, however, I've been repeatedly disappointed by RISC-V. It's always "five or six years" from getting there.
I would not call PA-RISC boring. Even at launch there was no doubt that it was a better ISA than SPARC or MIPS, and it was improved later. At the time PA-RISC 2.0 was replaced by Itanium, it was not at all clear which of the two ISAs was better. The later failures to design high-performance Itanium CPUs make it plausible that if HP had kept PA-RISC 2.0, they might have had more competitive CPUs than they did with Itanium.
SPARC (a descendant of Berkeley RISC) and MIPS were pioneers that experimented with various features, or the lack of them, but they were inferior in many respects to the earlier IBM 801.
The RISC ISAs developed later, including ARM, HP PA-RISC and IBM POWER, have avoided some of the mistakes of SPARC and MIPS, while also taking some features from IBM 801 (e.g. its addressing modes), so they were better.
ISAs fail to gain traction when the sufficiently smart compilers don't eventuate.
x86-64 is a dog's breakfast of features. But due to its widespread use, compiler writers make the effort to build compilers that optimize around its quirks.
Itanium's hardware designers expected compiler writers to cater to its unique design. But Intel is a semiconductor company: as good as some of its compilers are, internally it invested more in its biggest seller, and Itanium never got the level of compiler support anticipated at the outset.
I am a firm believer that if AMD hadn't been in a position to come up with the AMD64 architecture, those Itanium issues would eventually have been sorted out: Windows XP was already there, and there was no other way forward to 64 bits.
No compiler has ever managed static scheduling of general-purpose instructions that holds up over the long term.
Every CPU changes the cycles it takes for many instructions, adds new instructions etc.
Out-of-order execution is a huge dividing line in performance for a reason: the CPU itself has to figure these things out at run time to hide memory and cache latency, keep the pipeline full, drive the prefetchers, and all that stuff.
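The point about latencies shifting between generations can be sketched with a toy in-order pipeline model (everything here - the latency numbers, the instruction format, the single-issue model - is invented for illustration):

```python
# Toy single-issue, in-order pipeline: each instruction is
# (dest_register, source_registers, op). An instruction can't start
# until all its source operands are ready; op latencies vary by
# "CPU model", which is exactly what breaks static scheduling.
# All numbers here are made up.

def run_in_order(program, latencies):
    ready = {}                      # register -> cycle its value is available
    issue = 0                       # next cycle an instruction may issue
    for dest, srcs, op in program:
        start = max([issue] + [ready[r] for r in srcs])  # stall on operands
        ready[dest] = start + latencies[op]
        issue = start + 1           # one instruction issues per cycle
    return max(ready.values())      # cycle the last result becomes available

# A schedule a compiler might emit assuming loads take 2 cycles,
# hoisting an independent load to hide the latency:
program = [
    ("r1", [], "load"),
    ("r2", [], "load"),
    ("r4", [], "load"),             # independent load, hoisted
    ("r3", ["r1", "r2"], "add"),
    ("r5", ["r3", "r4"], "add"),
]

old_cpu = {"load": 2, "add": 1}     # the chip the compiler was tuned for
new_cpu = {"load": 6, "add": 1}     # a later chip, relatively slower loads

print(run_in_order(program, old_cpu))  # 5 cycles: the schedule hides the loads
print(run_in_order(program, new_cpu))  # 9 cycles: the same code now stalls
```

The same binary goes from zero stalls to stalling badly just because one latency changed. An out-of-order core effectively rediscovers the schedule at run time from the latencies it actually observes, so the binary doesn't have to be retuned for every generation.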
I never said that. I said I'm a firm believer that Itanium would have prevailed had AMD not been able to push its AMD64 alternative.
Maybe compilers would have gotten better, maybe Itanium would have needed some redesign; after all, it isn't as if Raptor Lake Refresh's execution units are the same as a Xeon Nocona's, yet both execute x64 instructions.
I don't know anything about Itanium in particular, but AMD's NPU uses a VLIW architecture and they had to break backwards compatibility in the ISA for the second generation NPU (XDNA2) to get better performance.
I mean "boring" in the sense that its ISA was relatively straightforward, no performance-entangling kinks like delay slots, a good set of typical non-windowed GPRs, no wild or exotic operations. And POWER/PowerPC and PA-RISC weren't a lot later than SPARC or MIPS, either.