
Z80s are still available new and in IP core form for integration into SoCs... as are the 6502, the 8051, and several other "classic" MCUs designed in the late 70s/early 80s.

As I'm typing this my keyboard's controller is an 8051 variant, the touchscreen of my phone also uses an 8051, the mouse has a 6502, and the monitors in front of me have an 80186 core in them.

They are fast enough for the task, they're cheap, and development tools are widely available, so they really don't need to be replaced.



Interesting considering the article: it was a defining part of Commodore hardware design that they compensated for slow CPUs by using co-processors all over the place, including putting a 6502-compatible CPU in the keyboard controller for the A500 and A2000...

(But you'd also find this spilling over into third-party hardware: my HD controller for my Amiga 2000 back in the day had a Z80 on it.

That machine was totally schizophrenic: in addition to the 6502 core on the keyboard, the M68k main CPU, and the Z80 on the HD controller, it also had an x86 on a bridge board - the A2000 had both Amiga-style Zorro slots and ISA slots, including one position where both connectors were in line, so you could slot in a "bridge board" with an 8088 that effectively gave you a PC inside your Amiga, with the display available in a window on the Amiga desktop).


It's kind of mind blowing that our cheap peripherals are driven by what used to be top-of-the-line processors only a few decades ago. I guess all that firmware has to run on something.


As I mentioned in another comment, the ca. 1987 Amiga 2000 in this article already had a 6502-compatible core in its keyboard controller, and some HD controllers of the same era had Z80s on them - they were cheap even then.


Do you think it's better to teach university students on new processors and tools (e.g. Freescale, ARM, etc.) or on older CPUs like the Z80 or 80186?


I think university students should definitely start with older processors and then gradually move up through the levels. I agree there is an architectural change in the newer processors, plus the additional cores. But working with an older processor with limited memory and processing power makes the programmer realize how important each line of code is, and appreciate the comfort provided by newer processors - and thus their complexity.


The first computer I programmed was a Z80 micro-controller connected to some basic peripherals (LED readout, sensors, actuators, stepper motors, potentiometers, etc...). There was no compiler, no assembler; nothing but a keypad to enter the instructions into memory and a button to start execution.

The CPU was less powerful than any of the 32-bit x86 chips that were widely available at the time, but as a kid it still really gave me the idea that whatever I could think of, I could make a computer do.
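For a sense of what that kind of entry involved, here is a minimal sketch of the sort of program you might key in by hand on such a board - raw Z80 opcode bytes, annotated here as a C array. The output port number and the value written are made up for illustration, not taken from any particular trainer board:

    /* Five bytes of hand-assembled Z80 machine code, as you might key
       them into a trainer board's monitor. Port 0x01 is hypothetical. */
    #include <stdio.h>

    static const unsigned char program[] = {
        0x3E, 0x2A,   /* LD A, 0x2A    ; load 42 into the accumulator */
        0xD3, 0x01,   /* OUT (0x01), A ; write it to output port 1    */
        0x76          /* HALT          ; stop the CPU                 */
    };

    int main(void) {
        /* On the trainer you'd key these bytes into memory and hit
           "go"; here we just dump them. */
        for (size_t i = 0; i < sizeof program; i++)
            printf("%02X ", program[i]);
        printf("\n");
        return 0;
    }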

I'd agree - understanding things at a really basic level first helped me to better understand things at a higher level later on. It also helps me keep in mind what a computer actually needs to do to run code. I suspect that's one of the reasons Knuth uses MIX in TAOCP.


Kind of a "which students" sort of question.

I'd say with the older ones. With those, you can put a logic analyzer on the memory bus and see what's going on - something you can't do on a modern part where the pins are hidden under a BGA and the board exposes no vias to probe.


Working on older CPUs makes it easier to understand all the low-level details, and it makes you appreciate all that newer CPUs offer. When actually working, though, I don't think one should use an older CPU unless it really makes sense (sufficient compute power, low power requirements, etc.). Working with a powerful CPU lets you focus on the job at hand instead of the idiosyncrasies.


I don't think this is true at all. Older CPUs are not a "more purified" and "cleaner" version of today's; they have the same, and often considerably more, cruft and craziness.

To work with them is to teach bad habits and useless skills.


Some older CPUs maybe, but you can't seriously look at e.g. the 68000 next to an x86 CPU and tell me the 68000 is not cleaner.

It's not that they don't have craziness, it's that the functionality that mere mortals need to use to write efficient code is simpler.

The M68k's 8 general-purpose data registers and 8 general-purpose address registers alone are enough to make a huge difference.

For me, moving to an x86 machine was what made me give up assembler - in disgust - and it is something I've heard many times over the years: it takes a special kind of masochist to program x86 assembly; for a lot of people who grew up with other architectures, it's one step too far into insanity.


I have the pleasure of working with PowerPC in my day job. Also a relatively clean architecture. I really do wish that Apple had been more successful with it, that Microsoft would have continued supporting it in NT, that Motorola / IBM had kept up with Intel in raw performance, and that it had a larger user base than it does today.


Not to mention the m68k flat address space. A clean architecture for clean code.


Just look at the 6502. No two instructions follow the same pattern - every one is a moss-covered three-handled family credenza, to quote the good Doctor.


The 6502's instruction set is pretty regular, with most instructions of the form aaabbbcc. For instance, if cc==01, aaa specifies the arithmetic operation and bbb specifies the addressing mode. Likewise with cc==10, aaa specifies the operation and bbb the address mode. See http://www.llx.com/~nparker/a2/opcodes.html

The regularity of the 6502's instruction set is partially a consequence of using a PLA for instruction decoding. If you can decode a bunch of instructions with a simple bit pattern, it saves space.
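To make that concrete, here's a minimal sketch of group-one decoding in C, with the operation and addressing-mode tables taken from the llx.com page linked above (the example opcode is arbitrary):

    /* Decode the aaabbbcc fields of a 6502 opcode byte, for the
       cc == 01 ("group one") arithmetic/load/store instructions. */
    #include <stdio.h>

    static const char *ops[8] = {      /* aaa: operation */
        "ORA", "AND", "EOR", "ADC", "STA", "LDA", "CMP", "SBC"
    };
    static const char *modes[8] = {    /* bbb: addressing mode */
        "(zp,X)", "zp", "#imm", "abs", "(zp),Y", "zp,X", "abs,Y", "abs,X"
    };

    int main(void) {
        unsigned char opcode = 0xAD;      /* LDA abs */
        unsigned aaa = (opcode >> 5) & 7; /* bits 7-5: operation */
        unsigned bbb = (opcode >> 2) & 7; /* bits 4-2: addr mode */
        unsigned cc  = opcode & 3;        /* bits 1-0: group     */
        if (cc == 1)
            printf("%s %s\n", ops[aaa], modes[bbb]);
        return 0;
    }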


After the arithmetic group, instructions have little or no regularity: they omit addressing modes and swap encodings between modes. There are internal hardware reasons for this, but for the programmer it's chaotic.


Not that it's in the least bit relevant to the discussion, but the moss-covered three-handled family credenza is not a Dr. Seuss quote found anywhere in his books - it came from the '70s-era 'Cat in the Hat' TV adaptation, authored by Chuck Jones.


Cool! I never knew. I guess it shouldn't be considered 'canon' then.


That's just not true. It has irregularities, but most of the instructions fit into a small set of groups that follow very simple patterns.

But secondly, where the 6502 deviates from a tiny set of regular patterns, it is largely by omitting specific forms of instructions - either because the variation would make no sense, or to save space. The beauty of the 6502 is how simple it is:

You can fit the 6502 instruction set on a single sheet of paper with sufficient detail that someone with some asm exposure could understand most of it.


The x86 family is the same.


Oh, there is quite a lot of consistency in the structure of instructions across the basic set - regular register numbering, and many instructions accept the full range of registers and addressing modes. The 6502 had pretty much no two instructions the same.
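As a sketch of that regularity: most x86 instructions that reference memory or registers share the ModRM byte, whose mod/reg/rm fields are laid out the same way regardless of the instruction. The register set shown and the example encoding are my own illustration:

    /* Extract the mod/reg/rm fields of an x86 ModRM byte. The layout
       is identical for every instruction that takes a ModRM byte. */
    #include <stdio.h>

    static const char *reg32[8] = {
        "eax", "ecx", "edx", "ebx", "esp", "ebp", "esi", "edi"
    };

    int main(void) {
        /* 01 D8 encodes "add eax, ebx" (ADD r/m32, r32). */
        unsigned char modrm = 0xD8;
        unsigned mod = (modrm >> 6) & 3; /* 3 = register-direct  */
        unsigned reg = (modrm >> 3) & 7; /* source register      */
        unsigned rm  = modrm & 7;        /* destination register */
        if (mod == 3)
            printf("add %s, %s\n", reg32[rm], reg32[reg]);
        return 0;
    }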


What mouse uses a 6502?


This one:

http://www.mcuic.com/bookpic/200811516244620817.pdf

(Look at page 9. This IC is found in a lot of generic mice.)


TI runs all its low-level calculator stuff on Z80 emulators which are then helpfully run by whatever actual chip they are putting in the calculators these days.


Nope; with one exception, the Z80-family calculators are still run by real, bona-fide Z80s (or, in the case of the new TI-84 Plus CE, an eZ80).

(That "one exception" was the TI-84 Keyboard for the original Nspire, which did run the 84's firmware in a Z80 emulator on the Nspire's ARM processor.)



