I'd be OK with it if it's possible to use all the insights and software from after that; we'd just be constrained by the hardware - the Amiga 4000, an Apple Mac Quadra 840AV with 128 MB, a 66 MHz Pentium, or a 486 DX2-66 with 64 MB of RAM.
Can't we achieve something like this now with microcontrollers like ESP32 or RP2040?
This project runs a ca. 1990 scientific workstation (not just a PC) on an RP2040:
How did you use Claude? How much guidance did you give it? Did you just write CLAUDE.md and stand back, Claude did all the rest? Or did you write some assembly language samples - like object.inc etc. - to get it pointed in the direction you wanted it to go? Did you review and revise the code that Claude wrote?
Have you benchmarked it against CPython? apython is still a bytecode interpreter, like CPython. And CPython is compiled to machine code, just as apython is assembled to machine code, so we wouldn't expect a huge speedup. A skilled assembly language programmer might be able to write faster code than the C compiler can make from CPython. Is that you? I see the code was written with Claude - does Claude write faster assembly code than the C compiler? Or did you tune the assembly code by hand, or somehow train Claude to write fast code for this application?
I think after a day or two of full immersion in this environment, disassembling hex code would be mostly straightforward. Woz is said to have 6502 binary memorized. It doesn't really take much, you don't need to memorize it all. Knowing a dozen instructions can take you a long way.
It does. I am certainly no Woz, but I used to program a KIM-1 by hand assembling with pencil and paper from a programming card, then keying hex codes into its onboard keypad. After a few days you don't need to look at the card much. It's really quite practical - it's actually easier than dealing with editor and assembler tools. After fifty years, I still recall that A5 encodes LDA.
I am teaching myself Arm assembly for the M-series of processors, M-4 for now. I have been playing with and using J (jsoftware.com) since 2010, and I have to say that however abstracted languages and programs become, I still love the atoms and terseness of array languages and writing close to the metal. I started with Factor, gforth, and Retro years ago. Something magical happens when you immerse yourself in it. Right now I am working with KlongPy, which pairs the Klong language with a PyTorch backend, and it is amazing.

I used to write assembly code for my VIC-20 back in the day and then bought the VIC FORTH cartridge for like $30 in 1982. I programmed my 1977 PET 2001 in the Commodore BASIC 1.0 it came with, but there was a SYS instruction for machine code! I used to write my code on an index card before typing it in and saving to the cassette recorder. Magazines had code to hand type in, so I learned coding by reading and writing it first. I accidentally bought a hardcover book on PDP-11 programming and read the whole book before I bought my PET in 1977. Machine language.
I miss the early days of computing before the internet or Genie Online, but Echo in NYC was a blast - thanks, Stacy!!
Agreed, though I would say that 6502 is a lot more straightforward to memorize than x86. A lot fewer addressing modes, and every opcode is a single byte, possibly followed by operand bytes. The 6502 was a little gem.
There are none. It was not routine to make videos in 1961 - 1964.
The Feynman Lectures website does have links to recordings of almost every lecture, but no video. But there are thousands of photos - many photos of each lecture, showing Feynman's blackboards.
Also, there are videos there of a series of 7 lectures at Cornell in 1964.
It wasn't a coincidence, or an accident. C was specifically designed to write Unix, by people who had experience with a lot of other computer languages, and had programmed other operating systems including Multics and some earlier versions of Unix. They knew exactly what they were doing, and exactly what they wanted.
I'm not sure what you mean by "coincidence" or "accident" here.
C is a pretty OK language for writing an OS in the 70s. UNIX got popular for reasons I think largely orthogonal to being written in C. UNIX was one of the first operating systems that was widely licensed to universities. Students were obliged to learn C to work with it.
If the Macintosh OS had come out first and taken over the world, we'd probably all be programming in Object Pascal.
When everyone wanted to program for the web, we all learned JavaScript regardless of its merits or lack thereof.
I don't think there's much very interesting about C beyond the fact that it rode a platform's coattails to popularity. If there is something interesting about it that I'm missing, I'd definitely like to know.
> Operating systems have to deal with some very unusual objects and events: interrupts; memory maps; apparent locations in memory that really represent devices, hardware traps and faults; and I/O controllers. It is unlikely that even a low-level model can adequately support all of these notions or new ones that come along in the future. So a key idea in C is that the language model be flexible; with escape hatches to allow the programmer to do the right thing, even if the language designer didn't think of it first.
This. This is the difference between C and Pascal. This is why C won and Pascal lost - because Pascal prohibited everything but what Wirth thought should be allowed, and Wirth had far too limited a vision of what people might need to do. Ritchie, in contrast, knew he wasn't smart enough to play that game, so he didn't try. As a result, in practice C was considerably more usable than Pascal. The closer you were to the metal, the greater C's advantage. And in those days, you were often pretty close to the metal...
Later, on page 60:
> Much of the C model relies on the programmer always being right, so the task of the language is to make it easy to do what is necessary... The converse model, which is the basis of Pascal and Ada, is that the programmer is often wrong, so the language should make it hard to say anything incorrect... Finally, the large amount of freedom provided in the language means that you can make truly spectacular errors, far exceeding the relatively trivial difficulties you encounter misusing, say, BASIC.
Also true. And it is true that the "Pascal model" of the programmer has quite a bit of truth to it. But programmers collectively chose freedom over restrictions, even restrictions that were intended to be for their own good.
The irony is that all wannabe C and C++ replacements are exactly the "Pascal model" brought back into the 21st century, go figure.
"A consequence of this principle is that every occurrence of every subscript of every subscripted variable was on every occasion checked at run time against both the upper and the lower declared bounds of the array. Many years later we asked our customers whether they wished us to provide an option to switch off these checks in the interests of efficiency on production runs. Unanimously, they urged us not to they already knew how frequently subscript errors occur on production runs where failure to detect them could be disastrous. I note with fear and horror that even in 1980 language designers and users have not learned this lesson. In any respectable branch of engineering, failure to observe such elementary precautions would have long been against the law."
-- C.A.R Hoare's "The 1980 ACM Turing Award Lecture"
The thing for me, at least, is that when I looked at Pascal, MODULA-2, and Ada, if you had complex data structures which had to allocate and deallocate memory, then those languages would not help at all. They would still allow you to make pointer mistakes. Pascal and MODULA-2 were also very restrictive in various areas (no generics). Ada is better in that respect, but Ada compilers were rare.
In my opinion it is only Rust that offers a language without runtime system requirement and fixes essentially all of the problems of C.
First of all, C did not have any generics either, so it was the same playing field.
C has a runtime, even if a tiny one. It is what calls into main(), handles floating-point arithmetic when no hardware support is available, runs functions before and after main(), and nowadays also does threading.
Heap memory handling in Pascal, Modula-2, and Ada is much safer than in C: there is no need to do math to calculate the right allocation size, arenas are available in the standard library, dynamic allocation can also be managed by the compiler if desired (Ada), and pointers are safe in that by default they must refer to existing data. However, if one really wants to do pointer arithmetic, it is available.
The only issue they have relative to C is use-after-free, but that apparently isn't an issue for the folks moving away from C to Zig, which is basically Modula-2 with some C syntax flavour.
C uses pointer casts all over the place to fake generics. If you don't have that (in Pascal or MODULA-2) then life becomes very unpleasant.
There is quite a bit of C code that makes creative use of the size of allocations, for example linked lists with a variable-sized payload. Again, one of the things that would prevent a C programmer from switching to Pascal.
I don't expect the Zig user base to become larger than the Rust user base any time soon. But we have to wait and see, Zig is quite young.
> C uses pointer casts all over the place to fake generics.
by "C" do you mean users of C? because most of the C code I write I don't use those sorts of techniques; instead I just use the preprocessor to make scuffed generics.[1] Unless you mean in libc itself, where I don't recall any use of pointer casts like that? If I'm missing something, please enlighten me.
Same tricks are possible in Modula-2, Pascal, Ada, if fake generics count.
Creative use of the size of allocations is also possible in those languages; the BIG difference is that it isn't the default way everything gets done.
In Pascal (not the original Pascal standard, but, say, Turbo Pascal), could you allocate a variable-sized array of something, and still have index protection when using it?
(I know quite well that C couldn't. Even a C++ vector may or may not, depending on which access method you use.)
It is often said that C became popular just because Unix was popular, due to being free -- it just "rode its coattails" as you put it.
As if you could separate Unix from C. Without C there wouldn't have been any Unix to become popular, there wouldn't have been any coattails to ride.
C gave Unix some advantages that other operating systems of the 1970s and 80s didn't have:
Unix was ported to many different computers spanning a large range of cost and size, from microcomputers to mainframes.
In Unix both the operating system and the applications were written in the same language.
The original Unix and C developers wrote persuasive books that taught the C language and demonstrated how to do systems programming and application programming in C on Unix.
Unix wasn't the first operating system to be written in a high-level language. The Burroughs OS was written in Algol, Multics was written in PL/I, and much of VMS was written in BLISS. None of those languages became popular.
In the 1970s and 80s, Unix wasn't universal in universities. Other operating systems were also widely used: Tenex, TOPS-10, and TOPS-20 on DEC 10s and 20s, and VMS on VAXes. But their systems languages and programming cultures did not catch on in the same way as C and Unix.
The original Macintosh OS of the 1980s was no competitor to Unix. It was a single user system without integrated network support. Apple replaced the original Macintosh OS with a system based on a Unix.
> Unix wasn't the first operating system to be written in a high-level language. The Burroughs OS was written in Algol, Multics was written in PL/I, and much of VMS was written in BLISS. None of those languages became popular.
Of course, they weren't available as free beer with source tapes.
> Apple replaced the original Macintosh OS with a system based on a Unix.
Only because they decided to buy NeXT instead of Be.
Had they bought Be, that would not have been true at all.
> Of course, they weren't available as free beer with source tapes.
I think this was less important then, than people sometimes think.
I recall those days. In the 1980s and 90s I worked as a scientific programmer in a university department. Some of our software was commercialized and sold and supported as a product for a time in the 80s. Pardon the following long memoir, but I think some reporting on what actually happened then, as seen by even one participant, is pertinent.
We used a VAX with DEC's VMS operating system. Our application was developed in DEC Pascal (which didn't have the limitations of Standard Pascal because it used the DEC CLR, Common Language Runtime). Later on we began using Allegro Common Lisp for some things.
Through the 80s and early 90s, we never used Unix and C. And, we were not unusual, even in a university. Most of the VAXes at that university ran VMS
(or one of the DEC-10/20 OS in the early 80s), including the computer science department (which began running Unix on some but not all systems later in the 80s). So Unix was not as pervasive in the 80s as some people seem to think.
About "free beer": running Unix on a VAX in the 1980s was definitely not "free", it was a major investment in time, effort, and yes, money (in the form of salaries). First, the OS wasn't a separate line item. You bought a bundled system including both the VAX hardware and the VMS OS. Then the DEC guy came and turned it on and it just worked. I don't even know how buying a bare VAX and installing your own OS worked. How did you handle DEC field service?
DEC field service required their own utilities that ran on VMS. If you used Unix instead, you needed an expert in Unix to install it and maintain it.
And it was no different with the early commercial Unixes. You bought a Sun workstation and it came with their Unix bundled (Solaris or whatever).
In the 1990s we switched from VAX/VMS to HP workstations that bundled HP-UX, their Unix. In all of these Unix platforms, Unix was bundled and you did pay for it, it was just included in the price.
I think there is some confusion about the history. The free, frictionless, install-it-and-run-it-yourself OS was not Unix in the 80s, it was Linux in the 1990s. By then C and Unix-like operating systems were well established.
Also, there was genuine admiration for Unix technical features, notably its simplicity and uniformity, even at sites like ours that didn't use it. There were several projects to give VMS a Unix-like userspace. There was a (yes) free Software Tools project (that was its name), and a commercial product called Eunice. People who had already paid for VMS paid more for Eunice to make VMS look like Unix.
Unix was a better platform for teaching CS than VMS or the other alternatives.
VMS did come with source code. It came on a huge stack of fiche cards, along with several pallet-loads of hardcopy documentation in binders.
There was nothing like the books The C Programming Language by K&R, or The Unix Programming Environment by Kernighan and Pike. Or the many Unix and C books that followed them. And then the college courses that used them.
Instead there were special courses in system programming and OS internals (separate courses) from DEC. The university would pay for them once in a while. A DEC expert would come for a week and programmers from all the VAX sites would get together all day every day in a classroom while they lectured. There was no textbook, but everyone got a huge binder of printed notes.
So systems programming on VMS, and I suppose other non-Unix platforms, remained an esoteric, inaccessible art, totally divorced from application programming, that used a programming language that was not used for anything else.
A few words comparing my experience programming in C in the 1990s to programming in DEC Pascal in the 80s: C wasn't much worse. The greater safety of Pascal did not make much difference in application programming. In Pascal, array-bounds errors and the like produced a crash with a traceback. In C, similar errors produced a crash with a cryptic message like "segfault". But often the actual defect was far from the line that crashed and appeared in the traceback, so the investigation and debugging were similar in both languages. And the more common (and often more difficult) errors that just computed the wrong answer were about the same in both languages.
My recollection of working in a similar environment was very different. The Comp Sci department wanted Unix but not for its own sake. They wanted access to the burgeoning software being produced for it aimed at academics. Tex/LaTeX was the biggest driver because it was the best way at the time to make a readable research paper that was heavy in math.
Then the students needed access to lex/yacc etc for their courses and X Windows too.
That we produced other Unix programs was just an artifact of the original drive to have Unix. The Compaq 386 or Macintosh II were niche products for that job and VMS had been turfed by the late eighties.
First to market is not necessarily the best. Case in point: many video sites existed before YouTube, including ones based on Apple QuickTime. But in the end Flash won.
To me it looks like there is a better way to do things and the better one eventually wins.
> I'm not sure what you mean by "coincidence" or "accident" here.
I mean Unix had to be written in C, not in, say, Algol or PL/I or BLISS, high-level languages used to write other operating systems.
I also meant that the features of C were not put there by impulse or whim, they were the outcome of considered decisions guided by the specific needs of Unix.
> Although we entertained occasional thoughts about implementing one of the major languages of the time like Fortran, PL/I, or Algol 68, such a project seemed hopelessly large for our resources: much simpler and smaller tools were called for. All these languages influenced our work, but it was more fun to do things on our own.
They say right there that Fortran, PL/I, and Algol 68 were too big and complicated for Unix. Yes, if you are building a system, it is more productive to use a language that is built for purpose and pleasant to work with ("fun") than one you have to struggle against all the time.
They wanted to play and ignored other languages on purpose, that is all.
> Although we entertained occasional thoughts about implementing one of the major languages of the time like Fortran, PL/I, or Algol 68, such a project seemed hopelessly large for our resources: much simpler and smaller tools were called for. All these languages influenced our work, but it was more fun to do things on our own.
Pity that, in regards to secure programming practices in C, the community also ignores the decisions of the authors.
> Although the first edition of K&R described most of the rules that brought C's type structure to its present form, many programs written in the older, more relaxed style persisted, and so did compilers that tolerated it. To encourage people to pay more attention to the official language rules, to detect legal but suspicious constructions, and to help find interface mismatches undetectable with simple mechanisms for separate compilation, Steve Johnson adapted his pcc compiler to produce lint [Johnson 79b], which scanned a set of files and remarked on dubious constructions.
It is also worth noting that on Plan 9 they attempted to replace C with Alef for userspace; while that experiment failed, they went with Limbo on Inferno, and later contributed to Go.

And the C compiler on Plan 9 is its own thing:
> The compiler implements ANSI C with some restrictions and extensions [ANSI90]. Most of the restrictions are due to personal preference, while most of the extensions were to help in the implementation of Plan 9. There are other departures from the standard, particularly in the libraries, that are beyond the scope of this paper.
It's not the number of actions, it's because the slide rule is analog and physical. The smaller numbers are to the left, the larger to the right, and you have to slide the rule to the first number, then the hairline cursor to the second number. There's no way you could mix up a large number like 987 with a small number like 187.