To find this requires a combination of a few keystrokes and button pushes plus an "outlook" (rather than "inlook") attitude.
Instead, as with so many discussions in a so-called "CS" community, we find something like adolescents trying to BS each other by presenting their mere opinions as facts.
Come on! Please!
Long ago my research community put in a lot of work to make it easy to deal with many simple questions, but we didn't reckon with the sheer inwardness of so many end-users. A similar problem is that most people in CS have no idea what Doug Engelbart really did, yet just typing his name into Google will provide great info in just the first few hits.
How can the current community repair itself and start trying to become a real field again?
It is the paradox of the information era: society has never had such easy and immediate access to the limits of human knowledge, and at the same time, society can't agree on whether the earth is spherical, whether vaccines work, or whether demonstrably corrupt politicians are really that bad.
What special access to the divine do you think you have that would encourage you to report the last two sentences as facts (they are actually quite false)? You are confusing your internal inferences with what is actually going on -- and compounding the confusion to the extent that you feel you can tell others.
If you are trying to bluff your way through -- to appear knowledgeable when you aren't and can't be -- this is a behavioral syndrome that will not serve you at all well overall.
"Back then" was the 60s (whose decade really ran from about 1963 to 1973).
A good quote from Dave Evans (to faculty members who complained about a genius faculty member who could be abrasive): "We don't care if they're prima donnas as long as they can sing".
As I've mentioned in a few other places, including this forum, most people found Edsger to be funny. He obviously enjoyed wielding English for comments -- biting, snide, and otherwise -- and he loved to project arrogance in his particular, highly developed style.
A good friend of his was Bob Barton -- I think even more of a genius -- and perhaps even more idiosyncratic, pointed and neurotic. Barton was kind of on-stage most of the time, and had truly eloquent extemporaneous opinions.
But so what? Listen to Dave Evans, and then listen to what great and interesting people have to say.
One of the definitions for these people is that whether they are right or wrong, or whether you agree with them or not, what they say is so interesting that it absolutely demands to be thought about.
You can't beat that kind of help for your own thinking processes.
Actually, we didn't claim to have successfully done this.
More careful reading of the proposal and reports will easily reveal this.
The 20,000 lines of code was a strawman, but we kept to it, and got quite a bit done (and what we did do is summarized in the reports, and especially the final report).
Basically, what didn't get finished was the "bottom-bottom" by the time the funding ran out.
I made a critical error in the first several years of the project because I was so impressed with the runnable graphical math -- called Nile -- by Dan Amelang.
It was so good, and ran so close to actual real-time speeds, that I wanted to give live demos of this system on a laptop (you can find these in quite a few talks I did on YouTube).
However, this committed Don Knuth's "prime sin": "premature optimization is the root of all evil".
You can see from the original proposal that the project was about "runnable semantics of personal computing down to the metal in 20,000 lines of code" -- and since it was about semantics, the report mentioned that it might have to be run on a supercomputer to achieve the real-time needs.
This is akin to trying to find "the mathematical content" of a complex system with lots of requirements.
This is quite reasonable if you look at Software Engineering from the perspective of regular engineering, with a CAD<->SIM<->FAB central process. We were proposing to do the CAD<->SIM part.
My error was to want more too early, and try to mix in some of the pragmatics while still doing the designs, i.e. to start looking at the FAB part, which should be down the line.
Another really fun sub-part of this project was Ian Piumarta's from-scratch runnable TCP/IP in less than 200 lines of code (which included parsing the original RFC about TCP/IP).
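To make the RFC-parsing idea concrete, here is a toy sketch of my own (hedged -- this is not VPRI's actual code, which used their own parsing tools): the ASCII packet diagrams in RFC 793 are regular enough that field names and bit widths can be recovered mechanically, because each bit of the header occupies two characters of the diagram's ruler.

```python
# Hedged illustration: recover (field name, bit width) pairs from an
# RFC-793-style ASCII header diagram. A fragment of the TCP header:
DIAGRAM = """\
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|          Source Port          |       Destination Port        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                        Sequence Number                        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
"""

def parse_diagram(text):
    fields = []
    for line in text.splitlines():
        if not line.lstrip().startswith('|'):
            continue  # skip the '+-+-...' ruler rows
        # Each cell between '|' separators is one field. A field of b bits
        # spans 2*b - 1 characters, so bits = (chars + 1) // 2.
        for cell in line.strip().strip('|').split('|'):
            fields.append((cell.strip(), (len(cell) + 1) // 2))
    return fields

print(parse_diagram(DIAGRAM))
# [('Source Port', 16), ('Destination Port', 16), ('Sequence Number', 32)]
```

From a table like this one can then generate packet accessors, which is the spirit (though not the letter) of treating the RFC itself as a runnable specification.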
It would be worth taking another shot at this goal, and to stick with the runnable semantics this time around.
And note that if it wound up taking 100K lines of code instead of 20K, it would still be helpful. One of the reasons we picked 20K as a target was that -- at 50 lines to a page -- 20K lines is a 400-page book. This could likely be made readable, whereas 5 such books would not be nearly as helpful for understanding.
Ah, I see -- you are pointing out that you did not complete what you set out to do. I think it is quite noble of you to admit that, and at least the demo presentations were quite impressive, even if they lack a certain totality.
Well, anyway, thanks a lot for these presentations. I don't think they're as good as a software course where someone breaks all my preconceptions of what computing is, leaving me truly free, but they are helping to widen my thinking.