To be specific, a linear solver can be written in a week (I have done it).
A serious non-linear solver that handles legacy Spice models is another beast entirely. And if you want to integrate modern advances in algebraic-differential systems you take that to a higher level.
These are not partial differential equations such as you find in Navier-Stokes. These are sparse non-linear differential equations that do not parallelize nearly as simply.
Another example of a related problem that parallelizes poorly, even though it is linear, is the FDTD formulation of Maxwell's equations. These are relatively simple systems, but the bottleneck is almost always memory bandwidth, which is why they are so hard to parallelize effectively.
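To illustrate why FDTD tends to be memory-bound, here is a minimal 1D Yee-scheme update sketched in NumPy (grid size, Courant number, and source are arbitrary choices for the sketch). Each time step streams through both field arrays end to end but performs only about two flops per element loaded, so the arithmetic intensity is very low.

```python
import numpy as np

# Minimal 1D FDTD (Yee) update for Maxwell's equations in normalized
# units -- an illustrative sketch, not production code. Every time step
# reads and writes the full E and H arrays while doing only a
# subtract and a multiply-add per cell, so memory traffic dominates.
n_cells, n_steps = 1000, 500
courant = 0.5                  # Courant number (stability needs <= 1 in 1D)
E = np.zeros(n_cells)          # electric field at integer grid points
H = np.zeros(n_cells - 1)      # magnetic field at half-integer points

for step in range(n_steps):
    # update H from the spatial difference (curl) of E
    H += courant * (E[1:] - E[:-1])
    # update interior E from the spatial difference (curl) of H
    E[1:-1] += courant * (H[1:] - H[:-1])
    # hard source: inject a Gaussian pulse at the left edge
    E[0] = np.exp(-((step - 30) / 10.0) ** 2)
```

Counting the work makes the point: roughly 4 flops per cell per step against roughly 4 memory accesses per cell per step, far below the flop/byte ratio modern CPUs need to stay busy.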
The original point stands. Ngspice shows its heritage from the days of Fortran far more than a modern code base would or should. Its sole great virtue (from my point of view) is that it integrates with KiCad and only falls over for no reason about 5% of the time.
I would suspect that some of the simulation systems coming out of the Julia community or Xyce would be a better base.
Seems like you can use the core intuition here, SIMD comparison of multiple elements at once, at more than just the terminal scale.
The outline would be:
a) use a gather to grab multiple elements from 16 evenly spaced locations
b) compare these in parallel using a SIMD instruction
c) focus in on the correct block
d) if the block is small revert to linear search, else repeat the gather/compare cycle
Even though the gather instruction reads from non-contiguous memory, and reads more elements than a binary search normally would, enabling a multi-way compare that collapses four levels of binary search should be a win on large tables.
You also may not be able to do a full 16-way comparison for all data types. Searching float64 limits you to 8-way comparisons (with AVX-512), but int32 and float32 allow 16-way comparisons.
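The four steps above can be modeled without intrinsics. Here is a scalar NumPy sketch of the k-ary gather/compare search, where the vectorized comparison stands in for the SIMD instruction (the function name and parameters are illustrative, not any real SIMD library's API):

```python
import numpy as np

def kary_search(table, key, k=16, linear_cutoff=32):
    """Sketch of k-ary search over a sorted array. Returns the same
    index as np.searchsorted(table, key, side='right'). Each round
    collapses log2(k) levels of binary search into one gather/compare."""
    lo, hi = 0, len(table)
    while hi - lo > linear_cutoff:
        # a) "gather" k evenly spaced pivots from the current block
        idx = np.linspace(lo, hi - 1, k).astype(np.int64)
        pivots = table[idx]
        # b) one vectorized comparison stands in for the SIMD compare
        j = int(np.sum(pivots <= key))
        # c) focus in on the sub-block between the straddling pivots
        if j == 0:
            hi = lo            # key precedes the whole block
        elif j == k:
            lo = hi            # key follows the whole block
        else:
            lo = int(idx[j - 1]) + 1
            hi = int(idx[j]) + 1
    # d) block is small: finish with a plain linear scan
    return int(lo + np.sum(table[lo:hi] <= key))
```

With k=16 each round divides the search block by roughly 15, so a million-entry table resolves in about five gather/compare rounds plus one short linear scan, versus ~20 levels of pointer-chasing binary search.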
What you say is something that is actually already happening.
There is no longer a binary approval/non-approval status. Drugs and treatments that address terminal conditions often get special status. Drugs for rare diseases definitely get special treatment.
In addition, many studies now use continual data collection and evaluation. If the results are very good or very bad, that can be recognized much more quickly and with fewer people exposed to risk than previous types of studies. Reaction to negative events doesn't happen in hours, but it isn't all that far from that.
Your case is even stronger, because understanding the 100 conditions is likely to lead to a few being cured and to build the technology base that makes the moonshot vastly easier.
Actually quite a lot of diseases have cures. Many have very low cost cures or prevention.
And cancer isn't one disease. It is hundreds. Many of which have cures.
Diabetes was much harder to understand (and also isn't a single disease). Recent results have demonstrated islet cell transplants in type 1 diabetics that don't require life-long immune suppression. That isn't widespread yet, but it is promising.
An interesting example of an actual cure is ulcers. Most humans who get ulcers get them from bacterial infection with H. pylori. Killing that infection cures the ulcers. That wasn't possible before the cause was understood.
Oddly enough, most uses of Lean never actually run the program. The fact that it type checks is enough to prove the theorem in question.
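The point about type checking can be seen in a tiny Lean 4 example (the theorem and name here are just an illustration): the definition below is never executed; the compiler accepting it *is* the proof.

```lean
-- A proof term: the fact that this type-checks proves the theorem.
-- Nothing here is ever "run".
theorem add_comm' (m n : Nat) : m + n = n + m :=
  Nat.add_comm m n
```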
That said, if execution is seriously required for your problem along with strong logic on the side, you may prefer Dafny which transpiles the computation part of your proof to C++ or Go.