I will say one thing: I pity the person (grad student?) who has to do error propagation analysis on a research project using posits. (I'm the original implementor.)
Yeah, I pretty much think that posit only makes sense for 32 bit and smaller, and that you want your 64 bit numbers to be closer to float64 (although with the posit semantics for Inf/NaN/-0).
Oh, I actually only think it's useful for machine learning. I have some unpublished, crudely done research showing that the extended accumulator is only necessary for the Kronecker delta stage of the backpropagation (posits trivially convert to higher precision by zero-padding)... you can see what I'm talking about in the Stanford video.
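The zero-padding claim can be checked directly. Here is a minimal, hypothetical posit decoder (assuming the es=2 field layout used in recent posit specs; the function name and widths are my own for illustration) that verifies an 8-bit posit keeps its exact value when its bit pattern is padded with zeros to 16 bits:

```python
def decode_posit(u, n, es=2):
    """Decode an n-bit posit bit pattern to a float. Illustrative sketch only."""
    mask = (1 << n) - 1
    u &= mask
    if u == 0:
        return 0.0
    if u == 1 << (n - 1):
        return float("nan")                 # NaR
    neg = bool(u >> (n - 1))
    if neg:
        u = (-u) & mask                     # two's-complement negation
    body = (u << 1) & mask                  # drop the sign bit, keep an n-bit window
    lead = (body >> (n - 1)) & 1
    run, pos = 0, n - 1
    while pos >= 0 and ((body >> pos) & 1) == lead:
        run += 1                            # regime: run of identical bits
        pos -= 1
    regime = run - 1 if lead else -run
    pos -= 1                                # skip the regime's terminating bit
    exp_bits = min(es, pos + 1)             # exponent may be truncated off the end
    exp = 0
    if exp_bits:
        # truncated exponent bits are the high bits; missing low bits read as 0
        exp = ((body >> (pos + 1 - exp_bits)) & ((1 << exp_bits) - 1)) << (es - exp_bits)
    pos -= exp_bits
    frac_bits = pos + 1
    frac = body & ((1 << frac_bits) - 1) if frac_bits > 0 else 0
    sig = 1.0 + frac / (1 << frac_bits) if frac_bits > 0 else 1.0
    mag = sig * 2.0 ** (regime * (1 << es) + exp)
    return -mag if neg else mag

# Zero-padding an 8-bit posit to 16 bits preserves its value exactly.
for p in range(256):
    if p == 0x80:                           # NaR: NaN != NaN, skip the comparison
        continue
    assert decode_posit(p, 8) == decode_posit(p << 8, 16)
```

The negative case works too because negating a word whose low bits are all zero just negates the high bits in place, so the padded pattern two's-complements to the padded magnitude.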
Fun fact: John sometimes claims he invented the name, but this is untrue. My old college website talks about building "positronic brains," and it has long been a goal of mine to somehow "retcon" Asimov (and TNG's Data) into being a real thing. When this opportunity for some clever wordsmithing arrived, I coined the term with the hope that someone would make a posit-based perceptron, or "positron".
In your RTL synthesis did you use the "hidden -2 bit" for negative posits? (Assuming you are "cheating" IEEE by not implementing subnormals or NaN...) This is one key insight that makes posit circuits much smaller, but the algebra that you have to do to get the correct circuits is a bit trickier!
If you're familiar with how the hidden bit works for IEEE floating point: for posits, use a hidden '10' in front of the fraction for negative numbers, and suddenly a whole bunch of math falls out. This is equivalent to adding the fraction to a -2 value, which pins the 'overall value' of the significand between -2 and -1 (just as, for positive values, the hidden bit is 1 and the 'overall value' is pinned between 1 and 2).
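One way to see the math "fall out": negating a positive significand 1+f gives -(1+f) = -2 + (1-f), and when the fraction field is nonzero, the bits of 1-f are exactly the two's complement of the bits of f. So the fraction field of the two's-complemented word can be read directly with a hidden '10', no sign-magnitude fixup needed. A minimal sketch (the 8-bit fraction width and function name are illustrative assumptions, not from the thread):

```python
K = 8                                # illustrative fraction field width

def sig_value(frac, negative):
    """Read a stored fraction field with hidden '01' (positive) or '10' (negative)."""
    f = frac / (1 << K)
    return f - 2.0 if negative else 1.0 + f   # pinned to [-2,-1) or [1,2)

# Two's-complementing the fraction field flips +(1+f) to -2+(1-f) exactly.
for F in range(1, 1 << K):           # F == 0 would carry into the exponent field
    neg_F = (-F) & ((1 << K) - 1)    # two's complement of the fraction field
    assert sig_value(neg_F, negative=True) == -sig_value(F, negative=False)
```

The F == 0 case is where the carry propagates up into the exponent/regime bits, which is presumably part of the "trickier algebra" mentioned above.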
I'm the implementor for posits (deactivated my primary HN account) -- I built circuit diagrams for computation with posits. So, we have them -- and they are smaller. There is also a key insight into the representation of posits (negative numbers have a "hidden -2" instead of a "hidden 1") that I cracked early on in my fiddling with circuits and that was completely missed by all other implementers until earlier this year, even after I communicated it to them by email several times.
> negative numbers have a "hidden -2" instead of a "hidden 1"
Isn't this just (what I assumed was) the standard implementation technique for FPUs that aren't[0] stuck with IBM/Intel's braindead sign-magnitude junk?
0: so (eg, for a 1.7.8 float) -1.996..-1.000 would be 3F00-3FFF, and +1.000..+1.996 would be C000-C0FF, making 0x[-2].00-0x[-2].FF and 0x[+1].00-0x[+1].FF (with implicit -2/+1 corresponding to 3F/C0)
I can believe that for not-too-large number sizes a posit implementation might be smaller than one for the traditional FP format.
However "smaller" must be qualified, because the size of a FPU varies enormously depending on the speed target. A completely different size results when the target is to do a fused multiply-add in 4 cycles at 5 GHz than when the target is to do it in 100 cycles at 200 MHz.
So unless you give more information about what you have compared, we cannot know whether it is true that a posit implementation can be smaller.
I don't know how you can claim to be "the" implementor as there are many implementations. However, your explanation doesn't have enough context. Do you have a paper describing this in better detail?
Sorry, should have specified: I'm the original implementor. I'm on the paper with John Gustafson, and presenter/live-demoer of the second half of the Stanford video.
There is a paper coming out with details on the Yonemoto -2 hidden-bit method... I don't know if it's still preprint or embargoed or what, but it is recent. I'm really not involved in the project anymore, so my knowledge of the existence of this paper is only due to the courtesy of the authors.
Is the source code available? It would be amazing to have the best possible implementation available as Verilog. There are quite a few pretty good IEEE 754 implementations.