
Yes, it is .NET that the Tokio blog post references.

Unfortunately, it does not appear to look into .NET's implementation in sufficient detail and as a result gets some of those details wrong.

Starting with .NET 6, there are two mechanisms that determine the ThreadPool's active thread count: the hill-climbing algorithm and blocking detection.

Hill-climbing is the mechanism that both the Tokio blog post and the articles it references mention. I hope the blog's contents do not indicate the depth of research performed by the Tokio developers, because the coverage has a few obvious issues: it references an article written in 2006 covering .NET Framework, which discusses the heavier and more problematic use cases. As you would expect, the implementation received numerous changes since then, and 14 years later likely shared little with the original code. In general, the performance of then-available .NET Core 3.1 was incomparably better, to put it mildly, which includes tiered compilation in the JIT that reduced the impact of the startup-like cases that used to be more problematic. Thus, I don't think the observations made in the Tokio post are conclusive regarding the current implementation.
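To make the hill-climbing behavior observable, here is a minimal sketch of my own (not code from the Tokio post or any .NET source): it saturates the pool with Thread.Sleep-based blocking, which to my understanding is not visible to the blocking-detection mechanism, so any threads injected beyond the core count should arrive through hill-climbing's slower periodic adjustments. Exact counts and timing will vary by machine and runtime version.

```csharp
using System;
using System.Threading;

static class HillClimbDemo
{
    // Saturate the pool with Thread.Sleep-based blocking. To my
    // understanding, Thread.Sleep does not notify the pool's blocking
    // detection, so extra threads beyond the processor count have to
    // come from hill-climbing's periodic sampling.
    public static int[] Run(int seconds = 8)
    {
        for (int i = 0; i < Environment.ProcessorCount * 4; i++)
            ThreadPool.QueueUserWorkItem(_ => Thread.Sleep(seconds * 1000));

        var samples = new int[seconds];
        for (int s = 0; s < seconds; s++)
        {
            samples[s] = ThreadPool.ThreadCount;
            Console.WriteLine($"t={s}s pool threads={samples[s]}");
            Thread.Sleep(1000);
        }
        return samples;
    }

    public static void Main() => Run();
}
```

Watch the per-second samples: the thread count tends to creep up gradually rather than jumping, which is the ramp-up behavior the articles describe.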

In fact, my interpretation of how various C# codebases evolved throughout the years is that hill-climbing worked a little too well, enabling ungodly heaps of exceedingly bad code that completely disregarded expected async/await usage and abused the thread pool to oblivion, with the most egregious cases handled by enterprise applications overriding the minimum thread count to a hundred or two and/or increasing the thread injection rate. Luckily, those days are long gone. The community is now in an over-adjustment phase where people would rather unnecessarily contort the code with async than block here and there and let the thread pool work its magic.

There are also other mistakes in the article regarding task granularity, execution time and behavior, but those are out of scope for this comment.

Anyway, the second mechanism is active blocking detection. This was introduced in .NET 6 with the rewrite of the thread pool implementation in C#. It works by exposing a new API on the thread pool that lets all kinds of internal routines notify it that a worker is blocked or about to block. This allows the pool to immediately inject a new thread to avoid starvation, without a wind-up period. This works very well for the most problematic scenarios of abuse (or just unavoidable sync and async interaction around the edges) and further ensures that the "jitter" discussed in the articles does not happen. Later, the thread pool reclaims idle threads after a delay, once it sees they are not performing useful work, via hill-climbing or otherwise.
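As a rough illustration (my own sketch, not the internal notification API itself): blocking pool workers with sync-over-async via Task.Wait is the kind of managed blocking the detection should pick up, and the effect shows in ThreadPool.ThreadCount almost immediately rather than after a ramp-up. The thresholds and exact counts are runtime-dependent, so treat the numbers as indicative only.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

static class BlockingDetectionDemo
{
    // Block `count` pool workers with sync-over-async (Task.Wait), the
    // kind of managed blocking the .NET 6+ pool can detect, mitigating
    // starvation by injecting threads right away, with no wind-up.
    public static (int Before, int During) Run(int count = 32, int blockMs = 2000)
    {
        int before = ThreadPool.ThreadCount;

        var blockers = new Task[count];
        for (int i = 0; i < count; i++)
            blockers[i] = Task.Run(() => Task.Delay(blockMs).Wait()); // cooperative block

        Thread.Sleep(blockMs / 4); // give the detection a moment to react
        int during = ThreadPool.ThreadCount;
        Console.WriteLine($"pool threads before: {before}, during blocking: {during}");

        Task.WaitAll(blockers);
        return (before, during);
    }

    public static void Main() => Run();
}
```

Compare this with the Thread.Sleep variant: the same degree of blockage, but the injected threads appear within a fraction of a second instead of over many seconds.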

I've been meaning to put up a small demonstration of hill-climbing in the face of un-cooperative blocking for a while, so your question was a good opportunity:

https://github.com/neon-sunset/InteropResilienceDemo (there are additional notes in the readme explaining the output and its interpretation).

You can also observe the almost-instant mitigation of cooperative (i.e. through managed means) blocking by running the code from here instead: https://devblogs.microsoft.com/dotnet/performance-improvemen... (second snippet in the section).



Thanks for the up-to-date info.

> .NET 6

(I’m under the impression that this was released in 2021, whereas the linked Tokio post is from 2020. Hopefully that frames the Tokio post’s observations more accurately.)


UPD: Ouch, messed up the Rust lib import path on Unix systems in the demo. Now fixed.



