I know it’s not straight HTML, but SSI (Server Side Includes) helped with this and back in the day made for some incredibly powerful caching solutions. You could write out chunks of your site statically and periodically refresh them on the server side, while benefiting from serving static content to your users. (This was in the pre-Varnish era, and before everyone was using memcached.)
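For anyone who hasn’t seen it, a minimal sketch of the idea (the fragment path here is made up): the page ships as ordinary HTML with an SSI directive, and the web server splices in a fragment that a background job regenerates every few minutes, so every visitor is still served static content.

    <!-- page.shtml: the fragment below is rewritten on disk by a scheduled
         job; Apache/nginx splices it in at request time via mod_include / ssi -->
    <!--#include virtual="/fragments/league-table.html" -->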
I personally used this to great success on a couple of Premier League football club websites around the mid 2000s.
One benefit of doing it on the client is that the client can cache the result of an include. For example, instead of having to download the content of the header and footer for every page, it is downloaded once and re-used on future pages.
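A rough sketch of how that might look on the client (the fragment URL and element id are just for illustration): the fragment is pulled in with a plain fetch(), and as long as it is served with a long-lived Cache-Control header the browser only downloads it on the first page view and re-uses the cached copy afterwards.

    <div id="site-header"></div>
    <script>
      // /fragments/header.html is served with e.g. Cache-Control: max-age=86400,
      // so after the first page view the browser re-uses its cached copy
      fetch('/fragments/header.html')
        .then(r => r.text())
        .then(html => { document.getElementById('site-header').innerHTML = html; });
    </script>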
How big are your headers and footers, really? Is caching them worth the extra complexity on the client, plus all the pain of cache invalidation (and the two extra requests in the non-cached case)?
I’m willing to bet the runtime overhead of assembly on the client is going to be larger than the download cost of the fragments being included server- or edge-side and cached.
If you measure download cost in time, then sure. But if you measure it in bytes downloaded, or in server costs, then no: caching on the client works out cheaper.
Not necessarily; compression is really effective at reducing the bytes downloaded.
In server terms, the overhead of tracking one download is going to be less than the overhead of tracking downloads of multiple components.
And for client-side caching to be of any use, a visitor would need to view more than one page, and the harsh reality is that many sessions are only one page long, e.g. on news sites, blogs, etc.