One of JSON's selling points is that it's human-readable. I would struggle reading any JSON document with a nesting level greater than 6 or so. Why would I ever want a nesting level > 100?
This doesn't really make sense to me. Just because JSON is human-readable doesn't mean that humans are meant to read any possible serialized data nor that it should always be an ergonomic read. Not everything you serialize is just a config file or hand-edited data. It may be arbitrarily dense with arbitrary data.
To me, this is kinda like seeing a JSON array of user IDs coming over the network and going "Huh? Why not usernames? I thought JSON was supposed to be human-readable!"
I see your point; although if you look at the specs for a lot of formats designed to be easy for computers to parse, being easy for humans is also a stated design goal (e.g. Markdown, YAML).
Large, complex object hierarchies with lots of nesting might make more sense represented in binary (e.g. Avro).
I realize I'm making a little bit of a McLuhan-esque argument in a largely computer science-oriented context, but I hope you can see what I'm getting at.
Another example: source code is human readable (by definition), but that doesn't mean I can read any possible program. I struggle to read many programs with deep nesting, but that doesn't mean I'd find it acceptable for my compiler to crash in such situations.
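As a concrete illustration of "don't crash, but do fail gracefully": CPython's `json` module rejects deeply nested documents with a catchable `RecursionError` rather than blowing the process's actual stack. A minimal sketch (the depth of 100,000 is just an arbitrary value well past the default recursion limit):

```python
import json

# A syntactically valid but absurdly nested document: [[[[...]]]]
depth = 100_000
doc = "[" * depth + "]" * depth

try:
    json.loads(doc)
except RecursionError:
    # The parser bails out with an ordinary exception the caller can handle,
    # instead of crashing the interpreter.
    print("parser hit its nesting limit")
```

That seems like the right trade-off: the format permits arbitrary nesting, and the implementation sets its own safety limit without crashing.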
Maybe because you use a JSON library all the time anyway, so reaching for it even when it's not ideal but good enough is the path of least resistance. Or you're a cargo cultist.
When dealing with interconnected systems it's a pretty decent format. Being human-readable also allows for relatively easy interaction with the values without building one-off tooling.
When I'm dealing with non-SQL data, one thing I've also done is fire the record into a service that gzip-compresses it, and a worker sends it to S3_PATH/data-type/recordid.json.gz as a recovery option. It's not the first or best recovery option, but when your core data is only a few hundred thousand records, it's potentially faster than a full DB restore. It also means the services can be written in one language and the recovery script in a scripting language.
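The gist of that pipeline can be sketched as follows (the record shape and key layout are hypothetical, and the actual upload would go through an S3 client such as boto3's `put_object` rather than staying in memory):

```python
import gzip
import json

# Hypothetical record; in practice this arrives from the service.
record = {"id": 42, "type": "widget", "name": "example"}

# Key mirroring the S3_PATH/data-type/recordid.json.gz layout.
key = f"{record['type']}/{record['id']}.json.gz"

# Serialize and compress the record for upload.
payload = gzip.compress(json.dumps(record).encode("utf-8"))

# Real code would call e.g. s3.put_object(Bucket=..., Key=key, Body=payload).
# Recovery is just the reverse: fetch, decompress, parse.
restored = json.loads(gzip.decompress(payload).decode("utf-8"))
```

Because the payload is plain gzipped JSON, the recovery script can be written in any language with gzip and JSON support, independent of the services that produced it.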
It just depends on how you are doing things. Now, for configurations, I tend to prefer TOML or YAML.