I had to work with a 2.1 GB JSON file today. It was really striking that, on a machine with 4 GB free RAM, I could find literally nothing that could work with the whole thing in memory as JSON, including every GUI editor I looked at, Node, jq, etc.
less works fine on files with an enormous number of lines, but not on lines that are extremely long. Open up a 1 GB file whose lines are each 10K characters long and you'll see just how slow the pager can get!
ipython, plus a bit of magic passed to json.loads to get it to intern the object keys? (Assuming there's enough repetition in the keys for the memory savings to get you within your budget.)
I had to do this once for a piece of JSON that ballooned considerably when loaded into RAM. In my case I could wait until after json.loads to do the interning, but I think you can do it on the fly with object_hook or object_pairs_hook (rough sketch below).
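Something like this is what I mean, a minimal sketch with object_pairs_hook (the file name is made up, and you still need the interned result to fit in RAM):

    import json
    import sys

    def intern_keys(pairs):
        # sys.intern makes repeated key strings share one object, so a
        # million copies of "timestamp" cost one string, not a million
        return {sys.intern(k): v for k, v in pairs}

    with open("big.json") as f:  # hypothetical file name
        data = json.load(f, object_pairs_hook=intern_keys)

The hook runs on every decoded object, so the keys are deduplicated as the document is parsed rather than after the fact.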
Alternatively, Rust + Serde, doing as bare-bones a computation as possible to get it working, again with an eye on whether the decoded form will fit. (Hopefully it will, since deserializing into a struct with Serde makes the keys essentially 0 bytes, though some memory is lost to pointers and the like.)
I believe emacs chunks large files and then lazily loads them to enable this. I remember having to mess around with a specific mode to get it to work in the past, but I think it's included in base emacs now.
In this case it was a single giant object that I was trying to extract a specific list of keys (with unknown value lengths) from. Every approach I tried with jq died with out-of-memory errors.
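If you end up back in Python for that, a streaming parser avoids holding the whole object at once. A rough sketch with ijson (not something mentioned above; the key names are made up) that walks the top-level object pair by pair, so only one value is in memory at a time plus whatever you keep:

    import ijson

    wanted = {"users", "transactions"}  # the keys you actually care about
    results = {}

    with open("big.json", "rb") as f:
        # kvitems yields (key, value) pairs of the object at the given
        # prefix; "" means the top-level object itself
        for key, value in ijson.kvitems(f, ""):
            if key in wanted:
                results[key] = value

Each unwanted value is built briefly and then discarded as the file streams by, so peak memory stays far below loading the whole document.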