No mention whatsoever of the $TERM environment variable. sigh
Outputting raw escape codes and hoping that they work is not how you do it. This is not how any reasonable library (or bash, or git, etc.) does it. These programs and libraries start with the $TERM environment variable to find out what terminal the user is using, and then use something like termcap(5) or terminfo(5) to look up what capabilities that specific terminal has, and what actual escape codes to output to get that effect. In practice, though, most regular programs delegate all of this to a library, like ncurses.
(And also make sure not to output any terminal codes at all if the standard output is not a terminal, as in isatty(3) or tty(1).)
This way, you can check for and use modern cool stuff like sixels, but not use them if some user is using something older like XTerm or the Windows Console.
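To make this concrete, here is a minimal sketch of the whole flow (check isatty, consult $TERM via terminfo, emit the capability's actual codes) using only Python's stdlib `curses` bindings to terminfo; error handling is deliberately coarse:

```python
import curses
import sys

def cap(name, *args):
    """Look up a terminfo capability for $TERM; return "" if unavailable."""
    if not sys.stdout.isatty():        # emit no codes at all when piped, per isatty(3)
        return ""
    try:
        curses.setupterm()             # reads $TERM and loads its terminfo entry
        seq = curses.tigetstr(name)    # e.g. "setaf" = set ANSI foreground color
    except curses.error:               # $TERM unset or unknown terminal type
        return ""
    if seq is None:                    # this terminal lacks the capability
        return ""
    # tparm() fills in parameters (like the color index) the terminfo way
    return (curses.tparm(seq, *args) if args else seq).decode("ascii")

red, reset = cap("setaf", 1), cap("sgr0")
print(red + "error:" + reset + " something went wrong")
```

On a dumb terminal (or when piped) every lookup comes back empty and the output is plain text; on anything capable, the terminal's own codes are used, whatever they happen to be.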
Aside from the 256-color section, I believe all of the examples given in the article are basic sequences which are supported in ~every terminal. Are you aware of any notable terminals where that is not the case?
Maybe I've been lucky, but I've written plenty of software that blindly shoots CSIs at the console and still haven't hit any snags. Especially convenient when the channel is unidirectional; curl ocv.me :)
Shell-mode in Emacs uses $TERM=dumb and doesn't support escape codes by default. Same with compile-mode. Also, there are some tools that keep emitting escape codes even when piped to another program (like less).
Hence my question -- my belief was/is that all of the escape-sequences mentioned in the article will work on 99.99% of all terminals, with the exception of 256-color. Assuming this is true, IMO the remaining 0.01% does not justify introducing another dependency (ncurses -> tput).
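For the record, the "blindly emitted" sequences in question are just a handful of SGR codes like these (a sketch; the isatty gate from upthread is still cheap to keep, and needs no terminfo or tput):

```python
import sys

# CSI is ESC [; these SGR (Select Graphic Rendition) parameters are the
# basic set that virtually every ANSI-style terminal emulator honors.
CSI = "\x1b["
BOLD, RED, RESET = CSI + "1m", CSI + "31m", CSI + "0m"

def colorize(text, *codes):
    # Drop the codes entirely when output is piped, per the isatty advice above.
    if not sys.stdout.isatty():
        return text
    return "".join(codes) + text + RESET

print(colorize("warning:", BOLD, RED), "raw CSI, no terminfo lookup")
```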
I have read that people still use 9term. Its terminfo entry doesn't show any sign of ANSI escape code support.
If I remember correctly, ANSI-like terminal emulators in RGB mode (aka direct mode) only support 16-color and red/green/blue escape codes. Not the 256-color palette used in this article.
ANSI-compatible terminal emulators widely disagree on whether RGB values should be separated by semicolons or colons.
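For reference, the three color-setting forms being discussed look like this; the colon variant is the subparameter syntax from ITU-T T.416 (with the colourspace-id left empty), while the semicolon variant is the widespread de facto form:

```python
# Foreground-color sequences as format strings:
fg_256       = "\x1b[38;5;{n}m"           # 256-color palette (n = 0..255)
fg_rgb_semi  = "\x1b[38;2;{r};{g};{b}m"   # direct RGB, semicolons (de facto)
fg_rgb_colon = "\x1b[38:2::{r}:{g}:{b}m"  # direct RGB, colons (ITU-T T.416 style)

print(fg_256.format(n=208) + "palette color 208" + "\x1b[0m")
```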
I have read articles (possibly on hackaday) this decade by people working on retro hardware projects, expressing frustration with command line programs that spew ANSI garbage to their non-ANSI terminals. At least one author resorted to the screen program, because it can translate some ANSI-isms to the correct codes for the active terminal.
I had genuine TeleVideo 912 glass terminals hooked up until not terribly long ago.
In any case, I don't view this as a matter of whether unusual terminals are "notable". We have an abstraction that plays nicely with them (and allows new terminal protocols to be developed). It's a POSIX standard. It's easy to use. I suggest using it.
Everything is either xterm or something else that's emulating most if not all of its features. Whatever isn't is probably terrible and broken and not worth supporting unless you're getting paid fat stacks to do it.
> No mention whatsoever of the $TERM environment variable. sigh
I have mixed feelings about this... Sometimes I feel young and reckless and want to just output whatever I need without checking $TERM. In practice, all modern terminal emulators are essentially xterm-256color compatible. If something breaks, the worst you get is some funny characters. Is that such a big deal? Much better than propagating the silly termcap/terminfo complexity.
This sounds interesting: how are you getting the Python docs in `texinfo` format? I've used Zeal a fair amount, or *.readthedocs.io, but I'm always looking for local alternatives. A quick search online didn't turn up much.
Whenever the discussion comes up about man pages and how documentation should be organized, I like to quote this section from the GNU coding standards about how Info documentation is structured:
----
Programmers tend to carry over the structure of the program as the structure for its documentation. But this structure is not necessarily good for explaining how to use the program; it may be irrelevant and confusing for a user.
Instead, the right way to structure documentation is according to the concepts and questions that a user will have in mind when reading it. This principle applies at every level, from the lowest (ordering sentences in a paragraph) to the highest (ordering of chapter topics within the manual). Sometimes this structure of ideas matches the structure of the implementation of the software being documented--but often they are different. An important part of learning to write good documentation is to learn to notice when you have unthinkingly structured the documentation like the implementation, stop yourself, and look for better alternatives.
[…]
In general, a GNU manual should serve both as tutorial and reference. It should be set up for convenient access to each topic through Info, and for reading straight through (appendixes aside). A GNU manual should give a good introduction to a beginner reading through from the start, and should also provide all the details that hackers want. […]
That is not as hard as it first sounds. Arrange each chapter as a logical breakdown of its topic, but order the sections, and write their text, so that reading the chapter straight through makes sense. Do likewise when structuring the book into chapters, and when structuring a section into paragraphs. The watchword is, at each point, address the most fundamental and important issue raised by the preceding text.
This advice makes most sense for tools that have a clear linear flow from beginner to expert. As soon as something is complex enough to have multiple different personas or desired outcomes for the novice user it's considerably harder to structure docs pedagogically.
The extreme case of this is something like Nix, which is notorious for terrible docs, and I think that's in large part because even the basic "install my first package" could involve profiles, environments, flakes, whatever; there's like five ways to do everything and which one you want depends a lot what your "real" eventual goal is.
> Programmers tend to carry over the structure of the program as the structure for its documentation[…] the right way to structure documentation is according to the concepts and questions that a user will have in mind when reading it.
That's also the right way to structure the program. Deficient compilers and bad advice have done serious damage to source code comprehensibility.