I think the size issue is a red herring. UTF-8 wins some, UTF-16 wins others, but either encoding is acceptable. There is no clear winner here, so we should look at other properties.
UTF-8 is more reliable, because mishandling variable-length characters is more obvious. In UTF-16 it's easy to write something that works with the BMP and call it good enough. Even worse, you may not even know it fails above the BMP, because those characters are so rare you might never test with them. But in UTF-8, if you screw up multi-byte characters, any non-ASCII character will trigger the bug, and you will fix your code more quickly.
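To make this concrete, here's a toy sketch (my own example, nothing standard) of the two failure modes: a "one code unit = one character" routine for each encoding. The UTF-16 version looks fine until it meets an astral character; the UTF-8 version breaks on any non-ASCII input, so the bug surfaces immediately.

```python
def naive_utf16_char_count(s: str) -> int:
    # BMP-only assumption: one 16-bit code unit == one character.
    return len(s.encode("utf-16-le")) // 2

def naive_utf8_char_count(data: bytes) -> int:
    # Analogous ASCII-only assumption for UTF-8: one byte == one character.
    return len(data)

bmp = "héllo"      # all code points in the BMP
astral = "hi 😀"    # U+1F600 needs a surrogate pair in UTF-16

print(naive_utf16_char_count(bmp))     # 5  -- looks correct
print(naive_utf16_char_count(astral))  # 5  -- wrong, should be 4
print(naive_utf8_char_count(bmp.encode("utf-8")))  # 6 -- already wrong: 'é' is 2 bytes
```

The UTF-16 bug hides until someone feeds it an emoji or a rare CJK character; the UTF-8 bug shows up the first time anyone types an accented letter.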
Also, UTF-8 does not suffer from endianness issues like UTF-16 does. Few people use the BOM and no one likes it. And most importantly, UTF-8 is compatible with ASCII.
I know that both encodings are variable-length. That is the issue I am trying to address.
My point is that in UTF-16 it's too easy to ignore surrogate pairs. Lots of UTF-16 software fails to handle variable-length characters because they are so rare. But in UTF-8 you can't ignore multi-byte characters without obvious bugs. These bugs are noticed and fixed more quickly than UTF-16 surrogate pair bugs. This makes UTF-8 more reliable.
I am not sure why you think I am advocating UTF-16. I said almost nothing good about it.
Bugs in UTF-8 handling of multibyte sequences need not be obvious. Google "CAPEC-80."
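For anyone who doesn't want to google it: CAPEC-80 is about overlong ("non-shortest-form") UTF-8, where e.g. '/' (U+002F) is smuggled past a byte-level filter as the two-byte sequence 0xC0 0xAF. A quick Python illustration (a conforming decoder must reject the sequence; the exploit relies on a sloppy one that decodes it after the filter already ran):

```python
# Non-shortest-form encoding of U+002F '/': forbidden by the UTF-8 spec.
overlong_slash = b"\xc0\xaf"

# A byte-level filter scanning for '/' sees nothing suspicious:
assert b"/" not in overlong_slash

# A strict decoder (like Python's) refuses the sequence outright:
try:
    overlong_slash.decode("utf-8")
except UnicodeDecodeError:
    print("rejected")  # the conforming behavior
```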
UTF-16 has an advantage in that there are fewer failure modes, and fewer ways for a string to be invalid.
edit: As for surrogate pairs, this is an issue, but I think it's overstated. A naïve program may accidentally split a UTF-16 surrogate pair, but that same program is just as liable to accidentally split a decomposed character sequence in UTF-8. You have to deal with those issues regardless of encoding.
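Quick demo of what I mean (my own toy example): naive truncation by code point splits a decomposed character just as readily as truncation by code unit splits a surrogate pair.

```python
# Splitting a decomposed character: 'e' + U+0301 COMBINING ACUTE ACCENT.
decomposed = "cafe\u0301"   # renders as "café", but is 5 code points
print(decomposed[:4])       # "cafe" -- the accent is silently dropped

# Splitting a surrogate pair: truncate UTF-16 bytes mid-pair.
units = "😀!".encode("utf-16-le")  # 4 bytes of surrogate pair + 2 bytes '!'
try:
    units[:2].decode("utf-16-le")
except UnicodeDecodeError:
    print("broken surrogate")
```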
> A naïve program may accidentally split a UTF-16 surrogate pair, but that same program is just as liable to accidentally split a decomposed character sequence in UTF-8. You have to deal with those issues regardless of encoding.
The point is that using UTF-8 makes these issues more obvious. Most programmers these days think to test with non-ASCII characters. Fewer think to test with astral characters.
"Therefore if there are more characters in the range U+0000 to U+007F than there are in the range U+0800 to U+FFFF then UTF-8 is more efficient, while if there are fewer then UTF-16 is more efficient."
That same page also states: "A surprising result is that real-world documents written in languages that use characters only in the high range are still often shorter in UTF-8, due to the extensive use of spaces, digits, newlines, html markup, and embedded English words", but I think the "[citation needed]" tag is rightfully added there (it may be close in many texts, though).
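It's easy to spot-check the effect yourself. A quick Python sketch (the sample strings are arbitrary, not a corpus study, so this only illustrates the mechanism, not the "often shorter" claim):

```python
samples = {
    "ascii":    "hello world",
    "russian":  "привет мир",
    "japanese": "こんにちは",
    "html-ish": "<p>こんにちは</p>",  # same kana, wrapped in ASCII markup
}

for name, text in samples.items():
    u8 = len(text.encode("utf-8"))
    u16 = len(text.encode("utf-16-le"))
    print(f"{name}: utf-8={u8} bytes, utf-16={u16} bytes")
```

Pure kana favors UTF-16 (3 bytes vs 2 per character), but even a little ASCII markup around it swings the total back toward UTF-8, which is the mechanism the quoted sentence is describing.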
UTF-8 is variable length in that it can be anywhere from 1 to 4 bytes, while UTF-16 can be either 2 or 4. That makes a UTF-16 decoder/encoder half as complex as a UTF-8 one.
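The case analysis really is small. A minimal sketch of code-unit-level UTF-16 decoding (my own toy version, no error handling, assuming well-formed input): there are exactly two cases, versus UTF-8's four lead-byte patterns plus continuation bytes.

```python
def utf16_code_points(units):
    """Decode a list of 16-bit code units into code points (two cases)."""
    out, i = [], 0
    while i < len(units):
        u = units[i]
        if 0xD800 <= u <= 0xDBFF:          # case 1: high surrogate -> pair
            lo = units[i + 1]
            out.append(0x10000 + ((u - 0xD800) << 10) + (lo - 0xDC00))
            i += 2
        else:                               # case 2: plain BMP code unit
            out.append(u)
            i += 1
    return out

# 'H' followed by U+1F600 as the pair D83D/DE00:
print([hex(cp) for cp in utf16_code_points([0x0048, 0xD83D, 0xDE00])])
```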
> Even worse, you may not even know it fails above the BMP, because those characters are so rare you might never test with them.
I don't think this is too relevant because anyone who claims to know UTF-16 should know about the surrogates. And if you are handling mostly Asian text (which is where UTF-16 is more likely to be chosen), then those high characters become a lot more common.
UTF-8 has its own unique issues, like non-shortest forms and invalid code units, that you are even less likely to encounter in the wild. Bugs in handling of these have enabled security exploits in the past.