Or we can relax the requirement that “it remains you”, and be content with the idea of cloning the mind.
How do we know that the incomprehensibly weird and advanced minds running our ships and habitats in those unimaginably distant points of space and time won't just decide that virtual smiley faces with our names written on them are close enough to "it remains you"? How do we know they won't one day do a "slightly lossy compression" of the human race?
I left it up to the imagination of the reader. It could be one of our minds after an imperfect cloning, modified by someone for "efficiency." It could be super-optimizing AIs. Tens of thousands of years from now, thousands of light-years distant, who knows what it will be like?
It seems the consensus in this subthread is that gradual replacement involves a potentially unsolvable philosophical problem, unlike cloning. How can you be sure you remain you during gradual replacement, and don't turn into some kind of philosophical zombie?