It doesn't know that because it wasn't trained on any tasks that required it to develop that understanding. There's no fundamental reason an LLM couldn't learn "what it knows" in parallel with the things it knows, given a suitable reward function during training.
That's not true for all types of questions, though. You've likely seen a model decline to answer a question about events after its training cutoff, for example.