Sort of. The problem is that people get confused by the same statistical methodology they use in other situations when the quantity being estimated is itself a percentage.
For example, if you are estimating the height of a male in the US, you would collect data on US males and compute the average. But unless you surveyed every male in the US, there is some error associated with your estimate. So you would construct either error bounds (a frequentist approach) or a probability distribution (a Bayesian approach) around the mean height. Your results might suggest that the mean height of the American male is 5’11, plus or minus 2 inches. Those two inches represent the uncertainty introduced by sampling. The exact same thing is done here, just with a percentage instead of a height: outlets may put Hillary’s chance of winning at 95%, but their methodology should also produce a plus-or-minus value around that figure. The problem is that few of them actually report it.
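A minimal sketch of the frequentist version of this, using a made-up sample drawn from an assumed distribution (the numbers here are purely illustrative, not real survey data):

```python
import random
import statistics

# Hypothetical sample: heights (inches) of 100 US males, drawn from
# an assumed normal distribution purely for illustration.
random.seed(0)
sample = [random.gauss(70, 3) for _ in range(100)]

n = len(sample)
mean = statistics.mean(sample)

# Standard error of the mean: sample std dev / sqrt(n)
se = statistics.stdev(sample) / n ** 0.5

# Approximate 95% confidence interval for the *mean* height
lower, upper = mean - 1.96 * se, mean + 1.96 * se
print(f"mean = {mean:.1f} in, 95% CI = ({lower:.1f}, {upper:.1f})")
```

The same machinery applies when the estimated quantity is a win probability: the point estimate gets an interval around it, whether or not the outlet reports one.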
But it gets more confusing. That error bound is only around the mean. Pick a random guy and not only will he likely not be 5’11, there is a decent chance he will fall outside that 5’9 - 6’1 range; you will see 5’7 guys and 6’4 guys fairly commonly. In the case of the election, it may actually be true that Hillary’s chance of winning was somewhere between, say, 93% and 97%. But even if that is the case, she will still lose between 3% and 7% of the time. And since we only have one reality to observe, we can’t know whether she lost simply because that 3-7% happened to be realized, or because the people who came up with that number screwed up. That’s why groups like 538 deserve more leeway. When they say Donald Trump has a 30% chance of winning and he wins, that’s not that crazy, so there is much less reason to assume they screwed something up than there is for the people who gave Trump a 5% chance of winning. It’s possible those models were right too, but it’s much less likely.
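The point that even a strong favorite loses some fraction of the time can be checked directly by simulation. A short sketch, assuming a hypothetical true win probability of 95%:

```python
import random

random.seed(1)

# If a candidate's true win probability really is 95%, how often does
# she lose across many simulated "elections"?
p_win = 0.95
trials = 100_000
losses = sum(random.random() >= p_win for _ in range(trials))

print(f"lost {losses / trials:.1%} of simulated elections")
# A loss in the single election we actually observe is entirely
# consistent with the 95% forecast having been correct all along.
```

The catch the comment describes is that reality gives us exactly one trial, so a single loss cannot distinguish a correct 95% model from a broken one.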