
What came to my mind when I first read the paradox was that the class of ravens is closed, whereas the class of non-ravens is open. We have a definition of 'raven' that is independent of its colour; otherwise we wouldn't be able to tell whether a particular thing is a raven, and thus whether its colour is evidence for anything related to ravens at all. However, we don't have a definition covering all non-raven things, so the number of non-ravens is not merely large, as the article suggests; it is effectively infinite, since counting requires a definition to tell one thing from another. Drawing conclusions by induction from one of an infinite number of instances seems meaningless to me.
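To put rough numbers on that point, here is a toy model in the spirit of I. J. Good's Bayesian treatment of the paradox. The counts are invented for illustration, as is the assumption that we draw one object uniformly at random from a fixed, finite population:

    # Toy model of the raven paradox. All counts are invented for illustration.
    N_RAVENS = 10_000        # R: ravens in the population
    N_NONBLACK = 10**12      # W: non-black objects (apples, shoes, ...)

    # H1: all ravens are black.
    # H2: exactly one raven is non-black (total non-black count held fixed).
    # Likelihood ratios P(observation | H1) / P(observation | H2) for one
    # object drawn uniformly at random:
    lr_black_raven = N_RAVENS / (N_RAVENS - 1)            # R / (R - 1)
    lr_nonblack_nonraven = N_NONBLACK / (N_NONBLACK - 1)  # W / (W - 1)

    print(f"black raven:         LR = {lr_black_raven:.12f}")
    print(f"non-black non-raven: LR = {lr_nonblack_nonraven:.12f}")
    # Both observations technically support H1, but the second is weaker by
    # a factor of roughly W / R -- here about 10^8 -- and its evidential
    # weight vanishes as the class of non-ravens grows without bound.

Even in this model the non-black non-raven only counts as evidence because the population is finite and fixed; with an open, unbounded class of non-ravens the likelihood ratio collapses to 1.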

But there is another interesting question that arises from the Bayes explanation at the bottom of the article. If we had all things in the universe available and sufficiently defined to tell what is one thing and what is another, and we found that there are indeed no non-black ravens, that would still leave open the question of whether there could possibly be non-black ravens in the future. One could be born at this very moment, so there would be a race condition between making the statement and the statement being falsified by the birth of a non-black raven. And I think that's one of the problematic things about the purely extensional logic advocated by philosophers like Quine.



"If we had all things in the universe available and sufficiently defined to tell what is one thing and what is another, and we found that there are indeed no non-black ravens, that would still leave open the question of whether there could possibly be non-black ravens in the future."

That's not a bug in Bayesian inference, that's a feature! It would be a mistake ever to assign a probability of 0.0 or 1.0 to any statement: that would mean you have infinite confidence in it, which is impossible unless you have a screwy prior distribution.

You also don't have to evaluate all of the data before giving a probability. You can just report your degree of confidence in the proposition based on the data you've evaluated so far, and that degree of confidence is called a probability.
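A minimal sketch of what that looks like, assuming a beta-Bernoulli model with a uniform prior (my choice of setup, nothing forced by the argument): the reported confidence that the next raven is black climbs with every black raven seen, but never reaches 1.0.

    # Beta-Bernoulli sketch: confidence approaches 1.0 but never gets there.
    alpha, beta = 1.0, 1.0        # Beta(1, 1): uniform prior over theta,
                                  # the chance a random raven is black
    black_seen = 1_000_000        # every raven observed so far was black

    alpha += black_seen           # conjugate update for 10^6 black ravens

    # Posterior mean = P(next raven is black): Laplace's rule of succession.
    p_next_black = alpha / (alpha + beta)
    print(p_next_black)           # 0.999999000...  close to 1.0, never 1.0

Only a prior that assigned zero probability to non-black ravens in the first place could ever report 1.0, which is exactly the 'screwy prior' case above.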


I know the whole point of Bayesian statistics is that you can get a good prediction from much less data. But the edge case I was talking about is still interesting, I think, even if not as a criticism of Bayes.


Doesn't the statement limit itself to the present? "All ravens are black."


Yes, absolutely, but logic aspires to be useful in the world. It therefore matters whether all ravens are black because all the white ones were culled a minute ago, because the word 'raven' is defined as a black bird with some additional features, or because the genetic pattern of ravens causes them always to be black. Each of those possibilities would have very different consequences for inductive reasoning.



