The study pre-print is published and some of the numbers are publicly available, so we don't need to play a game of revelations between one person and another, or drag Twitter users into the mix. (I didn't even realize this was being criticized on Twitter, as I don't really use the service.) Gelman's critique is quite substantive, and commenters on his post have built Bayesian analyses that incorporate the uncertainty from the test's sensitivity and specificity.
When I built one in PyMC3 (which lined up with a commenter's approach in PyStan), the 97% interval I got for prevalence from the non-poststratified data was (-0.3%, 1.7%). What does that mean? It means the test just isn't precise enough for us to draw any conclusion, not that the null hypothesis is correct or that we can reject it.
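For anyone who wants to reproduce the flavor of this without installing PyMC3, here's a rough pure-Python Monte Carlo sketch of the same idea: put Beta posteriors on the raw positive rate, sensitivity, and specificity, then solve the misclassification identity for prevalence. The validation counts below are illustrative assumptions for the sketch, not the study's exact data.

```python
import random

def prevalence_interval(pos, n, spec_hits, spec_n, sens_hits, sens_n,
                        draws=20000, seed=42):
    """Monte Carlo over the misclassification identity
        p_obs = prev * sens + (1 - prev) * (1 - spec),
    solved for prev = (p_obs - (1 - spec)) / (sens + spec - 1)."""
    rng = random.Random(seed)
    prevs = []
    for _ in range(draws):
        # Beta(k+1, n-k+1) posteriors under uniform priors on each rate
        p_obs = rng.betavariate(pos + 1, n - pos + 1)
        spec = rng.betavariate(spec_hits + 1, spec_n - spec_hits + 1)
        sens = rng.betavariate(sens_hits + 1, sens_n - sens_hits + 1)
        prevs.append((p_obs - (1 - spec)) / (sens + spec - 1))
    prevs.sort()
    # central 97% interval plus the posterior mean
    return prevs[int(0.015 * draws)], prevs[int(0.985 * draws)], sum(prevs) / draws

# Illustrative counts (assumed, not the study's exact validation data):
# 50/3330 raw positives; 399/401 specificity controls; 103/122 sensitivity controls
lo, hi, mean = prevalence_interval(50, 3330, 399, 401, 103, 122)
```

With counts in this ballpark, the interval straddles values near zero and stretches past 1.5%, which is exactly why the point estimate alone is so fragile: the false-positive rate's uncertainty is the same order as the raw positive rate.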
There's nothing wrong with performing the study. Indeed, publishing it lets us have these vigorous debates about methods and informs future trials, so they can be more exact and avoid the problems of previous studies. But extrapolating a conclusion for something as important as COVID from studies with extremely high uncertainty is highly irresponsible. Sometimes we have to accept that reaching statistically significant conclusions is difficult.
"When I built one in PyMC3 (which lined up with a commenter's approach in PyStan), the 97% interval I got for prevalence from the non-poststratified data was (-0.3%, 1.7%). What does that mean? It means the test just isn't precise enough for us to draw any conclusion, not that the null hypothesis is correct or that we can reject it."
Yeah, that doesn't sound substantially different from Gelman's frequentist intuition in the blog post. I'm not sure the more complex methods are adding much here, except that you can now examine the posterior and see what portion of the density lies below zero (i.e., probably not much of it).
IMO the "CI includes zero" argument was weak when Gelman advanced it, because even though zero is technically inside the interval, the assay error rates made clear it sits in the tails of the distribution: the interval may include zero, but very little probability mass actually lies there. So at the end of the day, as you have demonstrated, you get a non-post-stratified posterior that encompasses the point estimate they gave (1.5%), but your interval is different, and perhaps the mean is lower.
Now you're just left with debating the validity of the bias adjustments they made.
That said, it's wrong to frame this in terms of "rejecting the null hypothesis". There's no hypothesis being tested in an observational study like this.
> So at the end of the day, as you have demonstrated, you get a non-post-stratified posterior that encompasses the point estimate they gave (1.5%), but your confidence interval is different, and perhaps the mean is lower.
You cannot use confidence intervals to argue the validity of a point estimate inside of the CI. With frequentist methods, we usually have some sort of control group, so we can use a paired test to compare sample means and reject a hypothesis.
I wanted to use Bayesian methods not because they're more complex, but because I felt that, when a control group isn't available, a Bayesian analysis would be much more explicit about surfacing the uncertainty. Bayesian methods also let us actually simulate P(prevalence | data). And no, just because 1.5% falls inside the posterior interval for prevalence does not mean you can call 1.5% a valid estimate. What the CI shows is that, with 97% confidence, the prevalence is somewhere between -0.3% and 1.7%. Additionally, the mean of this posterior came out to 0.8% prevalence, which to me is as good as saying the result is inconclusive. In fact, the median of P(prevalence | data) also comes out very close to 0.8%, so this test is basically showing that the prevalence in this population is negligible.
"You cannot use confidence intervals to argue the validity of a point estimate inside of the CI."
You're using a Bayesian method, so you have a posterior distribution. You can sample from it.
"And no, just because 1.5% falls inside the posterior interval for prevalence does not mean you can call 1.5% a valid estimate."
You told me that was the confidence interval on the parameter. The confidence interval contains the point estimate for the original study. It's as valid as any other point within the confidence interval. As you say: "you cannot use confidence intervals to argue the validity of a point estimate inside the CI".
"What the CI shows is that, with 97% confidence, the prevalence is somewhere between -0.3% and 1.7%."
> You told me that was the confidence interval on the parameter. The confidence interval contains the point estimate for the original study. It's as valid as any other point within the confidence interval. As you say: "you cannot use confidence intervals to argue the validity of a point estimate inside the CI".
> Which includes 1.5%.
And everything else in the CI. If we're treating this like a CI, then it's like saying a die will land on 1, just because it's equally likely to land on 6.
The actual posterior probability P(prevalence ≥ 1.5% | data) is quite low, at about 3%.
"And everything else in the CI. If we're treating this like a CI, then it's like saying a die will land on 1, just because it's equally likely to land on 6. The actual posterior probability P(prevalence ≥ 1.5% | data) is quite low, at about 3%."
You just said that you can't use a CI to estimate the likelihood of any point within the CI (you actually can, for well-behaved problems, but I digress) when I commented that 0% isn't a likely outcome within the interval.
Literally the same argument. If you want to argue that 1.5% is unlikely, then you have to accept that 0% is unlikely for the same reasons.
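That symmetry is easy to sanity-check with just the numbers quoted upthread (posterior mean 0.8%, 97% interval (-0.3%, 1.7%)). The normal shape here is an assumption — the real posterior is skewed, so the exact tail sizes will differ somewhat — but it shows both 0% and 1.5% sitting in the tails:

```python
from statistics import NormalDist

# Numbers quoted upthread; the normal shape is an assumption (the real
# posterior is skewed, so exact tail masses will differ somewhat).
mean = 0.008                          # posterior mean, 0.8%
half_width = (0.017 - (-0.003)) / 2   # half-width of the 97% interval (-0.3%, 1.7%)
z_985 = NormalDist().inv_cdf(0.985)   # z-score for a central 97% interval (~2.17)
sigma = half_width / z_985

post = NormalDist(mean, sigma)
p_below_zero = post.cdf(0.0)          # posterior mass at or below 0%
p_above_15 = 1 - post.cdf(0.015)      # posterior mass at or above 1.5%
```

Both tails come out in the low single digits of percent: under this posterior, neither 0% nor 1.5% is a likely value, and you can't single out one of them as implausible without conceding the other.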