No, Dean's comment is indeed on the mark. A null hypothesis is comparable to a random number generator, and we are looking for the probability that such a generator would have produced the results we see. Go check out Andrew Gelman's blog; he supports such an interpretation.
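To make the "random number generator" picture concrete, here is a minimal sketch in Python. The group sizes and event counts are made up for illustration, not taken from the study: simulate the null many times and count how often it produces a difference at least as extreme as the one observed.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observed data (NOT the study's numbers): two groups,
# event counts and group sizes chosen purely for illustration.
n_treat, n_ctrl = 100, 100
events_treat, events_ctrl = 8, 15
observed_diff = events_ctrl / n_ctrl - events_treat / n_treat

# The null hypothesis as a "random number generator": both groups share
# the same underlying event rate, here the pooled rate.
pooled_rate = (events_treat + events_ctrl) / (n_treat + n_ctrl)

n_sims = 100_000
sim_treat = rng.binomial(n_treat, pooled_rate, n_sims) / n_treat
sim_ctrl = rng.binomial(n_ctrl, pooled_rate, n_sims) / n_ctrl
sim_diff = sim_ctrl - sim_treat

# One-sided Monte Carlo p-value: how often does the null generator
# produce a difference at least as large as the one we observed?
p_value = np.mean(sim_diff >= observed_diff)
print(f"simulated p-value: {p_value:.3f}")

That fraction is exactly "the probability that such a generator would have produced the results we see," which, as discussed below, is not the same thing as the probability that chance is what actually produced them.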
Can you provide a link to Andrew Gelman's blog article? I'm interested in reading it. I have the same question. "The probability that such a generator would have produced the results we see" => yes, but this is not equal to "the probability that what we see was generated by a random generator". The former is P(Data|Hypothesis), while the latter is P(Hypothesis|Data). The difference is fundamental in mathematics. The P value is by definition the former and cannot be directly translated into the latter without introducing priors over the hypotheses.
I haven't checked out Andrew Gelman's blog, but your phrasing is accurate. However, that's not the same as saying that "there is a 91% chance that the effect was not by chance." By Bayes' theorem we have:
P(it was chance | data) = [P(data | it was chance) * P(it was chance)] / P(data)
= [P(data | it was chance) * P(it was chance)] / [P(data | it was chance) * P(it was chance) + P(data | it wasn't chance) * P(it wasn't chance)]
The P value estimates P(data | it was chance) (more precisely, the probability of results at least as extreme as those observed, given that it was chance). It doesn't tell us anything about the other factors, in particular the priors P(it was chance) and P(it wasn't chance). For this case it comes down to the a priori likelihood that IVM has a positive effect. If that were already considered quite likely, then the results presented here would provide some support. If it were considered incredibly unlikely, then these results would increase the probability somewhat without making it overwhelming.
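To make that concrete, here is a small Python sketch that plugs numbers into the formula above. The 0.09 stands in for P(data | it was chance); the value assumed for P(data | it wasn't chance) and the range of priors are invented purely for illustration.

# Posterior probability that the result was chance, for a range of priors.
# Assumptions (purely illustrative): P(data | chance) = 0.09, and we
# guess P(data | not chance) = 0.50, i.e. a real effect would produce
# data like this about half the time.
p_data_given_chance = 0.09
p_data_given_effect = 0.50

for p_chance in (0.1, 0.5, 0.9, 0.99):
    p_effect = 1.0 - p_chance
    p_data = (p_data_given_chance * p_chance
              + p_data_given_effect * p_effect)
    posterior_chance = p_data_given_chance * p_chance / p_data
    print(f"prior P(chance) = {p_chance:.2f}  ->  "
          f"posterior P(chance | data) = {posterior_chance:.2f}")

With these made-up numbers the posterior probability of "it was chance" runs from roughly 2% to roughly 95% depending on the prior, which is the whole point: P = 0.09 on its own does not pin it down.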
I recommend checking out the examples at https://en.wikipedia.org/wiki/Bayes'_theorem to see the difference. Even apparently strong evidence can run up against incredibly low prior probabilities.
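One of the standard examples there, rewritten as a quick sketch (the prevalence, sensitivity, and false-positive rate below are the usual textbook placeholders, not anything from this study): a test that is right 99% of the time can still be wrong most of the times it fires, if the thing it tests for is rare enough.

# Classic base-rate example: a 99%-accurate test for a rare condition.
# All three numbers are textbook placeholders.
prevalence = 0.001        # P(condition)
sensitivity = 0.99        # P(positive | condition)
false_positive = 0.01     # P(positive | no condition)

p_positive = (sensitivity * prevalence
              + false_positive * (1 - prevalence))
p_condition_given_positive = sensitivity * prevalence / p_positive
print(f"P(condition | positive test) = {p_condition_given_positive:.3f}")
# ~0.09: even "strong" evidence loses to a sufficiently small prior.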
Thanks for all this, I am a grizzled Bayesian myself. Andrew Gelman helped develop Stan, the probabilistic programming language whose Hamiltonian Monte Carlo sampler, as you probably know, opened up whole new vistas of statistical problems that can be solved in a fully Bayesian fashion. His blog is at Columbia and not hard to find.
I think you're actually conflating a couple of things here. One is whether rejection of a null hypothesis is the same thing as accepting the alternative hypothesis. As a matter of interpreting the statistics, it is of course not. But it's really an experimental design question. The whole point of an experiment is to create controlled conditions and restrict the degrees of freedom of the experiment. We do this so that an effect, if it exists, is most credibly attributed to one particular cause. Therefore a rejection of the null must imply that cause. Of course this is never possible to do perfectly, so every paper (and every science news article) should be very clear about the limitations of the experiment.
So I think Dean is right and you're criticizing a straw man essentially. But thanks for the good discussion.
I only meant to challenge two very specific statements:
Dean: "If I understand correctly, "P= .09" means there is a 91% chance that the effect was not by chance."
Igor Chudov: "Your interpretation is 100% on the mark, ..."
That interpretation isn't what P = .09 means, but it's easy to be confused on these points, and it's a common mistake to make. I merely wanted to set the record straight; I wasn't intending to argue more than that or to express an opinion on this particular work.
Exactly how we should react to P = 0.09 in general and this study specifically is a separate, somewhat philosophical issue. Igor makes some good points -- maybe this doesn't conclusively demonstrate IVM effectiveness, but based on these numbers it doesn't rule it out; we'd just need a larger study to (hopefully) find stronger evidence (one way or the other, but I'd prefer IVM to work). On the other hand, I've seen people argue that confounders (such as relative vaccination percentages in the cohorts) taint these numbers, and so the arguments continue.
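On the "larger study" point, here is a rough sketch of the standard two-proportion sample-size calculation (the event rates below are placeholders, not this study's figures), just to give a sense of the numbers involved.

from scipy.stats import norm

# Rough per-group sample size for detecting a difference between two
# proportions. The event rates are hypothetical placeholders.
p_control, p_treated = 0.10, 0.06   # assumed event rates
alpha, power = 0.05, 0.80

z_alpha = norm.ppf(1 - alpha / 2)   # two-sided test
z_beta = norm.ppf(power)

variance = p_control * (1 - p_control) + p_treated * (1 - p_treated)
n_per_group = (z_alpha + z_beta) ** 2 * variance / (p_control - p_treated) ** 2
print(f"~{n_per_group:.0f} participants per group")

With those placeholder rates it comes out to several hundred participants per arm, which is why a small study can leave us stuck at P = 0.09 in the first place.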
Our epidemiology course was deadly boring so I went skiing half the days. I have had to learn to appreciate some of this in order to challenge the drug reps over the last thirty years.
I do remember my favourite pharmacology prof's advice though: "Test your drug on 15 people. You will know if it works."