I highly recommend watching leading "anti-vaxxer" Del Bigtree debate Neil DeGrasse Tyson. Decide for yourself who seems more credible. https://thehighwire.com
Del Bigtree is a television producer
Neil is trained in mathematics and physics
Would you ask a physicist or a TV producer for legal advice?
How about a dental procedure?
What about tax and financial strategies?
How about the best way to treat (insert serious disease here)?
Hopefully the point is made.
While I appreciate that preterm birth is very serious, I think we need to take a crash course in statistics.
The difference is not statistically significant. What does that mean? Well, if you don't know and haven't run the statistical analysis (t-test, Fisher's exact test, etc.), you really shouldn't be drawing conclusions that would be invalidated by such tests.
Anyway, I ran some tests, and the difference (p-value of 0.16) means you can't tell that the difference is due to the variable (vaccination) with any sort of certainty.
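The quickest version of "running the stats" here is a two-proportion z-test. A sketch in Python, using assumed counts reconstructed from the roughly n = 3500 per group and 4.4% vs 5.0% incidences quoted later in this thread, not the trial's actual tallies:

```python
import math

# Assumed, not actual, trial numbers: ~3500 per arm, 5.0% vs 4.4% preterm
n = 3500
x_vax = round(0.050 * n)   # 175 events in the vaccine arm (assumed)
x_plc = round(0.044 * n)   # 154 events in the placebo arm (assumed)

p1, p2 = x_vax / n, x_plc / n
pooled = (x_vax + x_plc) / (2 * n)
se = math.sqrt(pooled * (1 - pooled) * (2 / n))   # pooled standard error
z = (p1 - p2) / se
# Two-sided p-value from the standard normal tail
p_value = math.erfc(abs(z) / math.sqrt(2))
print(f"z = {z:.2f}, two-sided p = {p_value:.2f}")
```

With these assumed counts the p-value lands around 0.24; the 0.16 quoted presumably comes from the trial's exact counts, but either way it is well above the conventional 0.05.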
I just saw a Neil deGrasse Tyson talk, and his musings on the failure of most people to understand data analysis, statistics, and probability were hilarious. I suggest you check it out in Starry Messenger: Cosmic Perspectives on Civilization.
Do you think it's worth investigating further to see if it's a real biological effect or not?
Basic stats question: what's the chance of a real difference existing if p=0.16? Answer: there is an eighty-four percent chance that there is a difference, that this vaccine is less safe.
We commonly define an alpha of 0.05 as "significant", but that's really just giving ourselves a 95% chance to not make the mistake of calling a random result a real effect.
In the specific case here, we should seriously concern ourselves with the "beta" value, or 1-power. The beta value of a study with about n=3500 per group and an actual difference of 0.6%, comparing a 4.4% incidence and a 5.0% incidence, is about 75%.
So this study with that many participants would fail to see a real difference of 0.6% three out of four times. That's certainly not a study powered well enough to conclude that a failure to reject the null hypothesis means much.
However, we can still rely on the p-value to suggest our best estimate. Our best estimate is that there is an 84% chance there is some worsening with this vaccine (by your calculation; though note that covers any size of difference, so the distribution of possibilities runs from just the other side of zero to nearly twice as bad as what was measured, something akin to a 95% CL of +0.1% to -1.1%). In the context Hollander cites, with a highly effective and safe alternative, that would still sway many people.
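The beta figure quoted above can be checked with a quick normal-approximation power calculation, using the same assumed numbers (n = 3500 per arm, true rates 5.0% vs 4.4%):

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the complementary error function
    return 0.5 * math.erfc(-x / math.sqrt(2))

# Assumptions: n = 3500 per arm, true preterm rates of 5.0% vs 4.4%
n, p1, p2 = 3500, 0.050, 0.044
z_alpha = 1.96  # critical value for a two-sided alpha of 0.05
se = math.sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)
z_effect = abs(p1 - p2) / se
# Power: probability the observed z clears the critical value in either tail
power = norm_cdf(z_effect - z_alpha) + norm_cdf(-z_effect - z_alpha)
beta = 1 - power
print(f"power = {power:.2f}, beta = {beta:.2f}")
```

This approximation gives a beta around 0.78, the same ballpark as the ~75% quoted; an exact binomial calculation would shift it slightly.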
A p-value of 0.16 does NOT mean there's an 84% chance the vaccine is causing the effect. A very common misconception is that the p-value is the percent chance that the effect is real. It is not!!
This is why it's very important that people writing articles like this can properly interpret the statistical analysis. Most disconcerting is that the author makes zero mention of statistical analysis when the entire argument is based on a difference being "real".
The p value, or probability value, tells you how likely it is that your data could have occurred under the null hypothesis.
The null hypothesis is that there is no relationship between your variables of interest. The alternative hypothesis in this case is that the vaccine causes the preterm births.
Statistical tests do not tell you whether the alternative hypothesis is correct; they tell you the chance your results would occur through random chance.
Therefore, what a p-value of 0.16 says is that you can expect the difference (or a greater difference) you are looking at to happen 16% of the time due to pure random chance. In other words, if you took two groups the size used in this study that were completely equal, this distribution would occur ~1 out of 7 times.
Now here's the kicker! The opposite distribution (i.e. a better preterm outcome) would also be expected to happen 16% of the time. So a p-value like that is really telling you that there's no "there" there.
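That "how often would pure chance produce this gap" reading is easy to sanity-check with a toy simulation; the group size and base rate below are assumptions pieced together from the figures in this thread:

```python
import math
import random

random.seed(1)
n = 3500             # assumed group size per arm
base_rate = 0.047    # assumed common preterm rate under the null
observed_diff = 0.006
trials = 20000

def group_count(n, p):
    # Normal approximation to Binomial(n, p), for speed in pure Python
    return round(random.gauss(n * p, math.sqrt(n * p * (1 - p))))

# Count simulations where two IDENTICAL groups differ by >= 0.6% either way
hits = sum(
    abs(group_count(n, base_rate) - group_count(n, base_rate)) / n >= observed_diff
    for _ in range(trials)
)
print(f"two-sided chance frequency = {hits / trials:.3f}")
```

With these assumed numbers, two identical groups produce a gap of 0.6% or more (in either direction) roughly a quarter of the time; halve that for a gap in one particular direction.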
For all practical purposes, while not exactly meaning that, it's the best estimate we have of the chance that there's a real effect.
Your two-tails suggestion is completely off... in a two-tailed test, a p-value of 0.16 corresponds to 8% in the right tail and 8% in the left tail. An alpha of 0.05 leaves room for 2.5% on each side, not 5%.
If you set an alpha of 0.05, the chance you would see a result that extreme from randomness alone is 1 in 20. I.e., there's a 5% chance that you are not correct, that you are making a type I error. That means there's a 95% chance you are NOT making a type I error. The same applies when the p-value is greater than 0.05.
Here's a good way to prove this to yourself. Go look at a meta-analysis of a small number of studies. Note that the p-value of the meta-analysis is basically the p-values of the individual studies multiplied together (roughly). That is because this principle applies. If you run two studies that each have a p-value of 0.16, the chance that that is random is about 0.026.
Also note that 1 in 7 would be 0.1428, not 0.16, which is better approximated as 1 in 6.
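For reference, the textbook way to combine independent p-values is Fisher's method; a minimal sketch below (note it does not reduce to simply multiplying the p-values):

```python
import math

def fisher_combined_p(p_values):
    """Fisher's method: X^2 = -2 * sum(ln p_i) is chi-squared
    distributed with 2k degrees of freedom under the null."""
    k = len(p_values)
    x2 = -2.0 * sum(math.log(p) for p in p_values)
    # Closed-form chi-squared survival function for even df = 2k:
    # exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= (x2 / 2) / i
        total += term
    return math.exp(-x2 / 2) * total

print(fisher_combined_p([0.16, 0.16]))
```

For two studies that each report p = 0.16, the raw product is 0.0256, but Fisher's method gives a combined p of about 0.12, so simple multiplication overstates the combined evidence.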
You are completely misunderstanding the p-value. I don't have time to explain it.
Go google "why p values are not measures of evidence"
But in short, it doesn't tell you if you are correct or not; it concerns whether the null hypothesis is correct. Also:
For a study like this, p-values of 0.98, 0.5, or 0.16 for all practical purposes tell you the same thing..... there's no statistical effect. (No, sorry, p = 0.5 is not "better" than 0.16.) There is no "best estimate" from this data.
The author doesn't have the objective facts to back up their supposition
If you are such an expert why don't you go and run the stats on this study?
Or better, author of this article: go run the stats and explain why the article isn't statistically wrong.
If that were true, the only possible alpha to use would be 0.05. But 0.05 is simply a convention. I can use whatever p value threshold I want to, and 0.01 and 0.10 are not uncommon. I often use 0.10 or even up to 0.20 when the experiment is one where the bias would be towards including the "hit" in additional studies.
And guess what, empirically I can also tell you that results where p=0.10 end up holding up after further experimentation and hitting p<0.05 about 90% of the time.
Go figure .... no, seriously, go figure out what those numbers mean. (I've told many a grad student this.) What does 0.05 actually mean as far as reproducibility, probability, and confidence?
Challenge yourself and go understand what p=0.05 means. I'm willing to bet it isn't what you think it is.
Getting back to this article: I'll point out the author did absolutely zero analysis or discussion of p-values, statistical significance, or quantitative analysis of the statistical meaning of the data.
In other words, pure speculation without objective analysis (aka garbage).
Thank you for this thorough analysis. I have a few questions: RSV lands children in the hospital, but what is the infection fatality rate, and does it leave serious after-effects? In other words, can we quantify the real benefit of this vaccine?
In terms of risk, all vaccines are meant to modify the immune system for a fairly long period of time. So shouldn't the risk of this shot be rigorously studied, especially since it is being given to pregnant women? (Who used to be told not even to have one glass of wine a week!) Can we rule out that the vaccine could affect fetal development?
I would think that a very large, long-term study should be done comparing vaccinated children to unvaccinated children in terms of overall health, including ADHD, autism diagnoses, etc. (Of course, this type of long-term study of vaccinated versus unvaccinated children is not done for any vaccine.)
You mention that preterm births were higher in the vaccinated group! That is a serious signal, as you say, with potentially long-term consequences.
Sadly, vaccines are big business, and Pharma is very good at selling its products. The industry has captured the regulators, and the fox is guarding the henhouse.