I'm gonna show you some photos from Thailand, anyway:
|6 foot shadow puppet|
|Detail from Wall Mural, Temple of the Emerald Buddha|
I'm not gonna get into the part where I was sick, except to convey the one important lesson I learned, although it was too late to be of use to me: there are home IV services. You can google them, if you like. They're mainly for hangovers, and they come to your home and hang some lactated Ringer's, but they also cover the occasional flu and GI bug. Good to know.
Now for some real content.
You may recall that a few months ago, I went down a rabbit hole about statistics and flu vaccination. I looked at an article by Talbot et al., which the CDC is using to support its recommendation for universal influenza vaccination. First I finagled around for a while trying to figure out how the article obtained its statistics. When I had finally done that, I considered the conclusions drawn from those statistics, and decided they were erroneous, meaning that even if the statistics are accurate, they don't prove what the authors claim they prove, namely, that there is a 71% reduction in flu-related hospitalizations in patients who have been vaccinated against flu, vs. those who haven't. Instead, I believe what they demonstrate is that flu vaccination resulted in a 71% reduction in flu INFECTION, in their study population, which consisted exclusively of patients who were already hospitalized for something that resembled flu. These are very different conclusions, and if you check my post, you can see where I demonstrated that, with the study's data, it's possible flu vaccination reduced flu-related hospitalization, but it's also possible it increased it. They don't have the right data to know.
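To make the distinction concrete, here is a small sketch of the kind of odds-ratio arithmetic that produces a "71% reduction" figure in a test-negative design. The counts below are hypothetical, chosen only so the arithmetic lands near the study's headline number; they are not Talbot et al.'s actual data. The point is that the odds ratio compares vaccination odds between flu-positive and flu-negative patients who were all already hospitalized, so it speaks to infection among the hospitalized, not to the absolute rate of hospitalization itself.

```python
# Hypothetical 2x2 table from a test-negative design: every patient here is
# already hospitalized with a flu-like illness; the only split is whether
# their RT-PCR test was flu-positive and whether they were vaccinated.

def vaccine_effectiveness(vacc_pos, unvacc_pos, vacc_neg, unvacc_neg):
    """VE = 1 - OR, where OR is the odds ratio of vaccination among
    flu-positive vs. flu-negative hospitalized patients."""
    odds_ratio = (vacc_pos * unvacc_neg) / (unvacc_pos * vacc_neg)
    return 1 - odds_ratio

# Made-up counts picked so the result comes out near 71%:
ve = vaccine_effectiveness(vacc_pos=6, unvacc_pos=24,
                           vacc_neg=50, unvacc_neg=58)
print(f"Estimated VE against infection: {ve:.0%}")  # prints 71%
```

Notice that nothing in this calculation counts people who stayed home, so it cannot, by itself, say whether vaccination changed anyone's chance of ending up in the hospital.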
Since I'm not a statistician, I asked a few people who I thought knew more statistics than I did whether my conclusion was correct, but I couldn't seem to get a clear answer. I even wrote to the Cochrane Review about it, then promptly forgot I had done so.
Well, they responded, and this is an excerpt (most) of their reply:
Our opinion on the Talbot, et al observational study is as follows:
The public health significance of the study is limited for multiple reasons, including the inability to estimate absolute treatment effects and number needed to vaccinate. Without this vital information it is difficult to determine the value of the intervention for public health use. Furthermore, there is no information on harms reported; therefore we have no idea of the balance between positive and negative outcomes. It is well known that observational studies tend to be associated with higher risk of bias compared to randomised clinical trials. Bias introduced by the design usually has the effect of exaggerating the effects of the intervention, in this case influenza vaccines. This, coupled with the absence of any mention of harms, is well within the narrative findings of our reviews.
The assumptions required for validity of the case-positive, control-negative study design are not stated in the paper and no information is provided on whether they have been met. For example, a critical assumption is that the risk of non-influenza ILI needs to be the same in vaccinated and non-vaccinated individuals [1]. This may not be the case as shown by Cowling et al, who reported an increased risk of non-influenza respiratory virus infections associated with receipt of inactivated influenza vaccine [2]. Stratification by disease severity (apparently not measured in this study) is needed because this is likely to be associated with a person’s probability to seek medical care as well as vaccination status [3]. If the vaccine modifies the conditional probability of developing symptoms after infection with influenza then the odds ratio may be biased [3].
The case-positive, control-negative study design is not recommended for study of patients hospitalised for influenza-related illnesses because hospitalisations may occur due to complications that become manifest after the virus is no longer detectable [3]. It is notable that only 10% had positive RT-PCR tests; surprisingly low given the participants were selected and recruited during the influenza season.
Further weaknesses of the study include selection bias as only 169 of 413 eligible participants were included in the analysis. Only 17 had a positive RT-PCR test making estimates from statistical modelling unreliable.
Given these multiple methodological concerns we consider the study to be at high risk of bias; providing very low quality evidence.
[1] Broome, C., Facklam, R., Fraser, D. Pneumococcal Disease after Pneumococcal Vaccination — An Alternative Method to Estimate the Efficacy of Pneumococcal Vaccine. N Engl J Med 1980; 303: 549-552.
[2] Cowling, B., Fang, V., Nishiura, H., et al. Increased Risk of Noninfluenza Respiratory Virus Infections Associated With Receipt of Inactivated Influenza Vaccine. Clin Infect Dis. 2012 Jun 15; 54(12): 1778-1783.
[3] Foppa, I., Haber, M., Ferdinands, J., Shay, D. The case test-negative design for studies of the effectiveness of influenza vaccine. Vaccine 2013; 31: 3104-3109.
Cochrane replied to my inquiry! I feel so important! Sure, they didn't really answer my vanity question of, "Did I catch the CDC in a big statistical boo boo?" But that's probably for the best. What they do seem to be saying is that the study is poor enough that it's not even worth considering the validity of its conclusions, which still supports my contention that this is a terrible study to use as part of the recommendation for universal flu vaccination.
The Cochrane people also suggested that I post our exchange in PubMed Commons as comments to the Talbot et al study, which I told them I would like to do, but haven't gotten around to yet.
That's my bit for today. I will try to be less neglectful of the blog, going forward.