by Hannes Knüppel, Courtney Metz, Joerg J. Meerpohl, and Daniel Strech.
The authors looked at 123 psychiatry journals, plus a subset of "Top Ten" journals (which were unspecified), to determine how frequently and how strongly these journals encourage or require submitting authors to follow reporting guidelines and to register their trials with registries such as ClinicalTrials.gov and the WHO's International Clinical Trials Registry Platform (ICTRP).
Actually, according to its site, the ICTRP is not itself a clinical trials registry; to be registered, trials must be submitted to one of the Primary Registries in the WHO Registry Network.
The ICTRP site had a nice description of why it's important to register clinical trials:
The registration of all interventional trials is considered to be a scientific, ethical and moral responsibility because:
- There is a need to ensure that decisions about health care are informed by all of the available evidence
- It is difficult to make informed decisions if publication bias and selective reporting are present
- The Declaration of Helsinki states that "Every clinical trial must be registered in a publicly accessible database before recruitment of the first subject".
- Improving awareness of similar or identical trials will make it possible for researchers and funding agencies to avoid unnecessary duplication
- Describing clinical trials in progress can make it easier to identify gaps in clinical trials research
- Making researchers and potential participants aware of recruiting trials may facilitate recruitment
- Enabling researchers and health care practitioners to identify trials in which they may have an interest could result in more effective collaboration among researchers. The type of collaboration may include prospective meta-analysis
- Registries checking data as part of the registration process may lead to improvements in the quality of clinical trials by making it possible to identify potential problems (such as problematic randomization methods) early in the research process
With respect to trial registration, 66% of the journals (30% of the top ten) did not mention it; 9% (0%) mentioned it with a recommendation to adhere; and 25% (70%) mentioned it with a requirement to adhere. Furthermore, 70% of the top ten journals did not ask for the specific trial registration number.
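As a rough back-of-the-envelope check (my own, not from the paper), assuming those percentages refer to all 123 journals surveyed, the approximate counts work out as follows:

```python
# Convert the reported percentages (all 123 journals) into rough counts.
# These are my estimates from the stated percentages, not figures from the paper.
total = 123
categories = [("no mention", 66), ("recommended", 9), ("required", 25)]

for label, pct in categories:
    print(f"{label}: ~{round(total * pct / 100)} journals")
```

Conveniently, the three rounded counts (81, 11, and 31) sum to exactly 123, so the reported percentages are at least internally consistent.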
The article did not discuss how well the journals that required the use of guidelines and trial registration followed up on compliance.
What I found most interesting, though, were the reporting guidelines themselves, which I found at the Equator Network. For example, CONSORT (CONsolidated Standards Of Reporting Trials), the guideline for randomized trials, includes the following Results section in its checklist:
| Section/Topic | Item No | Checklist item |
|---|---|---|
| Participant flow (a diagram is strongly recommended) | 13a | For each group, the numbers of participants who were randomly assigned, received intended treatment, and were analysed for the primary outcome |
| | 13b | For each group, losses and exclusions after randomisation, together with reasons |
| Recruitment | 14a | Dates defining the periods of recruitment and follow-up |
| | 14b | Why the trial ended or was stopped |
| Baseline data | 15 | A table showing baseline demographic and clinical characteristics for each group |
| Numbers analysed | 16 | For each group, number of participants (denominator) included in each analysis and whether the analysis was by original assigned groups |
| Outcomes and estimation | 17a | For each primary and secondary outcome, results for each group, and the estimated effect size and its precision (such as 95% confidence interval) |
| | 17b | For binary outcomes, presentation of both absolute and relative effect sizes is recommended |
| Ancillary analyses | 18 | Results of any other analyses performed, including subgroup analyses and adjusted analyses, distinguishing pre-specified from exploratory |
| Harms | 19 | All important harms or unintended effects in each group (for specific guidance see CONSORT for harms) |
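Because the checklist is just a numbered list of items, it's easy to imagine a journal encoding it for automated compliance tracking. Here's a hypothetical sketch (my own illustration, not anything from the paper or from CONSORT itself) with abbreviated item descriptions:

```python
# Hypothetical sketch: the CONSORT Results checklist items, keyed by item
# number, with abbreviated descriptions. A journal could check a submitted
# manuscript's reported items against this list.
CONSORT_RESULTS = {
    "13a": "Numbers randomly assigned, received treatment, analysed",
    "13b": "Losses and exclusions after randomisation, with reasons",
    "14a": "Dates defining recruitment and follow-up periods",
    "14b": "Why the trial ended or was stopped",
    "15":  "Baseline demographic and clinical characteristics table",
    "16":  "Numbers analysed per group, by original assigned groups",
    "17a": "Results per group with effect size and precision",
    "17b": "Absolute and relative effect sizes for binary outcomes",
    "18":  "Ancillary analyses, pre-specified vs exploratory",
    "19":  "All important harms or unintended effects per group",
}

def missing_items(reported: set) -> list:
    """Return the checklist item numbers a manuscript did not report."""
    return sorted(k for k in CONSORT_RESULTS if k not in reported)

# Example: a manuscript that skipped the stopping rationale and harms
print(missing_items({"13a", "13b", "14a", "15", "16", "17a", "17b", "18"}))
# → ['14b', '19']
```

This kind of check is exactly the sort of follow-up on compliance that the article says the journals weren't shown to do.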
STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) is the guideline for observational studies, PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) for meta-analyses and systematic reviews, etc.
So different types of studies have their own, apparently clear and explicit, reporting guidelines.
Not much more to say about this right now, except that I like that the authors looked at a less obvious source of bias (as opposed to greedy pharmaceutical companies; not that they aren't one).