Authors: Mandy D Müller, MD and Leo H Bonati, MD
Department of Neurology and Stroke Center, University Hospital Basel, Switzerland

Due to limited sample sizes, many studies comparing treatments are not adequately powered. A single study may therefore fail to detect a true treatment difference and end in a false negative result, especially if the difference between the investigated treatments is small. Larger sample sizes are often not economically or logistically feasible. In addition, because of the sheer amount of data published each year, it can be almost impossible to keep track of all relevant studies to answer a specific clinical question. Systematic reviews and, if appropriate, meta-analyses of the identified studies are therefore useful tools to summarize the available evidence from multiple similar studies and allow more accurate conclusions for clinical practice.

However, several caveats need to be kept in mind. To avoid selective inclusion of studies in a systematic review, it is essential to work according to a pre-specified protocol which clearly outlines the research question to be addressed, a detailed and systematic search strategy, the criteria used to select relevant studies, and the planned analysis of the gathered study results. Nevertheless, meta-analyses carry an inherent risk of publication bias: only published studies are included, and negative studies are less likely to be published. Furthermore, it is essential to assess the quality of the included studies in order to avoid bias being introduced by studies of lower methodological quality.

After conducting a systematic review of the available evidence, the results of the identified studies may be summarized and combined in a meta-analysis. However, this will not always be appropriate (e.g. if the studies use different definitions of outcomes), so not all systematic reviews contain a meta-analysis. If combining the results is appropriate, there are two statistical approaches: fixed-effect meta-analysis and random-effects meta-analysis. If it is reasonable to assume that the true underlying treatment effect is the same in all studies and that differences between the study results are due to chance alone, a fixed-effect meta-analysis may be conducted (Figure 1). The fixed-effect approach ignores the possibility of between-study differences; this assumption should be checked by statistical testing for heterogeneity.
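As an illustration, a fixed-effect summary estimate can be computed by inverse-variance weighting: each study's effect estimate (here a log odds ratio) is weighted by the inverse of its squared standard error. The numbers below are hypothetical and not taken from any real trial:

```python
import math

# Hypothetical log odds ratios and their standard errors from three trials
# (illustrative numbers only, not from any real study)
log_ors = [0.20, 0.35, 0.10]
ses = [0.15, 0.25, 0.20]

# Fixed-effect (inverse-variance) pooling: weight each study by 1/SE^2
weights = [1.0 / se**2 for se in ses]
pooled = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# 95% confidence interval on the log odds ratio scale, then back-transformed
lo = pooled - 1.96 * pooled_se
hi = pooled + 1.96 * pooled_se
print(f"Pooled OR: {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")
```

Note that the pooled standard error is smaller than that of any single study, which is the gain in precision that motivates combining studies in the first place.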


Figure 1: Fixed-effect model. The green, vertical line represents the underlying true effect from which the individual study results deviate by chance only. (Source: http://training.cochrane.org/resource/exploring-heterogeneity)

 

Figure 2: Random-effects model. The green, curved line represents the distribution of the true effects, while the green, vertical line represents the mean of the true effects. (Source: http://training.cochrane.org/resource/exploring-heterogeneity)

If it is assumed that there are significant differences between the identified studies (heterogeneity), i.e. real differences in the underlying effects measured by each study, a random-effects model should be employed (Figure 2). Because random-effects models allow for this additional between-study variability, they are more conservative than their fixed-effect counterpart and yield wider confidence intervals for the summary estimate and a larger p-value. Figure 3 shows the influence of the two approaches (random-effects vs. fixed-effect model) in a meta-analysis with significant heterogeneity between the included studies. Due to the presence of significant heterogeneity (I² = 61%) between the included studies, a random-effects model should be preferred in this case.
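A minimal sketch of how heterogeneity and a random-effects summary might be quantified, using Cochran's Q statistic, the I² statistic, and the DerSimonian-Laird estimate of the between-study variance τ². All study results below are invented for illustration:

```python
import math

# Hypothetical log odds ratios and standard errors from five trials
# (illustrative numbers only)
ys = [0.8, 0.1, 0.6, -0.2, 0.5]
ses = [0.20, 0.25, 0.15, 0.30, 0.20]

w = [1.0 / se**2 for se in ses]  # fixed-effect (inverse-variance) weights
y_fixed = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)

# Cochran's Q and I^2: how much variability exceeds what chance would produce
Q = sum(wi * (yi - y_fixed)**2 for wi, yi in zip(w, ys))
df = len(ys) - 1
I2 = max(0.0, (Q - df) / Q) * 100  # as a percentage

# DerSimonian-Laird estimate of the between-study variance tau^2
C = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - df) / C)

# Random-effects weights add tau^2 to each study's variance
w_re = [1.0 / (se**2 + tau2) for se in ses]
y_re = sum(wi * yi for wi, yi in zip(w_re, ys)) / sum(w_re)
se_re = math.sqrt(1.0 / sum(w_re))
se_fixed = math.sqrt(1.0 / sum(w))

print(f"I^2 = {I2:.0f}%, fixed SE = {se_fixed:.3f}, random SE = {se_re:.3f}")
```

Because τ² is added to every study's variance, the random-effects standard error is never smaller than the fixed-effect one, which is exactly the wider confidence interval described above.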

Figure 3: Differences between a random-effects and fixed-effects meta-analysis in the presence of significant heterogeneity between included studies: severe (≥70%) restenosis after carotid artery stenting (CAS) versus carotid endarterectomy (CEA). Trials are ordered by year of publication. The squares and horizontal lines correspond to the trials’ odds ratios (OR) and 95% confidence intervals. The area of the squares corresponds to the weight each trial contributes in the meta-analysis. The diamond represents the summary treatment effect estimate (result of the meta-analysis) with its 95% confidence interval (width of the diamond) in a random-effects meta-analysis (3a) and fixed-effects meta-analysis (3b).

 

Figure 3a: Random-effects meta-analysis

3a: Random-effects meta-analysis: the summary treatment effect (diamond) overlaps the vertical line indicating an OR of 1, resulting in no significant difference between the two compared treatments.

Figure 3b: Fixed-effects meta-analysis

3b: Fixed-effects meta-analysis: the summary treatment effect shows a significant difference between the two treatments favouring carotid endarterectomy (diamond does not overlap the vertical line indicating an OR of 1). (Source: Müller, M.D. et al, “Percutaneous transluminal balloon angioplasty and stenting for carotid artery stenosis”, work in progress).

Another type of meta-analysis is the individual patient data (IPD) meta-analysis, in which multiple trials are combined at the level of the individual patient, resulting in a larger sample size than any single study. One of the main advantages of IPD is the possibility to check for differences in the observed treatment effect between patient subgroups, e.g. men versus women or older versus younger patients; single trials are often underpowered to provide reliable subgroup analyses. Because the data are analysed at the individual patient level, differences in length of follow-up between the source trials, as well as differing definitions of outcomes or exposures, can be standardised. Accordingly, this kind of meta-analysis is often considered the gold standard. Unfortunately, conducting an IPD meta-analysis is often a lengthy and expensive process: it requires the agreement of the investigators who originally conducted the source trials, and it may be prone to selection bias when investigators decline to collaborate.
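As a toy illustration of the subgroup analyses that IPD makes possible, the sketch below pools simulated patient-level records from two hypothetical trials and computes the odds ratio of an event under one treatment versus the other, overall and within subgroups. All data and names (trial labels, treatments, subgroups) are invented for illustration:

```python
# Hypothetical pooled IPD: each record is (trial, treatment, sex, event)
# -- entirely simulated data, for illustration only
records = []
records += [("Trial1", "CAS", "male", 1)] * 12 + [("Trial1", "CAS", "male", 0)] * 88
records += [("Trial1", "CEA", "male", 1)] * 6 + [("Trial1", "CEA", "male", 0)] * 94
records += [("Trial2", "CAS", "female", 1)] * 8 + [("Trial2", "CAS", "female", 0)] * 92
records += [("Trial2", "CEA", "female", 1)] * 7 + [("Trial2", "CEA", "female", 0)] * 93

def odds_ratio(recs, subgroup=None):
    """2x2 odds ratio of the event for CAS vs CEA, optionally within a subgroup."""
    counts = {("CAS", 1): 0, ("CAS", 0): 0, ("CEA", 1): 0, ("CEA", 0): 0}
    for _trial, treat, sex, event in recs:
        if subgroup is None or sex == subgroup:
            counts[(treat, event)] += 1
    return (counts[("CAS", 1)] * counts[("CEA", 0)]) / \
           (counts[("CAS", 0)] * counts[("CEA", 1)])

print(f"Overall OR: {odds_ratio(records):.2f}")
for sex in ("male", "female"):
    print(f"{sex}: OR = {odds_ratio(records, sex):.2f}")
```

In this simulated example the treatment effect differs between the subgroups, which is precisely the kind of pattern that individual trials are typically too small to detect reliably.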

In conclusion, systematic reviews are a useful tool to provide an overview of the existing evidence. A detailed search strategy and comprehensive criteria for inclusion and exclusion of studies are essential to avoid introducing bias and spurious results. If appropriate, the results of the identified studies may be combined in a meta-analysis. For the statistical analysis, either a fixed-effect model or a random-effects model may be used, depending on the amount of heterogeneity between the included studies. Combining several studies at the individual patient data level in an IPD meta-analysis is often considered the gold standard, as it allows outcome definitions and lengths of follow-up to be standardised across the individual studies.

References:
Egger M, Davey Smith G, Sterne JAC (2009). Systematic reviews and meta-analysis. In: Detels R, Beaglehole R, Lansang MA, Gulliford M (eds.), Oxford Textbook of Public Health, 5th Edition. Oxford: Oxford University Press.
Kirkwood BR, Sterne JAC (2003). Essential Medical Statistics, 2nd Edition. Oxford: Blackwell Publishing.
Cochrane Handbook for Systematic Reviews of Interventions: http://training.cochrane.org/handbook