Therefore, the RIS, and thereby the RIS-AIS, seem to be a fair trade-off between the number of required additional randomised participants and the number of required additional trials.
In two examples given by Kulinskaya and Wood, the number of additional randomised participants is reduced (from … to … in one example and from 11,… to … in the other) when using the RIS-AIS, at the expense of four more trials than the minimal number of trials required. The alternative sample size corresponds to the minimal number of required trials, and such a trial may very well be substantially larger than the total acquired information size in the meta-analysis conducted before the trial.
When the result from such a trial becomes available, an updated cumulative meta-analysis, using the a priori anticipated intervention effect and a new estimate of the between-trial variance, may be used in a fixed-effect or a random-effects model to evaluate how far we are from a conclusion about whether the intervention effect exists or not. The fixed-effect model may then turn out to be the most appropriate model for evaluating the pooled intervention effect when one or a few trials heavily dominate the entire accumulated evidence [77]. If diversity and the proportion of events in the control group change substantially, the magnitude of the required information size and the corresponding number of required future trials may change accordingly.
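The arithmetic behind a diversity-adjusted required information size can be sketched as follows. This is only a minimal illustration, not the TSA software's implementation; the function name and the example proportions are assumptions:

```python
from statistics import NormalDist

def diversity_adjusted_ris(p_control, rrr, alpha=0.05, beta=0.10, diversity=0.0):
    """Total required information size (participants) for a binary outcome.

    The fixed-effect sample size is inflated by 1 / (1 - D^2), where D^2 is
    the diversity estimated in the random-effects meta-analysis.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided type 1 error
    z_beta = NormalDist().inv_cdf(1 - beta)         # power = 1 - beta
    p_exp = p_control * (1 - rrr)                   # anticipated experimental event proportion
    p_bar = (p_control + p_exp) / 2                 # average event proportion
    delta = p_control - p_exp                       # anticipated risk difference
    n_fixed = 4 * (z_alpha + z_beta) ** 2 * p_bar * (1 - p_bar) / delta ** 2
    return n_fixed / (1 - diversity)                # diversity (D^2) adjustment

# 20% control-group risk, anticipated 20% relative risk reduction:
ris_homogeneous = diversity_adjusted_ris(0.20, 0.20)                    # D^2 = 0
ris_heterogeneous = diversity_adjusted_ris(0.20, 0.20, diversity=0.25)  # D^2 = 25%
```

Re-running the calculation with updated estimates of diversity and the control-group event proportion shows directly how the target moves as the accumulated evidence changes.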
However, a moving target seems better than having no target at all. Recently, we documented that in prospective application of TSA to very large cumulative meta-analyses, TSA prevented false positive conclusions in 13 out of 14 meta-analyses when the RIS was not reached [45]. TSA of a meta-analysis, like the sequential analysis of a single randomised trial, originates from frequentist statistics [29].
The frequentist way of thinking was initially based on testing of the null hypothesis. The anticipation of an intervention effect of a specific magnitude (the alternative hypothesis), and the subsequent calculation of a required information size enabling a conclusion about whether such an effect can be accepted or rejected, is, however, intimately related to the Bayesian prior.
TSA contains an element of Bayesian thinking by relating the result of a meta-analysis to the a priori point estimate of the intervention effect addressed in the analysis [77]. In a Bayesian analysis, the prior takes the form of an anticipated probability distribution of one or more possible alternative hypotheses or intervention effects which, multiplied with the likelihood of the trial, results in a posterior distribution [79].
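The prior-times-likelihood mechanics can be illustrated with the conjugate normal-normal case, where both the prior for the intervention effect (e.g. a log odds ratio) and the trial's likelihood are normal. This is a simplified sketch, not a full Bayesian meta-analysis; the function name and the example numbers are assumptions:

```python
def normal_posterior(prior_mean, prior_sd, estimate, se):
    """Combine a normal prior with a normal likelihood (precision-weighted average)."""
    w_prior = 1 / prior_sd ** 2          # precision of the prior
    w_data = 1 / se ** 2                 # precision of the trial estimate
    post_var = 1 / (w_prior + w_data)    # posterior variance
    post_mean = post_var * (w_prior * prior_mean + w_data * estimate)
    return post_mean, post_var ** 0.5

# sceptical prior centred on no effect (log odds ratio 0) combined with
# an observed log odds ratio of -0.4 from a trial:
mean, sd = normal_posterior(prior_mean=0.0, prior_sd=0.2, estimate=-0.4, se=0.25)
```

The posterior mean is pulled from the trial estimate towards the sceptical prior's centre of no effect, which is exactly the moderating role the prior plays in the text above.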
A methodological position between frequentist and Bayesian thinking can be perceived both in sequential interim analyses of a single trial and in TSA of several trials [29]. Both rest on a decisive anticipation of a realistic intervention effect, although a full Bayesian analysis should incorporate multiple prior distributions with different anticipated distributions of intervention effects (e.g. …). TSA prioritises one or a few specific alternative hypotheses, specified by point estimates of the anticipated effect, in the calculation of the required information size, just as in the sample size estimation of a single trial [11].
The incentive to use sequential analyses arises because the true effect is not known, and the observed intervention effect may be larger than the effect addressed in the sample size estimation of a single trial, as well as in the estimation of the required information size for a meta-analysis of several trials. Hence the need to discover early an effect greater than the one anticipated in the sample or information size calculation, or to discard it.
If the intervention effect, in relation to its variance, turns out to be much larger during the trial or the cumulative meta-analysis, this will be discovered through the breakthrough of the sequential boundary. However, this may also be problematic, as sample sizes that are too small in relation to the true effect, as mentioned, increase the risk of overestimating the intervention effect or underestimating the variance. In other words, because of a spuriously small sample size, we may erroneously confirm an unrealistically large anticipated intervention effect due to the play of chance.
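The sequential boundary idea can be made concrete with an O'Brien-Fleming-type alpha-spending function of the Lan-DeMets family used in group sequential monitoring: almost no alpha is "spent" early, so early boundaries are extreme and hard to cross by chance. The code below is an illustrative sketch of the spending function only, not of the full boundary computation:

```python
import math
from statistics import NormalDist

def of_spent_alpha(t, alpha=0.05):
    """Cumulative two-sided alpha spent at information fraction t (0 < t <= 1),
    O'Brien-Fleming-type spending: alpha*(t) = 2 * (1 - Phi(z_{alpha/2} / sqrt(t)))."""
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2)
    return 2 * (1 - nd.cdf(z / math.sqrt(t)))

# very little alpha is spent at early information fractions:
for frac in (0.25, 0.50, 0.75, 1.00):
    print(f"information fraction {frac:.2f}: alpha spent {of_spent_alpha(frac):.5f}")
```

At full information (t = 1) the whole 5% is spent, recovering the conventional threshold, while at a quarter of the information the spent alpha is tiny, which is what protects against early random-error breakthroughs.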
There is a kinship between the sceptical prior in a Bayesian analysis and the use of a realistic intervention effect in a sequential analysis when the sample size of a single trial or the information size of a meta-analysis is calculated [77, 78]. The smaller the effect, the greater the demanded quantity of information, and the more restrictive the sequential statistical significance boundaries become. In other words, it becomes more difficult to declare an intervention effective or ineffective when the required information size has not been achieved. Christopher Jennison and Bruce Turnbull, however, have shown that, on average, when a small but realistic and important intervention effect is anticipated, a group sequential design requires fewer patients than an adaptive design (e.g. …).
The group sequential design seems more efficient than the adaptive design. In line with mathematical theory [ 72 ], simulation studies [ 6 ], and empirical considerations [ 44 , 45 , 81 , 82 ], there is evidence that small trials and small meta-analyses by chance tend to overestimate the intervention effect or underestimate the variance. Early indicated large intervention effects are often contradicted in later published large trials or large meta-analyses [ 6 , 45 , 81 , 82 ].
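A small simulation makes the overestimation mechanism visible: among small trials of a modest true effect, only the (by chance) extreme estimates cross the conventional significance threshold, so the significant ones overestimate the effect on average. This is an illustrative sketch only; the true effect size, trial size, and number of simulations are assumptions:

```python
import random
from statistics import NormalDist, mean

random.seed(42)
true_effect = 0.2                          # true standardised mean difference
n_per_group = 25                           # a deliberately small trial
se = (2 / n_per_group) ** 0.5              # standard error of the effect estimate
crit = NormalDist().inv_cdf(0.975) * se    # two-sided 5% significance threshold

significant = []
for _ in range(20_000):
    estimate = random.gauss(true_effect, se)   # one simulated small trial
    if abs(estimate) > crit:
        significant.append(estimate)

# the average estimate among 'significant' small trials lies well above
# the true effect of 0.2 -- chance selects the inflated results
inflated = mean(significant)
```

This reproduces qualitatively what the cited simulation and empirical studies report: early significant results from small trials tend to be inflated and are later contradicted by larger trials.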
The reason might be that statistical confidence intervals and significance tests, relating exclusively to the null hypothesis, ignore the necessity of a sufficiently large number of observations to assess realistic or minimally important intervention effects. In general, it is easier to reject the null hypothesis than to reject a small, but realistic and still important, alternative hypothesis [ 64 ].
The null hypothesis can never be proven, and in practice this means that it can never be completely discarded, as this would require an infinitely large number of observations. The reason for early spurious significant findings may be quite simple, although not self-evident. Even adequate randomisation in a small trial lacks the ability to ensure balance between all the involved, known or unknown, prognostic factors in the intervention groups [81].
When we find a statistically significant intervention effect in a small trial or in a small meta-analysis, it is often due to insufficient balance of important prognostic factors, known or unknown, between the intervention groups. Therefore, it is not necessarily intervention effects that we observe, but rather an uneven distribution of important prognostic factors between groups. In addition to the described risks of random error, the overall risk of bias, which includes the risk of publication bias, makes it understandable why published trials and meta-analyses often result in unreliable estimates of intervention effects [2, 83].
The power of frequentist inference in a single trial and in a meta-analysis of several trials lies in two basic assumptions. First, the only decisive difference between the intervention groups during the trial is the difference between the interventions. In a small trial and a small meta-analysis, the assumption that all other risk factors are equally distributed between the two intervention groups may not be fulfilled, as described above, even though adequate bias control has been exercised.
This assumption, which never totally excludes the possibility that the result of a trial may agree with or be a result of the null hypothesis, demands a specific a priori chosen threshold for statistical significance. That is, a sufficiently small P-value leads us to regard the trial result as virtually impossible under the null hypothesis, and, therefore, we regard the opposite to be true and discard the null hypothesis. Or, alternatively expressed, a P-value less than an a priori chosen threshold of statistical significance rejects the null hypothesis.
Ronald A. Fisher did not intend a single P-value threshold to serve as a decisive criterion. Nevertheless, ever since, it seems to have been broadly implemented as a criterion for conclusions in medical research [83], and this is likely wrong [85]. Most systematic reviews with meta-analyses, including Cochrane systematic reviews, do not have sufficient statistical power to detect or reject even large intervention effects.
Meta-analyses are updated continuously and ought, therefore, to be regarded as interim analyses on the way towards a required information size. The evaluation of meta-analyses ought to relate the total number of randomised participants to the required meta-analytic information size and the corresponding number of required trials, considering statistical diversity. A Bayesian meta-analysis, using prior distributions for both the intervention effect and the statistical heterogeneity, may be even more reliable for deciding whether an intervention effect is present or not. However, the Bayesian meta-analysis also poses difficulties of interpretation.
Until easy-to-use software programs for full Bayesian meta-analysis become accessible, TSA represents a more assumption-transparent analysis than traditional meta-analysis with unadjusted confidence intervals and unadjusted thresholds for statistical significance.

The impact of study size on meta-analyses: examination of underpowered studies in Cochrane reviews. PLoS One.
Statistically significant meta-analyses of clinical trials have modest credibility and inflated effects. J Clin Epidemiol.
Random error in cardiovascular meta-analyses: how common are false positive and false negative results? Int J Cardiol.
Imberger G. Multiplicity and sparse data in systematic reviews of anaesthesiological interventions: a cause of increased risk of random error and lack of reliability of conclusions?
Apparently conclusive meta-analyses may be inconclusive—trial sequential analysis adjustment of random error risk due to repetitive testing of accumulating data in apparently conclusive neonatal meta-analyses. Int J Epidemiol.
The number of patients and events required to limit the risk of overestimation of intervention effects in meta-analysis—a simulation study.
Trial sequential analysis may establish when firm evidence is reached in cumulative meta-analysis.
Pogue J, Yusuf S. Cumulating evidence from randomised trials: utilizing sequential monitoring boundaries for cumulative meta-analysis. Control Clin Trials.
Overcoming the limitations of current meta-analysis of randomised controlled trials.
User manual for trial sequential analysis (TSA).
Estimating required information size by quantifying diversity in a random-effects meta-analysis.
Software for trial sequential analysis (TSA) ver.
Young C, Horton R. Putting clinical trials into context.
Clarke M, Horton R. Bringing it all together: Lancet-Cochrane collaborate on systematic reviews.
Hypothermia after cardiac arrest should be further evaluated—a systematic review of randomised trials with meta-analysis and trial sequential analysis.
N Engl J Med.
Target temperature management after out-of-hospital cardiac arrest—a randomised, parallel-group, assessor-blinded clinical trial—rationale and design. Am Heart J.
Discrete sequential boundaries for clinical trials.
Repeated significance tests on accumulating data.
Pocock SJ. Group sequential methods in the design and analysis of clinical trials.
Uncertainty of the time of first significance in random effects cumulative meta-analysis.
Statistical multiplicity in systematic reviews of anaesthesia interventions: a quantification and comparison between Cochrane and non-Cochrane reviews.
Wald A. Contributions to the theory of statistical estimation and testing hypotheses. Ann Math Stat.
Sequential tests of statistical hypotheses.
Wald A, Wolfowitz J. Bayes solutions of sequential decision problems.
Winkel P, Zhang NF. Statistical development of quality in medicine. Chichester, West Sussex: Wiley.
Armitage P. The evolution of ways of deciding when clinical trials should stop recruiting. James Lind Library Bulletin.
Dunn OJ. Multiple comparisons among means. J Am Stat Assoc.
Design and analysis of randomised clinical trials requiring prolonged observation of each patient. Introduction and design. Br J Cancer.
A multiple testing procedure for clinical trials.
Statistical principles for clinical trials. Stat Med.
Confidence intervals following group sequential tests in clinical trials.
DeMets DL. Group sequential procedures: calendar versus information time.
Jennison C, Turnbull BW. Group sequential methods with applications to clinical trials.
Issues in data monitoring and interim analysis of trials. Health Technol Assess.
Sample size calculations in clinical research.
Computations for group sequential boundaries using the Lan-DeMets spending function method.
DerSimonian R, Laird N. Meta-analysis in clinical trials.
Statistical algorithms in Review Manager ver.
Quantifying heterogeneity in a meta-analysis.
Kulinskaya E, Wood J. Trial sequential methods for meta-analysis. Res Synth Methods.
Can trial sequential monitoring boundaries reduce spurious inferences from meta-analyses?
False positive findings in cumulative meta-analysis with and without application of trial sequential analysis: an empirical review. BMJ Open.
Systematic reviews of anesthesiologic interventions reported as statistically significant: problems with power, precision, and type 1 error protection. Anesth Analg.
Mascha EJ. Alpha, beta, meta: guidelines for assessing power and type I error in meta-analyses.
Predicting the extent of heterogeneity in meta-analysis, using empirical data from the Cochrane Database of Systematic Reviews.
Trial sequential analysis reveals insufficient information size and potentially false positive results in many meta-analyses.
The Cochrane Collaboration.
Evidence at a glance: error matrix approach for overviewing available evidence.
Evidence-based clinical practice: overview of threats to the validity of evidence. Eur J Intern Med.
Reported methodological quality and discrepancies between large and small randomised trials in meta-analyses. Ann Intern Med.
Influence of reported study design characteristics on intervention effect estimates from randomised, controlled trials.