Sensitivity Analysis Methods


Chan, W. (2017). Partially Identified Treatment Effects for Generalizability. Journal of Research on Educational Effectiveness, 10(3), 646–669. ERIC.

  • Recent methods to improve generalizations from nonrandom samples typically invoke assumptions such as the strong ignorability of sample selection, which is challenging to meet in practice. Although researchers acknowledge the difficulty in meeting this assumption, point estimates are still provided and used without considering alternative assumptions.
  • We compare the point-identifying assumption of strong ignorability of sample selection with two alternative assumptions, bounded sample variation and monotone treatment response, that partially identify the parameter of interest and yield interval estimates. Additionally, we explore the role that population data frames play in contributing identifying power for the interval estimates. We situate the comparison within causal generalization from nonrandom samples by applying the assumptions to a cluster randomized trial in education. Bounds on the population average treatment effect are derived under each alternative assumption and under the case where no assumptions are made on the data (a toy version of these bounds is sketched after this entry). In comparing the bounds, we discuss the plausibility of each alternative assumption and the practical trade-offs.
  • We highlight the importance of thoughtfully considering the role that assumptions play in causal generalization by illustrating the differences in inferences from different assumptions.
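  • A minimal numerical sketch of such interval estimates (Python, hypothetical data): it assumes a bounded outcome, a known fraction p of the target population covered by the trial, and, for the tightened bound, monotone treatment response (treatment cannot lower the outcome). The coverage fraction, outcome range, and simulated outcomes are illustrative assumptions, not the paper's derivations.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical setup: outcomes bounded in [y_min, y_max]; the trial
        # covers a known fraction p of the target population, and nothing is
        # assumed about outcomes for the uncovered units.
        y_min, y_max = 0.0, 1.0
        p = 0.3

        y1 = rng.uniform(0.4, 0.9, size=200)  # treated outcomes in the trial
        y0 = rng.uniform(0.3, 0.8, size=200)  # control outcomes in the trial
        sate = y1.mean() - y0.mean()          # trial ATE, point-identified by randomization

        # No-assumption (worst-case) bounds: the effect among uncovered units
        # can lie anywhere in [y_min - y_max, y_max - y_min].
        no_assumption = (p * sate + (1 - p) * (y_min - y_max),
                         p * sate + (1 - p) * (y_max - y_min))

        # Monotone treatment response: the effect among uncovered units is
        # bounded below by 0, which tightens the lower bound.
        mtr = (p * sate,
               p * sate + (1 - p) * (y_max - y_min))

        print("No-assumption bounds on the population ATE:", [round(b, 3) for b in no_assumption])
        print("Bounds under monotone treatment response:  ", [round(b, 3) for b in mtr])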

Chan, W. (2021). An Evaluation of Bounding Approaches for Generalization. The Journal of Experimental Education, 89(4), 690–720. Psychology Database.

  • Statisticians have developed propensity score methods to improve generalizations from studies that do not employ random sampling. However, these methods rely on assumptions whose plausibility may be questionable.
  • We introduce and discuss bounding, an approach based on alternative assumptions that may be more plausible. The bounding framework nonparametrically estimates population parameters using a range of plausible values. We illustrate how to tighten bounds using three approaches: imposing a monotonicity assumption, redefining the population of inference, and using propensity score stratification (a toy illustration of the first two devices follows this entry). Using two simulation studies, we examine the conditions under which bounds are tightened.
  • We conclude with an application of bounding to SimCalc, a cluster randomized trial that evaluated the effectiveness of a technology aid on mathematics achievement.
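  • A toy calculation of the tightening logic (Python, illustrative numbers only, not the SimCalc analysis): for a bounded outcome and a trial covering a fraction p of the target population, the width of the no-assumption bound on the population ATE is 2(1 - p)(y_max - y_min). Reading the monotonicity assumption as a non-negative effect for uncovered units (an illustrative simplification) halves that width, and redefining the population of inference to a better-covered subpopulation raises p; the propensity score stratification device needs covariate data and is omitted here.

        # Bound width for the population ATE under a bounded outcome when the
        # trial covers a fraction p of the (possibly redefined) population.
        y_min, y_max = 0.0, 1.0

        def width(p, monotone=False):
            w = (1 - p) * (y_max - y_min)   # contribution of uncovered units
            return w if monotone else 2 * w

        for p in (0.3, 0.6, 0.9):           # larger p = better-covered population
            print(f"coverage p={p:.1f}: no-assumption width = {width(p):.2f}, "
                  f"with monotonicity = {width(p, monotone=True):.2f}")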

Dahabreh, I. J., Robins, J. M., Haneuse, S. J., Saeed, I., Robertson, S. E., Stuart, E. A., & Hernán, M. A. (2019). Sensitivity analysis using bias functions for studies extending inferences from a randomized trial to a target population. arXiv preprint arXiv:1905.10684.

  • Extending (generalizing or transporting) causal inferences from a randomized trial to a target population requires “generalizability” or “transportability” assumptions, which state that randomized and non-randomized individuals are exchangeable conditional on baseline covariates. These assumptions rest on background knowledge that is often uncertain or controversial, and they need to be subjected to sensitivity analysis. We present simple methods for sensitivity analysis that do not require detailed background knowledge about specific unknown or unmeasured determinants of the outcome or modifiers of the treatment effect. Instead, our methods directly parameterize violations of the assumptions using bias functions (a minimal sketch of this idea follows this entry).
  • We show how the methods can be applied to non-nested trial designs, where the trial data are combined with a separately obtained sample of non-randomized individuals, as well as to nested trial designs, where a clinical trial is embedded within a cohort sampled from the target population. We illustrate the methods using data from a clinical trial comparing treatments for chronic hepatitis C infection.
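  • A minimal sketch of the bias-function idea (Python, simulated data): arm-specific outcome models fit in the trial are transported to a target sample, and a sensitivity parameter per treatment arm, taken here as a constant, encodes the assumed difference in mean outcome between randomized and non-randomized individuals with the same covariates. The data, the linear model form, and the constant bias terms are illustrative assumptions; the paper's bias functions can vary with covariates.

        import numpy as np

        rng = np.random.default_rng(2)

        # Hypothetical trial sample and target-population sample sharing one
        # baseline covariate x; all values are illustrative.
        n_trial, n_target = 500, 2000
        x_trial = rng.normal(0.0, 1.0, n_trial)
        a_trial = rng.integers(0, 2, n_trial)          # randomized treatment
        y_trial = (1.0 + 0.5 * x_trial + a_trial * (1.0 + 0.3 * x_trial)
                   + rng.normal(0.0, 1.0, n_trial))
        x_target = rng.normal(0.5, 1.0, n_target)      # shifted covariate distribution

        def fit_linear(x, y):
            """Least-squares fit of y = b0 + b1 * x."""
            X = np.column_stack([np.ones_like(x), x])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            return beta

        # Arm-specific outcome models fit in the trial.
        beta1 = fit_linear(x_trial[a_trial == 1], y_trial[a_trial == 1])
        beta0 = fit_linear(x_trial[a_trial == 0], y_trial[a_trial == 0])

        # Sensitivity analysis: u_a is the assumed difference in mean outcome
        # under treatment a between randomized and non-randomized people with
        # the same covariates; sweep it over a grid and recompute the
        # transported average treatment effect.
        for u1 in (-0.5, 0.0, 0.5):
            for u0 in (-0.5, 0.0, 0.5):
                mu1 = np.mean(beta1[0] + beta1[1] * x_target - u1)
                mu0 = np.mean(beta0[0] + beta0[1] * x_target - u0)
                print(f"u1={u1:+.1f}, u0={u0:+.1f} -> transported ATE = {mu1 - mu0:.3f}")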

Nguyen, T. Q., Ackerman, B., Schmid, I., Cole, S. R., & Stuart, E. A. (2018). Sensitivity analyses for effect modifiers not observed in the target population when generalizing treatment effects from a randomized controlled trial: Assumptions, models, effect scales, data scenarios, and implementation details. PLoS ONE, 13(12), e0208795.

  • Background: Randomized controlled trials are often used to inform policy and practice for broad populations. The average treatment effect (ATE) for a target population, however, may differ from the ATE observed in a trial if there are effect modifiers whose distribution in the target population differs from that in the trial. Methods exist to use trial data to estimate the target population ATE, provided the distributions of treatment effect modifiers are observed in both the trial and the target population, an assumption that may not hold in practice.
  • Methods: The proposed sensitivity analyses address the situation where a treatment effect modifier is observed in the trial but not in the target population. These methods are based on an outcome model, or on the combination of such a model and a weighting adjustment for observed differences between the trial sample and the target population. They accommodate several types of outcome models: linear models (for an outcome measured at a single time point, or for pre- and post-treatment outcomes) for additive effects, and models with a log or logit link for multiplicative effects. We clarify the methods’ assumptions and provide detailed implementation instructions (a minimal sketch of the outcome-model version appears after this entry).
  • Illustration: We illustrate the methods using an example generalizing the effects of an HIV treatment regimen from a randomized trial to a relevant target population.
  • Conclusion: These methods allow researchers and decision-makers to have more appropriate confidence when drawing conclusions about target population effects.
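  • A minimal sketch of the outcome-model-based version (Python, simulated data, a single binary modifier, and an outcome measured at a single time point): the treatment-by-modifier interaction is estimated in the trial, and the target-population ATE is reported across a range of assumed modifier prevalences in the target. The data, the linear model, and the prevalence grid are illustrative assumptions; the paper also covers weighting-adjusted and multiplicative-effect variants.

        import numpy as np

        rng = np.random.default_rng(3)

        # Hypothetical trial data with a binary effect modifier v observed in
        # the trial but NOT in the target population.
        n = 800
        v = rng.integers(0, 2, n)          # effect modifier
        a = rng.integers(0, 2, n)          # randomized treatment
        y = 0.2 + a * (0.5 + 1.0 * v) + rng.normal(0.0, 1.0, n)

        # Linear outcome model with a treatment-by-modifier interaction.
        X = np.column_stack([np.ones(n), a, v, a * v])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        main_effect, interaction = beta[1], beta[3]

        # Under this model the target ATE is
        #   main_effect + interaction * E[V | target],
        # so report it across plausible assumed prevalences of V in the target.
        for pv in np.linspace(0.1, 0.9, 5):
            ate_target = main_effect + interaction * pv
            print(f"assumed P(V=1) in target = {pv:.1f} -> target ATE = {ate_target:.3f}")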

Nguyen, T. Q., Ebnesajjad, C., Cole, S. R., & Stuart, E. A. (2017). Sensitivity analysis for an unobserved moderator in RCT-to-target-population generalization of treatment effects. The Annals of Applied Statistics, 225–247.

  • In the presence of treatment effect heterogeneity, the average treatment effect (ATE) in a randomized controlled trial (RCT) may differ from the average effect of the same treatment if applied to a target population of interest. If all treatment effect moderators are observed in the RCT and in a dataset representing the target population, then we can obtain an estimate for the target population ATE by adjusting for the difference in the distribution of the moderators between the two samples.
  • This paper considers sensitivity analyses for two situations: (1) where we cannot adjust for a specific moderator V observed in the RCT because we do not observe it in the target population; and (2) where we are concerned that the treatment effect may be moderated by factors not observed even in the RCT, which we represent as a composite moderator U. In both situations, the outcome is not observed in the target population. For situation (1), we offer three sensitivity analysis methods based on (i) an outcome model, (ii) full weighting adjustment, and (iii) partial weighting combined with an outcome model. For situation (2), we offer two sensitivity analyses based on (iv) a bias formula and (v) partial weighting combined with a bias formula (a minimal sketch combining weighting with a bias-formula shift follows this entry).
  • We apply methods (i) and (iii) to an example in which the goal is to generalize from a smoking cessation RCT conducted with participants in alcohol/illicit drug use treatment programs to the target population of people in the US who seek treatment for alcohol/illicit drug use and who are also cigarette smokers. In this case, a treatment effect moderator is observed in the RCT but not in the target population dataset.
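  • A minimal sketch in the spirit of the weighting and bias-formula methods (Python, simulated data): the trial is reweighted to an assumed target prevalence of one observed binary moderator, and a bias-formula shift with two sensitivity parameters (an assumed U-by-treatment interaction and an assumed difference in the mean of U between target and trial) is then applied for the unobserved composite moderator. The data, the single binary covariate, and the linear-effect bias formula are illustrative assumptions rather than the paper's full methods (i) through (v).

        import numpy as np

        rng = np.random.default_rng(4)

        # Hypothetical trial sample with one observed binary covariate x
        # (a moderator we CAN adjust for) and a randomized treatment a.
        n = 1000
        x = rng.integers(0, 2, n)
        a = rng.integers(0, 2, n)
        y = 0.3 + a * (0.4 + 0.6 * x) + rng.normal(0.0, 1.0, n)

        # Step 1: weighting adjustment for the observed moderator. With a
        # single binary covariate, the weight is proportional to the ratio of
        # covariate prevalences in the target vs. the trial.
        p_x_target = 0.7                   # assumed/known P(X=1) in the target
        p_x_trial = x.mean()
        w = np.where(x == 1, p_x_target / p_x_trial, (1 - p_x_target) / (1 - p_x_trial))

        treated, control = (a == 1), (a == 0)
        ate_weighted = (np.average(y[treated], weights=w[treated])
                        - np.average(y[control], weights=w[control]))
        print(f"Weighted (observed-moderator-adjusted) ATE: {ate_weighted:.3f}")

        # Step 2: bias-formula sensitivity for a composite unobserved
        # moderator U. Under a linear effect model, the remaining bias is
        # approximately delta * (E[U | target] - E[U | trial]), where delta is
        # the assumed U-by-treatment interaction. Sweep both parameters.
        for delta in (0.2, 0.5):
            for du in (-0.5, 0.0, 0.5):
                print(f"delta={delta:.1f}, E[U] difference={du:+.1f} -> "
                      f"adjusted ATE = {ate_weighted + delta * du:.3f}")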