The regression discontinuity design is a research design that overcomes some of the ethical problems with experimental designs, but also overcomes some of the limitations to the internal validity of quasi-experimental designs. This design was developed by Campbell and colleagues (Campbell & Stanley, 1963; Cook & Campbell, 1979) and promulgated by Trochim (1984, 1990).
To illustrate, consider some researchers who want to examine whether or not training in meditation reduces anxiety at work. To apply a regression discontinuity design, the researchers would:

- measure anxiety in all participants before the study begins
- assign participants whose anxiety exceeds some threshold to the meditation condition, and the remaining participants to the control condition
- measure anxiety in all participants again after the training is completed
- test whether anxiety changes abruptly, or shows a discontinuity, precisely at the threshold
This design is applicable whenever two conditions are fulfilled (see Trochim, 1984). First, the decision as to whether participants are allocated to one condition or the other depends on whether they exceed some threshold on a variable. Second, the outcome variable is equally applicable to all participants, regardless of the condition to which they were assigned.
Experiments, in which individuals are randomly assigned to conditions, are often considered to be an exemplary design. To illustrate, consider a researcher who wants to examine whether or not training in meditation reduces anxiety at work. To conduct an experiment, sometimes called a randomized controlled design, some system, such as a table of random numbers, is applied to allocate participants to one of two conditions: meditation or control. After the training is completed, the anxiety of all participants is assessed.
Suppose anxiety is lower in the participants who received training in meditation. This finding implies that the training in meditation reduced anxiety. Because participants were assigned randomly, the two conditions are likely to be equivalent on all other factors, such as age, performance, and health. These factors, therefore, could not be responsible for the difference between the two conditions.
Three complications with experiments can arise. First, the experiment can be unethical. That is, the researcher must withhold the treatment, in this instance the training in meditation, from half the participants. This delay in the treatment of relaxed or happy participants is not necessarily a problem. However, this delay in the treatment of anxious participants might amplify their emotional concerns.
Second, and related to this issue, the experiment is often infeasible. In many situations, some participants have already received some treatment or intervention, such as training in meditation. If researchers confined their studies to experiments, valuable pools of data would need to be disregarded.
Third, factors that are confounded with the treatment or intervention could also affect the outcome. Training in meditation, for example, also involves disruption from the usual work routine, which alone could affect anxiety.
The regression discontinuity design overcomes two of these three limitations.
In particular, in the regression discontinuity design, individuals who appreciably need some treatment are assigned to the condition in which the intervention is offered. For example, individuals who experience severe anxiety are assigned to the condition in which training in meditation is offered. The ethical concern with delaying the treatment is thus circumvented.
Second, the design utilizes data that might otherwise be disregarded. That is, this design is applicable to many contexts. Researchers might, for example, want to examine whether:

- individuals whose scores on a selection test fall below some cutoff, and who are therefore rejected for a job, subsequently experience reduced wellbeing
- individuals whose performance falls below some threshold, and who are therefore assigned to training, subsequently improve
In these examples, whether or not individuals are subjected to the intervention, the job rejection or the training, for example, depends on their score on another variable. Hence, the regression discontinuity design applies.
However, like experiments, the regression discontinuity design cannot overcome the limitation that other factors might be confounded with the treatment or intervention. Control conditions that are more similar to the treatment could overcome this issue.
The regression discontinuity design also circumvents many of the limitations that constrain the internal validity of traditional quasi-experimental designs. In quasi-experimental designs, individuals are not randomly assigned to conditions. For example, perhaps individuals in some organizations receive training in meditation and individuals in other organizations do not receive this training. Any differences in anxiety between the two conditions cannot necessarily be attributed to the meditation. Perhaps the individuals who received the meditation are assigned more enjoyable jobs, have developed better friendships, and so forth.
In these traditional quasi-experimental designs, the variables that determine to which condition individuals are assigned are not measured. Hence, these variables cannot be controlled statistically. In the regression discontinuity design, the variable that determines to which condition individuals are assigned is measured, and thus can be controlled statistically (Trochim, 1984). In short, although strictly a quasi-experimental design, the regression discontinuity design can generate more informative conclusions.
1. Choose the allocation variable

During the first phase, researchers need to decide which variable should determine whether participants are assigned to one condition or the other. Consider again the study in which researchers want to ascertain whether training in meditation alleviates anxiety. Several options are available. First, participants could be assigned to the treatment condition if they exhibited above average levels of anxiety on some measure. In other words, the allocation variable might be the same as the outcome variable, but merely measured before the treatment begins.
Second, participants could be assigned to the treatment condition if they expressed dissatisfaction towards their job, perhaps in a recent survey. Hence, the allocation variable might be correlated with, but different from, the outcome variable. Indeed, the allocation variable might even be uncorrelated with the outcome variable (see the example that referred to social security numbers in Trochim, 1990).
Third, participants could be assigned to the treatment condition if the CEO of this organization feels the individuals are stressed. That is, the decision to allocate might be derived from a subjective variable. In this instance, however, the subjective variable must be quantified, even roughly. The CEO, for example, could assign a rating to each individual, representing the extent to which the person seems stressed.
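Whichever option is chosen, the allocation rule itself is simple to express. The following sketch illustrates cutoff-based allocation in Python; the variable names, the sample, and the cutoff value are all hypothetical, not taken from any actual study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-treatment anxiety scores for 10 employees
# (the allocation variable in the first option above).
anxiety_pre = rng.normal(loc=50, scale=10, size=10)

CUTOFF = 50.0  # illustrative threshold; above it, training is offered

# Allocation rule: 1 = meditation training, 0 = control.
treatment = (anxiety_pre > CUTOFF).astype(int)

print(treatment)
```

The essential point is that assignment is a deterministic function of the measured allocation variable, which is what later permits that variable to be controlled statistically.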
2. Measure individuals after the treatment
Participants are then allocated to one of the two conditions, and the treatment is administered. The outcome of interest then needs to be assessed. The key consideration is to ensure the outcome measure is applicable to all participants.
For example, a traditional measure of anxiety would probably be acceptable. However, a measure of feelings while meditating would not be applicable to the individuals who had not received training in meditation.
This example might be contrived, but similar complications can arise in many settings. Consider a researcher who wants to examine whether promotions at work improve performance. The researcher would then need to generate a measure of performance that applies to individuals at different levels in the organization-which can be a difficult task.
3. Conduct the multiple regression analysis.
Once the data are collected, a specific variant of multiple regression analysis needs to be conducted, as delineated by Trochim (1984, 1990). To demonstrate this analysis, suppose the allocation variable is job satisfaction. That is, only individuals with low levels of job satisfaction receive training in meditation. In addition, anxiety, which is measured after the intervention, is the outcome variable. The results are presented in the following figure. To show this discontinuity is significant, the researcher needs to:

- subtract the cutoff score from the allocation variable, so that this variable equals zero at the threshold
- create a treatment variable, coded as 1 for individuals who received the training in meditation and 0 otherwise
- regress anxiety onto both the treatment variable and the centered allocation variable
In the final regression analysis, if the treatment variable is significant, the discontinuity has been established. Furthermore, sometimes the treatment variable interacts with the allocation variable. In other words, the benefits of the treatment depend on values on the allocation variable.
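This analysis can be sketched in Python with simulated data. Everything here is illustrative: the sample size, the cutoff, the simulated treatment effect of -5, and the variable names are assumptions for demonstration, not results from any real study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Simulated job satisfaction scores; the cutoff determines allocation.
satisfaction = rng.uniform(0, 100, size=n)
cutoff = 40.0

# Individuals below the cutoff receive the meditation training.
treatment = (satisfaction < cutoff).astype(float)

# Center the allocation variable at the cutoff, so the treatment
# coefficient estimates the discontinuity precisely at the threshold.
satisfaction_c = satisfaction - cutoff

# Simulated anxiety: declines with satisfaction, drops 5 points with training.
anxiety = 60 - 0.3 * satisfaction_c - 5.0 * treatment + rng.normal(0, 2, n)

# Regress anxiety onto the treatment dummy and the centered allocation variable.
X = np.column_stack([np.ones(n), treatment, satisfaction_c])
coefs, *_ = np.linalg.lstsq(X, anxiety, rcond=None)

print(coefs)  # intercept, treatment effect (close to -5), slope (close to -0.3)
```

Because the allocation variable is centered at the cutoff, the coefficient on the treatment dummy directly estimates the size of the discontinuity, which is the quantity of interest.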
Conclusions derived from the regression discontinuity design may not be accurate in some contexts. First, conclusions will be misleading if any non-linear relationship between the allocation variable and the outcome variable is not included in the regression equation. The following figure, which derives from an example presented by Cook and Campbell (1979), illustrates this problem. Suppose the multiple regression included linear terms, but no squared or cubic terms. The regression analysis would thus assume the equation corresponds to the horizontal dotted lines in this figure. That is, the analysis would uncover a discontinuity.
However, suppose the relationship actually corresponds to the unbroken, curvy line. This equation does not seem to imply a discontinuity. That is, no sudden change in values arises precisely at the threshold-and therefore the pattern of findings cannot be ascribed to the intervention.
A multiple regression analysis with linear terms would thus generate a misleading conclusion. A multiple regression analysis with squared and cubic terms, like dissatisfaction_r * dissatisfaction_r * dissatisfaction_r, might overcome this problem. Fortunately, this complication is not common.
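This pitfall can be demonstrated numerically. In the hedged sketch below, the data are simulated so that the outcome follows a smooth cubic trend with no true treatment effect at all; the linear-only model nevertheless reports a sizeable "discontinuity", whereas the model with squared and cubic terms does not. All names and values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 800

# Centered allocation variable (0 = cutoff); staff above the cutoff are treated.
dissatisfaction_r = rng.uniform(-50, 50, size=n)
treatment = (dissatisfaction_r > 0).astype(float)

# Simulated outcome: a smooth cubic trend and NO true treatment effect.
anxiety = 40 + 0.0002 * dissatisfaction_r**3 + rng.normal(0, 1, n)

# Linear-only model: mistakes the curvature for a discontinuity.
X_lin = np.column_stack([np.ones(n), treatment, dissatisfaction_r])
b_lin, *_ = np.linalg.lstsq(X_lin, anxiety, rcond=None)

# Model with squared and cubic terms: the spurious "effect" disappears.
X_cub = np.column_stack([np.ones(n), treatment, dissatisfaction_r,
                         dissatisfaction_r**2, dissatisfaction_r**3])
b_cub, *_ = np.linalg.lstsq(X_cub, anxiety, rcond=None)

print(b_lin[1])  # substantially nonzero: a spurious discontinuity
print(b_cub[1])  # near zero: no discontinuity once curvature is modelled
```

In practice, researchers can fit models with and without the polynomial terms and check whether the estimated discontinuity is robust to the inclusion of curvature.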
Second, compared to experimental designs, regression discontinuity designs are less powerful. That is, the sample size needs to be increased to ensure significant effects are uncovered (see Reichardt, Trochim, & Cappelleri, 1995).
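One way to see this loss of power, in a hedged numerical sketch rather than a formal derivation, is that under cutoff assignment the treatment dummy is strongly correlated with the allocation variable it must be adjusted for. This collinearity inflates the variance of the estimated treatment effect by roughly 1 / (1 - r^2) relative to random assignment, where r is that correlation. The names and distributions below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

allocation = rng.normal(0, 1, size=n)

# Cutoff assignment: treated if above the cutoff (here, zero).
treat_rdd = (allocation > 0).astype(float)
# Random assignment: treated with probability 0.5, independent of allocation.
treat_rct = rng.integers(0, 2, size=n).astype(float)

def inflation(treat, covariate):
    """Variance inflation factor 1 / (1 - r^2) for the treatment
    coefficient, where r is the correlation between the treatment
    dummy and the covariate that must be controlled."""
    r = np.corrcoef(treat, covariate)[0, 1]
    return 1.0 / (1.0 - r**2)

print(inflation(treat_rdd, allocation))  # well above 1: power is lost
print(inflation(treat_rct, allocation))  # close to 1: no inflation
```

With a normal allocation variable and a median cutoff, the inflation factor approaches the often-cited value of about 2.75, which is why regression discontinuity studies typically need substantially larger samples than comparable experiments.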
References

Berk, R. A., & Rauma, D. (1983). Capitalizing on nonrandom assignment to treatments: A regression-discontinuity evaluation of a crime-control program. Journal of the American Statistical Association, 78, 21-28.
Braden, J. P., & Bryant, T. J. (1990). Regression discontinuity designs: Applications for school psychologists. School Psychology Review, 19, 232-239.
Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research on teaching. In N. L. Gage (Ed.), Handbook of research on teaching (pp. 171-246). Chicago: Rand McNally.
Cappelleri, J. C., Trochim, W.M.K., Stanley, T. D., & Reichardt, C. S. (1991). Random measurement error does not bias the treatment effect estimate in the regression-discontinuity design. I. The case of no interaction. Evaluation Review, 15, 395-419.
Cook, T. D., & Campbell, D. T. (Eds.). (1979). Quasi-experimentation: Design and analysis issues for field settings. Chicago: Rand McNally.
Judd, C. M., & Kenny, D. A. (1981). Estimating the effects of social intervention. New York: Cambridge University Press.
Law, K. S., & Myors, B. (1993). Cutoff scores that maximize the total utility of a selection program: Comment on Martin and Raju's (1992) procedure. Journal of Applied Psychology, 78, 736-740.
Martin, S. L., & Raju, N. S. (1992). Determining cutoff scores that optimize utility: A recognition of recruiting costs. Journal of Applied Psychology, 77, 15-23.
Mellor, S., & Mark, M. M. (1998). A quasi-experimental design for studies on the impact of administrative decisions: Applications and extensions of the regression-discontinuity design. Organizational Research Methods, 1, 315-333.
Reichardt, C. S. (1979). The statistical analysis of data from nonequivalent group designs. In T. D. Cook & D. T. Campbell (Eds.), Quasi-experimentation: Design and analysis issues for field settings (pp. 147-206). Chicago: Rand McNally.
Reichardt, C. S., Trochim, W. M. K., & Cappelleri, J. C. (1995). Reports of the death of the regression-discontinuity analysis are greatly exaggerated. Evaluation Review, 19, 39-63.
Seaver, W. B., & Quarton, R. J. (1976). Regression-discontinuity analysis of dean's list effects. Journal of Educational Psychology, 66, 459-465.
Simpson, E. H. (1951). The interpretation of interaction in contingency tables. Journal of the Royal Statistical Society (Series B), 13, 238-241.
Stanley, T. D. (1991). "Regression-discontinuity design" by any other name might be less problematic. Evaluation Review, 15, 605-624.
Trochim, W.M.K. (1984). Research design for program evaluation: The regression-discontinuity approach. Beverly Hills, CA: Sage.
Trochim, W.M.K. (1990). The regression-discontinuity design. In L. Sechrest, E. Perrin, & J. Bunker (Eds.), Research methodology: Strengthening causal interpretations of nonexperimental data (PHS Rep. No. 90-3454, pp. 119-139). Rockville, MD: U.S. Department of Health and Human Services.
Trochim, W.M.K., & Cappelleri, J. C. (1992). Cutoff assignment strategies for enhancing randomized clinical trials. Controlled Clinical Trials, 13, 190-212.
Trochim, W.M.K., Cappelleri, J. C., & Reichardt, C. S. (1991). Random measurement error does not bias the treatment effect estimate in the regression-discontinuity design. II. When an interaction effect is present. Evaluation Review, 15, 571-604.
Last Update: 6/22/2016