Overview of program evaluation

Author: Dr Simon Moss

Overview

Program evaluation refers to a coordinated series of activities that are intended to evaluate a program, policy, or initiative. For example, program evaluation might be used to assess whether or not an advertising campaign to reduce the incidence of smoking or speeding was effective. To conduct a program evaluation, practitioners need to:

  • Decide who are the stakeholders, such as the managers of this program, the governing bodies, the funding agencies, the customers, and community leaders. All of these stakeholders need to be represented, especially when the design and objectives of this evaluation are negotiated.
  • Identify the objectives of these stakeholders. That is, practitioners should ask questions to uncover the reasons they want the program evaluated, as well as their principal, and often more covert, concerns and purposes. The practitioner will often then need to negotiate with the stakeholders to constrain the breadth of these purposes and objectives.
  • Stipulate the resources that are available to complete the evaluation. Resources include money, time, personnel, and support from organizations, such as written authorizations.
  • Gather information about whether evaluations have been conducted previously, such as a meta-analysis on similar programs.
  • Characterize the nature of this program, such as the objectives, history, growth, and primary facets. Differences between official documentation and actual practices need to be established.
  • Delineate the environment in which this program operates. For example, characterize the organization that coordinates this program and the alternative services available to clients.
  • Consider which research designs should be followed. Practitioners might use several designs in parallel, including experimental, correlational, time series, and qualitative approaches.
  • Identify opportunities to collect data. For instance, practitioners might consider issues such as the sources of data that are available now, the validity and relevance of these data, the opportunities to collect additional data, and so forth.
  • Decide whether or not an evaluation of the program is feasible.
  • If the evaluation is regarded as feasible, conduct the evaluation and then write and disseminate the report, which should include an executive summary, simple language, visual representations, and feasible recommendations to improve the program.

    Example of program evaluation

    McDavid and Hawthorn (2006) provided an excellent demonstration of a typical program evaluation. Specifically, they discussed the evaluation of a Neighbourhood Integrated Service Team program, designed to improve communication across community services and, ultimately, to improve collaboration with the community. Sixteen committees, which included representatives from the major city departments, were formed, one for each of the principal neighborhoods. Until the program was implemented, the various services, such as the police and fire departments, were not well coordinated.

    Objectives of the evaluation

    Three years after its inception, an evaluation of the program was commissioned because concerns about its efficacy were surfacing. The evaluation was undertaken to:

    The evaluation was not conducted to ascertain whether or not the program should be discarded; such potential threats might have compromised honesty and openness.

    Parameters of the evaluation

    To conduct the evaluation, the contractor had to decide upon:

    Complications to program evaluation

    Attribution

    Many of the observed outcomes of a program, such as improved communication across community services, might be attributable to other factors. That is, outcomes might have improved even if the program had not been implemented (Mayne, 2001).
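
    As a purely hypothetical illustration, with invented numbers that do not come from the cited sources, one simple way to gauge how much of an observed change can be attributed to a program is to subtract the change observed in comparable settings that did not receive the program:

        # Hypothetical figures only; they illustrate why an observed improvement
        # cannot automatically be ascribed to the program itself.
        program_before, program_after = 40.0, 60.0        # outcome in program neighbourhoods
        comparison_before, comparison_after = 42.0, 54.0  # outcome in comparison neighbourhoods

        observed_change = program_after - program_before          # 20.0
        background_change = comparison_after - comparison_before  # 12.0 would have occurred anyway

        attributable_change = observed_change - background_change  # roughly 8.0
        print(f"Observed change:     {observed_change:.1f}")
        print(f"Background change:   {background_change:.1f}")
        print(f"Attributable change: {attributable_change:.1f}")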

    Efficiency

    Rather than merely show that the program generated desirable outcomes, practitioners also need to show that the initiative was efficient. That is, practitioners usually examine whether the ratio of inputs to outputs, such as the cost per meeting, is acceptable, a criterion called technical efficiency. They also need to examine whether the ratio of benefits to costs, called economic efficiency, is reasonable.
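
    A minimal sketch of these two ratios, using invented cost and benefit figures rather than data from any actual evaluation:

        # Hypothetical figures for one year of a program.
        total_cost = 250_000.0          # money spent on the program
        meetings_held = 400             # one output measure
        estimated_benefits = 310_000.0  # monetised benefits, however they were estimated

        # Technical efficiency: relate inputs to outputs, for example cost per meeting.
        cost_per_meeting = total_cost / meetings_held
        print(f"Cost per meeting: ${cost_per_meeting:,.2f}")    # $625.00

        # Economic efficiency: relate benefits to costs.
        benefit_cost_ratio = estimated_benefits / total_cost
        print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f}")  # 1.24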

    Relevance

    Programs might be effective and efficient but nevertheless futile. In particular, the broader context, such as government priorities, might have changed. Hence, researchers need to explore whether the program is still germane to the vision, mission, values, goals, and objectives of some body, such as the government, often by applying a needs analysis.

    Uncertain objectives

    Often, the objectives of program evaluations are ambiguous or vary across stakeholders. Practitioners can ask several questions to clarify these goals and objectives, such as

    Varieties of program evaluations

    Design

    Some program evaluations are experimental designs, in which individual participants or organizations are randomly assigned to one of two or more conditions. For example, half of the participants might complete the program, and the remaining participants might not. Differences between these two groups then demonstrate the effect of the program. The CONSORT statement stipulates many of the criteria that can be applied to assess the validity of such program evaluations (see Altman et al., 2001; Campbell, 2004; Moher, Schulz, & Altman, 2001; Plint et al., 2006).
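
    The following sketch, with simulated rather than real data, illustrates this logic: participants are randomly allocated to a program group or a control group, and the difference in mean outcomes between the groups estimates the effect of the program.

        import random
        import statistics

        random.seed(42)

        # Hypothetical participants; in a real evaluation the outcomes would be measured.
        participants = [f"P{i:03d}" for i in range(200)]
        random.shuffle(participants)            # random allocation to conditions
        program_group = participants[:100]
        control_group = participants[100:]

        # Simulated outcome scores, with a modest true advantage for the program group.
        program_scores = [random.gauss(55, 10) for _ in program_group]
        control_scores = [random.gauss(50, 10) for _ in control_group]

        effect_estimate = statistics.mean(program_scores) - statistics.mean(control_scores)
        print(f"Estimated program effect: {effect_estimate:.2f} points")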

    Usually, however, program evaluations are not experimental designs (McDavid & Hawthorn, 2006). For instance, the program might already have been completed, and thus participants could not be randomly allocated to conditions. Alternatively, other factors, rather than random allocation, might have determined which individuals completed the program, such as willingness to participate.
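
    To show why such self-selection complicates interpretation, the simulation below, which is entirely hypothetical, lets willingness to participate determine who completes the program. Even though the program has no true effect in this simulation, the willing group scores higher, so a naive comparison overstates the program's impact.

        import random
        import statistics

        random.seed(7)

        # Willingness to participate raises the outcome on its own;
        # the program itself contributes nothing in this simulation.
        people = []
        for _ in range(1000):
            willingness = random.gauss(0, 1)
            outcome = 50 + 5 * willingness + random.gauss(0, 10)  # no true program effect
            completed = willingness > 0                           # self-selection, not random allocation
            people.append((completed, outcome))

        program_scores = [score for done, score in people if done]
        comparison_scores = [score for done, score in people if not done]

        naive_difference = statistics.mean(program_scores) - statistics.mean(comparison_scores)
        print(f"Apparent effect despite no true effect: {naive_difference:.2f} points")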

    References

    Alkin, M. C. (Ed.) (2004). Evaluation roots. Thousand Oaks, CA: Sage.

    Altman, D. G., Schulz, K. F., Moher, D., Egger, M., Davidoff, F., Elbourne, D., et al. (2001). CONSORT GROUP (Consolidated Standards of Reporting Trials). The revised CONSORT statement for reporting randomized trials: explanation and elaboration. Annals of Internal Medicine, 134, 663-694.

    Bledsoe, K., & Graham, J. A. (2005). The use of multiple evaluation approaches in program evaluation. American Journal of Evaluation, 26, 302-319.

    Campbell, M. J. (2004). Extending CONSORT to include cluster trials. British Medical Journal, 328, 654-655.

    Chen, H. T. (2004). Practical program evaluation: Assessing and improving planning, implementation, and effectiveness. Newbury Park, CA: Sage.

    Christie, C. A. (2003). The practice-theory relationship in evaluation. New Directions for Program Evaluation, 97. San Francisco, CA: Jossey-Bass.

    Donaldson, S.I. (2001). Mediator and moderator analysis in program development. In S. Sussman (Ed.), Handbook of program development for health behavior research and practice (pp. 470-496). Newbury Park, CA: Sage.

    Donaldson, S. I. (2003). Theory-driven program evaluation in the new millennium. In S. I. Donaldson & M. Scriven (Eds.) Evaluating social programs and problems: Visions for the new millennium (pp. 111-142). Mahwah, NJ: Erlbaum.

    Donaldson, S. I., & Gooler, L. E. (2003). Theory-driven evaluation in action: Lessons from a $20 million statewide work and health initiative. Evaluation and Program Planning, 26, 355-366.

    Donaldson, S. I., Gooler, L. E., & Scriven, M. (2002). Strategies for managing evaluation anxiety: Toward a psychology of program evaluation. American Journal of Evaluation, 23, 261-273.

    Mark, M. M. (2003). Toward an integrative view of the theory and practice of program and policy evaluation. In S. I. Donaldson & M. Scriven (Eds.) Evaluating social programs and problems: Visions for the new millennium (pp. 183-204). Mahwah, NJ: Erlbaum.

    Mayne, J. (2001). Addressing attribution through contribution analysis: Using performance measures sensibly. Canadian Journal of Program Evaluation, 16, 1-24.

    McDavid, J. C., & Hawthorn, L. R. L. (2006). Program evaluation and performance measurement: An introduction to practice. London: Sage.

    Moher, D., Schulz, K. F., & Altman, D. G. (2001). The CONSORT statement: Revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet, 357, 1191-1194.

    Plint, A. C., Moher, D., Morrison, A., Schulz, K., Altman, D. G., Hill, C., et al. (2006). Does the CONSORT checklist improve the quality of reports of randomised controlled trials? A systematic review. Medical Journal of Australia, 185, 263-267.

    Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A systematic approach (7th ed.). Thousand Oaks, CA: Sage.

    Scriven, M. (2003). Evaluation in the new millennium: The transdisciplinary vision. In S. I. Donaldson & M. Scriven (Eds.) Evaluating social programs and problems: Visions for the new millennium (pp. 19-42). Mahwah, NJ: Erlbaum.

    Shadish, W. R., Cook, T. D., & Campbell, D. T. (2001). Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton-Mifflin.

    Shadish, W. R., Cook, T. D., & Leviton, L. C. (1991). Foundations of program evaluation: Theories of practice. Newbury Park, CA: Sage.

    Stufflebeam, D. L. (2004). The 21st-century CIPP model: Origins, development, and use. In M. C. Alkin (Ed.), Evaluation roots (pp. 245-266). Thousand Oaks, CA: Sage.

    Weiss, C. H. (2004). On theory-based evaluation: Winning friends and influencing people. The Evaluation Exchange, IX, 4, 1-5.



    Last Update: 6/1/2016