EVAL 6970: Research on Evaluation
Description
This course is in the interdisciplinary Ph.D. in evaluation program at Western Michigan University.
Evaluators use research methods to evaluate and often improve programs. Rarely, though, are research methods applied to evaluate and improve evaluation practice and training. For the evaluation discipline to grow and earn credibility, scholars and evaluation practitioners must create a culture of empirical research on evaluation.
Historically, research on evaluation was conducted frequently, shaping practice to emphasize, among other topics, use and quality (Henry & Mark, 2003). However, several decades of stagnant effort to conduct research on evaluation have limited innovation in the field. Only in recent years have attempts to define and encourage more research on evaluation sparked new efforts.
This course is designed to expose students to the many different types of research on evaluation by engaging them in a systematic review of the research on evaluation literature. Currently, no such comprehensive source for research on evaluation literature exists.
In this project-based class, students will be expected to develop an awareness of the research on evaluation landscape and to identify and plan opportunities for contributing to it. Students will be required to locate, read, critique, summarize, present, and discuss a broad spectrum of research on evaluation studies published in the past decade. Additionally, students will be expected to formulate a detailed proposal, including background and methodology sections, for conducting their own research on evaluation study.
Syllabus
Instructor
Required readings
These papers will serve as the topics for class discussion; students are expected to read each article and come prepared to discuss it in depth with one another and with guest presenters or discussants. The list of required readings may be amended as necessary.
In addition, approximately 12 yet-to-be-determined readings across six weeks will be assigned by student presenters. These papers will be chosen as exemplars from the individual domains by the students working within each domain. All students in the class will be expected to familiarize themselves with these readings prior to the student-led presentations.
- Christie, C. A. (2003). What guides evaluation? A study of how evaluation practice maps onto evaluation theory. In C. A. Christie (Ed.), The practice-theory relationship. New Directions for Evaluation, 97, 7–36.
- Christie, C. A. (2011). Advancing empirical scholarship to further develop evaluation theory and practice. Canadian Journal of Program Evaluation, 26(1), 1-18.
- Cooksy, L. J., & Caracelli, V. J. (2005). Quality, context, and use: Issues in achieving the goals of metaevaluation. American Journal of Evaluation, 26(1), 31-42.
- Cooksy, L. J., & Caracelli, V. J. (2009). Metaevaluation in practice: Selection and application of criteria. Journal of MultiDisciplinary Evaluation, 6(11), 1-15.
- Cooksy, L. J., & Mark, M. M. (2012). Influences on evaluation quality. American Journal of Evaluation, 33(1), 79-87.
- Gargani, J. (2011). More than 25 years of the American Journal of Evaluation: Recollections of past editors in their own words. American Journal of Evaluation, 32(3), 428-447.
- Gargani, J. (2012). .
- Gargani, J. (in press). What can practitioners learn from theorists’ logic models?
- Gargani, J., & Donaldson, S. I. (2011). What works for whom, where, why, for what, and when? Using evaluation evidence to take action in local contexts. In H. T. Chen, S. I. Donaldson, & M. M. Mark (Eds.), Advancing validity in outcome evaluation: Theory and practice. New Directions for Evaluation, 130, 17–30.
- Henry, G. T., & Mark, M. M. (2003). Toward an agenda for research on evaluation. In C. A. Christie (Ed.), The practice-theory relationship. New Directions for Evaluation, 97, 69–80.
- Mark, M. M. (2001). Evaluation’s future: Furor, futile, or fertile? American Journal of Evaluation, 22(3), 457-479.
- Mark, M. M. (2008). Building a better evidence base for evaluation theory. In N. L. Smith (Ed.), Fundamental issues in evaluation (pp. 111-134). New York, NY: Guilford.
- Mark, M. M. (2011). Toward better research on—and thinking about—evaluation influence, especially in multisite evaluations. In J. A. King & F. Lawrenz (Eds.), Multisite evaluation practice: Lessons and reflections from four cases. New Directions for Evaluation, 129, 107-119.
- Miller, R. L. (2010). Developing standards for empirical examinations of evaluation theory. American Journal of Evaluation, 31(3), 390-399.
- Miller, R. L., & Campbell, R. (2006). Taking stock of empowerment evaluation: An empirical review. American Journal of Evaluation, 27(3), 296-319.
- Skolits, G. J., Morrow, J. A., & Burr, E. M. (2009). Reconceptualizing evaluator roles. American Journal of Evaluation, 30(3), 275-295.
- Smith, N. L., Brandon, P. R., Hwalek, M., Kistler, S. J., Labin, S. N., Rugh, J., Thomas, V., & Yarnall, L. (2011). Looking ahead: The future of evaluation. American Journal of Evaluation, 32(4), 565-599.
- Szanyi, M., Azzam, T., & Galen, M. (in press). Research on evaluation: A needs assessment. Canadian Journal of Program Evaluation.