Thinking About Your Evaluation Questions
Evaluations are organized around questions. An evaluation is a strategy designed to provide the information needed to answer the question of interest. Good evaluation questions suggest both a source for that information and a method for collecting it. Program evaluations, and the evaluation questions around which they are organized, can be divided into two broad categories: 1) Formative (sometimes called Process) and 2) Summative (sometimes called Outcome). In her extensive work on program evaluation, Jacobs (1988) asserts the importance of knowing the stage of development of the program being evaluated. In the chart below, we depict how a program's stage of development can influence the choice of formative or summative evaluation over time.
Formative Evaluation
Formative evaluation focuses on program improvement. It is conducted during the creation of the program to provide information that staff can use to make changes in the operation and implementation of the program. Formative evaluation typically is conducted early in the life cycle of a program, such as during the development and implementation phases. Data collection can occur at multiple points in time: before, during and/or after the program is implemented.
Summative Evaluation
Summative evaluation focuses on program products, results or impact. It is conducted to provide evidence about the worth or merit of a program. Summative evaluation typically is conducted later in the life cycle of a program, once issues related to the operations of the program have been resolved.
Formative versus Summative Evaluation
The appropriateness of different types of questions underlying a program evaluation can change as a program moves from implementation to maturity. This balance of formative and summative evaluation as a program moves from innovation to routine is illustrated in the Ages and Stages figure below. This figure was created by the authors of this tutorial, who based it on ideas from Jacobs, 1988.
It is not unusual for an evaluation to focus on multiple questions, which might include both formative and summative measures.
For new programs, there is frequently a concern about basic implementation issues: essentially, "how well did the program transfer from paper to the real world?" Many of these kinds of questions fall within the domain of formative evaluation and can guide program implementation over time to improve the program's performance.
Short-term outcomes might be measurable for newer programs, depending on the success of initial implementation efforts. As individuals complete the program, there is an interest in documenting whether there was a change in attitudes, skills or knowledge as a result of program participation.
Long-term outcomes often can only be documented for established programs. These kinds of questions focus on changes in attitudes, knowledge or skills that extend beyond the period of program participation and into the daily work of the participants. Over time, such long-term changes can also lead to institutional change.
What is the Focus of the Question?
Program evaluations can focus on a wide range of possibilities, based on different features or elements of the program.
Often in designing a program evaluation, there are multiple questions of interest to various stakeholders, such as program developers, accrediting organizations, administrators or funding organizations. These questions can be prioritized in terms of their importance, the ease of data collection and the resources available for evaluation.
The figure below illustrates the various features of a program that could be the subject of evaluation, as well as possible domains around which questions could be developed. This figure was created by the authors of this tutorial, who based it on ideas from Elissavet & Economides, 2003.
EXAMPLE
Earlier we established three program objectives: 1) to improve medical students’ communication skills in patient education, shared decision-making and delivering bad news; 2) to use experienced faculty as facilitators; and 3) to emphasize the clinical relevance of communication to the practice of surgery.
In this example we have written a set of evaluation questions related to those objectives:
Objective 1:
– Do students rate these workshops as valuable in terms of:
  - Quality of Instruction?
  - Relevance of the Content?
  - Amount of Instruction?
– Do students rate themselves as having improved competence in the three areas of communication?
– Do students demonstrate communication competence in standardized clinical encounters?
Objective 2:
– Do students rate the involvement of experienced surgeons as faculty facilitators as valuable?
– Do faculty rate their own involvement in this course as valuable?
Objective 3:
– Do students understand the impact of physician communication skills on patient outcomes?
See Example 1 in the Resources menu at the right for this example evaluation plan completed thus far.
WRITE YOUR OWN EVALUATION QUESTIONS
In your Evaluation Planning Tool, write your evaluation questions in column 4. Make sure each objective has at least one evaluation question. This is demonstrated in the examples in the Resources menu.
Do you need more information? Check out the RESOURCES menu at the right.