Stage 4: Assessing Effectiveness and Making Refinements
Questions to Ask and Answer
Why Outcome Evaluation Is Important
Revising the Outcome Evaluation Plan
Conducting Outcome Evaluation
Refining Your Health Communication Program
Common Myths and Misconceptions About Evaluation
Questions to Ask and Answer
- How can we use outcome evaluation to assess the effectiveness of our program?
- How do we decide what outcome evaluation methods to use?
- How should we use our evaluation results?
- How can we determine to what degree we have achieved our communication objectives?
- How can we make our communication program more effective?
In Stage 3, you decided how to use process evaluation to monitor and adjust your communication activities to meet objectives. In Stage 4, you will use the outcome evaluation plan developed in Stage 1 to identify what changes (e.g., in knowledge, attitudes, or behavior) did or did not occur as a result of the program. Together, the process and outcome evaluations will tell you how the program is functioning and why. (If you combine information from the two types of evaluation, be sure that you focus on the same aspects of the program, even though you look at them from different perspectives.) This section will help you revise your plans and conduct outcome evaluation. You should begin planning assessment activities either before or soon after you launch the program.
Why Outcome Evaluation Is Important
Outcome evaluation is important because it shows how well the program has met its communication objectives and what you might change or improve to make it more effective. Learning how well the program has met its communication objectives is vital for:
- Justifying the program to management
- Providing evidence of success or the need for additional resources
- Increasing organizational understanding of and support for health communication
- Encouraging ongoing cooperative ventures with other organizations
Revising the Outcome Evaluation Plan
During Stage 1, you identified evaluation methods and drafted an outcome evaluation plan. At that time, you should have collected any necessary baseline data. The first step in Stage 4 is to review that plan to ensure it still fits your program. A number of factors will influence how your communication program’s outcomes should be evaluated, including the type of communication program, the communication objectives, budget, and timing. The outcome evaluation needs to capture intermediate outcomes and to measure the outcomes specified in the communication objectives. Doing so can allow you to show progress toward the objectives even if the objectives are not met.
Consider the following questions to assess the Stage 1 outcome evaluation plan and to be sure the evaluation will give you the information you need:
- What are the communication objectives?
What should the members of the intended audience think, feel, or do as a result of the health communication plan in contrast to what they thought, felt, or did before? How can these changes be measured?
- How do you expect change to occur?
Will it be slow or rapid? What measurable intermediate outcomes (steps toward the desired behavior) are likely to take place before the behavior change can occur? The behavior change map you created in Stage 1 should provide the answers to these questions.
- How long will the program last?
What kinds of changes can we expect in that time period (e.g., attitudinal, awareness, behavior, policy changes)? Sometimes, programs will not be in place long enough for objectives to be met when outcomes are measured (e.g., outcomes measured yearly over a 5-year program). To help ensure that you identify important indicators of change, decide which changes could reasonably occur from year to year.
- Which outcome evaluation methods can capture the scope of the change that is likely to occur?
Many outcome evaluation measures are relatively crude, which means that a large percentage of the intended audience (sometimes an unrealistically large percentage) must make a change before it can be measured. If this is the case, the evaluation is said to "lack statistical power." For example, a public survey of 1,000 people has a margin of error of about 3 percentage points. In other words, if 50 percent of the survey respondents said they engage in a particular behavior, in all likelihood somewhere between 47 percent and 53 percent of the population represented by the respondents actually engages in the behavior. Therefore, you can conclude that a statistically significant change has occurred only if a later survey shows a change of 5 or more percentage points. It may be unreasonable to expect such a large change, and budgetary constraints may force you to measure outcomes by surveying the general population when your intended audience is only a small proportion of the population.
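The margin-of-error arithmetic above can be checked with the standard formula for a simple random sample. The short sketch below (the function name and values are illustrative, not from this guide) shows where the "about 3 percent" figure for a 1,000-person survey comes from:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error (as a proportion) for a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

# A survey of 1,000 respondents in which 50% report the behavior:
moe = margin_of_error(0.50, 1000)
print(f"Margin of error: +/-{moe * 100:.1f} percentage points")  # about +/-3.1
```

Note that precision improves only with the square root of the sample size: quadrupling the sample to 4,000 respondents halves the margin to roughly 1.5 points, which is one reason detecting small changes is expensive.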
- Which aspects of the outcome evaluation plan best fit with your organization’s priorities?
Only rarely does a communication program have adequate resources to evaluate all activities. You may have to illustrate your program’s contribution to organizational priorities to ensure continued funding. If this is the case, it may be wise to evaluate those aspects most likely to contribute to the organization’s mission (assuming that those are also the ones most likely to result in measurable changes).
Conducting Outcome Evaluation
Conduct outcome evaluation by following these steps:
- Determine what information the evaluation must provide.
- Define the data to collect.
- Decide on data collection methods.
- Develop and pretest data collection instruments.
- Collect data.
- Process data.
- Analyze data to answer the evaluation questions.
- Write an evaluation report.
- Disseminate the evaluation report.
See a description of each step below.
1. Determine What Information the Evaluation Must Provide
An easy way to do this is to think about the decisions you will make based on the evaluation report. What questions do you need to answer to make those decisions?
2. Define the Data You Need to Collect
Determine what you can and should measure to assess progress on meeting objectives. Use the following questions as a guide:
- Did knowledge of the issue increase among the intended audience (e.g., understanding how to choose foods low in fat or high in fiber, knowing reasons not to smoke)?
- Did behavioral intentions of the intended audience change (e.g., intending to use a peer pressure resistance skill, intending to buy more vegetables)?
- Did intended audience members take steps leading to the behavior change (e.g., purchasing a sunscreen, calling for health information, signing up for an exercise class)?
- Did awareness of the campaign message, name, or logo increase among intended audience members?
- Were policies initiated or other institutional actions taken (e.g., putting healthy snacks in vending machines, improving school nutrition curricula)?
3. Decide on Data Collection Methods
The sidebar Outcome Evaluation Designs describes some common outcome evaluation designs, the situations in which they are appropriate, and their major limitations. (See the Communication Research Methods section for more information.) Complex, multifaceted programs often employ a range of methods so that each activity is evaluated appropriately. For example, a program that includes a mass media component to reach parents and a school-based component to reach students might use independent cross-sectional studies to evaluate the mass media component and a randomized or quasi-experimental design to evaluate the school-based component.
The following limitations can make evaluation of your communication program difficult:
- Lack of measurement precision (e.g., available data collection mechanisms cannot adequately capture change or cannot capture small changes). Population surveys may not be able to identify the small number of people making a change. Self-reported measures of behavior change may not be accurate.
- Inability to conclusively establish that the communication activity caused the observed effect.
Experimental designs, in which people are randomly assigned either to receive an intervention or not, allow you to attribute any differences observed between the group exposed to the program and the control group to the program itself. Outcome evaluations with experimental designs that run more than a few weeks, however, often wind up with contaminated control groups, either because people in the intervention group move to the control group, or because people in the control group receive messages from another source that are the same as or similar to those from your program.
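One common way to test whether a posttest difference between intervention and control groups is statistically significant is a two-proportion z-test. The sketch below is illustrative only; the counts are hypothetical and assume a clean design with no contamination:

```python
import math

def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> float:
    """z statistic for the difference between two independent proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)  # pooled proportion under the null hypothesis
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical posttest: 180 of 500 intervention-group members report the
# desired behavior versus 140 of 500 in the control group.
z = two_proportion_z(180, 500, 140, 500)
print(f"z = {z:.2f}")  # |z| > 1.96 indicates significance at the .05 level
```

With these hypothetical counts the test comes out significant, but an evaluator should still check the assumptions (independent groups, comparable measurement) before drawing conclusions.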
The more complex your evaluation design is, the more you will need expert assistance to conduct your evaluation and interpret your results. The expert can also help you write questions that produce objective results. (It’s easy to develop questions that inadvertently produce overly positive results.) If you do not have an evaluator on staff, seek help to decide what type of evaluation will best serve your program. Sources include university faculty and graduate students (for data collection and analysis), local businesses (for staff and computer time), state and local health agencies, and consultants and organizations with evaluation expertise.
Outcome Evaluation Designs Appropriate for Specific Communication Programs
Programs Not Delivered to the Entire Population of the Intended Audience
- Randomized experiment. Members of the intended audience are randomly assigned to either be exposed to the program (intervention group) or not (control group). Usually, the same series of questions is asked pre- and postintervention (a pretest and a posttest); posttest differences between the two groups show change the program has caused.
- Quasi-experiment. Members of the intended audience are split into control and intervention groups based simply upon who is exposed to the program and who is not.
- Before-and-after studies. Information is collected before and after the intervention from the same members of the intended audience to identify change from one time to another.
- Independent cross-sectional studies. Information is collected before and after the intervention, but it is collected from different intended audience members each time.
- Panel studies. Information is collected at multiple times from the same members of the intended audience. When intended audience members are differentially exposed to the program, this design helps evaluators sort out the effects of different aspects of the program or different levels of exposure.
- Time series analysis. Pre- and postintervention measures are collected multiple times from members of the intended audience. Evaluators use the preintervention data points to project what would have happened without the intervention and then compare the projection to what did happen using the postintervention data points.
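The projection step in a time series analysis can be sketched with a least-squares line fitted to the preintervention points. Everything below, including the quarterly smoking-rate figures, is hypothetical and for illustration only:

```python
def linear_projection(pre: list[float], horizon: int) -> list[float]:
    """Fit a least-squares line to preintervention points; project it forward."""
    n = len(pre)
    xs = list(range(n))
    mx = sum(xs) / n
    my = sum(pre) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, pre))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return [intercept + slope * (n + t) for t in range(horizon)]

# Hypothetical quarterly smoking rates (%) measured before the campaign:
pre = [30.0, 29.5, 29.0, 28.5]
projected = linear_projection(pre, 2)  # expected rates without the program
observed = [27.0, 26.2]                # rates actually measured after launch
effect = [o - p for o, p in zip(observed, projected)]  # negative = extra decline
```

In this toy series the rate was already falling about half a point per quarter, so only the decline beyond the projected trend line should be credited to the program.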
4. Develop and Pretest Data Collection Instruments
Most outcome evaluation methods involve collecting data about participants through observation, a questionnaire, or another method. Instruments may include tally sheets for counting public inquiries, survey questionnaires, and interview guides. Select a method that allows you to best answer your evaluation questions based upon your access to your intended audience and your resources. To develop your data collection instruments, or to select and adapt existing ones, consider the following:
The data you collect should be directly related to your evaluation questions. Although this seems obvious, it is important to check your data collection instruments against the questions your evaluation must answer. These checks will keep you focused on the information you need to know and ensure that you include the right measures. For example, if members of your intended audience must know more about a topic before behavior change can take place, make sure you ask knowledge-related questions in your evaluation.
You will need to decide how many members of each group you need data from in order to have a sufficiently powerful evaluation to assess change. Make sure you have adequate resources to collect information from that many people. Realize that you may also need a variety of data collection instruments and methods for the different groups from whom you need information.
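A standard power calculation can suggest how many people you need per group to detect a given change. The sketch below uses the usual normal-approximation formula for comparing two proportions (alpha = .05 two-sided, 80% power); the target percentages are hypothetical:

```python
import math

def sample_size_per_group(p1: float, p2: float,
                          z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Approximate n per group to detect a change from p1 to p2
    (two-sided alpha = .05, power = .80)."""
    p_bar = (p1 + p2) / 2
    num = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# To detect an increase from 20% to 30% of the audience taking an action:
n = sample_size_per_group(0.20, 0.30)  # roughly 293 people per group
```

Notice how quickly the required sample grows as the expected change shrinks; this is why deciding on realistic effect sizes before collecting data matters.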
Before you decide how to collect your data, you must assess your resources. Do you have access to, or can you train, skilled interviewers? Must you rely on self-reports from participants?
Also consider how comfortable the participants will be with the methods you choose to collect data. Will they be willing and able to fill out forms? Will they be willing to provide personal information to interviewers? Will the interviews and responses need to be translated?
5. Collect Data
Collect postprogram data. You should have collected baseline data during planning in Stage 1, before your program began, to use for comparison with postprogram data.
6. Process Data
Put the data into usable form for analysis. This may mean organizing the data to give to professional evaluators or entering the data into an evaluation software package.
7. Analyze the Data to Answer the Evaluation Questions
Use statistical techniques as appropriate to discover significant relationships. Your program might consider involving university-based evaluators, providing them with an opportunity for publication and your program with expertise.
8. Write an Evaluation Report
A report outlining what you did and why you did it, as well as what worked and what should be altered in the future, provides a solid base from which to plan future evaluations. Your program evaluation report explains how your program was effective in achieving its communication objectives and serves as a record of what you learned from both your program’s achievements and shortcomings. Be sure to include any questionnaires or other instruments in the report so that you can find them later.
See Appendix A for a sample evaluation report. As you prepare your report, you will need someone with appropriate statistical expertise to analyze the outcome evaluation data. Also be sure to work closely with your evaluators to interpret the data and develop recommendations based on them.
Writing an evaluation report will bring your organization the following additional benefits:
- You will be able to apply what you've learned to future projects. Frequently, other programs are getting under way when evaluation of an earlier effort concludes, and program planners don’t have time to digest what has been learned and incorporate it into future projects. A program evaluation report helps to ensure that what has been learned will get careful consideration.
- You will show your accountability to employers, partners, and funding agencies. Your program’s evaluation report showcases the program’s accomplishments. Even if some aspects of the program need to be modified based on evaluation results, identifying problems and addressing them shows partners and funding agencies that you are focused on results and intend to get the most benefit from their time and money.
- You will be able to give evidence of your program and methods’ effectiveness. If you want other organizations to use your materials or program, you need to demonstrate their value. An evaluation report offers proof that the materials and your program were carefully developed and tested. This evidence will help you explain why your materials or program may be better than others, or what benefits an organization could gain from using its time and resources to implement your program.
- You will provide a formal record that will help others. A comprehensive evaluation report captures the institutional memory of what was tried in the past and why, which partners had strong skills or experience in specific areas, and what problems were encountered. Everything you learned when evaluating your program will be helpful to you or others planning programs in the future.
Consider the Users
Before you write your evaluation report, consider who will read or use it. Write your report for that audience. As you did when planning your program components in Stage 1, analyze your audiences for your report before you begin to compose. To analyze your audience, ask yourself the following questions:
- Who are the audiences for this evaluation report?
- Public health program administrators
- Evaluators, epidemiologists, researchers
- Funding agencies
- Partner organizations
- Project staff
- The public
- The media
- How much information will your audience want?
- The complete report
- An executive summary
- Selected sections of the report
- How will your audience use the information in your report?
- To refine a program or policy
- To evaluate your program’s performance
- To inform others
- To support advocacy efforts
- To plan future programs
Consider the Format
Decide the most appropriate way to present information in the report to your audience. Consider the following formats:
- Concise, including hard-hitting findings and recommendations
- General, including an overview written for the public at the ninth-grade level
- Scientific, including a methodology section, detailed discussion, and references
- Visual, including more charts and graphics than words
- Case studies, including other storytelling methods
Selected Elements to Include
Depending on your chosen audience and format, include the following sections:
- Program results/findings
- Evaluation methods
- Program chronology/history
- Theoretical basis for program
- Barriers, reasons for unmet objectives
9. Disseminate the Evaluation Report
Ask selected stakeholders and key individuals to review the evaluation report before it is released so that they can identify concerns that might compromise its impact. When the report is ready for release, consider developing a dissemination strategy for the report, just as you did for your program products, so the intended audiences you’ve chosen will read it. Don’t go to the hard work of writing the report only to file it away.
Letting others know about the program results and continuing needs may prompt them to share similar experiences, lessons, new ideas, or potential resources that you could use to refine the program. In fact, feedback from those who have read the evaluation report or learned about your findings through conference presentations or journal coverage can be valuable for refining the program and developing new programs. You may want to develop a formal mechanism for obtaining feedback from peer or partner audiences. If you use university-based evaluators, the mechanism may be their publication of findings.
If appropriate, use the evaluation report to get recognition of the program’s accomplishments. Health communication programs can enhance their credibility with employers, funding agencies, partners, and the community by receiving awards from groups that recognize health programs, such as the American Medical Writers Association, the Society for Technical Communication, the American Public Health Association, and the National Association of Government Communicators. A variety of other opportunities exist, such as topic-specific awards (e.g., awards for consumer information on medications from the U.S. Food and Drug Administration) and awards for specific types of products (e.g., the International Communication Association’s awards for the top three papers of the year). Another way to get recognition is to publish articles about the program in professional journals or give a presentation or workshop at an organization meeting or conference.
Refining Your Health Communication Program
The health communication planning process is circular. The end of Stage 4 is not the end of the process but the step that takes you back to Stage 1. Review the evaluation report and consider the following to help you identify areas of the program that should be changed, deleted, or augmented:
- Goals and objectives:
- Have your goals and objectives shifted as you’ve conducted the program? If so, revise the original goals and objectives to meet the new situation.
- Are there objectives the program is not meeting? Why? What are the barriers you’re encountering?
- Has the program met all of your objectives, or does it seem not to be working at all? Consider ending the program.
- Where additional effort may be needed:
- Is there new health information that should be incorporated into the program’s messages or design?
- Are there strategies or activities that did not succeed? Review why they didn't work and determine what can be done to correct any problems.
- Implications of success:
- Which objectives have been met, and by what successful activities?
- Should successful communication activities be continued and strengthened because they appear to work well, or should they be considered successful and completed?
- Can successful communication activities be expanded to apply to other audiences or situations?
- Costs and results of different activities:
- What were the costs (including staff time) and results of different aspects of the program?
- Do some activities appear to work as well as, but cost less than, others?
- Is there evidence of program effectiveness and of continued need that you can use to persuade your organization to continue the program?
- Have you shared the results of your activities with the leadership of your organization?
- Have you shared results with partners?
- Do the assessment results show a need for new activities that would require partnerships with additional organizations?
Once you have answered the questions above and decided what needs to be done to improve the program, use the planning guidelines in Stage 1 to help determine new strategies, define expanded or different intended audiences, and rewrite/revise your communication program plan to accommodate new approaches, new tasks, and new timelines. Review information from the other stages as you plan the next phase of program activities.
Common Myths and Misconceptions About Evaluation
Myth: We can’t afford an evaluation.
Fact: Rarely does anyone have access to adequate resources for an ideal health communication program, much less an ideal evaluation. Nevertheless, including evaluation as a part of your work yields the practical benefit of telling you how well your program is working and what needs to be changed. With a little creative thinking, some form of useful evaluation can be included in almost any budget.
Myth: Evaluation is too complicated. No one here knows how to do it.
Fact: Many sources of help are available for designing an evaluation. Several pertinent texts are included in the selected readings at the end of this section. If your organization does not have employees with the necessary skills, find help at a nearby university or from someone related to your program (e.g., a board member, a volunteer, or someone from a partner organization). Also, contact an appropriate clearinghouse or Federal agency and ask for evaluation reports on similar programs to use as models. If the program has enough resources, hire a consultant with experience in health communication evaluation. Contact other communication program managers for recommendations.
Myth: Evaluation takes too long.
Fact: Although large, complicated outcome evaluation studies take time to design and analyze (and may require a sufficient time lapse for changes in attitudes or behavior to become clear), other types of evaluation can be conducted in a few weeks or months, or even as little as a day. A well-planned evaluation can proceed in tandem with program development and implementation activities. Often, evaluation seems excessively time-consuming only because it is left until the end of the program.
Myth: Program evaluation is too risky. What if it shows our funding source (or boss) that we haven’t succeeded?
Fact: A greater problem is having no results at all. A well-designed evaluation will help you measure and understand the results (e.g., if an attitude or a perception did not change, why not?). This information can direct future initiatives and help the public health community learn more about how to communicate effectively. The report should focus on what you have learned from completing the program evaluation.
Myth: We affected only 30 percent of our intended audience. Our program is a failure.
Fact: Affecting 30 percent of the intended audience is a major accomplishment; it looks like a failure only if your program’s objectives were set unrealistically high. Remember to report your results in the context of what health communication programs can be expected to accomplish. If you think the program has affected a smaller proportion of the intended audience than you wanted, consult with experts (program planning, communication, or behavioral) before setting objectives for future programs.
Myth: If our program is working, we should see results very soon.
Fact: Results will vary depending on the program, the issue, and the intended audience. Don’t expect instant results; creating and sustaining change in attitudes and particularly in behavior or behavioral intentions often takes time and commitment. Your program may show shorter term, activity-related results when you conduct your process evaluation; these changes in knowledge, information seeking, and skills may occur sooner than more complex behavioral changes.
Selected Readings
Academy for Educational Development. (1995). A tool box for building health communication capacity. Washington, DC: Author.
Agency for Toxic Substances and Disease Registry. (1994). Guidelines for planning and evaluating environmental health education programs. Atlanta, GA: Author.
Center for Substance Abuse Prevention. (1998). Evaluating the results of communication programs [Technical Assistance Bulletin]. Washington, DC: U.S. Government Printing Office.
Flay, B. R., & Cook, T. D. (1989). Three models for evaluating prevention campaigns with a mass media component. In R. E. Rice & C. K. Atkin (Eds.), Public communication campaigns (2nd ed.). Thousand Oaks, CA: Sage.
Flay, B. R., Kessler, R. C., & Utts, J. M. (1991). Evaluating media campaigns. In S. L. Coyle, R. F. Boruch, & C. F. Turner (Eds.), Evaluating AIDS prevention programs. Washington, DC: National Academy Press.
Morra, M. E. (Ed.). (1998). The impact and value of the Cancer Information Service: A model for health communication. Journal of Health Communication, 3(3) Suppl.
Muraskin, L. D. (1993). Understanding evaluation: The way to better prevention programs. Washington, DC: U.S. Department of Education.
Rice, R. E., & Atkin, C. K. (2000). Public communication campaigns (3rd ed.). Thousand Oaks, CA: Sage.
Rossi, P. H., Freeman, H. E., & Lipsey, M. W. (1998). Evaluation: A systematic approach (6th ed.). Thousand Oaks, CA: Sage.
Siegel, M., & Doner, L. (1998). Marketing public health: Strategies to promote social change. Gaithersburg, MD: Aspen.
Windsor, R. W., Baranowski, T. B., Clark, N. C., & Cutter, G. C. (1994). Evaluation of health promotion, health education and disease prevention programs (2nd ed.). Mountain View, CA: Mayfield.