Introduction
In the ever-evolving landscape of public health, the pursuit of health improvement remains the central goal. Public health professionals dedicate their expertise and commitment to achieve this by evaluating the effectiveness of their actions. As the scope of public health initiatives expands beyond infectious diseases to encompass chronic conditions, violence prevention, emerging pathogens, bioterrorism threats, and the social determinants of health disparities, the complexity of evaluation has grown significantly. To navigate this intricate transition while maintaining accountability and a focus on measurable health outcomes, the Centers for Disease Control and Prevention (CDC) developed a comprehensive framework for program evaluation.
Integrating the principles of this framework into all CDC program operations is designed to foster innovation in outcome improvement and enhance the ability to detect program effects promptly. This timely detection is crucial for translating evaluation findings into practical improvements. The framework’s steps and standards guide an evolution in program planning, ensuring that prevention research informs clearer and more logical program designs. Stronger partnerships, facilitated by the framework, enable collaborators to concentrate on shared objectives. Integrated information systems support more systematic measurement, and lessons derived from evaluations are used more effectively to guide adjustments in public health strategies.
The publication of this framework underscores CDC’s ongoing dedication to enhancing community health. Recognizing that isolated categorical strategies are insufficient, public health professionals across diverse program areas must collaborate to evaluate their combined impact on community health. This collaborative evaluation is the only way to realize and demonstrate the success of a vision: healthy people in a healthy world through prevention.
This report was prepared by Robert L. Milstein, M.P.H., and Scott F. Wetterhall, M.D., M.P.H., along with the CDC Evaluation Working Group and numerous contributors and consultants from various public health institutions and organizations.
Summary: The Essence of Program Evaluation
Effective program evaluation is a systematic approach to enhancing and justifying public health actions. It involves procedures that are useful, feasible, ethical, and accurate. This framework serves as a guide for public health professionals in implementing program evaluation, offering a practical, non-prescriptive tool to summarize and organize essential components. The framework comprises key steps in program evaluation practice and standards for ensuring effectiveness. By adhering to these steps and standards, a deeper understanding of each program’s unique context is achieved, leading to improved conception and execution of program evaluations. Furthermore, the framework promotes an evaluation approach that is seamlessly integrated into routine program operations, emphasizing practical, ongoing strategies that engage all program stakeholders, not just evaluation specialists. Understanding and applying this framework is a powerful catalyst for planning effective public health strategies, refining existing programs, and demonstrating the tangible results of resource investments.
Background: The Need for a Structured Approach
Program evaluation is a fundamental organizational practice in public health, yet its consistent application across all program areas and its integration into daily program management remain challenges. Evaluation is crucial for upholding CDC’s operational principles, which emphasize:
- Science-based decision-making and public health action.
- Promoting social equity through public health initiatives.
- Effective service delivery.
- Outcome orientation.
- Accountability.
These principles highlight the necessity for programs to develop well-defined plans, foster inclusive partnerships, and establish feedback mechanisms for continuous learning and improvement. Routine, practical evaluations are essential to ensure both new and existing programs adhere to these principles, providing valuable information for management and enhancing program effectiveness.
This report introduces a framework to facilitate understanding and integration of program evaluation throughout the public health system. The primary objectives of this report are to:
- Summarize the core elements of program evaluation.
- Provide a framework for conducting effective program evaluations.
- Clarify the distinct steps involved in program evaluation.
- Review the established standards for effective program evaluation.
- Address common misconceptions regarding the purposes and methodologies of program evaluation.
Evolution of Program Evaluation
Over the past three decades, evaluation has matured as a distinct discipline, marked by new definitions, methodologies, approaches, and applications across diverse subjects and settings. Despite these advancements, a foundational organizational framework specifically for program evaluation in public health practice was lacking. In response to this gap, the CDC Director and executive team recognized the critical need for such a framework in May 1997, emphasizing the importance of integrating evaluation with program management. They also stressed the necessity for evaluation studies to demonstrate the clear link between program activities and prevention effectiveness. Consequently, CDC convened the Evaluation Working Group, tasking it with the development of a framework that effectively summarizes and organizes the fundamental elements of program evaluation.
Framework Development Process
The Evaluation Working Group, comprising representatives from across CDC and in collaboration with state and local health officials, engaged in a year-long information-gathering phase, seeking input from eight distinct reference groups. These groups included:
- Evaluation experts
- Public health program managers and staff
- State and local public health officials
- Nonfederal public health program directors
- Public health organization representatives and educators
- Community-based researchers
- U.S. Public Health Service (PHS) agency representatives
- CDC staff
In February 1998, the Working Group hosted a workshop dedicated to developing a framework for evaluation in public health practice, with approximately 90 representatives participating. Additionally, the working group conducted interviews with around 250 individuals, reviewed a wide range of evaluation reports (both published and unpublished), consulted with stakeholders from various programs to test the framework’s applicability, and maintained a website to disseminate documents and gather feedback. In October 1998, a national distance-learning course featuring the framework was launched through CDC’s Public Health Training Network (8), reaching an audience of approximately 10,000 professionals. These comprehensive information-sharing strategies provided the working group with ample opportunities to test and refine the framework with public health practitioners.
Defining Key Concepts
Throughout this report, the term “program” is used broadly to encompass any organized public health action that is the subject of evaluation. This inclusive definition allows the framework to be applied to a wide spectrum of public health activities, including:
- Direct service interventions
- Community mobilization efforts
- Research initiatives
- Surveillance systems
- Policy development activities
- Outbreak investigations
- Laboratory diagnostics
- Communication campaigns
- Infrastructure-building projects
- Training and educational services
- Administrative systems
The additional terms defined in this report are intended to establish a shared vocabulary for evaluation within the public health community.
Integrating Evaluation into Program Practice
Evaluation becomes an integral part of routine program operations when the focus shifts to practical, continuous evaluation that involves all program staff and stakeholders, not just specialized evaluators. Evaluation complements program management by providing essential information for improving program effectiveness and ensuring accountability. Public health professionals already utilize informal evaluation processes in their daily work, such as responding to community concerns, consulting with partners, using feedback to make judgments, and refining program operations (9). While these informal processes are suitable for ongoing program assessment and minor adjustments, more explicit, formal, and justifiable evaluation procedures become crucial when significant decisions or program changes are considered, such as determining service offerings in a national health promotion program (10).
Assigning Value to Program Activities
Evaluating program activities involves addressing questions of value, which typically encompass three interconnected aspects:
- Merit (Quality): How good is the program?
- Worth (Cost-Effectiveness): Is the program worth its cost?
- Significance (Importance): Does the program make a meaningful difference?
Judging a program’s value and making evidence-based decisions requires answering the following fundamental questions (3, 4, 11):
- What to evaluate? Define the program and its context.
- What aspects to consider? Identify key program dimensions for performance judgment.
- What are the standards? Determine the performance levels required for program success.
- What evidence to use? Identify evidence to demonstrate program performance.
- What conclusions are justified? Compare evidence against standards to draw performance conclusions.
- How to use lessons learned? Apply evaluation insights to improve public health effectiveness.
These questions should be addressed at the outset of a program and revisited throughout its implementation. The framework detailed in this report provides a systematic approach to answering these critical questions.
Framework for Program Evaluation in Public Health
Effective program evaluation is a systematic method for improving and accounting for public health actions, employing procedures that are useful, feasible, ethical, and accurate. This framework is designed to guide public health professionals in utilizing program evaluation effectively. It is a practical, non-prescriptive tool intended to summarize and organize the essential elements of program evaluation. The framework consists of distinct steps in evaluation practice and established standards for effective evaluation, as illustrated in Figure 1.
Figure 1. Steps in Program Evaluation and Standards for Effective Evaluation
This framework is structured around six interconnected steps that are fundamental to any evaluation. These steps serve as starting points for tailoring an evaluation to a specific public health effort at a particular time. While the steps are interdependent and may be addressed in a non-linear sequence, they follow a logical order, with earlier steps laying the groundwork for subsequent progress. Decisions made within each step are iterative and should be revisited and refined as needed.
The six steps are:
- Engage stakeholders.
- Describe the program.
- Focus the evaluation design.
- Gather credible evidence.
- Justify conclusions.
- Ensure use and share lessons learned.
Adhering to these six steps facilitates a comprehensive understanding of a program’s context, including its history, setting, and organizational structure, and enhances the overall quality of evaluation conception and execution.
The framework’s second key element is a set of 30 standards for assessing the quality of evaluation activities. These standards are categorized into four groups:
- Utility
- Feasibility
- Propriety
- Accuracy
These standards, adapted from the Joint Committee on Standards for Educational Evaluation (12), address the critical question: “Will this evaluation be effective?” They serve as recommended criteria for evaluating the quality of program evaluation efforts in public health. The following sections detail each step, its sub-components, and the governing standards for effective program evaluation (Box 1).
Box 1. Steps in Evaluation Practice and Standards for Effective Evaluation
Steps in Evaluation Practice
- Engage stakeholders: Identify and involve those invested in the evaluation.
- Describe the program: Define need, expected effects, activities, resources, stage, context, and logic model.
- Focus the evaluation design: Clarify purpose, users, uses, questions, methods, and agreements.
- Gather credible evidence: Select indicators, sources, and ensure quality, quantity, and logistical considerations.
- Justify conclusions: Apply standards, analyze/synthesize data, interpret findings, make judgments, and formulate recommendations.
- Ensure use and share lessons learned: Focus on design, preparation, feedback, follow-up, and dissemination.
Standards for Effective Evaluation
- Utility: Ensure the evaluation serves the information needs of intended users.
- Feasibility: Ensure the evaluation is realistic, prudent, diplomatic, and frugal.
- Propriety: Ensure the evaluation is legal, ethical, and respects the welfare of all involved.
- Accuracy: Ensure the evaluation provides and conveys technically accurate information.
Steps in Program Evaluation: A Detailed Guide
Step 1: Engaging Stakeholders
The evaluation process begins with engaging stakeholders, defined as individuals or organizations with a vested interest in the evaluation’s findings and their subsequent use. Public health work is inherently collaborative, necessitating consideration of the values of all partners in any program assessment. Stakeholder engagement ensures that diverse perspectives are understood and incorporated. Without it, evaluations may overlook crucial aspects of program objectives, operations, and outcomes. This can lead to evaluation findings being disregarded, criticized, or resisted if stakeholder concerns and values are not addressed (12). Active stakeholder involvement is essential throughout all subsequent steps of the evaluation process.
Three primary groups of stakeholders are critical to identify and engage:
- Those involved in program operations: Sponsors, collaborators, coalition partners, funding officials, administrators, managers, and staff.
- Those served or affected by the program: Clients, family members, neighborhood organizations, academic institutions, elected officials, advocacy groups, professional associations, skeptics, opponents, and staff of related or competing organizations.
- Primary users of the evaluation: The specific individuals who are in a position to make decisions or take action based on the evaluation findings.
Those Involved in Program Operations: These stakeholders have a direct interest in the evaluation process as program modifications may result from the findings. While they collaborate on the program, they may not represent a unified interest group, with subgroups holding different perspectives and agendas. It is crucial to distinguish program evaluation from personnel evaluation, as stakeholders in operational roles might perceive evaluation as a personal judgment (13).
Those Served or Affected by the Program: Individuals and organizations directly or indirectly affected by the program should be engaged to the extent possible. This includes both program supporters and those who may be skeptical or opposed. Engaging program opponents can be particularly valuable, as their perspectives can strengthen the evaluation’s credibility by addressing diverse values and concerns.
Primary Users of the Evaluation: These are the key individuals who will utilize the evaluation findings to make decisions or take action regarding the program. Identifying primary users early in the evaluation process and maintaining consistent communication ensures the evaluation addresses their specific information needs and values (7).
The scope and intensity of stakeholder involvement will vary depending on the specific program evaluation. Box 2 outlines example activities for engaging stakeholders (14), ranging from direct involvement in design and execution to regular updates through meetings and reports. Fostering trust and managing potential conflicts are vital to prevent misuse of the evaluation process by stakeholders attempting to sabotage or distort program outcomes.
Box 2. Engaging Stakeholders: Definition, Role, and Example Activities
Definition: Actively fostering input, participation, and power-sharing among individuals and organizations invested in the evaluation process and its findings, particularly primary users.
Role: Enhances the usefulness and credibility of the evaluation, clarifies roles and responsibilities, promotes cultural competence, protects human subjects, and mitigates conflicts of interest.
Example Activities:
- Consulting both internal stakeholders (leaders, staff, clients, funding sources) and external stakeholders (skeptics).
- Making targeted efforts to include less powerful groups or individuals.
- Coordinating stakeholder input throughout evaluation design, operation, and use.
- Avoiding overly broad stakeholder identification that could hinder evaluation progress.
Step 2: Describing the Program
Program descriptions are essential for conveying the mission and objectives of the program being evaluated. These descriptions should be sufficiently detailed to ensure a clear understanding of program goals and strategies. The description should encompass the program’s capacity for change, its developmental stage, and its integration within the broader organizational and community context. Program descriptions establish the foundational frame of reference for all subsequent evaluation decisions. They facilitate comparisons with similar programs and help establish connections between program components and their effects (12). Stakeholders may hold varying interpretations of program goals and purposes, making a shared program definition crucial for effective evaluation. Negotiating a clear and logical program description with stakeholders can yield benefits even before data collection begins (7).
Key aspects to include in a program description are:
- Need: A clear articulation of the problem or opportunity the program addresses and how the program intends to respond. This includes the nature and magnitude of the issue, affected populations, and any changes in the need over time.
- Expected Effects: Descriptions of what the program aims to achieve to be considered successful. These effects should be outlined across different timeframes, from immediate to long-term consequences, encompassing the program’s mission, goals, and objectives. Potential unintended consequences should also be considered.
- Activities: A detailed account of what the program does to effect change, presented as a logical sequence of steps, strategies, or actions. This clarifies the program’s hypothesized mechanism or theory of change (16, 17) and distinguishes program-led activities from those conducted by partners or related programs (18). External factors that might influence program success should also be noted.
- Resources: An inventory of the time, talent, technology, equipment, information, financial resources, and other assets available to support program activities. This description should highlight the scale and intensity of program services and identify any mismatches between desired activities and available resources. Economic evaluations require a comprehensive understanding of all direct and indirect program inputs and costs (19-21).
- Stage of Development: Recognizing that public health programs evolve, a program’s stage of development reflects its maturity. Programs in the planning, implementation, or effects stage will have different evaluation priorities (22). Evaluation goals should be tailored to the program’s stage, focusing on refining plans in early stages, improving operations during implementation, and assessing intended and unintended effects in mature programs.
- Context: A description of the program’s setting and environmental influences, including historical, geographical, political, social, and economic conditions, as well as the efforts of related or competing organizations (6). Understanding the context is essential for designing context-sensitive evaluations and interpreting findings accurately, including assessing their generalizability.
- Logic Model: A visual representation, often a flowchart, map, or table, that outlines the sequence of events through which the program is expected to achieve change (23-35). Figure 2 provides an example of a logic model. The logic model summarizes the program’s mechanism of change, linking processes to expected effects, and can also depict the necessary infrastructure for program operations. Typical elements include inputs, activities, outputs, and results across immediate, intermediate, and long-term timeframes. Developing a logic model helps stakeholders clarify program strategies, improve program direction, reveal underlying assumptions, and provide a framework for evaluation. It can also strengthen claims of causality and serve as a basis for estimating program effects on outcomes that are not directly measured but are logically linked within the model (35).
Figure 2. Example of a Program Logic Model
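A logic model can also be captured in a simple structured form so that later steps (indicator selection, judging results) can refer back to it. The following minimal Python sketch is purely illustrative and is not part of the framework; the program name, activities, and outcome labels are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """Minimal record of a program logic model: inputs -> activities -> outputs -> results."""
    program: str
    inputs: list[str] = field(default_factory=list)
    activities: list[str] = field(default_factory=list)
    outputs: list[str] = field(default_factory=list)
    immediate_results: list[str] = field(default_factory=list)
    intermediate_results: list[str] = field(default_factory=list)
    long_term_results: list[str] = field(default_factory=list)

    def summary(self) -> str:
        """One-line chain summarizing the hypothesized mechanism of change."""
        total_results = (len(self.immediate_results)
                         + len(self.intermediate_results)
                         + len(self.long_term_results))
        return " -> ".join([
            f"inputs({len(self.inputs)})",
            f"activities({len(self.activities)})",
            f"outputs({len(self.outputs)})",
            f"results({total_results})",
        ])

# Hypothetical example: a community tobacco-cessation effort.
model = LogicModel(
    program="Community tobacco cessation (hypothetical)",
    inputs=["trained counselors", "clinic space", "state funding"],
    activities=["group counseling sessions", "provider referrals"],
    outputs=["sessions delivered", "participants enrolled"],
    immediate_results=["quit attempts"],
    intermediate_results=["sustained abstinence at 6 months"],
    long_term_results=["reduced smoking prevalence"],
)
print(model.summary())  # inputs(3) -> activities(2) -> outputs(2) -> results(3)
```

A representation like this makes it easy to check, in later steps, that each element of the model has at least one indicator and at least one evidence source.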
Program descriptions will be unique to each evaluation. Box 3 outlines example activities for describing the program, such as using multiple information sources to create a comprehensive description. The accuracy of a program description can be validated through stakeholder consultation and by comparing reported descriptions with direct observations of program activities in the field. A program description can be enhanced by considering factors such as staff turnover, resource limitations, political pressures, and community engagement levels that may impact program performance.
Box 3. Describing the Program: Definition, Role, and Example Activities
Definition: Thorough examination of the program’s features, including its purpose and place in a broader context. Encompasses both intended program function and actual implementation, as well as contextual factors influencing evaluation conclusions.
Role: Enhances evaluation fairness and accuracy, enables balanced assessment of program strengths and weaknesses, and helps stakeholders understand program components and their broader context.
Example Activities:
- Characterizing the need(s) addressed by the program.
- Listing specific expectations as goals, objectives, and success criteria.
- Clarifying the rationale behind program activities leading to expected changes.
- Developing an explicit logic model to illustrate relationships between program elements and expected changes.
- Assessing the program’s maturity or stage of development.
- Analyzing the program’s operational context.
- Considering the program’s connections to other ongoing initiatives.
- Avoiding overly precise descriptions for programs still under development.
Step 3: Focusing the Evaluation Design
Focusing the evaluation design is crucial for assessing the most relevant issues for stakeholders while optimizing the efficient use of time and resources (7, 36, 37). Different design options vary in their suitability for meeting stakeholder information needs. Once data collection begins, procedural changes can be difficult or impossible, even if better methods emerge. A well-defined plan anticipates intended uses and creates an evaluation strategy with the highest likelihood of being useful, feasible, ethical, and accurate.
Key considerations when focusing an evaluation include:
- Purpose: Articulating the evaluation’s intent prevents premature decisions about evaluation methods. Program characteristics, particularly its stage of development and context, influence the evaluation’s purpose. Public health evaluations typically serve four general purposes (Box 4):
- Gain Insight: To explore the feasibility of innovative approaches or to understand program operations in depth.
- Change Practice: To improve program implementation, quality, effectiveness, or efficiency during the implementation stage.
- Assess Effects: To examine the relationship between program activities and observed outcomes, appropriate for mature programs. This includes both attribution (direct cause-and-effect) and contribution (program’s role in broader changes).
- Affect Participants: To use the evaluation process itself to positively influence stakeholders, such as reinforcing program messages, empowering participants, promoting staff development, or facilitating organizational growth (7, 38-42).
Box 4. Selected Uses for Evaluation in Public Health Practice by Category of Purpose
Gain Insight
- Assess community needs, desires, and assets.
- Identify barriers and facilitators to service use.
- Learn to describe and measure program activities and effects.
Change Practice
- Refine plans for new service introduction.
- Characterize intervention plan implementation extent.
- Improve educational material content.
- Enhance cultural competence.
- Verify participant rights protection.
- Set staff training priorities.
- Adjust patient/client flow mid-course.
- Improve health communication message clarity.
- Determine customer satisfaction improvement potential.
- Mobilize community support.
Assess Effects
- Assess participant skill development.
- Compare provider behavior changes over time.
- Compare costs and benefits.
- Identify successful program participants.
- Decide resource allocation.
- Document objective accomplishment levels.
- Demonstrate accountability fulfillment.
- Aggregate evaluations to estimate outcome effects for similar programs.
- Gather success stories.
Affect Participants
- Reinforce intervention messages.
- Stimulate dialogue and raise health issue awareness.
- Broaden coalition member consensus on program goals.
- Teach evaluation skills to staff and stakeholders.
- Support organizational change and development.
- Users: Identify the specific individuals who will receive and utilize evaluation findings. Users should be involved in choosing the evaluation focus as they experience the direct consequences of design choices (7). User involvement is crucial for clarifying intended uses, prioritizing questions and methods, and ensuring the evaluation remains relevant and focused.
- Uses: Define the specific ways evaluation information will be applied. Vague statements of use are less effective than clearly defined, prioritized uses linked to specific users. Uses should be planned with stakeholder input and consider the program’s stage of development and context.
- Questions: Establish the boundaries of the evaluation by specifying the program aspects to be addressed (5-7). Developing evaluation questions encourages stakeholders to articulate their information needs. Negotiating and prioritizing questions among stakeholders refines the evaluation focus. This phase may also reveal differing stakeholder opinions on the unit of analysis, such as community-level systems, individual programs, or program components. Clear decisions on questions and units of analysis are essential for subsequent steps.
- Methods: Select evaluation methods from scientific research options, particularly those from social, behavioral, and health sciences (5-7, 43-48). Design types include experimental, quasi-experimental, and observational designs (43, 48). Method selection should align with stakeholder questions and information needs. Experimental designs use random assignment, quasi-experimental designs compare non-equivalent groups or use time series data, and observational methods use within-group comparisons (45, 52-54). Methodological decisions also determine data collection, instrument selection, data management, and analysis approaches. Mixed-methods evaluations are often more effective due to the limitations of any single method (44, 56-58). Methods may need to be revised during the evaluation to adapt to changing circumstances or intended uses (a brief design sketch follows this list).
- Agreements: Formalize procedures and clarify roles and responsibilities in a written agreement (6, 12). Agreements outline how the evaluation plan will be implemented with available resources, safeguard human subjects, and address ethical and administrative approvals (59, 60). Key elements include purpose, users, uses, questions, methods, deliverables, timeline, and budget. Agreements verify mutual understanding and provide a basis for modifications if needed.
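To illustrate the kind of comparison a quasi-experimental design supports, the sketch below contrasts the change in an outcome for a program group with the change in a non-equivalent comparison group. It is a minimal, hypothetical example: all figures are invented, and a real evaluation would add uncertainty estimates and checks of the groups' comparability.

```python
# Minimal sketch of a quasi-experimental comparison: change in an outcome for the
# program group versus a non-equivalent comparison group (difference in differences).
# All figures below are invented for illustration, not program data.

def mean(values):
    return sum(values) / len(values)

# Hypothetical screening rates (per 100 clients) before and after the program period.
program_before = [22, 25, 19, 24]
program_after = [31, 35, 29, 33]
comparison_before = [21, 23, 20, 22]
comparison_after = [24, 25, 23, 24]

program_change = mean(program_after) - mean(program_before)
comparison_change = mean(comparison_after) - mean(comparison_before)

# The difference in changes is a rough estimate of the program-associated effect.
estimated_effect = program_change - comparison_change
print(f"Program group change: {program_change:+.1f}")
print(f"Comparison group change: {comparison_change:+.1f}")
print(f"Difference-in-differences estimate: {estimated_effect:+.1f} per 100 clients")
```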
Box 5 outlines example activities for focusing the evaluation design, such as consulting both program supporters and skeptics to ensure politically viable questions and circulating potential evaluation uses to prioritize stakeholder interests.
Box 5. Focusing the Evaluation Design: Definition, Role, and Example Activities
Definition: Planning the evaluation’s direction and steps in advance, iteratively refining the approach to answer evaluation questions with methods deemed useful, feasible, ethical, and accurate by stakeholders.
Role: Ensures evaluation quality, increases the likelihood of success by identifying practical, politically viable, and cost-effective procedures. Thorough planning prevents impractical or useless evaluations. Stakeholder agreement on design focus keeps the project on track.
Example Activities:
- Meeting with stakeholders to clarify evaluation purpose.
- Identifying and engaging intended users to tailor the plan to their needs.
- Understanding how evaluation results will be used.
- Writing explicit evaluation questions.
- Describing practical methods for sampling, data collection, analysis, interpretation, and judgment.
- Preparing a written protocol or agreement outlining procedures, roles, and responsibilities.
- Revising the evaluation plan as circumstances change.
Step 4: Gathering Credible Evidence
An evaluation should aim to gather information that provides a comprehensive picture of the program and is perceived as credible by primary users. Credibility is subjective and depends on the evaluation questions and the motives behind them. For some questions, rigorous experimental evidence may be required, while for others, systematic observations may suffice. Consulting evaluation methodology specialists is advisable when data quality is critical or when errors in inference carry significant consequences (61, 62).
Credible evidence strengthens evaluation judgments and recommendations. While all data types have limitations, overall credibility is enhanced by using multiple procedures for data gathering, analysis, and interpretation, as well as stakeholder participation (7, 38). Aspects of evidence gathering that influence credibility include:
- Indicators: Define program attributes relevant to the evaluation’s focus and questions (63-66). Indicators translate general concepts into specific, measurable terms, providing a basis for valid and reliable data collection. They reflect program aspects meaningful for monitoring (66-70) and can measure program activities (service delivery capacity, participation rates, client satisfaction, resource efficiency) and program effects (behavior changes, community norms, health status). Logic models can guide indicator development, linking program activities to expected effects (23, 29-35). Multiple indicators are needed to track program implementation and effects, allowing for early detection of performance changes. Indicators should be related to the logic model to clarify accountability and show how intermediate effects contribute to health outcomes. Intangible factors can be measured by recording markers of their expression (72, 73). Indicators may need modification during the evaluation. However, performance indicators alone are not a substitute for the full evaluation process and justified conclusions (66, 67, 74).
- Sources: Sources of evidence are individuals, documents, or observations providing information for the evaluation (Box 6). Multiple sources for each indicator provide diverse perspectives and enhance credibility. Inside perspectives from program staff and documents can be balanced with external perspectives from clients or neutral observers. Source selection criteria should be transparent to allow users to assess potential bias (45, 75-77). Integrating qualitative and quantitative data from various sources can create a balanced evidence base (43, 45, 56, 57, 78-80). Existing evaluations can also serve as sources for synthesis evaluations (58, 81, 82).
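As a concrete (and purely hypothetical) illustration of the indicators described above, the sketch below records each indicator together with the logic-model stage it tracks, its data source, and a comparison target; the indicator names, sources, and targets are invented for the example.

```python
# Hypothetical sketch of indicator records: each names the program aspect it tracks,
# the data source, and the comparison point. All entries are invented for illustration.

indicators = [
    {
        "logic_model_stage": "activities",
        "name": "counseling sessions delivered per month",
        "source": "program activity logs",
        "target": 40,
    },
    {
        "logic_model_stage": "outputs",
        "name": "participation rate among referred clients",
        "source": "enrollment forms",
        "target": 0.60,
    },
    {
        "logic_model_stage": "intermediate results",
        "name": "clients abstinent at 6-month follow-up",
        "source": "follow-up survey",
        "target": 0.25,
    },
]

for ind in indicators:
    print(f"[{ind['logic_model_stage']}] {ind['name']} "
          f"(source: {ind['source']}, target: {ind['target']})")
```

Recording indicators this way keeps the link between each measure, its logic-model stage, and its evidence source explicit when data collection begins.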
Box 6. Selected Sources of Evidence for an Evaluation
Persons:
- Clients, program participants, nonparticipants
- Staff, program managers, administrators
- General public
- Key informants
- Funding officials
- Critics/skeptics
- Staff of other agencies
- Representatives of advocacy groups
- Elected officials, legislators, policymakers
- Local and state health officials
Documents:
- Grant proposals, newsletters, press releases
- Meeting minutes, administrative records, registration/enrollment forms
- Publicity materials, quarterly reports
- Publications, journal articles, posters
- Previous evaluation reports
- Asset and needs assessments
- Surveillance summaries
- Database records
- Records held by funding officials or collaborators
- Internet pages
- Graphs, maps, charts, photographs, videotapes
Observations:
- Meetings, special events/activities, job performance
- Service encounters
- Quality: Refers to the appropriateness and integrity of evaluation information. High-quality data are reliable, valid, and informative for their intended use, facilitated by well-defined indicators. Quality is influenced by instrument design, data collection procedures, training of data collectors, source selection, data management, and error checking. Quality considerations involve trade-offs (breadth vs. depth) that should be negotiated with stakeholders. Practical evaluations aim for a level of quality that meets stakeholder credibility thresholds.
- Quantity: Refers to the amount of evidence gathered. The required quantity should be estimated in advance, or criteria set for iterative processes. Quantity affects confidence levels, precision, and the power to detect effects (83). All collected evidence should have a clear, anticipated use, and respondent burden should be minimized (a sample-size sketch follows this list).
- Logistics: Encompasses the methods, timing, and infrastructure for evidence gathering and handling. Techniques (Box 7) must suit sources, analysis plans, and communication strategies. Cultural preferences dictate acceptable data collection methods. Procedures (Box 8) must align with cultural conditions and protect privacy and confidentiality (59, 60, 84).
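Because the Quantity consideration turns on precision and statistical power, it can help to estimate the needed amount of evidence before data collection begins. The sketch below is a minimal, assumption-laden illustration for one common situation, comparing a proportion between two groups; the proportions, significance level, and power are illustrative choices, not values prescribed by the framework.

```python
# Minimal sketch of estimating evidence quantity in advance, assuming the evaluation
# question calls for comparing a proportion between two groups. The proportions,
# significance level, and power below are illustrative choices only.

from math import ceil
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per group to detect a difference between
    proportions p1 and p2 with a two-sided test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return ceil(n)

# Hypothetical example: detecting an increase in a screening rate from 20% to 30%.
print(sample_size_two_proportions(0.20, 0.30))  # prints 291 per group with these inputs
```

Other evaluation questions (for example, estimating a single proportion or comparing means) call for different formulas, and, as noted above, a methodology specialist should be consulted when the consequences of an inferential error are significant.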
Box 7. Selected Techniques for Gathering Evidence
- Written surveys (handout, telephone, fax, mail, e-mail, Internet)
- Personal interviews (individual, group; structured, semi-structured, conversational)
- Observation
- Document analysis
- Case study
- Group assessment (brainstorming, nominal group)
- Role play, dramatization
- Expert or peer review
- Portfolio review
- Testimonials
- Semantic differentials, paired comparisons, similarity tests
- Hypothetical scenarios
- Storytelling
- Geographical mapping
- Concept mapping
- Pile sorting
- Free-listing
- Social network diagramming
- Simulation, modeling
- Debriefing sessions
- Cost accounting
- Photography, drawing, art, videography
- Diaries or journals
- Logs, activity forms, registries
Box 8. Gathering Credible Evidence: Definition, Role, and Example Activities
Definition: Compiling information perceived as trustworthy and relevant by stakeholders for answering evaluation questions. Evidence can be experimental or observational, qualitative or quantitative, or mixed-methods.
Role: Enhances evaluation utility and accuracy, guides information scope and selection, prioritizes defensible sources, and promotes valid, reliable, and systematic data as a foundation for effective evaluation.
Example Activities:
- Choosing indicators that meaningfully address evaluation questions.
- Describing information source attributes and selection rationale.
- Establishing clear procedures and training staff for high-quality data collection.
- Periodically monitoring information quality and implementing improvement steps.
- Estimating required information quantity or setting criteria for iterative data collection processes.
- Safeguarding information and source confidentiality.
Step 5: Justifying Conclusions
Evaluation conclusions are justified when they are logically linked to the gathered evidence and assessed against agreed-upon values or standards set by stakeholders. Stakeholder agreement on the justification of conclusions is essential for them to confidently use evaluation results. Justifying conclusions involves:
- Standards: Reflect stakeholder values and provide the basis for judging program performance. Explicit standards differentiate evaluation from management approaches that set priorities without value references. Stakeholder values, once articulated and negotiated, become the standards for judging program success, adequacy, or failure. Value systems can serve as sources for norm-referenced or criterion-referenced standards (Box 9). Standards operationalize comparisons for program judgment (3, 7, 12).
Box 9. Selected Sources of Standards for Judging Program Performance
- Needs of participants
- Community values, expectations, norms
- Degree of participation
- Program objectives
- Program protocols and procedures
- Expected performance, forecasts, estimates
- Feasibility
- Sustainability
- Absence of harms
- Targets or fixed performance criteria
- Change in performance over time
- Performance by previous or similar programs
- Performance by a control or comparison group
- Resource efficiency
- Professional standards
- Mandates, policies, statutes, regulations, laws
- Judgments by reference groups (participants, staff, experts, funding officials)
- Institutional goals
- Political ideology
- Social equity
- Political will
- Human rights
- Analysis and Synthesis: Detect patterns in evidence through analysis (isolating findings) or synthesis (combining information for broader understanding). Mixed-methods evaluations require separate analysis and synthesis of all sources to examine agreement, convergence, or complexity. Organizing, classifying, comparing, and displaying information are guided by evaluation questions, data types, and stakeholder input (7, 85-87).
- Interpretation: Assigning meaning to findings to determine their practical significance (88). Interpretation draws on stakeholder information and perspectives and is strengthened by their participation.
- Judgments: Statements about program merit, worth, or significance, formed by comparing findings and interpretations against selected standards. Multiple standards can lead to different or conflicting judgments. Disagreements often highlight different stakeholder values and can catalyze value clarification and negotiation of judgment bases (see the sketch after this list).
- Recommendations: Proposed actions resulting from the evaluation. Recommendations require information beyond performance judgments, considering context and organizational factors (89). Recommendations should be evidence-based and aligned with stakeholder values. Sharing draft recommendations and soliciting feedback increases relevance and acceptance.
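As a small illustration of how judgments compare findings against explicit standards, the sketch below weighs one hypothetical finding against two of the standards listed in Box 9: a fixed performance target and performance in a prior period. All values are invented.

```python
# Minimal sketch of judging a single finding against two kinds of standards from Box 9:
# a fixed performance criterion (target) and performance in a prior period.
# The observed value, target, and prior value are invented for illustration.

def judge(observed, target, prior):
    """Return simple judgments of an indicator against a target and prior performance."""
    return {
        "meets_target": observed >= target,
        "improved_over_prior": observed > prior,
    }

# Hypothetical finding: 6-month abstinence proportion among participants.
observed = 0.22
target = 0.25   # criterion-referenced standard (program objective)
prior = 0.18    # norm-referenced standard (last year's performance)

result = judge(observed, target, prior)
print(f"Observed {observed:.0%}: "
      f"{'meets' if result['meets_target'] else 'falls short of'} the {target:.0%} target; "
      f"{'improved over' if result['improved_over_prior'] else 'did not improve over'} "
      f"the prior period ({prior:.0%}).")
```

Note that the same finding falls short of one standard while exceeding the other, which is precisely the situation in which stakeholders must clarify and negotiate the basis for judgment.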
Box 10 outlines example activities for justifying conclusions, such as summarizing change mechanisms, delineating timelines, searching for alternative explanations, and demonstrating effect repeatability. Consensus-building techniques like the Delphi process (90) can be used for value judgments. Analysis and synthesis techniques should be agreed upon before data collection.
Box 10. Justifying Conclusions: Definition, Role, and Example Activities
Definition: Making program claims warranted by data compared against relevant merit, worth, or significance standards, aligning conclusions with evidence and stakeholder values.
Role: Reinforces evaluation utility and accuracy through value clarification, data analysis and synthesis, systematic interpretation, and comparison against judgment standards.
Example Activities:
- Using appropriate analysis and synthesis methods to summarize findings.
- Interpreting result significance for meaning.
- Making judgments based on clearly stated values (positive/negative, high/low classifications).
- Considering alternative result comparison methods (objectives, comparison groups, norms, past performance, needs).
- Generating and evaluating alternative explanations for findings.
- Recommending actions consistent with conclusions.
- Limiting conclusions to applicable situations, times, persons, contexts, and purposes.
Step 6: Ensuring Use and Sharing Lessons Learned
Evaluation lessons do not automatically translate into action. Deliberate effort is needed to ensure appropriate use and dissemination. Preparation for use is ongoing, starting with stakeholder engagement and continuing throughout the evaluation. Five key elements ensure evaluation use:
- Design: Evaluation questions, methods, and processes should be designed from the outset to achieve intended uses by primary users. A clear, use-focused design clarifies roles, benefits, and stakeholder contributions to relevance, credibility, and utility.
- Preparation: Rehearsing the use of findings, especially negative ones, prepares stakeholders for evidence-based decision-making (92). Hypothetical results can be used to explore potential actions and identify needed evaluation modifications. Preparation also allows stakeholders to consider implications and improvement options.
- Feedback: Continuous communication among evaluation parties builds trust and keeps the evaluation on track. Stakeholder feedback on decisions affecting information utility is essential. Regular discussions and sharing of interim findings and draft reports encourage feedback.
- Follow-Up: Providing technical and emotional support to users during and after the evaluation. Active follow-up reminds users of planned uses and prevents lessons from being lost in complex decisions. An evaluation advocate can ensure findings are considered in decision-making. Follow-up also helps prevent misuse of findings by ensuring proper interpretation and application within context.
- Dissemination: Communicating evaluation procedures and lessons to relevant audiences in a timely, unbiased, and consistent manner. Reporting strategies should be discussed with stakeholders in advance. Effective communication considers timing, style, tone, message source, vehicle, and format. The goal is full disclosure and impartial reporting. Box 11 provides a checklist for effective evaluation reports.
Box 11. Checklist for Ensuring Effective Evaluation Reports
- Provide timely interim and final reports to intended users.
- Tailor report content, format, and style to the audience through involvement.
- Include a summary.
- Summarize stakeholder description and engagement.
- Describe essential program features (including logic models).
- Explain evaluation focus and limitations.
- Include an adequate summary of the evaluation plan and procedures.
- Provide necessary technical information (in appendices).
- Specify standards and criteria for evaluative judgments.
- Explain judgments and their evidence support.
- List evaluation strengths and weaknesses.
- Discuss action recommendations, advantages, disadvantages, and resource implications.
- Ensure protections for program clients and stakeholders.
- Anticipate findings’ impact on people/organizations.
- Present minority opinions or rejoinders.
- Verify report accuracy and unbiasedness.
- Organize report logically and include appropriate details.
- Remove technical jargon.
- Use examples, illustrations, graphics, and stories.
Process uses, beyond findings use, are also valuable (Box 12) (7, 38, 93, 94). Evaluation participation can lead to shifts in thinking and behavior, prompting staff to clarify program goals and fostering cohesive teamwork (95). Evaluation can define indicators, discover decision-maker priorities, and link outcomes to structural reinforcements (96). Process uses provide further justification for early evaluation initiation.
Box 12. Ensuring Use and Sharing Lessons Learned: Definition, Role, and Example Activities
Definition: Ensuring stakeholder awareness of evaluation procedures and findings, findings are used in program decisions, and participants have a beneficial experience (process use).
Role: Ensures evaluation usefulness, although use is influenced by evaluator credibility, report clarity, timeliness, dissemination, disclosure, impartial reporting, and program/organizational context changes.
Example Activities:
- Designing the evaluation for intended use by intended users.
- Preparing stakeholders for use by rehearsing conclusion implications throughout the project.
- Providing continuous feedback on interim findings, interpretations, and decisions.
- Scheduling follow-up meetings to facilitate translation of conclusions into action.
- Disseminating procedures and lessons learned using tailored communication strategies.
Standards for Effective Evaluation
Public health professionals recognize the framework steps as part of their routine work. While informal evaluation is common, standards assess the design and effectiveness of evaluative activities. The Joint Committee on Standards for Educational Evaluation has developed program evaluation standards (12) applicable to public health programs. These standards provide practical guidelines for making evaluation choices and avoiding imbalanced evaluations. They can be applied during planning and implementation. The Joint Committee emphasizes that these are “guiding principles, not mechanical rules,” requiring judgment in their application (12).
The standards are grouped into four categories with 30 specific standards (Boxes 13-16):
- Utility
- Feasibility
- Propriety
- Accuracy
Each category includes guidelines and common errors, illustrated with case examples. For each step in the framework, a relevant subset of standards should be considered (Box 17).
Box 13. Utility Standards
Utility standards ensure an evaluation serves the information needs of intended users:
- Stakeholder Identification: Identify those involved in or affected by the evaluation.
- Evaluator Credibility: Ensure evaluators are trustworthy and competent.
- Information Scope and Selection: Collect pertinent and responsive information.
- Values Identification: Clearly describe perspectives, procedures, and rationale for interpretations.
- Report Clarity: Reports should be clear, describing the program, context, purposes, procedures, and findings.
- Report Timeliness and Dissemination: Disseminate interim findings and reports in a timely manner.
- Evaluation Impact: Plan, conduct, and report evaluations to encourage stakeholder follow-through and use.
Box 14. Feasibility Standards
Feasibility standards ensure an evaluation is realistic, prudent, diplomatic, and frugal:
- Practical Procedures: Use practical procedures to minimize disruption while obtaining needed information.
- Political Viability: Consider varied positions of interest groups to obtain cooperation and avert bias or misuse.
- Cost-Effectiveness: Ensure the evaluation is efficient and produces valuable information justifying resources.
Box 15. Propriety Standards
Propriety standards ensure an evaluation is legal, ethical, and respects the welfare of all involved:
- Service Orientation: Design evaluation to assist organizations in effectively serving target participants.
- Formal Agreements: Ensure written agreements on obligations among principal evaluation parties.
- Rights of Human Subjects: Design and conduct evaluation to respect and protect human subject rights and welfare.
- Human Interactions: Evaluators should interact respectfully with all evaluation participants.
- Complete and Fair Assessment: Conduct a complete and fair examination of program strengths and weaknesses.
- Disclosure of Findings: Ensure full evaluation findings and limitations are accessible to affected persons and those with legal rights.
- Conflict of Interest: Handle conflicts of interest openly to prevent compromising evaluation processes and results.
- Fiscal Responsibility: Ensure prudent and ethically responsible resource allocation and expenditure.
Box 16. Accuracy Standards
Accuracy standards ensure an evaluation conveys technically adequate information on program merit:
- Program Documentation: Clearly and accurately document the program being evaluated.
- Context Analysis: Examine program context in detail to identify probable influences.
- Described Purposes and Procedures: Monitor and describe evaluation purposes and procedures in detail.
- Defensible Information Sources: Describe information sources in detail to assess adequacy.
- Valid Information: Develop and implement information-gathering procedures to ensure valid interpretations.
- Reliable Information: Develop and implement information-gathering procedures to ensure reliable information.
- Systematic Information: Systematically review collected, processed, and reported information and correct errors.
- Analysis of Quantitative Information: Analyze quantitative information appropriately and systematically.
- Analysis of Qualitative Information: Analyze qualitative information appropriately and systematically.
- Justified Conclusions: Explicitly justify conclusions for stakeholder assessment.
- Impartial Reporting: Guard against distortion from personal feelings and biases to reflect findings fairly.
- Metaevaluation: Formatively and summatively evaluate the evaluation against pertinent standards.
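The metaevaluation standard calls for assessing the evaluation itself against the pertinent standards. One simple, hypothetical way to operationalize that is a checklist recording which standards have been explicitly considered; the sketch below covers only a few standards from each group in Boxes 13-16, and the self-assessment values are invented.

```python
# Hypothetical metaevaluation checklist: record whether standards from each of the four
# groups (Boxes 13-16) were explicitly considered. Only a few standards per group are
# listed here for brevity; a full checklist would cover all 30.

STANDARDS = {
    "Utility": ["Stakeholder Identification", "Report Clarity", "Evaluation Impact"],
    "Feasibility": ["Practical Procedures", "Political Viability", "Cost-Effectiveness"],
    "Propriety": ["Rights of Human Subjects", "Disclosure of Findings", "Conflict of Interest"],
    "Accuracy": ["Valid Information", "Justified Conclusions", "Impartial Reporting"],
}

def metaevaluation_report(addressed):
    """Summarize which listed standards were marked as addressed (True) or not."""
    for group, names in STANDARDS.items():
        met = [n for n in names if addressed.get(n, False)]
        missing = [n for n in names if not addressed.get(n, False)]
        print(f"{group}: {len(met)}/{len(names)} addressed; missing: {missing or 'none'}")

# Hypothetical self-assessment partway through an evaluation.
metaevaluation_report({
    "Stakeholder Identification": True,
    "Report Clarity": True,
    "Practical Procedures": True,
    "Rights of Human Subjects": True,
    "Valid Information": False,
})
```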
Box 17. Cross-Reference of Steps and Relevant Standards
| Steps in Evaluation Practice | Relevant Standards |
| --- | --- |