Local Program Evaluation in Tobacco Control

GLOSSARY OF EVALUATION TERMS


Accountability:
Responsibility for effective and efficient performance of programs. Measures of accountability focus on (1) benefits accruing from the program as valued by the stakeholders and (2) how resources are invested and results attained.

Anonymity:
Participant identity is unknown to the people who use the evaluation and, if possible, to the investigators themselves.

Benchmarks:
Performance data used as standards either as a baseline to compare future performance or as an interim measure of progress toward a goal.

Census:
Enumeration of an entire population.

Confidentiality:
Protection of participant identity: evaluators may know who provided the data, but any elements that might indicate a participant's identity are removed before results are shared.

Control Group:
A group that has not been exposed to the intervention being evaluated. The control group should be similar to the program group (the participants who have been exposed to the intervention) so that systematic differences between the two groups may be attributed to the effects of the intervention once other plausible alternative hypotheses have been eliminated or discounted.

Cost-benefit analysis:
Process to estimate the overall costs and benefits of a program or of components within a program. Seeks to answer questions such as "Is this program or product worth its costs?" and "Which of the options has the highest benefit/cost ratio?" This is only possible when all values can be converted into monetary terms.

Effectiveness:
Degree to which the program yields desired/desirable results.

Efficiency:
Comparison of outcomes to costs.

Empowerment evaluation:
Use of evaluation concepts, techniques, and findings to foster improvement and self-determination. In empowerment evaluation, program participants maintain control of the evaluation process; outside evaluators work to build the evaluation capacity of participants and help them use evaluation findings to advocate for their program.

Evaluation:
Systematic inquiry to inform decision-making and improve programs. Systematic implies that the evaluation is a thoughtful process of asking critical questions, collecting appropriate information, and then analyzing and interpreting the information for a specific use and purpose.

Evaluation questions:
Specific critical questions that the evaluation is intended to answer.

Formative evaluation:
Evaluation conducted during the development and implementation of a program whose primary purpose is providing information for program improvement.

Frequency:
The count of how many people fit into a certain category or the number of times a characteristic occurs.

Impact:
Social, economic, and/or environmental effects or consequences of the program. Impacts tend to be long-term. They may be positive, negative, or neutral; intended or unintended.

Impact evaluation:
A type of evaluation that determines the net causal effects of the program beyond its immediate results. Impact evaluation often involves a comparison of what appeared after the program with what would have appeared without the program.

Implementation evaluation:
Evaluation activities that document the evolution of a project and provide indications of what happens within a project and why it happens. Project directors use information to adjust current activities. Implementation evaluation requires close monitoring of program delivery.

Indicator:
Expression of what is/will be measured or described; evidence which signals achievement. Answers the question, "How will I know it?"

Inputs:
Resources that go into a program including staff time, materials, money, equipment, facilities, and volunteer time.

Instrument:
A standardized tool for collecting data. This could be a survey questionnaire, a focus group protocol, or a group interview protocol. (Taken from Appendix 1, Key terms and phrases, work site and restaurant survey guide.)

Mean:
Also known as the "arithmetic average". To calculate a mean, sum the value of all responses and divide the sum by the number of observations.

Measure/measurement:
Representation of quantity or capacity. In the past, these terms carried a quantitative implication of precision and, in the field of education, were synonymous with testing and instrumentation. Today, the term "measure" is used broadly to include quantitative and qualitative information to understand the phenomena under investigation.

Median:
The middle observation, where half the respondents have provided smaller values, and half larger ones. Calculate the median by arranging all observations from the lowest to highest score and counting to the middle value.

Mixed methods:
Use of both qualitative and quantitative methods to study phenomena. These two sets of methods can be used simultaneously or at different stages of the same study.

Mode:
The value of the observation that occurs most frequently.
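The mean, median, and mode entries above can be illustrated with a short calculation. This is a minimal sketch using Python's standard-library statistics module; the response values are hypothetical, not from the source.

```python
# Illustrative calculations of the three measures of central tendency
# defined above, applied to a small set of hypothetical survey responses.
import statistics

responses = [3, 5, 5, 7, 10]  # hypothetical data, for illustration only

mean = statistics.mean(responses)      # sum of all values / number of observations
median = statistics.median(responses)  # middle value after sorting low to high
mode = statistics.mode(responses)      # the value that occurs most frequently

print(mean, median, mode)  # 6 5 5
```

Note that the mean (6) is pulled upward by the high value 10, while the median and mode (both 5) are not, which is why all three are reported when describing a distribution.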

Monitoring:
Ongoing tracking of the extent to which a program is operating consistent with its design.

Outcome evaluation:
Type of evaluation conducted to determine the results of a program and its consequences for people.

Outcome monitoring:
The regular or periodic reporting of program outcomes in ways that stakeholders can use to understand and judge results. Outcome monitoring exists as part of program design and provides frequent and public feedback on performance.

Outcomes:
Results or changes of the program. Outcomes answer the questions, "So what?" and "What difference does the program make in people's lives?". Outcomes may be intended or unintended; positive or negative. Outcomes fall along a continuum from short-term/immediate/initial/proximal, to medium-term/intermediate, to long-term/final/distal outcomes, often synonymous with impact.

Outputs:
Activities, services, events, products, participation generated by a program.

Participant:
A person, family, neighborhood, community, or other entity participating in a program or receiving services.

Participatory evaluation:
Evaluation in which the perspective of the evaluator carries no more weight than other stakeholders, including participants or subjects. The evaluation process and its results are relevant and useful to stakeholders for future actions. Participatory approaches attempt to be practical, useful, and empowering to multiple stakeholders and actively engage all stakeholders in the evaluation process.

Performance measure:
A particular value or characteristic used to measure/examine a result or performance criteria; may be expressed in a qualitative or quantitative way.

Performance measurement:
The regular measurement of results and efficiency of services or programs.

Performance targets:
The expected result or level of achievement; often set as numeric levels of performance.

Population:
All of the individuals who share certain specified characteristics.

Probability:
When probabilities are used to describe a particular event, they are describing the likelihood of that event happening. The value of a probability will range from 0 (never) to 1 (always).

Process evaluation:
A type of evaluation that examines what goes on while a program is in progress. It assesses what the program actually is, as delivered.

Qualitative analysis:
A process of using systematic techniques to understand, reduce, organize and draw conclusions from qualitative data.

Qualitative data:
Data that is rich in detail and description, usually in a textual or narrative format. Examples would include data from case studies, focus groups, or document review.

Qualitative methodology:
Methods that examine phenomena in depth and detail without predetermined categories or hypotheses. Emphasis is on understanding the phenomena as they exist. Often associated with a naturalistic, inductive, social-anthropological worldview. Qualitative methods usually consist of three kinds of data collection: observation, open-ended interviewing, and document review.

Quantitative analysis:
The use of statistical techniques to understand quantitative data and to identify relationships between and among variables.

Quantitative data:
Data in a numerical format.

Quantitative methodology:
Methods that seek the facts or causes of phenomena which can be expressed numerically and analyzed statistically. Interest is in generalizability. Often associated with a positivist, deductive, natural-science worldview. Quantitative methods consist of standardized, structured data collection including surveys, closed-ended interviews, and tests.

Random Number:
A number whose value is not dependent upon the value of any other number; can result from a random number generator program and/or a random numbers table used to generate a sample.

Range:
Calculation of the spread of the numerical data. The range is calculated by subtracting the lowest value from the highest value.
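The range calculation described above is a one-line subtraction. A minimal sketch, with hypothetical scores:

```python
# Range = highest value minus lowest value, as defined above.
scores = [12, 7, 19, 3, 15]  # hypothetical numerical data

data_range = max(scores) - min(scores)
print(data_range)  # 16 (i.e., 19 - 3)
```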

Reliability:
The consistency of a measure over repeated use. A measure is said to be reliable if repeated measurements produce the same result.

Reporting:
Presentation, formal or informal, of evaluation data or other information to communicate processes, roles, and results.

Response Rate:
The percentage of respondents who provide information.

Sample:
A means by which units are taken from a population in such a way as to represent the characteristics of interest in that population.
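The Random Number and Sample entries can be tied together with a short sketch of drawing a simple random sample from a population using a random number generator. The population names and sample size here are hypothetical, not from the source.

```python
# Drawing a random sample of units from a population, using Python's
# standard-library random number generator.
import random

# Hypothetical population: 100 restaurants to be surveyed.
population = [f"restaurant_{i}" for i in range(1, 101)]

random.seed(42)  # fixed seed so the draw is repeatable for this illustration
sample = random.sample(population, 10)  # 10 units drawn without replacement

print(len(sample))  # 10
```

Because every unit has an equal chance of selection, characteristics observed in the sample can be used to represent the population of interest.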

Self-evaluation:
Self-assessment of program processes and/or outcomes by those conducting or involved in the program.

Stakeholder:
Someone who has an interest or "stake" in a program, organization, or community effort. Stakeholders can be program participants, community leaders and organizations, contributors and funders, or program personnel. Stakeholders are both affected by and have an effect on programs and organizations. (Adapted from Innovation Network, Inc.)

Stakeholder evaluation:
Evaluation in which stakeholders participate in the design, implementation, analysis, and/or interpretation of the evaluation.

Standard Deviation:
A measure of the spread, the square root of the variance; a statistic used with interval-ratio variables.

Statistics:
Numbers or values that help to describe the characteristics of a selected group; technically, statistics describe a sample of a population.

Statistical significance:
The probability that a result is not due to chance alone. The level of significance determines the degree of certainty or confidence with which we can rule out chance. Statistical significance does not equate to practical value.

Summative evaluation:
Evaluation conducted after completion of a program or after a portion of a program to determine program effectiveness and value.

Unique Identifier:
A number or letter used to increase anonymity within a dataset, usually placed in the corner of the physical data sheet and entered into the dataset so that the number or letter corresponds to the same participant.

Utilization-focused evaluation:
A process for helping primary intended users select the most appropriate content, model, methods, theory, and uses for their particular situation. The evaluator, rather than acting as an independent judge, becomes a facilitator of evaluative decision-making by the intended users.

Validity:
The extent to which a measure actually captures the concept of interest.

Variables:

   A. Categorical variables, whose values represent discrete categories that cannot be further deconstructed.

   B. Independent (input, manipulated, treatment, or stimulus) variables, so called because they are "independent" of the outcome; instead, they are presumed to cause, affect, or influence the outcome.

   C. Dependent (output, outcome, response) variables, so called because they are "dependent" on the independent variables: the outcome presumably depends on how these input variables are managed or manipulated.

Variance:
A measure of the spread of the values in a distribution. The larger the variance, the larger the distance of the individual cases from the group mean.
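The Variance and Standard Deviation entries above can be shown together: the variance is the average squared distance of each value from the mean, and the standard deviation is its square root. A minimal sketch with hypothetical values:

```python
# Population variance and standard deviation, as defined above.
import statistics

values = [2, 4, 4, 4, 5, 5, 7, 9]  # hypothetical data; mean is 5

variance = statistics.pvariance(values)  # average squared distance from the mean
std_dev = statistics.pstdev(values)      # square root of the variance

print(variance, std_dev)  # 4 2.0
```

The standard deviation is usually the value reported, since it is expressed in the same units as the original data.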
