Originally posted at Evaluation is an Everyday Activity
What is the difference between need to know and nice to know? How does this affect evaluation? I read a post this week on a blog I follow (Kirkpatrick) that asks how much data a trainer really needs. (Remember that Don Kirkpatrick developed and established an evaluation model for professional training back in 1954 that still holds today.)
Most Extension faculty don’t do training programs per se, although there are training elements in Extension programs. Extension faculty are typically looking for program impacts in their program evaluations. Program improvement evaluations, although necessary, are not sufficient. Yes, they provide important information to the program planner, but they don’t necessarily give you information about how effective your program has been (i.e., outcome information). (You will note that I use the term “impacts” interchangeably with “outcomes” because most Extension faculty parrot the language of reporting impacts.)
OK. So how much data do you really need? How do you determine what is nice to have and what is necessary (need) to have? How do you know?
- Look at your logic model. Do you have questions that reflect what you expect to have happen as a result of your program?
- Review your goals. Review your stated goals, not the goals you think will happen because you “know you have a good program.”
- Ask yourself, How will I USE these data? If the data will not be used to defend your program, you don’t need them.
- Does the question describe your target audience? Although it does not demonstrate impact, knowing what your target audience looks like is important. Journal articles and professional presentations typically require this information.
- Finally, ask yourself, Do I really need to know the answer to this question, or will it burden the participant? If it is a burden, your participants will tend not to answer, and then you have a low response rate; not something you want.
Kirkpatrick also advises avoiding redundant questions. That means questions asked in a number of ways that give you the same answer; questions written in positive and negative forms. The other question that I always include, because it gives me a way to determine how my program is making a difference, is a question on intention that includes a time frame. For example, “In the next six months, do you intend to try any of the skills you learned today? If so, which one?” Mazmanian has identified that the best predictor of behavior change (a measure of making a difference) is stated intention to change. Telling someone else makes the participant accountable. That seems to make the difference.
Mazmanian, P. E., Daffron, S. R., Johnson, R. E., Davis, D. A., & Kantrowitz, M. P. (1998). Information about barriers to planned change: A randomized controlled trial involving continuing medical education lectures and commitment to change. Academic Medicine, 73(8).
P.S. No blog next week; away on business.
To Comment, visit the original post here: Evaluation is an Everyday Activity