What are the most common mistakes or weaknesses that evaluators can have or make, and how can they be avoided?
That’s a great question! I think there are quite a few mistakes that evaluators commonly make. The one I will highlight is collecting data without an explicitly defined purpose for that data (don’t just collect data because you can). Related to that is failing to create a good, long-lasting plan for storing any collected data, as well as not pre-planning what analyses will be performed when designing the methods that will be used to collect it (e.g., surveys, interviews, pre-existing data, etc.).
One way to avoid this, though it’s not the easiest thing to do, is to think about what information informs your evaluation questions (indicators) and build that into your data collection methods. Mapping this out is extremely useful, and conceptualizing ways to analyze the data based on how it is collected, in conjunction with the collection methods, is critical. This is also the stage (or even before it) at which developing a data management plan is really useful. Ideally, the data management plan should be part of the evaluation plan (at least based on my data management logic).
Thanks @jrmolle2, but don’t you think this challenge is bigger when the evaluation is done in teams, where several people are collecting the data, or where there are many enumerators, and you end up with a bulk of data? How can this be overcome?
And could you please clarify how to develop a better data management plan?
@Alsalehi, there is an interesting book about failures and mistakes in evaluation:
“Evaluation Failures: 22 Tales of Mistakes Made and Lessons Learned”
It gives real examples of mistakes and challenges facing evaluators; here is the link on Amazon:
Evaluation Failures: 22 Tales of Mistakes Made and Lessons Learned https://www.amazon.com/dp/1544320000/ref=cm_sw_r_cp_api_i_1M.1CbEHHRR63
Given how contextual evaluation is, there are so many mistakes that could be made!
However, I think the fact that evaluation is so people-oriented (i.e., few evaluations could be done to our ethical standards and guiding principles without engaging stakeholders in some capacity) is the biggest challenge and is the root cause of most mistakes in evaluation. The great Evaluation Failures book @Hayat.Askar mentioned describes a lot of situations where the issue is interpersonal in nature.
Unfortunately, few programs train people in the interpersonal skills necessary to do good evaluation work. We are trained in evaluation methods and, in some programs, evaluation theory. Our training program might have a practicum component where we actually conduct an evaluation. But the interpersonal skills aren’t necessarily explicitly taught. I’m not even entirely sure how they might be explicitly taught. Rather, I think many people learn them simply through conducting evaluations and learning by trial and error. And therefore, a lot of mistakes will be made! That is, unless you have a good team that can help each other catch mistakes early and mitigate issues that arise.
I think this is an interesting concept. I’m hoping to work with my M&E team to help them learn some of these interpersonal “soft” skills in order to conduct stronger evaluations. We are going to start with an activity on developing a reflective practice that I’m adapting from my M&E Program curriculum.
Is there anything else that would be good to include? Thanks!
It definitely gets harder when more people are on the team. In those situations, it is critical that everyone is on the same page with procedures, that those procedures are written up for continual reference, and that activities are tracked. For example, use a journal where each person notes what they did that day, what they did it on, items of note, and issues they encountered. This can be achieved with most project management software (as one suggestion).
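For teams more comfortable with scripts than project management software, the same journal idea can be kept as a simple shared CSV. Here is a minimal, illustrative sketch in Python; the field names (`team_member`, `instrument`, etc.) are hypothetical choices for this example, not from any particular tool or standard:

```python
import csv
import io
from datetime import date

# Hypothetical columns for a shared team journal, mirroring the fields
# suggested above: who, when, what they did, what they did it on,
# items of note, and issues encountered.
FIELDS = ["date", "team_member", "activity", "instrument", "notes", "issues"]

def log_entry(writer, team_member, activity, instrument, notes="", issues=""):
    """Append one person's daily entry to the shared journal."""
    writer.writerow({
        "date": date.today().isoformat(),
        "team_member": team_member,
        "activity": activity,
        "instrument": instrument,
        "notes": notes,
        "issues": issues,
    })

# Demo: write to an in-memory buffer; a real team would open a shared
# file instead (e.g., open("journal.csv", "a", newline="")).
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=FIELDS)
writer.writeheader()
log_entry(writer, "Enumerator A", "Household surveys, village 3",
          "Survey v2", issues="Two respondents declined question 7")
```

The point is less the tooling than the habit: a fixed set of fields that every team member fills in the same way, so the eventual bulk of data comes with a traceable record of how it was collected.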
I will post a sample data management plan a little later today. It seems they get underutilized in evaluation. However, I could be wrong about that; I just haven’t come across many unless I am the evaluator working with a big data team.
Hey @EvaluationMaven, I saw you sign up. This might be a nice thread to jump into, for what I think are obvious reasons.
Great to see the discussion on common mistakes in evaluation.
When I was editing the Evaluation Failures book, I aimed for a diversity of mistakes/challenges (which I got), but there were very clear themes among them and in the subsequent lessons learned, such as the importance of engaging stakeholders and taking context into account.
I think being a constantly reflective practitioner is one of the best things we can do as evaluators. That, and not being afraid to share and learn from our failures. I’m in the middle of one now!
Another one: neglecting to approach evaluation through a culturally responsive lens toward racial equity…or at least failing to make movement toward that end.
I haven’t forgotten about the sample…just insanely busy at the moment. I will get to it as soon as possible. Cheers!!
I didn’t have a chance to pull one of my own…but here are some great online resources for creating data management plans with examples:
Thank you Kylie,
I like the book so much, and your example of social mapping was really useful.
Thanks, folks, for all of your comments and input; it is a pleasure to be amongst experts like you all.