Today The Guardian website published an article on monitoring and evaluation of development projects. The subtitle of the article is: “Providing evidence for funders might seem unimportant when you’re saving lives, but it is vital to improving de…
Good goals should meet SMART criteria. They should be:
– Specific
– Measurable
– Achievable
– Relevant
– Time-bound
Good indicators should meet CREAM criteria. They should be:
– Clear (precise and unambiguous)
– Relevant (appropriate to the subject at hand)
– Economic (available at reasonable cost)
– Adequate (able to provide a sufficient basis to assess performance)
– Monitorable (amenable to independent validation).
SMART criteria are very well known. CREAM criteria are becoming increasingly popular. They were introduced by Kusek and Rist in their book “Ten Steps to a Results-Based Monitoring and Evaluation System”.
A common issue we face while evaluating programs is that people who try very hard to develop SMART goals tend to use indicators instead of goals. This mistake has a number of negative consequences. The best and most comprehensive evidence-based description of those consequences that I am aware of was provided by Ordóñez, L. D., Schweitzer, M. E., Galinsky, A. D., & Bazerman, M. H. in their article “Goals Gone Wild: The Systematic Side Effects of Over-Prescribing Goal Setting” (2009). In particular, they argue that when people are expected to achieve a goal that is formulated as an indicator, they often behave unethically. For instance, mechanics may “fix” cars that are not broken in order to generate the expected income per person (a “SMART” goal).
Interestingly, more and more publications apply SMART criteria to indicators rather than to goals. I think this is a very good idea. Try googling “SMART indicators”, and you will get over 25,000 links.
This kid is playing guitar. The guitar is fake. The music in his soul is real. Are evaluators always able to make such a distinction? Here is the story. This boy was 2.5 years old at the time. He said: “Let me play my guitar.” Then he took the ‘guitar’ an…
Yesterday, in a presentation on evaluator competencies, I mentioned a worrisome trend in the evaluation market that I described as increased quasi-demand for quasi-evaluation of quasi-programs. It seemed to resonate. Several people talked to …
Evaluation associations should not provide evaluation services to external clients. This is extremely important for the sustainability of associations. Let me explain why. Evaluation associations serve their members and develop the profession. If they…
The answer is YES, they are! Of course, evaluation associations are unique organizations that serve their members and are oriented towards the development of the evaluation profession. But it would be a big mistake to deny the existence of competition between t…
Here is a brief description of the software from its manufacturer: “Timeless Time & Expense® gives you the flexibility to track your time and expenses in a way that works for you. Rather than forcing you to track your time the way someone else thinks you should, Timeless Time & Expense allows you to model your time tracking the way you do business. It’s easy and quick to use, so you can start tracking your time immediately.”
Visitors to my blog may want to look at the ECDG blog, which focuses on evaluation capacity development, and, in particular, at my most recent post there discussing characteristics of evaluation conferences as indicators of ECD.
Evaluation Space: Overcoming interdisciplinary barriers: observation on one trend in evaluation capacity development in the region
In Russia and other Newly Independent States, one of the trends in the early stages of evaluation development was the differentiation of evaluation from other disciplines. This is an important stage in the development of any new profession.
It seems that we are now entering the next stage, which could be characterized by the integration of evaluation into various subject areas, the development of subject-specific evaluation approaches (which presumes specialization of evaluators), and collaboration between evaluators and subject experts.
I have just returned from the annual conference of the International Program Evaluation Network in Astana, Kazakhstan. One of the conference highlights for me was a presentation by a group of psychologists working with teenagers in Moscow. They introduced themselves as non-evaluators developing evaluation approaches relevant to their professional area. As I understood it, their major concern was that they work in the traditions of humanistic psychology and worry that “traditional” evaluation approaches will not fit their values, will not allow them to measure the actual progress and results of their programs, and may harm their program participants. So they established an interdisciplinary working group that involves psychologists, sociologists, social workers, professional evaluators, representatives of donor agencies, and practitioners working with teenagers. Today this group is very close to publishing a declaration of principles for evaluating programs aimed at children. I would describe the approach they propose as “humanistic evaluation” (rooted in humanistic psychology).
Two things seem very important to me in this initiative, because they indicate a new stage of evaluation capacity development in the region:
- This is one of the first cases in our region in which subject experts have initiated the development of evaluation approaches specific to their subject area.
- These approaches are being developed in collaboration between subject experts and professional evaluators.