Monitoring Program Quality (or Fidelity)


#1

Hello evaluators!

I work for a national youth program, and I am currently developing a framework for how my organization could approach monitoring program quality. I've found lots of great tools and approaches, including self-report surveys, activity logs, and observation rubrics. What I haven't found much of is information on how to implement such a system on a national scale (I'm in Canada, but examples of national monitoring frameworks from anywhere would help). I can tell from my scoping phase that good implementation is hugely important to the success of the initiative. Has anyone done work like this, or come across any relevant literature? For example, I'm looking at questions like:

- Who should do the monitoring: an external evaluator, an administrator, a peer, the youth themselves?
- How many sites is it reasonable for one person to monitor?
- Should monitoring focus on sharing best practices, on performance management, or on both?
- Should data stay local, or be aggregated to the provincial or national level?
- How much would all of this cost?
- How do you roll out a training program that builds evaluation literacy and gets buy-in from the field?

I'd love it if you could share your experience and wisdom.


#2

I don't know of any literature on this, but I think a lot of it depends on your context and your needs at the national, provincial, and site levels. One organization I worked with at the site level had its national organization attempt to implement these kinds of things, but quite frankly it wasn't up to the quality of what the site was already doing. The national organization was attempting some nice things: site-level reports (especially comparisons to similar sites and to the national organization as a whole), surveys that covered the entire national organization's goals, etc.


#3

Yep, that sounds similar! Could you elaborate on what you mean by "it wasn't up to the quality of what the site was doing"? I'm guessing the attempts did not work?


#4

I meant that at our sites, we were doing high-quality external and internal observations, student surveys, staff surveys, and sometimes parent surveys. National only did what I thought was a low-quality student survey, something that wasn't very helpful to our site, even setting aside the work we were already doing and learning from.


#5

Oh, I see. How lucky for your sites that they had such high-quality work! 🙂 Thank you for sharing your experience.


#6

Thanks @nkoustova for the question. Have you checked the evaluation policies at big organizations like USAID, for example? Other national policies might answer these questions, since national evaluation policies govern evaluators' work at the national level. I'm not sure how easily these policies can be found, but here is a case study about the Swiss policy: https://www.seval.ch/app/uploads/2017/07/case_study_Switzerland_kstolyarenko.pdf