I immediately thought of Dr. Christie’s dissertation, so I’m glad you shared that, @danawanzer. I find @c_camman’s distinction about eval theory apt, and I like the idea of ‘alignment’: I’m not sure eval theorists or exemplars would consider 100% fidelity 100% of the time possible, or even ideal. How else would we innovate and improve? Full fidelity assumes the prescriptions are infallible and fit the context precisely. Even the notion of contingency theories of evaluation, picking the right evaluation model for the evaluation context, suggests there need to be degrees of freedom for adaptability, since we don’t have enough eval theories to perfectly align with all the contingencies evaluators face in the real world.
That being said, there are some evaluation researchers and descriptive theories of evaluation that can inform how we apply, align with, or depart from prescriptive theories of eval, including the prescriptions we’re theorizing in this thread.
First, Nick Smith distinguishes between eval theories, models, and approaches. Theories are ideas about issues in eval (not theories in the scientific sense), like the role of causal explanation in eval. Models are collections of resolutions to those issues, comprising prescriptions of what good eval practice looks like, like the collection of prescriptions known as Realist Evaluation (which resolves the issue of the role of explanation in eval). Approaches are collections of models that share elements in their broad application, like theory-based/driven eval (of which Realist Eval is a form).
With this distinction, recommendations that lean closer to approaches, like developmental eval, provide more room for ‘realignment’ or flexibility from initial prescriptions. Models provide less flexibility, though some more than others. I see models and approaches as a spectrum of specificity in prescriptions for good eval practice.
I find another eval theory/research framework sheds light on the question of fidelity here. Robin Miller offers five criteria for empirical examinations of eval theory: operational specificity, range of application, feasibility in practice, discernible impact, and reproducibility. Models with high degrees of operational specificity, claims of discernible impact, and reproducibility would seem to require higher alignment or fidelity. Models with a broad range of application and high feasibility would seem to lend themselves to more flexibility and realignment.
Realist Evaluation would score higher than other models in demanding a degree of alignment. In fact, Pawson spent much of his 2013 manifesto book discussing fidelity and alignment, while conceding room for flexibility within certain constraints.
As with much of eval, it would appear to depend, but I would agree with others that there’s less fidelity in practice than we might think.