For many programs, data collection and reporting are becoming autonomous, especially when a program relies heavily on the web. Is that leading to a diminished role for evaluators? Even if the role doesn’t change, are the evaluation systems people purchase lowering the budgets available for professional help?
Thinking on the first question there… When I was a kid, my auntie had a Mac computer. It had a program that emulated a psychotherapy session. You’d type in a problem, or anything really, and the “therapist” would give you a generic response (“Elaborate on that, please”). Does anyone else remember this program? I can’t remember its name… ANYWAYS, I bring it up because the program couldn’t match the complexity of human behaviour. It couldn’t offer any real, meaningful therapy.
Of course, automation has come a long way since early-’90s Macs, but I can’t imagine machines being able to deal with the nuances necessary for quality evaluation. Machines can’t engage in evaluative thinking, which is crucial to good eval. But maybe automation of data collection, reporting, etc. could advance to the point that basic cookie-cutter evaluations are pumped out with minimal involvement from us evaluators. That’s what could really diminish the role of evaluators. Given the choice, would a consumer pick a quality human-led evaluation, or an easy, automated (and lower-quality) cookie-cutter one?
Not even a little bit! More than ever, the evaluator as a systems thinker, teacher, facilitator, and question-asker is needed. I see the pieces being automated as building in efficiency so I can spend more time face-to-face with clients, engaging them in their data and helping develop evaluative thinking.
I don’t see our role as synonymous with data collection and reporting. If anything, those are the aspects of my work that I’m emphasizing less and less while I spend more time doing coaching, facilitation, planning and capacity-building work to support sense-making and translating data into action. Just today I heard from a prospective client that they’re less interested in just knowing if they’re meeting their metrics and more interested in learning what kind of impact they’re having (which involves metrics and more). That’s a perspective I’ve encountered a lot lately and one which is much harder to automate or distill to a dashboard.
The budget trade-off can be real, though: investing in robust data collection isn’t cheap, and funding for monitoring and evaluation is nearly always too limited within the already limited budgets of most nonprofits and service-sector organizations. I’ve got one contract in jeopardy over this, but the requests coming in for complex, non-automatable, people-centered work are outweighing that.
I’d suggest that emerging technology has the potential to raise the profile and role of evaluators by helping practitioners recognize the benefit of systematic inquiry. These tools will likely still need non-artificial (that is, human) intelligence to support their processes and outputs. That said, Michael Bamberger has been speaking for some time about two futures for evaluation: either we get serious about incorporating big data, machine learning, artificial intelligence, and other emerging technology into evaluation, or much of the evaluative work of the future gets done by data scientists instead.
Interesting. I think a major challenge is whether those who engage in evaluation professionally consider themselves part of the evaluation community. There are lots of business analysts, human-centered designers, data scientists, UX designers, non-profit program managers, and all sorts of others engaged in similar evaluative work but unaware that such work exists as a profession.
You’re totally right. My sister works as a UX researcher for Dropbox, and more and more our professional conversations are converging.