Discussion about this post

Julian King

Thanks Anthony, I enjoyed your review (and the meta-study) because they unintentionally beg for a Sorting Hat treatment. The house balance the paper reveals seems pretty clear: NDE over 2010–24 looks strongly Gryffindor, with a small but vocal Ravenclaw minority, Slytherin pulling strings in the background, and Hufflepuff largely left out in the cold.

(The Sorting Hat metaphor is just for fun and not meant to be taken too seriously - still, it does throw light on an interesting pattern here. More in this post if curious: https://juliankingnz.substack.com/p/evaluator-sorting-hat).

Gryffindor feels well catered for: the dominance of human services and international development domains, and the journal’s longstanding attention to practice and profession, line up with a strong normative commitment to "social betterment".

The Ravenclaws are there, but they're a minority voice. There's a stable slice of research on evaluation, but it accounts for only ~10% of articles, mostly using basic designs and descriptive statistics, with very little reporting of effect sizes, reliability, or validity. So the hardcore methods crowd gets some space but probably remains a little frustrated.

Slytherin shows up in the "Old Directions" institutional ecology: agenda‑setting power is concentrated in a relatively small cluster of US universities, and the rise and fall of international authorship suggests influence is not evenly distributed, even if everyone is speaking the language of “the field.” Ambitious evaluators still know exactly which common rooms matter.

Hufflepuff, in my typology, is about who gets included in evaluation (e.g., participation, co‑creation, stakeholders' voices). On that front, when I look at the meta‑study I hear crickets. It would seem that either NDE or the meta-study's authors didn't see stakeholder inclusion, power sharing, collaboration, and developing value as important. Or, more charitably, perhaps they just didn't look for it. Absence of evidence isn't evidence of absence, but it does tell us something about what was considered salient.

And that brings us to evaluative reasoning - the core (imho) of what it means to evaluate. Every house does it, but they favour different approaches. What's striking to me is that evaluative reasoning – valuing, criteria, standards, synthesis, warranting value judgements – is missing from this NDE meta-study, even as it slices the field finely by topic, method, and author demographics. I know the evaluative reasoning theme does exist in the journal, because the 2012 NDE issue on valuing (edited by George Julnes) and the 2018 issue on evaluative thinking (Vo & Archibald) are two of my favourites, but the analytic frame didn't pick them up.

Maybe that's the real invitation for the meta-study: if we're serious about "new directions", the next round of analysis might ask not who publishes what and where, but how well our flagship journals are actually supporting evaluators to reason their way to defensible value judgements... across all four houses :-)

