Welcome to the Program Evaluation Substack
Where we are doing evaluation theory
What I am trying to do
In this Substack, I discuss program evaluation theory - the evidence-based determination of the merit or worth of social programs. Program evaluation as practiced today encompasses dozens of recognized approaches, each with its own preferred principles and methodologies. It isn't easy to catch up or keep up. This Substack will try to do a bit of both.
The payoff for catching up and keeping up with program evaluation theory is that we will learn the standards by which social programs - which are ubiquitous - are judged successful or not, the kinds of advice decision-makers are given to improve them, and the ways this whole process might itself improve in the near future.
What we could do together
There are only a few Substacks about program evaluation as of this writing. Part of my purpose is to bring people together and, hopefully, to inspire a few more Substacks. Program evaluators (and their near relatives in organizations and the social sciences) need to talk to one another, in long form. Journals are too slow for real conversations, and the barriers to entry to the conversations they host are too high. New ideas have a very hard time getting in. Substack is a much better place for actually discussing evaluation theory.
Why focus on evaluation theory
Beyond catching up and keeping up with evaluation theory, it's high time for some new frameworks. While we have many evaluation approaches to choose from, most of them are now quite old. The most popular frameworks are among the oldest. I take this to be a bad sign for the development of evaluation theory. The American Evaluation Association acknowledges as much in its description of the theme for the 2024 conference:
"The lack of engagement of new and emerging perspectives in the evaluation field threatens diversity, sustainability, and the evolution of evaluation."
New perspectives do exist, but practitioners aren't engaging very much with them.
The reasons for this lack of engagement with emerging evaluation perspectives are complex. I do not have a single story to tell you. However, I do want to propose a few reasons for the problem that AEA is nominally attempting to address.
1. Preparation to engage with evaluation theory
The first reason concerns the preparation in evaluation theory that people who now call themselves "evaluators" receive at the graduate level. Few universities have qualified instructors in evaluation theory. Talking with my colleagues about the evaluation theory courses they were able to take, I've learned that even when such courses are taught, they tend to land on the smorgasbord approach to theory: now that you have learned about several theories, feel free to mix and match elements. The result is that people who get professional training in evaluation theory can be more inconsistent in applying evaluation frameworks than they might have been if they had never taken the course and just read one book. There are similar issues with the teaching of social science theories and philosophy in graduate programs. Many modern evaluation theories rely on knowledge of political texts and history - decolonial thought, for example - that most evaluators have never read. It is insulting to nod along pretending that one understands these theories. The future of evaluation may require much broader academic preparation in general, including in the usual areas like statistics and economics. Professional development later on can only do so much to fix inadequate preparation.
2. Testing evaluator competencies
Another obvious reason for the lack of engagement with new and emerging evaluation perspectives is the lack of a disciplinary guild structure to set and test competency standards. Think of the American Medical Association, the American Bar Association, and the American Institute of Certified Public Accountants. The American Evaluation Association can set all the voluntary guidelines it likes, but we need to test ourselves on those standards for them to mean anything in a practical sense. Otherwise, we will not know whether the average "evaluator" is capable of performing the disciplinary equivalent of a routine appendectomy. The fact that evaluation is a broad and diverse field is no argument against testing for disciplinary knowledge - medicine is broad and diverse too, and it tests anyway.
3. Lack of dialogue with clients about evaluation approaches
Some of the barriers to developing evaluation theory may also come from the way evaluations are funded. Recently, I spoke with Michael Patton about the issue of RFPs that are prescriptive about evaluation methodologies. He was clear that this does not fit his utilization-focused perspective, since the evaluator needs to engage in serious dialogue with the client in order to uncover their true needs for information. I would argue that the same is true for evaluation theory. The client may imply in the RFP that they would like a utilization-focused evaluation - because they know that this is the most popular approach and they like the sound of it - but later dialogue may reveal that they really want a classic feminist evaluation or a goal-free evaluation. Letting funders decide in advance what kind of evaluation theory they would like us to apply short-circuits this dialogue. Dialogue about theory in practical cases is the best way to advance theory.
4. Lack of constructive critique among evaluators
So far, I've named some institutional causes that might be contributing to the problem that AEA is trying to address. These issues will obviously take serious effort to fix. However, not all of these issues are institutional. A major cultural issue that evaluators need to address is the low level of internal constructive critique within the field. For all our theoretical talk about meta-evaluation, we rarely critique each other's work. Perhaps this is some sort of gentleman's agreement among consultants - I won't discuss your mediocre evaluation if you keep quiet about my mediocre evaluation. Even without the institutional fixes I mentioned above, we can still make a cultural shift today towards constructive critique within the field. This critique should begin between colleagues, but there is no reason that it should not extend to the media. When a new RAND evaluation comes out, journalists could seek out an independent voice in the evaluation field to explain the importance of the evaluation (if any) and offer an opposing perspective - this is now standard practice in science journalism.
The connection between the field's basic condition today and the failure to take up new evaluation theory is important. Yes, evaluators should read and write more. But there are also major structural factors at play that have ground the development of theory in our discipline to a near halt. I suspect that evaluators would engage more at an individual level in the development of theory if these factors were to shift. In academia, many crucial conversations about theory stagnate and die as researchers move on to trendier topics. I don't think that evaluation can afford to be like academia in this way. Evaluation theory needs to advance for the practice of evaluation to actually carry out its mission.
Who I am
My name is Dr. Anthony Clairmont, and I work as a full-time external evaluator in California. I earned my PhD at UC Santa Barbara, and I now live in Long Beach. I have been the lead evaluator for publicly funded mental healthcare programs, substance-use programs, recidivism reduction initiatives, and college and high school programs. In addition, I've evaluated museums, libraries, and architecture. In the process of conducting this work, I've been exposed to many evaluation theories and a huge variety of practical issues. Evaluation is my vocation, and I believe that evaluation will be an important part of any good society we might wish to build.