Posted by Steve Waddell in M&E on August 10, 2010
Do you want to evaluate an initiative like this:
Major systems change initiative where the intervention initiative aims to “tip” a system in a major new direction…
Or maybe like this:
Many different agencies and project teams working collaboratively on the same problem with complicated interactions, impossible-to-attribute outcomes, diverse responses to unexpected events…the challenge is the ongoing development of the collaborative effort and providing feedback on its effectiveness.
These are two of 10 scenarios that Michael Quinn Patton gives to illustrate where developmental evaluation is an appropriate evaluation approach. Both appear in his new book Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use.
Patton is the father of the field of developmental evaluation (DE), as distinct from development evaluation. DE is a burgeoning field for working with complex adaptive systems, in contrast to the traditional log frame and other static, linear approaches that can wreak havoc with change efforts. Patton explains that DE “…facilitates ongoing innovation by helping those engaged in innovation examine the effects of their actions, shape and formulate hypotheses about what will result from their actions, and test their hypotheses about how to foment change in the face of uncertainty in situations characterized by complexity.” (p. 14)
The book is a wonderful overview not just of the approach but of the many years of Patton’s work to develop it. DE is not a methodology – it encompasses many different methods, including traditional ones such as surveys and participant observation. I think of it as a stance: as in action research, the evaluator is a co-participant in the development of the initiative, actively working with others to draw out learnings and integrate them into action, with action-reflection-planning-action cycles built into daily worklife. DE is a practice that aims to pragmatically guide workday actions and promote the kind of leaderful culture so important for change networks.
This contrasts with formative evaluation, which gets a program model ready (working out the bugs), and summative evaluation, which assesses at the end of a project whether it worked. These approaches use frameworks like SMART (Specific, Measurable, Achievable, Relevant, Time-bound), and the evaluator is typically thought of as outside the project being evaluated – a disinterested observer and analyst who delivers periodic reports.
Good questions and learning are foundations that unite all the evaluation approaches. Patton brings up single-loop learning (asking questions within the established policies, structures and goals – e.g., are we doing well at providing people fish to eat?) and double-loop learning (asking questions about the policies, structures and goals – e.g., should we instead be teaching people how to fish so they can feed themselves?). Oddly, he does not raise triple-loop learning (asking questions about how we think about an issue – e.g., how do we understand the relationships among ecosystems, fish and consumption?). Since triple-loop learning is really about learning how to learn, this omission perhaps reflects a limitation of Patton (triple-loop learning is a more recent concept) rather than of DE itself. He places DE’s focus at the double-loop level; I would say it moves into triple-loop as well.
The book provides a very good, comprehensive discussion of the fundamental concepts behind DE, such as systems thinking, adaptive cycles and approaches to change. Patton also presents a useful additional take on the distinctions among simple, complicated and complex: the key variables are the degree of agreement about what to do and the ability to define the impact of actions (Exhibit 4.5).
So far, though, this is similar to many other books. The key value is Patton’s ability to give great examples. He identifies 10 types of complex systems development, including the two described at the beginning of this blog. He then gives a detailed analysis of how to address each, including key questions, timelines and design/methods options, and for each he also gives an applied example with commentary.
Patton likes the word “bricolage”, a French term he appropriates to describe “combining old things in new ways, including alternative and emergent forms of data collection and transformed evaluator-innovator relationships.” Together with his descriptions of formative and summative evaluation, this makes me think there is great value in understanding how to combine these traditions with DE to create comprehensive approaches to evaluation. After all, as Patton emphasizes, the old traditions have their place – their roles just need redefining alongside DE.
Comment from Michael Quinn Patton:

Thanks for your review of the Developmental Evaluation (DE) book. I agree that the omission of triple-loop learning is an oversight on my part — and unfortunate. While the opening chapter notes the distinction between single and double loop learning as one tradition on which DE draws, the actual processes of and interactions around developmental evaluation more often than not involve and include triple-loop learning. This is part of what is covered in the concept of "process use," the learning about evaluation-based learning that results from going through a developmental evaluation process, which is a form of learning quite distinct from findings-focused learning and findings use. Process use (see the book’s index) is inherently triple-loop learning. However, as you point out, the connection is only implicit in the book and should have been made explicit — and will be made so in the next edition. Thank you for that.