The Before: A take on complexity and emergence in evaluation
Updated: Mar 13
Ten years ago, my roommate Michael (shout out) introduced me to the concept of complex adaptive systems (CAS), and so much fell into place for me. I finally had the language to describe what I felt intuitively. Since then, I’ve pulled from different fields’ applications of complexity theory, including my background in environmental science, in an attempt to explain CAS in my life and in my work as an evaluator. In case you’re interested in a view of my rabbit hole, I’ve linked a bunch of the reading I’ve devoured over time at the end of this blog.
The foundation for my inspiration continues to be my personal connection to nature; I feel like we have a lot to learn from paying attention to our part in natural systems. In fact, this year I introduced what I call my "Mycelium Network." It’s an hour on my calendar twice a week where I take a silent walk in nature by myself. This open time to let my brain wander, connect dots, and be refreshed by the woods has become a really important new routine for me. As a bonus, sometimes these walks can lead to fun whiteboard sessions like this one below.
What you are looking at is an initial brain dump of applied CAS theory in the context of evaluation that I scribbled down eight months ago. I’ll be spending the next six months digging deeper into these initial whiteboard ideas by participating in this year’s Emergent Learning (EL) cohort (deep gratitude to my EC team members for your support here).
Since emergence is one of the core concepts of complex adaptive systems, I’ll be looking for ways to link CAS more formally to Emergent Learning and share my own emerging reflections (you see what I did there) as the cohort progresses. The table below - a cleaner, more detailed version of the whiteboard scribbles above - lists core concepts of CAS, some EL practices, and how they can be applied to evaluation. I wanted to share these ideas as they are forming, both to document a starting point and to track how they adapt.
Core CAS concepts and their applications to evaluation:
CAS concept: The whole is greater than the sum of its parts; flow is the structure.
Application to evaluation: Many of us are trained to break things into their parts to understand them [read: analysis]. How can synthesis also allow us as evaluators to spotlight the relationships between parts? Analysis would be like teaching someone a new dance move by describing its parts: one arm up, one arm down. But what about the flow between your arms? And how do those distinct movements sync up with the rest of the body? Essentially, some of our evaluation practices give off robot-dancing vibes when reality is more nuanced, rhythmic, and connected.
CAS concept: Emergent patterns are generated through the interaction of agents.
Application to evaluation: There is an element here that ties back to my robot dance analogy (see above). To go in a different direction, I imagine tools like the Human Systems Dynamics Institute’s Pattern Spotters could provide a way to assess the patterns, linkages, and relationships between individuals, organisms, and systems in our evaluation. I am also excited by discussions in the field that encourage moving away from project-based evaluations toward community or regional evaluations that explicitly look at how interactions shape our understanding of our work and its impact.
CAS concept: Patterns in a system cannot be understood through its separate components.
Application to evaluation: The biggest connection for me here is how we as evaluators understand the inherent tensions of a system. Instead of focusing only on where there is convergence (or themes), we also consider divergence. Some of our best discussions with clients have come from reviewing tensions: understanding how multiple things may be simultaneously true, or where programs need to pivot to adjust the balance they seek amid those tensions. Principles-focused evaluation may be a guide here because it enables a multitude of strategies to be understood holistically together, rather than separately.
CAS concept: Nonlinearity; minor changes may produce large effects (the “period-doubling cascade”).
Application to evaluation: This is the classic “we’re looking for contribution, not attribution” in impact evaluations. In fact, complexity theory may go so far as to say that pointing to causality is futile because of the multitude of variables in the system. I think the importance of real-time learning using developmental evaluation or Emergent Learning shines through here. We also need to be careful not to write off the little things in making big changes (sometimes casually referred to as the butterfly effect). Note: I could see the butterfly effect being dangerously interpreted by funders or decision-makers as a justification for investing in personal responsibility over structural change, which is not what I am saying.
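If you want to see period doubling and the butterfly effect for yourself, here is a tiny toy sketch in Python (my own illustration, not a model of any evaluation system) using the classic logistic map, where one simple rule produces a steady state, an oscillation, or chaos depending on a single parameter:

```python
# Toy illustration: the logistic map x -> r * x * (1 - x), a classic
# example of the period-doubling cascade and of sensitivity to initial
# conditions (the butterfly effect).

def logistic_orbit(r, x0, warmup=500, keep=8):
    """Iterate the logistic map, discard a warmup, return the last `keep` values."""
    x = x0
    for _ in range(warmup):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(keep):
        x = r * x * (1 - x)
        orbit.append(round(x, 4))
    return orbit

# At r = 2.8 the map settles to a single fixed point.
print(logistic_orbit(2.8, 0.2))
# At r = 3.2 it oscillates between two values: the period has doubled.
print(logistic_orbit(3.2, 0.2))
# At r = 3.9 it is chaotic: a change of one millionth in the starting
# point soon yields a completely different trajectory.
print(logistic_orbit(3.9, 0.2))
print(logistic_orbit(3.9, 0.2 + 1e-6))
```

The same equation, with the same structure, behaves in qualitatively different ways as the parameter shifts; the point is not the math itself but the reminder that small differences can matter enormously in nonlinear systems.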
CAS concept: Nondeterministic; we cannot predict the future, so look at patterns retrospectively.
Application to evaluation: If we’re looking for tools that can ground our evaluation more firmly in complexity, outcome harvesting could be really beneficial. A recurring tension in our evaluation: can we stay conscious that we are identifying outcomes to help folks better navigate their work, not solely to market impact, which could be exploitative?
CAS concept: Diversity and requisite variety.
Application to evaluation: Another thread here for me is how cognitive diversity can help us make better sense of our findings. Cognitive diversity is another way of saying that people think differently, their brains process differently, and their worldviews and experiences are different, all of which have value and offer unique gifts. I often describe my brain as being organized like a tropical rainforest (lots of sound, lots of color, lots of all-over-the-place interactions; it’s not good or bad, that’s just my brain. How would you describe how your brain works?!). Are we building evaluation teams with cognitive diversity? Are we inviting staff and program participants into data interpretation in ways that ensure cognitive diversity?
*Note: I’ve laid out an initial draft in a table format but am thinking a less linear or more multidimensional model may ultimately make sense in a second draft. There is a lot here that I am finding in common with the table we made for our white supremacy + eval series (link). Perhaps a combined model is on the horizon…
This blog serves as part one of a two-part series: Before the Emergent Learning cohort and after. I hope to gain specific language and frameworks, as well as a network of colleagues to challenge and deepen my thinking. Plus a bunch of other unanticipated outcomes I can’t yet begin to imagine. See ya here again in six months.
Some resources from over the years:
Note: I know there is room to grow here, and more direct connections and links to be made to the work cited above.