Leah Josephson

The objectivity myth

This is Part 3 of a series of blog posts that dive deeper into how our team is thinking about and acting on equity in evaluation. Check out the previous posts here.


For the past few years I’ve periodically assisted a researcher in conducting qualitative data collection, usually interviews with Jewish community leaders. I enjoy the topic areas (and putting my Judaic studies graduate degree and past professional communal experience to good use) so I’m always happy to help out.

The thing I love about these interviews is that they are consistently awesome. As an evaluator, this means I get on the same wavelength as the interview subject quickly: I'm asking the right questions, the conversation flows freely, and we breeze through the fact-finding questions to get to the REAL conversation – when people stop being polite and start getting real.

Reaching this part of the conversation may be my absolute favorite part of being an evaluator. The victories, the struggles, the learnings they describe are all just so HUMAN, and the connection comes easily to me from years of being steeped in the community as a professional and a scholar – but also as a member with a personal and spiritual connection. My conscious lack of “objectivity” here serves as a strength in developing an authentic connection that leads to deeper, more honest feedback about a program.

AEA network members Lila Burgos, Tsuyoshi Onda, and Cristina Magaña described another facet of this dynamic when they wrote about code switching in evaluation for the AEA365 blog. In a project focused on equitable philanthropy with a racially diverse project team, they noticed, "We quickly found ourselves openly revealing our lived experiences, nuanced understanding of racial and economic inequities, and MLE expertise in a way that felt authentic and valued."

I feel vulnerable admitting that I am more likely to reach this magical flow during these culturally specific interviews with Jewish community members, when the interview subject is one of MY people, with all the sacred messiness and complexity that it entails.

Does this observation mean I do a terrible job when I’m engaging in evaluation outside my community? No, but I have to work a lot harder, think a lot more, and constantly question my assumptions. This is especially important when interview topics reflect experiences outside of the dominant cultures of white supremacy, Christian hegemony, and other systems of power and control.

Conducting hundreds of culturally specific interviews has made me think a lot about culturally responsive evaluation and how it might be addressed on our team. What would it look like to onboard other members of our team to this project? Would I ever feel truly confident that another team member from outside my community could get beyond a surface-level understanding? What would they miss? Would they ask the right questions?

Our team constantly asks who is the most appropriate person to gather data as we confront the objectivity myth: the assumption, rooted in white supremacy culture, that evaluators can separate their own experiences and assumptions from what they hear and interpret. At Emergence Collective, this means we often support (and appropriately compensate) program participants, front-line staff, or community members who will connect most easily with participants in defining interview protocols and conducting data collection. They're not objective, but they're the experts on the program, hold deep and trusting relationships, and know how to ask the right questions.

On the other hand, I often deeply appreciate the fresh "outsider" perspective my colleagues have brought to a few of our Jewish communal projects. I recall one of my team members asking a question on a site visit that stopped me in my tracks; her perception of a community issue was far different from my own as someone who had been thinking about it for a long time. She brought me a new perspective. Other times we have worked on projects where data collection was conducted by people uninvolved with the program or community because of privacy concerns or even internal dysfunction or mistrust.

The editing process for this post sparked several interesting conversations on our team about the balance we seek in honoring both lived experience and outsider perspective, and consciously leaning into a both/and posture. Most importantly, it reinforced the importance of participatory methods to enhance rigor in our evaluation projects, a topic we've discussed on our blog before.

What has this topic brought up for you as you reflect on your practice? We’d love to hear your questions and comments below or on Twitter.


