Measuring the "Touchy-Feely" Stuff: Takeaways from a Qualitative Methods Workshop


“How do we measure the touchy-feely stuff?”

This question was posed to me at a recent pre-conference workshop I led at the 2018 HANO Conference, titled “Making Voices Heard: Capturing and Amplifying Constituent Perspectives Through Qualitative Methods.” The workshop was itself the result of an observation I’d made during previous presentations on qualitative data collection—namely, that many nonprofits are hungry to capture the perspectives of their constituents but feel they lack the know-how to do so. As live poll results from the workshop demonstrate, many nonprofits make the effort to collect qualitative data, but then aren’t sure what to do with it, leaving the data to collect proverbial dust on the shelf:

[Image: live poll results from the workshop]

What is more, nonprofits frequently feel they face an either-or proposition: that in order to generate quantitative data, qualitative data must be sacrificed along the way. However, I strongly believe that qualitative data is the yin to quantitative data’s yang; together, they create a more balanced, more complete understanding of an organization, target population, or community. And as I’ve described elsewhere, the whole is greater than the sum of its parts—yielding what I think of as a “1+1=3” understanding and a richer basis for learning.

So how can you get started incorporating qualitative data collection into your organization’s practice? Here are six key takeaways from the workshop:


Take time to refine your qualitative question. I often say that good thinking begins with great questions. While it’s natural to get excited about the “doing”—creating a new survey, planning a focus group, or scheduling interviews—it’s important to be intentional about the “thinking” first. What question are we posing that we feel qualitative data will help us answer? What subjective information (e.g., opinions, motivations, beliefs, or interpretations of experiences) are we hoping to learn about? Taking time to explore and refine our learning question is important, because it will drive the qualitative data collection process. Some sample questions that might be informed by qualitative data collection:

  • How is our organization perceived within the local community?

  • Why do our older clients stay more engaged with our programs than our younger clients?

  • How can we maximize the experience volunteers have with our organization?

  • What do our supporters feel are the most important issues in the upcoming election?

  • What are the greatest challenges facing arts and culture organizations today?



Sometimes less is more in capturing your constituents’ (literal) voices. While we are visual creatures and social media inundates us with videos, photos, and gifs, sometimes experiencing stories in their simplest form—through audio only—can be truly powerful. Fans of StoryCorps, the national nonprofit organization that works to preserve and share stories of people from all walks of life, can no doubt attest to being moved to laughter or tears simply by listening to an audio exchange of people having a conversation. For hearing audiences, audio stories can strip away all but the most essential elements of conversation, allowing emotions and meaning expressed through our voices as well as our words to shine through. In the workshop, we listened to the audio of a Vietnamese-American mother talking with her grown daughter, reflecting on the challenges of raising her within two cultural worlds:

Imagine your organization’s constituents, telling their stories in their own words and voice—what might they say, and what might audio recordings capture in a way that visuals cannot?


Look for themes to get an initial handle on qualitative data. For organizations that have collected stacks of open-ended survey responses or pages of interview transcripts, it can be overwhelming to know where to begin tackling all that narrative information. One way to make the gathered information manageable is to conduct a simple thematic analysis. In a nutshell, thematic analysis refers to a systematic review and coding of narrative data to identify patterns and themes that emerge from the text. While full-scale thematic analysis can be complex and involve sophisticated software, for many organizations, undertaking a light-touch, manual version of such analysis—what I dubbed “baby thematic analysis” in the workshop—is often enough to get started on understanding the broad brushstrokes of respondents’ perspectives. What might that look like in practice? The brief, informative video below, produced by ModU, provides an excellent introduction to coding and thematic analysis:
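For readers who prefer to tinker, here is a minimal sketch of what tallying hand-assigned theme codes might look like in a short script. The responses and theme labels are hypothetical examples, not real workshop data, and the manual reading-and-coding step still happens before anything is counted:

```python
# "Baby thematic analysis" sketch: tally theme codes that a reader has already
# assigned by hand to open-ended survey responses. All data here is illustrative.
from collections import Counter

# Each response has been read and manually tagged with one or more theme codes.
coded_responses = [
    {"text": "Staff made me feel welcome from day one.", "codes": ["belonging"]},
    {"text": "I wish sessions were offered in the evening.", "codes": ["scheduling"]},
    {"text": "The mentors really listened to my concerns.", "codes": ["belonging", "staff support"]},
    {"text": "Parking and bus access were a constant struggle.", "codes": ["access"]},
]

# Count how often each theme appears, to surface the broad brushstrokes.
theme_counts = Counter(code for r in coded_responses for code in r["codes"])

for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```

The same tally could just as easily live in a spreadsheet column; the point is simply to move from a pile of narratives to a rough picture of which themes recur.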

Find a qualitative data “study buddy.” Like many new skills, analyzing qualitative data can be daunting to tackle on your own! It can be helpful to identify a study buddy—someone who is either walking the same journey of exploring qualitative data, or someone who is more experienced and can serve as a mentor—with whom you can practice these new ideas and skills. Not quite sure if the themes you see emerging from focus group responses are a single concept, or two distinct ones? Your study buddy may be able to help. Wondering if the wording for one of the survey questions you are drafting might be biased? Your study buddy can serve as a double-check. Hopefully you can return the favor, and each of you benefits by growing your knowledge, experience, and know-how in the process. Finding a study buddy outside of your own organization may help you gain objective feedback from someone who isn’t already steeped in your organizational lingo and assumptions.  


Qualitative data collection can be an avenue to constituent empowerment. Fundamentally, qualitative data collection is a form of story collection. Each person’s experiences, perspectives, and opinions are part of the tale of who they are and shape how they move through the world. Simply having one’s story heard can be transformative and life-affirming, particularly for members of marginalized communities. 

One of the forms of qualitative data collection that most resonated with the workshop participants was PhotoVoice. PhotoVoice combines visual storytelling with community building, and provides participants with guidance for using photography as a means of self-expression, community dialogue, and advocacy. The video below, produced by Health Share of Oregon, provides a powerful example of PhotoVoice in action—in this case, illuminating the intersection between individuals’ racial, ethnic, gender, and sexual identities and their experiences with health care:

Following group discussion and community-building, photographs are chosen and captioned by the participants. Capturing views of the world through participants’ eyes and accompanied by their own words can grow participants’ sense of agency while also educating a larger audience. One of the most visible examples of PhotoVoice in action in Hawaii was a community-based participatory evaluation of a Housing First program, conducted by a team of researchers at the University of Hawaii-Manoa. Sample photos from the project demonstrate the impact of Housing First participants’ images and captions.


Close the loop and amplify constituent voices. Almost all of us enjoy having our contributions acknowledged. When people take the time to provide their personal perspective, we owe it to them to close the loop on the information we’ve collected. For example, sharing out survey results allows respondents to see how their input relates to overall findings. Interviewees and focus group participants appreciate seeing final reports, white papers, and presentations that their insights helped inform. This loop-closing can be part of larger efforts to amplify the voices of your organization’s constituents, across multiple avenues for dissemination. Your organization’s website provides valuable real estate for moving first-person accounts into public spaces. Honolulu-based Domestic Violence Action Center, for example, features on its landing page a simple yet powerful video of a client who has benefitted from the organization’s programs.

Similarly, annual reports provide a vehicle for featuring the voices of your organization’s clients, supporters, and constituents; e-versions of annual reports (such as this 2017 annual report from StoryCorps) allow for live links to audio files, videos, or other online sources, adding further dimensions to the data they typically include. Reports or white papers that include narrative passages or pull quotes (such as this report on institutional touchpoints of homelessness in Hawaii) can provide a visceral connection with individuals’ unique experiences. And public presentations—for instance, this share-out event featured at Honolulu Hale in July 2016 for the aforementioned Housing First PhotoVoice participants—can expand awareness and visibility of people whose voices might otherwise be overlooked.


How is your organization embracing qualitative methods of data collection? What have you or your organization learned through qualitative data collection that you wouldn’t have learned otherwise?

Five Career Lessons I Learned from Teaching


Yesterday, Labor Day, provided closure on the summer for many people across the country, bookending the season of leisurely days at the beach or pool, barbecues, and family road trips. Being a creature of habit, I must admit enjoying the return of routines that the new school year brings. Even so, I found myself fondly looking back this weekend on my family’s summer travels, including a special side trip down memory lane.

On a drive from the Washington DC area south through North Carolina, we stopped at Northern Vance High School in Henderson, NC, where more than 20 years ago I was a high school biology and chemistry teacher via Teach For America. Memories of students, colleagues, and classrooms flooded my brain as I stood near the entrance of the school that served as a home-away-from-home for my two years of teaching. Reminiscing about the naïve young woman who stood before her first class of students—many of whom were just a handful of years my junior—I realized that several of the lessons I learned as a teacher have stayed with me across my career. Most, in fact, have served me particularly well as a consultant. To wit, five career lessons I learned from my time in the classroom:


Relationships eat management strategies for lunch. On the surface, “classroom management” was teacher parlance for “keeping your students under control.” I recall experienced teachers warning me of the importance of classroom management, and I collected a bevy of strategies to keep students engaged, maintain order, and enforce consequences. What I found over time, however, is that classroom management was much less about discipline strategies than it was about the culture I developed in my classroom, and the relationships I established with my students. Creating a culture of trust and mutual respect often allowed classroom management to occur organically. Similarly, I’ve found that in my work as a consultant, relationship-building has been foundational in “client management.” Developing trusting, respectful relationships that engender clear communication has often paved the way for smooth interactions with my clients.

Supportive colleagues are invaluable. Teaching, simply put, is hard work. In my first year, I struggled to find my balance as a teacher, carrying the weight of teaching responsibilities while also supporting my students’ navigation of the often-challenging circumstances they faced. Having a tight-knit network of teacher friends got me through that first year. Knowing they understood my frustrations and could also genuinely celebrate my successes was comforting and fortifying. Likewise, I have developed friendships with colleagues and mentors who have helped me find my way as a consultant. There is something deeply reassuring about finding your “tribe,” and knowing that they not only speak your language, but also intuitively understand your motivations for why you do what you do.

If you’re going to critique, be ready with alternatives. Working in the rural south, I was very accustomed to meals, sporting events, and various other activities beginning with a prayer. However, I once attended a diversity training sponsored by the county school system that began—I thought somewhat incongruously—with a call for a prayer.  Just as we prepared to bow our heads, I mustered up the nerve to ask the facilitator if, maybe in the spirit of the training’s purpose and recognizing perhaps not everyone in attendance shared the same faith, we might consider instead taking a minute of silence for people to pray or reflect as they felt moved to. After a few flustered moments, the facilitator did indeed invite us to take part in reflective silence, and the meeting moved on without issue (thankfully). My vice principal, an older Southern gentleman who was also at the training, later remarked, “Joyce, the reason why your comment worked in that context is because you didn’t just critique—you had another possibility available.” I hear his voice as a reminder to this day that constructive criticism can always be made more palatable by being paired with possibilities and viable alternatives. I remind myself, too, that my role as a consultant is not to critique, but rather to help clients learn and improve, and to work with a sense of possibility in finding pathways to make that happen.

Take job titles with a grain of salt. Several veteran teachers told me before I began teaching that the most important people at any school are the secretary and the janitor, and rightfully so. I found through experience that each was, in their own way, essential to the day-to-day machinery of the school. The secretary served as administrative gatekeeper and manager of the back of the house, while the janitor ensured that the hallways and classrooms were clean, orderly spaces in which both students and faculty could take pride. Their modest job titles made them no less indispensable. I have carried into my current role the recognition that board presidents, CEOs, and executive directors are undoubtedly integral to any organization’s functioning. But program staff, administrative assistants, and operations managers are every bit as vital—and sometimes even more so—to the daily, mission-driven work of an organization as their colleagues with loftier titles, and are due the respect that fact warrants.


Be your authentic self. One of the most memorable pieces of advice I encountered as I prepared to enter the classroom was this gem: Don’t smile until Christmas. The advice wasn’t meant literally (at least I hoped it wasn’t), but the message was clear: be stern from the get-go, lay down the law, and let your students know you mean business. The problem with this advice was, I just didn’t have it in me to follow it! I could certainly assert myself with my students, but I couldn’t imagine taking on the role of drill sergeant. And so, I decided to forgo this bit of established wisdom, and chose to be myself instead—which meant I was periodically nerdy, silly, and irreverent, sometimes all at once. Being authentically myself allowed me to connect more personally with my students, and helped me realize that my version of being a teacher could be just as valuable as the stern, strict disciplinarian version that others advocated. Similarly, giving myself permission to be authentic has enabled me to find what I can uniquely contribute as a consultant, which in turn has helped me maintain a sense of satisfaction, purpose, and joy in my work.


What past advice did you choose to follow—or actively *not* follow—in previous positions you've held? What insights from your first job have you carried with you throughout your career?

What's Your Decision (Tree)?

Sometimes we choose the themes in our lives, sometimes the themes choose us.

Over the past three months I’ve been immersed in thinking about decision-making. Whether by luck or by subconscious intent, my personal reading, professional development, and even my podcast choices have recently intersected around decision-making. I guess the universe is trying to tell me something.

This theme seemed to crystallize when I sketched out a decision tree to guide my selection of consulting projects:


As simple as it is, just having the visual gave me incredible clarity on a number of things:

  • Everything starts with people. Looking back on past projects, I can say with certainty that the common thread through all the enjoyable ones has been the opportunity to engage with terrific people. I half-jokingly said to some colleagues recently that I could be doing a project on underwater basket weaving, and as long as I was working with awesome people, I guarantee it’d be worthwhile.

What does “awesome people” mean for me? People who ask thoughtful questions, are passionate in their pursuits, and who think expansively. People who bring laughter and a sense of humor to their work. People who are compassionate, and navigate the world from a place of possibility and generosity. And this Venn diagram from LinkedIn CEO Jeff Weiner does a pretty good job of short-handing the type of people I like working with, too.

  • My decisions reflect my values. Although the particulars of my past projects and clients have varied widely, all of them have resonated in some way with my values: contribution to community, advancement of social justice, empowerment of the underserved, and protection of the vulnerable. Certainly important work is advanced in other areas, but these are ones that ring true to my own priorities. The projects that speak to me on a visceral level are the ones that I know are echoing my values.

The opportunities that I pass up reflect my values, too. Keeping my commitments to clients and colleagues is important to me, and I work hard to ensure I don’t spread myself so thin that I sacrifice my integrity. Likewise, being able to care for my family and myself is critical to my ability to give my best to others. As a result, family and self-care retain veto power if it seems either will have to be sacrificed too much for the sake of work.


  • Moving from auto-pilot to intention makes a world of difference. Slowing down and taking the time to carefully consider my options allows intention, rather than inertia, to drive my decisions. And intention, in turn, has unlocked new doors that I might not have opened otherwise.

For example, intentionally seeking to “spread my wings” has made me consciously look for opportunities a bit outside of my comfort zone, which has made my project work richer and more engaging. The research nerd in me loves diving into literature reviews of unfamiliar topics, just as the practitioner in me enjoys gaining new skills to add to my consultant’s toolbox. And as research suggests, recent opportunities to lead trainings and workshops have proved invaluable in advancing my learning, not only because facilitation pushes me to know the content better than before, but also because the questions and perspectives that participants share grow my own knowledge base.
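The filters above could be sketched as a tiny decision function. The criteria names below are illustrative assumptions drawn from the bullets, not the actual tree I drew:

```python
# Hypothetical sketch of the project-selection decision tree described above.
# Criteria names are illustrative, not the author's literal diagram.
def accept_project(awesome_people: bool, aligns_with_values: bool,
                   strains_family_or_self_care: bool) -> bool:
    """Walk the filters in order; family and self-care hold veto power."""
    if not awesome_people:           # everything starts with people
        return False
    if not aligns_with_values:       # decisions reflect values
        return False
    if strains_family_or_self_care:  # veto: protect family and self-care
        return False
    return True

print(accept_project(True, True, False))  # passes every filter → True
print(accept_project(True, True, True))   # veto applies → False
```

The ordering is the point: a project that fails the first branch never even reaches the later ones, which is what gives the visual its clarifying power.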

Admittedly, being a consultant offers me a degree of decision-making privilege that many jobs don’t. And yet, it holds true that regardless of our occupation—or our demographics, or any other factors that “define” us—our decisions reflect our values. One question I keep asking myself is, How can I better align my decisions with my values? The decision tree above is one step toward figuring that out in my professional life.


What’s your decision tree, whether in your professional or your personal life? How does understanding your decision-making filters help clarify the values you bring to your work?

Why--and How--I'm Focusing on Relationships in 2017


Although I do love a good New Year’s resolution, I was intrigued by nonprofit technology maven Beth Kanter’s suggestion for a different approach to kicking off each new year: identifying a theme for the year.

It turns out that pinpointing a theme was the easy part. When I reflected back on the things over the past year that I was proud to accomplish, that brought me joy, that I want to grow and foster, and that I want to improve, a clear through-line emerged: Relationships. I know I’m not alone in wanting to deepen and broaden my connections with family, friends, and colleagues. But this theme feels, on a gut level, more significant in the wake of November’s elections, and in the more divisive social and political environment we are now all navigating.

It was helpful to articulate for myself: Why am I gravitating toward the theme of relationships? And how can I put that theme into action in my personal and professional life as I move forward? My reflections on those questions follow below…


>> Why Am I Choosing Relationships as My Theme for 2017?

Relationships are an antidote to mistrust. In the current political climate, it seems we are encouraged to judge one another on labels and appearances, and to assume the worst of one another’s motives based on demographics and group membership. By consciously creating and nurturing relationships that look beyond the superficial, we can work to undermine this agenda of mistrust. These days, relationship building—particularly the type that seeks out connection and understanding beyond our own echo chambers—feels like a micro-act of civil disobedience. And that’s a good thing.


Relationships are a long-term investment in community. As they say, sometimes you have to go slow to go fast. Relationships are built on trust, which very often must be developed over time. Once that trust is established, however, it allows subsequent collaborations and partnerships to happen quickly and smoothly.

A case in point: Over the past few years, I have come to know the fine folks at the Hawaii Alliance of Nonprofit Organizations (HANO) pretty well, and through many informal conversations we have gradually built relationships of mutual respect and trust in one another’s values and motivations. When an opportunity arose to have colleagues from Hawaii Community Benefit Consultants provide a “Consultant's Corner” at last fall’s HANO conference, the planning and discussions around the idea were easy to navigate and transparent. It’s hard to imagine the same sort of smooth process occurring had my previous relationship with HANO been a purely transactional one.

Relationships remind us of our best selves. Relationships are at the heart of our meaning and purpose. It is within relationships that we allow others to see our vulnerabilities, and to share our humanity with one another. Those who we trust to see us as complete, whole individuals can in turn serve as our mirrors—they can observe what we sometimes have difficulty seeing in ourselves. They can speak to the joy we bring others, the contributions we make, and why they appreciate our love and friendship. And on the days when we are less than our best, being reminded of our strengths can be powerfully reassuring and fortifying.


>> How Will I Turn the Theme of Relationships Into Actions in 2017?

By expressing gratitude. Expressing gratitude forces us to observe and notice others, their actions, and their intentions. Gratitude highlights our interdependence, and allows us to hone our ability to see the good and find connection. I’ve already begun keeping a list of people to whom I plan to send “gratitude notes” this year—just simple expressions of appreciation for the ways they have informed my thinking, shaped my experiences, or otherwise made my corner of the world better and brighter.

By catalyzing connections. While I don’t meet Malcolm Gladwell’s definition of a Connector, I do enjoy connecting people who might not know one another yet who share passions or pursuits. Of course, we are all capable of being connection catalysts, whatever the size of our social circle. When we are engaged in conversation with people who are motivated to effect change on an issue, we can ask ourselves catalytic questions, such as “Who else cares about this? What could they accomplish together that they might not be able to accomplish on their own?” And by bringing these individuals together, we can continuously grow a network of people with a shared sense of urgency to do good.

By leaning into discomfort. For many of us, growing relationships in soil we know to be fertile is easy to do. But taking the risk of planting seeds for new or challenging relationships in unfamiliar terrain can be intimidating. We owe it to ourselves and our communities to lean into the discomfort of exploring new territory, and to cultivate relationships that hold the potential to blossom outside of our existing comfort zones.

This could take shape in many different forms. We might start conversations with people of a different generation than our own, or who grew up in a different part of the world. We might ask to attend a religious service with an acquaintance or colleague whose faith differs from ours. Or, as was the case for me during the holidays, we might reach out to family members with vastly different political beliefs, and work to better understand—without judgment—the values that undergird those beliefs. In our increasingly balkanized world, pushing through discomfort is a powerful way to gain mutual understanding on the other side.


What themes do you see emerging for yourself in the New Year? How will you work to translate those themes into action?

Lies, Damn Lies, and Election Season Statistics: What Nonprofits Can Learn from the Use of Numbers in Politics

One of the disheartening casualties of this election season—in addition to run-of-the-mill civil discourse—has been the judicious use of data and statistics. Candidates of all political stripes wield figures (but not necessarily facts) to advocate for their positions and policies. And they do so because they know how most of us operate: We defer to the authority of numbers. Figures, data, and statistics seem to carry an air of irrefutability. When candidates cite dollar figures and percentages during their stump speeches and debates, many of us unconsciously think, “Well, someone’s calculated those amounts… you can’t really argue with data.”

But, as it turns out, you can argue with data—loudly and vigorously, as we have seen over the past few months! And because nonprofits increasingly intersect with the political world through advocacy and relationship building, organizations can learn a great deal from candidates’ use—or misuse, as the case may be—of data and statistics in the political realm. Here are a few key lessons nonprofits can take away from election season statistics:

1. Just because certain figures or percentages support our position doesn’t make them true. We all bear the responsibility of being “reasonable skeptics” when we encounter information we believe is share-worthy. Sure, it’s great to have data to support our organization’s positions, but has that information been vetted? Have we checked that the source is reliable and doesn’t have an overt agenda? Our national math anxiety can cause us to give numerical information an unchecked sense of power. Knowing how to interpret and assess statistics can equip us to be better consumers of data, which in turn makes us better informed advocates for our causes.


2. Assume you’ll be fact-checked. News and social media outlets have practically made a cottage industry out of fact checking, whether of national-level candidates or local politicians. Needless to say, avoiding a pants-on-fire rating regarding your nonprofit’s assertions is critical to maintaining the good faith of supporters and the legitimacy of your organization. Operate under the assumption that information you distribute will be checked for its truthfulness.

3. Be ready to cite your sources. Sometimes your organization’s communications will require formal citations. Source information is a given for infographics, research studies, or policy briefs. But in other forms of outreach, such as e-mail blasts, social media posts, public service announcements, or fundraising events, time and space often don’t allow for nitty-gritty citations. It can be tempting to think it’s not important to track sources in these circumstances. Be prepared to cite them anyway. Your organization is accountable for the information it disseminates in whatever format, and you’ll want to be ready for inquiries, whether from skeptics or curious supporters.


4. The plural of anecdote is not data… Those of us in the social sector love using stories and anecdotes to make a point. And this makes sense, because we are hardwired to relate to human stories. Collected anecdotes can certainly help illustrate a theme or provide qualitative depth. But we have to be careful about extrapolating individual stories to make sweeping generalizations about others, particularly when those generalizations involve group identity or social status. Rigorous studies involving significant numbers of people are much better fodder for identifying trends and patterns. As a fellow evaluation consultant rightly notes, the plural of anecdote is not data, it’s…anecdotes.

5. …However, an anecdote, when combined with reliable data, can be a powerful force for good. Stories plus data represent a 1+1=3 scenario: together, they are more effective than either is alone. Stories are the hook, the component that resonates with our emotions and shared sense of humanity. Data appeal to our analytic side, the part of us that wants to understand the scale, scope, and urgency of an issue. Together, they deliver a one-two punch that helps bridge the gap between sympathy and action. Although he’s not a politician, New York Times columnist Nicholas Kristof often writes on issues of political significance, and is masterful at combining stories with data, as demonstrated in this piece on strategies for breaking the poverty cycle.

How has your organization used data and statistics to strengthen its work? How have you combined stories and numbers to convey your organization’s impact?





Hawaii Appleseed: Social Justice Evaluation in Practice

In my last post, I posed a question: If you’re a social justice organization, how do you measure your impact? Hawaii Appleseed Center for Law and Economic Justice (Hawaii Appleseed), a nonprofit law firm dedicated to advocacy on behalf of Hawaii’s low-income communities, offers firsthand insights into the inherent challenges as well as the learning opportunities evaluation can provide to a social justice organization. A small but mighty force for change, Hawaii Appleseed’s efforts encompass: research on housing, health, education, immigrant rights, and economic justice; legislative and administrative advocacy to ensure that laws and policies impacting those in poverty are legal, fair, and effective; community education and outreach efforts; partnership with other community-minded groups in grassroots coalitions; and, when needed, litigation to protect the rights of low-income individuals and families.

Via email, I recently asked Gavin Thornton, co-Executive Director of Hawaii Appleseed, to describe the ways Hawaii Appleseed incorporates evaluation into its organizational work, and what evaluation makes possible for the organization and for those it serves. As he thoughtfully details below, evaluation has helped Hawaii Appleseed identify systems that perpetuate poverty, think strategically about where to invest its staff resources, ensure the organization’s day-to-day efforts are on the right track, and contribute to and build on existing knowledge about how to effectively increase the opportunities and well-being of those in poverty.


How has Hawaii Appleseed approached evaluation? What does evaluation make possible for a social justice organization like Hawaii Appleseed?

Hawaii Appleseed’s mission is to create a more socially just Hawaii, where everyone has genuine opportunities to achieve economic security and fulfill their potential. We change systems that perpetuate inequality and injustice through policy development, legislative advocacy, coalition building, and litigation. This is an ambitious mission, especially for a small organization with limited resources (currently with only three permanent staff and a budget of around $300,000).

To fulfill our mission, we search for projects that will maximize our impact and return on investment—projects that will get us the most bang for our buck. Much of the work that we do is based on what has worked well elsewhere—evidence-based practices or promising practices that have been demonstrated as an effective means of improving opportunities for self-sufficiency. To evaluate the impact of our projects, we collect what data we are able to given our limited resources, but often must rely on conclusions we can draw based on research done by others, as I describe in more detail below.

Evaluation is critical to our work because it allows us to strategically advocate for the changes necessary to achieve our mission. We need to know if what we are doing is working, and if it is not, we need to modify our approach or focus our energies elsewhere. Indeed, this really gets to the core of the organization. Our focus is looking at systems that are keeping people in poverty—systems that are broken, but continue to stumble along because no one has made the effort to step back and recognize the deficiencies and correct them. After identifying systemic problems, we figure out how to change the systems so they create opportunity instead of stifling it. For the most part, the policy analysis we conduct and the reports we write are evaluations of broken-down systems. Since these evaluations are at the core of our work, we recognize the importance of self-evaluation, and of trying to ensure that we are making the most of what we have and that our work is accomplishing its intended purpose.

Because the legal or policy successes Hawaii Appleseed pursues can take months or years to occur, how does the organization know—on a day-to-day basis—whether its work is heading in the right direction?

Perhaps surprisingly, we can often see immediate positive change resulting from our work. One example is a case we filed on behalf of the tenants at the Mayor Wright Homes public housing project, where over 360 households had endured years of unsanitary and unsafe living conditions including a lack of hot water, vermin infestation, and dangerous criminal activity on the premises due to lack of upkeep of the property and inadequate security. Even while we were still working up the case prior to its filing, the attention drawn to the problem resulted in the commencement of a significant rehabilitation of the project. The hot water system was fixed shortly after the case was filed, and by the time the suit was settled with a commitment to complete necessary repairs at the property, over $4 million in repairs had already been made, dramatically improving the condition of the project.

In situations like the Mayor Wright case, we do not even need to win the case to achieve a successful outcome—while a loss in court on Mayor Wright would have limited the impact of our work, it would not have discounted the benefits already accrued to the tenants (unless, of course, it was an early loss, before the benefits accrued). However, some of our work is more of an all or nothing proposition. For example, for the most part, legislative advocacy does not result in significant change unless the bill gets passed. Yet even in the legislative context we can still evaluate our progress on a day-to-day basis. We look at the number of partners we have recruited to our coalitions and the extent of their engagement; we count the number of legislators supporting our bills; we look at the media coverage of the issue—the number and quality of the stories. It is not necessarily a scientific process, but it does provide a general sense of whether we are getting traction on an issue. Ultimately though, none of this matters if the bill does not pass. Yet these indicators are critical in deciding whether it makes sense to continue pursuing an issue year after year.

What is a particular challenge Hawaii Appleseed faces in its evaluation efforts, and how has the organization faced that challenge?

One recurring problem that we often have in evaluating our work is that of attribution—there is often a degree of uncertainty regarding whether it was our actions or something else that created the change we sought. Success in the legislature requires a group effort, and so it is hard to say who or what was responsible for the result—the answer is usually that all, or nearly all, of the participants had something to do with the outcome, as well as external factors. While it would be nice to be able to quantify the impact of our work in some way—we expended x dollars and y hours which resulted in the achievement of z benefit—it is not practical, and we need to be content with more vague assessments of our work. For example, in advocating for Accessory Dwelling Units to be permitted on Oahu, we developed a policy brief to start a discussion that we believe was not happening prior to our work on the issue (though at least one academic researcher had been looking at it). Then we spearheaded an advocacy effort to get a bill passed through the city council. Our work was cited in the mayor’s affordable housing plan for Honolulu, and we helped craft the bill that was ultimately passed. Based on this, we felt it was reasonable to conclude that the change was a direct result of our efforts—that it would not have occurred without our work—but it is still difficult to say definitively.

This difficulty in attribution is similar for our litigation efforts. Frequently, the response to the cases we have brought has been, “We’ve been looking at this issue and working on it; this is something we were already in the process of fixing before the case was even filed.” I suspect that this response often reflects the genuine beliefs of the person or agency making the claim. However, since nearly all of our cases relate to issues that had persisted openly for years, and on which no action was taken until we began work on the case, the claims of “we were going to do this anyway” are questionable. For example, in the Mayor Wright case, there had been multiple stories in the newspaper about the lack of hot water during the seven years preceding the filing of the case, and yet the problem was never adequately addressed. In a case we are currently working on regarding the inadequacy of payments made by the state for the care of children in foster care, the payments—which federal law requires to be regularly updated to account for inflation—were not increased for nearly 25 years, even after five years of advocacy in the legislature by foster parents and advocates seeking an increase to the payments. The payment was increased within months of filing our case, but the state claimed that it was something they were going to do anyway. In circumstances like these, where a problem has persisted for years without action, but is addressed (or at least partially addressed) shortly after we bring a case on the issue, it seems very likely that our work provided the critical push to create the action necessary, but we can never be 100% certain regarding what would have happened had we not taken action.

How does Hawaii Appleseed demonstrate community-level impact that may be difficult to quantify? How does the organization assess its progress, for example, in advancing self-sufficiency or economic security for low-income families?

This is incredibly difficult, especially for a small organization like ours that does not have sufficient resources for the type of evaluation necessary to assess community impact. Instead, we are often forced to draw conclusions based on research done elsewhere. The Mayor Wright case provides a good example of this. We know that we obtained over $4 million in repairs and improvements to the project, but obtaining improvements to public housing projects is not our mission. What we really care about is creating an environment that will allow people to become self-sufficient. What difference does it make that a child is growing up in a place with working hot water, no holes in the wall, no bedbugs, and that is relatively free from criminal activity, versus a place that does not have those things? We do not know. However, we do know that there is a strong correlation between healthy, safe housing and positive health and economic outcomes. If we had the resources, we might be able to quantify the impact of our work in those terms—though even with significant resources it would still be very hard to do. For now, we are left with more vague notions of, “we know that healthy and safe living environments are important, and we obtained an improvement” (which we have been able to assess, not only by dollars spent on repairs, but also surveys of the conditions at the project).

For many of our projects, we can get a much clearer picture of the ultimate results, but we are still required to draw conclusions based on studies done elsewhere. For example, we have a project that seeks to increase participation in the school breakfast program. Participation rates in the school breakfast program (i.e., the percentage of children that eat school breakfast) are not important to us in and of themselves. However, there is a strong body of research that shows that low-income children who eat school breakfast benefit in clear, quantifiable ways, such as improved academic performance and increased health—things which are correlated with self-sufficiency later in life. We gather whatever data we can, for example on participation rates, visits to the school nurse, absenteeism, perceptions of classroom behavior, etc. Then, based on research done elsewhere, we can get a rough idea of the increased opportunities for self-sufficiency that the children are likely receiving as a result of our efforts.

How does Hawaii Appleseed engage the clients it serves in its advocacy and research efforts? What has it learned from doing so?

Because of our limited size and our area of expertise, we often must rely on other partner organizations to engage with those that we serve. Nearly all of the issues that we work on come from the community through other community organizations that contact us seeking help to resolve a problem. For example, we recently worked on a case that resulted in the drivers’ exam being translated into multiple languages. The issue was brought to us by a faith-based community organizing group whose members/constituents had identified the inability to obtain a license due to language barriers as a significant issue that was preventing people from getting to work and supporting their families. The group had already engaged the community in a variety of advocacy actions prior to our involvement, and continued to work directly with the community while serving as a liaison between the community and our organization during the course of the case.

That type of structure is workable, but it requires good communication, strong relationships, and everyone doing their part to make it work well. There is another organization in the national Appleseed network—Nebraska Appleseed—that has community organizers on staff, as well as attorneys and policy analysts. That allows Nebraska Appleseed to have direct relationships with the community it is serving, while at the same time providing policy development and advocacy expertise. That model is attractive because it makes communication easier and provides a lot more control over the work, but it requires significant resources. There have been times when we have been able to serve both roles, but it is very difficult to do so well given our size. As such, we recognize the importance of continuing to strengthen relationships with other organizations that have more direct contact with those we serve, and also of building capacity so that we can have more of that direct contact ourselves.

How has your organization faced the challenges of measuring social justice impact? What lessons has your organization learned in evaluating its efforts?

Measuring the Intangible: Social Justice Evaluation

If you’re a social justice organization, how do you measure your impact? How do you assess advances in racial equity, gender parity, or equal access to educational or economic opportunity? How do you evaluate progress toward the unmeasurable?

This question of how one measures the intangible sits squarely at the intersection of the changes social justice organizations are trying to effect, and the growing demand for impact assessment. But the answers don’t fit tidily into boxes, just as societal problems are not easy to disentangle from the systems that generate them. Nonetheless, heightened awareness about the need for and significance of equity-focused evaluation has given rise recently to new strategies and resources, particularly from the philanthropic sector. These newer approaches provide welcome tools for aligning organizations’ day-to-day work with their evaluation methods.

The evaluation community has similarly recognized the need for equity and cultural competence to be “baked into” evaluation from the outset, as the American Evaluation Association’s (AEA) Statement on Cultural Competence demonstrates. Jara Dean-Coffey, Jill Casey, and Leon Caldwell explain in their article, “Raising the Bar—Integrating Cultural Competence and Equity: Equitable Evaluation”:

Whether implicit or explicit, social justice and human rights are part of the mission of many philanthropies. Evaluation produced, sponsored, or consumed by these philanthropies that doesn’t pay attention to the imperatives of cultural competencies may be inconsistent with their missions…Because the act of evaluation is itself part of the intervention, an equity lens is paramount when evaluating a program whose goals touch on issues of equity or inclusion.

Here are four actions that funders and social justice organizations can take as they seek to include that equity lens in their evaluation efforts:

Understand Why Equity-Focused Evaluation Matters: My M&E, a platform managed by the United Nations Children’s Fund (UNICEF) and the International Organization for Cooperation in Evaluation (IOCE), offers monitoring and evaluation information, including the purpose, need, and importance of equity-focused evaluation. As part of its overview on evaluation and good practices, UNICEF offers a handbook, “How to design and manage Equity-focused evaluations,” which provides the rationale for such evaluations, strategies for managing equity-focused evaluations, and information on evaluation design, framework identification, and real-world challenges. And the aforementioned article by Dean-Coffey and her colleagues presents an equitable evaluation capacity-building (EECB) approach that can help organizations normalize and institutionalize equity-focused evaluation in a manner consistent with social justice goals.

Challenge Assumptions and Intentions Early and Often: Evaluators bring assumptions into their work—often unconsciously, and often with the best of intentions. It is critical that we explicitly face these assumptions and biases, question them directly, and consider how they will impact evaluation efforts, the data collected, and the audiences with whom their results will be shared. Fabriders offers a list of Questions to Ask Frequently (QAFs) When Working with Data and Marginalised Communities that helps evaluators understand and respect the relationship that will develop between themselves and the communities they seek to assess. Racial Equity Tools’ Getting Ready for Evaluation provides resources for groups preparing for the evaluation process, including Tip Sheets for considering the why, who, and how of assessing marginalized communities. 

Consider Embracing (Rather than Avoiding) Intangibles: Some evaluators are directly embracing intangibles as part of their evaluation process. The Inter-American Foundation, for example, recognizes that the grassroots development work it funds supports impact at various levels, and that both intangible and tangible results are meaningful. Its Grassroots Development Framework, which is the foundation of its evaluation approach, values both tangible and intangible returns, and acknowledges that each type of return can be evidenced at individual/family, organization, or societal levels. By explicitly assessing intangible benefits of its grassroots development efforts, the Foundation is able to better assess the longer-term changes it seeks to create through its grant making programs.

Create Conditions for Success: There is a strong urge to apply laboratory-like, case-control standards to evaluation of social and policy interventions. But the truth is, evaluations in the real world seldom identify cause-and-effect pathways with absolute clarity. That fact doesn’t undermine the value of evaluations, however—it merely points to the enduring merit of identifying “contribution, not attribution,” as Grantmakers for Effective Organizations (GEO) and the Council on Foundations put it. Phrased another way, attempting to show cause and effect definitively may be an exercise in futility within complicated, poorly controlled real-world environments. Increasingly, as Soya Jung notes in her article, “Foundations Share Approaches to Evaluating Racial Justice Work,” sponsors and consumers of social justice evaluations recognize they may “need to let go of the desire to pin down causality altogether and to focus instead on creating the conditions that make social change more likely to take place.”


How does your organization incorporate equity and cultural competence in its evaluation efforts? How has doing so advanced the mission and vision of the organization as a whole?

My Resolution for 2016: Develop As a Human "Being," Not a Human "Doing"

Like many people, I’ve spent a lot of time recently considering the past year, and thinking about the potential that 2016 holds. I love the sense of possibility that turning the page on a new year provides, yet I also feel a sense of anxiety: Will I meet the goals I set for myself? Will I be as productive as I think I should? At the close of this year, will I feel as accomplished as I had hoped to be at the outset?

So many of us are our own worst critics, and hold ourselves to a never-ending task list. Each day and week, we create tick-boxes for what we need to accomplish, and end up dissatisfied when we cannot check them all off. Our To Do lists become a proxy for our sense of productivity, driving us to focus on the end product rather than the process.

Don’t get me wrong: productivity is important, and outcomes and results matter. But they aren’t the only things that matter, and they don’t matter most in all contexts. The “doing” of our lives cannot—and should not—be the ultimate gauge of our self-worth.

Which is why this year, I’m trying something different. In 2016, I want to focus on developing as a human “being,” not just a human “doing.” What do I mean by that? I’ve decided to make process just as important in my life as results. In both my professional and personal life, I want to strive to sharpen my “being” in five ways:

Being Present: Studies show that for nearly half our waking hours, we are thinking about things other than what we are actually doing. I’ll be the first to admit, I’m often preoccupied with the next thing I need to do, or worried about an upcoming deadline, or distracted by the siren song of social media and devices. But I find when I’m fully engaged in an activity—whether it’s having a conversation with a colleague, meeting with a client, or simply enjoying a walk outdoors—I get so much more out of that time. My connection with others and understanding of myself are made easier and more meaningful when I’m in the moment, appreciating “now” rather than thinking about “next.”

Being Reflective: I’ve found I’m at my best when I carve out time to reflect on my interactions with others and the events of each week. Reflection time allows me to deeply process new information and experiences, build new learning onto the scaffolding of existing knowledge, and connect lessons from the past to anticipated challenges ahead. Rather than just squishing in reflection time where I can manage to fit it—while driving in the car, brushing my teeth, or dropping off to sleep—I’ve begun scheduling regular time for reflection in my calendar, as a way of holding space for a process that I know energizes me.

Being Creative: As a kid, I had lots of creative outlets: I loved to draw, I took dance and piano lessons, and I wrote poems and short stories for fun. As an adult, however, such creative pursuits often fall by the wayside, as our “real” jobs and commitments crowd out time for creativity. But recently, my brain has been shouting out for a way to scratch these creative itches. So I’ll be looking for ways to make that happen, such as sitting down at the piano to awaken dormant music in my fingers, or returning to story and poetry writing to nurture my creative life.

Being Self-Caring: It’s often easier to take care of others’ needs rather than focusing on our own. But I know from experience that when I fail to take care of myself, I’m ultimately less able to care for those around me as well. I’ve found that adequate rest and mental downtime are vital to me, so I’ve begun reshaping my end-of-day routines to make those things priorities. I’m giving myself a lights-out time just as I do for my kids, and I wind down before bed by immersing myself in a book rather than my phone.

Being Grateful: It’s often said that gratitude is a gateway to other emotions, and maybe that’s the reason I’m feeling especially committed to incorporating gratitude in my life this year. Feeling grateful for what I have makes it easier to find joy in the everyday. I’ve begun keeping a gratitude journal—just a sentence or two, a few times a week, to record someone or something that I'm grateful for—and that simple practice is already helping me notice and find pleasure in little things. I’ve also begun sending notes of appreciation to family, friends, and colleagues to thank them for the ways that they have provided support in the past, and the ways they continue to make my life richer and my work more satisfying.

At this time next year, rather than crossing off items from a year-end checklist, I hope instead to see myself as a continuing work in progress: more continuum than endpoint, more journey than destination, and ultimately, a more developed “being” rather than a person successfully “doing.”


What or how do you hope to “be” in 2016? What qualities or mindset do you seek to cultivate in yourself in the coming months?

Aloha United Way Sharpens Its Focus on Evaluation


Aloha United Way (AUW), one of Oahu’s best-known social sector nonprofit organizations, faces a unique challenge: It is both a nonprofit in the traditional sense, using its revenues to further its purpose and mission, and a funder, making grants to its nonprofit partner agencies as it looks to address key community issues through collaboration and collective action. Focusing on three impact areas—Education, Poverty Prevention, and Safety Net Services—AUW advances the work of its nonprofit partners not only through grant-making and fundraising assistance, but also through capacity building and mentorship.

In recent months, AUW has begun to focus on evaluation as a component of its capacity building support of nonprofit partner agencies. These efforts are being led by Ophelia Bitanga-Isreal, Associate, Grants & Foundation, and Marc Gannon, Vice President, Community Impact. As Hawaii nonprofits—like their mainland counterparts—are increasingly asked to demonstrate their effectiveness and social impact through evaluation, AUW has likewise sought to bring greater rigor to assessing its own work, as well as its nonprofit partner agencies’ funded programs. I reached out to Ophelia and Marc via email to learn more about AUW’s efforts on the evaluation front, and the leadership it hopes to provide to community organizations seeking to create meaningful impact for those they serve.


What has been the impetus for AUW’s greater focus on evaluation? Asked another way: Of the many challenges facing Hawaii’s nonprofits, why focus on evaluation, and why now?

We know that the concept of evaluating program effectiveness is not new to the nonprofit sector. There certainly are local nonprofit agencies that have already incorporated some level of evaluation in their program delivery. Child and Family Service, for example, is well advanced in its use of evaluation to measure the effectiveness of its programs. However, we have recognized a couple of trends over recent years.

First, funding organizations – especially federal grantmakers – are more frequently requiring their grantees to use an evaluative process as a requirement of their grants; and they’re requiring more than just outputs, such as the number of clients served. Funders want to know outcomes, the long-term impact that a program provides to its clients.

Second, private donors are becoming more sophisticated and savvy when it comes to their contributions. Today, donating is not simply about being charitable; it’s about investing in programs that demonstrate effectiveness and impact. Donors, more and more, want to know that the money they give makes a difference.

As a result, while a fair amount of our work is about funding and supporting our nonprofit partner agencies in their efforts to provide effective programs, we also have the onus to be good stewards of the investments of our donors. Evaluation then becomes a critical means of determining whether programs are effective, meet the needs of the community, and serve the interests of donors.

Developing the capacity for evaluation is critical, for both our agency as a funder and our nonprofit partner agencies as service providers. This has been especially true in the years following the Great Recession: with limited resources to go around, funders have had to be more prudent about which programs to fund. Evaluation, again, is the means for assessing where resources can do the greatest good for the community.

Beyond that, AUW embraces “evaluative thinking” – that is, we understand that evaluation is a means of identifying how to adjust the work that we do to serve our community better. Evaluative thinking goes beyond collecting data; it’s a mindset – maybe even a work ethic – of always striving to improve and increase the lasting, positive impact on our community. More than helping our nonprofit partner agencies develop a methodology of evaluation, we want to support their transition to a culture of evaluative thinking: an understanding that evaluation shouldn’t be just a function of a grant award, but a way of doing things better.

What positions AUW to be a community leader on the issue of evaluation, i.e., to initiate these conversations on evaluation within the local nonprofit sector?

We’re uniquely positioned, both as a funder and as a convening organization, to be able to bring together a large network of nonprofit organizations to start a collective movement toward evaluation.  We’re also able to bring resources to the table, such as training workshops and technical assistance we’ve provided to our grantees. We’re not simply imposing evaluation upon them, but helping them build their capacity to do it. And, because we’re strengthening our own internal evaluation process, we think it sends a signal that we’re committed to this important endeavor and we’re bringing our nonprofit partners along with us.

We were recently asked if it’s the role of funders to lead the movement toward increased evaluation. We think it’s not just our role, but our responsibility to our donors, our nonprofit partner agencies, and our community. We can’t talk about improving the conditions of our community without evaluating our work for impact.


Many small-to-medium sized nonprofit organizations indicate they simply don’t have the capacity to deal with evaluation—they feel they are barely keeping their heads above water in their program work. What support or guidance does AUW offer to organizations in this position?

Aloha United Way is mindful of the challenges that many smaller nonprofit organizations face in providing services. Evaluative activities are often categorized as low priority, much like filing paperwork. The truth is that evaluation is very much a part of the program work being performed, maybe even equal in value to the work itself. Another way to look at this is opportunity cost. What am I getting for $1 invested in Program A versus $1 invested in Program B?

Agencies can begin the movement toward evaluation with the resources they already have and then build their capacity and infrastructure from there. As part of the value we bring to our nonprofit partner agencies, we provide technical assistance on measuring and evaluating programs. We’re excited that we will soon be an intermediary sponsor of AmeriCorps VISTA members, who are individuals recruited to work at local nonprofit organizations and public agencies to help build capacity. We’ll be able to deploy VISTAs to help agencies that want to develop their evaluative activities but need resources to get started.


Donors to AUW (or other nonprofits) may feel that the focus on evaluation is a red herring, that money is better spent funding direct service programs. What would your response be to those donors?

When we first started our internal discussions about evaluation, we acknowledged that donors may not be as interested in funding evaluation efforts as they would direct service activities. Donors, of course, want to maximize an agency’s ability to do important work. But just as we believe that it’s our responsibility to support evaluation among our partners, we also believe that it’s our responsibility to educate our donors on the value of evaluation. This goes hand-in-hand with being good stewards of donors’ contributions. We think that part of telling the story of the work being done in the community is showing the effectiveness of that work. That can only be accomplished through evaluation, and we believe that donors will come to look at evaluation as a way of ensuring that their dollars are supporting important and useful work in the community.

What are AUW’s near-term and long-term goals regarding evaluation, both internally and in its grant making work?

We are very excited that we’ve embarked on this transformative journey toward evaluation. Internally, we’ve already begun to take deliberate steps toward incorporating evaluative thinking in all aspects of our work, from our administrative processes to our grant making efforts. Long term, we know that this will result in a work environment that challenges each of us to think about the work we do and to constantly strive to refine and improve that work. We also know that a more robust evaluative process will help guide our funding decisions to invest in programs that will have lasting impact. In the end, we’ll be able to bring more resources into the community to support those programs.

With regard to our grant making, potential applicants will notice a marked increase in our focus on evaluation. We recently released a request for proposals (RFP) that required our applicants to demonstrate their commitment to the evaluative process, or their willingness to develop an evaluation system. We supported this requirement by offering resources through an AmeriCorps VISTA funded through the grant award. We intend to continue incorporating evaluation as a requirement for funding as a means of inculcating evaluative thinking in our grantees; more and more, as this becomes standard practice, we expect eventually to help create a culture of evaluation among all our nonprofit partner agencies.


What does AUW feel that evaluation makes possible for its partner agencies? What does AUW believe embracing evaluation will make possible for the clients that those agencies serve?

This question is aptly worded: evaluation is about possibilities, not about imposing an onerous process. It will allow our partner agencies to look at their work in a different way, and to see where they can make adjustments or course corrections to better serve the community in the way they envisioned. Additionally, it will help them demonstrate to other funders both the effectiveness of their work and their ability to monitor what they are doing.

More broadly, we also believe that, as all our nonprofit partner agencies more routinely incorporate evaluation into their work, the measurements they’ll be collecting will reveal where opportunities and connections can be made across agencies. It’s exciting to consider that, eventually, we’ll be working much more collaboratively with each other, maximizing the limited resources available to our nonprofit sector.

Of course, this will translate to more effective approaches to supporting, not just the clients at each individual agency, but our community as a whole. Evaluation allows us to improve services and to have genuine impact. That’s good for everyone. 


What would it take for your organization to initiate meaningful conversations about evaluation? What would evaluation make possible for your organization, and for those you serve?

Evaluation at Its Highest Potential

We all know there are conditions in which people do an adequate job, and conditions in which they thrive. Maybe a colleague of yours is in a role where she does a perfectly fine, serviceable job…but you know that when she’s working for a cause close to her heart, she really shines. Or perhaps you know a young person who is normally just an OK student…but when he’s challenged in the classroom and truly engaged in learning, he’s at his highest potential.

What if we thought of evaluation this way? If Evaluation were a person, in what conditions would Evaluation simply be doing its job, and in what conditions would Evaluation be at its best?

These are the thoughts and questions I posed at a recent community forum of executives and staff members from Hawaii nonprofits, philanthropies, and public sector agencies, convened by Aloha United Way. Evaluation, in my mind, is a critical component in creating meaningful community solutions. But just like people, Evaluation has conditions in which it is merely serviceable, and conditions in which it is maximizing its potential:

Evaluation Just Doing Its Job    vs    Evaluation at Its Best
Focuses on Accountability        vs    Focuses on Learning
Captures the Past                vs    Used in Service to the Future
Experienced                      vs    Shared


Evaluation that focuses on accountability vs. Evaluation that focuses on learning: Evaluation that is primarily concerned with accountability feels like bean-counting. It seeks to answer questions like, “Did you spend the money as you said you would?” or “Did you serve the number of clients you projected?” And as one attendee at the forum noted, a focus on accountability sends a message, fundamentally, about a lack of trust. Evaluation that focuses on learning asks altogether different questions, such as: “What’s working in your programs? What’s not working? How can we take what we’ve found, and improve going forward?” Trust is baked into learning-focused evaluation, so that even failures are seen as opportunities to gain new insights.

Evaluation that captures the past vs. Evaluation that is used in service to the future: Evaluation that is concerned primarily with capturing past performance feels like rote data collection. Past results are dutifully recorded, logged, and tucked away. But Evaluation that is, to borrow language from my colleague Hildy Gottlieb, used in service to the future, looks to identify paths to improvement. In this context, Evaluation is a tool that can actively inform strategies, helping us to shift course towards our goals in a thoughtful and responsive way.

Evaluation that is experienced vs. Evaluation that is shared: Evaluation that is simply experienced often involves objectification of some form—either the evaluation is “done to you” by evaluators, or “done by you” to clients or staff members—creating a dynamic of judgment. But Evaluation that is shared can create an altogether different dynamic in two ways. First, it can be part of a shared process in which, rather than being seen as people to be judged, participants are offered a genuine place at the table to identify common areas for learning as part of a collaborative effort. Second, Evaluation can be shared as a tangible product of that process, in which the knowledge gained moves beyond a single organization’s walls and informs the thinking of like-minded organizations in a communal way. In both cases, Evaluation that is shared manifests an abundance mindset, something the social sector could certainly use more of.


What comparisons would you add to the table above? In what ways has your organization been able to maximize Evaluation's potential?