Lies, Damn Lies, and Election Season Statistics: What Nonprofits Can Learn from the Use of Numbers in Politics

One of the disheartening casualties of this election season—in addition to run-of-the-mill civil discourse—has been the judicious use of data and statistics. Candidates of all political stripes wield figures (but not necessarily facts) to advocate for their positions and policies. And they do so because they know how most of us operate: We defer to the authority of numbers. Figures, data, and statistics seem to carry an air of irrefutability. When candidates cite dollar figures and percentages during their stump speeches and debates, many of us unconsciously think, “Well, someone’s calculated those amounts… you can’t really argue with data.”

But, as it turns out, you can argue with data—loudly and vigorously, as we have seen over the past few months! And because nonprofits increasingly intersect with the political world through advocacy and relationship building, organizations can learn a great deal from candidates’ use—or misuse, as the case may be—of data and statistics in the political realm. Here are a few key lessons nonprofits can take away from election season statistics:

1. Just because certain figures or percentages support our position doesn’t make them true. We all bear the responsibility of being “reasonable skeptics” when we encounter information we believe is share-worthy. Sure, it’s great to have data to support our organization’s positions, but has that information been vetted? Have we checked that the source is reliable and doesn’t have an overt agenda? Our national math anxiety can cause us to give numerical information an unchecked sense of power. Knowing how to interpret and assess statistics can equip us to be better consumers of data, which in turn makes us better-informed advocates for our causes.

2. Assume you’ll be fact-checked. News and social media outlets have practically made a cottage industry out of fact checking, whether of national-level candidates or local politicians. Needless to say, avoiding a pants-on-fire rating regarding your nonprofit’s assertions is critical to maintaining the good faith of supporters and the legitimacy of your organization. Operate under the assumption that information you distribute will be checked for its truthfulness.

3. Be ready to cite your sources. Sometimes your organization’s communications will require formal citations. Source information is a given for infographics, research studies, or policy briefs. But in other forms of outreach, such as e-mail blasts, social media posts, public service announcements, or fundraising events, time and space often don’t allow for nitty-gritty citations. It can be tempting to think it’s not important to track sources in these circumstances. Be prepared to cite them anyway. Your organization is accountable for the information it disseminates in whatever format, and you’ll want to be ready for inquiries, whether from skeptics or curious supporters.

4. The plural of anecdote is not data… Those of us in the social sector love using stories and anecdotes to make a point. And this makes sense, because we are hardwired to relate to human stories. Collected anecdotes can certainly help illustrate a theme or provide qualitative depth. But we have to be careful about extrapolating individual stories to make sweeping generalizations about others, particularly when those generalizations involve group identity or social status. Rigorous studies involving significant numbers of people are much better fodder for identifying trends and patterns. As a fellow evaluation consultant rightly notes, the plural of anecdote is not data, it’s…anecdotes.

5. …However, an anecdote, when combined with reliable data, can be a powerful force for good. Stories plus data represent a 1+1=3 scenario: together, they are more effective than either is alone. Stories are the hook, the component that resonates with our emotions and shared sense of humanity. Data appeal to our analytic side, the part of us that wants to understand the scale, scope, and urgency of an issue. Together, they deliver a one-two punch that helps bridge the gap between sympathy and action. Although he’s not a politician, New York Times columnist Nicholas Kristof often writes on issues of political significance, and he is masterful at combining stories with data, as demonstrated in this piece on strategies for breaking the poverty cycle.

How has your organization used data and statistics to strengthen its work? How have you combined stories and numbers to convey your organization’s impact?

Hawaii Appleseed: Social Justice Evaluation in Practice

In my last post, I posed a question: If you’re a social justice organization, how do you measure your impact? Hawaii Appleseed Center for Law and Economic Justice (Hawaii Appleseed), a nonprofit law firm dedicated to advocacy on behalf of Hawaii’s low-income communities, offers firsthand insights into the inherent challenges as well as the learning opportunities evaluation can provide to a social justice organization. A small but mighty force for change, Hawaii Appleseed’s efforts encompass: research on housing, health, education, immigrant rights, and economic justice; legislative and administrative advocacy to ensure that laws and policies impacting those in poverty are legal, fair, and effective; community education and outreach efforts; partnership with other community-minded groups in grassroots coalitions; and, when needed, litigation to protect the rights of low-income individuals and families.

Via email, I recently asked Gavin Thornton, co-Executive Director of Hawaii Appleseed, to describe the ways Hawaii Appleseed incorporates evaluation into its organizational work, and what evaluation makes possible for the organization and for those it serves. As he thoughtfully details below, evaluation has helped Hawaii Appleseed identify systems that perpetuate poverty, think strategically about where to invest its staff resources, ensure the organization’s day-to-day efforts are on the right track, and contribute to and build on existing knowledge about how to effectively increase the opportunities and well-being of those in poverty.

How has Hawaii Appleseed approached evaluation? What does evaluation make possible for a social justice organization like Hawaii Appleseed?

Hawaii Appleseed’s mission is to create a more socially just Hawaii, where everyone has genuine opportunities to achieve economic security and fulfill their potential. We change systems that perpetuate inequality and injustice through policy development, legislative advocacy, coalition building, and litigation. This is an ambitious mission, especially for a small organization with limited resources (currently with only three permanent staff and a budget of around $300,000).

To fulfill our mission, we search for projects that will maximize our impact and return on investment—projects that will get us the most bang for our buck. Much of the work that we do is based on what has worked well elsewhere—evidence-based practices or promising practices that have been demonstrated as effective means of improving opportunities for self-sufficiency. To evaluate the impact of our projects, we collect what data we can given our limited resources, but we often must rely on conclusions drawn from research done by others, as I describe in more detail below.

Evaluation is critical to our work because it allows us to strategically advocate for the changes necessary to achieve our mission. We need to know if what we are doing is working, and if it is not, we need to modify our approach or focus our energies elsewhere. Indeed, this really gets to the core of the organization. Our focus is on systems that are keeping people in poverty—systems that are broken, but continue to stumble along because no one has made the effort to step back, recognize the deficiencies, and correct them. After identifying systemic problems, we figure out how to change the systems so they create opportunity instead of stifling it. For the most part, the policy analysis we conduct and the reports we write are evaluations of broken-down systems. Since these evaluations are at the core of our work, we recognize the importance of self-evaluation, and of trying to ensure that we are making the most of what we have and that our work is accomplishing its intended purpose.

Because the legal or policy successes Hawaii Appleseed pursues can take months or years to occur, how does the organization know—on a day-to-day basis—whether its work is heading in the right direction?

Perhaps surprisingly, we can often see immediate positive change resulting from our work. One example is a case we filed on behalf of the tenants at the Mayor Wright Homes public housing project, where over 360 households had endured years of unsanitary and unsafe living conditions, including a lack of hot water, vermin infestation, and dangerous criminal activity, due to lack of upkeep of the property and inadequate security. Even while we were still working up the case prior to its filing, the attention drawn to the problem resulted in the commencement of a significant rehabilitation of the project. The hot water system was fixed shortly after the case was filed, and by the time the suit was settled with a commitment to complete necessary repairs at the property, over $4 million in repairs had already been made, dramatically improving the condition of the project.

In situations like the Mayor Wright case, we do not even need to win the case to achieve a successful outcome—while a loss in court on Mayor Wright would have limited the impact of our work, it would not have erased the benefits that had already accrued to the tenants (unless, of course, it was an early loss, before the benefits accrued). However, some of our work is more of an all-or-nothing proposition. For example, for the most part, legislative advocacy does not result in significant change unless the bill gets passed. Yet even in the legislative context we can still evaluate our progress on a day-to-day basis. We look at the number of partners we have recruited to our coalitions and the extent of their engagement; we count the number of legislators supporting our bills; we look at the media coverage of the issue—the number and quality of the stories. It is not necessarily a scientific process, but it does provide a general sense of whether we are getting traction on an issue. Ultimately, though, none of this matters if the bill does not pass. Yet these indicators are critical in deciding whether it makes sense to continue pursuing an issue year after year.

What is a particular challenge Hawaii Appleseed faces in its evaluation efforts, and how has the organization faced that challenge?

One recurring problem in evaluating our work is attribution—there is often a degree of uncertainty regarding whether it was our actions or something else that created the change we sought. Success in the legislature requires a group effort, so it is hard to say who or what was responsible for the result—usually all, or nearly all, of the participants had something to do with the outcome, along with external factors. While it would be nice to quantify the impact of our work in some way—we expended x dollars and y hours, which resulted in z benefit—it is not practical, and we need to be content with less precise assessments of our work. For example, in advocating for Accessory Dwelling Units to be permitted on Oahu, we developed a policy brief to start a discussion that we believe was not happening prior to our work on the issue (though at least one academic researcher had been looking at it). Then we spearheaded an advocacy effort to get a bill passed through the city council. Our work was cited in the mayor’s affordable housing plan for Honolulu, and we helped craft the bill that was ultimately passed. Based on this, we felt it was reasonable to conclude that the change was a direct result of our efforts—that it would not have occurred without our work—but it is still difficult to say definitively.

This difficulty in attribution is similar for our litigation efforts. Frequently, the response to the cases we have brought has been, “We’ve been looking at this issue and working on it; this is something we were already in the process of fixing before the case was even filed.” I suspect that this response often reflects the genuine beliefs of the person or agency making the claim. However, since nearly all of our cases relate to issues that had persisted openly for years, and on which no action was taken until we began work on the case, the claims of “we were going to do this anyway” are questionable. For example, in the Mayor Wright case, there had been multiple stories in the newspaper about the lack of hot water during the seven years preceding the filing of the case, and yet the problem was never adequately addressed. In a case we are currently working on regarding the inadequacy of payments made by the state for the care of children in foster care, the payments—which federal law requires to be regularly updated to account for inflation—were not increased for nearly 25 years, even after five years of advocacy in the legislature by foster parents and advocates seeking an increase. The payments were increased within months of the filing of our case, but the state claimed that it was something they were going to do anyway. In circumstances like these, where a problem has persisted for years without action, but is addressed (or at least partially addressed) shortly after we bring a case on the issue, it seems very likely that our work provided the critical push to create the necessary action, but we can never be 100% certain what would have happened had we not taken action.

How does Hawaii Appleseed demonstrate community-level impact that may be difficult to quantify? How does the organization assess its progress, for example, in advancing self-sufficiency or economic security for low-income families?

This is incredibly difficult, especially for a small organization like ours that does not have sufficient resources for the type of evaluation necessary to measure community impact. Instead, we are often forced to draw conclusions based on research done elsewhere. The Mayor Wright case provides a good example of this. We know that we obtained over $4 million in repairs and improvements to the project, but obtaining improvements to public housing projects is not our mission. What we really care about is creating an environment that will allow people to become self-sufficient. What difference does it make that a child is growing up in a place with working hot water, no holes in the wall, no bedbugs, and relative freedom from criminal activity, versus a place that does not have those things? We do not know. However, we do know that there is a strong correlation between healthy, safe housing and positive health and economic outcomes. If we had the resources, we might be able to quantify the impact of our work in those terms—though even with significant resources it would still be very hard to do. For now, we are left with the more modest claim that “we know that healthy and safe living environments are important, and we obtained an improvement” (an improvement we have been able to assess not only by dollars spent on repairs, but also by surveys of the conditions at the project).

For many of our projects, we can get a much clearer picture of the ultimate results, but we are still required to draw conclusions based on studies done elsewhere. For example, we have a project that seeks to increase participation in the school breakfast program. Participation rates in the school breakfast program (i.e., the percentage of children who eat school breakfast) are not important to us in and of themselves. However, there is a strong body of research showing that low-income children who eat school breakfast benefit in clear, quantifiable ways, such as improved academic performance and better health—outcomes that are correlated with self-sufficiency later in life. We gather whatever data we can: participation rates, visits to the school nurse, absenteeism, perceptions of classroom behavior, and so on. Then, based on research done elsewhere, we can get a rough idea of the increased opportunities for self-sufficiency that the children are likely receiving as a result of our efforts.

How does Hawaii Appleseed engage the clients it serves in its advocacy and research efforts? What has it learned from doing so?

Because of our limited size and our area of expertise, we often must rely on other partner organizations to engage with those that we serve. Nearly all of the issues that we work on come from the community through other community organizations that contact us seeking help to resolve a problem. For example, we recently worked on a case that resulted in the drivers’ exam being translated into multiple languages. The issue was brought to us by a faith-based community organizing group whose members/constituents had identified the inability to obtain a license due to language barriers as a significant issue that was preventing people from getting to work and supporting their families. The group had already engaged the community in a variety of advocacy actions prior to our involvement, and continued to work directly with the community while serving as a liaison between the community and our organization during the course of the case.

That type of structure is workable, but it requires good communication, strong relationships, and everyone doing their part to make it work well. There is another organization in the national Appleseed network—Nebraska Appleseed—that has community organizers on staff, as well as attorneys and policy analysts. That allows Nebraska Appleseed to have direct relationships with the community it is serving, while at the same time providing policy development and advocacy expertise. That model is attractive because it makes communication easier and provides a lot more control over the work, but it requires significant resources. There have been times when we have been able to serve both roles, but it is very difficult to do so well given our size. As such, we recognize the importance of continuing to strengthen relationships with other organizations that have more direct contact with those we serve, and also of building capacity so that we can have more of that direct contact ourselves.


How has your organization faced the challenges of measuring social justice impact? What lessons has your organization learned in evaluating its efforts?


Measuring the Intangible: Social Justice Evaluation

If you’re a social justice organization, how do you measure your impact? How do you assess advances in racial equity, gender parity, or equal access to educational or economic opportunity? How do you evaluate progress toward the unmeasurable?

This question of how one measures the intangible sits squarely at the intersection of the changes social justice organizations are trying to effect, and the growing demand for impact assessment. But the answers don’t fit tidily into boxes, just as societal problems are not easy to disentangle from the systems that generate them. Nonetheless, heightened awareness about the need for and significance of equity-focused evaluation has given rise recently to new strategies and resources, particularly from the philanthropic sector. These newer approaches provide welcome tools in aligning organizations’ day-to-day work and their evaluation methods. 

The evaluation community has similarly recognized the need for equity and cultural competence to be “baked into” evaluation from the outset, as the American Evaluation Association’s (AEA) Statement on Cultural Competence demonstrates. Jara Dean-Coffey, Jill Casey, and Leon Caldwell explain in their article, “Raising the Bar—Integrating Cultural Competence and Equity: Equitable Evaluation”:

Whether implicit or explicit, social justice and human rights are part of the mission of many philanthropies. Evaluation produced, sponsored, or consumed by these philanthropies that doesn’t pay attention to the imperatives of cultural competencies may be inconsistent with their missions…Because the act of evaluation is itself part of the intervention, an equity lens is paramount when evaluating a program whose goals touch on issues of equity or inclusion.

Here are four actions that funders and social justice organizations can take as they seek to include that equity lens in their evaluation efforts:

Understand Why Equity-Focused Evaluation Matters: My M&E, a platform managed by the United Nations International Children’s Emergency Fund (UNICEF) and the International Organization for Cooperation in Evaluation (IOCE), offers monitoring and evaluation resources, including material on the purpose, need for, and importance of equity-focused evaluation. As part of its overview on evaluation and good practices, UNICEF offers a handbook, “How to design and manage Equity-focused evaluations,” which provides a rationale for such evaluations, strategies for managing equity-focused evaluations, and information on evaluation design, framework identification, and real-world challenges. And the aforementioned article by Dean-Coffey and her colleagues presents an equitable evaluation capacity-building (EECB) approach that can help organizations normalize and institutionalize equity-focused evaluation in a manner consistent with social justice goals.

Challenge Assumptions and Intentions Early and Often: Evaluators bring assumptions into their work—often unconsciously, and often with the best of intentions. It is critical that we explicitly face these assumptions and biases, question them directly, and consider how they will affect evaluation efforts, the data collected, and the audiences with whom results will be shared. Fabriders offers a list of Questions to Ask Frequently (QAFs) When Working with Data and Marginalised Communities that helps evaluators understand and respect the relationship that will develop between themselves and the communities they seek to assess. Racial Equity Tools’ Getting Ready for Evaluation provides resources for groups preparing for the evaluation process, including Tip Sheets for considering the why, who, and how of assessing marginalized communities.

Consider Embracing (Rather than Avoiding) Intangibles: Some evaluators are directly embracing intangibles as part of their evaluation process. The Inter-American Foundation, for example, recognizes that the grassroots development work it funds supports impact at various levels, and that both intangible and tangible results are meaningful. Its Grassroots Development Framework, which is the foundation of its evaluation approach, values both types of returns and acknowledges that each can be evidenced at the individual/family, organization, or societal level. By explicitly assessing the intangible benefits of its grassroots development efforts, the Foundation can better gauge the longer-term changes it seeks to create through its grant making programs.

Create Conditions for Success: There is a strong urge to apply laboratory-like, case-control standards to evaluation of social and policy interventions. But the truth is, evaluations in the real world seldom identify cause-and-effect pathways with absolute clarity. That fact doesn’t undermine the value of evaluations, however—it merely points to the enduring merit of identifying “contribution, not attribution,” as Grantmakers for Effective Organizations (GEO) and the Council on Foundations put it. Phrased another way, attempting to show cause and effect definitively may be an exercise in futility within complicated, poorly controlled real-world environments. Increasingly, as Soya Jung notes in her article, “Foundations Share Approaches to Evaluating Racial Justice Work,” sponsors and consumers of social justice evaluations recognize they may “need to let go of the desire to pin down causality altogether and to focus instead on creating the conditions that make social change more likely to take place.”

How does your organization incorporate equity and cultural competence in its evaluation efforts? How has doing so advanced the mission and vision of the organization as a whole?

My Resolution for 2016: Develop As a Human "Being," Not a Human "Doing"

Like many people, I’ve spent a lot of time recently considering the past year, and thinking about the potential that 2016 holds. I love the sense of possibility that turning the page on a new year provides, yet I also feel a sense of anxiety: Will I meet the goals I set for myself? Will I be as productive as I think I should be? At the close of this year, will I feel as accomplished as I had hoped to be at the outset?

So many of us are our own worst critics, and hold ourselves to a never-ending task list. Each day and week, we create tick-boxes for what we need to accomplish, and end up dissatisfied when we cannot check them all off. Our To Do lists become a proxy for our sense of productivity, driving us to focus on the end product rather than the process.

Don’t get me wrong: productivity is important, and outcomes and results matter. But they aren’t the only things that matter, and they don’t matter most in all contexts. The “doing” of our lives cannot—and should not—be the ultimate gauge of our self-worth.

Which is why this year, I’m trying something different. In 2016, I want to focus on developing as a human “being,” not just a human “doing.” What do I mean by that? I’ve decided to make process just as important in my life as results. In both my professional and personal life, I want to strive to sharpen my “being” in five ways:

Being Present: Studies show that for nearly half our waking hours, we are thinking about things other than what we are actually doing. I’ll be the first to admit, I’m often preoccupied with the next thing I need to do, or worried about an upcoming deadline, or distracted by the siren song of social media and devices. But I find when I’m fully engaged in an activity—whether it’s having a conversation with a colleague, meeting with a client, or simply enjoying a walk outdoors—I get so much more out of that time. My connection with others and understanding of myself are made easier and more meaningful when I’m in the moment, appreciating “now” rather than thinking about “next.”

Being Reflective: I’ve found I’m at my best when I carve out time to reflect on my interactions with others and the events of each week. Reflection time allows me to deeply process new information and experiences, build new learning onto the scaffolding of existing knowledge, and connect lessons from the past to anticipated challenges ahead. Rather than just squishing in reflection time where I can manage to fit it—while driving in the car, brushing my teeth, or dropping off to sleep—I’ve begun scheduling regular time for reflection in my calendar, as a way of holding space for a process that I know energizes me.

Being Creative: As a kid, I had lots of creative outlets: I loved to draw, I took dance and piano lessons, and I wrote poems and short stories for fun. As an adult, however, such creative pursuits often fall by the wayside, as our “real” jobs and commitments crowd out time for creativity. But recently, my brain has been shouting out for a way to scratch these creative itches. So I’ll be looking for ways to make that happen, such as sitting down at the piano to awaken dormant music in my fingers, or returning to story and poetry writing to nurture my creative life.

Being Self-Caring: It’s often easier to take care of others’ needs than to focus on our own. But I know from experience that when I fail to take care of myself, I’m ultimately less able to care for those around me as well. I’ve found that adequate rest and mental downtime are vital to me, so I’ve begun reshaping my end-of-day routines to make those things priorities. I’m giving myself a lights-out time just as I do for my kids, and I wind down before bed by immersing myself in a book rather than my phone.

Being Grateful: It’s often said that gratitude is a gateway to other emotions, and maybe that’s the reason I’m feeling especially committed to incorporating gratitude in my life this year. Feeling grateful for what I have makes it easier to find joy in the everyday. I’ve begun keeping a gratitude journal—just a sentence or two, a few times a week, to record someone or something that I’m grateful for—and that simple practice is already helping me notice and find pleasure in little things. I’ve also begun sending notes of appreciation to family, friends, and colleagues to thank them for the ways that they have provided support in the past, and the ways they continue to make my life richer and my work more satisfying.

At this time next year, rather than crossing off items from a year-end checklist, I hope instead to see myself as a continuing work in progress: more continuum than endpoint, more journey than destination, and ultimately, a more developed “being” rather than a person successfully “doing.”

What or how do you hope to “be” in 2016? What qualities or mindset do you seek to cultivate in yourself in the coming months?

Aloha United Way Sharpens Its Focus on Evaluation

Aloha United Way (AUW), one of Oahu’s best-known social sector nonprofit organizations, faces a unique challenge: It is both a nonprofit in the traditional sense, using its revenues to further its purpose or mission, and a funder, making grants to its nonprofit partner agencies as it looks to address key community issues through collaboration and collective action. Focusing on three impact areas—Education, Poverty Prevention, and Safety Net Services—AUW advances the work of its nonprofit partners not only through grant-making and fundraising assistance, but also through capacity building and mentorship.

In recent months, AUW has begun to focus on evaluation as a component of its capacity building support of nonprofit partner agencies. These efforts are being led by Ophelia Bitanga-Isreal, Associate, Grants & Foundation, and Marc Gannon, Vice President, Community Impact. As Hawaii nonprofits—like their mainland counterparts—are increasingly asked to demonstrate their effectiveness and social impact through evaluation, AUW has likewise sought to bring greater rigor to assessing its own work, as well as its nonprofit partner agencies’ funded programs. I reached out to Ophelia and Marc via email to learn more about AUW’s efforts on the evaluation front, and the leadership it hopes to provide to community organizations seeking to create meaningful impact for those they serve.

What has been the impetus for AUW’s greater focus on evaluation? Asked another way: Of the many challenges facing Hawaii’s nonprofits, why focus on evaluation, and why now?

We know that the concept of evaluating program effectiveness is not new to the nonprofit sector. There certainly are local nonprofit agencies that have already incorporated some level of evaluation in their program delivery. Child and Family Service, for example, is well advanced in its use of evaluation to measure the effectiveness of its programs. However, we have recognized a couple of trends over recent years.

First, funding organizations – especially federal grantmakers – are increasingly making an evaluative process a condition of their grants, and they’re asking for more than just outputs, such as the number of clients served. Funders want to know outcomes, the long-term impact that a program provides to its clients.

Second, private donors are becoming more sophisticated and savvy when it comes to their contributions. Today, donating is not simply about being charitable; it’s about investing in programs that demonstrate effectiveness and impact. Donors, more and more, want to know that the money they give makes a difference.

As a result, while a fair amount of our work is about funding and supporting our nonprofit partner agencies in their efforts to provide effective programs, we also bear the onus of being good stewards of our donors’ investments. Evaluation then becomes a critical means of determining whether programs are effective and are meeting both the needs of the community and the interests of donors.

Developing the capacity for evaluation is critical, both for our agency as a funder and for our nonprofit partner agencies as service providers. This has been especially true in the years following the Great Recession: with limited resources to go around, funders have had to be more prudent about which programs to fund. Evaluation, again, is the means for assessing where resources can do the greatest good for the community.

Beyond that, AUW embraces “evaluative thinking” – that is, we understand that evaluation is a means of identifying how to adjust the work that we do to serve our community better. Evaluative thinking goes beyond collecting data; it’s a mindset – maybe even a work ethic – of always striving to improve and increase the lasting, positive impact on our community. More than helping our nonprofit partner agencies develop a methodology of evaluation, we want to support their transition to a culture of evaluative thinking: an understanding that evaluation shouldn’t be just a function of a grant award, but a way of doing things better.

What positions AUW to be a community leader on the issue of evaluation, i.e., to initiate these conversations on evaluation within the local nonprofit sector?

We’re uniquely positioned, both as a funder and as a convening organization, to be able to bring together a large network of nonprofit organizations to start a collective movement toward evaluation.  We’re also able to bring resources to the table, such as training workshops and technical assistance we’ve provided to our grantees. We’re not simply imposing evaluation upon them, but helping them build their capacity to do it. And, because we’re strengthening our own internal evaluation process, we think it sends a signal that we’re committed to this important endeavor and we’re bringing our nonprofit partners along with us.

We were recently asked if it’s the role of funders to lead the movement toward increased evaluation. We think it’s not just our role, but our responsibility to our donors, our nonprofit partner agencies, and our community. We can’t talk about improving the conditions of our community without evaluating our work for impact.

Many small-to-medium sized nonprofit organizations indicate they simply don’t have the capacity to deal with evaluation—they feel they are barely keeping their heads above water in their program work. What support or guidance does AUW offer to organizations in this position?

Aloha United Way is mindful of the challenges that many smaller nonprofit organizations face in providing services. Evaluative activities are often categorized as low priority, much like filing paperwork. The truth is that evaluation is very much a part of the program work being performed, maybe even equal in value to the work itself. Another way to look at this is opportunity cost. What am I getting for $1 invested in Program A versus $1 invested in Program B?
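
To make that concrete (the figures here are purely hypothetical): if Program A spends $50,000 to serve 500 families while Program B spends the same $50,000 to serve 125, each dollar invested in Program A reaches four times as many families. Reach is only part of the picture, of course; the deeper question is whether those families are measurably better off. But even simple comparisons like this can help an agency see where its resources do the most good.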

Agencies can begin the movement toward evaluation with the resources they already have and then build their capacity and infrastructure from there. As part of the value we bring to our nonprofit partner agencies, we provide technical assistance on measuring and evaluating programs. We’re excited that we will soon be an intermediary sponsor of AmeriCorps VISTA members, who are individuals recruited to work at local nonprofit organizations and public agencies to help build capacity. We’ll be able to deploy VISTAs to help agencies that want to develop their evaluative activities but need resources to get started.

Donors to AUW (or other nonprofits) may feel that the focus on evaluation is a red herring, that money is better spent funding direct service programs. What would your response be to those donors?

When we first started our internal discussions about evaluation, we acknowledged that donors may not be as interested in funding evaluation efforts as they would direct service activities. Donors, of course, want to maximize an agency’s ability to do important work. But just as we believe that it’s our responsibility to support evaluation among our partners, we also believe that it’s our responsibility to educate our donors on the value of evaluation. This goes hand-in-hand with being good stewards of donors’ contributions. We think that part of telling the story of the work being done in the community is showing the effectiveness of that work. That can only be accomplished through evaluation, and we believe that donors will come to look at evaluation as a way of ensuring that their dollars are supporting important and useful work in the community.

What are AUW’s near-term and long-term goals regarding evaluation, both internally and in its grant making work?

We are very excited that we’ve embarked on this transformative journey toward evaluation. Internally, we’ve already begun to take deliberate steps toward incorporating evaluative thinking in all aspects of our work, from our administrative processes to our grant making efforts. Long term, we know that this will result in a work environment that challenges each of us to think about the work we do and to constantly strive to refine and improve that work. We also know that a more robust evaluative process will help guide our funding decisions to invest in programs that will have lasting impact. In the end, we’ll be able to bring more resources into the community to support those programs.

With regard to our grant making, potential applicants will notice a marked increase in our focus on evaluation. We recently released a request for proposals (RFP) that required our applicants to demonstrate their commitment to the evaluative process, or their willingness to develop an evaluation system. We supported this requirement by offering resources, including an AmeriCorps VISTA member funded through the grant award. We intend to continue incorporating evaluation as a requirement for funding as a means of inculcating evaluative thinking in our grantees; as this becomes standard practice, we expect it to help create a culture of evaluation among all our nonprofit partner agencies.

What does AUW feel that evaluation makes possible for its partner agencies? What does AUW believe embracing evaluation will make possible for the clients that those agencies serve?

This question is aptly worded – evaluation is about possibilities and not about imposing an onerous process. It will allow our partner agencies to look at their work in a different way; to see where they can make adjustments or course corrections to better serve the community in the way they envisioned. Additionally, it will help them demonstrate to other funders the effectiveness of their work and their ability to monitor what they are doing.

More broadly, we also believe that, as all our nonprofit partner agencies more routinely incorporate evaluation into their work, the measurements they’ll be collecting will reveal where opportunities and connections can be made across agencies. It’s exciting to consider that, eventually, we’ll be working much more collaboratively with each other, maximizing the limited resources available to our nonprofit sector.

Of course, this will translate to more effective approaches to supporting, not just the clients at each individual agency, but our community as a whole. Evaluation allows us to improve services and to have genuine impact. That’s good for everyone. 

What would it take for your organization to initiate meaningful conversations about evaluation? What would evaluation make possible for your organization, and for those you serve?

Evaluation at Its Highest Potential

We all know there are conditions when people do an adequate job, and conditions where they thrive. Maybe a colleague of yours is in a role where she does a perfectly fine, serviceable job…but you know that when she’s working for a cause close to her heart, she really shines. Or perhaps you know a young person who is normally just an OK student…but when he’s challenged in the classroom and truly engaged in learning, he’s at his highest potential.

What if we thought of evaluation this way? If Evaluation were a person, in what conditions would Evaluation simply be doing its job, and in what conditions would Evaluation be at its best?

These are the thoughts and questions I posed at a recent community forum of executives and staff members from Hawaii nonprofits, philanthropies, and public sector agencies, convened by Aloha United Way. Evaluation, in my mind, is a critical component in creating meaningful community solutions. But just like people, Evaluation has conditions in which it is merely serviceable, and conditions in which it is maximizing its potential:

Evaluation Just Doing Its Job      vs.      Evaluation at Its Best
Focuses on Accountability                   Focuses on Learning
Captures the Past                           Used in Service to the Future
Experienced                                 Shared

Evaluation that focuses on accountability vs. Evaluation that focuses on learning: Evaluation that is primarily concerned with accountability feels like bean-counting. It seeks to answer questions like, “Did you spend the money as you said you would?” or “Did you serve the number of clients you projected?” And as one attendee at the forum noted, a focus on accountability sends a message, fundamentally, about a lack of trust. Evaluation that focuses on learning asks altogether different questions, such as: “What’s working in your programs? What’s not working? How can we take what we’ve found, and improve going forward?” Trust is baked into learning-focused evaluation, so that even failures are seen as opportunities to gain new insights.

Evaluation that captures the past vs. Evaluation that is used in service to the future: Evaluation that is concerned primarily with capturing past performance feels like rote data collection. Past results are dutifully recorded, logged, and tucked away. But Evaluation that is, to borrow language from my colleague Hildy Gottlieb, used in service to the future, looks to identify paths to improvement. In this context, Evaluation is a tool that can actively inform strategies, helping us to shift course towards our goals in a thoughtful and responsive way.

Evaluation that is experienced vs. Evaluation that is shared: Evaluation that is simply experienced often involves objectification of some form—either the evaluation is “done to you” by evaluators, or “done by you” to clients or staff members—creating a dynamic of judgment. But Evaluation that is shared can create an altogether different dynamic in two ways. First, it can be part of a shared process where, rather than seeing others as people to be judged, they are offered a genuine place at the table to identify common areas for learning as part of a collaborative effort. Second, Evaluation can be shared as a tangible product of that process, in which the knowledge gained moves beyond a single organization’s walls. It informs the thinking of like-minded organizations in a communal way. And in both cases, Evaluation that is shared manifests an abundance mindset, something the social sector could certainly use more of.

What comparisons would you add to the table above? In what ways has your organization been able to maximize Evaluation's potential?

Hui Pie: Serving Up a Slice of Abundance

This being Thanksgiving week, it seems appropriate to talk about pie. And how we who work in the social sector view the size of that pie. And why sharing slices of that pie can be preferable to eating them ourselves.

About two years ago, on the heels of a successful roundtable discussion led by several Hawaii grant writing consultants, an idea was seeded. Several of us realized the local community of social sector consultants was robust, but fragmented. We each knew of nonprofits' frustrations in locating and researching consultants to meet their needs. What is more, we consultants realized that many of us worked independently, but craved connectedness with colleagues. The situation seemed ripe for the creation of a consultants’ hui.

In Hawaii, the term “hui” roughly translates to a club or an association, but as I took the lead in reaching out to colleagues to create a consultants’ group, I gravitated more toward the idea of the hui as a network. We were each a “node” in our own right, with our own spheres of influence. But we could be infinitely more useful to the community, and to one another, by linking our nodes through more intentional connections with one another.

Some of the colleagues that I reached out to “got it” right away. They viewed the idea from an abundance mindset, and realized that such a hui could expand opportunities for all of us, creating a powerful resource for the nonprofits many of us served. Several others, however, were resistant to the idea of collaboration. Scarcity thinking revealed itself in their questions: Won’t we be in competition with each other? Won’t we be cutting into each other’s piece of the pie? What would be in it for me?

As I recruited colleagues to the hui over the next few months, I shared several truths from my own experience that I hoped would address those kinds of questions:

  • There is always more than enough pie to go around. I have found that among nonprofits, there is always a greater demand for expertise than there is supply. Particularly in Hawaii, where more than 5,000 charitable nonprofits serve the community, demand for consultants’ services far outpaces consultants’ ability to meet organizations’ needs.
  • Sometimes sharing your slice of pie is tastier—and better for you—than eating it yourself. In my consulting experience, I have found that some projects are simply too much to tackle alone. Collaborating with colleagues allows each of us to take on more challenging or complex projects than we might choose to do solo. By tapping into a cadre of fellow consultants, we open ourselves up to greater opportunities within a broader range of work, often in ways that are more satisfying to us professionally. And collaborations allow us to learn from the insights of our colleagues, bounce ideas off a partner, and generally get out of the echo chamber in our heads.
  • If you specialize in making apple pies, it’s just good business to know fellow bakers who specialize in pumpkin pies. Although some consultants are successful generalists, most of us specialize in some way—which means that we can’t be all things to all organizations. For example, maybe a consultant is terrific at fundraising. But she is at a loss when asked by an organization to help with strategic planning, or board training, or technology implementation. Since most of us hate leaving potential clients empty-handed when they ask for help or referrals, having a network of colleagues whose expertise complements our own is extremely useful. It allows us to help organizations meet their needs when we cannot. And over time and through relationships, we ultimately benefit in the same way, becoming the shared referral when others realize they are not the “right fit” for a particular organization’s needs.

I am glad—and deeply grateful—that enough of my colleagues embraced abundance thinking that we were able to grow that seed of an idea into a full hui, now known as Hawaii Community Benefit Consultants. What started off as a group of roughly 20 social sector consultants has since grown to more than 50; the majority of members choose to include themselves in our online directory as a free service to Hawaii nonprofits seeking expertise to support their efforts. As consultants to community benefit organizations, I believe we have to model the kinds of values we hope to see around us: collaboration, community, and relationship building. We have to “walk the talk” of our values, both with one another as colleagues and in our interactions with our clients. Taking time to create and foster connection with fellow social sector consultants seems an appropriate way to do just that.

How have you embraced abundance thinking in your organization or work? How have huis that you are a part of—either formal or informal—allowed you to grow?

What Researchers Can Learn from Bacon's Bad Week

If you’re a bacon lover like me, you probably read Monday’s news on the World Health Organization’s classification of processed and red meats as carcinogens with some mixture of alarm, disappointment, and an utter lack of surprise. I think we’ve all known for a long time that red meat is less than healthy for you, and that anything as processed (and tasty!) as bacon probably isn’t good for you, either.

But the speed and breadth with which the news of the WHO’s report spread was something to behold. The “bad week for bacon” has been a fascinating case study, too, revealing some unfortunate truths about the clarity—or lack thereof—with which researchers communicate with lay consumers. And while this particular example is from the realms of nutrition science and public health, social scientists and social sector researchers can certainly learn some lessons as well.

Researchers need to speak more clearly to lay audiences. To its credit, the International Agency for Research on Cancer (IARC), the working group that issued Monday’s report on red and processed meat, distributed a Question & Answer brief as a way of concisely addressing public concerns that might arise from its report. Unfortunately, even the Q&A would be difficult for a layperson to understand, given its high reading level and the scientific nuances in the interpretation of data. This opens the door for mass media to reduce the findings to sensationalistic headlines, such as “Processed meats rank alongside smoking as cancer causes—WHO.” Which leads me to my next point…

Researchers need to better understand—and respond to—the layperson’s interpretation of scientific information. In the case of this week’s report, the IARC’s placement of red and processed meats in the same carcinogenic classification group as cigarette smoking understandably leads many to conclude that eating bacon and burgers is the dietary equivalent of puffing a pack a day. But that’s not quite the case. As Cancer Research UK explains in its post, “Processed meat and cancer—what you need to know,” the IARC classifications reflect the strength of the scientific evidence that a substance is carcinogenic; they do not, however, reflect the actual increased risk that that substance causes cancer. That subtlety—that although they share the same classification category, bacon and cigarettes are not remotely equivalent carcinogens—gets lost in the messaging. And as Sarah Zhang of Wired points out in her article “Bacon Causes Cancer? Sort of. Not Really. Ish.,” the IARC recognizes that risk assessment is part of how the public wants to understand health and other scientific data—it has just chosen to be unresponsive to that fact. That leads some to simply throw up their hands and declare: everything you love will kill you, so why even bother trying to be healthy?

So we know what some of the problems are with researchers’ communication patterns. The bigger question is, what can researchers do differently to successfully share important findings with the public?

Check for clarity among lay audiences. Much as marketers conduct market research, scientific communities could “market test” sample research findings with public audiences to check the readability and clarity of their messaging. Questions that arise in response to these tests could then be anticipated and addressed. Visuals, such as the infographics created by Cancer Research UK, also go a long way in helping distill and simplify complex scientific information for the general public.

Translate findings into practice. Researchers need to ask themselves, What do these findings mean for people in their day-to-day lives? The IARC’s Q&A brief, unfortunately, offers unclear, somewhat confusing advice to readers seeking answers on whether, and to what extent, to change behaviors. (One example: “Q: Should I stop eating meat? A: Eating meat has known health benefits. Many national health recommendations advise people to limit their intake of processed meat and red meat…”) Cancer Research UK is again a great counter-example to the IARC’s obfuscation. In the graphic below, the organization helps readers estimate how much meat a person might typically eat in a day, and offers helpful suggestions on how to decrease processed and red meat consumption.

Consider Blogging. Leading news outlets—such as The New York Times and The Wall Street Journal—have science or wellness blogs, as do research sources such as the Centers for Disease Control and Prevention and Harvard Health Publications. These blogs help readers sift through scientific mumbo-jumbo by culling key information, suggesting practical lifestyle modifications, and making research findings less intimidating. If one of the goals of the research community is to inform and change behaviors, it behooves researchers to use tools—such as blogs—to make their findings as accessible and user-friendly as possible.

What challenges has your organization had in using, or sharing, research findings? What suggestions would you make to improve researchers’ communications with the public? 

Child & Family Service's Evolving Journey to Outcomes

As I mentioned in my last post, measuring outcomes in theory can be quite different from measuring them in practice. Child & Family Service (CFS) is a Hawaii-based social service nonprofit that, in many ways, is “ahead of the curve” locally in its active embrace of performance assessment. Consequently, CFS offers a unique case example for insights and lessons learned. Howard Garval, President and CEO of CFS, has championed more rigorous evaluation of social service programs for a number of years, and has “walked the walk” at his organization, leading a culture shift toward outcomes-based assessment. Recently, via email, I asked Howard about the changes he’s witnessed, the lessons he’s learned, and the insights he’s gained along the way. As he describes below, it’s a process that takes time, patience, and perseverance, but that is ultimately—and rightly—focused on improving outcomes for those served.

What specific changes or progress have you experienced in recent years at CFS that you attribute to the organization's embrace of an outcomes-based culture?

We have done two key things to move us to an outcomes-based culture:

(1) We adopted the Results-Based Accountability (RBA) model for program performance measures, developed by Mark Friedman of the Fiscal Policy Studies Institute and author of Trying Hard is Not Good Enough. RBA asks three primary questions: (a) How much did you do?—this is the outputs question; (b) How well did you do it?—this is the quality question; and (c) Is anyone better off?—this is the outcomes question. I have added a fourth question: How can we use the data to get better?

(2) We implemented Efforts to Outcomes (ETO)/Social Solutions electronic record software, the system Geoffrey Canada uses at the Harlem Children’s Zone, which formed the basis for the federal Promise Neighborhoods grant funding. Our staff has given ETO/Social Solutions rave reviews, and there has been practically zero resistance to implementation of the electronic record. Staff members have become more comfortable talking about data and outcomes. CFS has developed a common language as a result of RBA. Performance measures continue to evolve, and we keep drilling down to the question: “Is anyone better off?”

What has been most surprising about CFS' journey toward performance measurement over the past few years?

Human service professionals are in this field because of their hearts; in general, they did not come into this field because they liked data and measurement. However, a smart thing we did was create an agency-wide steering committee and coaches who have been both our cheerleaders and our trainers, supporting our program staff in measuring outcomes. I expected more resistance, and even though we have had some pockets of it, overall staff members have moved forward with the journey better than I anticipated. However, we have to keep motivating staff to see how performance measurement can help them improve their services to our clients. Once our direct service staff can see how data and outcome measurement can help them deliver better services, we think their buy-in will be strengthened. Direct service staff in some programs have stepped up and taken ownership, breaking down barriers to data collection and actively having a voice.

What has been the greatest challenge in becoming a high-performing organization?

I think we have learned that it is better to get a program onto the ETO/Social Solutions software first, before doing the RBA work. For example, The Institute for Family Enrichment, or TIFFE, fully merged with CFS effective July 1, 2015; based on what we’ve learned, we will move TIFFE’s programs onto ETO/Social Solutions first and then do the RBA work. The greatest challenge is helping human service professionals get comfortable thinking about outcome measures and asking the right questions: How do we know a program is effective? How do we know we are producing a measurable benefit for the people we serve?

What suggestions would you share with smaller nonprofits that are interested in outcomes assessment, but that may not have the human or financial resources to fully invest in changing organizational culture?              

I think the RBA model has a simplicity that makes it accessible to smaller nonprofits. Through our new Institute for Training & Evaluation, CFS is now the only licensed RBA provider in the state of Hawaii. For a reasonable cost, we can help organizations learn and implement the model, which also comes with an RBA scorecard developed by the Results Leadership Group. Organizations could potentially pool resources to have us work with them, and “train the trainer” models, like the one we used with our coaches, can be a less expensive way to build capacity. I would also say small organizations at least need to ask themselves: What would tell us that one of our programs is actually working and producing a measurable benefit for the people it serves? It’s a process. It takes time, patience, and perseverance. It certainly helps if resources can be dedicated to moving this forward.

What do you think being performance-based makes possible for your organization? For those you serve?

I think performance-based assessment positions us to be a leader in human services and will enable us to garner additional support from funders and other donors. It will also help us sustain funding, as funders increasingly demand outcome measures showing that their investment is producing impact. Using data to tell our direct service staff how we are doing, and to demonstrate what is working, will enable us to engage in genuine continuous quality improvement and get better at delivering services that work. In addition, our staff are becoming more aware of the impact they are having on program participants, and that in turn is improving the quality of their work.

For our clients, being performance-based has supported their individual journeys and their self-awareness regarding the need for services. For example, the PTSD “pre-test” used in one of our programs has led clients to recognize their own need for counseling services. One of the benefits of our increased emphasis on outcome measurement, then, is the empowerment of our clients, who gain additional insights by tracking their personal growth.
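Howard’s summary of RBA is concrete enough to sketch in code, so for readers who like to see a framework made tangible, here is a minimal illustration in Python of how a program’s measures might be grouped under the three RBA questions, with Howard’s fourth question left as the discussion prompt it is. To be clear, every name and number below is hypothetical, invented for illustration; this is not CFS data, and it is not the official RBA or Results Leadership Group tooling.

```python
# A purely illustrative sketch of grouping a program's performance
# measures under the three RBA questions. All program names, measures,
# and figures are hypothetical: not CFS data, not official RBA tooling.
from dataclasses import dataclass, field


@dataclass
class Measure:
    name: str
    value: float
    unit: str


@dataclass
class RBAReport:
    program: str
    how_much: list = field(default_factory=list)    # outputs: how much did we do?
    how_well: list = field(default_factory=list)    # quality: how well did we do it?
    better_off: list = field(default_factory=list)  # outcomes: is anyone better off?

    def summarize(self) -> str:
        """Lay out the three RBA questions with their measures; the
        fourth question (how can we use the data to get better?) is a
        conversation to have with staff, not a field to compute."""
        sections = [
            ("How much did we do?", self.how_much),
            ("How well did we do it?", self.how_well),
            ("Is anyone better off?", self.better_off),
        ]
        lines = [f"Program: {self.program}"]
        for question, measures in sections:
            lines.append(question)
            for m in measures:
                lines.append(f"  - {m.name}: {m.value:g} {m.unit}")
        return "\n".join(lines)


# Hypothetical example: a family counseling program's quarterly report.
report = RBAReport(
    program="Family Counseling (hypothetical)",
    how_much=[Measure("families served", 120, "families")],
    how_well=[Measure("sessions delivered on schedule", 92, "%")],
    better_off=[Measure("families reporting reduced conflict", 68, "%")],
)
print(report.summarize())
```

Even a toy structure like this makes the framework’s point visible: the “how much” list can fill up quickly while “is anyone better off?” stays empty, which is precisely the gap RBA is designed to expose.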

 

Nonprofit leaders: Does your organization have experience with “walking the walk” of a performance-based culture? What successes and challenges have you encountered in creating that culture?

Funders: Is your foundation/philanthropy leading by example when it comes to being an outcomes-based organization? What unique challenges have you encountered in assessing the effectiveness of your philanthropic efforts?

Child & Family Service's "Leap" Toward Outcome-Based Assessment

Nonprofits are often looking for resources to start them on the road toward impact measurement. Moving beyond outputs (the count data, or answers to the "how much" question) to outcomes (the impact data, or answers to the "what meaningful changes did you effect" question) can be overwhelming, especially for smaller organizations. One of my go-to resources for nonprofits seeking guidance on performance assessment is the monograph Leap of Reason, which I describe below. And although I originally published this post in March 2012, it certainly bears revisiting—not only because outcomes-based assessment continues to be a "hot" topic in the nonprofit sector, but also because our understanding of what it takes to be an impactful organization continues to develop.

While talking about measuring outcomes is all well and good in theory, it's helpful to know what it looks like in practice. Here in Hawaii, I sometimes hear from local nonprofits that few examples exist of organizations that really "get" performance assessment. That's why I'm happy to shine a spotlight on Child & Family Service (CFS), a well-known and respected local social service organization that can serve as a model for others. What is more, CFS has been open about its journey and continuing evolution in fully embracing an outcomes-based culture. I'll be sharing more about CFS' journey in my next post—stay tuned!

****************************************************************************************************************

Have you ever read a book that so clearly, concisely, and compellingly distilled an issue, you just felt the need to share it? Recently, I encountered such a book on outcomes-based management for the nonprofit sector, titled “Leap of Reason: Managing to Outcomes in an Era of Scarcity.” A monograph by Mario Morino, Chair of Venture Philanthropy Partners, Leap of Reason is a call to nonprofits to move toward the rigorous identification and measurement of outcomes to drive the impact of their work. Morino makes the case that in the current climate of tightened budgets and reduced funding from government and philanthropic sources, a paradigm shift toward meaningful, measurable impact is both necessary and desirable throughout the social sector.

I had the pleasure of being introduced to Leap of Reason by Howard Garval, President and CEO of Child and Family Service (CFS). Garval is a veteran nonprofit manager, having served for more than a decade as COO and CEO of The Village for Families and Children in Hartford, Connecticut, before his current tenure at CFS. During his time as an executive at The Village, Garval became familiar with Results-Based Accountability, a framework for producing measurable improvements in the public sector, developed by Mark Friedman, Director of the Fiscal Policy Studies Institute. Impact assessment, then, is not a new concept for Garval; he has long embraced the idea of measurable outcomes at the organizations of which he’s been a part. As a result, “Leap of Reason really hit home,” Garval said. Last week, Garval and I had an opportunity to talk about the book, its resonance with his own experience in social services, and the ways in which outcomes-based management is helping shape the operations and future direction of CFS.

Garval shared a few key take-aways:

1.    When we talk about outcomes, we are ultimately—and most importantly—talking about creating impact for those we serve. Garval pointed out that for him, the goal of improving CFS, its programs, and its operations is an intermediary step. His ultimate goal in utilizing outcomes-based management is to provide “evidence that we produce measurable benefit, and are truly making a difference [to those we serve].” Likewise, Morino states in Leap of Reason: “The greatest dividends [of managing to outcomes]…accrue to the communities, the families, and the individuals with whom we work. They benefit from stronger schools, smarter clinics, and safer communities—all because of nonprofits’ commitment to becoming better.”

2.    While it’s important for nonprofit leaders to buy in to a performance culture, a top-down approach alone won’t ensure meaningful changes within an organization. Garval noted that in his experience, having direct line staff who subscribe to a culture of measurable impact is just as important as having leadership that does the same. Sometimes, he said, “line staff have the best ideas for producing [measurable] benefits,” precisely because of their direct contact with the individuals and families served. Garval further stated that identifying staff who are “early adopters” of an outcomes approach helps engage staff overall, since peer-to-peer influence may be stronger than that exerted by an organization’s leaders. Morino makes the same point: “Leaders can’t simply create by edict the organizational cultures they desire.”

3.    Identifying the right questions to ask is challenging, but critical to an organization’s work. Garval described taking part in a recent CFS leadership training in which the group reviewed the organization’s outcomes for five core service areas. Looking critically at the outcomes was “the best part of the training,” said Garval. “We drilled down deeper into our measures to [examine if] we are measuring the right stuff.” The exercise, however, sometimes led to more questions than answers: Are we measuring what’s most important? How are we using the information we collect to continually improve our services? Are we collecting outcome data that will ultimately strengthen programs and, consequently, be most beneficial to those we serve? In Leap of Reason, Morino states the challenge this way: “…With all the rhetoric around mission, scaling, accountability and the like, the reality is that we often have to go back to basics and ask, ‘To what end?’ Defining an organization’s true purpose is absolutely essential to cultivating a performance culture.”

4.    Better outcome measurement may have negative short-term implications, but it’s a crucial investment in long-term improvement. Garval described another CFS leadership training exercise that involved identifying forces that support and restrain the organization’s increasing shift toward a performance culture. The worry that better outcome measurement may initially mean less impressive results for an organization was named as a restraining force, and is certainly a valid concern. Yet, Garval noted this is a concern that must be overcome, because the collection of data on baseline performance and subsequent goals for the organization’s improvement are what will allow CFS to identify and maximize its impact on those it serves. Morino recognizes this challenge—and opportunity—as well: “…The transition to outcomes-oriented management will almost certainly have some negative near-term implications for the organization. These changes, though, will just as certainly have a positive impact for the nonprofit in the long run as it becomes more effective in achieving its mission.”

Nonprofit leaders: Does your organization employ an outcome-oriented approach to its work? How has this approach influenced the management and impact of your organization?

Funders: To what extent have your funding decisions been driven by nonprofits’ outcome performance? How has your funding organization supported nonprofits’ efforts to improve their impact in measurable ways?