Many leaders in the nonprofit sector have noticed that the landscape has changed significantly regarding the results donors and other external constituents expect from them. The days are long gone when the public naively assumed that nonprofits were doing effective work with real community impact merely because they were performing good deeds. Savvy donors now demand that nonprofit organizations evaluate their programs and report outcomes, and wise nonprofit leaders are establishing organization-wide evaluation practices that drive internal program improvement.
Sonoma Valley nonprofits that receive funding from Bay Area foundations and philanthropic organizations, including Sonoma Wine Country Weekend, are discovering that they must clearly identify the outcomes they expect to achieve with the funding they request and articulate specifically how they will measure those anticipated outcomes. For example, simply measuring and reporting the number of students who received mentors isn’t enough. What matters is reporting the extent to which those students’ grades improve, their school attendance increases, and their anti-social behavior decreases as a result of being mentored.
Tina Baldry, Program Director for Sonoma Valley Mentoring Alliance, is currently ramping up the rigor with which the organization evaluates its programming. Baldry reported, “Evaluating our program leads to successful program design, awareness of the impact our programs make on the youth we serve, and provides our staff with a clearer picture of how we are contributing to the growth of their mentoring relationship, as well as engaging youth in activities that help foster social, emotional and academic success. Evaluation is crucial in supporting us in reaching the goals of our mission: To enhance, engage and enrich the lives of youth so that they develop tools that will lead to a healthy, well balanced, successful life.”
Baldry continued, “Currently, we are implementing surveys for the 2015-2016 school year to evaluate the overall success of the Stand By Me Mentoring Program. In addition, our Road Map to Your Future Program and our STEM (Science, Technology, Engineering, and Math) materials grants will be evaluated throughout the school year.” In addition to tracking academic performance and in-school behavior, Baldry will use Survey Monkey’s online survey tools to gather and analyze feedback from mentors and mentees and report the results to funding partners.
The late Milton Friedman, Nobel laureate and economist, said, “One of the great mistakes is to judge policies and programs by their intentions rather than their results.” Measuring results requires being clear about what you plan to measure. Charting Impact is an initiative started three years ago by GuideStar USA, Independent Sector, and the BBB Wise Giving Alliance to help nonprofits document their effectiveness. Using the Charting Impact framework, nonprofit leaders can ask themselves these five questions: (1) What is your organization aiming to accomplish? (2) What are your strategies for making this happen? (3) What are your organization’s capabilities for doing this? (4) How will your organization know if you are making progress? And (5) What have and haven’t you accomplished so far?
Another Sonoma Valley nonprofit that’s implementing a robust evaluation plan is Teen Services Sonoma. According to Cristin Lawrence Felso, Executive Director, “Our evaluation plan includes using Survey Monkey to issue pre- and post-tests for all our workshops so that we may evaluate the degree to which participants gained knowledge of the workshop content. We also use Survey Monkey to give satisfaction surveys to all our volunteers, employers and customers. When we place a young person on the job with local employers, our job coaches meet with employees to complete a 30-, 60- and 90-day evaluation of the teen employee’s on-the-job performance. We use this information to continuously evaluate and improve the quality and effectiveness of our programs and services. Our pre- and post-tests and survey questions are developed to gather information that is directly related to our expected program outcomes as listed on our program logic model. That way, we can verify that we are achieving the goals of our program. These outcomes are what we expect to see through our programs, based on literature reviews of similar programs that have been proven to be effective.”
The plans that Felso and Baldry are using to guide their evaluation implementation are based on nonprofit best practices and include the following components: (1) purpose of the evaluation; (2) audience for whom the evaluation is intended; (3) research questions that will be answered through the evaluation, or in other words, what will be measured; (4) staff and technology resources the organization can dedicate to the evaluation; (5) challenges the organization may face in implementing the evaluation; (6) what the evidence-based literature reports the organization can expect to achieve through its programming; (7) methods the organization will use to collect data; (8) how confidentiality and anonymity of program participants will be ensured; (9) how the organization will analyze the data collected; (10) timeline for collecting and analyzing the data; and (11) how the evaluation results will be reported and disseminated to the community.
By developing a solid evaluation plan and implementing it with fidelity, a nonprofit’s leadership can report with confidence the level of impact the organization is making in the community and the extent to which those served by the organization are receiving real value.