Webinar: How to Use Evaluation Findings for Project Continuous Improvement


Presented by: The Rucks Group

Now what? How to Use Evaluation Findings for Project Continuous Improvement


April 30, 2020

You’ve won the grant and now you need to do a program evaluation. There are two general mindsets toward this work: “proving” to funders what you’ve done or “improving.” The Rucks Group recommends you focus on improving because it overlaps with proving and uses information from the program evaluation to advance your efforts. Having an improving mindset also completely changes how you address the program evaluation and reduces much of the angst around the evaluation.

Transcript

Part 1:

Welcome Everyone,

Thank you for joining us today for our coffee break webinar, Now What? How to Use Program Evaluation Findings for Project Continuous Improvement. We're so thrilled that you decided to take the time to join us this afternoon. Hopefully you have a hot cup of coffee with you as you take a quick break from your other assignments.

We want to make sure that your experience this afternoon is a positive one, and to help with that, make sure to ask any questions that you may have.

If you look on the right-hand side of your screen, there's a place for you to enter questions, and we have with us Madison Doty, our Research Assistant, and Alyce Hopes, our intern, to help answer some of those questions.

Pivoting. Pivoting is a word that we've been hearing a lot lately. Pivoting is when the data tell you that you need to go in a different direction – when your expected approach to achieving an outcome doesn't work, you may need to change, or pivot, to another approach.

In grants, needing to pivot to achieve goals and objectives is common, and I think part of the utility of evaluation is that it provides a formal process for pivoting. While there are many different definitions of evaluation, the one that I like to use is this: evaluation is the use of social science research methods to systematically investigate the effectiveness of social intervention programs.

Let me take a moment right here to quickly introduce myself. I'm Lana Rucks, the Principal Consultant with The Rucks Group. The Rucks Group is a research and evaluation firm that gathers, analyzes, and interprets data to enable our clients to measure the impact of their work. We were formed in 2008, and over the past several years we have worked primarily with institutions of higher education on grants funded by federal agencies such as the National Science Foundation, the Department of Education, and the Department of Labor.

During our time together there are four things I'm hoping to accomplish. First, I want to show how reframing the purpose of evaluation can aid thinking about using evaluation for continuous improvement. Next, I want to talk about developmental evaluation and how it offers another way of using evaluation findings toward continuous improvement efforts. Then I want to address an issue that can be very challenging for projects: how to get other people on board – how to rally the troops, in the sense of finding other individuals who are willing to use program evaluation results for continuous improvement. And while it's listed here last, it's certainly not least: we want to make sure that we're answering your questions. So again, as questions come up, make sure to pose them on the questions side of your screen.

So let’s go ahead and get started.

Part 2:

In thinking about program evaluation and the use of evaluation findings, there are really two broad purposes. One is learning (or "improving," as I call it); the other is measuring impact (or "proving"). Those two frames are really important in how you think about evaluation, and they're critical in terms of how you're actually going to interact with evaluation, whether you're working with an external evaluator or with an evaluation champion who may be internal to the team. Let's see what I mean by this:

Take this ambiguous figure. You're looking at it, trying to understand what it is, and you may conclude, "What I'm looking at is the letter H." As a consequence, you'll go ahead and say, "Oh, it's the word 'the.' That's how I can interact with this as an H."

But what if it's not an "H"? What if it's actually an "A," and you could make the word "cat" out of the letters? This is a really simple example, but the point is that how we frame something really determines how we interact with it. So whether we frame evaluation purely as reporting out to funders, for proving purposes, or whether we frame it as improving, really makes a difference in how we approach that work.

As another example, one of my favorite books is by Bill Walsh, who was the head coach of the San Francisco 49ers. When he began as head coach in the late '70s and early '80s, the 49ers were not a winning team. It was under his leadership that they really became a winning team. Essentially his philosophy was: don't focus on scores or winning per se; instead, focus on excellence. Excellence is a much broader area, it will take you much farther, and focusing just on the scores is limiting – hence the title of the book, The Score Takes Care of Itself. So essentially his philosophy was this: let's not focus on the score, let's focus on excellence, and the score is nested in excellence. Well, I liken that same idea to evaluation. You don't have to focus on proving, and you don't have to focus on how you report out to funders or internal stakeholders on the outcomes of an initiative. If you focus on improving, proving will take care of itself.

So, here’s an example in action:

One of the projects that we're working on is geared toward increasing participation in STEM by increasing the number of students who major in STEM areas, which is to be achieved by retaining students in those majors. One way to help with retention is to increase performance by offering students tutoring. Sometimes students are very reluctant to take advantage of tutoring, even though it's offered and even though it's free. So in this situation the project team was really interested in understanding the impact of tutoring, particularly because the project naturally created three different conditions under which students participated in tutoring. So in this project there were three conditions:

In the first group, students were required to participate in tutoring to receive a stipend – they received money, a financial incentive, if they participated in tutoring. Group two was made up of individuals who weren't required to, but did participate in tutoring in some way. Group three was students who didn't participate in tutoring at all. What we found in this project was that individuals who participated in tutoring and were incentivized in some way to participate actually had the lowest rates of failure across the three conditions. But regardless of whether they were incentivized or simply participated in tutoring, their rates of failure decreased. What's also interesting is that the individuals who received an incentive also had the highest combined rates of A's and B's, and in particular the highest rates of A's.
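The kind of comparison described above can be sketched in a few lines of code. To be clear, the counts below are entirely hypothetical (the talk does not report the actual numbers); this is only a minimal illustration of tabulating failure and A/B rates across the three tutoring conditions.

```python
# Illustrative sketch: comparing course outcomes across the three
# tutoring conditions described above. All counts are hypothetical,
# for demonstration only -- not the project's actual data.

def rate(count, total):
    """Return count/total as a percentage."""
    return 100.0 * count / total

# Hypothetical counts per condition: (students, failures, A_or_B grades)
conditions = {
    "incentivized tutoring": (40, 2, 28),
    "voluntary tutoring":    (35, 4, 20),
    "no tutoring":           (50, 12, 22),
}

for name, (n, failed, a_or_b) in conditions.items():
    print(f"{name}: fail rate {rate(failed, n):.1f}%, "
          f"A/B rate {rate(a_or_b, n):.1f}%")
```

Even a simple tabulation like this makes the pattern in the talk easy to show to faculty and administrators: the incentivized group has the lowest failure rate and the highest A/B rate.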

This also provides some insight into the dose that is needed for those types of outcomes. Now, I have to caution that these are preliminary findings, but they are encouraging, and the project team is able to use them to help persuade faculty members and administrators around having some type of requirement for students to participate in tutoring. Ultimately, by doing that, and by doing it early within the project, they'll be more likely to see the outcomes later down the road. Let's look at another example:

In this example, there was a workshop geared toward career readiness. Often you'll hear about individuals who don't necessarily have the soft skills – they're not "job ready," they don't have the career readiness skills – so this workshop was designed to address that. As with a lot of workshops, we disseminated a pre- and post-survey (actually, it was a counterfactual survey), and the findings weren't terribly surprising: overall, participants reported knowing more after the workshop than before.

But one interesting finding stood out: when asked whether they knew what they were supposed to bring to an interview, students reported that they understood that better before participating in the session than afterwards. What was great from the project team's standpoint is that they looked at those results and started to self-reflect, and we had a discussion to try to understand why we were observing that finding. We concluded that maybe there was some confusion, so they're going to go back, look at that section of the workshop, and see if something could be reworked to make sure it's better understood. By doing that, and by doing it on the early side, they are more likely to actually achieve the outcomes.

Part 3:

With continuous improvement there are a lot of ways in which developmental evaluation really can help in terms of how to deal with and address new initiatives and new projects. Let me give first a definition of what developmental evaluation is: developmental evaluation supports innovation development to guide adaptation to emergent and dynamic realities in complex environments. There are a lot of words there so let me tease that apart for a moment.

When we talk about innovation, for a lot of the projects that we work with, these are new projects – new initiatives in some way. Even if some piece of the project is replicating something that's been done previously, it's being implemented in a new context: in a new part of the country, or perhaps with different individuals or different majors. So it's really about this newness – an idea that an individual may have and is trying to figure out how to implement.

The other piece is adaptation to emergent and dynamic realities. If you've worked on a project, you know there's a real difference between what you write when you're developing your proposal and the realities you face when you're actually implementing. A developmental evaluation framework helps create a way of thinking about how to approach and handle these emergent components. And "emergent" can be big or small. It could be something small, like a shift in the individual who champions the initiative within the organization – that person leaves, and whoever comes in is not quite as enthused. Or it could be big, like having to deal with COVID-19 and social distancing. How do you deal with those emergent factors? We have a lot of projects right now that have either summer camps for students or professional development for teachers and faculty members and are trying to figure out how to address these emergent and dynamic realities.

The other piece, these "complex environments," is that you're putting the initiative into a system, and that system is organic. When you put in an initiative, you have to be sensitive to those complex factors – sensitive to that complexity.

One way to think about this is to compare it against the processes associated with research. A lot of times you hear about the overlap between research and evaluation, and there are many commonalities, as I said, in terms of those processes. In research you have a hypothesis, you develop research questions, you test the hypothesis, and then you analyze the findings (and then, importantly, you refine the hypothesis). Evaluation follows a very similar process: you start with a theory of change about how an initiative will work, you develop evaluation questions, you test that theory of change through the project implementation, you analyze your findings, and then you refine your theory of change through the implementation process.

So, in thinking about evaluation through this developmental evaluation lens, the engagement of evaluation across the life cycle of the project will be slightly different. Across the activity life cycle of a project you have different decision points, and if you're not thinking about evaluation as integral to continuous improvement, or from a developmental evaluation standpoint, you may not be as focused on aligning evaluation to those decision points – so the evaluation and the project's decisions fall out of alignment. Within a developmental context, what you want to make sure you're doing at those decision points is closely aligning the evaluation so it can be leveraged for really good data-informed decision-making.

Let’s look at an example of this:

In another project we were working on, the project team held an outreach event for students and their parents to introduce them to a new major the school was implementing, and disseminated a survey at the end. When they disseminated the survey, they had about a 9% response rate. That's not really the response rate you want – you want something much higher. So we had some discussions with the project team about what could be done to change the dissemination method. There were a lot of tweaks, and at a similar event about a month later, we implemented those changes and had about a 40% response rate. Now, you could say this is really small, because it's just increasing the response rate on a small initiative. But by having larger representation of the population of people who were there, they now have better data to make decisions, and they can incorporate those stronger decisions moving forward – and that will help them achieve their outcomes. That's really the point of developmental evaluation.

Part 4:

Very often in dealing with evaluation and evaluation issues, you're hopeful that people are enthused and excited about the topic, and sometimes they're just not – not because they don't want to have that information, but for a host of reasons. So instead of people looking like this, sometimes they're looking like this. The question, when you're sold on the idea and believe in using evaluation for continuous improvement purposes, is: how do you get other people to buy into it as well? Here are a couple of things that you can do:

The first item is making sure that you're linking the evaluation to a larger frame – linking it to a greater mission or achievement. That could be the larger mission of the project, or, on a much larger scale, the mission of the organization. When the evaluation is linked to a mission and to something bigger, it doesn't feel like something extra that has to be added on top of everything else that has to be done.

The other thing you can do is a stakeholder analysis. One benefit of a stakeholder analysis (and it doesn't have to be done in a highly formal way; it can be slightly informal) is understanding that every stakeholder has a stake. Every person who's invested has something they want out of the work, and they each have slightly different perspectives. You may have the P.I., a project owner, an administrator, an institutional researcher, a program officer, or a funder, and everyone has a different perspective. Being conscious of what their perspective is and what they may want out of the evaluation is really helpful in how you frame your message. While this sounds like a lot of extra work, in practice it's not. It's really just taking the perspective of the other person and framing the message in those terms for them.

The other piece I think is useful is making sure to listen for pain points – find those teachable or learning moments. One of the ways I think about this is the old adage, "You can lead a horse to water, but you can't make him drink." I often tease and say, "but you can put salt in his hay." The idea is, once there's salt in his hay, he's a little thirstier for water. In the same way, when there are pain points in a project, showing how evaluation can help address those pain points makes it much more likely that you'll be persuasive about why evaluation is of value and how it can help for continuous improvement purposes.

I think we're pretty close to time, so I'm going to go ahead and stop. I just want to say thank you so much for joining us today; I really hope you were able to get a lot of great information from this webinar. You will receive an email with information on a survey, and if you can complete it, we would really appreciate it. We are interested in our own continuous improvement processes, so I would really like to have that feedback so we can make sure we're offering other resources that would be helpful for you. I hope you have a great rest of the day. Thank you.
