Webinar: Three-Part Formula: The Finale

Presented by: The Rucks Group

Part III: The Finale of the Three-Part Formula for Writing a Grant Proposal Evaluation

July 29, 2020

The Finale: Evaluation Use, Sharing Findings, and Evaluation Teams. In Part III, we discuss the final key elements of a program evaluation: using program evaluation for continuous improvement, reporting to funding agencies, sharing findings with different audiences, and involving the necessary personnel throughout the process. Most importantly, the final segment pulls together all three parts to show participants how to adjust the elements of an evaluation plan to fit the size of the program proposal.

Transcript

Part 1:

Welcome, everyone. Thank you so much for joining us today for part three of our three-part series on writing a grant proposal evaluation. If you've been on the other two webinars, then you're familiar with this housekeeping item: we want to make sure we're answering your questions, so please use the question function on your computer. We have with us our fabulous intern, Alyce Hopes, who will help moderate the Q & A portion.

Let me introduce myself as well. I'm Lana Rucks – I am principal consultant of The Rucks Group, a research and evaluation firm that gathers, analyzes, and interprets data to enable our clients to measure the impact of their work. We were formed in 2008, and over the past several years we've worked primarily with higher education institutions on grants funded by federal agencies such as the National Science Foundation, the Department of Education, and the Department of Labor.

Now, the purpose of this webinar series is really wrapped around three objectives: one, I wanted to provide an organizing framework for understanding the various evaluation terms that are often used; two, I wanted to provide a formula for writing evaluation plans; and three, taking those together, I was hoping to reduce the evaluation angst. Toward those objectives, we want to make sure we're answering your questions, so again, please use that question function.

So, let’s get started with a quick recap of what we went over in the last webinar:

In the last webinar, I reinforced a framework for thinking about an evaluation plan that is centered around six elements. The first two, the foundations, which we covered at the beginning of the month, focus on the theory of change and the evaluation questions. The next two, which we discussed in the last webinar, focus on the evaluation design and on data gathering and analysis. In this final piece, The Finale, we're going to focus on the use of the findings as well as the operational approach.

So, in thinking about these six elements, what I want you to keep in mind is that when writing an evaluation plan for varying scales of projects, you don't necessarily eliminate any of these elements – you just provide less detail. A small-scale evaluation may have just a sentence on each element, whereas a large-scale evaluation may have a paragraph or much more detail. Last time we also introduced step three of the formula: choosing an evaluation design that is appropriate to the context and the level of resources you're using. Toward that end, we provided a comparison of a number of common evaluation designs and some of the considerations in choosing among them (such as the extent to which a design addresses internal validity, and the implementation resources it requires). We also talked about the fact that these design approaches vary in the extent to which they let us conclude that A caused B – that is, the extent to which they can effectively address internal validity.

The other element we talked about last time was the data gathering process. Step four is about considering what you're gathering, how you're gathering it, and when you're gathering it, and then describing how you'll analyze it. We talked about data gathering strategies and what types of data can be gathered from qualitative and quantitative standpoints. And finally, we introduced the idea of a data matrix – a compact way to present a lot of information that connects the evaluation questions with the data gathering and the analytical approach.
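As a rough illustration, a data matrix can be sketched as a small table of rows, each linking an evaluation question to its data gathering and analysis plan. The questions, sources, and timings below are invented for illustration, not drawn from an actual plan:

```python
# A hypothetical data matrix, sketched as a list of rows.
# Every field value here is invented for illustration only.

data_matrix = [
    {
        "evaluation_question": "To what extent are students retained in the program?",
        "data_gathered": "Enrollment and completion records",
        "source": "Institutional research office",
        "timing": "End of each term",
        "analysis": "Descriptive statistics; retention rates by cohort",
    },
    {
        "evaluation_question": "How do students experience the support services?",
        "data_gathered": "Student focus groups",
        "source": "Enrolled students",
        "timing": "Mid-year and end of year",
        "analysis": "Thematic (qualitative) analysis",
    },
]

# Print the matrix row by row, one field per line
for row in data_matrix:
    for field, value in row.items():
        print(f"{field}: {value}")
    print()
```

However it is laid out on the page, the point of the matrix is the same: each evaluation question is tied directly to what will be gathered, from whom, when, and how it will be analyzed.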

Part 2:

Well, let's go ahead and get started on the fifth element, and that's use of evaluation findings.

If you want a way of conceptualizing these different elements, think of the first four as the “bricks” and the last two – the use of evaluation findings and the operational approach – as the “mortar”: they're what pulls everything else together. Those first four may be a little more difficult, a little less accessible to think about, but these last two, I think, are a little more intuitive, although they may challenge the way you think about evaluation just a bit.

Let’s go ahead and jump in and I’ll show you what I mean:

In thinking about use of evaluation findings, there are really two primary uses: they're either related to proving or related to improving. Proving is focused on accountability – did you do what you said you were going to do? Did you retain the students you said you were going to retain? Did you graduate the number of students you said you were going to graduate? Improving is much more focused on the continuous improvement process – how can we get better at retaining individuals? How can we get better at graduating individuals? What's nice about these two purposes is that they're interconnected. The way I think about proving and improving is that proving is nested within improving, so that if you're focused on the improving component, you're going to take care of the proving piece. Let me give you an example of that:

Over the course of this webinar series, one of the examples we've been reviewing is a Department of Labor TAACCCT grant that we worked on. The purpose of this grant was to recruit individuals who needed additional certifications, particularly as mechatronics technicians. One of the goals, of course, was to actually award people credentials – so they needed to be retained, and they needed to complete the program. Early on within the project we were gathering a lot of different information, but one of the things we started to look at was actual retention data: to what extent were people being retained and completing the program?

People were enrolled in this project through cohorts – about 15 people per cohort, across 24 cohorts, over about a year and a half. Looking at the first four cohorts, these were our findings: no really clear pattern, and we were unsure how to interpret them. We were talking with the project team, we were regularly engaged, and we were gathering data and feedback from students and from instructors. As we looked at this more closely, we realized that instead of organizing the data in this format, we needed to organize it another way – by putting cohorts one and three together, and cohorts two and four together. Why is that? Well, what started to become apparent was that the people in the odd cohorts – the day cohorts, enrolled in the program during the day – had slightly different characteristics than those enrolled in the evening cohorts: different motivations and different levels of educational background. In looking at that and understanding the demographic information, the project team was able to implement a number of different support services. They were able to identify and isolate services that would help maintain retention.

So, if we had looked at just the trend of the first two day cohorts, the retention rate for those enrolled in the day program would have been 47%. With those additional support services, the remaining day cohorts had a retention rate closer to 61%. Even though those additional supports were really targeted at people in the day program, they were also implemented for the evening program – and there, too, we saw an increase, from 71% for the first two cohorts to 79%. What's really encouraging about this is that they were able to improve on the proving piece because they were focused on continuous improvement.
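The regrouping and retention arithmetic described above can be sketched in a few lines. The per-cohort counts here are hypothetical, chosen only so the pooled rates land near the percentages mentioned in the webinar; the real project data is not reproduced:

```python
# Hypothetical (retained, enrolled) counts per cohort.
# The actual TAACCCT project data is not reproduced here.

def retention_rate(cohorts):
    """Pooled retention percentage across (retained, enrolled) pairs."""
    retained = sum(r for r, _ in cohorts)
    enrolled = sum(n for _, n in cohorts)
    return round(100 * retained / enrolled)

day_before = [(7, 15), (7, 15)]        # day cohorts 1 and 3, before support services
day_after = [(10, 16), (9, 15)]        # later day cohorts, after support services
evening_before = [(11, 15), (11, 16)]  # evening cohorts 2 and 4
evening_after = [(12, 15), (11, 14)]   # later evening cohorts

print(retention_rate(day_before), retention_rate(day_after))          # 47 61
print(retention_rate(evening_before), retention_rate(evening_after))  # 71 79
```

The key move is the grouping itself: pooling day cohorts separately from evening cohorts is what made the before-and-after pattern visible at all.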

So, what does that look like in practice – what is interacting with an evaluator, or with the evaluation, actually supposed to look like?

If you think about the life cycle of a project, there are all these moments where decision points occur. Very often, the evaluation – or the involvement of the evaluator – is not linked to where those decisions are actually occurring. With a continuous improvement mindset, you make sure you're syncing up the involvement of the evaluator, or the evaluation, with when those decision points actually occur. I go into greater depth on this role of continuous improvement in the webinar we posted in April of this year, Now what? How to Use Evaluation Findings for Project Continuous Improvement. Also, this September we will host the Developmental Evaluation: What is it? How do you do it? webinar, which will go into greater depth on this idea as well. For these purposes, what I want to emphasize is part of step five: you want to include a statement about how the evaluation findings will be used for continuous improvement, and then the reporting for the more proving-oriented component. This is really simple.

Here again is another example we've been using throughout this webinar series. It was a Department of Education grant, and it's a small-scale grant. As you can see, we just have a summary statement about how the evaluation findings will be summarized and the report shared with the project team, but also how formative evaluation findings will be used to inform the refinement of project activities. In a similar vein, here's another example we've been using throughout the series. This was another Department of Education grant, at a moderate scale. Again, what you'll see is a sentence about the reporting and continuous improvement purposes for the evaluation findings, and that the external evaluator will work with the project team on recommendations for project modifications throughout the project's life cycle.

So, the take-home lesson here is to just summarize how evaluation findings will be used for continuous improvement and reporting purposes – very simple sentences or statements that need to be included.

Part 3:

Let me continue on to the next piece, the operational approach – the final piece of the formula. Here you want to focus on the practical aspects of the evaluation: how the evaluation will actually be operated and implemented from a practical standpoint. So, for step six, you want to describe who will be involved in the evaluation and the timing of the evaluation activities. Who are some of the individuals usually involved? Of course, you have the principal investigator, but another person you want to engage is the institutional researcher. It's really important to engage the institutional researcher during the proposal phase – they may have real insight into what will impact the data gathering process, and you'll want to make sure that is reflected in the narrative. They may also need additional resources to fill a gap in gathering the needed data, and you'll want that captured in the budget during the proposal phase as well. You may also have other stakeholders involved in the process: a Co-PI, another project team member, somebody else within the institution, or maybe even another partner. Those are some of the other individuals you want to make sure to involve in the evaluation. And of course, there's also the external evaluator.

It's best to work from a collaborative standpoint with these individuals, so that you're collaboratively thinking about and implementing the evaluation, particularly with that lens toward continuous improvement. Sometimes you'll see the term “evaluation team” used to refer to this group of individuals. But let me take a moment here to talk a little bit about the role of the external evaluator.

An external evaluator should provide guidance in the development of the evaluation, and of course they should also lead its implementation – but again, the implementation should ideally occur from this collaborative standpoint. I also think the external evaluator should be the “champion” for the evaluation. I put “champion” in quotation marks because it's a champion for the evaluation, not the project per se. What do I mean by championing the evaluation? Well, you want someone who is really looking for ways to learn through the continuous improvement process. You also want someone who stays, what I call, “close” to the project, to identify emergent data gathering opportunities that may not have been clear before the project was actually being implemented. And you want someone who helps catalyze the process of using the data to inform decision-making. Basically, what they're doing is helping to use the evaluation for continuous improvement, and not just for the proving piece. In that way, an external evaluator can add a lot of value to a project beyond just letting you check a box to say that an external evaluator is serving on the project.

So, with regard to the team and what this looks like, very simple statements can be included in the narrative. In this case, we just give a description of our firm, which is leading the evaluation, and then of who will be involved – here, the project director and the institutional researcher. Similarly, in another example we talked about, it was the PI and the co-PIs who would be involved, along with information about myself and about the firm. Again, depending on the space, you can provide more or less detail around these elements.

Another piece you could consider adding to the evaluation plan is a reference to the cycle of evaluation activities. What you want to convey is that the evaluation is not something separate, but integral to the project. In this type of example, we're highlighting when the evaluation design will be articulated, when data gathering will occur, and when reporting will occur. The image also conveys that we're meeting frequently – most of the time, we try to meet with our clients at least once a month, because we deal with innovative, new projects, and there are a lot of emergent issues we want to make sure we're capturing information about.

So, the take home lesson here is really describe who will be involved in the evaluation and when the evaluation activities will occur.

[Q & A Portion not transcribed.]

In terms of putting everything together: we started off with the foundations, and we talked about the theory of change. For the theory of change, we said to summarize it in a sentence and to include a logic model at the proposal stage for moderate and large-scale initiatives. Then we asserted that every proposal should have evaluation questions, and that what varies is not their presence or absence, but their number. Next, we talked about choosing an evaluation design and what to consider for data gathering and analysis. And finally, the last two elements: the use of the findings and the operational approach. When you put all of those together, what you get is an evaluation plan.

We started this journey together talking about the evaluation word salad – all these different words that may not be entirely clear – and hopefully, after this time together, there's a little more clarity about where these terms actually land and where they actually live. Hopefully that angst is reduced and the objectives that we had have been met.
