There are many moving pieces in submitting a winning grant proposal: building the case for the need, clearly describing your approach, and articulating how the initiative will be monitored and evaluated. While evaluation may seem separate from the other components, it's not. Project development should take the evaluation, and by extension the external evaluator, into consideration. In this webinar, we are pleased to have the esteemed Dr. Yvette E. Pearson, Founder and Principal Consultant of The Pearson Evaluation and Education Research Group, LLC (The PEER Group), as our guest co-presenter. Capitalizing on Dr. Pearson's myriad experiences, which include serving as a Principal Investigator, National Science Foundation program officer, and program evaluator, we will explore strategies for optimizing a project for successful implementation.
For an optimal learning experience, we encourage you to review our previous Coffee Break Webinar, 5 Best Practices for Hiring the Right Evaluation Partner.
Lana Rucks:
Welcome, everyone, to the current installment of our Coffee Break Webinar: Optimizing Success: Strategies to Leverage Program Evaluation for Project Implementation. We're so thrilled to have you here with us. Before we jump into everything, I do real quickly have to wish my mom 'Happy Birthday.' Hi Mom, happy birthday! And because I know Dad is watching this with you, because they always watch, hi Dad! I hope you have a great time celebrating Mom's birthday today.
With that out of the way, let me go ahead and get started by setting the context for today's conversation. One way to think about projects and project implementation is to think of them as puzzle pieces. For teams that have worked on grant-funded initiatives, I think this is an analogy that will resonate. There are a lot of different pieces involved, a lot of things you have to put together to successfully implement an initiative. Sometimes project teams are challenged in thinking about where evaluation fits within this puzzle and all these various pieces, particularly because when they are introduced to program evaluation, it's called things like external evaluation, third-party evaluation, or independent evaluation. What we really want to do today is build the case that you shouldn't think about evaluation as something separate from project implementation. Rather, it's an integral part of implementation, and that's how you are going to be able to really optimize success.
With that as context, there are three things we want to share. First, we want to emphasize the value of evaluation beyond the fact that it's required, which is what motivates some project teams to do it, and really give a 360-degree view of that. Second, we want to offer strategies for how you actually optimize success: what are some things you can do at a practical level to leverage your evaluation to optimize project implementation? And of course, as always, we want to make sure that we are answering your questions.
To help us in this conversation, I am absolutely thrilled to have with us Dr. Yvette Pearson. I've had the privilege of working with Yvette on a variety of projects over the last couple of years, and I have an immense amount of respect for her. She is well respected in the education research space, and the STEM education research space as well. She brings a unique perspective to this conversation as a principal investigator, evaluator, and program officer. So, with that, let me turn things over to Yvette for a few moments so that she can share more about her background. Yvette?
Yvette Pearson:
Thank you so much, and it is such a pleasure to get a chance to tag team with the great Dr. Rucks on this webinar. I always tell people that Lana is the evaluator's evaluator, and that truly is the case in our relationship. Again, thank you for having me.
A little bit about my background in evaluation: it really started with what a lot of faculty members, especially in engineering, which is my disciplinary area, think of as the grunt work that nobody wants to do, namely assessment and continuous improvement, especially when it comes to accreditation. That's where my journey with assessment and evaluation started: leading assessment committees for our undergraduate programs as we were being visited and revisited by ABET. That grew into greater levels of responsibility with assessment and continuous improvement, and as Lana mentioned, I evolved from my work on educational research projects as a PI and co-PI and later became a Program Officer at NSF. During that time, I was blessed to be able to complete a Graduate Certificate in Educational Research Methodology. Around the same time I completed that, I started The PEER Group, which is my consulting firm. I am in the unique position that I get to wear all three hats, and I am excited to share with you from the different lenses those hats provide.
Lana Rucks:
Fabulous. Thank you so much for that, and thank you for your kind words as well. If you don't know who I am, I am Lana Rucks, Principal Consultant with The Rucks Group. We are a research and program evaluation firm that gathers, analyzes, and interprets data to enable our clients to measure the impact of their work. For close to 15 years, we've had the privilege of working with a number of project teams from higher education institutions on federally funded work. So, that is a bit about me. Let me turn things over to Alyce so that she can share who she is and what her role is on this Coffee Break Webinar. Alyce?
Alyce Hopes:
Hello. It is so nice to have everyone here this afternoon. As Lana mentioned, my name is Alyce Hopes, and I am the Outreach Coordinator with TRG. For today's webinar, however, I will be facilitating the Q&A sessions, and we have a few Q&A portions sprinkled throughout the webinar. As questions emerge for you, feel free to drop them in the question box, which you can find on the right side of your screen, denoted by the little question mark symbol. I should say, too, that there is a bit of a delay between the time you send a question and when we actually receive it on our end, so if we don't get to your question during that particular segment, we will do our best to get to it in the next segment, and anything we don't get to, we'll be sure to follow up on by email.
Additionally, there is a chat function that we may use to share additional resources as those emerge. The chat will only appear when there is a new message for you to review, so be sure to look out for any notifications that pop up. With that, I'll go ahead and pass it back over to you, Lana.
Lana Rucks:
Alright, great, thanks Alyce. Let's go ahead and jump into our conversation for today, starting with the value of evaluation. As I alluded to when setting the context for this conversation, individuals are often initially motivated to complete an evaluation because it's required, but there's a lot of value in evaluation beyond that. I am wondering, Yvette, given the multiple hats that you wear, if you can talk about the value of evaluation through the lenses of a PI, an Evaluator, and a Program Officer.
Yvette Pearson:
Sure. There are a lot of different sources of value, and they really relate to all three roles, I would say to different degrees within each. On the screen, we have triangles that represent PI teams, hexagons that represent Evaluators, and squares that represent Program Officers. And there is shading to show emphasis: the solid shapes indicate primary areas of focus, while the striped shapes indicate secondary areas. With that, I will walk through each of these.
With all projects, one of our primary areas of focus as PIs should be understanding our successes and also understanding our challenges, because I look at hurdles as something designed to be overcome, kind of like running a race. You don't win a race by running around a hurdle; you have to jump over it. So, as a PI, you really want to have that deep understanding. Program Officers and Evaluators have an interest in that as well. Program officers want to understand the challenges you are facing. I know during my time as a Program Officer, when I saw in an annual report that a PI team was having a particular challenge, it may be that I had run into that challenge myself on a project, or I had seen that challenge come up on another project. Sometimes I would pick up the phone and call the PI and say, "Hey, I noticed you are having a challenge with this. Might I offer some suggestions that might be helpful?" So, PIs can have that kind of communication with the program officer to help overcome those hurdles. And of course, the same goes for understanding and communicating successes.
Another important part is learning what works and what doesn't. You don't want to keep trying things when you know they don't work: "I'm gonna try it again, one more time." After you've run into the wall enough times, you should know, "Hey, I should probably try another way through." So, it's important to learn what works and what doesn't, and again, it's very important to both the PI and the Program Officer to know this. For PIs, I think it's pretty clear. Like I said, you don't want to keep trying something in a way that doesn't work; you want to figure out how to do it differently. As for program officers: I know that as PIs, our tendency a lot of times is to talk in the annual report about all the great things going on with our projects, and not necessarily to communicate what was not working. Whereas your Program Officer wants to see what's not working. Not only can they be helpful in the way I described earlier, but it also helps to inform them. When they are thinking about the portfolio, how to write the next solicitation, what they're looking for, and how they're guiding the PI community on approaches and what should be included, they want to know what's not working so that they can voice those things in future calls.
Of course, and maybe this would've been an overarching bubble, there is informing continuous improvement. You want to make sure you are always learning from the results of the evaluation, implementing what you need to implement to go to the next step, with an eye toward continuous improvement. It is very important for the PI and the evaluator to really have that communication between them, to be able to say, "Hey, this is how we get from point A, to point B, to point C, and ultimately to where you are trying to go with the outcomes of your project."
And then, I probably missed shading the Program Officer as a solid here, but demonstrating impact is very important. As a PI, you want to demonstrate to your Program Officer that they made a good investment when they decided to recommend your proposal for funding, and you want to demonstrate to your institution that you're contributing to the institution's mission and growth. Program Officers also want to be able to communicate the impact of their work when they are going for future funding. They want to be able to say, "Hey, our PIs have been able to accomplish this, or advance knowledge in this area." So, it's very important to be able to demonstrate the impacts of the work.
While we have these circles lined up, I want to communicate that these are not necessarily linear processes. These are really continuous improvement cycles in themselves, and one area can inform another. As you understand successes and hurdles and learn what works and what doesn't, you are feeding information back into your process to continue to grow that understanding. As you learn what works and what doesn't, you are informing your continuous improvement, and your continuous improvement goes back to help test and evaluate what is going on in your project. Then, continuous improvement informs how you communicate or demonstrate your impact. And as you look at impact not just at the end of the project but at milestones along the way, you can come back and say, "Okay, we saw this; now how can we grow it?" and invest that back into the continuous improvement process itself.
So, I just wanted to share those four ways. There are probably many others. You probably could have added ten more to this list, I’m sure, Lana.
Lana Rucks:
No, I think those are excellent points. One way I conceptualized what you were sharing is that sometimes project teams may think there are competing views and competing goals across these three roles. What's really nice about what you're sharing is that everyone is rowing in the same direction. There may be different emphases, different vantage points, but everyone is really rowing in the same direction, trying to accomplish the same goal. The other piece I thought was really cool was what you said about hurdles being something to jump over, not get around. So, some really great points. Why don't we pause there for a moment to see if there are any questions? Alyce?
Alyce Hopes:
Can you speak to this a bit more: in what instances would the program officer not necessarily be supportive if something didn't work?
Yvette Pearson:
Yea, so when program officers are evaluating the projects they are planning to recommend, or not, for funding, a couple of the things they are looking for are the potential to contribute to the body of knowledge, or to whatever outcomes the funding program is designed to achieve. With that, they understand that there is risk involved. You will hear the term "high risk, high reward." Some of the things that are going to be the most transformative carry a lot of potential risk of not actually working. There is a balance that program officers are trying to strike. They don't expect everything to work out perfectly; they just want to see, when they are looking at the proposal, that you have a well-conceived plan in place. If we knew what was going to work, especially if we are talking about research versus implementation, if we knew the answers already, we wouldn't need to research it. So, there is inherent risk in all projects. What they always want to know is: if things aren't working, how are you mitigating the challenges? What are you doing to shift direction? And how are you dealing with those things that are not working?
Like I said, it also helps the next time around. You will see solicitations that are a bit more prescriptive, some more than others, and learning what works and what doesn't helps inform what goes into those solicitations. Also, and I'm speaking with my NSF hat on because that is where I've worked, so I know different agencies are different, within the solicitations you will see them pointing to literature that helps inform how you shape and conceptualize your proposal. A lot of that literature actually comes from projects that NSF has funded. So, it is a way, again, to share, "Hey, this has worked in the past, or this hasn't worked in the past," in a way that helps inform what goes into future projects.
Lana Rucks:
Those are all really great points. If there are more questions, we'll hold those for a moment so that we can move to the next point. I think what you were bringing up is an excellent segue into our next conversation, which is about using evaluation findings. Given what you were saying before about continuous improvement and the opportunity for evaluation to inform that process, the question, from a practical standpoint, is: what suggestions can you provide to project owners, the project team, and the PI for actually incorporating evaluation findings into project implementation?
Yvette Pearson:
Yeah. Actually, for this one, I am really happy that you started with your puzzle, because as you mentioned, a lot of times people think, "Oh, it is an external evaluator," and they keep the evaluator disconnected from the team. As you and I both know, it's not uncommon to get that frantic email a week before the proposal is due that says, "Hey, I need an evaluator!" To that I say, begin with the end in mind. That doesn't seem like it really answers your question, but it does, because your evaluator should be involved from the very start. Your evaluator can help you define outcomes that are measurable, and they can provide insights on what has and has not worked from prior experience. So, having that conversation and having your evaluator integrated with your team at the very beginning of conceptualizing your project is key to actually getting findings that you can do something with.
The next thing is to think about what happens when you get results from the evaluator. Don't think about it in terms of, "Okay, it's time for an annual report. I need an evaluation report. Can you get it to me? My report is going to be overdue in a week, and I have another proposal that's pending." You don't want your evaluation relationship to work that way. Optimally, you'll be engaging your evaluator throughout the year. They'll be engaged in project meetings because the evaluator wants, and needs, to observe the process. They need to be privy to the conversations, to the complexities, to the challenges, so that they can evaluate the process itself and provide feedback that helps you with that process. They can also document and assess findings toward your outcomes all along; it's not necessarily just one time a year, depending on the nature of your project. What that does is provide you information in what I call a "just in time" sort of way that informs your internal decision-making. That is huge, because what you don't want to do is wait until you get into that "I need this evaluation report" mode and never have a chance to digest the findings and use them to plan for what comes next. That ongoing evaluation communication is critical to internal decision-making.
And then a third point, going back to our last bubble on the previous question, is communicating the impact. Again, you're not waiting until the 5-year project is over to talk about impact. You are looking for those little nuggets along the way, and as you have those conversations with your evaluator, they're able to highlight the impacts that are standing out. If you look at your NSF annual report, you'll notice there is always a section for you to describe your significant findings, or a significant achievement or accomplishment. That is where you say, "Okay, we are only a year in, but we were able to recruit x number of students, or we were able to get out a paper our first year." And if you see something you've learned that has gone into practice at your institution, those are impacts you want to communicate along the way, and not just at the end of the project.
Lana Rucks:
Good, good. Really excellent points. I don't want to take any time away from questions. Alyce, are there any questions that have come in?
Alyce Hopes:
In regard to communicating impact, what kinds of considerations should be made at the development stage?
Yvette Pearson:
Again, if you sit down with your evaluator at the very beginning, probably one of the first things they'll want to talk to you about, after they get the gist of the nature of your project, is a logic model. A logic model is basically a table, or it can be a figure, that maps the outcomes and the outputs of your project to the activities and the inputs. When building a logic model, an evaluator typically starts at the right side of the diagram, which says, "At these different stages (that might be short-term, mid-term, and long-term), we expect to achieve these different outcomes," and then works backwards from there. So, what would be the outputs at different stages? Those are your more tangible deliverables. Then you start asking the questions that form the meat and potatoes of the project itself: What activities are going to get us those deliverables? What activities are going to get us to accomplish these outcomes? And you start to fill that in.
That is very helpful, because I think a lot of people think of a logic model only in the context of evaluation, but I use it regularly when coaching project teams on proposal development and on developing programs and initiatives. We start with a logic model because it really helps you focus your activities, and it keeps you from throwing the kitchen sink at whatever it is you're doing. There are fifty best practices out there for what you're trying to accomplish, but how do you focus yours specifically on the outcomes you're trying to obtain? This logic model that your evaluator will help you work through is a great planning tool. You get all of those activities, and then you work another step backward in the chart, all the way to the left side, and ask, "Okay, what do I need as inputs to make these activities possible?" So, that's one thing your evaluator can help you do right off the bat when you are conceptualizing your project.
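To make the backward design concrete, here is a minimal, hypothetical logic model sketch, built from the outcomes backward the way Yvette describes. The project and every entry in it are illustrative assumptions for this write-up, not details from the webinar:

Hypothetical project: a summer bridge program aimed at improving first-year STEM retention.

Long-term outcomes: improved STEM degree completion rates at the institution
Mid-term outcomes: higher first-to-second-year retention among participants
Short-term outcomes: gains in math placement scores and participants' sense of belonging
Outputs: 30 students served per cohort; pre/post assessment data; an annual report
Activities: a four-week summer bridge session; peer mentoring; faculty-led math workshops
Inputs: grant funds; program staff; classroom space; evaluator time

Read top to bottom, the sketch works backward from outcomes to inputs; read bottom to top, it answers the questions Yvette poses: what activities will get us the deliverables and outcomes, and what inputs make those activities possible?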
Lana Rucks:
You know, in conversations I've had with you on similar topics, one of the words you've used that I really like is intentionality. I think that is what the logic model reflects. Evaluation isn't something separate; you're using it for project development and being very intentional in thinking about what you're trying to accomplish.
Yvette Pearson:
Absolutely, and I'll add this: some solicitations will ask you to include a logic model. Try to create space to include one even if it's not asked for, because it shows an extra level of thought and intentionality that has gone into the way you've designed your project to work.
Lana Rucks:
Right. Very good points. Let's go on to the last topic we wanted to cover today, which is unexpected findings. As you've been saying, things don't necessarily go the way you plan. So, the question to you is: how should project teams handle evaluation findings when they're not what the team expected, or maybe not even what they wanted?
Yvette Pearson:
Yes. And maybe I'll start with the "not what they wanted" part: certainly, when you get findings that you don't want, they're unexpected, right? There are four different things to think about here. One is how we can shift our frame in terms of how we view evaluation. As we've mentioned, it's not this purely external piece; it's really integral to your project's success. When we shift our frame to think of it that way, we start to think about things a little differently, including our relationship with the evaluator. I would say to think of your evaluator as that good friend or trusted colleague who will not just flatter you, but who can sit down, have a conversation, and tell you the truth about yourself, because sometimes we just have to have those moments with people who can tell us like it is. If we think about how we would accept feedback from a loved one or a trusted colleague who is looking out for the best for us, and in this case, for our project, that helps us go a long way toward accepting those unexpected findings, especially when they're things we don't want to see.
Then the next thing, and if you'll click the next one as well, because these two really go hand in hand, is understanding reality. We cannot run away from reality. I used the hurdle example earlier, but I'll use another very real example for me, about how we try to avoid the reality of the challenges we are facing. When I go into the bathroom every morning, the scale is right there to my left, and I don't want to get on it because I don't want to face the reality of what those little numbers are going to tell me. But by not getting on the scale, I'm just avoiding it, and it's not helping things get better. I have to be willing to say, "Hey, let me step on this scale, get the reality, and then accept what the reality is." It's the same with evaluation findings when things aren't going exactly as we planned. We can't deal with anything we can't acknowledge. If I can't acknowledge the numbers on the scale, I can't do anything about them. Same with evaluation results: we have to take them, process them, accept them as fact, and then figure out the next part, which is how we do the very hard work of moving forward and correcting where we need to correct. And, going back to that continuous improvement cycle, how do we continue to grow and evolve?
Let me put another spin on it, because I know we don't like talking about the negative, but it's necessary. You can also have unexpected findings and unexpected things come up that are really good. On one project I worked on, if you looked at our process evaluation, it would tell you we were behind schedule: we hadn't gotten to this stage of the research yet, and we hadn't done the things we were supposed to have done by these milestones. But what caused those delays was that when we did some initial data collection, we found some interesting things in our data that led to an unplanned first-year conference paper, which ended up winning a best paper award. That's one of those unexpected things that came along the way that was a really good thing, had a really good outcome for our project, and produced an extra output we hadn't even planned on.
Sometimes there are things, whether with progress toward your outcomes or with your team dynamic, that you really need to fix. Be willing to sit and grapple with those findings, and use your evaluator, because trust me, we have seen a lot of things. You can use us as a sounding board. It's not just about us providing the feedback; we can also provide input and recommendations on how you get past your hurdle.
Lana Rucks:
Those are all really great points, and I think you're right: sometimes we think of the unexpected as negative, but it can also be positive. We are at time, so we can't get to the last question segment, but we will follow up by email with any remaining questions.
Yvette, I just want to thank you so much for being with us today and sharing your insights. I will try to do a quick summary of some of the points you've brought up. One, use evaluation to help improve processes and outcomes. In a very related vein, to optimize project implementation, use evaluation to inform project-level decision-making. And then, really importantly, grapple with the evaluation findings, whether they're what you expected or not, whether they're positive or negative; just make sure you're actually looking at them. Yvette, do you have any quick final thoughts before we wrap up?
Yvette Pearson:
No, I just want to say thank you again for inviting me. This was great, and I actually did have coffee during my coffee break, so thank you.
Lana Rucks:
Very good, well thank you! And just real quickly before we go: everyone, mark your calendar for our next Coffee Break Webinar in October, "Don't Tell Me I Have to Write that": 4 Tips to Reduce the Grant Proposal Frenzy.
Everyone, have a great rest of the day and thank you so much for joining us. Talk to you soon.