Webinar: Developmental Evaluation

Presented by: The Rucks Group

Developmental Evaluation: What is it? How do you do it?

September 10, 2020

Developmental evaluations apply evaluation principles to nascent innovations and complex systems. This approach is useful for evaluating new initiatives, as evidenced by the Department of Labor’s solicitation for Strengthening Community Colleges Training Grants, which prescribes that a developmental or adaptive evaluation be incorporated into the grant-funded work.

In this webinar, we will introduce the key concepts of developmental evaluation and explain how to use them by walking through a Department of Labor-funded project that used developmental evaluation. Participants will leave the webinar with a better understanding of what developmental evaluation is and how it is applied.

Transcript

Part 1:

Welcome, everyone! Thank you for joining us today for our coffee break webinar, Developmental Evaluation: What Is It? How Do You Do It? Hopefully you have a cup of coffee with you, and if the weather where you are is like it is here in Dayton, Ohio, then you have a cold cup of coffee. Before we get started, let me go over a couple of housekeeping items:

It’s important that you have an opportunity to ask questions, so use the question function on your computer to do that. We have with us today Alyce Hopes, our outreach coordinator, who will be helping to moderate the Q & A as well as to address any questions or computer issues that you may have. Let me introduce myself to you as well – I’m Lana Rucks, the principal consultant of The Rucks Group. The Rucks Group is a research and evaluation firm that gathers, analyzes, and interprets data to enable our clients to measure the impact of their work. We were formed in 2008, and over the past several years we’ve had the privilege of working with a variety of clients, primarily within higher education and on grants funded by private foundations and federal agencies such as the National Science Foundation, the Department of Education, and the Department of Labor.

To create context for today, I want you to think about a time when you’ve renovated a space or remodeled a room. If you’re like me, you had expectations of something like this – a very pristine, nice, clean space. Also, if you’re like me, the reality of family intrudes and the way the room actually looks is probably more like this – not so pristine and a little messy. Well, that can happen too when we are talking about grants.

When we’re planning out a grant, we have expectations that things will go a certain way, but then reality hits and some of those emerging factors impact an otherwise well-crafted plan. This brings us to the importance of what developmental evaluation is. Developmental evaluation is really intended as a structure for handling the messy and unexpected aspects of implementing a new initiative. Developmental evaluation used to be a concept that was primarily discussed in evaluation circles, but I’m starting to see that conversation move beyond the evaluation space. For instance, in a recent funding opportunity announcement by the Department of Labor (DOL) there was an encouragement to use developmental evaluation.

With this context, there are a couple of things I want to be able to do during our time together:

  • I want to define what developmental evaluation is;
  • Then I want to simplify that and give some information about what that actually looks like in practice;
  • As well as provide some examples of how to incorporate developmental evaluation into an initiative; and
  • I want to be able to answer your questions. Again, please make sure to use that question function on your computer.

So, let’s start first by talking about, what is developmental evaluation?

Developmental evaluation was originally introduced into the literature by Michael Quinn Patton, who is a prolific writer on evaluation. He defines developmental evaluation as follows: “[developmental evaluation] supports innovation development to guide adaptation to emergent and dynamic realities in complex environments”. Let’s try to unpack this just a little bit.

It “supports innovation development”. Developmental evaluation is really relevant to new ideas and new initiatives, as well as projects that have not been well established, or initiatives for which there is not a lot of associated research, even in regard to how to implement them appropriately.

The other piece of this is that it’s intended to “guide adaptation”. So, it helps to provide insight into how to be flexible and how to adapt to situations and occurrences that are not expected. This is going to happen because initiatives are being implemented within “emergent and dynamic realities”. I know for many of us right now, in the current context of the COVID-19 pandemic, there’s a lot that we’ve had to adjust to in terms of emergent and dynamic realities.

But even in normal situations, implementing new initiatives very often occurs within these emergent and dynamic contexts, and thus the environment in which they’re being implemented is complex. When you take all these factors together, there are just a lot of variables affecting how the project is being implemented.

So, that gives you a sense of what developmental evaluation is in terms of its denotation. Let me see if I can put some meat on the bones by thinking about the characteristics, or some of the connotations, associated with developmental evaluation – or at least the connotations that I think of (so this is really from my perspective, trying to unpack what exists in the literature around developmental evaluation).

The first characteristic of developmental evaluation is that it reflects a high level of interest in learning by the project team. Let me give you an example of that:

We worked with a project that was funded by the Department of Labor, and the purpose of this project was to create a national model for flexible apprenticeships to increase the pipeline of workers in a high-demand area. What’s important to know is that in this context the project team partnered with an evaluation entity even though the funder didn’t require it. That’s in large part because they wanted to make sure that learning, and that evaluative information, was intentionally being gathered.

Not only did this project team have this emphasis on evaluation, but they also made sure to actually use the evaluation findings. So, when measuring the before-and-after learning of a workshop, they look for anomalies in the findings. For instance, when people report that they knew more about a particular topic before the workshop than afterwards, the team digs back through that workshop training to understand whether something was conveyed that may have been confusing. Even at a small level, with something like response rates, they’re very responsive.

In one situation in which they were disseminating a survey, they had a nine percent response rate. They were uncomfortable with that in terms of the quality of the data and questioned its ability to guide decisions (because we know that the larger the response rate, the more representative the results are of the target audience). So, they went back, strategized on how they could increase the response rate, and were able to increase it to 40 percent. Together, this really reflects that high emphasis on, and interest in, learning.
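
To make these two checks concrete, here is a minimal sketch in Python with entirely hypothetical numbers: flagging workshop topics where reported pre-workshop knowledge exceeds post-workshop knowledge, and computing a survey response rate. This is not the team’s actual tooling; it only illustrates the kinds of checks described above.

```python
# Hypothetical pre/post self-ratings (1-5 scale) for each workshop topic.
ratings = {
    "safety procedures": {"pre": 3.1, "post": 4.2},
    "equipment setup": {"pre": 3.8, "post": 3.4},   # anomaly: pre > post
    "quality control": {"pre": 2.9, "post": 4.0},
}

for topic, r in ratings.items():
    gain = r["post"] - r["pre"]
    if gain < 0:
        # A negative gain suggests the training may have been confusing,
        # so the team would revisit how this topic was conveyed.
        print(f"Review '{topic}': pre={r['pre']}, post={r['post']}")

# Response rate: completed surveys divided by surveys distributed.
distributed, completed = 200, 18
print(f"Response rate: {completed / distributed:.0%}")  # 9%: too low to trust
```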

I think another piece to keep in mind about developmental evaluation is that it doesn’t necessarily involve new methods and new techniques – it’s about how you approach traditional evaluative approaches, how they’re combined, and the perspective that you take in the implementation process. In traditional evaluation you have the formative evaluation and then the summative evaluation associated with it. From a developmental standpoint, what you can have instead is a developmental emphasis followed by the summative evaluation. The way to think about that developmental piece is that it really maps onto the “plan-do-study-act” model: you cycle through, tweaking and making modifications to the initiative, until you feel comfortable enough to begin the summative evaluation.
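
As an illustration of that sequencing, here is a minimal sketch of cycling through plan-do-study-act until the implementation settles, and only then beginning the summative evaluation. The steps and the stopping rule are hypothetical placeholders, not a prescribed method.

```python
def pdsa_cycle(initiative, cycle):
    """One plan-do-study-act iteration; returns True once the team feels the
    implementation has settled enough to begin the summative evaluation."""
    print(f"Cycle {cycle}: plan - decide what to adjust in {initiative}")
    print(f"Cycle {cycle}: do - implement the adjusted version")
    print(f"Cycle {cycle}: study - gather and review evaluative data")
    print(f"Cycle {cycle}: act - fold what was learned back in")
    return cycle >= 3  # stand-in for "team is comfortable with the model"

initiative = "flexible apprenticeship model"
cycle = 1
while not pdsa_cycle(initiative, cycle):  # developmental emphasis first
    cycle += 1
print("Developmental phase complete; begin the summative evaluation.")
```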

I want to talk about another DOL grant. In this particular situation, the project was focused on providing displaced and incumbent workers an opportunity to quickly advance their credentials in a field with job openings. Participants enrolled in a credentialing program on a rolling basis, and what was really important was retention in, and completion of, that program. Over a 21-month period we tracked retention rates, and this is what the retention rate looked like over that time frame. I should highlight that those first four cohorts were really focused on developmental evaluation, in which there was a lot of consideration and a lot of changes in terms of how the project was being implemented. Once that phase was completed, the actual outcomes evaluation, the summative evaluation, was conducted. That’s important because in that first learning phase the retention rate was close to 48%, but once a lot of the new tools and resources were implemented the retention rate actually increased to 63%. So, this is a way in which developmental evaluation can be folded into an evaluation even though you’re still primarily interested in summative evaluation components.
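
Here is a minimal sketch of that comparison with invented cohort counts; only the roughly 48% and 63% phase-level retention rates echo the example above. It simply pools the cohorts in each phase and computes retention.

```python
# (cohort, enrolled, retained, phase) - all counts are hypothetical.
cohorts = [
    (1, 25, 12, "developmental"),
    (2, 30, 14, "developmental"),
    (3, 28, 13, "developmental"),
    (4, 27, 14, "developmental"),
    (5, 32, 20, "summative"),
    (6, 30, 19, "summative"),
]

for phase in ("developmental", "summative"):
    enrolled = sum(e for _, e, _, p in cohorts if p == phase)
    retained = sum(r for _, _, r, p in cohorts if p == phase)
    print(f"{phase} phase: {retained}/{enrolled} = {retained / enrolled:.0%}")
```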

Part 2:

Let me go on and talk about a couple more characteristics of developmental evaluation. I think another characteristic of a developmental evaluation is related to how the evaluator is empowered to play a consultative role. Let me talk for a minute about what I mean by that:

When you’re talking about definitions of evaluation, there’s a common definition that’s centered on the idea of determining the merit, worth, value, or significance of a program. Then there’s a more developmentally inspired definition as well, which is much more focused on the use of social science research methods to systematically investigate the effectiveness of social intervention programs. What’s really key about the difference between those definitions is that the first is much more focused on accountability and judging, whereas the second is much more focused on improving and learning. The definition that you bring to the project will shape the evaluator’s role as well as the perspective that the evaluator brings to the project. This has implications for how the project team works together.

Another aspect of this is that when you think about the evaluator playing a consultative role on the project team, you’re looking at the evaluation from much more of a team perspective – much more of a collaborative perspective. One thing the evaluator is striving to do within this context is to translate the evaluation findings into action-oriented data. What’s really key to accomplishing that is that the project owners have to provide some transparency about the challenges and issues. By the two entities working together in this way, the external evaluator can help to identify additional data-gathering approaches and additional insights from the data that may help the project owners deal with and address some of the challenges and issues that they’re encountering.

I did want to just go back for a moment to my comments on the differing definitions related to evaluation.

I want to make clear that I’m not saying accountability is not an important part of evaluation – I think it is. I think that accountability, what I call proving, is really key, but I also believe that proving is nested within improving. If you focus on improving (the learning component), you will be able to take care of the proving component too, but you won’t necessarily be able to take care of the improving if you’re only focused on the proving component.

The final characteristic that I think exists around developmental evaluation relates to addressing ongoing emergent factors. You’re able to address ongoing emergent factors because all the previous elements that we’ve talked about have already been put in place.

Because there’s this high level of learning, because you’re using traditional evaluative approaches in a slightly different way, and because the evaluator is empowered to play a consultative role, you’re able to address varying emergent factors. And how does that play out in reality? Well, one piece has a lot to do with how evaluation activity maps onto the project’s life cycle.

One challenge that can happen is that there are varying decision points occurring within the project, and the data, or the involvement of the evaluator, is not in alignment with where those decision points are occurring, so evaluative information is not available when those decisions are actually being made. From a developmental evaluation perspective, one thing you want to make sure of is that the evaluator’s involvement is much more closely aligned with when those decision points actually occur.

I think we’re very close to time here, and there are a couple of things that I want to go over before we leave:

I want to emphasize the take-home lesson from this conversation, which is that developmental evaluation is not necessarily about unique tools, but it is about a unique perspective. It’s not that the tools and methods being utilized are really different; it’s how they’re folded into the project that brings value and understanding and helps you adapt to these unique situations.

I didn’t directly mention this earlier, but I do want to highlight one of the webinars we produced earlier this year, Now What? How to Use Evaluation Findings for Project Continuous Improvement. Some of the ideas presented in today’s webinar are expanded on there, and it may be something else that you want to review.

I also want to very quickly give a plug for our next webinar, which will be on November 5th and will focus on 5 Best Practices for Hiring the Right Evaluation Partner.

That concludes our coffee break webinar today. Thank you so much for joining, and I look forward to talking with you next time.
