I had a professor who believed that you could measure anything, even the impact of prayer. For many, that may seem like an arrogant pronouncement, but what he was illustrating was that measuring fuzzy constructs requires thinking outside the box (and besides, there are actual studies that have measured the impact of prayer).
In much of the work at The Rucks Group, we encounter things that are difficult to measure. We often deal with clients' understandable angst about measuring nebulous variables, such as changes in a coordinated network, the impact of adding a new role like a coach or navigator, or the effect of outreach activities meant to increase interest in a particular field.
One approach to measuring difficult-to-measure constructs is the counterfactual survey (see our earlier blog post on counterfactual surveys).
Whether or not you use a counterfactual survey, when measuring difficult-to-measure variables it is essential to build a case that the intervention is making a difference through a "preponderance of evidence." There is rarely a single magic bullet; the evidence usually comes from multiple observable outcomes. In legal terms, it is akin to building a circumstantial case.
With the preponderance of evidence in mind, our team often talks about "telling the story" of a project. Here are two approaches for telling that story effectively in an evaluation context.
Incorporate mixed methods for data gathering
Using a mixed-methods approach in an evaluation can paint a compelling picture. For instance, many of the projects we work with strive to build relationships with industry partners because of those partners' important role in curriculum development. Measuring changes in industry partners' involvement, as well as the impact of these relationships, is very challenging. However, we have found three useful ways to measure industry partnerships:
1. conversations with the project team to learn about the impact of the industry partnerships (e.g., stories of donations or assistance in identifying instructors);
2. data from the industry partners themselves (gathered through surveys or interviews); and
3. rubrics for tallying quantitative changes that result from industry partnerships (sketched below).
Incorporating multiple approaches to data gathering is one way to measure otherwise nebulous variables.
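To make the third item concrete, here is a minimal sketch of what a tallying rubric might look like in code. The categories and records are hypothetical, chosen only to illustrate the bookkeeping; they are not drawn from any client project.

```python
# Minimal sketch of a rubric for tallying quantitative changes from
# industry partnerships. Categories and records are hypothetical examples.
from collections import Counter

# Each record notes one observed partnership outcome in a rubric category.
observed_outcomes = [
    "equipment_donation",
    "guest_instructor_identified",
    "curriculum_review_session",
    "equipment_donation",
    "internship_placement",
]

# Tally outcomes by category so changes can be compared across project years.
tally = Counter(observed_outcomes)
for category, count in sorted(tally.items()):
    print(f"{category}: {count}")
```

Run periodically, a tally like this turns scattered anecdotes into counts that can be compared from one project year to the next.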
Leverage what is easily measurable
Another common challenge is measuring the broader impact of outreach activities. For one client with this goal, our team struggled to find credible evidence because the outreach targeted two different audiences: individuals within a grant-funded community, and the larger general audience who might be interested in that community's work.
For some time we wrestled with how to demonstrate successful outreach to the general audience. As we reviewed the available data, it dawned on us that we could leverage the project's website traffic, because the grant-funded community had a known size. We assumed an average number of visits each known community member would make to the site, and subtracted those expected visits from the total number of website visitors. The remainder became our estimate of the general audience: individuals outside the grant-funded community who accessed the project's website.
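As a rough illustration, the arithmetic behind that estimate looks like the sketch below. All of the numbers are hypothetical placeholders, and the visits-per-member figure is exactly the kind of assumption that has to be justified and documented for each project.

```python
# Minimal sketch of the general-audience estimate described above.
# All numbers are hypothetical placeholders, not client data.

known_community_size = 250      # members of the grant-funded community (known)
assumed_visits_per_member = 4   # assumption: average site visits per member
total_site_visits = 3800        # total visits reported by web analytics

# Visits we would expect if known community members visited as assumed
expected_community_visits = known_community_size * assumed_visits_per_member

# The remainder is our estimate of general-audience reach
estimated_general_audience = total_site_visits - expected_community_visits
print(estimated_general_audience)  # 2800 under these assumptions
```

Because the result is only as credible as the visits-per-member assumption, we treat it as one strand in the preponderance of evidence rather than as a definitive count.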
We then combined this audience estimate with other data, using a mixture of methods, to tell a cogent story. We have since used the same approach for other clients, sometimes turning to Google searches or literature searches to establish a reference-point number.
Hopefully these tips (along with a prayer or two to help with insight) will help the next time you’re confronted with difficult-to-measure variables.