Questions Frame the Lens for Answers*

Far better an approximate answer to the right question … than the exact answer to the wrong question.

— John Tukey, Statistician

If I had an hour to solve a problem and my life depended on the solution, I would spend the first 55 minutes determining the proper question to ask, for once I know the proper question, I could solve the problem in less than five minutes.

— Albert Einstein, Physicist

Much of what we do, at both the individual and organizational levels, is driven by questions. Questions are the lens through which we see what is and is not possible. It has been my experience that teams and organizations, whether they are working to improve student success or to address workforce demands, sometimes go astray when they seek answers to the wrong questions.

This error generally happens not because of a lack of intelligence, work ethic, or even passion, but because those working on the problem respond too quickly to the high pressure for a solution. The perceived need to hurry often pushes teams out of the problem space and into the solution space prematurely, because we are so often, metaphorically, “building the plane while we are flying it.”

This pressure is keenly felt when attempting to evaluate an initiative. Consequently, the focus generally lands on “What can we measure?” which on the surface seems like exactly the question that should be asked. But I have found that frustrations mount when what to measure becomes the focus of the evaluation before the project team and evaluator together address other important questions, such as “What do we want to know?”

I once worked with a client project team grappling with what should be measured for the evaluation to demonstrate project outcomes. Rather than dwelling on their dilemma about what to measure, I asked a series of questions: “What do you want to learn about your project?” “How does this project change behavior?” “What do your stakeholders want to know?” As team members answered, I pointed out how their responses led to what they really should be measuring, regardless of how difficult the relevant data might be to obtain.

This experience reminded me that once you focus on what people want to learn from an intervention, it is easier to figure out how to measure outcomes.

My advice is: do not skip over the questions of what you want to learn. Sure, those questions can be challenging, often because of a fear that what matters cannot be measured. But doing this deeper thinking up front avoids angst at the end about inadequate data or measures that lack meaning, and it often reveals novel ways of measuring outcomes that at first seemed impossible to capture. Yes, the preliminary work will take time, but the benefits are well worth it.

*Portions were originally published in the October 2012 issue of the Dayton B2B Magazine.

Using Counterfactual Surveys to Improve the Evidence-Gathering Process

Contextual information plays an important role in interpreting findings. Many of us have experienced this when a child comes home and says they got 43 points on a test. Was it 43 out of 45, out of 100, or out of some other total? Depending on the answer, there is either praise or a very serious conversation.

In evaluation and research the same need for context to interpret findings exists. But how you get to that context can vary widely.

One common approach to creating context is to use a pre-/post-test design. The purpose of a pre-/post-test is to compare what was occurring before an intervention to what is occurring after it, focusing on particular outcome measures.

One challenge with the pre-/post-test is that respondents’ standards for judgment may shift because of the intervention itself. With additional information, your perception of what “good” is, and of how good you are, can change. This shift can result in similar pre-intervention and post-intervention responses even when real change has occurred.

One solution to this problem, which our team has successfully incorporated into much of our work, is the counterfactual survey, also called a retrospective survey. In these surveys, respondents are asked at the same time to rate both their current attitudes or perceptions and their attitudes or perceptions prior to participating in the intervention. In this way, respondents can make their own adjustments for how their standards of judgment have shifted.

To see how a counterfactual survey looks in practice, let’s examine one of our first projects in which we incorporated this approach.

In this project, STEM academic administrators participated in a year-long professional development opportunity to enhance their leadership skills. Before any activities began, we disseminated a survey asking participants to rate their self-perceptions as leaders on a scale of 1 (least like me) to 7 (most like me). Consistent with a traditional pre-/post-test, participants were asked to complete the same survey at the end of the professional development opportunity, as shown in the figure below.

[Figure: Traditional pre-/post-test design]

To incorporate the counterfactual survey, after participants answered items about how they perceived themselves as leaders, they were presented with items asking how they would have rated themselves before participating in the professional development opportunity. The counterfactual design therefore looks like this:

[Figure: Counterfactual (retrospective) survey design]
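To make the pairing concrete, here is a minimal sketch of how the paired items might be structured; the item wording and field names below are hypothetical illustrations, not the project’s actual instrument (Python):

    # Hypothetical paired items for a counterfactual (retrospective) survey.
    # Every item uses the same scale: 1 (least like me) to 7 (most like me).
    paired_items = [
        {
            "current": "I delegate decisions to the people closest to the work.",
            "retrospective": "Before this program, I delegated decisions to "
                             "the people closest to the work.",
        },
        {
            "current": "I communicate a clear vision to my department.",
            "retrospective": "Before this program, I communicated a clear "
                             "vision to my department.",
        },
    ]

    # Respondents answer both versions of each item in a single sitting, so
    # both ratings are made against the same (post-intervention) standard.

Because both ratings come from one administration, any shift in the respondent’s standard of judgment applies equally to both.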

It should be noted that a counterfactual design does not require including a pre-test questionnaire; in this situation we just happened to do so.

Interestingly, the pre-/post-test responses on several items were very similar, which is not an uncommon occurrence (selected items are presented below).

[Figure: Selected items with similar pre- and post-test responses]

However, when you add in the counterfactual responses, an interesting pattern emerges: looking back, respondents rated their pre-intervention selves lower than they actually had on the pre-test.

[Figure: Counterfactual responses alongside the pre- and post-test responses]

In follow-up interviews with participants, it was apparent that the standard that participants used had indeed shifted. In other words, they didn’t know what they didn’t know and so rated themselves higher on items before the intervention than after learning more about leadership.
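As a rough numerical sketch of this pattern, consider the following; the ratings are illustrative numbers, not the project’s actual data (Python):

    import statistics

    # Illustrative 1-7 self-ratings for one leadership item.
    # Each tuple: (pre-test, post-test, retrospective pre-test)
    responses = [
        (6, 6, 4),
        (5, 6, 3),
        (6, 5, 4),
        (7, 7, 5),
        (5, 6, 4),
    ]

    pre = statistics.mean(r[0] for r in responses)
    post = statistics.mean(r[1] for r in responses)
    retro = statistics.mean(r[2] for r in responses)

    # The traditional pre-/post-test change looks flat because the rating
    # standard shifted between the two administrations.
    print(f"post - pre:       {post - pre:+.2f}")    # +0.20

    # The counterfactual change compares two ratings made against the same
    # standard, revealing growth that the pre-/post-test comparison masks.
    print(f"post - retro-pre: {post - retro:+.2f}")  # +2.00

Here the pre-/post-test comparison suggests almost no change, while the counterfactual comparison surfaces the growth that participants themselves recognized once their standard had shifted.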

A counterfactual survey holds a lot of promise, particularly in conjunction with other data sources. It is appropriate only for attitudinal or perception data, not for objective measures of skill or knowledge. But where it fits, a counterfactual survey may illuminate changes that would otherwise go undetected.