Correlation vs. Causation: Understand the Difference for Better Interventions

If you have taken a research methods course at some point, you may remember the mantra “correlation does not imply causation.” People say they understand the difference between correlation and causation, but when I hear them talk, I can tell that they don’t.

As a quick refresher, correlation simply refers to what occurs when two variables co-vary: as one variable increases, so does the other (a positive correlation, shown on the left in Figure 1), or as one variable increases, the other decreases (a negative correlation, shown on the right). Causation, on the other hand, can be thought of as a specialized correlation in which the two variables co-vary because one of them is driving the other.

Figure 1. Simplified representation of a “positive” and “negative” correlation.

The distinction between correlation and causation is clearer when we look at variables that are correlated simply by chance. For example, a correlation exists between the number of letters in the winning word of the Scripps National Spelling Bee and the number of deaths due to venomous spiders (Vigen, 2015): as the number of letters in the winning word increased, so did the number of deaths by venomous spiders that year.

If your reaction to that correlation is that the two cannot really be related because there is no reason for the correlation to occur, what you are actually trying to establish is a causal relationship. In that case you are correct: there is no causal relationship between these two variables.

The lack of a causal relationship is clearer when two variables are in no way conceptually related. However, even when a correlation exists between two variables that do appear to be related, a causal relationship still has not been established.

Take, for example, the obvious correlation between class attendance and course performance. The two variables are correlated such that course performance tends to increase with class attendance. If we do not address the possibility of other variables, we cannot say with certainty that class attendance increases performance because class attendance could be a proxy variable for course engagement, for instance, or some other circumstance.
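To make the proxy-variable point concrete, here is a minimal sketch in Python with made-up numbers. A hidden “engagement” variable drives both attendance and performance, so the two observed variables end up strongly correlated even though, in this toy model, neither one causes the other.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hidden driver (not directly observed): course engagement.
engagement = rng.normal(0, 1, n)

# Attendance and performance each depend on engagement plus noise;
# neither variable directly influences the other in this model.
attendance = 0.8 * engagement + rng.normal(0, 0.5, n)
performance = 0.8 * engagement + rng.normal(0, 0.5, n)

# The observed correlation is strong even though there is no direct
# causal link between attendance and performance here.
r = np.corrcoef(attendance, performance)[0, 1]
print(f"Correlation between attendance and performance: {r:.2f}")
```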

Why is it important to disentangle these two concepts?

Disentangling these two concepts is more than just an interesting intellectual exercise; the distinction is important for achieving optimal outcomes. For example, when making big decisions about what to do to improve student success, we have to be careful that we are pressing on the right levers, the ones that will deliver the return on investment. When we think about interventions, the more we understand about the causal variable itself, the better the intervention we can design.

Consider the prevailing understanding that first-generation students are at risk for not completing a degree. It is critical for us to understand what the causal factor is in order to figure out a better approach for helping students who are the first in their families to attend college to persist and complete degrees. Any of the following could be causing the challenge that is “correlated” with a first-generation student not completing a degree: not understanding how to navigate college expectations; not having a strong resource network to troubleshoot issues; or feeling like an “imposter” whose lack of familiarity with campus life can lead to thinking that one does not belong in college.

If we understand what is occurring at the causal level, rather than oversimplifying or misusing the concept of correlation, then we will be in a better position to design more effective interventions.

Vigen, T. (2015). Spurious Correlations. New York, NY: Hachette Books.

Obtaining Credible Evidence of “Long” Long-Term Outcomes

Another challenge that The Rucks Group team sees across projects is what we call “aspirational goals.” This phrase is how we refer to goals and objectives that will likely not occur until after a project’s grant funding ends. Many projects have them. The question is: How do you measure them?

We struggled with measuring aspirational goals until a conversation with another evaluator created an “aha” moment: the transitive property from mathematics could be used to address this challenge.

As you may (or may not) recall from math class, the transitive property is this:

If a = b, and b = c, then a = c.

We can apply this mathematical property to the evaluation of grant-funded projects as well.

If, for instance, a college receives a three-year grant to increase the number of underrepresented individuals in a non-traditional field, progress toward the goal (which is unlikely to be reached within the three-year time frame, given that the first year will be dedicated to implementing the grant) can be gauged using a sequence of propositions that follow the logic of the transitive property:

  • Proposition A = Start with a known phenomenon that is linked to the desired outcome.

Green and Green (2003) [1] argue that to increase the number of workers in the field, the pipeline needs to be increased.

  • Proposition B = Establish that the project’s outcomes are linked to Proposition A.


The current project has increased the pipeline by increasing the number of underrepresented individuals declaring this field as a major.


  • Proposition C = Argue that while the project (because of its time frame) has not yet demonstrated the desired outcome, based on established knowledge it likely will.

If the number of individuals majoring in the field has increased, then, assuming a similar rate of retention, there will be more individuals graduating and prepared to work in the field.


By using the transitive property, it is possible to create a persuasive, evidence-based projection: by increasing the number of individuals majoring in the field, and therefore in the pipeline to become workers, the project has set in motion the changes needed to achieve its aspirational goals.
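To show how the propositions translate into numbers, here is a minimal sketch of the projection in Python. All figures, including the retention rate, are hypothetical; a real evaluation would substitute the project’s actual enrollment data and the institution’s historical retention-to-graduation rate.

```python
# Hypothetical figures for illustration only.
baseline_majors = 40              # underrepresented majors before the grant
current_majors = 65               # underrepresented majors at the end of the grant
retention_to_graduation = 0.6     # assumed historical retention-to-graduation rate

# Proposition B: the project increased the pipeline of majors.
additional_majors = current_majors - baseline_majors

# Proposition C: assuming a similar rate of retention, project the
# additional graduates who will be prepared to work in the field.
projected_additional_graduates = additional_majors * retention_to_graduation

print(f"Additional majors attributable to the project: {additional_majors}")
print(f"Projected additional graduates entering the field: "
      f"{projected_additional_graduates:.0f}")
```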


[1] This is a fictitious citation for illustration purposes only.

When Fuzzy Wuzzy Isn’t a Bear, But What You Need to Measure

I had a professor who believed that you could measure anything, even the impact of prayer. For many, that may seem like an arrogant pronouncement, but what he was illustrating was that in measuring fuzzy constructs you have to think outside of the box (and besides, there are actual studies that have measured the impact of prayer).

In much of the work at The Rucks Group, we encounter things that are difficult to measure. We often deal with clients’ understandable angst about measuring nebulous variables, such as changes in a coordinated network, the impact of adding a new role like a coach or navigator, or the impact of outreach activities intended to increase interest in a particular field.

One approach to measuring difficult-to-measure constructs is through the counterfactual survey (click here to read our blog about counterfactual surveys). 

Whether or not you use a counterfactual survey when measuring difficult-to-measure variables, it is essential to build a case that the intervention is making a difference through the “preponderance of evidence.” There is rarely a single magic bullet. The evidence, instead, usually comes from multiple observable outcomes. In legal terms, it is akin to building a circumstantial case. 

With preponderance of evidence in mind, our team often talks about “telling the story” of a project. Here are two approaches for effectively “telling the story” in an evaluation context.

Incorporate mixed-methods for data gathering

Using a mixed-methods approach in an evaluation can paint a compelling picture. For instance, many of the projects we work with strive to build relationships with industry partners to support their important work in curriculum development. Measuring the changes in industry partners’ involvement, as well as the impact of these relationships, is very challenging. However, we have found three useful ways to measure industry partnerships. They are:

    1. conversations with the project team to obtain information regarding the impact of the industry partnerships (e.g., any stories of donations, assistance in identifying instructors, etc.);
    2. data from industry partners themselves (gathered either through surveys or interviews); and 
    3. rubrics for tallying quantitative changes that result from industry partnerships (a minimal sketch of this idea follows the list). 
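As a minimal sketch of the third item, the hypothetical rubric below simply tallies reported partnership activities by category so that changes can be compared from year to year. The categories and counts are invented for illustration.

```python
from collections import Counter

# Hypothetical partnership activities reported by a project team in one year.
reported_activities = [
    "equipment donation", "guest lecture", "curriculum review",
    "guest lecture", "internship placement", "equipment donation",
    "curriculum review", "guest lecture",
]

# A simple rubric: tally each category of involvement so the counts can be
# tracked across project years.
tally = Counter(reported_activities)
for activity, count in sorted(tally.items()):
    print(f"{activity}: {count}")
```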

Incorporating multiple approaches to data gathering is one way to measure otherwise nebulous variables.

Leverage what is easily measurable

Another common challenge is measuring the broader impact of outreach activities. For one client with this goal, our team struggled to find credible evidence because outreach involved two different audiences: individuals within a grant-funded community and the larger general audience of individuals who may be interested in the work of the grant-funded community. 

For some time we struggled to find an approach that would demonstrate successful outreach to the general audience. As we reviewed the available data, it dawned on us that we could leverage the project’s website traffic, because the grant-funded community had a known size. We made an assumption about how many visits the website would receive if the known community members were to visit it. By subtracting that number from the total website visits, we arrived at our estimate of the general audience: individuals from outside the grant-funded community who accessed the project’s website.
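The arithmetic behind that estimate is simple subtraction; the sketch below uses invented numbers, and the assumed number of visits per known community member is the key assumption to state explicitly in any report.

```python
# All figures are hypothetical, for illustration only.
total_site_visits = 12_000       # analytics total for the reporting period
known_community_size = 250       # members of the grant-funded community
assumed_visits_per_member = 8    # stated assumption about community members' usage

estimated_community_visits = known_community_size * assumed_visits_per_member
estimated_general_audience_visits = total_site_visits - estimated_community_visits

print(f"Estimated visits from the grant-funded community: {estimated_community_visits}")
print(f"Estimated visits from the general audience: {estimated_general_audience_visits}")
```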

We then employed a mixture of methods to combine our audience calculation with other data to tell a cogent story. We have used this approach for other clients, sometimes using Google searches and literature searches to find a number as a reference point. 

Hopefully these tips (along with a prayer or two to help with insight) will help the next time you’re confronted with difficult-to-measure variables. 

Questions Frame the Lens for Answers*

Far better an approximate answer to the right question … than the exact answer to the wrong question.

— John Tukey, Statistician

If I had an hour to solve a problem and my life depended on the solution, I would spend the first 55 minutes determining the proper question to ask, for once I know the proper question, I could solve the problem in less than five minutes.

 — Albert Einstein, Physicist

Much of what we do, at both the individual and organizational levels, is driven by questions. Questions are the lens through which we see what is and is not possible. It has been my experience that teams and organizations, regardless of whether they are working to improve student success or to address workforce demands, sometimes go astray when they seek answers to the wrong questions.

This error generally happens not because of a lack of intelligence, work ethic, or even passion, but because those working on the problem respond too quickly to the high pressure for a solution. The perception that they have to hurry often results in teams moving too quickly from the problem space into the solution space, because they are often, metaphorically, “building the plane while also flying it.”

This pressure is keenly felt when attempting to evaluate an initiative. Consequently, the focus generally lands on “What can we measure?” which, on the surface, seems to be exactly the question that should be asked. But I have found that frustrations mount when what to measure becomes the focus of the evaluation before the project team and evaluator together address other important questions, such as “What do we want to know?”

I once worked with a client project team grappling with what the evaluation should measure to demonstrate project outcomes. Rather than dwelling on their dilemma about what to measure, I asked a series of questions: “What do you want to learn about your project?” “How does this project change behavior?” “What do your stakeholders want to know?” As team members answered those questions, I pointed out how their responses led to what they really should be measuring, regardless of how difficult the relevant data might be to obtain.

This experience reminded me that once you focus on what people want to learn from an intervention, it is easier to figure out how to measure outcomes.

My advice is: do not skip over the questions of what you want to learn. Sure, those questions can be challenging because of a fear that the answers cannot be measured. But doing this deeper thinking up front avoids angst at the end about inadequate data or measures that lack meaning, and it often reveals novel ways of measuring outcomes that at first seemed impossible to measure. Yes, the preliminary work takes time, but the benefits are well worth it.

*Portions were originally published in the October 2012 issue of the Dayton B2B Magazine.

Using Counterfactual Surveys to Improve the Evidence-Gathering Process

Contextual information plays an important role in interpreting findings. Many of us have experienced this when a child has come home and said they have gotten 43 points on a test. Was it 43 out of 45, 100, or some other point system? Depending on the response, there is either praise or a very serious conversation.

In evaluation and research the same need for context to interpret findings exists. But how you get to that context can vary widely.

One common approach to creating context is to use a pre-/post-test design. The purpose of a pre-/post-test is to compare what was occurring before an intervention to what is occurring after it, by focusing on particular outcome measures.

One challenge with the pre-/post-test is that respondents’ standards for making a judgment may shift because of the intervention itself. With additional information, your perception of what “good” is, and of how good you are, can change. This shift can result in similar pre-intervention and post-intervention responses even when real change has occurred.

One solution to this problem, which our team has successfully incorporated into much of our work, is the use of a counterfactual survey, also called a retrospective survey. In these surveys, respondents are asked to consider, at the same time, both their current attitudes or perceptions and their attitudes or perceptions prior to participating in the intervention. In this way respondents are able to make their own adjustments for how their standards of judgment have shifted.

To see how a counterfactual survey looks in practice, let’s examine one of the first projects in which we incorporated this approach.

In this project, STEM academic administrators were participating in a year-long professional development opportunity to enhance their leadership skills. Prior to participating in any activities, we disseminated a survey asking participants to rate their self-perceptions as leaders on a scale of 1 (least like me) to 7 (most like me). Consistent with a traditional pre-/post-test, participants were then asked to complete the survey again at the end of the professional development opportunity, as shown in the figure below.

[Figure: traditional pre-/post-test design]

To incorporate the counterfactual survey, after participants answered the post-test items about how they perceived themselves as leaders, they were presented with the same items and asked to rate how they would have responded before participating in the professional development opportunity. The counterfactual design therefore looks like this:

[Figure: pre-/post-test design with retrospective (counterfactual) items added at the post-test]

It should be noted that a counterfactual design does not require a pre-test questionnaire; in this situation we simply happened to include one.

What is interesting is that the pre-/post-test responses on several items were very similar, which is not an uncommon occurrence (selected items are presented below).

[Figure: selected items with similar pre- and post-test ratings]

However, when you add in the counterfactual responses, an interesting pattern emerges: respondents retrospectively rated their pre-intervention selves lower than they actually had at the time.

[Figure: the same items with retrospective (counterfactual) ratings added]

In follow-up interviews with participants, it was apparent that the standard participants used had indeed shifted. In other words, they didn’t know what they didn’t know, and so they rated themselves higher before the intervention than they did after learning more about leadership.
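The analysis itself is straightforward: compare the mean post-test rating not only to the traditional pre-test rating but also to the retrospective (“then”) rating collected at the post-test. The sketch below uses invented 1-to-7 ratings for a single item to show how a shift can surface in the retrospective comparison even when the pre and post means look nearly identical.

```python
import numpy as np

# Hypothetical 1-to-7 ratings for one leadership item (invented data).
pre = np.array([5, 6, 5, 6, 5, 6, 5, 6])         # traditional pre-test
post = np.array([5, 6, 6, 5, 6, 5, 6, 5])        # post-test
retro_pre = np.array([3, 4, 3, 4, 3, 4, 3, 4])   # "how I would have rated myself then"

print(f"Pre-test mean:       {pre.mean():.2f}")
print(f"Post-test mean:      {post.mean():.2f}")
print(f"Retrospective mean:  {retro_pre.mean():.2f}")

# Pre vs. post shows little change, but post vs. retrospective-pre
# reveals the growth respondents perceive once their standards recalibrate.
print(f"Pre/post change:     {post.mean() - pre.mean():+.2f}")
print(f"Retro/post change:   {post.mean() - retro_pre.mean():+.2f}")
```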

A counterfactual survey holds a lot of promise, particularly in conjunction with other data points. It is only appropriate for attitudinal or perception data, not for objective measures of skill or knowledge. But a counterfactual survey may serve to illuminate changes that would otherwise go undetected.


Looking for an Individual to Join Our Team!

We have recently experienced transitions in our team: Jeremy, who had been with us since 2015, left to work on his doctorate at Penn State, and Maggie Jaeger, who started with us as a research assistant, is now working on her doctorate at the University of Minnesota. We are sad to see them leave, but we are excited about the opportunities that are ahead for them!

As a consequence, we are seeking to bring another individual onto our team. If you want to work at a firm that discusses the nuances of survey design, optimal non-parametric tests, and best practices in data visualization, and yes, gets excited about Pi Day, then we invite you to review the job description and submit an application!

Job Description – Research and Evaluation Associate – Final – 09.14.18rev

Happy Pi Day!

I love celebrating Pi Day largely because it’s such a gloriously geeky thing to do! What makes it even “geekier” was the cool Pi pen holder (using roughly the first 300 digits of Pi) that The Rucks Group team made through additive manufacturing (3D printing) to celebrate a team member’s birthday. Why?

We have the pleasure of working with Iowa State University on a project funded by the National Science Foundation Division of Engineering Education and Centers to “promote a platform to bring together a network of under-represented minority (URM) women in engineering” towards increasing participation in advanced manufacturing and towards career advancement for URM women faculty in engineering.  As part of that work, I participated in a 2 ½ day workshop in October that covered a variety of topics including several presentations on additive manufacturing.

I shared with the team many of the advances in additive manufacturing and got them excited about the topic as well. Of course, we needed to experience it firsthand, so we went over to our local 3D printing bar (yes, there is one just around the corner from our office, and yes, they serve beer) and made our Pi pen holder.

In much of our evaluation work, we strive to gain a deep understanding of the subject in order to implement the evaluation optimally. For this project, that understanding brought together all our geeky tendencies!

To learn more about the Advanced Manufacturing Workshop: Preparing the Next Generation of Researchers project, visit https://www.imse.iastate.edu/advanced-manufacturing-workshop/


Active Listening: A Way to Change the World

If I could do one thing to change the world, it would be to teach a critical mass of people how to engage in active listening. Active listening is when an individual is fully engaged with the verbal and nonverbal messages that a communicator is sharing. It means not preparing what you are about to say or assuming you know what the person is about to say before it is said. Instead, it involves pausing to take in each message and summarizing that message before responding. If you’re actively listening, the conversation will look and feel different. Your responses will come more slowly. For me, that usually means keeping a notebook to write down my thoughts so I can focus on the speaker rather than rehearsing a response while only appearing to listen (or, worse yet, jumping into the middle of their sentence … ugh!).

Not too long ago, I had an “aha” moment about the transformative nature of active listening. I was invited to facilitate a work session for a group of 20 individuals on standardizing evaluation expectations for their grantees. After allowing each individual to share their expectations for the work session, I began to engage the group in conversation. I listened intently to understand what was said. When there was a natural pause, I would summarize what was said: “What I heard you say was … is that correct?” or “I understand that … is very challenging” or “It sounds like you’re very excited about …” The group naturally was able to come to agreement on several key components. Of course, there were a couple of tense moments in which someone would share a thought that had probably been shared a dozen times, and someone else would quickly respond, as they probably had a dozen times. In those moments, I summarized the concern from each person’s perspective. Often the concerns shared by each person were supported by the “opposing” person. In other words, it sounded like, and was treated as, a disagreement, but they really weren’t disagreeing.

During the entire session, I didn’t really say much. I even went to the client and explained that while it may not have looked like I was doing much, I was actually doing a lot (and she said she knew I was)! Once the work session was over, a number of individuals shared that it was one of the most productive meetings they had ever had!

Why was active listening transformative? Because often in arguments and conflicts, the real problem is that the individuals do not feel understood. When we feel understood and supported in our perspective, we can let go of our position and focus on solving problems. Moreover, when we truly understand someone else’s concerns, better solutions are identified. One of the reasons clients report having a positive experience with our team is that we actively listen to them and can offer better solutions to their struggles and better approaches to telling their story. I suppose doing so is our way of actually taking steps to change the world, one client at a time.

Celebrating 10 Years!


This year is exciting for many reasons: the 250th anniversary of the publication of the Britannica, the 50th anniversary of Mister Rogers, the 23rd Winter Olympics (yay, curling!), and the 10th anniversary of The Rucks Group! Considering that the firm was started in 2008, just months before the economy went off a cliff, that is the milestone I am particularly excited about!

While starting a business during a recession comes with its share of challenges, there are also some benefits. One benefit is that it forced us to focus on certain principles. A few years ago, I articulated these principles; they are increasingly reflected in our hiring, our reviews, and our processes, and they have evolved into our core values:

* Contribute to a fun and productive environment
* Bring value to the client
* Provide academic excellence at business level speed
* Grow individually and collectively

I believe that the more we focus on and reflect these values, the more successful we are. How are we operationalizing these core values (spoken like an evaluator)? Or, phrased differently, how do you see our core values in practice? Here are just a few examples:

1. Twice a year, we set a full day aside to reflect on the question: What are we doing well and where can we improve?
2. Throughout the year we participate in frequent lunch ‘n learns to develop our skills, covering topics such as client management, data visualization, evaluative best practices, R, reporting, and time management.
3. We completed an extensive undertaking to outline our processes to ensure consistency of quality and timely completion of deliverables as we continue to grow.
4. We don’t let you forget about us. We reach out to you, and we have processes in place to serve as reminders for deliverables if we haven’t spoken with you at least once a month.
5. We continue to strive to reduce the stress and anxiety around gathering evidence of outcomes for our clients.

And we continue to identify ways that we can get better at what we do. Because at the end of the day, our job is to make our clients’ lives a little bit easier.

Thank you to our clients who have allowed us to partner with them on a vast array of projects! We look forward to another 10 years!