Many of the grant-funded projects to which we provide evaluation services have an objective to expand industry/college partnerships. Because of the lack of available instruments to measure changes in these relationships, we developed the Partnership Rubric. The Partnership Rubric was designed as a tool to quantify the involvement of outside partners in a given project or center by measuring changes in both the number of partnerships and their level of involvement in targeted areas. While the instrument was useful in quantifying changes, a key limitation of the rubric was that it lacked validation.
In 2018, The Rucks Group and the National Science Foundation (NSF) Advanced Technological Education (ATE) Working Partners research project (DUE #1501176) teams began collaborating to address this key limitation and, ultimately, to widen the dissemination of the tool. As context, the Working Partners research project employs a mixed-methods approach to document and examine community college/industry partnership models, gain a better understanding of how these models are used in real-world situations, and gather data about the impacts of the partnerships.
We have begun piloting the rubric to gather feedback about its utility and areas for improvement. At the 2018 NSF ATE Principal Investigators’ Conference, we facilitated a roundtable discussion to introduce the rubric and gather this feedback. Based on that feedback, we made a number of revisions and introduced the revised version at the 2019 High-Impact Technology Exchange Conference (Hi-TEC) in July. We are highly encouraged by the response and believe that this tool will meet a critical need for many projects. If you are interested in learning more, download the Partnership Rubric and click on this link to provide us with information on your experience with the instrument.
While our evaluation firm focuses on measuring the effectiveness of initiatives, our ultimate goal is to identify effective student success practices. While serving as the evaluator of John Carroll
University’s Aligned Learning Communities and Student Thriving: A First in
the World Project, Terry L. Mills, Ph.D., the project director, shared with our
team a blog that he had written about how student success is defined.
In this two-part blog on student success, we first shared Dr. Mills’ blog, in which he provides his perspective (click here to read). Be sure to read all the way to the conclusion, where he lists questions to consider when defining student success. Dr. Mills is assistant provost for Diversity and Inclusion and a sociology professor at John Carroll University. He applied for the First in the World grant, and John Carroll University was one of 17 institutions to receive this grant from the U.S. Department of Education in 2015.
In this second blog on the topic, I am providing the perspective of a researcher and program evaluator on this key issue.
The current post-secondary educational landscape is vastly different from that of a few decades ago. Students seeking a post-secondary education are far more diverse now than in previous generations. The diversity is not just based on demographic factors but also on educational motives and academic preparedness. Take, for instance, the many family-sustaining jobs that historically required only a high school equivalency degree but now require some form of post-secondary credential. Four-year institutions, which traditionally saw few students working upwards of 20 hours per week, now are witnessing an increased number of students who need to work for more than just discretionary funds. From my own teaching experience, it was not uncommon for a particular community college course to have half of the students in the midst of a career transition and already holding a bachelor’s degree.
This diversity introduces complexity that is not fully reflected in the prevailing definition of student success. It is encouraging that there is an awareness of the limitations of the current “accepted” definition of student success and of the need for a more student-centered definition that involves examining engagement and thriving, not just academic performance. Hopefully, these conversations will lead to the extensive system-wide changes needed to fully embrace a more flexible definition. As it stands, the definition affects how schools are measured, how they are funded in many states, and how students are able to obtain financial aid.
In the absence of a system-wide change, a “workaround” to address the issue may be to consider the difference between how student success is defined and what promotes student success. Factors such as engagement, thriving, and student-centeredness can be conceptualized as leading indicators of student success within the current framework. Admittedly, this definition is not in alignment with the goal of every student, which is why the conversation on the definition of student success should continue. However, if institutions are able to incorporate these components within the support services provided to students, the effect on the requisite outcome measures could be substantial, both for degree-completing students and for all students.
You probably have heard of a FOIA (Freedom of Information
Act) request, but it was probably in the context of journalism. Often,
journalists will submit a FOIA request to obtain information that is not
otherwise publicly available, but is key to an investigative reporting project.
There may be times when your work could be enhanced with information that requires submitting a FOIA request. For instance, while working as EvaluATE’s external evaluator, The Rucks Group needed to complete a FOIA request to learn how evaluation plans in ATE proposals have changed over time and to document how EvaluATE may have influenced those changes. Toward that goal, we sought to review a random sample of ATE proposals funded between 2004 and 2017. However, in spite of much effort over an 18-month period, we still needed to obtain nearly three dozen proposals. We had to get these proposals via a FOIA request primarily because the projects were older and we were unable to reach either the principal investigators or the appropriate person at the institution. So we submitted a FOIA request to the National Science Foundation (NSF) for the outstanding proposals.
For me, this was a new and, at first, mentally daunting task. Now, having gone through the process, I realize that I need not have been nervous because completing a FOIA request is actually quite simple. These are the elements that one needs to provide:
Nature of request: We provided a detailed description of the proposals we needed and what we needed from each proposal. We also provided the rationale for the request, but I do not believe a rationale is required.
Delivery method: Identify the method through which you prefer to receive the materials. We chose to receive digital copies via a secure digital system.
Budget: Completing the task could require special fees, so you will need to indicate how much you are willing to pay for the request. Receiving paper copies through the US Postal Service can be more costly than receiving digital copies.
It may take a while for the FOIA request to be filled. We
submitted the request in fall 2018 and received the materials in spring 2019.
The delay may have been due in part to the 35-day government shutdown and a
possibly lengthy process for Principal Investigator approval.
The NSF FOIA office was great to work with, and we
appreciated staffers’ communications with us to keep us updated.
Because access is granted only for a particular time, pay
attention to when you are notified via email that the materials have been
released to you. In other words, do not let this notice sit in your inbox.
One caveat: When you submit the FOIA request, you may be encouraged to acquire the materials through other means. Submitting a FOIA request to colleges or state agencies, for instance, may be an option for you.
While FOIA requests should be made judiciously, they are
useful tools that, under the right circumstances, could enhance your evaluation
efforts. They take time, but thanks to the law backing the public’s right to
know, your FOIA requests will be honored.
While our evaluation firm focuses on measuring the effectiveness of initiatives, our ultimate goal is to identify effective student success practices. While serving as the evaluator of John Carroll University’s Aligned Learning Communities and Student Thriving: A First in the World Project, Terry L. Mills, Ph.D., the project director, shared with our team a blog that he had written about how student success is defined.
In this two-part blog on student success, we are first sharing this article, in which Dr. Mills provides his perspective. Be sure to read all the way to the conclusion, where he lists questions to consider when defining student success. Dr. Mills is assistant provost for Diversity and Inclusion and a sociology professor at John Carroll University. He applied for the First in the World grant, and John Carroll University was one of 17 institutions to receive this grant from the U.S. Department of Education in 2015.
In the second blog on this topic (that will post on August 15), I will provide the perspective of a researcher and program evaluator on this key issue.
In 2018, the Higher Learning Commission (HLC), an accreditation body, released a report suggesting that “current discussions and measures of student success are based on a construct that does not represent students now enrolled in U.S. postsecondary education institutions.”
In particular, HLC said the focus on completion too often ignores individual students’ intent or educational goals. The current use of completion metrics and approaches often results in privileging certain types of learners and does not adequately address the barriers or priorities of nontraditional students. This approach also undervalues certain types of institutions and programs, such as community and technical colleges. The challenge in using the current approach to define student success is that many community and technical colleges typically do not fare as well as four-year institutions on completion metrics because most of their students are working adults rather than first-time, full-time students. According to the HLC, a more flexible student success framework, with students at its center, would include measures of “attainment of learning outcomes, personal satisfaction and goal/intent attainment, job placement and career advancement, civic and life skills, social and economic well-being, and commitment to lifelong learning.”
Institutions make grand claims about the educational experiences they seek to provide. You can find such claims in various institutional documents and communications, such as mission statements, admissions materials, commencement ceremonies, and trustee meetings. These claims then become an important part of the “cultural language” of the institution, serving as a sort of moral compass that keeps us on the path toward the core values of our colleges (Jennings et al.).
Perhaps routinely, these core values are tightly woven into the standards by which we measure our success in educating students. If our students lose themselves in “intellectual discovery” or become “men and women for others” to make a difference in the world, we will have done our job. Certainly, many of our students hope they will indeed graduate with these abilities. But our students are also exposed to numerous other perspectives on the college experience. And no perspective is more prominent, particularly in these tough economic times, than the one that defines college success as landing a good (i.e., high-paying) job or gaining admission to a top-ranked graduate or professional school. From this standpoint, the question “will a liberal arts degree be worth it?” means “will it pay off financially?”
With this understandable concern vying for students’ attention, how well do the life aspirations expressed in our colleges’ mission statements and core values shape the way students define their own success? In this regard, Jennings and colleagues conducted a study of students’ definitions of success over the four years of their college experience. They found, for example, that academic achievement (e.g., getting good grades, declaring a major, planning for study abroad) was more important than academic engagement, such as developing a breadth of knowledge or a love of learning. More than 80% of the students defined success using one of these academic achievement themes, with “getting good grades” being the most common response.
The Jennings et al. study also found social and residential life to be significant to students’ definitions of success. This category includes making new friends, maintaining relationships, and participating in extracurricular activities. It was most prominent in the first year (71%) and declined over the college experience, resting at 56% in year four.
Life management themes also were associated with students’ definitions of success. Elements of life management included maintaining psychological and physical well-being, work-ethic issues (e.g., better time management, developing effective study skills), and balancing academics with one’s social or personal life. Defining success in terms of life management was relatively common (44–82% each year), peaking in year three (82%) and lowest in the first two years.
Another category focused on academic engagement: expressing a desire to learn, to take interesting classes or explore new subject areas, or to engage in independent research. Jennings and colleagues were surprised that more students did not define success in these terms. Those who did (30–53% each year) mostly talked about wanting to learn—until the senior year, when students linked their definitions of success to independent research or honors projects.
So, why is it that, for students, success is more related to getting good grades than to being academically engaged? Jennings and colleagues suggest that to answer this question, we need to learn more about how students learn, what they learn, what challenges their ideas, and what really gets their attention.
Fain, P. (2018, December 12). Accreditor on Defining Student Success. Inside Higher Education.
Higher Learning Commission. (2018). Defining Student Success Data: Recommendations for Changing the Conversation.
Jennings, N., Lovett, S., Cuba, L., Swingle, J., and Lindkvist, H. (n.d.).
Terry Mills, PhD, is the former inaugural assistant provost for diversity and inclusion and chief diversity officer at John Carroll University, University Heights, OH. Currently, he serves as project director for the John Carroll First in the World grant, which focuses on factors associated with student success and thriving.
Prior to joining John Carroll, he served as dean of humanities and social science at Morehouse College, Atlanta, GA, and associate dean for minority affairs at the University of Florida.
Dr. Mills is a Fellow of the Gerontological Society of America, the 2009 recipient of the Outstanding Mentor Award from the GSA Taskforce on Minority Issues in Aging, and a 2005 recipient of the William R. Jones Outstanding Mentor Award from the Florida Education Fund/McKnight Doctoral Fellows Program.
If you have taken a research methods course at some point, you may remember the mantra “correlation does not imply causation.” People say they understand the difference between correlation and causation, but when I hear them talk, I can tell that they don’t.
As a quick refresher, correlation simply refers to what occurs when two variables co-vary. Essentially, as one variable increases, so does the other (positive correlation; see graph on the left), or as one variable increases, the other decreases (negative correlation; see graph on the right). Causation, on the other hand, can be thought of as a specialized correlation in which two variables co-vary because one of them drives the other.
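To make the refresher concrete, here is a minimal sketch in Python (with made-up data generated by numpy) showing how the sign of Pearson’s correlation coefficient captures the two patterns:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical data: x co-varies positively with y_pos and negatively with y_neg.
x = rng.normal(size=200)
y_pos = 2.0 * x + rng.normal(scale=0.5, size=200)   # rises as x rises
y_neg = -1.5 * x + rng.normal(scale=0.5, size=200)  # falls as x rises

# Pearson's r is positive for the first pair and negative for the second.
print(np.corrcoef(x, y_pos)[0, 1])  # close to +1
print(np.corrcoef(x, y_neg)[0, 1])  # close to -1
```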
The distinction between correlation and causation is clearer when we look at variables that are correlated simply by chance. For example, a correlation exists between the number of letters in the winning word of the Scripps National Spelling Bee and deaths due to venomous spiders (Vigen, 2015). Basically, as the number of letters in the Scripps winning word increased, so too did the number of deaths by venomous spiders that year.
If your reaction to that correlation is that the two cannot be related because there is no reason for the correlation to occur, what you are actually trying to establish is a causal relationship. In that case you are correct: there is no causal relationship between these two variables.
The lack of a causal relationship is clearer when two variables are in no way conceptually related. However, even when a correlation is established between two variables that appear to be related, a causal relationship still has not been demonstrated.
Take, for example, the obvious correlation between class
attendance and course performance. The two variables are correlated such that course
performance tends to increase with class attendance. If we do not address the
possibility of other variables, we cannot say with certainty that class
attendance increases performance because class attendance could be a proxy
variable for course engagement, for instance, or some other circumstance.
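A brief simulation can make the proxy-variable point concrete. In this hypothetical sketch, course engagement drives both attendance and performance; attendance has no direct effect on performance, yet the two still end up strongly correlated:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical confounder: engagement drives both variables below.
engagement = rng.normal(size=n)

# Attendance and performance each depend on engagement, not on each other.
attendance = 0.8 * engagement + rng.normal(scale=0.6, size=n)
performance = 0.9 * engagement + rng.normal(scale=0.6, size=n)

# The two are correlated even though neither causes the other.
print(np.corrcoef(attendance, performance)[0, 1])  # noticeably positive
```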
Why is it important
to disentangle these two concepts?
Disentangling these two concepts is more than just an interesting intellectual exercise; the distinction is important for achieving optimal outcomes. For example, when making big decisions about what to do to improve student success, we have to be careful that we are pressing on the right levers, the ones that will lead to a return on investment. When we think about interventions, the more we understand about the causal variable itself, the better the intervention we will have.
Consider the prevailing understanding that first-generation
students are at risk for not completing a degree. It is critical for us to
understand what the causal factor is in order to figure out a better approach
for helping students who are the first in their families to attend college to
persist and complete degrees. Any of the following could be causing the
challenge that is “correlated” with a first-generation student not completing a
degree: not understanding how to navigate college expectations; not having a
strong resource network to troubleshoot issues; or feeling like an “imposter”
whose lack of familiarity with campus life can lead to thinking that one does
not belong in college.
If we understand what is occurring at the causal level and do not simplify or misuse the concept of correlation, then we will be in a better position to design more effective interventions.
Vigen, T. (2015). Spurious
Correlations. New York, NY: Hachette Books.
One challenge that The Rucks Group team sees across projects is what we call “aspirational goals.” This phrase refers to goals and objectives that likely will not be achieved until after a project’s grant funding ends. Many projects have them. The question is: How do you measure them?
We struggled with measuring aspirational goals until, through a conversation with another evaluator, the idea of using the transitive mathematical property to address this challenge created an “aha” moment.
As you may
(or may not) recall from math class, the transitive property is this:
If a = b, and b = c, then a = c.
We can apply
this mathematical property to the evaluation of grant-funded projects as well.
For instance, if a college receives a three-year grant to increase the number of underrepresented individuals in a non-traditional field, progress toward the goal (which is unlikely to be reached within the three-year time frame when the first year is dedicated to implementing the grant) can be gauged using a sequence of propositions that follow the logic of the transitive property:
Proposition A = Start with a known phenomenon that is linked
to the desired outcome.
Green and Green (2003)* argue that to increase the number of workers in the field, the pipeline needs to be increased.
Proposition B = Establish that the project’s outcomes are linked to Proposition A.
The current project has increased the pipeline by increasing the number of underrepresented individuals declaring this field as a major.
Proposition C = Argue that while the
project (because of time) has not demonstrated the desired outcome, based on established knowledge it likely will.
If the number of individuals majoring in the field has increased, then, assuming a similar rate of retention, there will be more individuals graduating and prepared to work in the field.
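Schematically, the three propositions chain together just as the transitive property does (a rough sketch of the argument, not a formal proof):

```latex
\[
\underbrace{\text{project} \Rightarrow \text{larger pipeline}}_{\text{Proposition B (observed)}}
\quad\text{and}\quad
\underbrace{\text{larger pipeline} \Rightarrow \text{more workers}}_{\text{Proposition A (literature)}}
\quad\therefore\quad
\underbrace{\text{project} \Rightarrow \text{more workers}}_{\text{Proposition C (projected)}}
\]
```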
By using the transitive property, it is possible to create a persuasive, evidence-based projection: by increasing the number of individuals majoring in the field, and therefore in the pipeline to become workers, the project has instigated the changes needed to achieve its aspirational goals.
*This is a fictitious citation for illustration purposes only.
I had a professor who believed that you could measure anything, even the impact of prayer. For many, that may seem like an arrogant pronouncement, but what he was illustrating was that in measuring fuzzy constructs you have to think outside of the box (and besides, there are actual studies that have measured the impact of prayer).
In much of the work at The Rucks Group, we encounter things that are difficult to measure. We often deal with clients’ understandable angst about identifying key nebulous variables such as measuring changes in a coordinated network, the impact of adding a new role like a coach or navigator, or the impact of outreach activities to increase interest in a particular field.
Whether or not you use a counterfactual survey when measuring difficult-to-measure variables, it is essential to build a case that the intervention is making a difference through the “preponderance of evidence.” There is rarely a single magic bullet. The evidence, instead, usually comes from multiple observable outcomes. In legal terms, it is akin to building a circumstantial case.
With preponderance of evidence in mind, our team often talks about “telling the story” of a project. Here are two approaches for effectively “telling the story” in an evaluation context.
Incorporate mixed-methods for data gathering
Using a mixed-methods approach in an evaluation can paint a compelling picture. For instance, many of the projects we work with strive to build relationships with industry partners because of those partners’ important role in curriculum development. Measuring changes in industry partners’ involvement, as well as the impact of these relationships, is very challenging. However, we have found three useful ways to measure industry partnerships:
1. conversations with the project team to obtain information regarding the impact of the industry partnerships (e.g., any stories of donations, assistance in identifying instructors, etc.);
2. data from industry partners themselves (gathered either through surveys or interviews); and
3. rubrics for tallying quantitative changes that result from industry partnerships.
Incorporating multiple approaches to data gathering is one way to measure otherwise nebulous variables.
Leverage what is easily measurable
Another common challenge is measuring the broader impact of outreach activities. For one client with this goal, our team struggled to find credible evidence because outreach involved two different audiences: individuals within a grant-funded community and the larger general audience of individuals who may be interested in the work of the grant-funded community.
For some time we struggled to find an approach to demonstrate successful outreach to the general audience. As we reviewed the available data, it dawned on us that we could leverage the data on visits to the project’s website, because the grant-funded audience had a known size. We made an assumption about how many visits the website would receive if the known community members were to visit it. By subtracting that number from the total website visits, we arrived at the number of individuals from outside the grant-funded community who accessed the project’s website.
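With illustrative numbers (entirely made up for this sketch), the calculation is simple subtraction:

```python
# Hypothetical figures for illustration only.
total_site_visits = 4_200       # total visits recorded for the period
known_community_size = 150      # members of the grant-funded community
assumed_visits_per_member = 4   # assumption: average visits per known member

# Estimated visits attributable to the known community.
community_visits = known_community_size * assumed_visits_per_member

# The remainder approximates reach into the general audience.
general_audience_visits = total_site_visits - community_visits
print(general_audience_visits)  # 3600 visits beyond the known community
```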
We then employed a mixture of methods to combine our audience calculation with other data to tell a cogent story. We have used this approach for other clients, sometimes using Google searches and literature searches to find a number as a reference point.
Hopefully these tips (along with a prayer or two to help with insight) will help the next time you’re confronted with difficult-to-measure variables.
Far better an approximate answer to the right question … than the exact answer to the wrong question.
— John Tukey, Statistician
If I had an hour to solve a problem and my life depended on the solutions, I would spend the first 55 minutes determining the proper question to ask, for once I know the proper question, I could solve the problem in less than five minutes.
— Albert Einstein, Physicist
Much of what we do at both the individual and organizational levels is driven by questions. Questions are the lens through which we see what is and is not possible. It has been my experience that teams and organizations, regardless of whether they are working to improve student success or to address workforce demands, sometimes go astray when they seek answers to the wrong questions.
This error generally happens not because of lack of intelligence, work ethic, or even passion, but because those working on the problem respond too quickly to the high pressure for a solution. The perception that they have to hurry often results in teams moving too quickly from the problem-space into the solution-space because we are often metaphorically “building the plane while we are also flying it.”
This pressure is keenly felt when attempting to evaluate an initiative. Consequently, the focus generally is on “What can we measure?”, which on the surface seems to be exactly the question that should be asked. But I have found that frustrations mount when the question of what to measure becomes the focus of the evaluation before the project team and evaluator together address other important questions, such as “What do we want to know?”
One time I had a client project team grappling with what should be measured for the evaluation to demonstrate project outcomes. Rather than dwelling on their dilemma about what to measure, I asked a series of questions such as “What do you want to learn about your project?” “How does this project change behavior?” “What do your stakeholders want to know?” As team members answered those questions, I pointed out how their responses led to what they really should be measuring regardless of how difficult it could be to obtain relevant data.
This experience reminded me that once you focus on what people want to learn from an intervention, it is easier to figure out how to measure outcomes.
My advice is: do not skip over the questions of what you want to learn. Sure, those questions can be challenging because of a fear that those items cannot be measured. But doing this deeper thinking up front avoids angst at the end about inadequate data or measures that lack meaning, and it often reveals novel ways of measuring outcomes that may have at first seemed impossible to measure. Yes, the preliminary work will take time, but the benefits are well worth it.
*Portions were originally published in the October 2012 issue of Dayton B2B Magazine.
Contextual information plays an important role in interpreting findings. Many of us have experienced this when a child has come home and said they have gotten 43 points on a test. Was it 43 out of 45, 100, or some other point system? Depending on the response, there is either praise or a very serious conversation.
In evaluation and research the same need for context to interpret findings exists. But how you get to that context can vary widely.
One common approach to creating context is to use a pre-/post-test design. The purpose of a pre-/post-test is to compare what was occurring before an intervention to what is occurring after the intervention by focusing on particular outcome measures.
One challenge with the pre-/post-test is that respondents’ standards for the basis of a judgment may shift because of the intervention itself. With additional information, your perception of what “good” is, and of how good you are, can change. This can result in similar pre-intervention and post-intervention responses.
One solution to this problem that our team has successfully incorporated into much of our work is the use of a counterfactual survey, also called a retrospective survey. In these surveys, respondents are asked to consider, at the same time, both their current attitudes or perceptions and their attitudes or perceptions prior to participating in the intervention. In this way, respondents are able to make their own adjustments about how they perceive the intervention.
To see how a counterfactual survey looks in practice, let’s examine one of the first projects in which we incorporated this approach.
In this project, STEM academic administrators were participating in a year-long professional development opportunity to enhance their leadership skills. Prior to participating in any activities, we disseminated a survey asking participants to rate their self-perceptions as leaders on a scale of 1 (least like me) to 7 (most like me). Consistent with a traditional pre-/post-test, participants were then asked to complete the survey again at the end of the professional development opportunity, as shown in the figure below.
To incorporate the counterfactual survey, after participants answered items about how they currently perceived themselves as leaders, they were presented with items asking how they would have rated themselves before participating in the professional development opportunity. The counterfactual design therefore looks like this:
It should be noted that a counterfactual design does not require including a pre-test questionnaire; in this situation we just happened to do so.
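As a hedged sketch of the analysis (hypothetical ratings on the 1–7 scale, not our clients’ data), the design yields three sets of responses per item: the pre-test, the post-test, and the retrospective rating collected alongside the post-test:

```python
import numpy as np

# Hypothetical ratings (1-7 scale) for one survey item, five respondents.
pre = np.array([5, 6, 5, 6, 5])        # traditional pre-test
post = np.array([5, 6, 6, 6, 5])       # post-test
retro_pre = np.array([3, 4, 3, 4, 3])  # "how would you have rated yourself before?"

# Pre and post means are nearly identical, suggesting no change...
print(pre.mean(), post.mean())         # 5.4 vs. 5.6

# ...but the retrospective ratings expose the shift in respondents' standards.
print(post.mean() - retro_pre.mean())  # a 2.2-point perceived gain
```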
What is interesting is that the pre-/post-test responses on several items were very similar, which is not an uncommon occurrence (selected items are presented below).
However, when you add in the counterfactual responses, an interesting pattern emerges: respondents’ retrospective ratings of their pre-intervention selves were lower than the ratings they had actually given on the pre-test.
In follow-up interviews with participants, it was apparent that the standard that participants used had indeed shifted. In other words, they didn’t know what they didn’t know and so rated themselves higher on items before the intervention than after learning more about leadership.
A counterfactual survey holds a lot of promise, particularly in conjunction with gathering other data points. A counterfactual survey is only appropriate for attitudinal or perception data and not for objective measures of skill or knowledge. But utilizing a counterfactual survey may serve to illuminate changes that would otherwise go undetected.
We have recently experienced transitions on our team: Jeremy, who had been with us since 2015, left to work on his doctorate at Penn State, and Maggie Jaeger, who started with us as a research assistant, is now working on her doctorate at the University of Minnesota. We are sad to see them leave, but we are excited about the opportunities ahead for them!
As a consequence, we are seeking to bring another individual onto our team. If you want to work at a firm that discusses the nuances of survey design, optimal non-parametric tests, and best practices in data visualization, and yes, gets excited about Pi Day, then we invite you to review the job description and submit an application!