We recently shared insights about implementing initiatives that are resilient to unanticipated changes in our Coffee Break Webinar entitled Making Your Project “Change-Quake-Proof”. The title is a play on the concept of an “earthquake-proof” building which is built with the intention of reducing damage during earthquakes. Similarly, “change-quake-proof” initiatives are designed to withstand changes that will likely occur when implementing an initiative, particularly a large, system-wide initiative.
Our guest speaker for this Coffee Break Webinar was Elizabeth “Betsy” McIntyre; she is the Director of the Tristate Energy and Advanced Manufacturing (TEAM) Consortium. I got to know Betsy and her team while I served as an external evaluator on an ARC grant that had been awarded to the TEAM Consortium. Over the two years of working on the grant, I was impressed with her leadership style and ability to lead a large-scale initiative effectively. This was quite remarkable because the project involved so many stakeholders from government, industry, and higher education across three states (Ohio, Pennsylvania, and West Virginia).
In this 30-minute webinar, Betsy shared how she builds team resiliency, engages with partners, and manages disruption. Here are three themes that emerged through the conversation:
Focus on Assets: We all have a tendency to identify deficits and then use those deficits as a way to drive change. Indeed, many project proposal solicitations require that some sort of need statement or gap analysis be included. The very nature of this process directs a project team to look at what is missing. While that may be required during the proposal phase, as the team moves into implementation there should be a greater focus placed on assets, because assets create more energy than deficits. With that said, the project team does not need to be Pollyannaish or overly optimistic to the point of ignoring problems. Instead, the team’s attention should be on assets and on determining what positive attributes currently exist and how they can be built upon to address gaps.
Surround Yourself with Quality People: It’s critical to surround yourself with experts that you trust and who will provide real-time insights and input. The conditions around implementation are constantly changing and it is important that the individuals that you consult with can provide the information needed to drive data-informed decisions. These individuals should have the capability to provide you with good quality data in a timely fashion. With solid, up-to-date information you will be able to determine if you need to stay the course or if you need to pivot.
Involve all Stakeholders: The leader must work to cultivate a culture of consensus. This is particularly important so that the weight of the initiative is not entirely on the shoulders of the leader. If the initiative relies on the energy and passions of the leader alone, that can be problematic if other pressing business matters arise, health issues occur, or the leader’s stamina fades. To create a culture of consensus the project team must have a shared common understanding of the purpose of the initiative. A shared common understanding should be established very early within the life of the initiative. Once that’s set, then the team should invite people who have a stake in the outcome whether or not they are likely to participate. As the initiative moves forward, the team should be sure to keep the stakeholders’ end user in mind as services, tools, and resources are developed.
These are just some highlights from the webinar. For more in-depth information on each of these topics, I invite you to review the recording of the webinar.
We are pleased to share information from our partner, the Working Partners team, about an upcoming event: a free eight-week collaborative, interactive professional development workshop designed to help ATE community members identify, assess, and plan for successful industry-education partnerships. The workshop will be held from May 23 through July 20, 2021.
Dates: May 23 – July 20, 2021
Tuesday Live Sessions: Presentations, expert panel discussions, Q&A, interactive exercises, learning together, and more. Held Tuesdays from 3:00–4:30 p.m. EDT.
Canvas Cohort Hub: Resources, planning guides, cohort support, announcements, recordings, exercise submission, and more available 24/7 on the Canvas online course site.
HI-Tech Conference Registration: The workshop’s final live session will be held on Tuesday, July 20, just prior to the July 21 & 22 HI-TECH virtual conference. Participants who complete the workshop will be provided up to four free registrations to the conference.
The success of initiatives aimed at developing a skilled technical workforce increasingly depends on the continual growth and strengthening of professional interpersonal connections, such as those between education and industry professionals. Social network analysis (SNA) provides a useful methodology for evaluating and describing the structure and development of interpersonal connections within these contexts. This resource uses a real-world example to introduce the foundational concepts of SNA and practical guidelines for capturing the survey data needed to conduct an SNA.
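As a taste of the foundational concepts such a resource introduces, here is a minimal sketch of two basic SNA measures, degree centrality and network density, computed over a small partnership network. The actor names and ties below are entirely hypothetical illustrations, not data from any actual project.

```python
from collections import defaultdict

# Hypothetical edge list: which actors reported working together.
# These names and ties are illustrative only.
edges = [
    ("college_A", "employer_X"),
    ("college_A", "employer_Y"),
    ("college_B", "employer_X"),
    ("college_B", "agency_Z"),
    ("employer_X", "agency_Z"),
]

# Build an undirected adjacency list from the edge list.
neighbors = defaultdict(set)
for a, b in edges:
    neighbors[a].add(b)
    neighbors[b].add(a)

n = len(neighbors)  # number of actors in the network

# Degree centrality: the share of the other n-1 actors each actor is tied to.
centrality = {node: len(ties) / (n - 1) for node, ties in neighbors.items()}

# Network density: observed ties as a share of all possible ties.
density = len(edges) / (n * (n - 1) / 2)

for node, c in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{node}: degree centrality = {c:.2f}")
print(f"density = {density:.2f}")
```

In this toy network, employer_X is the most central actor (connected to three of the four other actors), which is the kind of structural insight SNA surfaces from simple "who do you work with?" survey data.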
We are all deeply aware of the need to “pivot” in these ever-evolving times. Determining the best strategy for how to change directions appropriately can be challenging. We learned about this excellent example of pivoting from an in-person career fair to a virtual one through our work with Columbus State Community College. With their permission, we share this resource, including the language used for disseminating information with our audience.
Hello Educator Friends!
We are functioning in a new norm. This means new approaches to education and new opportunities for your students’ futures. As a result of COVID-19, education is quickly transitioning to the virtual arena. This means exploring new approaches to traditional education opportunities such as career fairs, college visits, and classroom presentations. In the wake of this pandemic, now more than ever, Central Ohio is primed for a different kind of worker, one that is equipped with the hands-on skills needed to succeed in STEM, Healthcare and Hospitality fields.
The attached “Virtual Career Fair” digital handbook is meant to provide students and educators a resource for career exploration in an online space. Now more than ever, students have the capacity to explore future opportunities from home and begin setting a plan in motion. Specifically, this virtual career fair will explore the STEM, Healthcare, and Hospitality career fields while highlighting corresponding experiential learning programs offered at Columbus State.
The digital handbook will guide students through a career exploration assessment, talk students through their results, and provide additional information about the following programs:
· IT Flexible Apprenticeship
· Modern Manufacturing Work-Study
· Hospitality Management and Culinary Arts
· Health Careers Opportunities
Within the handbook you will also find points of contact for educators who want to bring more information about these programs to their students, and contact information for students who want to talk directly with program coordinators. Are your students ready to take the next step to embrace all that Columbus State has to offer?
Admissions is Here to Support You! At Columbus State, we never stop working to help you meet your educational and career goals – even from a distance. In response to COVID-19 and social distancing requirements, we’ve moved all admission, teaching and learning, and student support services to remote delivery. That means you can visit, apply, complete orientation, see your advisor and attend classes online, over the phone and through email.
We invite you to join us for our Virtual Events and One-on-One appointments with an Admissions Representative, which you can access without leaving your house. Visit our Admissions webpage for up-to-date information and to submit your application today.
The abrupt move from in-person to virtual instruction impacted several of the program evaluation projects that The Rucks Group team works on. In response, we developed survey items that instructors could disseminate to understand the impact of this transition on students and to aid in decision-making moving forward. We have started to gather data from the survey findings to understand students’ experience of the transition to non-face-to-face (non-f2f) instruction. The key emerging finding is that students like, or are at least OK with, online learning but not remote learning.
Remote and online learning are two distinguishable types of virtual, or non-f2f, instruction. Remote learning is the mere use of technology as a delivery platform, whereas online learning involves a more thoughtful approach to instructional design to optimize learning.
Our analysis of these new data found that instructors who delivered virtual courses resembling the careful design and planning of multi-dimensional online learning experiences were rated more favorably by students than instructors who were not able to use technology as a pedagogical tool beyond a delivery system. Specifically, our emerging findings point to five tips for instructors as they design courses for the summer and fall terms.
1. Ensure high levels of communication and responsiveness to students.
Students’ responses to surveys suggest that what they found most effective in the non-f2f learning environment were “communicative and responsive” instructors. Responsiveness could have been through email, phone, or the availability to talk before or after class via the technology used to deliver the course. Conversely, students rated instructors who were not highly communicative as the least effective.
2. Allow more time for questions.
Students who had not previously taken a virtual course reported that they preferred f2f instruction because it is a more optimal learning environment. One reason for this preference is that it is easier for students to ask questions in person. To translate the “ease” of asking questions to a virtual environment, we are finding that instructors need to allow more time for questions (perhaps what feels like an unnaturally long period of time) because of the time lag in the technology.
3. Facilitate more student-to-student interaction.
Based on the emerging data, another challenge of the non-f2f learning environment is the diminished natural, or informal, learning that occurs among students. This could be remedied by having students introduce themselves at the beginning of each class, using the break-out room function in Zoom, or using other online options for small-group meetings that allow students to interact with each other. Regarding how to incorporate these types of interactions, responses were mixed; however, they suggest that students prefer organic connections with classmates over required interactions.
4. Include helpful supplemental resources.
Students particularly appreciated supplemental resources such as videos, PowerPoint slides, and lecture recordings, but only if these resources were perceived as “helpful.” Based on students’ responses, “helpful” is interpreted as resources that truly aid in understanding the learning objectives.
Supplemental resources were considered least effective when homework and practice materials were unrelated to the chapter content, videos did not cover the topics students needed, video options were insufficient, or items could not be accessed because of technical problems.
5. Set clear expectations.
When instructors set clear expectations for deadlines and for the use of the related technologies, students considered this an effective approach. For example, some students reported that quizzes were unfair or inappropriate; based on students’ overall responses, this may have been because clear expectations were not set about which topic areas would be covered on examinations.
We know that teaching non-f2f in response to COVID-19 has been extremely challenging and has taken considerable energy because of the haste of these transitions and the health concerns lingering in the background. It is important to note that university and community college students alike reported appreciating instructors’ efforts in making this transition.
We hope these tips based on actual emerging data help instructors reassess their non-f2f teaching and move toward creating effective online learning environments.
In response to the COVID-19 pandemic, higher education institutions had to quickly convert in-person courses to remote courses. Understandably, the experience of students during this term may be quite unique, and many of the standard items included in end-of-course surveys may not adequately capture these unique experiences. Provided in this document is a sampling of potential items that can be included in end-of-course surveys to gather student experiences in order to make appropriate plans for the summer and fall terms. The items are grouped by topic and are not intended to be used as a whole; single items can be used as appropriate. For additional questions, contact us at email@example.com or 937-219-7766 (during normal operations, call 937-242-7024).
Looking to better measure your clients’ partnerships? Start by helping refine our partnership assessment tool!
The Rucks Group and the NSF ATE Working Partners Research Project team are co-developing a rubric intended to better measure the depth, breadth, and impact of industry partnerships on ATE projects and centers. After soliciting and integrating feedback from educators and PIs, we are turning to the evaluation community to gather feedback from another perspective.
We invite you to join us for a one-hour webinar on Wednesday February 26th at 2 pm EST to provide your expert feedback regarding this tool.
If you are interested in participating:
– Prior to the webinar, the partnership assessment tool, call agenda, and discussion questions will be emailed to participants.
– During the webinar, the assessment tool will be presented followed by a guided discussion.
– At the conclusion of the webinar, we will ask that you complete a short survey indicating your initial thoughts and suggestions for rubric improvement.
The opportunity to take part in a pilot study, which will provide deeper feedback on the tool, will also be presented during the call.
Click the link below to let us know your level of interest. Please complete by Tuesday, February 18. We hope you will consider joining us to provide your expertise and insights to help improve this instrument and increase its utility.
Increasing the participation of students in STEM fields often requires strengthening skills within those areas by offering tutoring services. However, it is not uncommon for projects to report challenges in actually getting students to use the tutoring services. In our project with the STEM Success Center (SSC) at Central State University (CSU), one approach is emerging as a potentially successful strategy.
CSU, a public, historically black university in Wilberforce, Ohio, received U.S. Department of Education funding to provide a comprehensive suite of services for STEM students to prepare them for their post-undergraduate careers and educational opportunities. The services offered by the project include tutoring, advising, mentoring, experiential learning opportunities, and professional development. The project focuses on students enrolled in 10 gateway STEM courses. Given that lack of student preparation contributes to low retention, persistence, and course passage rates, the project encourages freshmen to utilize as many of the services as possible. As part of the project, some students were offered stipends for attending tutoring.
Consequently, The Rucks Group is studying the impact of tutoring services under three different conditions:
Group 1: Students who attended and received at least one stipend of $50 (n=32).
Group 2: Students who attended tutoring, but did not receive a stipend (n=56).
Group 3: Students who did not participate in SSC tutoring during the semester (n=222).
Students who received the stipend were required to participate in at least two hours of tutoring sessions for seven weeks. They were also required to meet with a STEM Success Manager on a monthly basis and participate in other STEM community-building events organized by SSC.
To assess the impact of the SSC’s services on grade performance, final grades from these courses were analyzed (see Figure 1 below).
While these findings are preliminary and this was not an experimental design, they do provide several interesting leads for further study and similar experiments.
First, students who received tutoring passed their courses at higher rates regardless of whether they were paid. Second, students who received the stipend for attending tutoring sessions had the lowest failure rates and earned the highest percentage of As. This seems to speak to the importance of “dose,” that is, how much tutoring was actually needed to improve students’ performance in the course. Keep in mind that students who received a stipend attended five times more tutoring sessions than those who participated in tutoring but did not receive a stipend. These findings also speak to the type of support that is needed, because additional support services were also provided to students.
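The group comparison described above can be sketched as a simple pass-rate calculation. The grade tallies below are purely hypothetical placeholders (the study’s actual distribution appears in Figure 1), and treating A–C as passing is an assumption; only the group sizes (n=32, 56, 222) come from the post.

```python
# Hypothetical grade tallies for illustration only; group sizes mirror
# the study (32 / 56 / 222), but the grades themselves are made up.
grade_counts = {
    "stipend_tutoring": {"A": 14, "B": 8, "C": 6, "D": 2, "F": 2},      # n=32
    "tutoring_only":    {"A": 15, "B": 14, "C": 12, "D": 8, "F": 7},    # n=56
    "no_tutoring":      {"A": 40, "B": 50, "C": 55, "D": 35, "F": 42},  # n=222
}

PASSING = {"A", "B", "C"}  # assumed passing threshold for a gateway course

def pass_rate(counts):
    """Share of students in a group who earned a passing grade."""
    total = sum(counts.values())
    passed = sum(v for grade, v in counts.items() if grade in PASSING)
    return passed / total

for group, counts in grade_counts.items():
    print(f"{group}: n={sum(counts.values())}, pass rate={pass_rate(counts):.0%}")
```

With real final-grade data in place of the placeholders, the same few lines reproduce the group-level pass-rate comparison that drives the “dose” discussion.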
We still have other questions about the implications of this project that we hope the next round of data will help answer.
I have personally never been particularly fond of the idea of having to pay students to attend tutoring, but these initial findings are compelling. As I consider these findings alongside the qualitative data regarding barriers to students’ participation in tutoring that we have found across several projects, it makes sense to give students the ability to attribute tutoring to “being paid” rather than “needing help.” If paying small stipends results in students accessing tutoring frequently enough to gain the momentum they need not just to pass courses but to excel in them, then it may be a viable way to help students gain the requisite skills to persist in STEM majors.
I have been known to obsess over small things. Once I had a team member comb through a data set comprising thousands of individuals to understand why three people, who made up less than one percent of the data, were missing when we cut the data a certain way. Their presence or absence did not have an impact on the results. But I have learned that while some data anomalies are irrelevant, there are times when a small anomaly is a “canary in the coal mine” because it is actually an indication of something much bigger. That, for me, is the meaning of the Steve Jobs quotation: “Simplicity came not by ignoring complexity, but rather by conquering it.”
Sometimes in seeking, interpreting, and understanding data, there’s a tendency to simplify or to ignore the complexity because it may contradict what we deem to be true. However, it is in the complexity, especially when data patterns are not obvious, that I have found the most interesting revelations emerge. Sometimes what we are looking at is only complex because we are using the wrong mental model to understand it. So, for me, “conquering” means understanding the truth that is being revealed. When that happens, simplicity emerges.
Each year, we try to create a fun and interesting New Year’s card that reflects a component of the work that we do. In this year’s card, we highlighted the Steve Jobs quotation, “Simplicity came not by ignoring complexity, but rather by conquering it.”
Many of the grant-funded projects we provide evaluation services to have an objective to expand industry/college partnerships. Because of the lack of available instruments to measure changes in these relationships, we developed the Partnership Rubric. The Partnership Rubric was designed as a tool to quantify the involvement of outside partners in a given project or center by measuring changes in the number of, and level of involvement of, those partnerships in targeted areas. While the instrument was useful in quantifying changes, a key limitation of the rubric was that it lacked validation.
Beginning in 2018, The Rucks Group and the National Science Foundation (NSF) Advanced Technological Education (ATE) Working Partners research project (DUE #1501176) teams began collaborating to address this key limitation and, ultimately, to widen the dissemination of the tool. As context, the Working Partners research project employs a mixed-methods approach to document and examine community college/industry partnership models, gain a better understanding of how these models are used in real-world situations, and gather data about the impacts of the partnerships.
We have begun piloting the rubric to gather feedback about its utility and areas for improvement. At the 2018 NSF ATE Principal Investigators’ Conference, we facilitated a roundtable discussion to introduce the rubric and gather this feedback. Based on that feedback, a number of revisions were made, and we introduced the revised version at the 2019 High-Impact Technology Exchange Conference (Hi-TEC) in July. We are highly encouraged by the response and believe that this tool will meet a critical need for many projects. If you are interested in learning more, download the Partnership Rubric and click on this link to provide us with information on your experience with the instrument.
While our evaluation firm focuses on measuring the effectiveness of initiatives, ultimately our goal is about identifying effective student success practices. While serving as the evaluator of John Carroll University’s Aligned Learning Communities and Student Thriving: A First in the World Project, Terry L. Mills, Ph.D., the project director, shared with our team a blog that he had written about how student success is defined.
In this two-part blog on student success, we shared his blog, in which Dr. Mills provides his perspective (click here to read). Be sure to read all the way to the conclusion, where he lists questions to consider when defining student success. Dr. Mills is assistant provost for Diversity and Inclusion and a sociology professor at John Carroll University. He applied for the First in the World grant, and John Carroll University was one of 17 institutions to receive this grant from the U.S. Department of Education in 2015.
In this second blog on the topic, I am providing the perspective of a researcher and program evaluator on this key issue.
The current post-secondary educational landscape is vastly different from that of a few decades ago. The students seeking a post-secondary education are far more diverse now than in previous generations. The diversity is not just based on demographic factors but also on educational motives and academic preparedness. Take, for instance, the many family-sustaining jobs that historically required only a high school equivalency degree but now require some form of post-secondary credential. Four-year institutions, which traditionally saw few students working upwards of 20 hours per week, now are witnessing an increased number of students who need to work for more than just discretionary funds. From my own teaching experience, it was not uncommon for a particular community college course to have half of the students in the midst of a career transition and already holding a bachelor’s degree.
This diversity introduces complexity that is not fully reflected in the prevailing definition of student success. It is encouraging that there is an awareness of the limitations of the current “accepted” definition of student success and that the definition needs to become more student-centered, examining engagement and thriving, not just academic performance. Hopefully, these conversations will lead to the extensive system-wide changes needed to fully embrace a more flexible definition, because, as it stands, the definition affects how schools are measured, how they are funded in many states, and how students are able to obtain financial aid.
In the absence of a system-wide change, a “work-around” to address the issue may be to consider the difference between how student success is defined and what promotes student success. Factors such as engagement, thriving, and student-centeredness can be conceptualized as leading indicators of student success within the current framework. Admittedly, this definition is not in alignment with the goal of every student, which is why the conversation on the definition of student success should continue. However, if institutions are able to incorporate these components within the support services provided to students, then it could meaningfully improve the requisite outcome measures for degree-completing students and for all students.
You probably have heard of a FOIA (Freedom of Information Act) request, but it was probably in the context of journalism. Often, journalists will submit a FOIA request to obtain information that is not otherwise publicly available but is key to an investigative reporting project.
There may be times when your work could be enhanced with information that requires submitting a FOIA request. For instance, while working as EvaluATE’s external evaluator, The Rucks Group needed to complete a FOIA request to learn how evaluation plans in ATE proposals have changed over time; we were also interested in documenting how EvaluATE may have influenced those changes. Toward that goal, we sought to review a random sample of ATE proposals funded between 2004 and 2017. However, despite much effort over an 18-month period, we still needed to obtain nearly three dozen proposals. We needed to get these proposals via a FOIA request primarily because the projects were older and we were unable to reach either the principal investigators or the appropriate person at the institution. So we submitted a FOIA request to the National Science Foundation (NSF) for the outstanding proposals.
For me, this was a new and, at first, a mentally daunting task. Now, after having gone through the process, I realize that I need not be nervous because completing a FOIA request is actually quite simple. These are the elements that one needs to provide:
Nature of request: We provided a detailed description of the proposals we needed and what we needed from each proposal. We also provided the rationale for the request, but I do not believe a rationale is required.
Delivery method: Identify the method through which you prefer to receive the materials. We chose to receive digital copies via a secure digital system.
Budget: Completing the task could require special fees, so you will need to indicate how much you are willing to pay for the request. Receiving paper copies through the US Postal Service can be more costly than receiving digital copies.
It may take a while for the FOIA request to be filled. We submitted the request in fall 2018 and received the materials in spring 2019. The delay may have been due in part to the 35-day government shutdown and a possibly lengthy process for Principal Investigator approval.
The NSF FOIA office was great to work with, and we appreciated staffers’ communications with us to keep us updated.
Because access is granted only for a particular time, pay attention to when you are notified via email that the materials have been released to you. In other words, do not let this notice sit in your inbox.
One caveat: When you submit the FOIA request, there may be encouragement to acquire the materials through other means. Submitting a FOIA request to colleges or state agencies may be an option for you.
While FOIA requests should be made judiciously, they are useful tools that, under the right circumstances, could enhance your evaluation efforts. They take time, but thanks to the law backing the public’s right to know, your FOIA requests will be honored.
While our evaluation firm focuses on measuring the effectiveness of initiatives, ultimately our goal is about identifying effective student success practices. While serving as the evaluator of John Carroll University’s Aligned Learning Communities and Student Thriving: A First in the World Project, Terry L. Mills, Ph.D., the project director, shared with our team a blog that he had written about how student success is defined.
In this two-part blog on student success, we are first sharing this article, in which Dr. Mills provides his perspective. Be sure to read all the way to the conclusion, where he lists questions to consider when defining student success. Dr. Mills is assistant provost for Diversity and Inclusion and a sociology professor at John Carroll University. He applied for the First in the World grant, and John Carroll University was one of 17 institutions to receive this grant from the U.S. Department of Education in 2015.
In the second blog on this topic (that will post on August 15), I will provide the perspective of a researcher and program evaluator on this key issue.
The Higher Learning Commission (HLC) accreditation body released a report suggesting that “current discussions and measures of student success are based on a construct that does not represent students now enrolled in U.S. postsecondary education institutions.”
In particular, HLC said the focus on completion too often ignores individual students’ intent or educational goals. The current use of completion metrics and approaches often results in privileging certain types of learners and does not adequately address the barriers or priorities of nontraditional students. This current approach also undervalues certain types of institutions and programs, such as community and technical colleges. The challenge in using the current approach to define student success is that many community and technical colleges typically do not fare as well as four-year institutions on completion metrics because most of their students are working adults and not first-time, full-time students.
According to the HLC, a more flexible student success framework, with students at its center, would include measures of “attainment of learning outcomes, personal satisfaction and goal/intent attainment, job placement and career advancement, civic and life skills, social and economic well-being, and commitment to lifelong learning.”
Institutions make grand claims about the educational experiences they seek to provide. You can find such claims in various institutional documents and communications, such as mission statements, admissions materials, commencement ceremonies, and trustee meetings. These claims then become an important part of the “cultural language” of the institution, serving as a sort of moral compass that keeps us on the path toward the core values of our colleges (Jennings et al.).
Perhaps routinely, these core values are tightly woven into the standards by which we measure our success in educating students. If our students lose themselves in “intellectual discovery” or become “men and women for others” to make a difference in the world, we will have done our job. For sure many of our students hope they will indeed graduate with these abilities. But our students are also exposed to numerous other perspectives on the college experience. And no perspective is more prominent, particularly in these tough economic times, than the one that defines college success as landing a good (i.e., high-paying) job or gaining admission to a top-ranked graduate or professional school. From this standpoint, the question “will a liberal arts degree be worth it?” means “will it pay off financially?”
With this understandable concern vying for students’ attention, how well do the life aspirations expressed in our colleges’ mission statements and core values shape the way students define their own success? In this regard, Jennings and colleagues conducted a study of students’ definitions of success over the four years of their college experience. They found, for example, that academic achievement (e.g., getting good grades, declaring a major, planning for study abroad) was more important than academic engagement, such as developing a breadth of knowledge or a love of learning. More than 80% of the students defined success using one of these academic achievement themes, with “getting good grades” being the most common response.
The Jennings et al. study also found social and residential life to be significant to students’ definitions of success. This category includes making new friends, maintaining relationships, and participating in extracurricular activities. It was most prominent in the first year (71%) and declined over the college experience, resting at 56% in year four.
Life management themes also were associated with students’ definitions of success. Elements of life management included maintaining psychological and physical well-being, work-ethic issues (e.g., better time management, developing effective study skills), and balancing academics with one’s social or personal life. Defining success in terms of life management was relatively common (44–82% each year); it peaked during year three (82%) and was lowest in the first two years.
Another category focused on academic engagement: expressing a desire to learn, to take interesting classes or explore new subject areas, or to engage in independent research. Jennings and colleagues were surprised that more students did not define success in these terms. Those who did (30–53% each year) mostly talked about wanting to learn—until the senior year, when students linked their definitions of success to independent research or honors projects.
So, why is it that for students, success is more related to getting good grades than to being academically engaged? Jennings and colleagues suggest that to answer this question, we need to learn more about how students learn, what they learn, what challenges their ideas, and what really gets their attention.
Fain, P. (2018, December 12). Accreditor on Defining Student Success. Inside Higher Ed.
Higher Learning Commission. (2018). Defining Student Success Data: Recommendations for Changing the Conversation.
Jennings, N., Lovett, S., Cuba, L., Swingle, J., & Lindkvist, H. (n.d.).
Terry Mills, PhD, is the former inaugural assistant provost for diversity and inclusion and chief diversity officer at John Carroll University, University Heights, OH. Currently, he serves as project director for the John Carroll First in the World grant, which focuses on factors associated with student success and thriving. Prior to joining John Carroll, he served as dean of humanities and social science at Morehouse College, Atlanta, GA, and as associate dean for minority affairs at the University of Florida.
Dr. Mills is a Fellow of the Gerontological Society of America, the 2009 recipient of the Outstanding Mentor Award from the GSA Taskforce on Minority Issues in Aging, and a 2005 recipient of the William R. Jones Outstanding Mentor Award from the Florida Education Fund/McKnight Doctoral Fellows Program.
If you have taken a research methods course at some point, you may remember the mantra “correlation does not imply causation.” People say they understand the difference between correlation and causation, but when I hear them talk, I can often tell that they don’t.
As a quick refresher, correlation simply refers to what occurs when two variables co-vary. Essentially, as one variable increases so does the other (positive correlation, see graph on the left), or as one variable increases the other decreases (negative correlation, see graph on the right). Causation, on the other hand, can be thought of as a special kind of correlation in which two variables co-vary because one of them is driving the other.
The distinction between correlation and causation is clearer when we look at variables that are correlated simply by chance. For example, a correlation exists between the number of letters in the winning word of the Scripps National Spelling Bee and deaths due to venomous spiders (Vigen, 2015): as the number of letters in the winning word increased, so too did the number of deaths by venomous spiders that year.
If your reaction to that correlation is that the two cannot be correlated because there is no reason for the correlation to occur, what you are actually trying to establish is a causal relationship. In that case you are correct: there is no causal relationship between these two variables.
The lack of a causal relationship is clearer when two
variables are in no way conceptually related. However, a causal relationship
still has not been established even when there is a correlation established
between two variables that appear to be related.
Take, for example, the obvious correlation between class
attendance and course performance. The two variables are correlated such that course
performance tends to increase with class attendance. If we do not address the
possibility of other variables, we cannot say with certainty that class
attendance increases performance because class attendance could be a proxy
variable for course engagement, for instance, or some other circumstance.
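The proxy-variable caution above can be illustrated with a short simulation. This is a hedged sketch using fabricated data, not a study result; the variable names and numeric values are all assumptions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated data: course engagement drives BOTH attendance and performance.
engagement = rng.normal(0, 1, n)
attendance = engagement + rng.normal(0, 0.5, n)    # attendance proxies engagement
performance = engagement + rng.normal(0, 0.5, n)   # performance is caused by engagement

# Attendance and performance are strongly correlated...
r = np.corrcoef(attendance, performance)[0, 1]

# ...but after removing engagement (a crude partial correlation), the
# association all but disappears: engagement, not attendance, is doing
# the causal work in this simulated world.
resid_attendance = attendance - engagement
resid_performance = performance - engagement
r_partial = np.corrcoef(resid_attendance, resid_performance)[0, 1]

print(f"raw correlation: {r:.2f}")
print(f"correlation after removing engagement: {r_partial:.2f}")
```

The raw correlation is strong, yet once the confounder is accounted for, little association remains, exactly the trap described above.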
Why is it important
to disentangle these two concepts?
Disentangling these two concepts is more than just an
interesting intellectual exercise; the distinction is important to achieve
optimal outcomes. For example, when making big decisions about how to improve student success, we have to be careful that we are pressing on the right levers, the ones that will yield a return on investment. When we think about interventions, the more we understand about the causal variable itself, the better the intervention we will have.
Consider the prevailing understanding that first-generation
students are at risk for not completing a degree. It is critical for us to
understand what the causal factor is in order to figure out a better approach
for helping students who are the first in their families to attend college to
persist and complete degrees. Any of the following could be causing the
challenge that is “correlated” with a first-generation student not completing a
degree: not understanding how to navigate college expectations; not having a
strong resource network to troubleshoot issues; or feeling like an “imposter”
whose lack of familiarity with campus life can lead to thinking that one does
not belong in college.
If we understand what is occurring at the causal level and
not simplify or misuse the concept of correlation, then we will be in a better position
to design more effective interventions.
Vigen, T. (2015). Spurious
Correlations. New York, NY: Hachette Books.
One challenge that The Rucks Group team sees across projects is what we call
“aspirational goals.” This phrase refers to goals and objectives that will likely not be realized until after a project’s grant funding ends. Many projects have them. The question is: How do you measure them?
I struggled with measuring aspirational goals until, through a conversation with another evaluator, the idea of using the transitive mathematical property to address this challenge created an “aha” moment.
As you may
(or may not) recall from math class, the transitive property is this:
If a = b, and b = c, then a = c.
We can apply
this mathematical property to the evaluation of grant-funded projects as well.
For instance, if a college receives a three-year grant to increase the number of underrepresented individuals in a non-traditional field, progress toward the goal (which is unlikely to be reached within the three-year time frame when the first year will be dedicated to implementing the grant) can be gauged using a sequence of propositions that follow the logic of the transitive property:
Proposition A = Start with a known phenomenon that is linked
to the desired outcome.
Green and Green (2003) argue that to increase the number of workers in the field, the pipeline needs to be increased.
Proposition B = Establish that the project’s outcomes are linked to Proposition A.
The current project has increased the pipeline by increasing the number of underrepresented individuals declaring this field as a major.
Proposition C = Argue that while the
project (because of time) has not demonstrated the desired outcome, based on established knowledge it likely will.
If the number of individuals majoring in the field has increased, then, assuming a similar rate of retention, more individuals will graduate prepared to work in the field.
By using the transitive property it is possible to create a persuasive, evidence-based projection: by increasing the number of individuals majoring in the field, and thus in the pipeline to become workers, the project has instigated the changes needed to achieve its aspirational goals.
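The chain of propositions reduces to simple arithmetic. In this minimal sketch, every number is hypothetical, chosen only to show the calculation; it assumes a historical retention rate is available:

```python
# Hypothetical figures for illustration only.
baseline_majors = 40      # majors in the field before the grant (assumed)
current_majors = 65       # majors after the grant's pipeline work (assumed)
retention_rate = 0.70     # historical major-to-graduate retention (assumed)

# Proposition B: the project increased the pipeline (more majors).
additional_majors = current_majors - baseline_majors

# Proposition C: assuming a similar retention rate, project the
# aspirational outcome (more graduates entering the workforce).
projected_additional_graduates = additional_majors * retention_rate

print(f"Additional majors: {additional_majors}")
print(f"Projected additional graduates: {projected_additional_graduates:.1f}")
```

The projection is only as credible as the retention assumption, which is why Proposition A grounds it in established knowledge.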
This is a fictitious citation for illustration purposes only.
I had a professor who believed that you could measure anything, even the impact of prayer. For many, that may seem like an arrogant pronouncement, but what he was illustrating was that in measuring fuzzy constructs you have to think outside of the box (and besides, there are actual studies that have measured the impact of prayer).
In much of the work at The Rucks Group, we encounter things that are difficult to measure. We often deal with clients’ understandable angst about measuring key nebulous variables, such as changes in a coordinated network, the impact of adding a new role like a coach or navigator, or the impact of outreach activities to increase interest in a particular field.
Whether or not you use a counterfactual survey when measuring difficult-to-measure variables, it is essential to build a case that the intervention is making a difference through the “preponderance of evidence.” There is rarely a single magic bullet. The evidence, instead, usually comes from multiple observable outcomes. In legal terms, it is akin to building a circumstantial case.
With preponderance of evidence in mind, our team often talks about “telling the story” of a project. Here are two approaches for effectively “telling the story” in an evaluation context.
Incorporate mixed-methods for data gathering
Using a mixed-methods approach in an evaluation can paint a compelling picture. For instance, many of the projects we work with strive to build relationships with industry partners for their important work in curriculum development. Measuring the changes in industry partners’ involvement as well as the impact of these relationships is very challenging. However, we have found three useful ways to measure industry partnerships. They are
1. conversations with the project team to obtain information regarding the impact of the industry partnerships (e.g., any stories of donations, assistance in identifying instructors, etc.);
2. data from industry partners themselves (gathered either through surveys or interviews); and
3. rubrics for tallying quantitative changes that result from industry partnerships.
Incorporating multiple approaches to data gathering is one way to measure otherwise nebulous variables.
Leverage what is easily measurable
Another common challenge is measuring the broader impact of outreach activities. For one client with this goal, our team struggled to find credible evidence because outreach involved two different audiences: individuals within a grant-funded community and the larger general audience of individuals who may be interested in the work of the grant-funded community.
For some time we wrestled with how to demonstrate successful outreach to the general audience. As we reviewed the available data, it dawned on us that we could leverage the data on visits to the project’s website, because the grant-funded audience had a known size. We made an assumption about how many hits the website would receive if the known community members were to visit it. By subtracting that number from the total website visitors, we arrived at the number we identified as the general audience: individuals from outside the grant-funded community who accessed the project’s website.
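The arithmetic behind that estimate is simple subtraction. Here is a minimal sketch with made-up numbers; the actual visit counts and the assumed visits-per-member rate were specific to the project:

```python
# Made-up numbers for illustration only.
total_site_visits = 12_000     # total visits to the project website (assumed)
known_community_size = 800     # members of the grant-funded community (known in the project)
visits_per_member = 5          # assumed visit rate for known community members

# Estimate how many visits the known community likely accounts for...
estimated_community_visits = known_community_size * visits_per_member

# ...and attribute the remainder to the general audience.
general_audience_visits = total_site_visits - estimated_community_visits

print(f"Estimated general-audience visits: {general_audience_visits}")
```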
We then employed a mixture of methods to combine our audience calculation with other data to tell a cogent story. We have used this approach for other clients, sometimes using Google searches and literature searches to find a number as a reference point.
Hopefully these tips (along with a prayer or two to help with insight) will help the next time you’re confronted with difficult-to-measure variables.
Far better an approximate answer to the right question … than the exact answer to the wrong question.
— John Tukey, Statistician
If I had an hour to solve a problem and my life depended on the solution, I would spend the first 55 minutes determining the proper question to ask, for once I know the proper question, I could solve the problem in less than five minutes.
— Albert Einstein, Physicist
Much of what we do, at both the individual and organizational levels, is driven by questions. Questions are the lens through which we see what is and is not possible. It has been my experience that teams and organizations, regardless of whether they are working to improve student success or to address workforce demands, sometimes go astray when they seek answers to the wrong questions.
This error generally happens not because of lack of intelligence, work ethic, or even passion, but because those working on the problem respond too quickly to the high pressure for a solution. The perception that they have to hurry often results in teams moving too quickly from the problem-space into the solution-space because we are often metaphorically “building the plane while we are also flying it.”
This pressure is keenly felt when attempting to evaluate an initiative. Consequently, the focus generally is on “What can we measure?” which on the surface seems like exactly the question that should be asked. But I have found that frustrations mount when the question of what to measure becomes the focus of the evaluation before the project team and evaluator together address other important questions, such as “What do we want to know?”
One time I had a client project team grappling with what should be measured for the evaluation to demonstrate project outcomes. Rather than dwelling on their dilemma about what to measure, I asked a series of questions such as “What do you want to learn about your project?” “How does this project change behavior?” “What do your stakeholders want to know?” As team members answered those questions, I pointed out how their responses led to what they really should be measuring regardless of how difficult it could be to obtain relevant data.
This experience reminded me that once you focus on what people want to learn from an intervention, it is easier to figure out how to measure outcomes.
My advice is to not skip over the questions of what you want to learn. Sure, those questions can be challenging because of a fear that those items cannot be measured. But doing this deeper thinking up front avoids angst at the end about inadequate data or measures that lack meaning, and it often reveals novel ways of measuring outcomes that may at first have seemed impossible to measure. Yes, the preliminary work will take time, but the benefits are well worth it.
*Portions were originally published in the October 2012 issue of Dayton B2B Magazine.
Contextual information plays an important role in interpreting findings. Many of us have experienced this when a child has come home and said they have gotten 43 points on a test. Was it 43 out of 45, 100, or some other point system? Depending on the response, there is either praise or a very serious conversation.
In evaluation and research the same need for context to interpret findings exists. But how you get to that context can vary widely.
One common approach to creating context is to use a pre-/post-test design. The purpose of a pre-/post-test is to compare what was occurring before an intervention to what is occurring after it, focusing on particular outcome measures.
One challenge with the pre-/post-test design is that respondents’ standards of judgment may shift because of the intervention itself. With additional information, your perception of what “good” is, and of how good you are, can change. This shift can result in similar pre-intervention and post-intervention responses.
One solution to this problem, which our team has successfully incorporated into much of our work, is the use of a counterfactual survey, also called a retrospective survey. In these surveys, respondents are asked to consider, at the same time, their current attitudes or perceptions and their attitudes or perceptions prior to participating in the intervention. In this way respondents are able to make their own adjustments about how they perceive the intervention.
To see how a counterfactual survey looks in practice, let’s examine one of the first projects in which we incorporated this approach.
In this project, STEM academic administrators were participating in a year-long professional development opportunity to enhance their leadership skills. Prior to participating in any activities, we disseminated a survey for participants to rate on a scale of 1 (least like me) – 7 (most like me) their self-perceptions as a leader. Consistent with a traditional pre-/post-test, participants were then asked to complete the survey at the end of the professional development opportunity as shown in the figure below.
To incorporate the counterfactual survey, after participants answered items about how they currently perceived themselves as leaders, they were presented with items asking how they would have rated themselves before participating in the professional development opportunity. The counterfactual design therefore looks like this:
It should be noted that a counterfactual design does not require including a pre-test questionnaire; in this situation we just happened to do so.
What is interesting is that the pre-/post-test responses on several items were very similar, which is not an uncommon occurrence (selected items presented below).
However, when you add in the counterfactual responses, an interesting pattern emerges: respondents rated their pre-intervention selves lower in retrospect than they actually had at the time.
In follow-up interviews with participants, it was apparent that the standard that participants used had indeed shifted. In other words, they didn’t know what they didn’t know and so rated themselves higher on items before the intervention than after learning more about leadership.
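To make that response shift concrete, here is a hedged sketch with fabricated 1–7 ratings (not the project’s actual data) showing how flat pre-/post-test means can hide a gain that the counterfactual items reveal:

```python
import numpy as np

# Fabricated 1-7 self-ratings for eight participants (illustration only).
pre       = np.array([5, 6, 5, 6, 5, 6, 6, 5])  # traditional pre-test
post      = np.array([5, 6, 6, 6, 5, 6, 6, 5])  # post-test: almost unchanged
retro_pre = np.array([3, 4, 3, 4, 3, 4, 4, 3])  # counterfactual "then" ratings

# Pre vs. post suggests little or no change...
print(f"pre mean: {pre.mean():.2f}, post mean: {post.mean():.2f}")

# ...but the counterfactual exposes the shifted standard: participants now
# judge their former selves lower, revealing growth the pre-/post-test masked.
print(f"retrospective-pre mean: {retro_pre.mean():.2f}")
print(f"perceived gain: {post.mean() - retro_pre.mean():.2f}")
```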
A counterfactual survey holds a lot of promise, particularly in conjunction with other data points. It is appropriate only for attitudinal or perception data, not for objective measures of skill or knowledge. But utilizing a counterfactual survey may serve to illuminate changes that would otherwise go undetected.
I love celebrating Pi Day, largely because it’s such a gloriously geeky thing to do! What made it even “geekier” was the cool Pi pen holder (using about the first 300 digits of Pi) that The Rucks Group team made through additive manufacturing (3D printing) to celebrate a team member’s birthday. Why?
We have the pleasure of working with Iowa State University on a project funded by the National Science Foundation Division of Engineering Education and Centers to “promote a platform to bring together a network of under-represented minority (URM) women in engineering” towards increasing participation in advanced manufacturing and towards career advancement for URM women faculty in engineering. As part of that work, I participated in a 2 ½ day workshop in October that covered a variety of topics including several presentations on additive manufacturing.
I shared many of those advances in additive manufacturing with the team and got everyone excited about the topic as well. Naturally, we needed to experience it firsthand, so we went over to our local 3D printing bar (yes, there is one just around the corner from our office, and yes, they serve beer) and made our Pi pen holder.
In much of our evaluation work, we strive to gain a deep understanding of the subject to optimally implement the evaluation. For this project, it brought together all our geeky tendencies!
If I could do one thing to change the world, it would be to teach a critical mass of people how to engage in active listening. Active listening is when an individual is fully attentive to the verbal and nonverbal messages that a communicator is sharing. It means not preparing what you are about to say, and not assuming you know what the person is about to say before it is said. Instead, it involves pausing to take in each message and summarizing that message before responding. If you’re actively listening, the conversation will look and feel different. The response time to what someone is saying will be slower. For me, I usually need a notebook to write down my thoughts so I can focus on the speaker rather than rehearsing my thoughts while apparently listening (or, worse yet, jumping into the middle of their sentence … ugh!).
Not too long ago, I had an “aha” moment about the transformative nature of active listening. I was invited to facilitate a group of 20 individuals working to standardize evaluation expectations for their grantees. After allowing each individual to share their expectations for the work session, I began to engage the group in conversation. I listened intently to understand what was said. When there was a natural pause, I would summarize what was said by saying, “What I heard you say was … is that correct?” or “I understand that … is very challenging” or “It sounds like you’re very excited about …” The group naturally was able to come to agreement on several key components. Of course, there were a couple of tense moments in which someone would share a thought that had probably been shared a dozen times, and someone else would quickly respond, as they probably had a dozen times. In those moments, I summarized the concern from each person’s perspective. Often the concerns shared by one person were supported by the “opposing” person. In other words, it sounded and was treated like a disagreement, but they really weren’t disagreeing.
During the entire session, I didn’t really say much. I even went to the client and explained that while it may not have looked like I was doing much, I was actually doing a lot (and she said she knew I was)! Once the work session was over, a number of individuals shared that the work session was one of the most productive meetings they had ever had!
Why was active listening transformative? Because often in arguments and conflicts, the real problem is that the individuals are not understood. When we feel understood and supported in our perspective, we can let go of our position and focus on solving problems. Moreover, when we truly understand someone else’s concerns, better solutions are identified. One of the reasons that clients report having a positive experience with our team is that we actively listen to them and can offer better solutions to their struggles and better approaches to telling their story. I suppose doing so is our way of actually taking steps to change the world, one client at a time.
This year is exciting for many reasons: the 250th anniversary of the publication of the Britannica, the 50th anniversary of Mister Rogers, the 23rd Winter Olympics (yay, Curling!), and the 10th anniversary of The Rucks Group! Considering that the firm was started in 2008, just months before the economy went off a cliff, that’s the milestone I am particularly excited about!
While starting a business during a recession comes with challenges, as you can imagine, there are also some benefits. One benefit is that it forced us to focus on certain principles. A few years ago, I articulated these principles; they are reflected more and more in our hiring, our reviews, and our processes, and they have evolved into our core values:
* Contribute to a fun and productive environment
* Bring value to the client
* Provide academic excellence at business level speed
* Grow individually and collectively
I believe that the more we focus on and reflect these values, the more successful we are. How are we operationalizing these core values (spoken like an evaluator)? Or, phrased differently, how do you see our core values in practice? Here are just a few examples:
1. Twice a year, we set a full day aside to reflect on the question: What are we doing well and where can we improve?
2. Throughout the year we participate in frequent lunch ‘n learns to develop our skills covering client management, data visualization, evaluative best practices, R, reporting, and time management topics.
3. We completed an extensive undertaking to outline our processes to ensure consistency of quality and timely completion of deliverables as we continue to grow.
4. We don’t let you forget about us. We reach out to you and have processes in place to serve as reminders for deliverables if we haven’t spoken to you at least once a month.
5. We continue to strive to reduce the stress and anxiety around gathering evidence of outcomes for our clients.
And we continue to identify ways that we can get better at what we do. Because at the end of the day, our job is to make our clients’ lives a little bit easier.
Thank you to our clients who have allowed us to partner with them on a vast array of projects! We look forward to another 10 years!
Starting a new project is a lot like going on a road trip to a new place. We know what our destination is; it’s just the actual getting there that’s a bit fuzzy. That’s why we use GPS (or maps, if you prefer the old-fashioned route!). I like to think of logic models as the GPS of a project because this tool provides a way to navigate toward the outcomes of a project.
A logic model is a pictorial representation that conveys the relationship between inputs, activities, and anticipated outcomes. The logic model is a living document that changes throughout the life of the project as a deeper understanding of the connection between activities and outcomes emerges. As the figure below demonstrates, these components usually flow from left to right, with some sort of visual connectors, such as arrows, to direct readers through the model.
A strength of the logic model lies not solely in the model itself but also in the internal alignment that occurs through its development. The process of developing a logic model pushes everyone toward a more systematic, theory-driven understanding of how actions are connected to desired outcomes. Creating the logic model provides an opportunity for team members to make assumptions explicit and to build consensus around project assumptions in a way that generally doesn’t occur naturally within the grant management process. It is not atypical, during work sessions to develop a logic model, to hear phrases such as “I was envisioning this differently” or “I assumed that we meant…”
It is important to note that a logic model is only as good as the information provided: If the individuals involved in the development of the tool don’t feel comfortable sharing their thoughts and comments, then some of these benefits will not be realized. So creating a safe space for stakeholders to share is an important part of developing a logic model.
After a logic model has been articulated, it is a great resource throughout the project’s life cycle. Project teams can rely on their logic model as a way to keep the project on track, by using it as a periodic checkpoint. By continually referring back to the logic model, team members will be able to identify leading and lagging indicators. Additionally, the logic model has utility as a decision-making tool. Logic models can help predict otherwise unintended consequences, allowing for teams to make informed decisions.
On a practical note, there is only so much information that can and should be represented in a logic model. In other words, there needs to be a balance between simplicity and completeness in order to optimize the utility of a logic model.
Planning and executing a project is not an easy task. So if you’re struggling to find your way with a project, developing a logic model may help. Remember, you wouldn’t attempt a road trip without directions to your destination – so simplify your project by adding some project navigation.
If you’ve been working with grants for a while, you have probably noticed that the federal grant funding process is changing particularly as it relates to evaluation. In November of last year, I had the pleasure to work with Dr. Kelly Ball-Stahl and Jeff Grebinoski of Northeast Wisconsin Technical College (NWTC) on a presentation at the annual American Evaluation Association meeting regarding our shared experience of navigating these changing requirements.
Over the past few years, we’ve noticed remarkable changes in evaluation expectations. For instance, a few years ago examining outputs was enough, but now there is a much greater emphasis on outcomes. Similarly, there was a time when there was a large conceptual distinction between evaluation and research; those clear dividing lines are starting to blur through an increasing emphasis on evaluation rigor within the federal funding space, particularly from the Department of Education, the Department of Labor, and the National Science Foundation. It should be noted that this shift is not occurring simply to make evaluation and grant management more challenging. The underlying motivation is to ensure that the right interventions are being implemented to help the greatest number of individuals.
So, what are some consequences of these changes? Grant writers are increasingly involving evaluators in the evaluation planning of these types of grants because we have the expertise to be able to design experimental and rigorously designed quasi-experimental evaluations. For instance, at The Rucks Group we routinely work with grant writers to write evaluation plans for federal funding sources.
Another important consequence is that institutions have to strengthen their data collection systems. Data collection systems are all the entities within an organization that are involved in gathering facts, numbers, or statistics and effectively communicating this information to the right individuals. Building a fully functional data collection system is challenged by the need for the “system” to have a shared data language and for the institutional research department to have the resources to respond appropriately to the data demands.
Although the changing requirements of federally funded grants do pose challenges for interested organizations, these changes also provide extraordinary opportunities. By increasing the rigor of evaluating projects, we are also creating a deeper understanding of what works. Through these efforts, our work collectively will improve the lives of as many individuals as possible.
Survey developers typically spend a great deal of time on the content of questionnaires. We struggle with which items to include, how to ask each question, whether an item should be closed-ended or open-ended; the list of considerations goes on. After all that effort, we generally spend less time on a smaller aspect that is incredibly important to web surveys: the subject line.
I have come to appreciate the extent to which the subject line acts as a “frame” for a survey. In simplistic terms, a frame is how a concept is categorized. Framing is the difference between calling an unwanted situation a challenge versus a problem. There is a significant literature suggesting that the nature of a frame will produce particular types of behaviors. For instance, my firm recently disseminated a questionnaire to gain feedback on the services that EvaluATE provides. As shown in the chart below, we initially received about 100 responses. With that questionnaire invitation, we used the subject line EvaluATE Services Survey. Based on past experience, we would have expected the next dissemination to garner about 50 responses, but we got closer to 90. So what happened? We had started playing with the subject line.
EvaluATE’s Director, Lori Wingate, sent out a reminder email with the subject line, What do you think of EvaluATE? When we sent out the actual questionnaire, we used the subject line, Tell us what you think. For the next two iterations of dissemination, we had slightly higher than expected response rates. For the third dissemination, Lori conducted an experiment. She sent out reminder notices but manipulated the subject lines. There were seven different subject lines in total, each sent to about 100 different individuals. The actual questionnaire disseminated had a constant subject line of Would you share your thoughts today? As you see below, the greatest response rate occurred when the subject line of the reminder was How is EvaluATE doing?, while the lowest response rate was when Just a few days was used.
These results aren’t completely surprising. In the 2012 presidential election, the Obama campaign devoted much effort to identifying subject lines that produced the highest response rate. They found that a “gap in information” was the most effective (thanks to Alejandro, our intern, for doing the background research). Using this explanation, the question may emerge as to why the subject line Just a few days would garner the lowest response rate, since it, too, presents a gap in information. The reason is unclear. One possibility is that the incongruity between the sense of urgency implied by the subject line and the importance of the email’s topic to respondents made them feel tricked, so they opted not to complete the survey.
Taking all of these findings together tells us that a rose by any other name would not smell as sweet; what something is called does make a difference. So when you are designing your next web survey, make sure crafting the subject line is part of the design process.
Originally posted on February 25, 2015 to https://www.evalu-ate.org/blog/rucks-feb2015/