Example of Columbus State Community College Pivoting to a Virtual Career Fair

We are all deeply aware of the need to “pivot” in these ever-evolving times. Determining how best to change direction can be challenging. We learned about this excellent example of pivoting from an in-person career fair to a virtual one through our work with Columbus State Community College. With their permission, we share this resource, including the language used to disseminate information to our audience.



Hello Educator Friends!

We are functioning in a new norm. This means new approaches to education and new opportunities for your students’ futures. As a result of COVID-19, education is quickly transitioning to the virtual arena, which means exploring new approaches to traditional education opportunities such as career fairs, college visits, and classroom presentations. In the wake of this pandemic, now more than ever, Central Ohio is primed for a different kind of worker, one who is equipped with the hands-on skills needed to succeed in the STEM, Healthcare, and Hospitality fields.

The attached “Virtual Career Fair” digital handbook is meant to provide students and educators a resource for career exploration in an online space. Now more than ever, students have the capacity from home to explore future opportunities and begin setting a plan in motion. Specifically, this virtual career fair will explore STEM, Healthcare, and Hospitality career fields while highlighting corresponding experiential learning programs offered at Columbus State.

The digital handbook will guide students through a career exploration assessment, talk students through their results, and provide additional information about the following programs:

  • IT Flexible Apprenticeship
  • Modern Manufacturing Work-Study
  • Hospitality Management and Culinary Arts
  • Health Careers Opportunities

Within the handbook you will also find points of contact for educators who want to bring more information about these programs to their students, and contact information for students who want to talk directly with program coordinators. Are your students ready to take the next step to embrace all that Columbus State has to offer?

Admissions is Here to Support You!
At Columbus State, we never stop working to help you meet your educational and career goals – even from a distance. In response to COVID-19 and social distancing requirements, we’ve moved all admission, teaching and learning, and student support services to remote delivery. That means you can visit, apply, complete orientation, see your advisor and attend classes online, over the phone and through email.

We invite you to join us for our Virtual Events and One-on-One appointments with an Admissions Representative, which you can access without leaving your house. Visit our Admissions webpage for up-to-date information and to submit your application today.

5 Data-Informed Tips for Transitioning from Remote Learning to Online Learning

The abrupt move from in-person to virtual instruction impacted several of the program evaluation projects that The Rucks Group team works on. In response, we developed survey items that instructors could disseminate to understand the impact of this transition on students and to aid in decision-making moving forward. We have started to analyze the survey findings to understand students’ experience of the transition to non-face-to-face (non-f2f) instruction. The key emerging finding is that students like, or are at least OK with, online learning but not remote learning.

Remote and online learning are two distinguishable types of virtual or non-f2f instruction. Remote learning is the mere use of technology as a platform, whereas online learning involves a more thoughtful approach to instructional design to optimize learning.

Our analysis of these new data found that instructors who were able to deliver virtual courses resembling the careful design and planning of multi-dimensional online learning experiences were rated more favorably by students than instructors who were not able to utilize technology as a pedagogical tool beyond a delivery system. Specifically, our emerging findings point to five tips for instructors as they consider designing courses for summer and fall terms.

1. Ensure high levels of communication and responsiveness to students.

Students’ responses to surveys suggest that what they found most effective in the non-f2f learning environment were “communicative and responsive” instructors. Responsiveness could have been through email, phone, or the availability to talk before or after class via the technology used to deliver the course. Conversely, students rated instructors who were not highly communicative as the least effective.

2. Allow more time for questions.

Students who had not previously taken a virtual course reported that they preferred f2f instruction because it is a more optimal learning environment. One reason for this preference is that it is easier for students to ask questions in person. To translate the “ease” of asking questions to a virtual environment, we are finding that instructors need to allow more time (perhaps what feels like an unnaturally long period of time) for questions because of the time lag in technology.

3. Facilitate more student-to-student interaction.

Based on the emerging data, another challenge of the non-f2f learning environment is the diminished natural or informal learning that occurs among students. This could be remedied by having students introduce themselves at the beginning of each class, using the break-out room function in Zoom, or using other online options for small-group meetings that allow students to interact with each other. Responses about how to incorporate these types of interactions were mixed; however, they suggest that students prefer organic connections with classmates rather than required interactions.

4. Include helpful supplemental resources.

Students particularly appreciated supplemental resources such as videos, PowerPoint slides, and lecture recordings, but only if these resources were perceived as “helpful.” Based on students’ responses, “helpful” is interpreted as resources that truly aid in the understanding of learning objectives.

Supplemental resources were considered least effective when homework and practice materials were not related to the chapter content, when videos did not cover the topics that students needed, when video options were insufficient, or when items could not be accessed because of technical problems.

5. Set clear expectations.

When instructors were able to set clear expectations for deadlines and for use of the related technologies, students considered this an effective approach. For example, some students reported that quizzes were unfair or not appropriate. Based on students’ overall responses, this may have been because clear expectations about what topic areas would be covered on examinations were not set.

We know that teaching non-f2f in response to COVID-19 has been extremely challenging and took considerable energy because of the haste of these transitions and the health concerns that lingered in the background. It is important to note that university and community college students alike reported appreciating instructors’ efforts in making this transition.

We hope these tips based on actual emerging data help instructors reassess their non-f2f teaching and move toward creating effective online learning environments.

For additional advice visit https://www.chronicle.com/interactives/advice-online-teaching

For more on the rapid transition to virtual instruction visit https://er.educause.edu/articles/2020/3/the-difference-between-emergency-remote-teaching-and-online-learning


Potential Questionnaire Items for In-Person to Remote Courses

In response to the COVID-19 pandemic, higher education institutions had to quickly convert in-person courses to remote courses. Understandably, the experience of students during this term may be quite unique. Many of the standard items included in end-of-course surveys may not adequately capture these unique experiences. Provided in this document is a sampling of potential items that can be included in end-of-course surveys to gather student experiences in order to make appropriate plans for summer and fall terms. The items are grouped by topic and are not intended to be used as a whole; single items can be used as appropriate. For additional questions, contact us at lrucks@therucksgroup.com or 937-219-7766 (during normal operations call 937-242-7024).

Coffee Break Webinar: Now what? How to Use Evaluation Findings for Project Continuous Improvement

Thursday, April 30, 2:00pET – 2:30pET

You’ve won the grant and now you need to do a program evaluation. There are two general mindsets toward this work: “proving” to funders what you’ve done, or “improving.” The Rucks Group recommends you focus on improving because it overlaps with proving and uses information from the program evaluation to advance your efforts. Having an improving mindset also completely changes how you approach the program evaluation and reduces much of the angst around it.

This Rucks Group webinar provides insights that will help you realize the true value of evaluation through the lens of continuous improvement, particularly given the COVID-19 pandemic.

This 30-minute webinar covers the following:

  • How to demonstrate the importance of implementing the program evaluation starting on Day 1 of the grant.
  • Introduction to the concept of developmental evaluation and how it relates to the improving mindset.
  • Best practices to rally a team to adopt an improving mindset and embrace the true value of evaluation.

Webinar For Evaluators: Partnership Assessment Tool

Looking to better measure your clients’ partnerships?  Start by helping refine our partnership assessment tool!

The Rucks Group and the NSF ATE Working Partners Research Project team are co-developing a rubric intended to better measure the depth, breadth, and impact of industry partnerships on ATE projects and centers. After soliciting and integrating feedback from educators and PIs, we are turning to the evaluation community to gather feedback from another perspective.

We invite you to join us for a one-hour webinar on Wednesday February 26th at 2 pm EST to provide your expert feedback regarding this tool.  

If you are interested in participating:

  • The partnership assessment tool, call agenda, and discussion questions will be emailed to participants a week before the call.
  • During the webinar, the assessment tool will be presented, followed by a guided discussion.
  • At the conclusion of the webinar, we will ask that you complete a short survey indicating your initial thoughts and suggestions for rubric improvement.

The opportunity to take part in a pilot study, which will provide deeper feedback on the tool, will also be presented during the call.

Click the link below to let us know your level of interest. Please complete by Tuesday, February 18. We hope you will consider joining us to provide your expertise and insights to help improve this instrument and increase its utility. 

Introduction to a Partnership Assessment Tool

Paying Students to Attend Tutoring Produces Interesting Findings

Increasing the participation of students in STEM fields often requires building skills within those areas by offering tutoring services. It is not uncommon, however, for projects to report challenges in actually getting students to use the tutoring services. In our project with the STEM Success Center (SSC) at Central State University (CSU), one approach is emerging as a potentially successful strategy.

CSU, a public, historically black university in Wilberforce, Ohio, received U.S. Department of Education funding to provide a comprehensive suite of services for STEM students to prepare them for their post-undergraduate careers and educational opportunities. The services offered by the project include tutoring, advising, mentoring, experiential learning opportunities, and professional development. The project focuses on students enrolled in 10 gateway STEM courses. Given that lack of student preparation contributes to low retention, persistence, and course passage rates, the project encourages freshmen to utilize as many of the services as possible. As part of the project, some students were offered stipends for attending tutoring.

Consequently, The Rucks Group is studying the impact of tutoring services under three different conditions:  

  • Group 1:  Students who attended and received at least one stipend of $50 (n=32).
  • Group 2:  Students who attended tutoring, but did not receive a stipend (n=56).
  • Group 3:  Students who did not participate in SSC tutoring during the semester (n=222).

Students who received the stipend were required to participate in at least two hours of tutoring sessions for seven weeks. They were also required to meet with a STEM Success Manager on a monthly basis and participate in other STEM community-building events organized by SSC.

To assess the impact of the SSC’s services on grade performance, final grades from these courses were analyzed (see Figure 1 below). 

Figure 1. Distribution of Spring 2019 overall grades by tutoring attendance.

While these findings are preliminary and this was not an experimental design, they do provide several interesting leads for further study and similar experiments.

First, students who received tutoring passed their courses at higher rates regardless of whether they were paid or not. Second, students who received the stipend for attending tutoring sessions had the lowest failure rates and earned the highest percentage of As. This seems to speak to the importance of “dose,” that is, how much tutoring was actually needed to improve students’ performance in the course. Keep in mind that students who received a stipend attended five times more tutoring sessions than those who participated in tutoring but did not receive a stipend. These findings also speak to the nature of what type of support is needed, because additional support services were also provided to students.
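As a back-of-the-envelope illustration of the kind of comparison behind Figure 1, the sketch below uses made-up pass counts (not the project’s actual data) to show how pass rates across the three groups could be tallied.

```python
# Minimal sketch with hypothetical pass counts; these are NOT the project's actual results.
group_sizes = {"stipend": 32, "tutoring_only": 56, "no_tutoring": 222}
passes = {"stipend": 29, "tutoring_only": 45, "no_tutoring": 150}  # illustrative numbers

for group, n in group_sizes.items():
    rate = passes[group] / n
    print(f"{group}: {passes[group]} of {n} passed ({rate:.0%})")
```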

We still have other questions about the implications of this project that we hope the next round of data will help to answer.

I have personally never been particularly fond of the idea of having to pay students to attend tutoring, but these initial findings are compelling. However, as I consider these findings and the qualitative data regarding barriers to students’ participating in tutoring that we have found across several projects, it makes sense to provide students with the ability to attribute tutoring to “being paid” versus “needing help.”  If paying small stipends results in students accessing tutoring frequently enough to gain the momentum they need not just to pass courses but excel in them, then it may be a viable way to help students get the requisite skills they need to persist in STEM majors. 

“Simplicity came not by ignoring complexity, but by conquering it.” – Steve Jobs

I have been known to obsess over small things. Once I had a team member comb through a data set comprising thousands of individuals to understand why three people, who made up less than one percent of the data, were missing when we cut the data a certain way. Their presence or absence did not have an impact on the results. But I have learned that while some data anomalies are irrelevant, there are times when a small anomaly is a “canary in the coal mine” because it is actually an indication of something much bigger.

That is for me the meaning of the Steve Jobs quotation: “Simplicity came not by ignoring complexity, but rather by conquering it.”

Sometimes in seeking, interpreting, and understanding data, there’s a tendency to simplify or ignore the complexity because it may contradict what we deem to be true.

However, it is in the complexity, especially when data patterns are not obvious, that I have found the most interesting revelations emerge. Sometimes what we are looking at is only complex because we are using the wrong mental model to understand it.

So, for me, “conquering” means understanding the truth that is being revealed. When that happens, simplicity emerges.

Each year, we try to create a fun and interesting New Year’s card that reflects a component of the work that we do. In this year’s card, we highlighted the Steve Jobs quotation, “Simplicity came not by ignoring complexity, but rather by conquering it.”

To see our 2020 card, click here.

If you’re not receiving our New Year’s cards, please fill out the information on the link below. Happy New Year!

Partnership Rubric: A tool for measuring industry relationships

Many of the grant-funded projects we provide evaluation services to have an objective to expand industry/college partnerships. Because of the lack of available instruments to measure changes in these relationships, we developed the Partnership Rubric. The Partnership Rubric was designed as a tool to quantify the involvement of outside partners in a given project or center by measuring the changes in the number of, and level of involvement of, those partnerships in targeted areas. While the instrument was useful in quantifying changes, a key limitation of the rubric was that it lacked validation.
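As a rough illustration of what quantifying the number and level of partner involvement can look like, here is a minimal sketch; the involvement levels and weights below are hypothetical and are not the rubric’s actual categories or scoring.

```python
# Hypothetical involvement levels and weights; the actual Partnership Rubric's
# categories and scoring are not reproduced here.
LEVEL_WEIGHTS = {"aware": 1, "engaged": 2, "collaborating": 3}

def partnership_score(partner_levels):
    """Tally a simple weighted score across all reported partners."""
    return sum(LEVEL_WEIGHTS[level] for level in partner_levels)

year_1 = ["aware", "aware", "engaged"]                     # 3 partners
year_2 = ["engaged", "engaged", "collaborating", "aware"]  # 4 partners
print(partnership_score(year_1), partnership_score(year_2))  # 4 -> 8
```

Comparing such tallies across reporting periods is one simple way to express change in both the number and the depth of partnerships.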

Beginning in 2018, The Rucks Group and the National Science Foundation (NSF) Advanced Technological Education (ATE) Working Partners research project (DUE #1501176) teams began collaborating to address this key limitation and ultimately to widen the dissemination of the tool. As context, the Working Partners research project employs a mixed-methods approach to document and examine community college/industry partnership models, gain a better understanding of how these models are used in real-world situations, and gather data about impacts of the partnerships.

We have begun piloting the rubric to gather feedback about its utility and areas for improvement. At the 2018 NSF ATE Principal Investigators’ Conference, we facilitated a roundtable discussion to introduce the rubric and gather this feedback. Based on that feedback, a number of revisions were made, and we introduced the revised version at the 2019 High-Impact Technology Exchange Conference (Hi-TEC) in July. We are highly encouraged by the response and believe that this tool will meet a critical need for many projects. If you are interested in learning more, download the Partnership Rubric and click on this link to provide us with information on your experience with the instrument.

Student Success Part II – How is it defined and what promotes it?

While our evaluation firm focuses on measuring the effectiveness of initiatives, ultimately our goal is to identify effective student success practices. While serving as the evaluator of John Carroll University’s Aligned Learning Communities and Student Thriving: A First in the World Project, Terry L. Mills, Ph.D., the project director, shared with our team a blog that he had written about how student success is defined.

In the first part of this two-part blog on student success, we shared Dr. Mills’s post, in which he provides his perspective (click here to read). Be sure to read all the way to the conclusion, where he lists questions to consider when defining student success. Dr. Mills is assistant provost for Diversity and Inclusion and a sociology professor at John Carroll University. He applied for the First in the World grant, and John Carroll University was one of 17 institutions to receive this grant from the U.S. Department of Education in 2015.

In this second blog on the topic, I am providing the perspective of a researcher and program evaluator on this key issue.

The current post-secondary educational landscape is vastly different than a few decades ago. The students seeking a post-secondary education are far more diverse now than in previous generations. The diversity is not just based on demographic factors but also on educational motives and academic preparedness. Take, for instance, the many family-sustaining jobs that historically required only a high school equivalency but now require some form of post-secondary credential. Four-year institutions, which traditionally saw few students working upwards of 20 hours per week, are now witnessing an increased number of students who need to work for more than just discretionary funds. From my own teaching experience, it was not uncommon for a particular community college course to have half of the students in the midst of a career transition and already holding a bachelor’s degree.

This diversity introduces complexity that is not fully reflected in the prevailing definition of student success. It is encouraging that there is an awareness of the limitations of the current “accepted” definition and a recognition that it needs to reflect a more student-centered approach, one that examines engagement and thriving, not just academic performance. Hopefully, these conversations will lead to the extensive system-wide changes needed to fully embrace a more flexible definition, because as it stands, the definition affects how schools are measured, how they are funded in many states, and how students are able to obtain financial aid. In the absence of a system-wide change, a “work around” may be to consider the difference between how student success is defined and what promotes student success. Factors such as engagement, thriving, and student-centeredness can be conceptualized as leading indicators of student success within the current framework. Admittedly, this definition is not in alignment with the goal of every student, which is why the conversation on the definition of student success should continue. However, if institutions are able to incorporate these components within the support services provided to students, the effect on outcome measures, for degree-completing students and for all students, could be substantial.

Completing a National Science Foundation Freedom of Information Act Request

You probably have heard of a FOIA (Freedom of Information Act) request, but it was probably in the context of journalism. Often, journalists will submit a FOIA request to obtain information that is not otherwise publicly available, but is key to an investigative reporting project.  

There may be times when your work could be enhanced with information that requires submitting a FOIA request. For instance, while working as EvaluATE’s external evaluator, The Rucks Group needed to complete a FOIA request to learn how evaluation plans in ATE proposals have changed over time. We were also interested in documenting how EvaluATE may have influenced those changes. Toward that goal, we sought to review a random sample of ATE proposals funded between 2004 and 2017. However, in spite of much effort over an 18-month period, we still needed to obtain nearly three dozen proposals. We needed to get these proposals via a FOIA request primarily because the projects were older and we were unable to reach either the principal investigators or the appropriate person at the institution. So we submitted a FOIA request to the National Science Foundation (NSF) for the outstanding proposals.

For me, this was a new and, at first, mentally daunting task. Now, after having gone through the process, I realize that I need not have been nervous because completing a FOIA request is actually quite simple. These are the elements that one needs to provide:

  1. Nature of request: We provided a detailed description of the proposals we needed and what we needed from each proposal. We also provided the rationale for the request, but I do not believe a rationale is required.
  2. Delivery method: Identify the method through which you prefer to receive the materials. We chose to receive digital copies via a secure digital system.
  3. Budget: Completing the task could require special fees, so you will need to indicate how much you are willing to pay for the request. Receiving paper copies through the US Postal Service can be more costly than receiving digital copies.

It may take a while for the FOIA request to be filled. We submitted the request in fall 2018 and received the materials in spring 2019. The delay may have been due in part to the 35-day government shutdown and a possibly lengthy process for Principal Investigator approval.

The NSF FOIA office was great to work with, and we appreciated staffers’ communications with us to keep us updated.

Because access is granted only for a particular time, pay attention to when you are notified via email that the materials have been released to you. In other words, do not let this notice sit in your inbox.

One caveat: When you submit the FOIA request, there may be encouragement to acquire the materials through other means. Submitting a FOIA request to colleges or state agencies may be an option for you.

While FOIA requests should be made judiciously, they are useful tools that, under the right circumstances, could enhance your evaluation efforts. They take time, but thanks to the law backing the public’s right to know, your FOIA requests will be honored.

To learn more, visit https://www.nsf.gov/policies/foia.jsp

A version of this blog was published on EvaluATE’s website (http://www.evalu-ate.org/blog/rucks-july19/) on July 15, 2019.

Correlation vs. Causation: Understand the Difference for Better Interventions

If you have taken a research methods course at some point, you may remember the mantra “correlation does not imply causation.” People say they understand the difference between correlation and causation, but when I hear them talk, I can tell that many of them don’t.

As a quick refresher, correlation simply refers to two variables co-varying together: as one variable increases, so does the other (positive correlation, see graph on the left), or as one variable increases, the other decreases (negative correlation, see graph on the right). Causation, on the other hand, can be thought of as a specialized correlation in which two variables co-vary because one of them is driving the other.

Figure 1. Simplified representation of a “positive” and “negative” correlation.

The distinction between correlation and causation is clearer when we look at variables that are correlated simply by chance. For example, a correlation exists between the number of letters in the winning word of the Scripps National Spelling Bee and the number of deaths due to venomous spiders (Vigen, 2015). Basically, as the number of letters in the Scripps winning word increased, so too did the number of deaths that year from venomous spiders.

If your reaction to that correlation is that the two cannot be related because there is no reason for the correlation to occur, what you are actually trying to establish is a causal relationship. In that case you are correct: there is no causal relationship between these two variables.

The lack of a causal relationship is clearer when two variables are in no way conceptually related. However, a causal relationship still has not been established even when there is a correlation established between two variables that appear to be related.

Take, for example, the obvious correlation between class attendance and course performance. The two variables are correlated such that course performance tends to increase with class attendance. If we do not address the possibility of other variables, we cannot say with certainty that class attendance increases performance because class attendance could be a proxy variable for course engagement, for instance, or some other circumstance.
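A small simulation can make this concrete. In the sketch below, which uses entirely fabricated data, “engagement” drives both attendance and performance, so the two are strongly correlated even though attendance never causes performance in the data-generating process.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical simulation: engagement is the common cause of both variables.
engagement = rng.normal(0, 1, n)
attendance = engagement + rng.normal(0, 0.5, n)   # attendance reflects engagement
performance = engagement + rng.normal(0, 0.5, n)  # performance also reflects engagement

# Attendance and performance end up strongly correlated...
r = np.corrcoef(attendance, performance)[0, 1]
print(f"correlation(attendance, performance) = {r:.2f}")

# ...yet attendance never appears in the process that generates performance,
# so the correlation alone cannot establish that attendance causes performance.
```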

Why is it important to disentangle these two concepts?

Disentangling these two concepts is more than just an interesting intellectual exercise; the distinction is important to achieve optimal outcomes. For example, when making big decisions about what to do to improve student success, we have to be careful that we are pressing on the right levers that will lead to the return on investment. When we think about interventions, the more we understand about the causal variable itself, the better the intervention we will have.

Consider the prevailing understanding that first-generation students are at risk for not completing a degree. It is critical for us to understand what the causal factor is in order to figure out a better approach for helping students who are the first in their families to attend college to persist and complete degrees. Any of the following could be causing the challenge that is “correlated” with a first-generation student not completing a degree: not understanding how to navigate college expectations; not having a strong resource network to troubleshoot issues; or feeling like an “imposter” whose lack of familiarity with campus life can lead to thinking that one does not belong in college.

If we understand what is occurring at the causal level and not simplify or misuse the concept of correlation, then we will be in a better position to design more effective interventions.  

Vigen, T. (2015). Spurious Correlations. New York, NY: Hachette Books.

Obtaining Credible Evidence of “Long” Long-Term Outcomes

Another challenge that The Rucks Group team sees across projects is what we call “aspirational goals.” This phrase is how we refer to goals and objectives that will likely not occur until after a project’s grant funding ends. Many projects have them. The question is: How do you measure them?

We struggled with measuring aspirational goals until, through a conversation with another evaluator, the idea of using the transitive mathematical property to address this challenge created an “aha” moment. 

As you may (or may not) recall from math class, the transitive property is this:

If a = b, and b = c, then a = c.

We can apply this mathematical property to the evaluation of grant-funded projects as well.

If, for instance, a college receives a three-year grant to increase the number of underrepresented individuals in a non-traditional field, progress toward the goal (which is unlikely to occur within the three-year time frame when the first year will be dedicated to implementing the grant) can be gauged using a sequence of propositions that follow the logic of the transitive property:

  • Proposition A = Start with a known phenomenon that is linked to the desired outcome.

    Green and Green (2003) [1] argue that to increase the number of workers in the field, the pipeline needs to be increased.

  • Proposition B = Establish that the project’s outcomes are linked to Proposition A.

    The current project has increased the pipeline by increasing the number of underrepresented individuals declaring this field as a major.

  • Proposition C = Argue that while the project (because of time) has not demonstrated the desired outcome, based on established knowledge it likely will.

    If the number of individuals majoring in the field increased, assuming a similar rate of retention, then there will be more individuals graduating and prepared to work in the field.

By using the transitive property, it is possible to create a persuasive, evidence-based projection: by increasing the number of individuals majoring in the field and entering the pipeline to become workers, the project has instigated the changes needed to achieve its aspirational goals.


[1] This is a fictitious citation for illustration purposes only.

When Fuzzy Wuzzy Isn’t a Bear, But What You Need to Measure

I had a professor who believed that you could measure anything, even the impact of prayer. For many, that may seem like an arrogant pronouncement, but what he was illustrating was that in measuring fuzzy constructs you have to think outside of the box (and besides, there are actual studies that have measured the impact of prayer).

In much of the work at The Rucks Group, we encounter things that are difficult to measure. We often deal with clients’ understandable angst about identifying key nebulous variables such as measuring changes in a coordinated network, the impact of adding a new role like a coach or navigator, or the impact of outreach activities to increase interest in a particular field. 

One approach to measuring difficult-to-measure constructs is through the counterfactual survey (click here to read our blog about counterfactual surveys). 

Whether or not you use a counterfactual survey when measuring difficult-to-measure variables, it is essential to build a case that the intervention is making a difference through the “preponderance of evidence.” There is rarely a single magic bullet. The evidence, instead, usually comes from multiple observable outcomes. In legal terms, it is akin to building a circumstantial case. 

With preponderance of evidence in mind, our team often talks about “telling the story” of a project. Here are two approaches for effectively “telling the story” in an evaluation context.

Incorporate mixed-methods for data gathering

Using a mixed-methods approach in an evaluation can paint a compelling picture. For instance, many of the projects we work with strive to build relationships with industry partners for their important work in curriculum development. Measuring the changes in industry partners’ involvement as well as the impact of these relationships is very challenging. However, we have found three useful ways to measure industry partnerships. They are 

    1. conversations with the project team to obtain information regarding the impact of the industry partnerships (e.g., any stories of donations, assistance in identifying instructors, etc.);
    2. data from industry partners themselves (gathered either through surveys or interviews); and 
    3. rubrics for tallying quantitative changes that result from industry partnerships. 

Incorporating multiple approaches to data gathering is one way to measure otherwise nebulous variables.

Leverage what is easily measurable

Another common challenge is measuring the broader impact of outreach activities. For one client with this goal, our team struggled to find credible evidence because outreach involved two different audiences: individuals within a grant-funded community and the larger general audience of individuals who may be interested in the work of the grant-funded community. 

For some time we struggled to find an approach to demonstrate successful outreach to the general audience. As we reviewed the available data, it dawned on us that we could leverage the data on visits to the project’s website because the grant-funded audience had a known size. We made an assumption about how many hits the website would have if the known community members were to visit it. By subtracting that number from the total website visitors, we arrived at the number we identified as the general audience: individuals from outside the grant-funded community who accessed the project’s website.
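In case it helps to see the arithmetic, here is a minimal sketch; the visit counts and the visits-per-member assumption are invented for illustration only.

```python
# Illustrative numbers only; actual counts and assumptions will differ by project.
total_site_visits = 12_000          # all visits to the project's website in the period
known_community_size = 400          # members of the grant-funded community
assumed_visits_per_member = 6       # assumption about how often each member visits

estimated_community_visits = known_community_size * assumed_visits_per_member
estimated_general_audience_visits = total_site_visits - estimated_community_visits
print(estimated_general_audience_visits)  # 9600 visits attributed to the general audience
```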

We then employed a mixture of methods to combine our audience calculation with other data to tell a cogent story. We have used this approach for other clients, sometimes using Google searches and literature searches to find a number as a reference point. 

Hopefully these tips (along with a prayer or two to help with insight) will help the next time you’re confronted with difficult-to-measure variables. 

Questions Frame the Lens for Answers*

Far better an approximate answer to the right question … than the exact answer to the wrong question.

— John Tukey, Statistician

 If I had an hour to solve a problem and my life depended on the solutions, I would spend the first 55 minutes determining the proper question to ask, for once I know the proper question, I could solve the problem in less than five minutes.

 — Albert Einstein, Physicist

Much of what we do, both at the individual and organizational levels, is driven by questions. Questions are the lens through which we see what is and is not possible. It has been my experience that teams and organizations, regardless of whether they are working to improve student success or address workforce demands, sometimes go astray when they seek answers to the wrong questions.

This error generally happens not because of lack of intelligence, work ethic, or even passion, but because those working on the problem respond too quickly to the high pressure for a solution. The perception that they have to hurry often results in teams moving too quickly from the problem-space into the solution-space because we are often metaphorically “building the plane while we are also flying it.”

This pressure is keenly felt when attempting to evaluate an initiative. Consequently, the focus generally is on “What can we measure?” which on the surface would seem to be the exact question that should be asked. But I have found that frustrations mount when the question of what to measure becomes the focus of the evaluation before the project team and evaluator together address other important questions such as “What do we want to know?”

One time I had a client project team grappling with what should be measured for the evaluation to demonstrate project outcomes. Rather than dwelling on their dilemma about what to measure, I asked a series of questions such as “What do you want to learn about your project?” “How does this project change behavior?” “What do your stakeholders want to know?” As team members answered those questions, I pointed out how their responses led to what they really should be measuring regardless of how difficult it could be to obtain relevant data.

This experience reminded me that once you focus on what people want to learn from an intervention, it is easier to figure out how to measure outcomes.

My advice is do not skip over the questions of what you want to learn. Sure, those questions can be challenging because of a fear that those items cannot be measured. But doing this deeper thinking up front avoids angst at the end about inadequate data or measures that lack meaning, and it often reveals novel ways of measuring outcomes that may have at first seemed impossible to measure. Yes, the preliminary work will take time, but the benefits are well worth it.

*Portions were originally published in October 2012 issue of the Dayton B2B Magazine.

Using Counterfactual Surveys to Improve the Evidence-Gathering Process

Contextual information plays an important role in interpreting findings. Many of us have experienced this when a child has come home and said they have gotten 43 points on a test. Was it 43 out of 45, 100, or some other point system? Depending on the response, there is either praise or a very serious conversation.

In evaluation and research the same need for context to interpret findings exists. But how you get to that context can vary widely.

One common approach to creating context is to utilize a pre-/post-test design. The purpose of a pre-/post-test is to compare what was occurring before an intervention to what is occurring after that intervention by focusing on particular outcome measures.

One challenge with the pre-/post-test is that respondents’ standards for the basis of a judgment may shift because of the intervention. With additional information, your perception of what “good” is, and of how good you are, can change. This can result in similar pre-intervention and post-intervention responses.

One solution to this problem that our team has successfully incorporated into much of our work is the use of a counterfactual survey, also called a retrospective survey. In these types of surveys, respondents are asked to consider their current attitudes or perceptions and, at the same time, their attitudes or perceptions prior to participating in the intervention. In this way respondents are able to make their own adjustments about how they perceive the intervention.

To understand completely how a counterfactual survey looks in practice, let’s examine one of our first projects in which we incorporated this approach.

In this project, STEM academic administrators were participating in a year-long professional development opportunity to enhance their leadership skills. Prior to participating in any activities, we disseminated a survey asking participants to rate, on a scale of 1 (least like me) to 7 (most like me), their self-perceptions as a leader. Consistent with a traditional pre-/post-test, participants were then asked to complete the survey again at the end of the professional development opportunity, as shown in the figure below.

Figure. Traditional pre-/post-test design.

To incorporate the counterfactual survey, after participants answered items about how they currently perceived themselves as leaders, they were presented with items asking how they would have rated themselves before participating in the professional development opportunity. The counterfactual design therefore looks like this:

Figure. Counterfactual survey design.

It should be noted that a counterfactual design does not require including a pre-test questionnaire; in this situation we just happened to do so.
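For readers who like to see the mechanics, the sketch below uses fabricated 1-to-7 ratings for a single item to show how pre-test, post-test, and retrospective (“then”) means could be compared.

```python
from statistics import mean

# Fabricated ratings on a 1 (least like me) to 7 (most like me) scale; illustrative only.
pre_test = [5, 6, 5, 6, 5]           # traditional pre-test, before the program
post_test = [5, 6, 6, 6, 5]          # post-test, after the program
retrospective_pre = [3, 4, 3, 4, 3]  # "how would you have rated yourself before?"

print(f"pre-test mean:      {mean(pre_test):.1f}")
print(f"post-test mean:     {mean(post_test):.1f}")
# A retrospective mean well below the original pre-test mean suggests that
# respondents' standard of comparison shifted during the intervention.
print(f"retrospective mean: {mean(retrospective_pre):.1f}")
```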

What is interesting is that the pre-/post-test responses on several items were very similar, which is not that uncommon of an occurrence (selected items presented below).

Figure. Pre-/post-test responses for selected items.

However, when you add in the counterfactual responses, an interesting pattern emerges: respondents’ retrospective ratings of their pre-intervention selves were lower than the ratings they had actually given on the pre-test.

Figure. Selected items with counterfactual responses added.

In follow-up interviews with participants, it was apparent that the standard that participants used had indeed shifted. In other words, they didn’t know what they didn’t know and so rated themselves higher on items before the intervention than after learning more about leadership.

A counterfactual survey holds a lot of promise, particularly in conjunction with gathering other data points. It is only appropriate for attitudinal or perception data, not for objective measures of skill or knowledge. But utilizing a counterfactual survey may serve to illuminate changes that would otherwise go undetected.


Looking for an Individual to Join Our Team!

We have recently experienced transitions in our team: Jeremy, who had been with us since 2015, left to work on his doctoral degree at Penn State, and Maggie Jaeger, who started with us as a research assistant, is now working on her doctoral degree at the University of Minnesota. We are sad to have them leave us, but are excited about the opportunities that are ahead for them!

As a consequence, we are seeking to bring another individual onto our team. If you want to work at a firm that discusses the nuances of survey design, optimal non-parametric tests, and best practices in data visualization, and yes, gets excited about Pi Day, then we invite you to review the job description and submit an application!

Job Description – Research and Evaluation Associate – Final – 09.14.18rev

Happy Pi Day!

I love celebrating Pi Day largely because it’s such a gloriously geeky thing to do! What made it even “geekier” was the cool Pi pen holder (using about the first 300 digits of Pi) that The Rucks Group team made through additive manufacturing (3D printing) to celebrate a team member’s birthday. Why?

We have the pleasure of working with Iowa State University on a project funded by the National Science Foundation Division of Engineering Education and Centers to “promote a platform to bring together a network of under-represented minority (URM) women in engineering” towards increasing participation in advanced manufacturing and towards career advancement for URM women faculty in engineering.  As part of that work, I participated in a 2 ½ day workshop in October that covered a variety of topics including several presentations on additive manufacturing.

I shared with the team many of the advances in additive manufacturing and got the team excited about the topic as well. Of course, we needed to experience it firsthand, so we went over to our local 3D printing bar (yes, there is one just around the corner from our office and yes, they serve beer) and made our Pi pen holder.

In much of our evaluation work, we strive to gain a deep understanding of the subject to optimally implement the evaluation. For this project, it brought together all our geeky tendencies!

To learn more about the Advanced Manufacturing Workshop: Preparing the Next Generation of Researchers project, visit https://www.imse.iastate.edu/advanced-manufacturing-workshop/


Active Listening: A Way to Change the World

If I could do one thing to change the world, it would be to teach a critical mass of people how to engage in active listening. Active listening is when an individual is fully attentive to the verbal and nonverbal points that a communicator is sharing. It means not preparing what you are about to say or assuming you know what the person is about to say before it is said. Instead, it involves pausing to take in each message and summarizing that message before responding. If you’re actively listening, the conversation will look and feel different. The response time to what someone is saying will be slower. I usually need to have a notebook to write down my thoughts so I can focus on the speaker rather than rehearsing my thoughts while apparently listening (or worse yet, jumping into the middle of their sentence … ugh!).

Not too long ago, I had an “aha” moment about the transformative nature of active listening. I was invited to facilitate a group of 20 individuals working on standardizing evaluation expectations for their grantees. After allowing each individual to share their expectations for the work session, I began to engage the group in conversation. I listened intently to understand what was said. When there was a natural pause, I would summarize what was said by saying, “What I heard you say was … is that correct?” or “I understand that … is very challenging” or “It sounds like you’re very excited about …” The group naturally was able to come to agreement on several key components. Of course, there were a couple of tense moments in which someone would share a thought that had probably been shared a dozen times, and someone else would quickly respond, as they probably had a dozen times. In those moments, I summarized the concern from each person’s perspective. Often the concerns shared by one person were supported by the “opposing” person. In other words, it sounded and was treated like a disagreement, but they really weren’t disagreeing.

During the entire session, I didn’t really say much. I even went to the client and explained that while it may not have looked like I was doing much, I was actually doing a lot (and she said she knew I was)! Once the work session was over, a number of individuals shared that the work session was one of the most productive meetings they had ever had!

Why was active listening transformative? Because often in arguments and conflicts, the real problem is that the individuals do not feel understood. When we feel understood and supported in our perspective, we can let go of our position and focus on solving problems. Moreover, when we truly understand someone else’s concerns, better solutions are identified. One of the reasons that clients report having a positive experience with our team is that we actively listen to them and can offer better solutions to their struggles and better approaches to telling their story. I suppose doing so is our way of actually taking steps to change the world, one client at a time.

Celebrating 10 Years!


This year is exciting for many reasons: the 250th anniversary of the publication of the Britannica, the 50th anniversary of Mister Rogers, the 23rd Winter Olympics (yay, Curling!), and the 10th anniversary of The Rucks Group! Considering that the firm was started in 2008, just months before the economy went off a cliff, that’s the milestone I am particularly excited about!

While starting a business during a recession comes with the challenges you would imagine, there are also some benefits. One benefit is that it forced us to focus on certain principles. A few years ago, I articulated these principles; they are more and more reflected in our hiring, our reviews, and our processes, and they have evolved into our core values:

* Contribute to a fun and productive environment
* Bring value to the client
* Provide academic excellence at business level speed
* Grow individually and collectively

I believe that the more we focus on and reflect these values, the more successful we are. How are we operationalizing these core values (spoken like an evaluator)? Or phrased differently, how do you see our core values in practice? Here are just a few examples:

1. Twice a year, we set a full day aside to reflect on the question: What are we doing well and where can we improve?
2. Throughout the year we participate in frequent lunch ‘n learns to develop our skills covering client management, data visualization, evaluative best practices, R, reporting, and time management topics.
3. We completed an extensive undertaking to outline our processes to ensure consistency of quality and timely completion of deliverables as we continue to grow.
4. We don’t let you forget about us. We reach out to you and have processes in place to serve as reminders for deliverables if we haven’t spoken to you at least once a month.
5. We continue to strive to reduce the stress and anxiety around gathering evidence of outcomes for our clients.

And we continue to identify ways that we can get better at what we do. Because at the end of the day, our job is to make our clients’ lives a little bit easier.

Thank you to our clients who have allowed us to partner with them on a vast array of projects! We look forward to another 10 years!


Seeking a Research and Evaluation Associate to Join Our Team!

Position

The Rucks Group is seeking a Research and Evaluation Associate to join our team who will reflect our mission, vision, and core values. Through a collaborative team approach, this individual will be a key contributor to the full range of research and evaluative activities on small and/or large-scale evaluation projects.

Requirements

A Master’s degree in psychology, statistics, or other related field plus two years of program evaluation, social science research, and/or data preparation and reporting experience are required.

About

The Rucks Group is a small research and evaluation firm located in southwest Ohio (Dayton). Formed in 2008, our mission is to provide services that maximize the return of resources invested in initiatives for grant recipients and funding sources with the vision to be globally recognized as an organization whose research and thoughtful analysis influences improved decision-making. Our projects revolve around STEM, workforce development, K16 Education, public health, and foundation funding. Our offices are located at The Entrepreneurs Center, a business incubator located in downtown Dayton.

Applying

To apply please forward to lrucks@therucksgroup.com the following items: cover letter, resume/vita, and a list of references. Applicant screening will begin December 15, 2016 and will continue until the position is filled. Work will be conducted at The Rucks Group’s office in Dayton (or Columbus within the next 6 – 12 months).

Click here to learn more.

Using Logic Models to Navigate Projects

Starting a new project is a lot like going on a road trip to a new place. We know what our destination is; it’s just actually getting there that’s a bit fuzzy. That’s why we use GPS (or maps, if you prefer the old-fashioned route!). I like to think of logic models as the GPS of a project because this tool can chart a way to achieve the outcomes of a project.

A logic model is a pictorial representation that conveys the relationship between inputs, activities, and anticipated outcomes. The logic model is a living document that changes throughout the life of the project as a deeper understanding of the connection between activities and outcomes emerges. As the figure below demonstrates, these components usually flow from left to right, with some sort of visual connectors, such as arrows, to direct readers through the model.

Figure. Logic model template.
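As a loose illustration, not a prescribed format, a team could capture the model’s left-to-right components in a simple structure while drafting; the example entries below are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogicModel:
    """Left-to-right components of a basic logic model."""
    inputs: List[str] = field(default_factory=list)
    activities: List[str] = field(default_factory=list)
    outcomes: List[str] = field(default_factory=list)

# Hypothetical entries for a tutoring-focused grant.
model = LogicModel(
    inputs=["grant funding", "tutoring staff", "campus space"],
    activities=["weekly tutoring sessions", "faculty referrals"],
    outcomes=["higher course pass rates", "improved retention"],
)
print(model)
```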

A strength of the logic model lies not solely in the document itself but also in the internal alignment that occurs through the development of one. The process of developing a logic model pushes everyone toward a more systematic, theory-driven understanding of how actions are connected to desired outcomes. Creating the logic model provides an opportunity for team members to make assumptions explicit and gain consensus around project assumptions in a way that generally doesn’t naturally occur within the grant management process. It is not atypical during work sessions to develop a logic model to hear phrases such as “I was envisioning this differently” or “I assumed that we meant…”

It is important to note that a logic model is only as good as the information provided: If the individuals involved in the development of the tool don’t feel comfortable sharing their thoughts and comments, then some of these benefits will not be realized. So creating a safe space for stakeholders to share is an important part of developing a logic model.

After a logic model has been articulated, it is a great resource throughout the project’s life cycle. Project teams can rely on their logic model as a way to keep the project on track, by using it as a periodic checkpoint. By continually referring back to the logic model, team members will be able to identify leading and lagging indicators. Additionally, the logic model has utility as a decision-making tool. Logic models can help predict otherwise unintended consequences, allowing for teams to make informed decisions.

On a practical note, there is only so much information that can and should be represented in the logic model. In other words, there needs to be a balance between simplicity and completeness in order to optimize the utility of a logic model.

Planning and executing a project is not an easy task. So if you’re struggling to find your way with a project, developing a logic model may help. Remember, you wouldn’t attempt a road trip without directions to your destination – so simplify your project by adding some project navigation.

Research and Evaluation: Presentation from AEA

If you’ve been working with grants for a while, you have probably noticed that the federal grant funding process is changing, particularly as it relates to evaluation. In November of last year, I had the pleasure of working with Dr. Kelly Ball-Stahl and Jeff Grebinoski of Northeast Wisconsin Technical College (NWTC) on a presentation at the annual American Evaluation Association meeting regarding our shared experience of navigating these changing requirements.

Over the past few years, we’ve noticed remarkable changes in evaluation expectations. For instance, a few years ago examining outputs was enough, but now there is a much greater emphasis on outcomes. Similarly, there was a time when there was a large conceptual distinction between evaluation and research; those clear dividing lines are starting to blur through an increasing emphasis on evaluation rigor within the federal funding space, particularly from the Department of Education, the Department of Labor, and the National Science Foundation. It should be noted that this shift is not occurring simply to make evaluation and grant management more challenging. The underlying motivation is to ensure that the right interventions are being implemented to help the greatest number of individuals.

So, what are some consequences of these changes? Grant writers are increasingly involving evaluators in the evaluation planning for these types of grants because evaluators have the expertise to design experimental and rigorous quasi-experimental evaluations. For instance, at The Rucks Group we routinely work with grant writers to write evaluation plans for federal funding sources.


Another important consequence is that institutions have to strengthen their data collection systems. Data collection systems are all the entities within an organization that are involved in gathering facts, numbers, or statistics and effectively communicating this information to the right individuals. Building a fully functional data collection system is challenging: the “system” needs a shared data language and an institutional research department with the resources to respond appropriately to data demands.
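As one way to picture what a shared data language can look like in practice, here is a small, hypothetical Python sketch of a data dictionary that every office feeding the data collection system agrees to use, along with a simple conformance check. The field names, types, and example record are assumptions for illustration only.

# Hypothetical sketch of a shared data dictionary: a single agreed-upon
# definition of each field so that terms like "enrolled" and "completed"
# mean the same thing across every office reporting into the system.
# Field names and the example record are illustrative assumptions.

DATA_DICTIONARY = {
    "student_id": {"type": str,  "description": "Institution-assigned identifier"},
    "term":       {"type": str,  "description": "Academic term, e.g. '2016SP'"},
    "enrolled":   {"type": bool, "description": "Enrolled as of the census date"},
    "completed":  {"type": bool, "description": "Completed the course or program"},
}

def validate_record(record):
    """Return a list of problems; an empty list means the record conforms."""
    problems = []
    for field, spec in DATA_DICTIONARY.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif not isinstance(record[field], spec["type"]):
            problems.append(f"{field} should be of type {spec['type'].__name__}")
    return problems

print(validate_record({"student_id": "A123", "term": "2016SP", "enrolled": True}))
# Prints: ['missing field: completed']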

Although the changing requirements of federally funded grants do pose challenges for interested organizations, these changes also provide extraordinary opportunities. By increasing the rigor with which projects are evaluated, we are also creating a deeper understanding of what works. Through these efforts, our collective work will improve the lives of as many individuals as possible.

New Team, New Year

Although we say this every year, it is truly hard to believe that another year has passed. And while I’m still getting used to writing “2016,” I am particularly excited about this year. We will continue to work with lots of great clients and exciting projects, some in new domains.

To continue to realize our core values concerning the quality of our services and products, we have expanded our team to bring new talent and expertise to the firm. In October of last year, we welcomed two new employees: Jeremy Schwob and AnnaBeth Fish. Jeremy joins our team as a Research and Evaluation Associate. He is finishing his master’s degree in Clinical Psychology at the University of Dayton (UD), which he expects to earn in Spring 2016, and he completed his undergraduate degree at UD with a BA in Psychology. Jeremy is analytical and really enjoys working with data.

AnnaBeth Fish is our Administrative Assistant. AnnaBeth has an MA in Organizational Communication from Ball State University, and a BA in Communication Studies from Hollins University. She helps the firm in many ways, including calendar management, maintaining our website, general office administration, and completing the notorious “other tasks not specified.”

In addition to welcoming new individuals, we’ve also expanded the roles of existing team members. Joe Williams joined The Rucks Group as an intern in May 2015. After completing his internship, he stayed on with us, working a few hours per week during fall 2015. He is graduating a semester early with a BA in Psychology and will work nearly full-time in 2016 until matriculating to graduate school in the fall.

Carla Clasen, who serves as a subcontractor, continues to work closely with the firm. In 2016, her work will begin to focus more on our projects within the public health area. As many of you may recall, Carla worked as an evaluator within public health for nearly 20 years.

If you’d like more information about any of our team members or the firm itself, be sure to check out the About page on our website.

As we move into this year, we will continue to develop and attract the best talent to be able to optimally provide research and evaluation expertise to our clients and projects.

From All of Us Here at The Rucks Group

Wishing You the Best of the Holiday Season


 

I often tell my children that life won’t stop giving you opportunities to get upset, and I encourage them to reflect on their blessings and everything for which they are grateful. To me, that is at the heart of the holiday season: being grateful for what we have, something made even more pronounced given the backdrop of current national events.

One of the things I am deeply grateful for is waking up each day to work with the people I do, and knowing that the team is providing our clients amazing value through our services. Thank you for allowing us to use our gifts and talents in this way!

We wish you all the very best of the Holiday Season now, and throughout the New Year!

— Lana

 


Need to disseminate NSF ATE work? Consider HI-TEC.


Somehow, it is almost the end of 2015 – time just seems to fly by! As 2015 winds down, I’m beginning to look at the calendar for 2016. Fast approaching is the submission deadline for what has become one of my favorite conferences of the year: the High Impact Technology Exchange Conference (HI-TEC).

HI-TEC is a “national conference on advanced technological education where secondary and postsecondary educators, counselors, industry professionals, trade organizations, and technicians can update their knowledge and skills.”

While there are lots of options for conferences, anyone who operates within the advanced technological education sphere should seriously consider attending this one. Because HI-TEC is a relatively small conference, you are able to connect with other individuals who share your educational interests. Additionally, the conference offers many opportunities for learning and for sharing best practices. In 2016, the conference is in Pittsburgh.

Individuals interested in applying for an NSF ATE grant or who already have one would particularly benefit from attending this conference. And if you do already have an NSF ATE grant, you should consider submitting a proposal to the conference because this is a great venue for disseminating your work to a larger community.

There are three different formats for sharing your research: pre-conference workshops, traditional conference presentations, and poster sessions. The deadlines are Friday, January 15, 2016 for pre-conference workshops and Monday, February 1, 2016 for the other two formats.

To learn more, please visit https://www.highimpact-tec.org/. I look forward to seeing you in Pittsburgh this summer (July 25 – 28, 2016)!

 

The Rucks Group is seeking a Research and Evaluation Associate

Position

The Rucks Group is seeking a Research and Evaluation Associate to join our team who will reflect our mission, vision, and core values. Through a collaborative team approach, this individual will be a key contributor to the full range of research and evaluation activities. These responsibilities include contributing to the development of proposals, conducting literature searches, designing research and evaluation projects, developing and disseminating questionnaires, conducting focus groups and interviews (in person and by telephone), performing quantitative and qualitative data analysis, writing reports, making presentations to clients, and other related research and evaluation activities. The individual in this position will also be expected to be active within the research and evaluation community by presenting at conferences and authoring or co-authoring scholarly publications. The Research and Evaluation Associate will also seek ways to improve the services that the firm provides by staying current with research and evaluation trends and best practices and by identifying opportunities for process improvement.

 

Candidate

A Bachelor’s degree in psychology, evaluation, or another related social science discipline is required. A Master’s degree and 3 – 5 years of program evaluation experience are preferred. The successful candidate should possess general knowledge of research and evaluation principles and have experience conducting literature searches, creating logic models, designing research and evaluation projects, analyzing quantitative and qualitative data using data analysis software (e.g., SPSS, SAS, STATA, and/or R), and writing and presenting findings. The successful candidate should also be proficient with the Microsoft Office Suite, able to travel periodically, attentive to detail, equipped with strong interpersonal skills to interact effectively with team members and clients, and able to work in a small business environment.

 

Applying

To apply, please forward the following items to info@therucksgroup.com: a cover letter, resume/vita, a writing sample (e.g., article, report), and either a list of graduate courses with corresponding grades or an official transcript. Applicant screening will begin July 17, 2015 and will continue until the position is filled.

 

About

The Rucks Group is a small research and evaluation firm located in southwest Ohio (Dayton). Formed in 2008, our mission is to provide services that maximize the return of resources invested in initiatives for grant recipients and funding sources, with the vision of being globally recognized as an organization whose research and thoughtful analysis influences improved decision-making. Our projects revolve around STEM, workforce development, K-16 education, public health, and foundation funding. Our offices are located at The Entrepreneurs Center, a business incubator in downtown Dayton.

Click here for a printable version.

A Rose Isn’t as Sweet by Any Other Name: Lessons on Subject Lines for Web Surveys

Survey developers typically spend a great deal of time on the content of questionnaires. We struggle with which items to include, how to word each question, whether an item should be closed-ended or open-ended; the list of considerations goes on. After all that effort, we generally spend far less time on a smaller aspect that is incredibly important to web surveys: the subject line.

[Chart: responses to the EvaluATE services survey across disseminations]

I have come to appreciate the extent to which the subject line acts as a “frame” for a survey. In simplistic terms, a frame is how a concept is categorized; framing is the difference between calling an unwanted situation a challenge versus a problem. A significant body of literature suggests that the nature of a frame will produce particular types of behaviors. For instance, my firm recently disseminated a questionnaire to gain feedback on the services that EvaluATE provides. As shown in the chart, we initially received about 100 responses. With that questionnaire invitation, we used the subject line EvaluATE Services Survey. Based on past experience, we would have expected the next dissemination to garner about 50 responses, but we got closer to 90. So what happened? We had started playing with the subject line.

EvaluATE’s Director, Lori Wingate, sent out a reminder email with the subject line, What do you think of EvaluATE? When we sent out the actual questionnaire, we used the subject line, Tell us what you think. For the next two iterations of dissemination, we had slightly higher than expected response rates. For the third dissemination, Lori conducted an experiment. She sent out reminder notices but manipulated the subject lines. There were seven different subject lines in total, each sent to about 100 different individuals. The actual questionnaire disseminated had a constant subject line of Would you share your thoughts today? As you see below, the greatest response rate occurred when the subject line of the reminder was How is EvaluATE doing?, while the lowest response rate was when Just a few days was used.

[Chart: response rates by reminder email subject line]

These results aren’t completely surprising. In the 2012 presidential election, the Obama campaign devoted much effort to identifying the subject lines that produced the highest response rates and found that a “gap in information” was most effective (thanks to our intern Alejandro for doing the background research). Given that explanation, one might ask why the subject line Just a few days, which also presents a gap in information, garnered the lowest response rate. The reason is unclear. One possibility is that the incongruity between the urgency implied by the subject line and the actual importance of the email’s topic made respondents feel tricked, so they opted not to complete the survey.
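For readers who want to run a similar subject-line experiment, here is a brief, hypothetical Python sketch of how the resulting response rates could be compared using a chi-square test of independence. The counts below are invented for illustration (roughly 100 recipients per subject line, as in the experiment described above); they are not the actual EvaluATE results.

# Hypothetical sketch: comparing response rates across reminder subject lines
# with a chi-square test of independence. The counts are made up for
# illustration and are NOT the actual EvaluATE experiment data.

from scipy.stats import chi2_contingency

responses = {  # subject line: (responded, did not respond)
    "How is EvaluATE doing?": (38, 62),
    "Tell us what you think": (29, 71),
    "Just a few days":        (17, 83),
}

table = [list(counts) for counts in responses.values()]
chi2, p_value, dof, expected = chi2_contingency(table)

for subject, (yes, no) in responses.items():
    print(f"{subject!r}: {yes / (yes + no):.0%} response rate")
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")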

Taken together, these findings tell us that a “rose by any other name would not smell as sweet”: what something is called does make a difference. So when you are designing your next web survey, make sure crafting the subject line is part of the design process.

Originally posted on February 25, 2015 to https://www.evalu-ate.org/blog/rucks-feb2015/

Ohio Program Evaluators’ Group: 35 Years and Going Strong


The Ohio Program Evaluators’ Group (OPEG) is the Ohio affiliate of the American Evaluation Association. This year our organization is celebrating its 35th anniversary. At the spring conference, we will mark this accomplishment through the theme 35 Years of OPEG: Making a Difference, Past, Present, and Future.

OPEG is a community that The Rucks Group actively supports. Both Carla and I have served on the organization’s Board. At this event, we’ll present some findings on the use of retrospective surveys (I’ll talk about some of this in an upcoming blog). Our continued involvement within this community is due to the convenient information sharing and camaraderie it offers, both of which allow us to become better at what we do.

If you’re interested in being part of the community or even presenting at the next conference, visit OPEG at https://www.opeg.org.

Also, make sure to plan on attending the spring conference on Friday, May 8 at Otterbein College in Columbus (Ohio).

I look forward to seeing you there!

New Innovative Program from the Department of Ed: Performance Partnership Pilots (P3) Program

There are over 5 million youth between the ages of 14 and 24 who are not in school or working, who sometimes face homelessness or involvement in foster care or the judicial system, and who lack connections with family or social networks. If you are part of a state, local, or tribal agency that works with this population, a new funding initiative from the Department of Education will fund 10 pilot projects to test innovative, cost-effective, and outcome-focused strategies for improving results for disconnected youth.

The P3 initiative will allow the blending of funding streams and waivers of certain requirements to overcome hurdles in providing innovative programs for disconnected youth, defined as youth 14-24 years of age who are low-income, homeless, involved in the juvenile justice system and/or foster care, not enrolled in school, or in danger of dropping out. Agencies will be allowed to pool discretionary funds they already receive through multiple federal agencies. In addition, start-up grant funding of up to $700,000 will be available for each project.

The pilots will be evaluated to determine the most effective strategies for serving this population. Projects must participate in a rigorous national evaluation of the initiative, which will include all sites and examine multiple components. Projects are not required to provide a site-specific evaluation, but they may receive competitive preference points if they do – up to 5 additional points for proposing a quasi-experimental evaluation design, and up to 10 additional points for a randomized controlled trial design.

Proposals are due by March 4, 2015. For more information, and to obtain an application package, please go to https://findyouthinfo.gov/youth-topics/reconnecting-youth/performance-partnership-pilots.