Savvy Surveys: Strategies for Driving Insights

Overview

Surveys play a critical role in capturing patterns that facilitate innovation. Their value is evident in the sheer number of organizations and entities that distribute them. While disseminating a survey is relatively easy because of the availability of various tools, disseminating a “savvy” survey, one that provides insights for continuous improvement and also offers evidence of impact, is much more challenging. To increase the likelihood of obtaining meaningful and actionable data, surveys should address articulated questions, reach the right individuals, and be easy for respondents to complete.

To provide insights into accomplishing these goals, we are pleased to have Rachael Bower, Director of the Internet Scout Research Group and Principal Investigator of ATE Central, a National Science Foundation Advanced Technological Education project, as a guest presenter in our next coffee break webinar. In Savvy Surveys: Strategies for Driving Insights, we will discuss our lessons learned in designing and disseminating the ATE Central annual survey to nearly 8,000 individuals over the last seven years. Specifically, the webinar seeks to share:

  • Techniques for developing strong questions and implementing a logical organization of these questions;
  • Insights on getting the right people to respond; and
  • Practical considerations of disseminating surveys.

Transcript

Part 1

Lana
Welcome, everyone. So excited to have you on the next installment of our Coffee Break Webinar series, Savvy Surveys: Strategies for Driving Insights. Hopefully, on this cold January day, you have a hot cup of coffee or a hot cup of tea with you. And also, hopefully, you’ll learn a lot of great information in our short time together. I’m not sure if you’re like me, in that you get a lot of different invitations for surveys. I think the reason we receive so many of these invitations is that the dissemination platforms are plentiful. But the challenge is not in disseminating a survey. The challenge is in disseminating a savvy survey. So, what do we mean when we say a savvy survey? Well, it really covers three different categories. First, and this one may be slightly obvious, is question design: making sure that the questions are designed in a way that really answers the research questions that you have. The second category is thinking about how the survey will be analyzed, interpreted, and used. And the final consideration for developing a savvy survey is thinking about dissemination efforts. Now, admittedly, these are presented as three very distinct concepts, but there is some overlap between them. Still, I think just having these as categories is quite useful, and these categories really fit the purpose of our topic. We’ll talk about some things that we’ve done to address issues related to question design, as well as what we’ve done to design a survey with analysis, interpretation, and dissemination efforts in mind. And then, as always, we’re here to answer your questions. So, I am absolutely thrilled to have with us today Rachael Bower, who is Director of the Internet Scout Research Group and PI of Advanced Technological Education (ATE) Central, to talk to us about this topic. I’m going to turn things over to you, Rachael, so you can introduce yourself and tell us a little bit more about ATE Central.

Rachael

Thanks so much, Lana. I’m so happy to be here! Hi everybody. I can’t see your smiling faces, but hopefully, as Lana said, you’ve got a nice warm drink and you’re ready to think a little bit about surveys with us. ATE Central acts as the information hub for the ATE community. ATE stands for Advanced Technological Education, and it is a funding stream at the National Science Foundation (NSF). There are nearly 400 projects and centers currently funded across the U.S., mostly at two-year institutions such as community colleges, and because it’s NSF, of course, it’s STEM, particularly STEM program improvement and development. And we’ll talk a little bit more about this, but with ATE Central acting as a central hub for this community and surveying them often, we’ve had lots of experiences, and The Rucks Group, candidly, is in fact our external evaluator. So we work very closely with the team, and as always, it’s lovely to be here with the folks from The Rucks Group today talking about this.

Lana
Thanks so much, Rachael, we really appreciate that. If you don’t know, I am Lana Rucks, principal consultant of The Rucks Group, and we’re a research and program evaluation firm that gathers, analyzes, and interprets data to enable our clients to measure the impact of their work. We were formed in 2008 and over the past several years have worked primarily with higher education institutions in measuring federal grant outcomes and services. With that, let me turn things over to Alyce Hopes, our outreach coordinator, so she can share how she will be supporting us today.

Alyce
Good afternoon, everyone! I am Alyce Hopes, the outreach coordinator for The Rucks Group. My role for today’s webinar is going to be to facilitate the Q&A portions that are sprinkled throughout the webinar. These short breaks are going to be a great opportunity to ask any questions you may have coming into the webinar, or anything that may emerge for you throughout the webinar. To ask those questions, you’ll want to use the questions function, which appears on the right side of your screen. You can find this by looking for the question mark symbol. I should mention that there is a little bit of a delay between the time that you send your questions and the time that we actually receive them on our end. So, if we’re not able to answer your question during a particular break, please do not feel discouraged from asking more questions. Anything that we don’t answer today, we will be sure to follow up on after the webinar. There may also be moments where we want to share additional insights or tools with you all, and those communications will happen through the chat function, which will appear just below the questions, but only when there is a new message for you to view. So be sure to keep both of those items on your radar, and with that, I’ll go ahead and give it back to you, Lana.

Lana
Thanks so much, Alyce. Why don’t we go ahead and get started? We’ll start by talking about question design. As Rachael was alluding to, for over 10 years, we’ve disseminated dozens of surveys to thousands of individuals. Over that time, we’ve had to really wrestle with a lot of different issues. And I’m wondering, Rachael, if you can give a little bit of information about, first, the type of information that you’ve been interested in gathering, and then also some of the challenges that we’ve experienced.

Rachael
Absolutely, happy to do that. So, as I mentioned, ATE Central acts as this information hub for a large group of NSF (National Science Foundation) grantees. And we have lots of roles within the community. We’ve developed tools and services to support the work they’re doing, whether it’s professional development or curriculum design. We also focus a lot on outreach and resource collection. Lots of these folks create amazing pieces of material or resources with their grant funding, and we collect that and have it in one searchable database. Over the years, one of the things that happened was the creation of an archiving service to make sure that these wonderful resources created with federal dollars stick around, even after that particular project or center isn’t funded anymore. So, one of the things that we got very interested in doing was trying to determine how folks felt about the fact that we were archiving their materials and how it contributed to the long-term sustainability of their project or center, which is one of the things NSF asks them about. So that was one big area related to our tools and services that we were interested in learning more about. We attempted to gather data related to the users’ understanding and experience of all of that, but the responses we were getting didn’t seem to match up with what we’d heard from them in conversation and what we intuitively knew. So, we had this kind of disconnect in the way we were getting the material, and Lana is going to share a little bit more about that.

Lana
Yeah, absolutely. To build on what Rachael was sharing, when originally adding in the question about archiving and sustainability, we included this item, which was worded “ATE Central’s archiving service plays a significant role in my project/center’s sustainability strategy,” and based on the responses, about 45% of individuals agreed with that. But given this context, and given the fact that projects were actually required to archive, we were a little bit confused about what “neither agree nor disagree” meant. And like Rachael was sharing, we knew, just from qualitative data that we gathered through interviews and focus groups, as well as some open-ended comments infused throughout the survey, that there may have been some confusion about how respondents should answer if they weren’t using that resource quite yet. So, we rephrased that question, and in doing so there were a couple of modifications we made. The first was to focus on ATE Central’s role of supporting the project or center in archiving. The other piece, in terms of how we phrased the question, is that we disentangled archiving and sustainability, so the item asks about archiving, which is what they are familiar with, but for the purpose of sustainability. We also changed the response options. There were different degrees of “agree” and “do not agree,” but we also added two different levels of not applicable, where one level meant it “doesn’t apply to my work at all” and another level indicated “it applies, I just haven’t used it.” In doing this reframe of the question, if you look at the responses, the percentage of people who responded in agreement was fairly consistent. The motivation really wasn’t about trying to change the number of people who were agreeing or not agreeing; rather, we now had a better understanding of the individuals who did not agree. Rephrasing that question item illuminated certain pieces of what was being gleaned. And that’s one of the take-home messages in regard to question item design.
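To make the two-level “not applicable” idea concrete, here is a minimal sketch, with hypothetical option labels rather than the actual ATE Central wording, of how responses on such a scale might be tabulated so that true non-users stay separate from respondents who simply haven’t used the service yet:

```python
# A minimal sketch with hypothetical option labels (not the actual
# ATE Central survey wording), showing how a two-level "not applicable"
# option keeps "irrelevant to me" distinct from "relevant, but not yet used".
from collections import Counter

SCALE = [
    "Strongly agree",
    "Agree",
    "Disagree",
    "Strongly disagree",
    "N/A - does not apply to my work",
    "N/A - applies, but I have not used it yet",
]

def tabulate(responses):
    """Count responses per option, keeping the two N/A levels distinct."""
    counts = Counter(responses)
    return {option: counts.get(option, 0) for option in SCALE}

# Example: the second N/A level surfaces newly funded projects that
# haven't started archiving, instead of lumping them in with non-users.
sample = ["Agree", "Strongly agree", "N/A - applies, but I have not used it yet", "Agree"]
print(tabulate(sample))
```

The design choice here is that the second N/A level preserves information a single N/A bucket would collapse.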

Rachael
Right, and I think one thing that’s really interesting here is that when people are new to the community, when they’ve just gotten funded, they often take the survey that they’ve been asked to take, but they haven’t started thinking about archiving; they’ve literally been funded maybe a month and a half, or maybe less than a year. So, that option of it applying to their work but not being used yet makes so much sense now. Now you realize that obviously they wouldn’t be thinking about archiving yet because they’re completely new! And I suspect if we looked at that data, that’s what would show up there.

Lana
And what’s interesting about the point that you just raised is, again, this is over multiple years of dissemination. One of the pieces that we’ve learned is to ask what year you are in your project and what year you are in the community, to be able to parse out that piece of information. What this shows is that there’s a lot that you can figure out before you disseminate the survey, but sometimes it’s that continuous improvement, that iterative process. And sometimes it takes some lengthy conversations. We had a lot of conversations about the data when it came in to figure out how to interpret it, and we had to get better at figuring out how to ask the question that we really want the answer to. Well, I will pause there and see if there are any questions.

It looks like you noticed a problem in the question over time; are there additional ways that you can catch this sooner?

Lana
Definitely. We’ve done usability testing and gathered feedback before we’ve disseminated surveys, and that has helped us catch a lot in terms of how to phrase things. But even with that, sometimes once a survey is actually disseminated and you’re getting feedback, you realize, oh, we need to tweak this. So, I think usability testing is something that you can do beforehand, and I think just having that consciousness of really analyzing and having conversations about the data on the back end is helpful too. Rachael, do you have other thoughts on that?

Rachael
Yeah, I think this was a really unusual one for us, and it was really funny: we kept struggling with it, and when we finally figured out what was going on, it was because we just kept getting data back that was confusing. And as Lana said, it’s not that we wanted to change the results in some way. We just knew it didn’t make sense to us. So, I think sometimes you can catch things, and sometimes it is just through repetition that you realize there’s something a little bit off.

Lana
And I think that’s a really good point, Rachael, that this one was a little unusual. Because there’s a lot that we’re able to catch from pretesting and pilot testing, but sometimes we don’t catch everything.

Part 2

Lana
Well, I’m going to keep going, but make sure to continue to use the question function in case you have additional questions. The next topic that we wanted to touch on is the set of issues around analysis and interpretation. We’ve talked already about question design considerations, but can you talk a little bit about how design considerations impact analysis and interpretation? I think that’s often something people think about after they’re receiving the data, as opposed to when they’re designing the survey. Can you give some insights and some perspective on that and how that’s impacted the ATE Central survey?

Rachael
So this gets into, and Lana’s going to unpack all of this as we go along, something we were just talking about: consistency versus adapting, right? It’s that issue of having a survey that’s running over time; you want some of the questions to stay the same, and yet you need to adapt, for example, with COVID. We wanted to ask questions about how people were faring during COVID; we hope we will not be asking those questions anymore sometime in the future! So, there are new pieces of the program or new things you need to ask about. Then, how do you keep the data consistent, creating a context for understanding the things you’re finding, like satisfaction levels? When we’re getting 70%, is that good, or is it just so-so? What does it mean? And then, I think context in a different way, which is sort of trying to figure out how you can utilize something like a survey to not only get data back but maybe to also use it for other things. We were able to use it to help people learn more about the services that we offer. Lana’s going to go into more depth, but that’s the broad brush of the three areas that we wanted to talk about.

Lana
Very great overview. So, I’ll just dig deeper into what Rachael was sharing. The first piece is this question of consistency and adaptation. This comes up a lot, particularly if you’re talking about a program or initiative that’s been around for a while, so our approach is that there are certain items that serve as anchors, and here is an example of some of those. After repeated survey disseminations, we don’t tweak these too much, although they may change in terms of some of the services and products that are actually being offered. But in terms of the way that they’re asked, we’re extremely reluctant to make those types of modifications. However, there are sometimes topics that we want to add in, such as things around needs, and so we’ll make sure to add those questions in. And then, as Rachael alluded to, we needed to ask a lot of questions about how the COVID pandemic was impacting individuals. So, we made sure to add those questions in, and that means sometimes you have to make decisions about length: do you just lengthen the survey, or do you take something else out? Again, those anchor items we’re going to leave in, and then we have to make decisions on those other items.
Then, in regard to satisfaction, this was a real challenge, and a good challenge, for ATE Central. They very often had very high satisfaction rates, and it was challenging to figure out how to interpret this and understand what constitutes “good.” We did some digging and were able to borrow a concept called the Net Promoter Score, which is common within the marketing space, to track satisfaction over time. At a very simple level, what the Net Promoter Score does is quantify satisfaction by characterizing individuals as either promoters or detractors. Because a lot of different entities use the Net Promoter Score, there was a context we could refer to in interpreting what satisfaction meant. Using that, we knew that for the type of entity that ATE Central aligned with, excellent was really around 55 or higher, and exceptional was 70 or higher. When you look at it from that standpoint, it really helped to put these scores into context and to bound how high “high” should actually be.
And then the other point that Rachael was mentioning, and we already kind of brought this up, was this idea of having links to help inform individuals, as they were going through the survey, about different products or services. If they weren’t familiar with something, they could click on the link and get more information. What became really useful about that was that we added in question items about individuals’ awareness of the services highlighted, but we also asked questions about awareness of the service or resource beforehand. What was really useful about that information is, again, that we could start to disentangle people who are just non-users from forgetters, and understand how often forgetting actually occurred. So that really helped in terms of thinking about how we designed a question to support both responding and interpretation. Rachael, do you have any additional thoughts or comments on that?
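As a quick worked example of the Net Promoter Score thresholds Lana mentions above (the response shares here are made up purely for illustration):

```python
# NPS = (% promoters) - (% detractors), reported on a -100 to 100 scale.
# Hypothetical response shares, chosen only to illustrate the thresholds:
promoters, detractors = 65, 10   # percent of respondents in each group
nps = promoters - detractors     # 65 - 10 = 55
print(nps)                       # 55 -> at the "excellent" (55+) threshold;
                                 # 70 or higher would read as "exceptional"
```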

Rachael

Well, one of the things that is funny is that, as we’ve added this in, in the open-ended comments we started getting more people saying things like, “I didn’t realize” or “I forgot about some of these services, I need to go to ATE Central more often.” And I think this, in particular, helped with people remembering some of ATE Central’s services. So, it’s just been a terrific way to, you know, see how many people are using it and get our impact data, but also to utilize the survey to remind people that we have a bunch of services they may not have taken advantage of.

Lana
Right, absolutely. Let me pause there and see if there are any other questions.

How do your evaluation findings get impacted when you change your survey questions all the time?

Lana
Well, one, we’re not actually changing it as often as it may sound. It’s really kind of context-dependent, and I think our overall inclination is to not change items, but again, if you think about surveys over multiple disseminations or over multiple years, there are going to be times when you need to make some changes. Having the same types of questions and including those anchor questions can really help in terms of being consistent over time. It also helps us to measure and look at changes over time and to be sensitive to when certain products increase or decrease in their usage or awareness or something of that nature. And it helps with being able to add additional question items. Like I mentioned before, we added the question about what year your project is in to help illuminate what we’ve been seeing. Rachael, did you have something to add to that as well?

Rachael
I totally agree with you. I think because we were talking about changes, it may seem like we change a lot, but we really don’t.

Lana
Yeah, we stay pretty consistent. Was there another question?

How were exceptional and excellent scores determined?

Lana
There’s an actual mathematical equation that’s used with the Net Promoter Score. Technically, with a Net Promoter Score, you would use a 10-point scale. The individuals who respond at the top two levels (9 and 10) serve as the promoters. You then subtract the people who are detractors (individuals who respond 6 or below). For our question items we use a 5-point scale, so we converted that to a 10-point scale. That’s the technical way to do the calculation, and we can pull some resources together and make sure they’re available for individuals to learn a little bit more about the Net Promoter Score. But what’s really useful is that there’s literature around this concept to support the interpretation of your Net Promoter Score within your context.
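In code, the calculation Lana describes might look like the following minimal sketch. Note that the transcript doesn’t specify the exact 5-point-to-10-point mapping used, so the linear rescaling here is an assumption:

```python
# A minimal sketch of the Net Promoter Score calculation described above.
# The 5-point -> 10-point conversion is one simple linear rescaling,
# assumed here because the exact mapping isn't specified in the transcript.

def to_ten_point(score_5pt: int) -> float:
    """Linearly rescale a 1-5 response onto a 1-10 scale (an assumption)."""
    return 1 + (score_5pt - 1) * 9 / 4

def net_promoter_score(responses_5pt: list[int]) -> float:
    """NPS = % promoters (9-10) minus % detractors (6 or below)."""
    scores = [to_ten_point(r) for r in responses_5pt]
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical responses: six 5s, one 4, one 3.
print(net_promoter_score([5, 5, 5, 4, 5, 3, 5, 5]))  # 62.5 -> above the
                                                      # "excellent" (55) threshold
```

On this mapping, only top-box (5) responses land in the promoter range, which matches the deliberately conservative counting of the standard 10-point NPS.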

Part 3

Lana
Going on to the next portion here, which is in regard to dissemination efforts. You spend a lot of time getting a survey together, you send it out, and you would like people to respond to it. Rachael, can you talk a little bit, from your perspective, about the importance of the response rate and some issues or thoughts about dissemination? And then, some of those challenges, particularly around making sure the response rate is high?

Rachael
Absolutely. For one, I think survey fatigue is just a general problem. I think it actually might be worse in the ATE community because there are so many folks within ATE who are all surveying each other, and we work together a lot in that way. Some of the projects are actual research projects, so they’re doing research on ATE specifically, and then many of the other projects are trying to get feedback from the community as well as from their larger audience. So, there is a massive number of surveys that go out, and trying to get people to feel like they should respond to yours is really important. By their seventh survey that year, they’re like, “oh my goodness,” but you obviously want to get as many people as you can, or you at least want to achieve “high enough.” Lana, you’ve told me before that there’s some percentage that you want to get to make it feel like you’ve gotten enough answers and that the data is meaningful.

Lana
Yeah, and I just want to add really quickly to that thought, and I can go into some of the other items too, but I think response rate also depends on who’s actually on your contact list, who you’re sending it out to. More engaged individuals will be more likely to respond than people who aren’t. And just because you send it out to a large number of individuals and calculate your response rate, some of those people may not have responded simply because they may have touched your organization for only a brief moment. With that said, what are some of the things we’ve done to try to increase the response rate? Well, first, we make sure that an introductory email comes from a credible source. Usually, Rachael will send an email beforehand, letting individuals know that a survey is coming. And to that point, because of the survey platform we use, we can change who the sender of the survey is, so that could be an option depending on what platform you’re using. Another thing we make sure to do is include items in the survey that are meaningful to respondents. Toward that end, we allow people to preview the survey as well. I know sometimes there are some items that you can’t allow people to preview because of how the branching is set up, but to the extent that you can, allowing people to see the survey and know what information is going to be asked of them is really helpful. And then the other piece that we do is to really think about the subject line, which can sometimes be an afterthought; we really think about that in the design phase so that it becomes something we’re considering throughout the process. I will pause there just really quickly to see if there’s one quick question that we may be able to answer.

What value do incentives offer in trying to increase response rates?

Lana
I personally shy away from incentives for a host of reasons. One is that an incentive is only an incentive if it actually increases the likelihood of completing the survey. Just because you call something an incentive doesn’t mean it necessarily is; a giveaway, for example, isn’t automatically an incentive. A lot of times, just making the survey meaningful and relevant to the responder, so they feel the findings will help them, is more of an incentive than what we traditionally call an incentive. Rachael, do you have a quick thought on that?

Rachael
I would agree. When you’re doing a federally funded project like we are, I think it would probably be difficult for us to offer any kind of financial incentive, and I’m also not sure whether the funder would feel that we were skewing the results of the survey by doing something like that. So, I would feel hesitant myself to do that, and we’ve never had trouble. I mean, we’ve been lucky that we haven’t had problems getting a good response rate from the community.

Lana
Very good. Well, just a couple of quick wrap-up points, some things to think about to make your survey savvy. Remember to think about asking the right questions the right way. Also, consider how the results will be understood, and then infuse elements in the survey to help reach the individuals that you want to make sure you’re reaching.
Keep in mind that our next webinar is on April 14th, when we’ll talk about developmental evaluation. And I want to say thank you to Rachael for joining us today. This was so much fun, to be able to have this conversation, and for everyone else in the audience, hopefully this was informative for you, and thank you for coming as well. We hope you have a great rest of the day!

Rachael
Thanks! Bye, bye, everybody!