Where I work, there are many internal and external surveys going around, and there is always some arbitrary percentage set as the target response rate. When I ask why the target is set at a certain level, I rarely receive a clear answer. Even more troubling, when I ask whether any sample size calculations have been done to know exactly how many responses would be needed to hit a target confidence level and margin of error, I get stared at like I'm speaking Latin and overcomplicating a simple task. The problem with that thinking is twofold: we tend to jump to conclusions based on survey results without knowing whether the changes reflect an actual change in the population being surveyed, and sample size calculations based on a set confidence level and margin of error are easy to start doing.
Without getting too mathy, here are explanations of the two measures that affect survey results and need to be taken into account when comparing surveys (which sample size calculations let us do easily):
• Margin of error (which defines the width of the confidence interval): how far the results from your respondents can deviate from the true value for the entire population
o Example: You set the margin of error to 5% for your survey and 90% of respondents say they like pizza. A 5% margin of error means you can be “confident” that between 85% and 95% of the entire population actually likes pizza
• Confidence level: This tells you how often the true population percentage actually falls within the boundaries of the margin of error.
o Example: If between 85% and 95% of the population likes pizza (as above) and we chose a 95% confidence level, we can say that 95% of the time, the true share of pizza lovers in the population falls between 85% and 95%.
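To make the connection between the two measures concrete, here is a minimal Python sketch of the standard margin-of-error formula for a proportion (the function name and the sample size of 138 are my own illustration, chosen because it lands near the 5% margin from the pizza example):

```python
import math

def margin_of_error(p, n, z=1.96):
    # z = 1.96 is the critical value for a 95% confidence level;
    # p is the observed proportion, n the number of respondents.
    return z * math.sqrt(p * (1 - p) / n)

# Suppose 90% of 138 respondents say they like pizza:
moe = margin_of_error(0.90, 138)
low, high = 0.90 - moe, 0.90 + moe
print(f"Margin of error: {moe:.1%}; between {low:.0%} and {high:.0%} like pizza")
```

The margin of error shrinks as the number of respondents grows, which is exactly why the response count (not just the response rate) matters.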
Alright, at this point you still might be asking yourself “is this important, and how does this help with survey results?” Don’t worry, I’m going to address that more directly below (which is pretty much a TLDR of the example link above (TLDR = too long, didn’t read)):
Let’s say we have an in-scope deployment group of 500 people.
We send the survey out and only 88 people respond.
If we look at the results with a confidence level of 95% (there is an equation for this that I can show you, or you can look it up on the interwebs), the margin of error is about 9.5%. That means we would need to see a shift (positive or negative) of greater than 9.5% in the survey results before we could truly say there was any change at all in the population.
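That 9.5% figure can be reproduced with a few lines of Python. Because the 500-person group is small, the calculation below includes a finite-population correction and uses p = 0.5, the most conservative assumption (the function name is mine):

```python
import math

def margin_of_error(n, N, z=1.96, p=0.5):
    # n = responses received, N = population size.
    # z = 1.96 for a 95% confidence level; p = 0.5 maximizes
    # p*(1-p) and so gives the widest (safest) margin.
    base = z * math.sqrt(p * (1 - p) / n)
    fpc = math.sqrt((N - n) / (N - 1))  # finite-population correction
    return base * fpc

print(f"{margin_of_error(88, 500):.1%}")  # prints 9.5%
```

Without the correction the margin would be about 10.4%; the correction reflects the fact that 88 responses out of only 500 people already cover a meaningful slice of the population.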
So if we are comparing two surveys and see an 8% increase in understanding of LMS tools, but the margin of error for the survey is 10%, there was technically no measurable change at all: that 8% could be attributed to noise and is not necessarily representative of the population we are trying to survey.
So how do we easily start calculating required sample sizes and the other mumbo-jumbo above? This is where the internet really comes in handy, because right here is a handy calculator that takes into account not only confidence level and margin of error, but also the estimated response rate!
If we start using the calculator (here again) to get the sample size up front, we can compare survey results with much more confidence that employee opinions are actually changing, and we aren’t over-processing by chasing opinion changes caused by statistical noise rather than actual shifts in population opinion.
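If you ever want to do what the calculator does yourself, here is a sketch of the standard sample-size math (function names and the 30% response rate are my own illustration, not values from any particular calculator):

```python
import math

def required_responses(N, moe=0.05, z=1.96, p=0.5):
    # Base sample size for a very large population, then
    # adjusted downward for a finite population of size N.
    n0 = (z ** 2) * p * (1 - p) / moe ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / N))

def invites_needed(N, response_rate, moe=0.05):
    # Scale up the required responses by the expected response rate.
    return math.ceil(required_responses(N, moe) / response_rate)

# 500-person group, 95% confidence, 5% margin of error:
print(required_responses(500))    # prints 218 (responses needed)
print(invites_needed(500, 0.30))  # prints 727 (invites at a 30% response rate)
```

Note that for our 500-person group, hitting a 5% margin of error takes 218 responses, far more than the 88 we actually got, which is exactly why the observed margin of error ballooned to about 9.5%.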
-jess