April 15, 2022

For those of us in the northern hemisphere: 

“If we had not winter, the spring would not be so pleasant.” 

— Anne Bradstreet 

In keeping with this week’s focus on statistics: 

“[There are] lies, damned lies, and statistics…” 

— Mark Twain 

Brilliant on the Basics

 Why Participation Rates Matter in Organizational Surveys 

 Org-wide surveys often do not follow the rigor that a typical researcher would apply when administering a survey. In a typical research scenario, the researcher randomizes the sample to eliminate biases to the extent possible. In an organizational survey, however, the survey is given to everyone in the organization with the request that they respond (an observational survey). This means there is a greater likelihood that biases will appear in the survey population. For example, unhappy employees may think the survey is a waste of time and not respond, while employee ambassadors may be eager to share what they love about their jobs.  

 So, how do we avoid bias in observational surveys? The good news is that we can account for biases when the sample size is large enough and participation rates are high. In that case, the response distribution will approximate what we would expect from a properly randomized sample.  

 At DecisionWise, we like to see participation rates that exceed 70-75%, and where possible, go above 80%. What happens if participation rates are lower than 70%? Then, the perceptions, attitudes, and beliefs that have been shared by the sample population should be viewed more anecdotally and from a qualitative perspective.
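The thresholds above can be expressed as a simple check. The function names and the example figures below are hypothetical illustrations, not a DecisionWise tool:

```python
def participation_rate(responses, population):
    """Share of the organization that completed the survey."""
    return responses / population

def confidence_band(rate):
    """Rough interpretation bands from the text: 70-75% floor, 80%+ goal."""
    if rate >= 0.80:
        return "strong: treat results quantitatively"
    if rate >= 0.70:
        return "acceptable: interpret with some care"
    return "low: treat results as qualitative/anecdotal"

# Hypothetical org of 900 employees with 612 completed surveys
rate = participation_rate(612, 900)
print(f"{rate:.0%} -> {confidence_band(rate)}")  # 68% -> low band
```

A 68% rate falls just under the 70% floor, which is exactly the situation where results should be read qualitatively rather than quantitatively.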

 Our goal is to strive for a participation rate that exceeds 80%. 


Please note that qualitative data is not a bad thing; it simply means that the data and insights should be evaluated in the proper light – from the perspective of a group sharing its feelings rather than a pure quantitative research survey.   

 Tip: If you ever run into a situation where participation is low, don’t give up. Please contact us for help. We have the experience to help you get the most from the data you have collected.

Starting Right: A Series on Good Survey Methodology and Statistics (Part 1) 

In keeping with our discussion above on survey participation, we are going to highlight a series of articles over the coming weeks that focus on good survey methodology and the proper use and application of statistics. Our goal is to help you improve the quality of your data.  

In this series we will cover the following ideas: 

  • Finding the right places and populations from which to gather information. 
  • Making sure you ask the right questions to maximize your data’s effectiveness.  
  • Eliminating or avoiding bad data that could be tainted by biases. 
  • Eliminating artificial or extraneous noise through statistical methods.
  • How to effectively manage data by making it simpler and easier to understand. 
  • Uncovering the most important themes, insights, and stories from the data.  

Today, we will focus on the basics of survey methodology, which are codified in four pillars of survey research:  

1. Content Standards: Do our survey questions focus on the right things; do the questions get at the heart of what we are trying to understand? If we want to understand how an employee feels about their supervisor, we might ask a simple question like: Please rate your supervisor on a scale from 1 to 5, with 5 being the highest. Yet is that really the concept we are trying to understand? For example, if we are concerned about the relationship between employee and manager, we may want to ask a series of questions that cover trust, communication (frequency and quality), respect, etc.  


  • Ask about observable behaviors instead of thoughts or motives, which cannot be observed. 
  • Use a consistent scale. 
  • Avoid merging two disconnected topics or ideas into one question. (e.g., Do you like the size of our breakroom and how often do you eat at your desk?) 
  • Protect confidentiality. 
  • Ensure face validity.  

2. Cognitive Standards: Will participants understand the questions, do they have enough information to answer the questions, and are they willing and able to formulate answers? In essence, will the questions be valid? At a high level, validity is the concept that similarly situated people will understand the survey in the same way – the survey measures what it claims to measure. Validity is concerned with whether the survey is accurate.  

 Reliability, on the other hand, is about consistency. Does understanding vary from person to person, or have we framed things so that a vast majority of participants understand what we are asking? In other words, does the survey return consistent results, or is some strange phenomenon at play?   

The following matrix can be helpful in thinking about validity and reliability.

[Image: validity and reliability matrix]
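Reliability can also be quantified. One common statistic is Cronbach’s alpha, which measures how consistently a set of related survey items moves together across respondents; values of roughly 0.7 or above are conventionally read as acceptable internal consistency. The data below is hypothetical:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for internal-consistency reliability.
    `items` is a list of columns: one list of scores per survey item."""
    k = len(items)
    item_vars = sum(pvariance(col) for col in items)      # per-item variance
    totals = [sum(scores) for scores in zip(*items)]      # per-respondent total
    return (k / (k - 1)) * (1 - item_vars / pvariance(totals))

# Three related 5-point items answered by five respondents (hypothetical)
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 3],
]
print(round(cronbach_alpha(items), 2))  # 0.89
```

Here the three items track each other closely, so the scale is reliable; whether it is also valid (measuring what it claims to measure) is a separate question, as the matrix above illustrates.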


  • Use simple words with well-established and universal meanings. (i.e., Use language an 8th grader can understand.) 
  • Consider participant knowledge and ability to answer meaningfully.
  • Avoid double negatives. 
  • Avoid jargon and acronyms.
  • Avoid emotionally charged words or language that has strong associations.  
  • Consider using pretests and pilot programs to refine your efforts before launching across the entire organization.  
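The “language an 8th grader can understand” guideline can be sanity-checked with a readability formula. The Flesch-Kincaid grade level is one standard measure; the sketch below takes pre-tallied counts (the sample counts are hypothetical):

```python
def fk_grade(words, sentences, syllables):
    """Flesch-Kincaid grade level from raw counts.
    Roughly maps text difficulty to a U.S. school grade."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# A 12-word, one-sentence survey item with 16 syllables (hypothetical tally)
grade = fk_grade(words=12, sentences=1, syllables=16)
print(round(grade, 1))  # about grade 4.8, comfortably below the 8th-grade target
```

Short sentences and short words drive the score down, which is exactly what the bullets above recommend.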

3. Usability Standards: Will the insights and data that are collected help us answer the question we are trying to address, and will the results be organized in a useful format? Will the results be tainted by an unreasonable amount of bias (in whatever form)?  


  • Does our survey eliminate biases?  
  • Are the questions leading?  
  • Does our sample reflect a random population or a cluster of people who are unhappy with the current circumstances?  
  • Are we asking the right people to give us their feedback? 
  • Is this the right group to be answering the survey? For example, should we be asking peers who have minimal interaction with a leader to comment on that leader, instead of asking the leader’s direct reports? 

For additional tips on combating bias, click HERE.  

4. Actionability Standards: Are we able to do something with the information we gather, and can we solve the problems that are uncovered? Will the data help us in our mission to solve business problems or in tackling our 6-month, 12-month, and 2-year business challenges? 

We’ll stop at this point for today, as this is a lot to unpack and review. Next week we will focus on using statistics to prepare our data for study and review.  

What’s Happening at DecisionWise


We are thrilled to announce our employee engagement top performer awards for 2022. These awards recognize clients whose 2021 employee engagement survey results placed in the top 10% for their respective company size. 



We want to let our readers know that we have an online training that teaches the basics of our Engagement MAGIC® model. “This is another great step forward for DecisionWise,” said Tracy Maylett, Ed.D, CEO of DecisionWise. “Our research is clear that employee engagement is most influential at the team level, and the manager of that team has a significant impact on the engagement levels of the team. We have been successfully providing this world-class training in live workshop settings for several years, and now we are thrilled to offer it in an online format.” 


HR News Roundup 

  • Here is a great Forbes article on 10 lessons learned by using people analytics. LINK
  • Another piece from Forbes HR Council on 4 steps to turn employees into brand ambassadors. 
  • Sinazo Sibisi and Gys Kappers make the case in an HBR online article that a great employee experience starts with the onboarding experience.
  • In an interesting piece on the future of work, Jared Spataro at Microsoft talks about how radical intentionality is a must in building a hybrid workplace that will… actually work.  
  • Here is a software review on the top 10 current HCM systems from CIO magazine.
