The Truth About 360 Degree Feedback Validity

I am always surprised when I read a statement by a company touting the validity and reliability of its standard 360-degree feedback instrument. The reality is that 360-degree instruments are not valid in the statistical sense of the word. Sure, we can (and should) check measures such as content and construct validity, and an instrument should certainly have face validity; anyone who tells you these are unimportant does not understand assessment. But the real problem is not one of statistics. The truth is, most 360s are not valid. Before I'm labeled a heretic, allow me to explain.

  1. Reliability refers to an instrument's ability to produce the same results under the same conditions. The problem is that most 360 participants will see gaps in their report: differences in how raters see their performance. Technically, many statisticians would say the survey is, therefore, not reliable. The truth is, these very gaps in perception are what make 360-degree feedback so powerful. They allow us to see how others' perceptions of our performance differ from our own (a simple sketch of surfacing these gaps follows this list).
  2. The scale used is generally not a "valid" measure. Nearly all 360 feedback surveys use a 4-, 5-, 7-, or 10-point Likert scale. The problem here is that a "5" to one rater may be a "6" to another: they mean to indicate the same level of performance, but they inherently score differently. Also, in the rater's mind, is the difference between a 5 and a 6 on a 7-point scale the same distance as between a 3 and a 4? Usually not. Giving a score of "3" is MUCH harsher (at least in some raters' minds) than giving a "4," yet the difference between a 4 and a 5 may not be as significant in that same rater's mind. Get the picture? Now, that said, this is mitigated in several ways: a) selecting a larger group of raters helps balance out individual scoring tendencies; b) educating raters on the rating scale can go a long way; and c) 360 feedback is often best at sorting out the "outliers," the very high and very low ratings. Small differences between scores shouldn't be overemphasized, but very high and very low scores are areas for attention (the second sketch after this list shows one way to adjust for rater tendencies).
  3. Off-the-shelf assessments generally do not measure what is most critical to a particular organization. We work with many organizations that come to us after purchasing an off-the-shelf product or software application, only to find it is not a valid measure of what's important to them. Customizing an assessment to the needs of the organization is extremely important. The trade-off, however, is that the customized version will not have the years of statistical testing that many vendors claim is so important.
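To make the perception gaps in point 1 concrete, here is a minimal sketch in Python. The competency names, the scores, and the one-point cutoff are all invented for illustration; this is not taken from any particular 360 product. It simply compares a participant's self-ratings with the average of the other raters, item by item:

```python
# Hypothetical 360 data: self-ratings vs. everyone else's ratings on a
# 7-point scale. All names and numbers are made up for illustration.
self_ratings = {"listens": 6, "delegates": 5, "gives feedback": 6}

other_ratings = {
    "listens":        [4, 5, 4, 5],   # peers, direct reports, manager, ...
    "delegates":      [5, 6, 5, 5],
    "gives feedback": [3, 4, 3, 4],
}

GAP_THRESHOLD = 1.0  # arbitrary cutoff for a "gap worth discussing"

for item, self_score in self_ratings.items():
    others_avg = sum(other_ratings[item]) / len(other_ratings[item])
    gap = self_score - others_avg
    flag = "  <-- perception gap" if abs(gap) >= GAP_THRESHOLD else ""
    print(f"{item:15s} self={self_score}  others={others_avg:.2f}  gap={gap:+.2f}{flag}")
```

A reliability purist would treat the flagged rows as noise; in a 360, they are the product.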
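And to illustrate mitigation a) in point 2, here is one common technique, per-rater standardization, offered as an assumption on my part rather than a claim about how any vendor scores its surveys. Rescaling each rater's scores against that rater's own mean and spread dampens the "my 5 is your 6" problem and leaves only relative judgments, which makes genuine outliers easier to spot:

```python
import statistics

# Hypothetical data: three raters scoring the same five items on a
# 7-point scale. Rater B is exactly two points harsher than rater A.
raters = {
    "A": [5, 6, 6, 7, 5],
    "B": [3, 4, 4, 5, 3],
    "C": [4, 6, 5, 7, 2],
}

def standardize(scores):
    """Rescale one rater's scores to mean 0, stdev 1 (their personal scale)."""
    mean = statistics.mean(scores)
    spread = statistics.pstdev(scores) or 1.0  # guard against identical scores
    return [(s - mean) / spread for s in scores]

# After standardization, A's and B's profiles are identical: the two-point
# leniency gap disappears, while C's unusually low fifth score still stands out.
for name, scores in raters.items():
    print(name, [f"{z:+.2f}" for z in standardize(scores)])
```

With a larger rater group, the same logic holds without any math: idiosyncratic uses of the scale wash out in the average, which is exactly why point a) above recommends more raters.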

The real test of validity? Is the instrument a good indicator and measure of what's important, and can I act on the feedback? If the answer to both is "yes," you're far better off than if you can rattle off a bunch of statistical findings.