Using 360 Degree Feedback to Gauge Leadership Improvement

The vast majority of our DecisionWise clients administer 360-degree feedback assessments annually across multiple years. Yearly administration gives individuals a sense of how they are progressing (or NOT progressing), and it serves as a gauge for whether an individual’s action plan from the previous administration had its intended results. Used this way, 360-degree feedback is an effective means of measuring progress and a valuable tool in any manager’s or organization’s toolkit.
While 360-degree feedback is an excellent tool when used correctly, some organizations may not be using this year-over-year measurement appropriately. For example, if an individual had an overall 360-degree feedback score of 5.1 (on a 7-point scale) and did not reach at least a 5.3 overall in the following administration, some would see this as a lack of improvement, or even failure. Some would even use it to question the value of the 360 process within their organization.

While this makes sense on the surface—track 360 results over time to gauge levels of improvement—there are several additional factors to consider before relying on this methodology to assess an individual’s progress or the overall effectiveness of a 360-degree feedback process:

  1. Rater Changes: Many 360 recipients will have different raters from one administration to the next. Technically speaking, this changes a variable, making year-over-year comparisons inexact.
  2. Rater Mindset: When organizations administer 360s over multiple years, raters become more accustomed to providing 360-degree feedback. While there may be some hesitance in Year 1 (due to perceived threats of retaliation, doubts about confidentiality, uncertainty about how results will be used, and so on), many of these misperceptions fade by Year 2, once raters see that their feedback was properly handled. Raters may therefore respond with a different mindset in Year 2 than in Year 1, knowing that it is safe to open up.
  3. Changing Roles: In today’s environment of constant change, a year is a long time. During that year, the likelihood that the individual being assessed will have changed roles or reporting relationships is quite high. Again, we’re assessing with a different set of variables.
  4. The Bar Is Raised: In a performance-oriented organization, performance that is “great” one year will be considered merely “good” the next. While the individual’s actual performance on key performance indicators may not have decreased, he or she will be rated against (hopefully) higher expectations from one year to the next, and this may be reflected in 360 scores. 360 results reflect performance relative to expectations, and expectations change from year to year.
  5. Overall Average: Logic would seem to dictate that multi-rater (360) assessment results will improve when tracking the same subject group over time, and several studies (Hazucha et al., 1993; London & Wohlers, 1991; Walker & Smither, 1999) appear to support this logic. However, Smither, London, and Reilly (2005) argue that much of this change will not be readily visible in year-over-year overall 360 results, because most feedback programs encourage participants to focus on only a few key areas for change. Consequently, the recipient may make meaningful changes only in those areas. Those changes may be significant for on-the-job performance, yet have little impact on the average ratings.

Certainly, an individual who shows dramatic improvement in 360 scores has likely made meaningful strides since the previous administration. Reviewing year-over-year changes makes sense and should certainly be part of gauging progress. But we must be careful about basing our evaluation of an individual’s 360 success solely on looking for improvement from “5.1 to 5.3” overall, or on a particular competency or question.
---
Hazucha, J. F., Hezlett, S. A., & Schneider, R. J. (1993). The impact of 360-degree feedback on management skills development. Human Resource Management, 32(2-3), 325-351.
London, M., & Wohlers, A. J. (1991). Agreement between subordinate and self-ratings in upward feedback. Personnel Psychology, 44(2), 375-390.
Smither, J. W., London, M., & Reilly, R. R. (2005). Does performance improve following multisource feedback? A theoretical model, meta-analysis, and review of empirical findings. Personnel Psychology, 58(1), 33-66.
Walker, A., & Smither, J. W. (1999). A five-year study of upward feedback: What managers do with their results matters. Personnel Psychology, 52(2), 393-423.