Using 360 Performance Evaluations to Minimize Anchoring and Other Biases

Let me tell you how jellybeans can improve your performance evaluations.

Have you ever entered a jellybean counting contest?  You know the kind I’m talking about – a big jar, full of jellybeans, sits on a table and it’s your job to estimate how many are contained therein.  Guess close enough and win a prize.  What was your strategy?  Over the years, I’ve noticed two approaches that stood out above all others.  The first is to start counting jellybeans vertically and then reach waaaay back to high school and try and remember how to calculate the volume of a cylinder.  The second is to walk around the jar trying to get a feel for the amount, again, by rough counting.  In the absence of any other information, these are generally the best two strategies.  If you only used these strategies, you’d stand a reasonable chance at your next county fair.

But you’re a competitive sort, aren’t you?  You want to OWN the title of most accurate jellybean guesser and you want that giant stuffed walrus!  So naturally you seek out more information.  You pull the carny aside and ask for a hint.  And boom, you’re toast.  You see, more information isn’t always better, and what you’ve just set yourself up for is called anchoring. 

A prime example of how easily our judgment can be swayed, anchoring is when we give too much weight to an initial piece of information, which then skews our subsequent decision-making.

For example, let’s say there were 756 jellybeans in the jar.  If I said, “it’s either more or less than 4,000 jellybeans,” well, technically, that’s true.  But your guess will now skew higher than it would have without that information.  Same thing if I said there were more or less than 100 – now you’re skewing too low.  And this works even when the number isn’t directly part of the question, like a sign that says “X-5000 Jellybean Extravaganza”.  This is just one example from a whole field of research that takes great pleasure in pointing out how awful we can be at making decisions.  Humans are cognitive misers, meaning we naturally take the path of least mental effort, and as such we are pretty vulnerable to irrational influence.
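If you’d like to see the effect in action, here’s a toy simulation.  To be clear, the “pull toward the anchor” model and every number below are illustrative assumptions, not real experimental data – just a sketch of how an anchor drags guesses up or down:

```python
import random

random.seed(42)
TRUE_COUNT = 756  # actual number of jellybeans in the jar

def make_guess(anchor=None, pull=0.3):
    # Unaided guess: noisy, but centered on the true count
    estimate = random.gauss(TRUE_COUNT, 150)
    if anchor is not None:
        # Anchoring modeled as drifting a fixed fraction toward the anchor
        estimate = (1 - pull) * estimate + pull * anchor
    return estimate

def mean_guess(anchor=None, trials=10_000):
    return sum(make_guess(anchor) for _ in range(trials)) / trials

print(f"No anchor:   {mean_guess():7.0f}")      # hovers near 756
print(f"Anchor 4000: {mean_guess(4000):7.0f}")  # skews way too high
print(f"Anchor 100:  {mean_guess(100):7.0f}")   # skews too low
```

Even a plainly useless anchor moves the average guess hundreds of beans in its direction.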

If life is like a box of chocolates, then people are a lot like jars of jellybeans.  There might be a lot of surface information, but it’s difficult to know everything that’s going on inside. (I was going to add “we are full of sugar” but that might just be me!)  When we do a performance evaluation, whether of someone else or ourselves, we generally have the right idea about what to do (much like counting the height and circumference of the jar) but that doesn’t always lead to the most accurate results.  Undue weight can easily land on irrelevant factors: whether the person recently told us a joke or lent us something, what they look like, where they come from, their race or gender – the list goes on and on.  So how can we avoid these problems when we are often not even aware of them?  How can we improve the accuracy of performance evaluations?

Any expert worth their sweet tooth would say the first step to improving accuracy is to get more raters – in other words, to crowd-source.  There is a robust literature demonstrating the wisdom of crowds: while no single person may come close to the right answer, averaging the input from multiple raters will substantially increase the precision of your evaluation.  People who intuitively take a walk around the jellybean jar are doing exactly this, gathering multiple viewpoints to get a clearer picture – a 360-degree view.  In the same way, involving multiple raters, each with their unique perspective, allows us to home in on the actual truth of the matter.
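For the statistically curious, here’s a quick sketch of why averaging works.  The 10-point scale, the noise level, and the rater counts below are all made-up assumptions, but the pattern they demonstrate is the standard one:

```python
import random
import statistics

random.seed(7)
TRUE_SCORE = 7.2  # the "real" performance level on a hypothetical 10-point scale

def rater_score():
    # Each rater sees the truth through their own individual noise
    return TRUE_SCORE + random.gauss(0, 1.5)

def mean_error(n_raters, trials=5_000):
    # Average absolute error of the mean of n_raters independent ratings
    errors = []
    for _ in range(trials):
        avg = statistics.mean(rater_score() for _ in range(n_raters))
        errors.append(abs(avg - TRUE_SCORE))
    return statistics.mean(errors)

for n in (1, 4, 9, 16):
    print(f"{n:2d} raters -> average error {mean_error(n):.2f}")
# Quadrupling the raters roughly halves the error: precision grows as sqrt(n).
```

One rater can miss badly; sixteen raters, averaged, rarely do.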

So back to the original question – how can jellybeans improve your performance evaluations?

First – crowd-source. Create more data points. You can’t take an average without things to average. By using 360 performance evaluations, you have a much better chance of getting accurate, actionable data with far fewer of the biases and misleading results that undermine our efforts, often without our even knowing it.
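As a minimal sketch of what “things to average” looks like in a 360 evaluation – all the group names and scores here are hypothetical, and real 360 tools weight and slice the data far more carefully:

```python
from statistics import mean

# Hypothetical 1-10 ratings from each rater group in a 360 review
ratings = {
    "self":           [8.0],
    "manager":        [6.5],
    "peers":          [7.0, 7.5, 6.8],
    "direct_reports": [7.2, 6.9, 7.4, 7.1],
}

# Average within each group first, so one large group can't drown out the rest
group_means = {group: mean(scores) for group, scores in ratings.items()}
composite = mean(group_means.values())

for group, score in group_means.items():
    print(f"{group:14s} {score:.2f}")
print(f"{'composite':14s} {composite:.2f}")
```

Notice how the self-rating runs higher than everyone else’s – exactly the kind of skew that averaging across perspectives smooths out.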

Second – consult an expert. They are the ones who have been running 360 performance evaluations the longest, have a feel for what to look for, and have the right tools to measure performance accurately. They know how to avoid the pitfalls and biases, like anchoring, that so naturally beset us. And they know how to interpret the answers correctly.

So who should you call to get that expert advice? I’d tell you but, well…I’m biased.