Date: Tuesday, April 18, 2023

Time: 1:00 pm Eastern / 10:00 am Pacific

Presenter: McKay Kelly, Customer Success Manager

If you’re struggling to get useful feedback from raters or having difficulty choosing the right ones, we’ve got you covered.

McKay Kelly will guide you in selecting the right raters for effective feedback, covering everything from the total number of raters to maintaining confidentiality while choosing them and deciding whether participants should choose their own raters or have them pre-selected.

See how rater groups can impact reporting, and learn how to create and change rater groups within the platform.

Transcript

McKay Kelly | 00:00

Okay, let’s go ahead and get started here. Welcome, everyone. My name is McKay Kelly. I’m a Client Success Manager here at DecisionWise. Today we’re gonna be talking about rater groups for 360 assessments, kind of going through how to decide how you’d like to handle the rater groups, and then going through some of the platform capabilities as well. I have some of my colleagues on the call today as well. If you have any questions, feel free to put them in the chat. They can either answer there in the chat or kind of toss it over to me and we can discuss it here.

McKay Kelly | 00:34

So, here’s what we’re gonna be going over today. First, we’re gonna talk about how to decide what you’d like to do with rater selection, whether that’s gonna be preselected, where you’re uploading all of the raters for the participants, or whether we’ll have the participants select them themselves. We’ll go over some general recommendations for raters and rater groups, as well as the difference between what required versus crucial raters means. We’ll also talk about what the rater groups look like in the reporting so you know how that’s broken out. And then we’ll jump in and show you, once we have everything set up with the rater groups, some of the platform capabilities that make it pretty easy to start assessments and to upload raters within the tool. So as we talk about whether we’re gonna have the participants select their own raters or whether we’ll select them, one of the big questions here is whether we’re doing these 360s for performance or for development.

McKay Kelly | 01:29

So, oftentimes 360s are used in some sort of performance evaluation. They’re a great way to kind of get a read on certain individuals’ performance. Now, if that’s the case and you’re using the 360 more on the performance side, we would recommend going with the preselected route. So, focus more on uploading the raters for the participants. This just ensures that you know exactly who is gonna be rating them and that you’ve selected the individuals most crucial to evaluating their performance. On the flip side, many 360s are used simply for development, whether that be for our executive leaders, middle managers, or at the individual contributor level. 360s are a great way to find out some areas of opportunity as well as strengths that we can capitalize on. And if we’re focused mostly on development, we recommend going with the self-selected rater approach. It’s very helpful to have the participants select at least a few of their own raters. This helps them, one, know who’s gonna be rating them, and two, buy into some of the feedback that they’re collecting.

McKay Kelly | 02:34

So there are certain benefits to each of the different rater selection methods that we’ll go through right now. For the preselected raters, where the admins will be uploading all of the raters for their participants, to start, it allows for a quicker timeline. If we have the participants selecting their own raters, we need to build, you know, usually at least three to five days into the timeline to allow them time to select their raters. But if you are selecting them before the assessment begins, that allows for a quicker timeline, and we can jump straight into the feedback collection phase. It also allows us to control how many assessments the raters need to take in a certain cohort. Oftentimes we have large cohorts that kind of overlap. A lot of people work with each other within one cohort, and we want to avoid having one rater having to rate maybe 15 or 20 participants in a two-week span.

McKay Kelly | 03:28

That tends to get pretty taxing on them, and we get some rater fatigue there. So when you pre-select the raters, you can then look through to make sure to manage the number of assessments that each rater’s gonna be taking. The self-selected rater approach provides an easy upload for the administrators. You’ll only need to upload the participants, but won’t have to worry about any of the raters. It also involves the participants in the process. This allows them to select their own raters, so they know who’s gonna be rating them, and it also will increase the trust in the results. If they were the ones that selected who is going to be rating them, as opposed to having this more forced upon them and having their raters chosen for them, they tend to be a bit more open to the feedback.

McKay Kelly | 04:11

Now, the self-selected rater approach does allow for rater approval. So once they’ve selected their raters, either you as an admin or maybe their supervisor has the option to go in and review the list of raters that the participant has selected. And then you can either straight up approve it, or you can add raters, remove any, and then approve to move on to the feedback collection process. We do have a hybrid approach, which is where the administrators will upload some of the raters, and then the participant also has the ability to jump in and select some of their own. So this also involves the participants, but it will reduce the number of raters that the participant needs to select. For example, the admin on the front end can upload the supervisor and the direct reports and then allow the participant to add the peer or the other category.

McKay Kelly | 05:01

So this means that for the participant, it’s a bit of a quicker process. They don’t have to upload every single one of their raters; some of them are already in there for them. We do have a function as well where, on the front end, the admin can lock the raters that they’d like to have in there. So if they upload the supervisor and the direct reports and they wanna make sure that those raters are not changed by the participant, they can lock those so that the participant can make no changes, and then leave some of the rater groups open for the participant to add individuals as well. With this hybrid approach, we also do have the rater approval. Now, in general, we recommend always involving the participants in the rater selection process. We do a lot of coaching sessions and have found that the participants who have the ability to be involved in the rater selection process are much more open to the feedback.

McKay Kelly | 05:52

It’s much less confusing for them to make sense of the results if they know who gave them the feedback and if they actively chose who was going to be rating them. Now, with the preselected raters, you can also involve the participants in some ways. You could, for example, send them a template of who you’ve thought of prior to uploading them and just have them review it quickly outside of the platform. So there are ways in each of these different rater selection methods to make sure that the participants are involved.

McKay Kelly | 06:24

Some general recommendations that we have for raters and rater groups. One is to include about 10 to 15 raters per participant. So you’ll have one for the self, which will be the participant, and one supervisor. Normally we recommend involving all of the direct reports, just making sure that all of the direct reports have a chance to rate their supervisor so no one feels left out. And then for the peer and the other category, we tend to see three to five in each of these. Oftentimes our minimum confidentiality threshold is set at two. The reason we recommend having more than two is just in case somebody is not able to complete the assessment; we wanna make sure that we’re still meeting our confidentiality thresholds. So having three to five gives a broader range, but also ensures that as long as most of the raters take the assessment, we’ll still be able to see their feedback.

McKay Kelly | 07:21

So there’s no limit in the system on how many raters you can include. However, we recommend no more than 20 raters, for a couple of reasons. One is that we wanna make sure we’re choosing raters that are close to the participant, and the more raters we add, the more likely it is that those rating them are not fully aware of the broad scope of their work. The other is that the more raters we get, the more the scores tend to average out. So instead of being able to see real differences within the rater groups, they trend more toward an average. Like I just mentioned, we wanna make sure that we’re only including raters that work closely enough with the participant to accurately answer the items within the assessment. The raters do have the option of “don’t know,” or they have the option of skipping the question altogether. And to get the best data possible, we wanna make sure that we’re only selecting those that really do work closely with the participant.
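
To make the threshold logic concrete, here’s a minimal Python sketch of the check described above. It’s hypothetical: the group names and minimums mirror the examples in this webinar, not the platform’s actual code. A rater group’s scores can only be broken out if at least the minimum number of raters responded, which is why inviting three to five people per group is safer than exactly two.

```python
# Hypothetical sketch of the confidentiality-threshold check described above.
# Group names and minimums mirror the webinar's examples; this is not the
# platform's actual implementation.

CONFIDENTIALITY_MINIMUMS = {
    "Self": 1,
    "Supervisor": 1,
    "Direct Report": 2,
    "Peer": 2,
    "Other": 2,
}

def can_show_group(group: str, responses_received: int) -> bool:
    """A rater group's scores are only broken out in the report
    if it meets its confidentiality minimum."""
    return responses_received >= CONFIDENTIALITY_MINIMUMS[group]

# Inviting 3-5 peers instead of exactly 2 leaves headroom for non-responses:
print(can_show_group("Peer", 2))  # True  - exactly at the minimum
print(can_show_group("Peer", 1))  # False - one non-response hides the whole group
```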

McKay Kelly | 08:19

Prior to the assessment starting, we also wanna make sure that we’re communicating with our raters about the process. Rating in a 360 can be a more vulnerable process, because we’re rating individuals, and we wanna make sure that everyone knows what the process is and what to expect. Now, on our end, we strongly recommend a pre-assessment webinar to explain the process. This is where one of our coaches can come in and speak with an entire cohort and kind of give them the rundown of what to expect throughout the process, so that there are no surprises, and so that when they get that email in their inbox the morning the assessment begins, they know what to expect. They know how their scores are gonna be shown and how the data is gonna be used. And then, as I just mentioned, we wanna ensure that the raters understand how their scores are gonna be reported, including the confidentiality thresholds and whether there’s gonna be group reporting and things like that.

McKay Kelly | 09:18

When you jump into the platform, you’ll see that with the rater groups, you have the option of marking them as required or crucial. When we’re talking about required, this requires the minimum number of raters to be added to the rater group. So the required field comes into play when we’re uploading or when we’re selecting raters. Oftentimes there’s a rater group that maybe we don’t have anybody to put in, for example, direct reports. Sometimes we’re gonna be doing 360s for individual contributors. In that case, we would wanna leave the direct reports not required so that they can leave that field blank. But any of the rater groups that are marked required means that either the participant, when they select their own raters, or you, when you upload the raters, will have to meet the minimum threshold within that rater group. Now, the self and supervisor are automatically required. Those are two very important rater categories that we wanna make sure we have in there.

McKay Kelly | 10:13

You’ll also have the option of marking rater groups crucial. So this comes into play during the administration of the assessment, and it requires the minimum number of responses within the rater group before the assessment can close. Like I mentioned earlier with our confidentiality thresholds, for the self and supervisor, that’s usually set at one, ’cause there’s normally only one individual in that category. And then for the others, a minimum of two, sometimes higher. So crucial means that if we get to the end of the assessment stage, and let’s say we’ve marked our direct reports crucial, but only one direct report has so far answered the assessment, then the assessment would not be able to close, ’cause we wanna make sure that we meet the minimum within that rater group. Now, in our system, self and supervisor are automatically crucial, and then you’ll have the option of marking the other rater groups crucial as well, if you want to ensure that we meet the minimums in every rater group before it can close.
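
To illustrate the difference, here’s a small hypothetical Python sketch. The group setup mirrors the webinar’s example, and none of the names are the platform’s real API: required gates the number of raters added before launch, while crucial gates the number of responses received before the assessment can close.

```python
# Hypothetical sketch of the required-vs-crucial distinction described above.
from dataclasses import dataclass

@dataclass
class RaterGroup:
    name: str
    minimum: int
    required: bool  # minimum raters must be ADDED before the assessment starts
    crucial: bool   # minimum RESPONSES must arrive before the assessment can close

def can_launch(groups, raters_added):
    """Required groups must meet their minimum at rater-selection time."""
    return all(raters_added.get(g.name, 0) >= g.minimum for g in groups if g.required)

def can_close(groups, responses_received):
    """Crucial groups must meet their minimum responses before closing."""
    return all(responses_received.get(g.name, 0) >= g.minimum for g in groups if g.crucial)

groups = [
    RaterGroup("Self", 1, required=True, crucial=True),        # always required and crucial
    RaterGroup("Supervisor", 1, required=True, crucial=True),  # always required and crucial
    RaterGroup("Direct Report", 2, required=False, crucial=True),
    RaterGroup("Peer", 2, required=True, crucial=False),
]

# Only one direct report has responded so far, so the assessment can't close yet:
print(can_close(groups, {"Self": 1, "Supervisor": 1, "Direct Report": 1, "Peer": 3}))  # False
```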

McKay Kelly | 11:11

So now I’m gonna jump over into the platform and show a couple of features. One thing I wanted to show is what the reporting looks like. It’s very helpful to know ahead of time what our reports look like and how the rater groups affect the reporting, which will have a big impact on how you choose to set up the rater groups and the minimums and so forth prior to the assessment. So, within the report, we have the rater summary. This will show everyone who was invited to take it as well as everyone who participated; in this sample data, 100% of the raters participated. We also have the option of showing the names of who all was invited to participate. We strongly recommend including the names, especially if you have used preselected raters or if you went through some sort of rater approval.

McKay Kelly | 11:59

This especially helps in coaching sessions when we’re talking with the participants about the different rater groups. If there are any sort of discrepancies, the participant is well aware of who it was that was invited to rate them and kind of has a bit more context there when looking through their data. Later on in the report, we do have these broken-out competencies that show the average score. Now, even if you don’t meet the minimum in one of the rater groups (let’s say for the other category only one of two raters participated), that individual’s scores will still roll up into the overall averages for the competencies. So we see here our foundational competencies as well as the competency summary. So even if somebody took the assessment but we’re not able to get a breakout because we did not meet the minimum, their scores will still roll up into the averages.

McKay Kelly | 12:54

Now, farther down in the report, we show the scores for each individual item. This is where we really want to meet the minimums, because if we don’t, then we won’t be able to show the individual breakouts down here. It’s really helpful, though, to see each individual item; for example, we can see some pretty big gaps here in the scoring. And so we wanna be able to break it out by each individual rater group, so that we can see if there are any groups in particular that are maybe scoring slightly lower. One important thing to note here, and another reason why we like to include more than just the minimums in the assessment: for example, if on one of these items one of the direct reports put “don’t know” instead of answering, then even though both of them submitted the assessment as a whole, we would not be able to show the direct report column down here on that individual item, because it would only be one out of two. So, an important thing to note: if anyone puts “don’t know” or skips an item, and that drops them below the minimum for that individual item, we won’t be able to show those scores.
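
Here’s a rough Python sketch of the reporting behavior just described, assuming hypothetical data structures (this is not the platform’s actual code): every valid answer rolls up into the overall average, but a group’s per-item breakout is suppressed when “don’t know” or skipped answers drop it below the minimum.

```python
# Hypothetical sketch of the per-item reporting behavior described above.
DONT_KNOW = None  # "don't know" and skipped items carry no score

def item_report(answers_by_group, minimums):
    # Every valid answer counts toward the overall average...
    all_scores = [s for scores in answers_by_group.values()
                  for s in scores if s is not DONT_KNOW]
    overall = sum(all_scores) / len(all_scores)

    # ...but a group's breakout is hidden if valid answers on THIS item
    # fall below its minimum, even when everyone submitted the assessment.
    breakouts = {}
    for group, scores in answers_by_group.items():
        valid = [s for s in scores if s is not DONT_KNOW]
        if len(valid) >= minimums[group]:
            breakouts[group] = sum(valid) / len(valid)
        else:
            breakouts[group] = "below minimum - not shown"
    return overall, breakouts

# Both direct reports submitted, but one answered "don't know" on this item:
answers = {"Direct Report": [4, DONT_KNOW], "Peer": [5, 3, 4]}
print(item_report(answers, {"Direct Report": 2, "Peer": 2}))
# -> (4.0, {'Direct Report': 'below minimum - not shown', 'Peer': 4.0})
```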

McKay Kelly | 13:59

So throughout the rest of the report, we go through and see the breakout for each individual item. One of the other things that raters often want to know is how their open-ended comments are going to be shown. For the comments, we just simply put all of them together and don’t show the rater groups. We do have that option, but typically we like to keep them in a list that is randomized. Since open-ended comments tend to be a little bit more personal and lend themselves to being able to tell who said what, we randomize the comments to kind of keep that confidentiality higher.
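
As a tiny illustration, the pooling and shuffling just described could look like this hypothetical Python sketch (again, not the platform’s actual code):

```python
# Hypothetical sketch of the comment handling described above: open-ended
# comments from all rater groups are pooled and shuffled so readers can't
# infer who said what from ordering or grouping.
import random

def anonymized_comments(comments_by_group):
    pooled = [c for comments in comments_by_group.values() for c in comments]
    random.shuffle(pooled)  # strip any ordering that hints at the source group
    return pooled
```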

McKay Kelly | 14:41

Now let me jump into our tool and just talk through a little bit about how to set up the rater groups in the first place. Rater groups are based on individual assessments. So if you have multiple different assessments going on, maybe one for individual contributors, one for executives, one for business leaders, you can make separate rater groups for each of these different assessments. All of these you can customize: if, for example, instead of peer you wanna say teammate or coworker, something like that, you can customize the wording. Here’s where we set our minimums. The minimum for the self is set at one because there’s only going to be one. Every once in a while we get more than one supervisor, so you can change this. And then our standard is to set the other rater groups at two, to ensure that confidentiality. Now, if you wanna add a rater group, that’s also available to you; you can remove rater groups; and, as we discussed prior, you can mark rater groups as required or as crucial.

McKay Kelly | 15:39

Now, these are flexible. Even after you’ve run an assessment, or if it’s during administration and you wanna make a slight change, you can make that change and it goes live in real time. So once we have our rater groups set up, there’s some cool functionality within the platform that makes it really easy for you as an administrator to send out assessments. So I’m gonna jump over here to some sample data. Now, we have two different things that really help with rater selection as well as starting assessments. One is the directory and one is the hierarchy. The directory is very simple. It’s a place where we can upload simply the employee ID, name, and email for everyone in your organization. What this does is, if we do self-selected raters, then when the participant goes in to select their own raters, it will autofill.

McKay Kelly | 16:30

So once you start typing a name, the rest of the name as well as the email address will autofill, and it makes the process much smoother for the participants. It cuts down on a lot of time as well as errors for them. Often what we see is, when participants have to type in emails or names by hand, we see some typos, and then emails don’t get sent out on time. So when we upload the directory, it really helps kind of cut down on any of those errors. Another very nice thing about the directory is we can add demographic filters to it, which then help us to start an assessment. So, for example, I’ve got a sample directory up here. Now, let’s say I come up to the filters here, and let’s say I wanted to run an assessment for all of our executives.
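
For illustration, here’s a minimal hypothetical Python sketch of that directory autofill. The directory fields (employee ID, name, email) come from the webinar; the example names and the matching logic are made up:

```python
# Hypothetical sketch of the directory autofill described above: as a
# participant types, matching directory entries fill in the full name and
# email, cutting down on typos in hand-entered addresses.

directory = [
    {"employee_id": "1001", "name": "Avery Jones", "email": "avery.jones@example.com"},
    {"employee_id": "1002", "name": "Jordan Smith", "email": "jordan.smith@example.com"},
]

def autofill(query: str):
    q = query.lower()
    return [entry for entry in directory if entry["name"].lower().startswith(q)]

print(autofill("av"))  # -> Avery Jones's full name and email fill in
```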

McKay Kelly | 17:15

I can choose the executive group. We have our four executives here. I select all of them, and I can initiate an assessment. We’ll have them take the executive leader assessment starting Monday, and we’ll have them select their own raters. Now, here you can also choose whether you’d like to require rater approval or not. For the executives, let’s say we won’t. Now, when I hit continue, it’ll automatically populate all four executives with their information. I can add them and we’re good to go. So, you know, within 30 seconds or so, we’re able to start an assessment for four individuals to do self-selected raters. So the directory is very helpful in initiating the assessments. Now, we do have another option that includes our hierarchy. For self-selected raters, the hierarchy is not as crucial, because they’ll be selecting their own using the directory. But we do have the option to build out a hierarchy in our system: if we’re doing a large cohort, we can do just a snapshot-in-time hierarchy, or we can also work with you to provide some integration to have an ongoing hierarchy. With this, it allows you to autofill raters and makes the process much simpler. Let’s say we’re gonna run it for our talent team, 29 individuals. Here, we’ll initiate an assessment using our business leader assessment.

McKay Kelly | 18:39

And let’s say we’re gonna do the hybrid, so we’d like to upload some of them. And we have this option to autofill raters from the hierarchy. I’ll go ahead and do that. It’ll take just a minute here to populate as it’s going through the hierarchy and generating for all 27 people that we’re running through. What’s really nice about this is that we can now drop down here and see that from the hierarchy it pulled everyone: the self, the supervisor, all of the direct reports, as well as any of those that were on the same level as them or seen as peers. Now, with this, you’ll want to go through and just take a quick glance and make sure that everybody who generated was correct. You can easily remove somebody if you need to, and you can add a rater as well.
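
Here’s a rough hypothetical Python sketch of what that hierarchy autofill does. The rater groups come from the webinar; the data structures, names, and function are assumptions, not the platform’s code:

```python
# Hypothetical sketch of the hierarchy autofill described above: given a
# participant, derive self, supervisor, direct reports, and peers (others
# reporting to the same supervisor) from the org hierarchy.

def autofill_from_hierarchy(person, supervisor_of, reports_of):
    boss = supervisor_of.get(person)
    return {
        "Self": [person],
        "Supervisor": [boss] if boss else [],
        "Direct Report": reports_of.get(person, []),
        "Peer": [p for p in reports_of.get(boss, []) if p != person],
        "Other": [],  # not derivable from the hierarchy; left to the participant
    }

supervisor_of = {"Avery": "Morgan", "Jordan": "Morgan"}
reports_of = {"Morgan": ["Avery", "Jordan"], "Avery": ["Riley"]}
print(autofill_from_hierarchy("Avery", supervisor_of, reports_of))
# -> Self: Avery, Supervisor: Morgan, Direct Report: [Riley], Peer: [Jordan]
```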

McKay Kelly | 19:25

Now, as we talked about earlier, you can lock certain people in. So, using this lock-all button, now everybody is locked in. So when we send it to the participant, they’ll only have the option of adding their other category. The other category won’t generate from the hierarchy, but that’s why this hybrid is a great strategy: we kind of autofill everyone from the hierarchy that we know for sure we want in there, and then it also allows some flexibility for the participants to select those that they’d like to rate them that don’t fit any of these other categories. Now, if you wanted, for example, to make self, supervisor, and direct report locked, but not the peer, you could come in here and unlock the peer. This would allow the participant to then jump in. They can see the peers that have been selected, but they have the option of removing any if it doesn’t make sense for them, or adding any peers that they’d like to as well. So the hierarchy and the directory are very, very helpful in kind of starting assessments. Once we have our rater groups set up and everything is ready to go, then it takes, you know, a minute or two to get some assessments sent out, either with self-selected raters or the hybrid, as well as the preselected. You can also autofill from the hierarchy with the preselected raters as well.
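
To show the locking idea concretely, here’s a minimal hypothetical sketch; the flag names are made up, not the platform’s API:

```python
# Hypothetical sketch of rater locking as described above: locked groups
# can't be changed by the participant; unlocked groups stay editable.

locks = {"Self": True, "Supervisor": True, "Direct Report": True,
         "Peer": False, "Other": False}

def participant_can_edit(group: str) -> bool:
    return not locks.get(group, True)  # unknown groups default to locked

# The participant can adjust peers and add "other" raters, but the
# admin-chosen supervisor and direct reports stay fixed:
print(participant_can_edit("Peer"))        # True
print(participant_can_edit("Supervisor"))  # False
```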

McKay Kelly | 20:43

Okay, that is what we had to go through today. Hope this was helpful for you. I’m kind of taking a look at the comment section here to see if there are any questions. Okay, we have one question: would we be able to create one for each team in my organization for staff to do an assessment on their managers? Um, to create one... is this asking to create an assessment? One assessment?

Speaker 2 | 21:17

I was trying to answer this, but I think it’s asking if we can create more than one hierarchy, one for each team in their organization for staff. I think that’s what they’re asking.

McKay Kelly | 21:30

Yeah, that’s a great question. And the answer is yes, you can create more than one hierarchy. And then on the back end, you’ll just let the system know which hierarchy you’d like to pull from, and it can autofill from different hierarchies. Great question.

McKay Kelly | 21:45

Okay, great. Well, thank you all for joining today. I hope this was helpful for you. If you have any questions about some of the more nitty-gritty aspects of getting into the directory, the hierarchy, or rater groups, please feel free to reach out to us. We’re always happy to answer questions. Thank you all for joining.