
5 Core Best Practices for 360 Surveys

Authored by: Robert Eichinger and Lisa-Marie Hanson

Most organizations use 360-degree feedback primarily for personal, managerial, and leadership development. It typically contributes to the creation of an Individual Development Plan (IDP), which forms part of an individual’s career growth journey.

We regularly receive client inquiries about how best to implement 360 initiatives. Often, clients without a clear point of view on 360 best practices request approaches that, while well-intentioned, can ultimately undermine their program and the participant experience, particularly the psychological safety of the focal subjects in a 360 process.

Let’s take a moment to explore some common client requests and the best practices for a successful 360-feedback program.

Core Best Practice 1

360 isn’t a quick fix for unresolved performance and career issues. Who should get 360s?

We regularly get the call: “I need one 360. So-and-so needs 360 feedback!” Smells like an intervention to fix a problem, right? Sometimes clients realize a bit too late that a single person needs some feedback and self-awareness, and they think a 360 assessment is the quick way to go. Unfortunately, singling out one person for a 360, especially if everyone finds them troublesome, attaches a stigma to what could otherwise be a powerful leadership development tool for everyone. Using the 360 on an underwhelming employee can discourage others from wanting to go through the process themselves.

Best Practice: Make 360 the gift that keeps on giving. Invest your 360 efforts in present and future talent – career-minded people – to provide them feedback on things that matter toward self-improvement and job and career success. Make 360s available to many.

Resist the 360 band-aid.

Core Best Practice 2

360 is meant for development

Most agree that 360 feedback from a range of individuals (bosses, peers, direct reports, customers, etc.) could be valuable for various organizational purposes, including performance management, but using it that way is not a best practice. Because performance management is often tied to compensation, tying 360s to it likely compromises the accuracy of the feedback. Most recommend keeping these processes separate and clearly defining the true purpose of a 360: developmental feedback.

Best Practice: Treat the 360 process as a development experience that invests in people’s performance and growth.

Core Best Practice 3

Use a research-based survey measure

Using home-grown measures may seem like a good solution, but it’s a trap many organizations fall into when trying too hard to simplify, or to “make it our own”. Writing items and scale descriptions isn’t as easy as one may assume, and organizations often don’t realize until after the fact that the resulting rater data isn’t well-differentiated or even helpful.

Unless your 360 process measures critical Knowledge, Skills, and Attributes (KSAs) accurately, nothing else matters. The essential building blocks of success at each career level are already known. Use a comprehensive competency model that covers full career growth. Rely on a science-based, multi-level library that leads to meaningful discussions at each level and provides a well-rounded view of success.

Best Practice:  Leverage a research-based model to drive meaningful growth.

Core Best Practice 4

Who owns the results?

This area of practice has been controversial. The organization pays for the process, which includes the survey and the facilitated feedback. The boss is interested in getting a copy of the results. HR and Talent Management (TM) could use the results as well. Team members could benefit from having everyone’s results. BUT…

Here’s the settled science: when 360 results are known to be shared with anyone other than the person being developed, accuracy declines. Research shows that when raters believe, correctly or not, that their feedback will be seen by the boss, HR, or anyone else, ratings tend to skew higher, the spread narrows, and the results become less accurate and less useful.

These findings complicate the issue of who facilitates the feedback. There are internal and external options. Internal feedback can be problematic. HR or TM professionals must maintain a titanium barrier between what they learn in the feedback process and what they can share with others in the organization (which is nothing). Nothing. That takes heavy-duty professional and personal discipline. One slip of prohibited information sharing can chill the process for everyone in the organization for a long time. Very senior managers and leaders ask for information. They may even say, “We paid for the process, so we own the results.” Consulting engagements have been ended when an external facilitator refuses to share. External facilitators and coaches need to be disciplined and firm.

Best Practice: The learner owns the data and controls their 360 results, with the ability to share them at their discretion.

Core Best Practice 5

Who should select the raters?

There’s often a lot of debate about how raters should be selected. Should individuals pick their own raters? What if they only choose friends who they believe will give them positive feedback? Or should the organization pick the raters for them?  Wouldn’t that lead to more meaningful results?

At one point in the past, we were proponents of organizations selecting the raters to get better information. Sometimes the boss would choose. At other times HR or Talent Management would step in, with the goal of picking people who knew the learner well and would have the courage to provide honest feedback, even when that meant selecting a 1 or 2 on a 5-point scale. This approach was intended to reduce halo bias (the tendency for a favorable overall impression to inflate ratings) and lead to more meaningful post-feedback discussions.

We later learned that when the target person didn’t have a hand in selecting their raters, the coaching sessions often became filled with defensiveness, blame, and pushback. Learners would say things like, “These raters don’t know me well enough,” or “These were the wrong raters.” They’d even complain, “My boss doesn’t like me and picked my enemies to rate me.”  You can see where this line of reasoning goes downhill quickly.  Step one in development is accurate self-awareness. If the client doesn’t accept the results, no progress can occur.

Twofold Best Practice:

1: Let the learner choose their raters!  Learners select their own raters from key groups (boss, direct reports, peers, customers, etc.). They are told that these should be people who know them well. Beyond that, they can select anyone they want. But, you say, wouldn’t that lead to higher scores if only your best friends were selected as raters? Yes, but here is the secret:

Over time, we noticed that “best friends” give a 5 on, say, presentation skills, neutral raters give it a 4, and raters who may not have a high opinion of you give it a 3. The absolute levels differ, but all raters give that skill their highest numbers on the typical 5-point scale. In simple terms: letting learners select their own raters WILL make a difference in scale scores, but it WILL NOT make a significant difference in relative scores. Relative scores are created by using a forced, or flat, distribution across all 5 rating categories.

If you want acceptance of the results to come easier, have focal subjects select their own raters, but then you must use relative ranking instead of 5-point ratings.

2: Shift away from raw score obsession. To tackle score inflation (the temptation to overrate), move away from the typical 5-point Likert scales (where you can get inflated ratings like all 4s and 5s) and use a Relative (ranking) Comparison method in its place. This approach compares different competencies/KSAs against one another, identifying the learner’s highest, middle, and lowest competencies. This method reduces inflation by ensuring that every thoughtful leader’s feedback reflects a mix of relative strengths and potential areas for development. Everyone who goes through the process has the same number of Highest and Lowest competencies. Whether any Lowest skill should be part of the development plan is determined during the feedback discussion.
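To make the contrast between raw ratings and relative ranking concrete, here is a minimal sketch in Python. The competency names and rater scores are hypothetical, and real 360 platforms implement this differently; the point is simply that ranking competencies against one another and forcing a flat Highest/Middle/Lowest distribution gives every learner the same number of relative strengths and development areas, even when raters inflate their absolute scores.

```python
# Hypothetical rater scores per competency on a 5-point Likert scale.
# Note the inflation: almost everything is a 3 or above.
scores = {
    "Presentation skills": [5, 5, 4],
    "Strategic thinking":  [4, 4, 4],
    "Delegation":          [3, 4, 3],
    "Listening":           [3, 3, 3],
    "Planning":            [5, 4, 4],
    "Coaching others":     [2, 3, 3],
}

def relative_buckets(scores):
    """Rank competencies by mean rating, then force a flat distribution
    into Highest / Middle / Lowest thirds (a relative comparison)."""
    means = {name: sum(ratings) / len(ratings) for name, ratings in scores.items()}
    ranked = sorted(means, key=means.get, reverse=True)
    third = len(ranked) // 3
    return {
        "Highest": ranked[:third],
        "Middle": ranked[third:-third],
        "Lowest": ranked[-third:],
    }

buckets = relative_buckets(scores)
for label, competencies in buckets.items():
    print(label, competencies)
```

Even if a lenient rater shifted every score up by a point, the ordering, and therefore the Highest/Middle/Lowest buckets, would be unchanged, which is why self-selected raters matter less under relative ranking.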

360 feedback, done right, fuels growth and meaningful development. By focusing on best practices, you can build a process that makes a real difference and produces meaningful results you can be proud of.

References:

Antonioni, D., & Park, H. (2001). The relationship between rater affect and three sources of 360-degree feedback ratings. Journal of Management, 27(4), 479–495. https://doi.org/10.1177/014920630102700405

Church, A. H., Bracken, D. W., Fleenor, J. H., & Rose, D. S. (Eds.). (2019). The handbook of strategic 360 feedback. Oxford University Press. https://doi.org/10.1093/oso/9780190879860.001.0001

Eichinger, R. W., & Lombardo, M. M. (2004). Patterns of rater accuracy in 360-degree feedback. Human Resource Planning, 27(4), 23+.

Fleenor, J. W., Taylor, S., & Chappelow, C. (2008). Leveraging the impact of 360-degree feedback. San Francisco: Pfeiffer.

Lombardo, M. M., & Eichinger, R. W. (2004). The leadership machine. Minneapolis, MN: Lominger.

Schedule a Consultation

Schedule a consultation directly with our team to learn how our 360 survey can be used in your organization.