Category Archives: Performance Measurement

Performance measurement and review issues

Help Me, Help My Boss

Excellent Performance Review

Recently, a good friend of mine told me the story of a sick performance review and reward system. I mean “sick” as ill, broken, maybe even sickening — not “sick” as the youth today refer to something really cool.

The system bases rewards on the performance feedback from those around the person. This has been in vogue as “360 degree feedback” and other names in the past. In this case, it is actually codified in the union contract under which these people suffer.

But while rooted in a concept that seems to make sense (those closest to you are the best judges of your performance), it completely disregards politics, personalities, and fundamental laws of human nature. I’ve said it before, and I’ll say it again: define a system that affects people’s pay and they will find a way to game the system. Period. No exceptions.

Define a system that affects people’s pay and they will find a way to game the system.

In this case, the supervisor gets a significant bonus (>5%) if the people in their organization give them a 95% or better rating. The interesting point is that it doesn’t matter what goes into the definition of a 95% rating; the system is doomed to failure regardless. Because it relies on people to rate their boss (or their peer, or even their subordinates), it becomes little more than a beauty contest.

This system is no different than American Idol where people can choose to keep voting for a talentless kid just because he’s cute. People will find all kinds of reasons to rate people one way or the other. Especially when money is at stake.

In the case of my friend, soon there was a buzz in the office: “Rate him great, the last thing we want is for him to not get the bonus. He’ll be unbearable to live with if he doesn’t get it.” That’s probably not what the designers of the system had in mind. And remind me again why I should be pulling for my boss to get a big bonus, when all I got was a 2.1% “cost of living” increase (which he also got)?

Some people are naturally hard graders and others are naturally sycophants

But even if you somehow persuade people to treat the system seriously and give ratings that they truly believe in, you’re still stuck with the fact that some people are naturally hard graders, and others are naturally sycophants. I’ve yet to find a system that can account for individual grading “curves”.

I also don’t know how you account for the fact that some people just woke up on the wrong side of the bed the day the review was solicited. In fact, in the case at hand, one employee who was misbehaving was explicitly not disciplined until after the review was turned in, just so as not to spoil the manager’s score and bonus.

Of course, any good pollster will tell you that you can ignore these anomalies thanks to the “law of large numbers”. Any suitably large group will self-correct for these kinds of effects and you can basically ignore them (“the poll has an error range of +/- x%”).

But that’s exactly the problem here. Pollsters talk to 500, 1000, or more people. Your manager is unlikely to have more than a dozen or so people filling out their survey. One or two people swinging drastically bad (or good) can spoil the accuracy of the whole system.

One or two people swinging drastically bad (or good) can spoil the accuracy of the whole system.
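The arithmetic here is easy to check. A quick sketch (all numbers hypothetical: the 95% bonus threshold comes from the story above, the individual scores are made up):

```python
def team_average(n_raters, honest_score, n_outliers, outlier_score):
    """Mean rating when most raters give honest_score and a few give outlier_score."""
    total = (n_raters - n_outliers) * honest_score + n_outliers * outlier_score
    return total / n_raters

# A 12-person team where 11 people rate the boss 98% and one disgruntled
# rater gives 40%: the average drops below the 95% bonus threshold.
small = team_average(12, 98, 1, 40)    # ~93.2 -- bonus lost to a single rater

# The same single outlier in a pollster-sized sample barely registers.
large = team_average(1000, 98, 1, 40)  # ~97.9 -- threshold comfortably met

print(f"12 raters: {small:.1f}%, 1000 raters: {large:.1f}%")
```

With a dozen raters, one person controls roughly eight percentage points of the result; with a thousand, well under one tenth of a point.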

And there is an even more insidious problem. Like most polls, this system gathers “classification data” — gender, age, ethnicity, and so on. The goal is clearly to be able to tell the manager “your African-American staff hates you”, and other such useful feedback. That’s all good, no?

But what if you are the only Asian female in the group? “Asian females think you are a jerk” might well be your pink-slip ticket out of there.
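For what it’s worth, survey tools commonly guard against exactly this by suppressing any demographic slice with fewer than some minimum number of respondents. A minimal sketch of that idea (the data, field names, and threshold of 5 are all hypothetical, not taken from the system described above):

```python
MIN_GROUP_SIZE = 5  # below this, a "group average" effectively names individuals

def safe_breakdowns(responses, key):
    """Per-group average scores, suppressing groups too small to stay anonymous."""
    groups = {}
    for r in responses:
        groups.setdefault(r[key], []).append(r["score"])
    return {g: (sum(s) / len(s)) if len(s) >= MIN_GROUP_SIZE else None
            for g, s in groups.items()}

# Hypothetical survey: one respondent is the only member of her demographic cell.
responses = [{"gender": "F", "score": 40}]
responses += [{"gender": "M", "score": 90} for _ in range(11)]

print(safe_breakdowns(responses, "gender"))
# The lone "F" cell comes back as None (suppressed), so her rating can't be
# traced back to her; the "M" average (90.0) is still reported.
```

The threshold is a policy choice, but without something like it, a demographic breakdown in a twelve-person office is a name tag, not a statistic.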

All in all, these systems are a bust. Yes, I think some good, constructive feedback from those around you can help people become better managers and employees. And I think there is value in knowing how well you are doing, in the eyes of those around you, especially for the criminally un-self-aware.

But you certainly shouldn’t rely on these 360 feedback mechanisms for any kind of vital decisions. And for most folks, there’s little about work that’s more vital than money.

Crime of the Perfect Review

I just got forwarded an amazing thing: a perfect performance review. Actually, if you’re a manager or, especially, a manager of managers, you’ve probably seen more than one of these in your career. The performance review with nothing but the highest possible scores, and not a word of anything that even remotely sounds like criticism. These reviews are a crime, a lie, and, most importantly, a missed opportunity.

Perfect Angel
The Mythical Perfect Employee

This perfect review is a crime, because top people are your most valuable resource. As I said in my post on Microsoft’s recent performance review changes, you should spend at least as much time and effort on nurturing and aiding your top employees as you do in cleaning out the bottom ones.

It has been said many times that top employees aren’t simply better, or twice as good as your average employee, but as much as ten times more productive. They deeply understand the mission, handle things without constant supervision, take on new parts of the challenge without being asked (or prodded), and they get it done more efficiently and with better work quality.

If you are lucky enough to have these people, they deserve all the love and kindness, and all the help to grow, that you can afford. Giving them a perfect review, especially to people who know they are good, is likely to get a “yeah, great, fine, whatever…” response. That is a crime of missed opportunity.

Here you sit, with a clear star of an employee, they are doing great work, all that you can throw at them, and they want more. You have your semi-annual review and they come into your office for their feedback, knowing they’ve done well. Yes, of course, you need to tell them, in no uncertain terms, that they did great work. You need to clearly say (out loud, to their face, and even if you both already know it) what they’ve done, how wonderfully they’ve done it, and how much you appreciate that hard work. They, and you, really need to make sure this gets said and that they very much feel appreciated. It’s simply never said enough, and can’t be said too much.

Give a superstar nowhere to go in your organization, and they will go elsewhere to find it

And then you need to say “but…”. Yes, you really have to say the “but…”. You need to point out a couple of areas where they need to work a little harder or somewhat differently. Perfection is impossible, and everyone has something they can do better. Perhaps they need to play better with the other children, perhaps they need to spend less time in the break room, perhaps they need to go home every now and then, for gosh sakes. Whatever it is, you need to give these superstars some place to go. Everyone needs a goal, everyone needs something to shoot for. Give a superstar nowhere to go in your organization, and they will go elsewhere to find it. And that’s the last thing you want.

When I’ve told people this before, they tell me: “but I have to give Fred a perfect review, or I can’t get them the [raise, promotion, bonus] they truly deserve.” Horse-hockey. Your senior managers are almost certainly not idiots and they realize that everyone has somewhere to go. They will in fact look down on you for giving this review with such lame feedback. And if your system doesn’t allow people to get a [raise, promotion, bonus] without a flawless review, the system is broken, and you, as a member of the management team, have an obligation to work toward its repair. Start by giving an obvious superstar meaningful review feedback and also that [raise, promotion, bonus].

The perfect performance review is a crime, a crime of missed opportunity. Those who commit it deserve to be punished, or at least to have their perfect employee promoted above them…

Microsoft Changes Performance Review Scoring

Microsoft Logo

Microsoft just announced a number of changes in personnel policies designed to improve sagging morale. Good for them. It’s the result of a year of work by a longtime colleague of mine and now SVP of HR, Lisa Brummel. According to the reports, she spent a year listening to people and came up with a range of changes designed to stem the tide of people leaving.

Most news reports have focused on things like putting towels back in the locker rooms and giving senior people more stock, which are all well and good. But one point that was overlooked and seems intriguing is the changes they made to the scoring of performance reviews, a personal hot button of mine.

One point that was overlooked is the scoring of performance reviews

For decades Microsoft has done performance reviews with a rating system graded from 0 to 5 in 0.5-point increments (in a rather silly attempt to avoid the look of a “beauty contest”). The ratings generally went like this:

  • 5.0 – You walked on water, then turned the water into a nice Merlot. Almost impossible to reach, given to maybe one person a year, I saw perhaps 5 in my career there. Used to mean you would get a surprise 1-1 visit from BillG in your office.
  • 4.5 – Outstanding work, really above and beyond the call. Used to mean something like 100 hour weeks, and with amazing results. Hard to get more than one of these without a promotion. Very small percentage of people: < 1%.
  • 4.0 – Great work, excellent results, clearly leading the pack. Something like 10-15% of the people would get this score.
  • 3.5 – Solid work, well done, everything is fine. Most people (e.g. 70+%) would get this score.
  • 3.0 – You have a number of things to work on, some of them threatening to your livelihood. You must improve or you are at risk. This was managed by HR to be about 10% of the team. There was always pressure for managers to give someone a 3.0, although the 10% was never rock-solid. But come on, SOMEONE on that team isn’t doing everything perfectly. If you got a string of 3.0s, you were in trouble.
  • 2.5 – This is the first step before the exit. If you get a 2.5 and don’t get fired, it means you got the message. If you get a 2.5, you had better either have an exit plan, or be working your butt off to save your job.
  • 2.0 – Security is waiting outside my door to take your badge and help you pack.
  • 0 – 1.5 were unused.

This system worked fine for years (e.g. the last 25 years), but was always a source of complaints. Some people didn’t like the subjective nature of reviews (come on, performance reviews are subjective; that’s why you do them). Some people didn’t like being rated like cuts of beef (oh, get over it, you’re rated every day by your salary, by your peers…). But the biggest point of pain seemed to be the requirement for managers to give a reasonable percentage of people a 3.0 or lower.

This can be seen as a Jack Welch-style “toss out the bottom 10%”, but in fact it just stemmed grade inflation. And reasonably speaking, the world is not Lake Wobegon, where everyone is above average. Some people in every group need to improve. So Microsoft required groups of more than just a few to have something like 10% rated 3.0 or below. This is just reasonable.

Now, in this new system they have gone away from numbers and gone to words. As I understand it, there are now three categories: “Exceptional”, “Strong”, and “Needs Improvement”. Seems to me that is the same as 4.0, 3.5, and 3.0 — but nobody asked me.

My concern is where it should be: at the top

More importantly, however, this has made a good system worse. Not at the bottom of the scale, where it simply replaced a number with a name — they still need to worry about grade inflation, and there will still be groups that get told “oh, come on, you have to have at least some ‘needs improvements’.” No, my concern is where it should be: at the top.

Rewarding good performance is at least as important as correcting poor performance. And now Microsoft has lumped all good performers together. No longer will the true stars stand out from the really hard workers. No longer will people who achieve “Exceptional” (aka “better than average”) have motivation to strive for more.

And why did they change this? Because people at the bottom were offended. Ouch. Seems like a big mistake to me. I’m a huge believer that the best performers aren’t just better than average, they are 10 times better than average. You need to worry more about those people than anyone else. This seems like a move in the wrong direction.

What time of year is best for reviews?

Set the time for reviews to be off-schedule from your main business processes, especially budgeting. The last thing you want is for your team to be overwhelmed with non-line-of-business work, or to give either budgeting or reviews short shrift. Neither your business nor your employees will appreciate that.

For example, if you are on a calendar year schedule and you typically budget in the fall, do your big review right after the first of the year, with a “mini-review” in the summer. (See my FAQ on review frequency.) A typical schedule would be to set the reviews to be written and delivered in late January (due January 31), and then for the checkup review to be due by July 31st.

This approach lets you budget for the pay increases that will happen right after the first of the year. You will also have lightweight reviews in the middle of the cycle, in time for managers to give their best guess as to necessary pay increases for the next budget season. See, there is a rhyme and reason to these things…

How often should we do reviews?

Immediately after an organization decides to do performance reviews, the first question it faces is: how often should we do them? There is very real tension between providing enough feedback to the employees and making painful busy work for the management team. Fortunately, there is a good compromise that many people choose.

I recommend that you do performance reviews on a twice-a-year cycle that includes a comprehensive annual review with a mid-year “checkup” review. This approach has several important advantages:

I recommend that you do performance reviews on a twice-a-year cycle

  • Employees get feedback more than once a year, which is a real benefit. Most managers simply never give enough feedback, and this kind of system forces it to happen.
  • The burden on the team to go through the process is lessened by the lighter-weight mid-year reviews.
  • You can offer pay increases and other rewards more than once a year. (I firmly believe in tying pay changes to the performance review.)
  • Practice makes perfect – if you only do reviews once a year, people get out of practice. By doing them more frequently, managers get better at delivering both good and bad messages, and employees get used to getting feedback.

Reviews don’t have to be that hard. If you would like to see examples of what I think performance reviews should look like, see the performance review whitepaper elsewhere on the site.