
steer May 23 2014, 14:37:53 UTC
Actually I'm assuming quite the opposite, that all teams have different percentages of people in different boxes and that managers have freedom to say that 100% of their staff are in the top 5% or bottom 5% if they believe that to be the case. I presumed that was the intent of the document (and that people discussing it later were misinterpreting).

Of course it is hard to know which is the true intent of the document because we only have a half-complete report of it.

The stack rank is the opposite of this.

Not putting a percentage on each grade risks managers interpreting it differently and, of course, you still might feel you're competing with your co-workers (like going on the pull with someone ugly so you're the pretty one).

Of course you could make the whole system wishy-washy, opaque and informal -- which is less controversial and creates less anxiety but is open to abuse. In the end any formal system of ranking people and coming to a conclusion about whether they are good or bad and using that to give them more attention or money will always be fraught -- people are a problem.

When I review papers for conferences I get a batch of (usually) between one and six papers to look at. Every conference asks you to give each paper a grade (usually 1-5) on how good it is. Some conferences just let you get on with it (I implicitly assume equal weightings for each grade). Others explicitly say something like:
1) Top 5% of papers
2) Top 10% but not top 5%
...
5) Bottom 50%.

It is not expected that my reviews follow that statistical distribution; I am supposed to infer the "goodness" of the papers I have not seen from my knowledge of the quality of that conference.

I prefer the second system as it gives me an indication of how "rare" it should be to put a paper in the very best and very worst boxes. When a conference only accepts 15% of papers (as is common in my business) then it's not useful to know only that the reviewer thinks a paper is in the top 20%, as anything below the top 15% is not going in anyway.
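
To make that second system concrete, here's a toy sketch in Python (the grade 1, 2 and 5 boundaries are from the list above; the middle two are my invention):

# Toy sketch: percentile-bucketed review grades.
# Buckets 1, 2 and 5 follow the conference's wording; 3 and 4 are invented.
def grade_from_percentile(pct_from_top):
    """pct_from_top: my estimate of where the paper sits, e.g. 0.04 = top 4%."""
    boundaries = [(0.05, 1), (0.10, 2), (0.25, 3), (0.50, 4)]
    for cutoff, grade in boundaries:
        if pct_from_top <= cutoff:
            return grade
    return 5  # bottom 50%

print(grade_from_percentile(0.04))  # -> 1, a paper I judge to be top 5%
print(grade_from_percentile(0.30))  # -> 4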

Reply

andrewducker May 23 2014, 14:45:54 UTC
I presumed that was the intent of the document (and that people discussing it later were misinterpreting)

I'm confused. On what basis did you assume that was the intent of the document? If the intent of the document was to say that all groups would have varying numbers of people in different buckets then surely it would say that, and not give numbers?

As far as grading papers is concerned, you're doing it for a very different reason - in order to find the small percentage that are worth taking up people's time with. You only have so much space at the conference, and therefore only the best ones should be shown. Whereas you may well have an organisation full of awesome people, each of whom should be rewarded based on their own, individual, contribution - whether they're contributing more than others is pretty much irrelevant. If I bring in £50,000 worth of value to the company through my work then I should be rewarded for _that_, whether Bob in the next cubicle has brought in £30,000 or £300,000 worth of value isn't a contributing factor there at all.

Reply

steer May 23 2014, 19:20:02 UTC
I think maybe we're at cross purposes here. We don't have the full document so we have to guess what they meant. So saying someone is a top 5% employee could mean:
1) They are the best 5% of people at that job out of all the people in the world.
2) They are the best 5% of people at that job out of all the people in the company
or
3) They are the best 5% of people at that job out of all the people that manager has graded.

1 is obviously daft. I mean you hire them because they have that skill. Almost everyone in your company will be in the top 5% of people in the world at that job. I'm in the top 5% of C coders because 95% of people can't code C. (Numbers approximate).
3 is obviously daft because managers will not typically have enough staff for the fine-grained split down to that level (you need 20 reports to make the 5% level meaningful) and abilities will vary between teams. 2 is the only interpretation that makes any kind of sense and is at all useful as an outcome.

If the intent of the document was to say that all groups would have varying numbers of people in different buckets then surely it would say that, and not give numbers?

No.. you absolutely need the numbers. Otherwise how do you make the judgement? If I say to you "Andrew, you just watched that film, would you rate it 1, 2, 3, 4 or 5?" you need some kind of basis for that if the mark is to be a useful quality measure of anything more than "Andrew's opinion". It is very different if I tell you 1 means it's one of the best 100 films ever made, or 1 means it's in the top 5% of all films, or 1 means it's in the top 20% of all films. The point is that you actually need the numbers. If you don't put the numbers in, people will arbitrarily use numbers they come to in their heads, and what you have is a random element in the process which comes up with a different answer for every person doing the grading.

Without assigning sizes to the buckets, all I will get is the relative ranking of your preferences between films. I can't compare it with anyone else's preferences (because they may be doing it in a completely different way) and hence I can't compare in a good way between different people doing the ranking... which is the very point. [This example doesn't quite work because multiple people would rank the same films, so you can see if someone is a "harsh" judge giving mainly 4s and 5s, or is easily pleased, giving mainly 1s and 2s.]

Let's imagine that you don't assign weights to the boxes and you just let managers pick. So manager A rates all their staff 1 and 2 and manager B rates all their staff 3. Are the staff of manager A better than the staff of manager B? You have no way to judge it. Maybe manager A has excellent staff. On the other hand maybe manager A just thinks that any rating 3 or lower is an insult. [Consider computer game reviews which mark on a scale of 100% to 60% -- or 10-6 depending on website -- and anything below 60% is a complete panning and the reviewer will be getting a phone call from the publisher. If a new reviewer started to review on a 1-10 scale equally weighted it would actually be confusing because a really quite good game could get a 7 or 8 which is right now a very lukewarm mark.]

I assume the purpose of the grading is that those in the top and bottom grades will be treated differently. Perhaps you want to focus limited opportunities for promotion or training on the top. Perhaps you want to fire, or focus limited opportunities for retraining on, the bottom. The point being, it's rational to want to identify, by the best process you can, the best and worst people across your company. Now the question becomes how you do that.

You could have some fuzzy kind of "manager X thinks person Y is pretty good" -- but actually, in the end, you do want to split out a certain proportion of your workforce to fast-track for promotion, a certain proportion to drop or retrain, and a certain proportion (probably the majority) for business as usual.
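
As a minimal sketch of that split, assuming (and this is the hard part) that everyone already has a single comparable score:

# Sketch: split a company-wide ranking into fixed proportions.
# Assumes one comparable score per employee, which is the hard part.
def split_workforce(scores, top_frac=0.05, bottom_frac=0.05):
    """scores: dict of name -> score. Returns (fast_track, usual, retrain)."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    n_top = max(1, round(len(ranked) * top_frac))
    n_bot = max(1, round(len(ranked) * bottom_frac))
    return ranked[:n_top], ranked[n_top:len(ranked) - n_bot], ranked[-n_bot:]

With 100 employees this always yields 5 people to fast-track and 5 to retrain, however generous or harsh individual managers feel.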

Reply

andrewducker May 23 2014, 20:33:38 UTC
No.. you absolutely need the numbers. Otherwise how do you make the judgement?

Based on job criteria? Are people achieving their goals? What is the value they are bringing to the company?

That's certainly how the _good_ managers I've worked with have done it. Based on individual merit - against criteria which are clearly written - with calibration against other managers to make sure that the different ones aren't grading massively out of whack against each other.

I don't see how the achievements of anyone else in the company have anything to do with "Am I providing good value for my current pay?" and "Do they need to pay me more to stop me leaving?"

Reply

steer May 23 2014, 20:52:32 UTC
Based on job criteria? Are people achieving their goals? What is the value they are bringing to the company?

That is how you would rate their performance -- but how do you then map that onto a 1-5 scale?

Reply

andrewducker May 23 2014, 20:58:34 UTC
1 - So bad at their job that they need to be on an improvement plan right now or we're going to have to get rid of them.
2 - Not good at their job. Failing to achieve some of their criteria.
3 - Achieving all of their criteria.
4 - Achieving all of their criteria, overachieving at some of them.
5 - Overachieving all of their criteria.

(With bonus points for doing things that aren't anything to do with their criteria but make a positive difference to the company, and negative points for being a dick and making a negative difference to the company.)

(Except, y'know, in better English.)
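
Or, as a rough sketch in code (the criterion labels, and exactly where the line between 1 and 2 falls, are my invention):

# Sketch: map criteria results to a 1-5 grade.
# Each criterion result is one of "failing", "met" or "overachieved".
def grade(results):
    if all(r == "failing" for r in results):
        return 1  # improvement-plan territory (my cut between 1 and 2)
    if any(r == "failing" for r in results):
        return 2
    if all(r == "overachieved" for r in results):
        return 5
    if any(r == "overachieved" for r in results):
        return 4
    return 3  # achieving all criteria

print(grade(["met", "overachieved", "met"]))  # -> 4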

Reply

steer May 23 2014, 21:01:12 UTC
OK... so you've got that scale. Now what is "overachieving"? If you mark on your scale, maybe manager Bob is a bit Lake Wobegon and believes all the children are above average, and thinks everyone is overachieving. On the other hand maybe manager Alf has higher expectations and thinks nobody is overachieving. Maybe manager Bob sets really easy criteria... you need something under this saying approximately how hard it should be to get a "5", or it's completely arbitrary. (I accept it will always be slightly arbitrary, but without some kind of guideline the poor managers actually have no chance of being fair even if they want to be fair.)

Reply

andrewducker May 23 2014, 21:14:37 UTC
Well, you also need role profiles to go with it, defining what you're supposed to achieve. And then you _do_, I agree, need cross-calibration, which is where HR comes in: getting the managers to spend some time making sure that when Bob says "AMAZING" he means something similar to Simone.

But in-team ranking bumps into massive problems in exactly that situation, where one team under one manager does have lots of good people working on the latest cool stuff, and another team under another manager has a bunch of less good people who are slowly working away at something less important.

I don't think there's a perfect way of doing this - but the grading-on-a-curve method is one that seems to upset the most people, the most often.

Reply

steer May 23 2014, 21:21:10 UTC
And then you _do_, I agree, need cross-calibration, which is where HR comes in: getting the managers to spend some time making sure that when Bob says "AMAZING" he means something similar to Simone.

Yes -- and the most straightforward way to calibrate this would be to say something like "5% of people in the company as a whole are AMAZING" or "20% are DAMN GOOD".
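
Concretely (a sketch: the 5% and 20% figures are from above, everything else is invented), you'd compute the cutoffs once over the whole company:

# Sketch: turn "5% are AMAZING, 20% are DAMN GOOD" into score cutoffs
# computed company-wide (cumulative from the top), so every manager
# uses the same bar.
def label_cutoffs(all_scores, bands=((0.05, "AMAZING"), (0.20, "DAMN GOOD"))):
    ranked = sorted(all_scores, reverse=True)
    cutoffs = []
    for frac, label in bands:
        idx = max(0, int(len(ranked) * frac) - 1)
        cutoffs.append((label, ranked[idx]))
    return cutoffs

print(label_cutoffs(range(100)))  # -> [('AMAZING', 95), ('DAMN GOOD', 80)]

Then "AMAZING" means "above the company-wide 95th percentile", whichever manager says it.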

where one team under one manager does have lots of good people working on the latest cool stuff, and another team under another manager has a bunch of less good people who are slowly working away at something less important.

Yes... it's always going to be difficult. How do you compare slow and steady with fitfully brilliant?

I don't think there's a perfect way of doing this - but the grading-on-a-curve method is one that seems to upset the most people, the most often.

That's what I was trying to get at right back at the start when I said ".. except when you take in the human aspect and people worrying that they won't be granted the best grade because that would put them over the magic barrier."

Something like what they propose is pretty much the best lightweight method to try to get some kind of cross-manager consistency in that type of grading... except it will drive people up the wall.

I think (as is often the case) we seem to agree on a lot of this -- I'm just not expressing it terribly well.

Reply

andrewducker May 23 2014, 22:06:45 UTC
Yes -- and the most straightforward way to calibrate this would be to say something like "5% of people in the company as a whole are AMAZING" or "20% are DAMN GOOD".

I'm going to need some justification for that. Because I can't see how you get to this conclusion.

What if 60% of the people in the company are amazing? What if it's only 1%?

Reply

steer May 23 2014, 22:19:23 UTC
I am assuming the aim is to try to get an assessment that is fair between managers, so that Dave saying "amazing" is, as near as it can be, equal to Amy saying it. Without some calibration the rating of "amazing" is meaningless, so the concept of 1% or 60% of employees being amazing is not useful; and if you ranked 60% of your employees as amazing then nobody would be amazed. (The current Ofsted rankings are such that, iirc, "average" is bad and "satisfactory" is practically a resigning matter... The point is that the word meanings are not attached to the ranks... "average" is way below average and "satisfactory" is not satisfactory.)

The simplest good way is for them to compare against a pool of talent they both have a chance of assessing - the rest of the company. This provides a way of calibrating the word "amazing". Without some calibration of this form you are rating how easily amazed Amy and Bob are, rather than how relatively good their employees are.

You could come up with a few more roundabout ways to do it, but in a large, long-established company your talent pool is pretty fixed, so you are comparing against essentially a constant.

It should be clear that this works only with large companies. With smaller companies the talent base could change massively in a year.

Reply

I need to know what you think of...something odd Julie just said andrewducker May 23 2014, 22:32:07 UTC
With large companies you have massively variant talent bases working in very different areas, with largely divergent skill-sets.

And none of this gets away from the morale-destroying effects of grading people on a curve, which causes in-fighting and kills productivity.

Personally, I don't think that grading people 1-5 in the first place is a good idea - that's the rot setting in, because managers somewhere think that in order to manage something you need to quantify it (preferably numerically), and it's all downhill from there.

Reply

Re: I need to know what you think of...something odd Julie just said steer May 23 2014, 23:02:29 UTC
That is all undeniably true. I think I said elsewhere that it is probably actually better, from a personnel point of view, to have a less fair system that does a worse job of identifying talent, as it will cause less resentment. So a murky, unclear system that is open to abuse and worse for employees in one sense (less fair, less accurate) is probably better for employees in another (less damage to morale). So given you are dealing with people, sticking with woolly words and giving promotions and bonuses in an opaque way will probably make them happier. People are a problem.

Reply

Re: I need to know what you think of...something odd Julie just said andrewducker May 24 2014, 09:02:58 UTC
They certainly are. If they would just fit into nice neat boxes life would be so much easier!

Reply

steer May 23 2014, 19:20:09 UTC
If I bring in £50,000 worth of value to the company through my work then I should be rewarded for _that_, whether Bob in the next cubicle has brought in £30,000 or £300,000 worth of value isn't a contributing factor there at all.

If that's your metric then I sort of agree and sort of don't... so for the sake of argument let's assume that income generated is a perfect metric for the business in question (which would not normally be the case, but let's make things simple). Absolutely what you want to identify is the top 5% of income generators and the bottom 5% of income generators. You certainly do not want to identify the top 5% of income generators who happen to be working for Alf and the top 5% of income generators who are working for Bob. You really don't want to identify all the people who generate income over an arbitrarily chosen level X, because then you might find you get 60% of employees in a good year and 0% of employees in a bad year, and the exercise becomes meaningless: you would end up either with lots of people in your "promote and train" box that you can't promote and train, or with nobody in that box, an unspent training budget, and nobody promoted.
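
A toy sketch of why the fixed level X breaks (income figures invented, in £k):

# Toy sketch: absolute threshold vs fixed proportion (figures invented).
good_year = [120, 110, 95, 90, 80, 75, 70, 45, 40, 30]  # income per head, £k
bad_year = [45, 40, 38, 35, 30, 28, 25, 22, 20, 18]

X = 50  # arbitrarily chosen absolute level
print(sum(1 for v in good_year if v > X))  # 7 -> 70% qualify
print(sum(1 for v in bad_year if v > X))   # 0 -> nobody qualifies

# A fixed proportion is stable by construction:
print(sorted(good_year, reverse=True)[:len(good_year) // 10])  # always 1 in 10

The threshold sweeps between "most of the company" and "nobody" as the market moves; the proportion stays the size of your training budget.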

As far as grading papers is concerned, you're doing it for a very different reason - in order to find the small percentage that are worth taking up people's time with. You only have so much space at the conference, and therefore only the best ones should be shown.

I'm doing it for a very similar reason. There is only so much space for promotion and training, there is only so much budget for pay increases, there is only so much leeway for firing or retraining underperforming employees. A process which identifies 25% of employees in the top and bottom grades when you wanted to get the top and bottom 5% is pretty worthless.

Reply

andrewducker May 23 2014, 20:36:38 UTC
You certainly do not want to identify the top 5% of income generators who happen to be working for Alf and the top 5% of income generators who are working for Bob.

Absolutely. That would be insane. But I never suggested that.

And you promote people because you have a different role that requires people with their skills. And you train them because you have a job that needs doing that they could do if you trained them. Which has _something_ to do with their current role, but not necessarily vast amounts.

I've known great developers who shouldn't be promoted to lead developer, because they'd be rubbish at it. I've known people who were terrible in their current role, but with some training could be great at a different one. In both cases, training or promotion because of current-role ability would be a terrible idea.

Reply

