Assume that we have feedback on team members from two different project managers.

| Managers \ People | U | V | W | X | Y | Z |
| --- | --- | --- | --- | --- | --- | --- |
| A | 70 | 63 | 82 | 91 | 56 | 77 |
| B | 68 | 60 | 80 | 80 | 55 | 60 |

Can we say that W performs better under manager A than under manager B? At first glance, yes, but analyse the scores a little more closely. **Project manager B has rated every team member lower than manager A.** It may simply be that he uses a tighter scoring scale.

In a different situation, two teams of different people may have taken two different tests, and we want to compare people across the teams. Or a university might want to compare students who graduated in 2001 with those who graduated in 2009.

**The question is: how do we compare people when the scores we have do not use the same basis?**

The answer is **normalization**. We fix a target mean and the degree of deviation we would like to see; for a test out of 100 marks, we might want a mean of 50 and a deviation of 20%. We then compare these targets against the actual mean and deviation of each set of scores, and adjust each score accordingly: the adjusted score is the target mean plus the target deviation times (score minus actual mean) divided by the actual deviation. **Please refer to the attached spreadsheet, which helps you do this.**

In the given example, A's ratings have a mean of 73 and a (sample) standard deviation of about 13, while B's have a mean of 67 and a deviation of about 11. **Let us bring both to a mean of 70 and a deviation of 15.**
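The adjustment above can be sketched in a few lines of Python (the function and variable names are my own; the arithmetic follows the description, using the sample standard deviation, which matches the figures above):

```python
from statistics import mean, stdev

def normalize(scores, target_mean=70, target_sd=15):
    """Shift and scale scores so they have the target mean and deviation."""
    m, s = mean(scores), stdev(scores)  # actual mean and sample standard deviation
    return [target_mean + target_sd * (x - m) / s for x in scores]

a = [70, 63, 82, 91, 56, 77]  # manager A's scores for U..Z
b = [68, 60, 80, 80, 55, 60]  # manager B's scores for U..Z

print([round(x, 1) for x in normalize(a)])
print([round(x, 1) for x in normalize(b)])
```

Rounding to one decimal reproduces the normalized scores shown in the table below.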

| Managers \ People | U | V | W | X | Y | Z |
| --- | --- | --- | --- | --- | --- | --- |
| A | 66.3 | 58.1 | 80.4 | 90.9 | 49.9 | 74.5 |
| B | 71.2 | 60.0 | 87.9 | 87.9 | 53.1 | 60.0 |

So we note that, after normalization, W actually scores higher under manager B (87.9 against 80.4); the raw figures had suggested the opposite. The same holds for U: from the initial figures we might have concluded that U does better under A, but the reverse is true. Only X remains ahead under A, and only slightly.

Mathematically, this process is called **normalization** (rescaling via standard scores), and it is also useful for fitting scores to a bell curve. Read more about it here if you are interested; however, the attached spreadsheet is sufficient to get you started.
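As a brief sketch of the bell-curve connection: once all scores sit on a common scale, each score can be read off as a percentile of a normal distribution with the chosen mean and deviation. This uses Python's standard `statistics.NormalDist`; the mean of 70 and deviation of 15 are the targets chosen above.

```python
from statistics import NormalDist

target = NormalDist(mu=70, sigma=15)  # the common scale chosen above

# W's normalized score under manager B was 87.9; on the bell curve that
# corresponds to roughly the 88th percentile.
percentile = target.cdf(87.9)
print(round(percentile * 100))  # prints 88
```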