Proposal


Background
There's a web app in development called QuickRank. It's a simple peer-review system in which employees pick a winner from repeated "match-ups" among their coworkers. Questions like "Who is more hardworking?" and "Who takes longer lunch breaks?" are posed, and the user simply clicks on one of two faces. It's like a corporate version of "FaceMash", the infamous website Zuckerberg wrote at Harvard comparing the attractiveness of classmates.
After "enough" clicks, the managers are presented with comparative rankings and graphs for each employee group, both current and historical.
A tool like QuickRank may be useful in many corporate situations, and can probably be generalized to lots of different contexts. The key is to provide meaningful, non-obvious insights into employee performance. We realize that a ranking of employees can be... controversial. But in many situations, a comparative (rather than absolute) performance summary can be very revealing.

1. Problem we're solving: Employee performance reviews are typically based on an absolute scale: either employees are rated 1-5 on various metrics, or (usually vague) goals are stated and measured against. Managers may get a good idea of individual employee development, but they have little insight into comparative performance. And comparative performance is what matters for the business: managers may want to know the relative strengths and weaknesses of each employee compared to the rest of the group or organization.

2. Data: The underlying data in this project will be the collection of matchups presented to each user, and the winner from each. From that data, we can construct many metrics, like performance over time, performance versus the average, complete rankings, etc.

3. How: We plan to explore several techniques to gather interesting data. One simple approach is to artificially generate the clicks from some assumed distribution. A far more useful approach would be to have test groups actually use QuickRank; perhaps the class, or a small business, would be willing to participate in the study.
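The synthetic-click idea above could be sketched as follows, assuming a Bradley-Terry model where each participant has a latent "strength" and wins a matchup with probability proportional to it. The names and strength values here are purely illustrative, not data from the project:

```python
import random

random.seed(0)

# Assumed latent strengths for illustration only.
strengths = {"Bob": 3.0, "Joe": 2.0, "Jane": 1.0}

def simulate_click(a: str, b: str) -> str:
    """Return the simulated winner of one matchup between a and b,
    where P(a wins) = strength_a / (strength_a + strength_b)."""
    p_a = strengths[a] / (strengths[a] + strengths[b])
    return a if random.random() < p_a else b

# Generate a batch of synthetic matchups among random pairs.
people = list(strengths)
matchups = []
for _ in range(1000):
    a, b = random.sample(people, 2)
    matchups.append((a, b, simulate_click(a, b)))
```

A batch generated this way gives a known "ground truth" ordering, which makes it easy to check whether a ranking algorithm recovers the intended order.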
If we could convince some participation-oriented grad classes to take part in the study, we could ask questions such as:
  • Who participates more in discussions?
  • Who has more insightful questions?
  • Who has more creative answers?
  • Who do you think will do better on the final? :)

The last question is especially interesting: on one hand it incorporates, in a way, all the other questions; on the other hand it can easily be tested, if the professor gives us access to the grades anonymously (e.g., each student is assigned an ID, so we never learn names).
The only public data from the survey would be the top 3 in each category, e.g. 'Who has the most insightful questions?':
1. Bob
2. Joe
3. Jane

4. Algorithms/Techniques: The "score" of each employee in a dimension is based on the Elo rating system. But that's just a start. Finding algorithms and techniques to provide insightful data is the key.
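As a concrete sketch, a single Elo-style update per matchup might look like the following. The K-factor of 32, the 400-point logistic scale, and the 1500 starting rating are conventional Elo defaults, not decisions made in this proposal:

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that A beats B under the Elo logistic model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def elo_update(rating_a: float, rating_b: float, a_won: bool, k: float = 32.0):
    """Return the new (rating_a, rating_b) after one matchup (click)."""
    e_a = expected_score(rating_a, rating_b)
    score_a = 1.0 if a_won else 0.0
    new_a = rating_a + k * (score_a - e_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - e_a))
    return new_a, new_b

# Example: two employees start at 1500; A wins the click.
a, b = elo_update(1500, 1500, a_won=True)  # -> (1516.0, 1484.0)
```

One ratings vector per question dimension would give a per-dimension score for each employee, which the rankings and graphs can then be derived from.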

5. What we'll evaluate: We might evaluate several different things: the paper could be a collection of case studies, or perhaps more of an algorithm analysis where we measure how changes in the algorithm affect the outputs. Or we could measure "expected" versus "real" performance: See how employee/user rankings compare to the rankings when performed by a manager.
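For the "expected versus real" comparison, one simple metric is Spearman's rank correlation between the peer-derived ranking and the manager's ranking. A minimal sketch, with illustrative names only:

```python
def spearman_rho(rank_a: list, rank_b: list) -> float:
    """Spearman rank correlation between two orderings of the same items:
    1.0 for identical rankings, -1.0 for exactly reversed ones."""
    n = len(rank_a)
    pos_b = {item: i for i, item in enumerate(rank_b)}
    d2 = sum((i - pos_b[item]) ** 2 for i, item in enumerate(rank_a))
    return 1.0 - 6.0 * d2 / (n * (n ** 2 - 1))

peer_ranking    = ["Bob", "Joe", "Jane", "Ann"]
manager_ranking = ["Joe", "Bob", "Jane", "Ann"]
rho = spearman_rho(peer_ranking, manager_ranking)  # -> 0.8
```

A high correlation would suggest peer clicks reproduce managerial judgment cheaply; a low one would be the more interesting finding, since it would show the two methods measure different things.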

6. Expectations: We expect the data to show that peer reviews, when performed simply and at a large scale, can rival or outperform traditional top-down reviews.

7. Team: Nishant Patel and Catalin Tiseanu

Disclosures / More Background:
QuickRank was invented by Illogic Inc. (a three-person company that includes Nishant Patel) for commercial purposes. It is currently unfinished and not under active development, though development may resume in the future. We plan to leverage the existing code to create a functional version of the web app for research purposes. The contributions made for this research project may be used in part for commercial purposes in the future.



Comment (Ben): Intriguing. I think this is interesting for many reasons. Looking forward to seeing what you do. My only question is what is the relevant literature you should be looking at? One bit of peripherally related work is going on at IBM - here is a representative paper:
http://www.perer.org/papers/pererCHI2011.pdf

It is always best to be transparent about such efforts. It would be better to state the context of QuickRank up front (i.e., a commercial effort that you are involved in, etc.)



QuickRank Progress Report

Final Paper: