
I often tell people that before they think about their hiring process, they should think about their performance reviews. You're doing those with multiple orders of magnitude more data, yet most managers will say their performance reviews have a lot of uncertainty.

I spent a lot of time at my last job developing ways to measure performance across my team. One big surprise was that I had to significantly change the way we assigned work in order to make it at all measurable. Rather than giving people well-defined tasks, I switched to assigning projects that had some amount of (expected) business value. This meant the projects had to include things like deployment, talking to marketing, and whatever needed to happen to make it useful. Generally each one was 3-6 weeks of work.

This provided a good first-order measure of effectiveness. Importantly, it helped people change their habits to contribute more business value. I found massive variation in the junior engineers, easily with 5-10x differences between the most and least effective. The differences in senior engineers were smaller, but still ~2x.
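
For concreteness, the bookkeeping behind that first-order measure can be as simple as the sketch below (made-up names and numbers, not my actual tooling): score each person by expected business value shipped per week of effort.

    # Minimal sketch of per-project bookkeeping (hypothetical data).
    # "Effectiveness" here = expected business value delivered per week of
    # effort, counting only projects that actually shipped end to end.
    from dataclasses import dataclass

    @dataclass
    class Project:
        owner: str
        expected_value: float  # estimated business value, arbitrary units
        weeks_spent: float
        shipped: bool          # deployed, marketing looped in, actually usable

    projects = [
        Project("alice", expected_value=40, weeks_spent=4, shipped=True),
        Project("bob",   expected_value=15, weeks_spent=6, shipped=True),
        Project("alice", expected_value=25, weeks_spent=3, shipped=False),
    ]

    def effectiveness(owner: str) -> float:
        done = [p for p in projects if p.owner == owner and p.shipped]
        weeks = sum(p.weeks_spent for p in done)
        return sum(p.expected_value for p in done) / weeks if weeks else 0.0

    for owner in sorted({p.owner for p in projects}):
        print(owner, round(effectiveness(owner), 2))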

Of course, there are other major factors you have to take into account, like code quality and mentoring. I developed similarly detailed systems for measuring those. Once again, you have to change the way you do things if you want to make them measurable, often accepting local inefficiencies for global insight.

With all that data in hand, I correlated it with our hiring ratings. The results? Our two best engineers were both borderline in the hiring process, and would have been rejected except for a strong vote of confidence from our senior engineer, who, when we checked, turned out to have a basically perfect track record on hiring judgments, much better than anyone else in the company, me included.
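
Mechanically, that comparison is nothing fancy (made-up names and numbers below, not the real data): correlate each interviewer's ratings of the people we hired with how those people later measured on the performance data.

    # Illustrative sketch: per-interviewer correlation between interview
    # ratings and later measured performance (all values hypothetical).
    ratings = {                      # interview score given to each hire, 1-5
        "manager":    {"alice": 3, "bob": 4, "carol": 2},
        "senior_eng": {"alice": 5, "bob": 3, "carol": 4},
    }
    performance = {"alice": 9.5, "bob": 4.0, "carol": 7.0}  # measured on the job

    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    for interviewer, scores in ratings.items():
        hires = sorted(scores)
        r = pearson([scores[h] for h in hires], [performance[h] for h in hires])
        print(interviewer, round(r, 2))  # closer to 1 = better hiring judgment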




IME it's hard to precisely rank average devs who are in the middle. There's a lot of clumping.

It's much easier to identify overperformers or extreme underperformers. You mention the big possible range, but I've never had to dive that deeply into data to spot them; I've used the data to check my instincts, but I've generally been right. And if you don't trust your instincts, just ask your team! They know whose code reviews they dread, who never offers helpful suggestions, who seems to do just the minimum. It's sometimes tricky (people often don't want to say bad things about their peers), but generally people already know.

I know what I'm looking for in my performance reviews, and so I know what I'm looking for in my candidates: a competent coding baseline, plus skills in the non-coding areas (knowing what not to build, anticipating edge cases, the ability to distill business logic, the ability to explain complex things clearly...). But this list is quite different from the "standard" tech interview loop: algorithm after algorithm with a sprinkling of "design a URL shortener."


The data had a lot of other benefits:

* When I took over, I was a lot more permissive about letting people work from home. Upper management pushed back and was worried about productivity. I was able to show that people were getting much more done than before (though admittedly there were other factors; I changed a lot of things).

* After a few months of monitoring, it became clear that one coworker had a predictable but volatile pattern: he'd be very productive for a few weeks, then very unproductive for a few weeks. So I asked him to take on urgent projects during the "up" periods and avoid them during the "down" periods.

* One particular person improved so fast it was almost unbelievable. They were already strong to start, but after 12 months they were literally getting about 4 times as much done as they had at the 3 month mark. I would never have believed it if I hadn't been keeping close track...and this helped me fight for promoting them twice in a year.


Not everyone is great at interviews. One of the best engineers I worked with was very bad at interviewing (even I wasn't initially sure whether we should hire him). The problem is that most interviews are very different from actual day-to-day work. Candidates don't perform the same tasks in the same way as they would while working. The same goes for evaluating the results: very rarely is a candidate rated using the same criteria as in their performance review.

Would you mind sharing more details about how you measured your team's performance? Thanks


The company I work for just started the process of completely reworking how the development team's impact/effectiveness is measured (along with each member of the team, as you would imagine), so your comment really piqued my interest.

What did you end up measuring? How did you measure it? We've been debating for the last few days how to measure the outputs and their real business impact without getting caught up measuring inputs that may or may not really matter (commits, LOC, story points, etc.).

For context, this is an org that has historically been on the dysfunctional end of the operating spectrum: high employee churn, lots of technical debt, no testing to speak of in the flagship product, and high concentrations of knowledge held by single individuals.

We have somewhat coalesced around the idea of dynamically assigning business impact metrics on a per-feature/product basis (if we build this thing, we would expect to see metrics x, y, and z move in a given direction). In addition to those metrics, we are thinking of also doing something along the lines of an NPS (net promoter score) that end users would give to the feature/product. Taking both of these into account would then score the development team's effectiveness/impact.

In addition to the outputs mentioned above, we would also be tracking the inputs, but more as a historical data set, to see if there are any correlations between our inputs (commits, LOC, story points, etc.) and better NPS and business impact metrics.
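
To make the NPS part concrete, the scoring we have in mind is roughly the sketch below (hypothetical features and responses): per-feature NPS computed the standard way, with inputs like story points logged alongside so we can look for correlations later.

    # Illustrative sketch: per-feature NPS plus the "input" data we'd log
    # next to it (all numbers hypothetical).
    def nps(responses):
        """Standard NPS: % promoters (9-10) minus % detractors (0-6)."""
        promoters = sum(r >= 9 for r in responses)
        detractors = sum(r <= 6 for r in responses)
        return 100 * (promoters - detractors) / len(responses)

    features = {
        "bulk_export":   {"responses": [9, 10, 8, 6, 9],  "story_points": 21},
        "new_dashboard": {"responses": [7, 5, 9, 6, 4],   "story_points": 34},
        "sso_login":     {"responses": [10, 9, 9, 8, 10], "story_points": 13},
    }

    for name, data in features.items():
        print(name, "NPS =", round(nps(data["responses"])))
    # Over time, pair each feature's logged inputs (story points, commits, ...)
    # with its NPS and business metrics and check whether they correlate at all.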

I'd love to hear any feedback, experiences, advice.

P.S. Team size is 10 devs: a core team of 4 co-located in the U.S., the rest remote and international.


> For context, this is an org that has historically been on the dysfunctional end of the operating spectrum: high employee churn, lots of technical debt, no testing to speak of in the flagship product

> P.S. Team size is 10 devs: a core team of 4 co-located in the U.S., the rest remote and international.

My advice would be to bring in a good dev manager and stay away from trying to narrowly define dev productivity metrics.

A good dev manager will be able to bring you up to average and fix the obvious problems.

If you are part of a larger company, then the business side is probably using OKRs (objectives and key results) or something similar to track things at a higher level. Start looking at these and making sure your team is contributing to them.

As a senior manager, I find your team's self-assigned dev metrics meaningless. They're not going to be enough to justify more staff, pay rises, different work, etc.


How did you measure code quality and mentoring? Those are both big open problems.


For code quality, the main thing I did was read every pull request and keep track of how people did, including cases where people improved existing code. I also kept track of times when someone made a contribution to how we thought about quality – e.g. when one person used type-level programming to essentially combine a bunch of tests into one and significantly reduce the amount of code needed.
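
(Not his actual code, but the flavor of that kind of change, as a small Python sketch: let the type checker prove every case is handled, so one static check replaces a pile of per-case tests.)

    # Requires Python 3.11+ (typing.assert_never) and a type checker like mypy.
    from enum import Enum, auto
    from typing import assert_never

    class Event(Enum):
        SIGNUP = auto()
        UPGRADE = auto()
        CANCEL = auto()

    def handle(event: Event) -> str:
        if event is Event.SIGNUP:
            return "send welcome email"
        if event is Event.UPGRADE:
            return "provision new plan"
        if event is Event.CANCEL:
            return "schedule offboarding"
        assert_never(event)  # type error here if a new Event variant is unhandled

    print(handle(Event.UPGRADE))
    # Instead of a unit test per variant to catch a missing branch, the type
    # checker covers all of them; one smoke test of handle() is enough.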

We had ongoing training – we'd meet every week to go over a chapter of some book – so everyone converged on the same code style, which made monitoring quality relatively easy.

With mentoring, I primarily paid attention to what people said about each other. I also monitored it through code reviews and Slack. People's reviews were very consistent – "Bob is the most helpful mentor, Charles is also pretty helpful" – so again, relatively easy in that specific case.


My current company has one of you. He thinks he's effective too. I'm sure there's a word for that...



