
> Work sample evaluations do quite well, for example. As do in depth technical discussions about ... just about any subject the candidate claims to know about.

How do you compare candidate A, whose preferred subject is 1, with candidate B, whose preferred subject is 2? And how do you get a candidate to talk sincerely rather than bullshit around the topic, especially when testing ML - which is more or less modern alchemy: no definitive answers, just a bag of tricks?

I have had problems limiting wasted time and steering the conversation toward useful topics because the candidate wouldn't stop the bullshit talk. Maybe other fields leave less room for bullshit maneuvering.

I think it's better to go with a list of basic questions that stays the same across candidates. Only if they ace the basics do I test for depth. Depth can be tricky to evaluate, especially for candidates who aren't obviously bad - though maybe that's just in ML.
