I had a Google interview about a year ago. The thing I found strangest is that some interviewers would walk in the room and throw up a coding exercise without any introduction at all. They literally wouldn't give their names or say what projects they worked on. You don't care that I've been working for almost 10 years in the real world and would rather judge me on my ability to solve a linked list problem? Really? If anything is "gameable", this is. Just spend a year playing topcoder in your spare time and you could ace any of the questions I got.
I have a good friend who works at Google and compares the recruiting team to "the hot girl in high school" (no sexism intended). Since Google recruiters are at one of the elite companies in the world, they can treat people disrespectfully. If you don't like it, there are a million more people to choose from. I actually didn't get this vibe from them, but I only dealt with a couple of recruiters.
The skills being evaluated in Google interviews (and on topcoder) are of minimal use in most real problem-solving environments. They're a proxy for skill - something many skilled candidates happen to have alongside the skills you actually want.
Linked list questions are a favored choice of this class of interviewer despite the fact that 90%+ of programmers will never have a good reason to write a linked list - or even use one, for that matter. It's a CS fundamental and it's important to understand, just like other data structures, but evaluating a candidate's ability to rattle off a particular linked list algorithm on the board doesn't tell you anything about their ability to solve problems customers care about.
> Linked list questions are a favored choice of this class of interviewer despite the fact that 90%+ of programmers will never have a good reason to write a linked list - or even use one, for that matter.
If you can't write basic algorithms to manipulate linked lists (traversal, reversal, etc) on demand, you almost certainly aren't a good coder. These kinds of questions are a good weed-out pass. They don't correlate with many necessary skills of a good programmer (organization, communication, thoroughness), but at least can detect lack of critical thinking or basic coding chops. They also lead to good follow-up questions that further explore a prospect's critical thinking skills.
These questions shouldn't be used to find human compilers. Oversights like minor syntax errors and the like should be ignored. Answers should be interpreted generously. Even then, you might be surprised how many candidates I've seen that were beyond incorrect. They weren't even in the right ballpark.
You shouldn't be expected to know Floyd's algorithm [1]. But reversing a linked list? Come on, the solution is 50-100 lines of code, 90% of it boilerplate.
I get the impression you didn't read the part of my post where I said these algorithms are useful.
The problem is expecting a candidate to have them memorized and be able to rattle off a particular algorithm from memory in a 15-30 minute interview window, when in the REAL WORLD if you don't remember the exact details of an algorithm, you look it up or grab it from a library instead of writing a busted implementation yourself.
That's why these questions are unrealistic: They're evaluating a candidate's ability to do something you almost never want them to actually do.
Yes, a candidate who can't understand linked lists is probably a bad choice. I never said otherwise. It's lazy to make the leap from that to 'thus, asking a candidate to write a complex data structure utilizing linked lists on the board in 30 minutes is a good idea'. That only follows if it's something they should actually be able to rattle off like it's no problem...
There are lots of ways to evaluate critical thinking and problem solving skills that don't rely on a candidate having memorized the contents of a CS textbook.
A better interview question is one that focuses on solving a real problem for a hypothetical customer, because that's what most engineers actually get paid to do. If the problem happens to be optimally solved with a linked list, you can see if the candidate arrives at that conclusion naturally, or guide them towards it if they don't.
> The problem is expecting a candidate to have them memorized and be able to rattle off a particular algorithm from memory in a 15-30 minute interview window
I get the feeling that we're talking past each other. It certainly seems that we have different kinds of interviews in mind. I'm referring to about an hour long session where a candidate is asked to solve a basic problem with a linked list. The candidate shouldn't be expected to have the algorithm memorized. Competent developers should be able to work out simple algorithms manipulating a linked list given an hour's time.
> you look it up or grab it from a library instead of writing a busted implementation yourself.
Usually. Sometimes there is no library. The point is the candidate should be able to write the library themselves if needed. The beauty of many linked list questions is that no reference is needed to work out the optimal answer (Floyd's algorithm being a notable exception).
> There are lots of ways to evaluate critical thinking and problem solving skills that don't rely on a candidate having memorized the contents of a CS textbook.
Again, I don't think that linked list traversal and reversal require some kind of arcane CS knowledge. Interviewers shouldn't be asking for e.g. on-the-spot implementations of Dijkstra's algorithm, the Hindley-Milner type inference system or the max-flow problem.
> A better interview question is one that focuses on solving a real problem for a hypothetical customer, because that's what most engineers actually get paid to do.
This is a good complement to a basic algorithmic / coding test. But it doesn't replace such a test.
My PoV is this: even if you have the hour or so it should take an average candidate to arrive at Floyd's algorithm or something like it, is that really the best use of the hour to screen the candidate? If your goal is simply to see whether they can solve problems using linked lists, that's noble, but making a candidate spend the majority of an interview trying to arrive at a solution for a new problem from scratch is a questionable decision, especially if each person in the interview pipeline is going to do the same thing with another problem.
For reference, the first time I saw the cycle detection problem it was in an interview, and I had to derive Floyd's algorithm myself. It indeed took about an hour, so your estimate's probably not far off. The problem is that an interview is about the worst possible environment in which to solve a new problem for the first time, without any assistance or context - and that's basically what you're requiring. The candidate is left swimming in the deep end of the pool while you watch them flail around, and at the end of it, maybe they've come up with a solution to the problem. Maybe they're really close and haven't hit that flash of inspiration yet. Maybe you accidentally misstated one of the problem's constraints and they're never going to figure it out as a result. Pretty lame.
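For anyone who hasn't run into it, the solution I eventually arrived at is essentially the two-pointer trick - a rough sketch in Lua, assuming nodes are plain tables with a next field (the function name is mine):

-- Floyd's cycle detection ("tortoise and hare"), sketched from memory.
-- Assumes nodes are tables of the form {value = ..., next = ...}.
function has_cycle(head)
    local slow, fast = head, head
    while fast ~= nil and fast.next ~= nil do
        slow = slow.next          -- moves one node per step
        fast = fast.next.next     -- moves two nodes per step
        if slow == fast then
            return true           -- fast lapped slow: there is a cycle
        end
    end
    return false                  -- fast fell off the end: no cycle
end

The non-obvious part - the flash of inspiration I mentioned - is realizing that if a cycle exists, the fast pointer must eventually catch the slow one inside it.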
For basic stuff like reversing a LL I agree that no reference is needed to work out the optimal answer, but on the other hand I strongly believe those questions are worthless as a way of evaluating competence. The kind of LL stuff I've seen in interviews skews much more heavily towards Floyd's than simple stuff like list reversal, most likely because the simple stuff is easy.
A good interviewer can certainly detect that a candidate just isn't familiar with LLs, and move to a different problem. I think that's an adequate method too: Attempt to screen out the people who aren't competent at logic and recursion and similar concepts, but without filtering out people who have just never been exposed to a particular class of problems.
On the other hand, if you think having a candidate flail around for an hour (whether they solve the problem or not) is a good idea, there's no convincing you otherwise. I just think it's a pretty poor choice if your goal is for both the candidate and the interviewer to come out of the experience with a positive impression and lots of useful information.
-- Build a singly linked list from the arguments: mklist(1,2,3) -> 1 -> 2 -> 3
function mklist(v, ...)
    if v == nil then return nil end
    return {value = v, next = mklist(...)}
end

-- Compare two lists element by element
function listeq(a, b)
    if a == nil or b == nil then return a == b end
    if a.value == b.value then return listeq(a.next, b.next) end
    return false
end

-- Reverse a list in place; `sofar` accumulates the already-reversed prefix
function revlist(list, sofar)
    if list == nil then return sofar end
    local next = list.next
    list.next = sofar
    return revlist(next, list)
end

function test()
    assert(mklist(1,2,3).next.next.value == 3)
    assert(mklist(1,2,3).next.next.next == nil)
    assert(listeq(mklist(1,2,3), mklist(1,2,3)))
    assert(not listeq(mklist(1,2,3), mklist(1,2,5)))
    assert(not listeq(mklist(1,2,3), mklist(1,2)))
    assert(listeq(mklist(1,2,3), revlist(mklist(3,2,1))))
    assert(not listeq(mklist(1,2,3), revlist(mklist(1,2,3))))
end
> You shouldn't be expected to know Floyd's algorithm [1].
Bob Floyd had more than one algorithm named after him. I thought you were referring to his beautiful algorithm for sampling without replacement [1], described by Jon Bentley in his CACM "Programming Pearls" column in 1987 [2]. If you don't know that one, it's worth studying for the techniques it reveals.
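If it's been a while, the core of the sampling algorithm is short enough to sketch from memory - here in Lua to match the list example elsewhere in the thread (a paraphrase, not a quote of Bentley's column; the function name is mine):

-- Floyd's sampling without replacement, as I remember it from the column:
-- choose m distinct integers from 1..n using exactly m random draws.
function floyd_sample(m, n)
    local chosen = {}                 -- set: chosen[x] = true if x was picked
    for j = n - m + 1, n do
        local t = math.random(1, j)   -- uniform integer in [1, j]
        if chosen[t] then
            chosen[j] = true          -- t is already taken, so take j itself
        else
            chosen[t] = true
        end
    end
    return chosen
end

The clever part is that no draw is ever wasted: a collision doesn't retry, it selects j instead, and the column shows why that still gives every m-element subset equal probability.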
Yes, but some interview questions are more important than others. I was asked this question before. I didn't know the answer. I still got the job.
A year later, I'm asking the question. I obviously don't hold it against interviewees if they don't know the answer. I see it as a "bonus points" question, not one that will make or break my opinion of a candidate.
Asking about cycle detection in linked lists can be informational even if the candidate doesn't know Floyd's algorithm. Like simpler questions, often the interviewee's thought process is more important than their final answer.
Why do you ask questions that don't influence your hiring decision? I find these kinds of questions quite annoying because I suspect the interviewer is not trying to measure me appropriately - and I have turned down job offers where I thought the interview questions were poor.
Why ask a question when you have empirical evidence that you can perform quite well (I'm assuming you are a good engineer) without being able to answer the question? It makes no sense to me.
> Why do you ask questions that don't influence your hiring decision?
Every question influences hiring decisions. Some questions just carry far more weight than others. Also, the answers often aren't as important as the thought process.
All else equal, the candidate that knows Floyd's algorithm would get the job over the candidate that doesn't. But we're talking about people here, and it's really rare that all else will be equal.
> Why ask a question when you have empirical evidence that you can perform quite well (I'm assuming you are a good engineer) without being able to answer the question?
We don't tell the candidate "write out Floyd's Algorithm". We say "how would you detect cycles in a linked list". If they can't derive SOME solution, that does count against them (and I'll be bold enough to say they probably aren't a good engineer). Someone who derives a decent but suboptimal solution to the problem isn't far behind someone that actually knows Floyd's.
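To be concrete, a decent-but-suboptimal answer is usually something like remembering every node you've visited - roughly this sketch in Lua, with the same assumed node layout as the list example elsewhere in the thread:

-- Naive cycle detection: correct, easy to derive on the spot, but uses
-- O(n) extra memory where Floyd's two-pointer version uses O(1).
function has_cycle_naive(head)
    local seen = {}              -- keys are the node tables already visited
    local node = head
    while node ~= nil do
        if seen[node] then
            return true          -- we came back to a node we've seen: cycle
        end
        seen[node] = true
        node = node.next
    end
    return false
end

A candidate who gets this far and can then discuss the memory trade-off is most of the way there.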
In the end we're ranking people here, not objects. Many things factor into the final decision.
>>Doing a year of such training would raise your actual skill.
Hardly.
Building something is actual skill: taking a product from an idea, finishing it, getting some real users to use it, selling it, and monetizing it.
Those are the skills that matter.
As for storing trivia in your brain: any decent guy who gets things done can read up and work on such stuff. It's an utter waste of time to spend years putting textbook knowledge in your brain, unless your profession is teaching.
So I signed up for an account, yet I was still unable to decipher what exactly it is. It says it's a network of coders 100K strong with the occasional competition, but I'm not sure what content there is to occupy a year spent "playing topcoder".
I think the main page seems confusing because much of it is targeted toward businesses looking to have the site's users develop projects via contests.
If you click on Community->Developer it gets a little less confusing and shows a variety of contests. You can find some algorithm contests at Competitions->Algorithm->Single Round Matches->Launch Arena.
When I try this, I get an error message related to Java. When I last used the site, I ended up installing Java WebStart, downloading a JAR file for the arena, and running the JAR file from the commandline.
Overall TopCoder seems like more of a hassle to get started with than it should; fortunately, other sites like YC-funded HackerRank exist, although I'm not sure how the actual content and community compare.