"Then, acting as though this was the first time I'd seen this problem, I would ask if it was ok if I thought aloud as I worked my way through the problem on the board. I'd mumble to myself about moving-this-piece-over-here and-now-we're-going-to-get-this, and lo-and-behold, I accidentally solved it in constant memory space, in C - a language I didn't even claim to be particularly good at! Only someone with amazing problem solving skills would be able to do that their first time thinking through this problem so earnestly!"
All this tells me is that tech interviewing is horribly broken. What we have here is a story of a guy with a little bit of experience who has (if his story is true) successfully learned how to game the interview systems at dozens of startups. If you interview people, this article should terrify you. You can argue that it isn't "gaming" if the guy turns out to be a good employee, but that's wrong. If he can trivially game the system, so can anyone else with the ability to go to dozens of interviews and remember questions.
There's a fix for this problem: make the interview about more than the answer to the stupid question. If the candidate answers your question a little too quickly, ask something harder. Chase down a detail. Pick an experience on the resume (and not an obvious one), and dig as deeply as you can. Good candidates know the details. Bad candidates don't have details you can chase.
If you're interviewing correctly, no candidate should be able to come in and snow you with some regurgitated C code from previous interview experience, because you'll be able to out-flank them at every attempt to cough up a line. Unfortunately, there's a corollary: you need to know the subject better than the person that you're interviewing. If you don't, you have no hope of screening for good people.
It's not really gaming the system. He solved the problems himself. He asked questions, learned, and then came up with more and more effective solutions. Aren't these exactly the traits an employer should be looking for? Didn't he put in a lot of effort and time?
Here's my own example. I've been recording some of the problems I have to deal with at work, and some of the things I come across in my spare time, in the form of a blog for the last 3 years (with some gaps here and there). I recently started looking for a better place to work and included the link to my blog in my resume. I got an offer recently; it will be a substantial increase in salary. I only interviewed with ~5 companies to get that offer - my easiest offer and shortest job search ever.
Now, would you say that I 'gamed the system' because I had a blog and most don't? I guess you could say so, because when I started my blog I had that in mind - it might be useful next time I looked for a job. But I did not mindlessly type stuff in - I made sure I understood it as well as I could, and quite often writing the solution down helped my understanding. Same with this guy - he did not just memorize how to do it.
I had one YC company ask me to create a linked-list structure in ruby, and then reverse it. I did it recursively, mentioned that it would blow the call stack, and then re-implemented it iteratively. I also remember there was some problem where I used symbols rather than strings and that would cause some bad memory leaks over time in a long-running process. - From this passage, he might not be the "top 1 percent" everyone tries to hire, but at least top 5%, that I'm sure of. Or maybe he gamed my impression too.
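For reference, the iterative approach described there (reversing in place, with O(1) extra space and no call stack to blow) can be sketched like this; a minimal illustration in Python rather than Ruby, with a hypothetical Node class:

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def reverse_iterative(head):
    # Walk the list once, flipping each node's next pointer.
    # O(n) time, O(1) extra space -- no recursion, so no stack overflow.
    prev = None
    while head is not None:
        head.next, prev, head = prev, head, head.next
    return prev

# Build 1 -> 2 -> 3, reverse it, and read the values back out.
head = Node(1, Node(2, Node(3)))
rev = reverse_iterative(head)
values = []
while rev:
    values.append(rev.value)
    rev = rev.next
print(values)  # [3, 2, 1]
```

The recursive version recurses once per node, so a long enough list overflows the call stack; the loop above touches each node exactly once instead.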
The definition of gaming the system is different here. These days, unless you are implementing a library yourself, you are probably never going to reverse a linked list in your life, or sit down with a math textbook while you code. Those days were over by the end of the '80s.
That means asking these questions amounts to judging a candidate on things that have nothing to do with his job. That's why so many programmers don't care about all this: for 99% of the work they are likely to do, they don't need to know it.
Under such circumstances, the only way a candidate can deal with these questions is not by actually learning how to solve them, but by learning them just for the interview's sake.
So you end up hiring someone on criteria that don't matter, and the candidate manages to beat you with a system of work that defeats your original means of judging him.
You could just ask: How is lists:reverse(List) implemented in Erlang, or if you don't know it, how could it be implemented? What are the trade-off and limitations for different implementations?
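For illustration, the usual textbook answer is the accumulator-based reversal; a sketch in Python standing in for Erlang. It also hands you a trade-off to discuss: with O(1) cons cells (as in Erlang) this runs in O(n), but Python's `lst[1:]` slicing copies the tail, making this particular sketch quadratic.

```python
def reverse(lst, acc=None):
    # Accumulator-style reversal, as lists:reverse is typically
    # implemented for cons lists: move the head of lst onto acc
    # until lst is empty, then acc is the reversed list.
    if acc is None:
        acc = []
    if not lst:
        return acc
    return reverse(lst[1:], [lst[0]] + acc)

print(reverse([1, 2, 3]))  # [3, 2, 1]
```

In a language without tail-call elimination this also hits the recursion limit on long inputs, which is another limitation worth raising in the answer.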
Since most abstractions are leaky, you can make a good argument that learning how libraries tick, even if you are never going to rewrite them, is a useful skill.
Understanding the limitations of libraries might help you use those libraries better. Using libraries can be a part of delivering business objectives.
I don't think that list reversal is a good interview question. At least not as a positive discriminator. Though it might help you weed out people: I'd probably not want to hire people who can't figure out how to reverse a list. If they'll stumble upon such a simple problem, they might also stumble over the more complicated problems we have to solve to deliver business objectives.
I wrote `can't figure out' deliberately. Not knowing how to reverse a linked list, but being able to come up with an algorithm after a few minutes thought is perfectly adequate.
I agree with you - it is a weeder-outer, but not an includer-inner - there are lots of people who can write code to reverse linked-lists that I don't want to hire.
In particular I don't want to hire the people who would rather write their own implementation than use a library (think Not-Invented-Here and Yak Shaving).
Indeed. By the way, in purely functional implementations of list reversal there are non-trivial trade-offs to make. (See Jane Street's standard libraries for OCaml.)
If you don't need to know how to solve algorithmic problems or use mathematics and you're programming, why the fuck is your job still not automated or outsourced?
Tell me about it. I have nearly 20 years of experience in CS, and hold an advanced degree. I have 5 years of experience in a very niche field which is relatively hot. I was approached by Facebook for this specific purpose. They invited me in for just a "chat", asked me if I'd be interested in working with them.
Obviously I said yeah, I'd be willing to talk. They set up an in-person interview. Just 45 minutes they said.
So I show up, fully expecting a discussion about that area, my background, etc.
Instead, I get a random coder who wasn't even making eye contact when asking questions. And the questions? Elementary CS stuff that any undergrad would know.
Now, obviously I know them too; heck, I taught Algorithms courses in grad school.
But why waste my time with this crap? You can hire almost any fresh graduate from a top-40 school who would do a halfway decent job at answering those questions. What does asking a PhD "how to print the levels of a binary tree, one per line" tell you about his work? (That wasn't the question, but that was the level of questioning).
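For what it's worth, that level of question does have a short textbook answer: breadth-first traversal, draining one complete level per pass of the queue. A Python sketch, purely for illustration:

```python
from collections import deque

class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def levels(root):
    # Breadth-first traversal, draining one full level per loop
    # iteration so each level comes out as its own list (one line).
    out, queue = [], deque([root] if root else [])
    while queue:
        level = []
        for _ in range(len(queue)):
            node = queue.popleft()
            level.append(node.value)
            if node.left:
                queue.append(node.left)
            if node.right:
                queue.append(node.right)
        out.append(level)
    return out

#       1
#      / \
#     2   3
#    /
#   4
tree = Node(1, Node(2, Node(4)), Node(3))
for level in levels(tree):
    print(level)
# [1]
# [2, 3]
# [4]
```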
The sad part is: I can waste a couple of weeks of my life, go to Glassdoor etc. and basically memorize these questions. I'll probably stand a good chance of blowing the interviewer away by pretending to solve them on the spot. But what does that tell you about how I am at real-life problems?
I've interviewed a few people who thought the elementary questions I was asking were beneath them. I insisted, politely, that they humor me, because it's something we make everybody do, nothing personal. They failed. Hard.
Some people are probably bullshitters, but my theory is that some are just people who used to code (a little) but have since moved on to making architectural diagrams for so long they've forgotten what it's like in the trenches. Unfortunately, if you're interviewing for a job in the trenches, you need to demonstrate the ability to do the job.
It's entirely possible they screwed up, depending on why they asked you in, but once a company starts letting in people they "know" can code without verifying it first hand, things go bad fast.
"I insisted, politely, that they humor me, because it's something we make everybody do, nothing personal. They failed. Hard."
I'm sensitive to that argument, but I also think the parent has a point. Coding the solution to a toy problem on a whiteboard has only limited correlation with the ability of a candidate to solve real-world problems. We use the tool because it's one of the best we have, not because it's a great tool. And if your interviewers can't summon the energy to make eye contact with a candidate...well, they're one-tool cavemen. Most software engineering demands communication skills and culture fit, yet we're still screening as if individual mental horsepower were the most important factor.
The danger in dismissing someone because they can't stand at a whiteboard and barf out the code to print the levels of a binary tree is that it has next to nothing to do with real work. That's just a fact. I probably couldn't do it correctly on a whiteboard, on my first try right now, and I write a lot of code. And I'm sure some hiring manager would eagerly dismiss me as "another frickin' PhD that doesn't know how to write code."
The point is twofold:
1) As in all things, good judgment is key. Sometimes the candidate can bork the technical question, and still deserve a hire.
2) If you find yourself depending exclusively upon the outcome of coding puzzles to screen candidates, you've already failed.
I was talking to a friend of mine about just this issue recently. He and I have been on both sides of the interview table at different times. I have gotten interviewees to solve toy programming problems and I've asked them the funny little logic problems, just as I have been asked the same sort of things in the past. But eventually I too came to the realisation that these kinds of things had little if any real significance to the work that the interviewee was going to be doing. Is it usually going to be the case that the guy you hire will have to write his code with unusual speed, under the pressure of three or more people looking over his shoulder and judging his answer, without the benefit of Internet access or Intellisense or what have you?
My approach is to throw out the whiteboard coding and logic questions whenever I can and to look for examples of a prospective employee's real world work, namely contributions to open source (e.g. GitHub makes this easier) and if not that, then perhaps they can directly provide me with private real world examples of code they've written.
I realise that not everyone will necessarily be able to provide such examples of real world work and yes, looking over someone's pre-existing project will probably take more time than seeing if they can reverse a string. But maybe this is also part of the problem: hiring practices in some places have gotten lazy.
I can't think of a way to entirely get rid of FizzBuzz (or the guy who has to escape the fire spreading from one side of the island, etc.) in 100% of cases, but I really wish I could, because it just doesn't feel right.
I'm constantly surprised, when I read articles like this one or when I conduct interviews myself, that no one considers how the employee deals with a "project as a whole". Toy logic problems and algorithm optimisation tell you nothing about how good that person is at actually working as a software engineer. Do they care about unit tests and code coverage? If they find a process missing (no continuous integration, say) do they just mumble and make do, or do they find out why and, if necessary, implement it? Are they capable of delivering quality software, or can they merely write neat, optimised algorithms?
Those are questions a manager should be asking, ie, outside the scope of the interview I'm conducting. They're also the easiest to fake. Everybody knows the answer is to say they write test cases for every change before they check it in. That's way easier to memorize than any coding puzzle solution.
I've had some frustrating experiences where the obviously talented candidate was nearly passed over because they didn't use the right software engineering buzzwords while at the same time the manager came close to overruling the technical veto because they felt some other idiot sounded like a good fit.
Teaching someone to write unit tests is, imo, far more likely to succeed than teaching someone to understand recursion.
I can waste a couple of weeks of my life, go to Glassdoor etc. and basically memorize these questions. I'll probably stand a good chance of blowing the interviewer away by pretending to solve them on the spot. But what does that tell you about how I am at real-life problems?
Most of the so-called algorithm experts I've seen fit into this knows-theory-but-no-practical-stuff category. The fact is, there are lists of these questions and their standard variants; all you need to do is crawl interview forums. I assume there are even ready-made books and PDFs for the purpose. Just make sure you read all the questions beforehand - maybe spend an hour a day getting familiar with them.
Then go to the interview, and when the interviewer asks the question, act as though you've never heard it. Act like you're struggling. Then suddenly, like a hero emerging from a crisis, present the solution to the interviewer.
You have no clue how many people do this. I've seen candidates whose only job is this. They change companies every year, spend half of every day studying salary and interview trends, apply to a big brand, game the interview, collect the x% hike, stay for a year, and then move on.
These kinds of people are like an online club. And they tend to hire only their own kind.
No regard for the guy's actual work knowledge, his ability to solve real-world problems, or his command of a programming language, tools, and techniques. His productivity? All of that is irrelevant.
All that matters in these large web companies is an Ivy League brand, theoretical knowledge, and some arcane facts memorized in college. Close to 98% of the so-called algorithm experts fit into this category.
The genuine 2% are at places they like to work; they pick their employer, not the other way around.
The cost of hiring someone who can't do this stuff makes it worth the risk of putting off someone like you. Sometimes you have to ask, rather than take it on faith, if you haven't personally worked with the person you want to hire. A shitty programmer will kill an entire team.
There's a line here - I think the interview should cover a lot of bases, but by no means should a hire be a concrete thing. The first two weeks should be an intensive, fully-paid interview/trial-by-fire.
I suppose, I just don't see why a shitty programmer couldn't be filtered out almost immediately through this technique. Maybe it's a company size/age difference - what size company do you have in mind?
In the real world, people often have to relocate to work for your particular company. So you're out $50,000 if they don't work out in the first two weeks and you fire them. Also, people that will work for you are working for you (rather than themselves) because they value stability. You owe it to them to get it right the first time.
(Also, at big companies, people cannot be fired. If you fire someone, their manager "loses an open" and they are less likely to get promoted. Remember, you have to deal with the bad employee, not your manager.)
As a Facebook employee who holds a PhD, I can say we are constantly preoccupied with improving the quality of our interview process. In particular, the topic of how to adapt the interview process for advanced-degree holders comes up quite often. For example, I recall we recently discussed offering PhD candidates the option to give a talk about their research. (The option is indeed available.)
That being said, I think a dismissive or patronizing attitude towards a coding interview would be mistaken. Allow me to explain.
It is quite well known that the ability to code is poorly correlated with holding an advanced degree or other formal pieces of evidence. A candidate with strong credentials understands this and should, therefore, not be in the least offended by being asked coding questions. As a good professional, the strong candidate would handle the questions with panache and should be ready for more difficult questions, discussing advanced topics linked to the question at hand, or discussing her particular area of expertise. Either way, she must be able to code her way out of a paper bag, and I see nothing wrong with the interview process probing for that.
I've interviewed a few dozen engineers while at Facebook, many with advanced degrees (recruiters pair interviewers with interviewees by area of expertise as well as they can), some from top-10 universities. I ask 2-4 coding questions extracted from my daily job, starting at undergraduate level and on rare occasions ending with a difficult complexity question. Nobody has ever done perfectly, and for a variety of reasons holding an advanced degree doesn't correlate strongly with doing well.
All in all, I think you'd be mistaken to believe two weeks of rote memorization would be enough to pass the Facebook interview, which is very difficult. Overall, an attitude of professionalism, modesty, and focus helps a lot in any interview.
I have some simple baseline questions I ask everyone. Getting it right just means we continue the rest of the interview, but doesn't get you a job.
I almost always have a programming question I just made up 30 minutes before the interview. I make sure that I can easily do it in 10 minutes, and then ask it. It really isn't important to me whether they get the answer; I want to see what questions they have and how they think. Good candidates will usually get it coded, though some get hung up on something early on and still make good progress.
If they come in with a precanned response to my question, then they're either a mindreader or can time travel. In either case, I still want them on my team.
>> I almost always have a programming question I just made up 30 minutes before the interview. I make sure that I can easily do it in 10 minutes and then ask it.
This is a lot easier said than done. Do you really do this and, if so, any tips on how? (I'm guessing variations on a core set of questions.)
I usually don't mind asking a question someone may have already heard, because there's not really a right answer; I just want to discuss a problem to see how you think through it.
I actually don't have an explicit set of core questions, but I'm sure there are clear themes that run through the problems.
Probably my only real tip I can think of is not to have them code something related to stuff you're currently working on. You'll be far more likely to underestimate how hard the problem is.
But the problems aren't that hard to think of. They're not like ACM programming contests. They're more like:
You have two lists. One is a list of IP addresses. The other list contains IP addresses or IP addresses with wildcards. E.g., 192.168.$.1 or $.255.255.255. Return the set of IP addresses from list 1 that are matched in list 2. Additionally return the index from list 2 that matched -- I want the index that is most specific (as defined by having the fewest wildcards, or wildcard furthest to the right when there is a tie).
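A rough Python sketch of one way to approach that problem, treating `$` as a single-octet wildcard (an assumption about the intended semantics) and encoding the specificity rule as a sort key:

```python
def match_score(pattern, addr):
    # Return None if the wildcard pattern doesn't match addr;
    # otherwise a specificity key: fewer wildcards wins, and on a
    # tie, a wildcard further to the right wins.
    p_parts, a_parts = pattern.split('.'), addr.split('.')
    if len(p_parts) != len(a_parts):
        return None
    for p, a in zip(p_parts, a_parts):
        if p != '$' and p != a:
            return None
    wildcards = [i for i, p in enumerate(p_parts) if p == '$']
    # Lower wildcard count sorts first; among equal counts, the
    # negated positions make a wildcard at octet 3 beat one at octet 0.
    return (len(wildcards), [-i for i in wildcards])

def best_matches(addrs, patterns):
    # For each address, the index of the most specific matching pattern.
    result = {}
    for addr in addrs:
        scored = [(match_score(p, addr), i)
                  for i, p in enumerate(patterns)]
        scored = [(s, i) for s, i in scored if s is not None]
        if scored:
            result[addr] = min(scored)[1]
    return result

# Hypothetical inputs for illustration.
addrs = ['192.168.0.1', '10.0.0.1', '8.8.8.8']
patterns = ['192.168.$.1', '$.0.0.1', '10.0.0.1']
print(best_matches(addrs, patterns))
# {'192.168.0.1': 0, '10.0.0.1': 2}
```

The exact pattern beats the wildcard one for 10.0.0.1, and 8.8.8.8 matches nothing. A naive scan over both lists like this is usually fine at interview scale; compiling the patterns into a trie is the obvious follow-up discussion.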
If you are implementing, then you are right. If you are interested in studying algorithms and how they work, asymptotic considerations can tell you a lot about the algorithm. You have to decide what you want to ask for in your interview.
My usual question on the interview is to implement an atoi() function. All I care is that a person doesn't do stupid shit like switch case for characters or a strlen() for iterating through the input. It really is a 10 to 15 minutes task and boy it shows whether a person can write code. Usually then I continue with "how'd you test it?" question, then move on to multithreading and a quick design question. All in all I spend maybe 30 minutes to tell if I'd make a hire.
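For illustration, the kind of answer presumably being looked for, sketched in Python and ignoring the whitespace and overflow handling a real C atoi() would need:

```python
def atoi(s):
    # Minimal atoi sketch: optional sign, then accumulate digits
    # arithmetically in a single forward scan -- no strlen()-style
    # pre-pass, no per-character switch, just digit math.
    i, sign = 0, 1
    if i < len(s) and s[i] in '+-':
        sign = -1 if s[i] == '-' else 1
        i += 1
    value = 0
    while i < len(s) and '0' <= s[i] <= '9':
        value = value * 10 + (ord(s[i]) - ord('0'))
        i += 1
    return sign * value

print(atoi('-42'))   # -42
print(atoi('123x'))  # 123 (stops at the first non-digit)
```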
Unless the end result is a completely incompetent applicant getting an offer, I wouldn't call the system "horribly broken". Very few companies are in the business of reversing linked lists, and hence make binary decisions depending on the outcome of just that one question.
I don't think gaming the system is that trivial - an interview is a conversation, so things like computational complexity, runtime restrictions, data structures, implementation differences in various programming languages, and other things are bound to come into conversation. Now, if one can maintain that conversation as it moves along, and seem like a pleasant person to work with, it seems that they're no longer purely "faking".
Exactly. I've had lots of candidates basically pass the initial question and then sink their own battleship by volunteering some running commentary that happened to be totally wrong. Memorizing enough crap to impress an interviewer about even a basic reverse the linked list question without understanding it is actually quite hard. It's not enough to remember to drop the terms stack and heap, you have to remember which one is which.
I also don't think that what the article describes is "gaming" the system, except maybe for the part about pretending to have not heard the questions before.
Generally, the reason I ask such questions of candidates is to make sure I'm dealing with someone who can actually analyze a problem and get a reasonable result. So if they nail the questions because they took time to study and get good at interview-style brain teasers, that's actually fine with me - it proves the person can execute a plan to get correct results.
I'm not sure I'd call anyone who successfully "games" the system incompetent.
Going through the details is the way to estimate a candidate's true knowledge of a subject. Ask an easy question, then a very hard one, and repeat depending on the answers, i.e. basically do a binary search:
- Do you know what SSL/TLS is? general
- What's the maximum size of SSL protocol record? specific
- What's a cipher suite? general
- How is a DH-anon suite set up? specific
...
- What is a linked list?
- What is offsetof (and how is it relevant)?
...
- What is TCP? general
- What is TCP/Vegas? specific
...
This is the quickest way to converge to an actual level of expertise, and it really helps with filtering out people with bullshit resumes and those who cheated their way through an offline problem solving. We would say something like "You are not expected to answer all questions, we are just trying to understand what you know and what you don't know."
I recently interviewed for a position with this sort of technical interview. I felt it worked very well. The interviewer was able to cover a wide range of topics in varying depth, and could see what I could easily recall and what I struggled with.
With the right set of questions, and some hints when the candidate struggles, you can very effectively gauge a candidate's limits on a subject, and see how they try to puzzle things out when they don't know an answer but know enough to take a good guess.
The key distinguishing factor of this method, IMO, is a candidate cannot memorize enough answers to bluff their way through a good questions list. If a candidate really is an N out of 10 with technology $FOO like they claim, they should have no problem answering most/all of your 50+ N/10-level questions for $FOO (my technical interview consisted of ~2 hours of this sort of interrogation for a straightforward junior web dev position. Interview length, in this interviewing style, is important for ensuring you've exhausted a candidate's book knowledge).
There's always a downside to every approach though. Some people get flustered if they have to say "I don't know" too often, so instead of bouncing from tough to easy, I'd probably prefer to progress from easy to tough.
People will feel confident until they hit their limits. Their limits are what you are looking for (presumably), not their ability to think clearly when they are scared they just blew the interview on a question you didn't really expect them to answer.
Yeah, that was more or less the point of the post. For applicants, don't feel nervous, it's a silly game just to see if you know some of the puzzles.
For people hiring, go ahead and use this as a first-pass filter, but realize that it's easy to game (by someone who's trying hard and learning, so there is a positive signal in that). It's going to be important to go through and work with them for a period of time in a let's-date-before-we-marry relationship. As an employee, I always insisted on a contracting relationship before an employment relationship.
Like I said in the footer, Heroku really seems to do this the right way, and it shows. Kudos to them.
In all my history of interviewing people, I've found there's almost no information to be had in correct answers to any kind of question. You learn about a candidate once you guide them into territory where they fail.
Both where that line is, and how they handle the failure tells you a lot.
(Caveat: make sure you keep it on the fail/no-fail edge, and treat the candidate with respect. The point is not to show them that they're wrong, but to explore the boundaries of their skill set. Not being mean is a major goal here.)
And if you can't push the candidate there: HIRE THEM. Right away. If they know that much more than you do that you can't get them beyond their limits, they are a fantastic addition to your team. (Provided they match personality-wise, too)
I've done the same - keeping on pushing until the candidate is on uncertain ground. (I interview for consulting/analytic/business/senior management roles.)
As most of a job is about doing things that haven't been done before, getting the candidate into unknown territory is when you really find out how effective they will be. Some people just stop, others switch seamlessly into problem solving mode and come up with an answer that may or may not work, and also with a plan to test whether it does.
One of my favorite interviews was like this. The interviewer started by asking me some of the basic logic problems. I had heard most of these many years before. He actually appreciated it when I stopped him mid-question and finished the question for him. He asked another one, I finished the question for him. He saw that I had a math degree and asked about a particular problem involving infinity that he had heard about, and I gave him the summary of a half-hour talk I had given on the topic. Finally he sat down and he told me about a startup he had worked on when he was young and we talked through the technical problems. He stumped me most of the time but we both agreed that it was a very enjoyable interview. Funny thing is, that was also the only interview I had ever been on where I didn't get the job. Probably because I didn't wear a tie to the interview, and this particular company is apparently known for requiring ties.
That's why I always asked for dress code before the interview. You may feel slightly stupid when you ask, but you'll be dressed right when you come to the interview.
This sounds like a great technique. It also gives both sides the chance to talk a problem through, bounce ideas off of one another, and see if there are any interesting insights to be had on the other side of the table.
A candidate and interviewer both have to be fairly relaxed and comfortable to do it properly, but that's more of a problem for people new to the interviewing process (probably not senior management roles)
It also gives both sides the chance to talk a problem through, bounce ideas off of one another, and see if there are any interesting insights to be had on the other side of the table.
Some companies do this intentionally without hiring anybody in order to double-check their plans. It's basically free consulting.
That can be gamed as well. The OP could have let himself be pushed further by getting it done faster, but he did the "silly but adorable" thing of solving it with some mumbling and hesitation, which seemed to convince the interviewer they had seen his learning process. Someone more devious — betting on an unusual interview style — might have failed some answers on purpose.
"you need to know the subject better than the person that you're interviewing. If you don't, you have no hope of screening for good people."
I think it is neither necessary nor possible for the interviewer to know more about every subtopic than every candidate. It is enough that they have comparable knowledge in most topics. Good candidates can easily know more about some parts of the subject (and maybe less about others). Being a good interviewer is a really tricky business, because good interviewers should see the strengths of candidates, and should recognise when the candidate knows even more than them about some subtopics. I've seen bad interviewers who asked only about the very narrow topics they knew well, and basically wanted exact copies of themselves.
It sounds like in this case, this guy might be a good candidate - look at all the effort he put in and his ability to remember things, think critically, and learn.
On the other hand, there are plenty of competent programmers not willing to put up with this nonsense, and the companies are going to miss out on them.
All that is true, but there's a simpler fix: if companies stopped reusing each other's questions, gaming would be significantly harder. Nobody should be using "reverse a linked list" anymore - that's a really old one.
That's not a simple fix. If you don't use "reverse a linked list" anymore, then you start asking increasingly harder and harder questions, and the bar gets ridiculously high, to the point where everyone needs to know how to solve NP-class problems within 45 mins. And even then the system still can be gamed by simply memorizing more and more questions. It's a lose-lose situation.
You just have to make the problems so diverse (not necessarily hard), that genuinely understanding the solutions is the easier strategy than rote learning.
Can you give me an example of a question that isn't located in glassdoor.com and is "diverse but not hard". As far as I'm concerned, the bar is already ridiculously high, to the point where I feel like the only people that get jobs are the ones who randomly get lucky enough to draw 3 or 4 interviewers that ask easy questions.
There are people I know personally that have gone through every single one of the 1000+ Google interview writeups in glassdoor in order to prepare for interviews. One of my former co-workers showed me 10 pages of questions he distilled from glassdoor of various questions that were reported, and he memorized the answers to all of them, ranging from "find the common descendant of two nodes of a directed graph", to programming an AVL tree on the spot, to pure dynamic programming questions.
If they asked these types of questions 15+ years ago when I started, there's no way I could have gotten into programming. Even now, those questions are simply too hard for me unless I sit down days before the interview to work out the answers. There's no way I would be able to figure them out within 45 minutes, let alone 2 hours, and I'm not dumb. It's ridiculous the level of expectations people have these days.
> Can you give me an example of a question that isn't located in glassdoor.com and is "diverse but not hard".
Sorry, `diverse' is an attribute of the distribution of questions you ask. Not of any single question. If for each interview you take a random question from glassdoor.com (and perhaps even tweak it a bit, if you feel like it), that might even meet the definition of a diverse but not too hard selection. (Unfortunately, I know almost nothing about glassdoor.com, and what kind of questions they have.)
When applied to me the problem is probably the other way round: I like all those silly questions and solve them in my free time for fun. (I did study mathematics, because I like this kind of thing.) But honestly, that doesn't help me much in writing non-trivial programs, and there's a bunch of people I know that are worse in those interview-type questions, but better at developing software.
But the clever thing is that the work required to answer these questions is a filter in itself. Firstly, it filters for the seriousness of the interviewee - have they done their (days of) preparation? And even then, only a few people will have the ability to actually understand the quiz questions, even with the answers right in front of them.
The only problem is the filter for being good at cramming interview questions is not quite congruent with the filter for being a good programmer/developer/engineer/$role_youre_hiring_for.
as someone who successfully built a great dev team at a startup, i can say that this is a sign of sheer laziness on the interviewers' parts. which is sad, because in a small company, hiring is one of the most important things you can do. there is no real excuse for having an interview so generic that someone can game it in this manner unless you're so large that you can afford the odd charlatan.
> there is no real excuse for having an interview so generic that someone can game it in this manner unless you're so large that you can afford the odd charlatan.
I'm also part of a small start-up (as an employee), and I can certainly vouch for that. I for one put a very high emphasis on the personal/professional projects the guy/dudette in front of me has been involved in, on how best s/he can explain them to me, on how passionate the person seems when explaining all this (and no, I'm not talking about the enthusiastic approach that can be easily faked, I just think that when you're a programmer you can feel what the other programmer in front of you is very interested in).
Also, this mania a lot of interviewers have with the interviewees "always ticking the right boxes and giving the correct answers" sort of scares me. It's probably because I work in a start-up, but I can assure everyone that at least half of the time we ourselves don't know the answers (assuming that we even know the correct questions), and there are lots of times when we seem to know what direction we're going in, but truly speaking we don't.
Programmer interviews are famously terrible. Take a second to think about FizzBuzz, why anyone ever thought we'd need something like it, and how it actually continues to serve a purpose.
Everyone thinks they're better at interviewing than they are. They often have a survivorship bias. In clueful companies, it's not that dead weight doesn't get hired. It just gets washed out. Interviewers tend to remember the people they end up working alongside.
There are dev questions that are harder to study for. They tend to be open ended. A few of my favorites:
* Describe some code that tends to follow you from project to project (I mostly interview C devs, so I tend to ask this as, "what's in libyou.a?"). What's in your bag of utility functions?
* What's the worst library you've ever worked with and why did you hate it?
* Estimate the amount of time it will take you to complete system X. Drill down, both by forcing the candidate to specify components (do an informal design exercise) and then cost each of those components. Estimation is an extremely important skill regardless.
* What's a piece of functionality that tends to crop up on lots of projects that you'd never implement yourself if you had the option of using a library? (Usually, I'm trying to get C programmers to tell me that they would not in fact hand-hack doubly linked lists out of structs with nested struct pointers).
* Describe the last system you contributed significantly to; how did those contributions break down structurally (did you write libraries for X, Y, Z; did you write plugins for X, Y, Z; did you extend the engine in X, Y, Z ways). Now describe the trickiest bug you ran into on that project.
For C devs, I always used to ask, "You're testing a component you just wrote and finding that it crashes in malloc. Diagnose the problem." This is less relevant now, but there are probably surgical questions you can ask e.g. a Python dev that verify that they actually have the experience of working professionally in the language.
These are great questions. The one question I use with experienced hires is "Talk to me about the hardest bug you've ever worked on." I will keep drilling down to deeper and deeper levels to figure out how much they actually know, as opposed to memorize.
I'm not sure I understood you correctly about FizzBuzz, but if you're saying that it isn't necessary then I'd have to disagree. I wish the world was such that FizzBuzz wasn't an effective filter, but it is.
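For reference, the task itself is tiny. A minimal sketch in Ruby (the exact wording of the question varies, but this is the classic form):

```ruby
# Classic FizzBuzz: for 1..100, print "Fizz" for multiples of 3,
# "Buzz" for multiples of 5, "FizzBuzz" for multiples of both,
# and the number itself otherwise.
def fizzbuzz(n)
  if n % 15 == 0 then "FizzBuzz"
  elsif n % 3 == 0 then "Fizz"
  elsif n % 5 == 0 then "Buzz"
  else n.to_s
  end
end

(1..100).each { |n| puts fizzbuzz(n) }
```

The point is precisely that it requires nothing beyond a loop and a conditional, which is why failing it is such a strong signal.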
Thomas is saying that the hiring process for programmers is so comprehensively fubared that asking the FizzBuzz question to someone who has worked as a senior developer for 8 years at a well-regarded firm provides valuable signal, and that if you fail to ask it you might hire people who are completely incapable of programming even with your fancy-dancy recruiting process - which shows that the pipeline and the standard interview process are both borked like crazy.
Essentially: It is not common for doctors to be asked to locate the interviewer's mouth, ears and thumbs. If an interviewer asked that kind of thing, you'd really have to suspect that he couldn't tell a good doctor from a bad one.
But programming interviewers do need to ask this kind of thing. So you really have to suspect…
The problem isn't that we're asking doctors to locate their mouth, ears, and thumbs.
The problem is that the resume and candidate pipeline is so crappy, that the interviewer cannot be sure if the person sitting across the table is a doctor, or a plumber, or a carpenter, or an accountant, or maybe a bum off the street.
My interviewing life would be a lot easier if there was a firm guarantee that every candidate knows at least basic programming - and by basic I mean basic - i.e., can put together a for loop that compiles, in a language of their choice.
The second part of this problem is that the industry has so many jobs where the absolutely clueless can survive that "X years experience" in-industry is not a trustworthy metric for competence. "5 years at County General" for a MD, with a clean record, is a pretty decent guarantee that your candidate is in the ballpark. "5 years at Accenture" for a programmer guarantees nothing, not even the ability to write FizzBuzz.
Elsewhere in the thread someone mentioned being incensed that, even with his years of experience, he was being asked elementary questions. That's why - years of working experience is not a valid signal for competence.
> "5 years at County General" for a MD, with a clean record, is a pretty decent guarantee that your candidate is in the ballpark.
Scary but true: There are plenty of experienced doctors who couldn't pass a medical Fizzbuzz. My supervisor at my last teaching hospital, who has been treating patients for at least five years, failed the Fizzbuzz question I asked him. ("What's that other class of antibiotics you can't give patients with penicillin allergies?", if you're wondering.)
(There's also a few stories of people faking medical licenses and working as doctors for years before anyone noticed [1], but those are probably too rare to worry about.)
At one Beijing-based start-up I've been working with full-time, fewer than 30% of the senior iOS dev / web dev applicants who had already passed the resume screen were able to complete FizzBuzz within an hour. Of those who could, only about half could code a fibonacci function.
On the other hand, there are lots of applicants and a few are very talented people who will work for under 2k USD per month. In your terminology, there's "a wee bit" of human capital here.
I think this conclusion is wrong. The fact that FizzBuzz exists actually means that programming interviews are awesome: with most other professions, you would end up hiring a candidate equally as incompetent as the programmer that failed FizzBuzz.
No. You do not have to ask doctors or lawyers or taxi drivers fizzbuzz-level questions. If applicants for doctor jobs were as incompetent as programming applicants, you'd have to ask them to locate the ear on the human body. Doctor applicants aren't that incompetent; programmer applicants are.
I've had plenty of taxi drivers who could barely drive, let alone find any given address in their city. And as for doctors, see my answer to potatolicious above - I've seen some terrifyingly incompetent ones who really should have been screened out with a fizzbuzz level question.
People talking about how he "hacked the interview process" are missing the point: as a hiring manager I'd gladly hire this guy even knowing that so much of what he did was not entirely genuine. He took a problem (I need a job), implemented a solution (applied and interviewed at a TON of places in that short a timeframe), and successfully iterated on that solution until he had completely mastered it! In a very short time period he went from being unable to solve a variety of programming problems (or solving them poorly) to internalizing the how & why of the optimal solutions - this is what good software development is about!
I care a lot more about initiative and problem-solving skills than I do reversing linked-lists - this guy has the former two in spades.
Sure, if you're looking for someone who is hard working and willing to learn, then this type of candidate is perfect for you.
But if you're screening for candidates with development experience who you can rely on for technical knowledge, the idea that someone can game your interview process in less than two weeks without previous knowledge is horrifying. But I'm guessing that people looking to hire experienced candidates aren't basing hiring decisions on questions about reversing linked-lists.
"....this process, starting from ~50+ raw leads whittled down into ~40 initial phone interviews/coding challenges, which went to ~25 next stage interviews, eventually down to ~6 really good offers, all in about ~1.5 weeks."
Does this timeframe seem a little dubious to anyone else? I count 65 technical interviews plus interviews with hiring managers, etc., followed by receiving offers. And all of this happening in about 8 days. This story would be a lot more believable if it had a realistic timeframe.
Nope, it's true. I woke up early in the morning (easy to do sleeping on the kitchen floor in mountain view without a heater), went through craigslist every morning, had an email template I'd individualize for each posting, and send it out. I'd work through the code challenges as they came in, set phone interviews into the late night, in-person interviews in the city during lunch, etc.
It happened, I took it pretty seriously because of the impending cliff of being homeless.
The 25 next-stage interviews are the only part that sounds even slightly unrealistic to me. The phone interviews are 30-60 minutes each, so 40 of them is no problem (20-40 hours of work).
The next-stage interviews are going to vary from 1-2 hours to a full day, so I assume he didn't do many full-day interviews in that 10-11 days, or probably 7-8 workdays.
But startups are often far more willing to do remote interviews, interviews at weird times, interviews on weekends... It would be a packed 1.5 weeks, though.
Besides the apparent mismatch between hours in a day and hours necessary for all that to happen in 1.5 weeks (well, it's technically possible), what surprised me was simply that 40 companies out of 50 were responsive enough to schedule phone interviews in that time. And next-stage interviews after that. And make a decision to hire him. Especially for a self-proclaimed bad coder.
So, I agree with you, but this is not really the point of the post. The point is that interviewing is a game that you can master with practice.
Each "technical interview" I've been to is about 3-6 hrs. I've yet to see a Silicon Valley technical interview where it's not at least 3 hrs, meeting with 3 people. So right there, fitting 25 "next stage" interviews in 1.5 weeks is bullshit.
Where is he going to find the time for 40 initial phone interviews? That requires talking with the recruiter, and then the recruiter scheduling time with a developer. Anyone who has realistically interviewed in Silicon Valley knows that recruiters are very slow, and things tend to get muddled up very easily.
In 1.5 weeks, assuming they don't interview on weekends, that's 6 1-hr phone screens plus 3 on-site interviews per day. The logistics simply don't work. It's a complete lie. If he had said 1 month, then it still would have been impossible with 1.5 phone screens and 1 on-site per day, but it would have been slightly more believable than 1.5 weeks.
There is a spectrum. The elite companies have all-day interviews (Google). Middle-tier tech companies have half-day interviews (Amazon). Big non-tech companies have a couple-hour interview (Bank of America). Small companies can vary, but I've gotten hired off a quick IRC chat. I have no idea what the average startup's culture is like, but they probably don't have 5 hours of developer time to waste on each candidate. So instead, they have a short interview instead of a long one.
Sorry buddy, but please explain the complete discrepancy in what you reported.
How do you fit 3 on-sites and 6 interviews per day? Even if your onsites were 1 hr long and they offered you a job after a single interview, you still have to drive back and forth to each location. How did you have time for the 6 phone screens per day?
I didn't have a car actually. I just scheduled all the interviews as close together as possible every day, and walked around SOMA/FiDi from morning until evening going through different stages of different interviews - initial chat (morning coffee/lunch/dinner/drinks), whiteboarding session, meet with the team, build a mini-project, discuss final offer.
Each successive stage is more time intensive, but there were also fewer of them. There's a lot of noise, especially in the beginning, so a lot drop off very quickly. From there, it was just a matter of tightly packing the schedule and doing it from day to night.
Like I said, the "you'll be homeless again very soon" factor was a big one for me.
Even more interesting then, because you just lost 2 hrs every day commuting back and forth on Caltrain.
"Each successive stage is more time intensive, but there were also fewer of them."
You just said you had 25 onsites. Unless you knew beforehand that you were going to cut out of them early, there's no way you could have scheduled 3 per day. And at 6 phone screens per day, the last 2-3 days worth of phone screens occurring near the end of the 1.5 week period would not have been able to produce an onsite within 1 or 2 days, therefore, that means most of the 40 phone interviews must have been front loaded in those 1.5 weeks, with the onsites being back-end loaded. Which means that your density of onsites would have been higher than 3 per day.
It's a great story, but you should use more realistic numbers next time.
I was flabbergasted that someone was asked to implement a linked list in Ruby. My initial reaction was 'That is an incredibly stupid thing to do.' My slightly more in-depth reaction was "Isn't that what mutable arrays are for?" And my final reaction was, "Maybe I'm missing something." At which point I tried to figure out the benefits of a linked list over a Ruby array. I wasn't able to come up with a good reason to use a linked list in Ruby, but maybe someone could enlighten me.
Linked lists are overused even in C, where they have a natural expression.
The reason to use a list instead of an array is that it's cheap to insert/delete anywhere into a list, but expensive to do so in the middle of an array.
There's probably no expression of a list in Ruby where insertion is cheaper than inserting into an array. Perhaps if your "list" is a secondary Fixnum index on a larger pool of objects? At any rate, that's no longer a "linked list".
I think it's a silly question. I hope the expected answer is, "why would I ever do that?"
I think you are looking at this from the wrong angle. Just because it made a good interview question (for the interviewer) doesn't mean it's something you'd actually want to use in everyday practice.
Linked lists and arrays typically have different perf characteristics (although I don't know how Ruby arrays work).
For example if I give you an element of an array/linked list and tell you to insert a new element adjacent to it -- a linked list is a constant time insertion, whereas arrays are typically O(n).
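To make the asymptotics concrete, here's a minimal sketch (`Node` and `insert_after` are my names, not anything from the thread): splicing in after a node you already hold touches two links, while Array#insert has to shift every element after the index.

```ruby
# Node holds a value and a reference to the next node.
Node = Struct.new(:value, :next_node)

# O(1): inserting after a node you already hold is two link updates,
# regardless of list length.
def insert_after(node, value)
  node.next_node = Node.new(value, node.next_node)
end

head = Node.new(1, Node.new(3, nil))
insert_after(head, 2)   # list is now 1 -> 2 -> 3

# O(n): Array#insert shifts every element after the index.
arr = [1, 3]
arr.insert(1, 2)        # arr is now [1, 2, 3]
```

Whether that O(1) ever wins in practice is exactly what the rest of this subthread argues about.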
That's a sensible point, but interpreter and runtime overhead crushes any cost savings you might get from adjusting links instead of the whole array backing store.
(Ruby arrays, at least in MRI, are basically STL vectors).
1000 inserts to the middle of a 1,000,000 element Ruby array happens so quickly you can barely perceive the delay. The same insert pattern to a basic Ruby linked list sets my machine on fire.
Wow, that sounds painful. So do people not use things like graphs or trees in Ruby due to perf? I actually used to do Python development -- about 13 or so years ago, but had to quit due to perf just being abysmal for applications intended for customers (and started focusing on C++). Is Ruby today similarly bad for apps that make heavy use of data structures like trees/graphs as Python more than a decade ago?
The key to understanding languages whose runtimes interpret an optree is to realize that all performance depends on the number of ops. When you implement a linked list in pure Ruby, that's a lot of operations that the runtime has to keep track of. When you insert into an array, the actual work is done (in C) in a single operation. Less bookkeeping, less overhead. If you implement both arrays and linked lists in pure Ruby, you'll see the performance you expect. If you implement arrays in C and linked lists in Ruby, the array will probably be faster for any workload. (But implement the linked list in C, and you'll see the performance you expect again. It's computer science, not magic.)
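A rough way to see the bookkeeping cost (my illustration, not from the thread; exact ratios vary by VM) is to run the same traversal once as interpreted per-element ops and once as a single built-in:

```ruby
require "benchmark"

arr = Array.new(1_000_000) { 1 }

# Interpreted path: the runtime dispatches a block call and an
# integer add for every element.
t_interp = Benchmark.realtime do
  total = 0
  arr.each { |x| total += x }
end

# Native path: a single C-implemented method walks the whole array.
t_native = Benchmark.realtime { arr.sum }

puts format("interpreted: %.4fs, builtin: %.4fs", t_interp, t_native)
```

(Array#sum requires a reasonably modern Ruby; substitute `inject(:+)` on older versions.)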
13 years ago. When the Pentium II was around. This is what I don't get about questions about linked lists.
They're so utterly irrelevant in 99% of startups today, where knowing how to lay out your classes and methods for maintainability is so much more important than any trivial and pointless performance gain.
The complexity of solutions is at such a higher level now than it's ever been, and yet we're still worrying about structures that shave less than a ms off something that's called once every 10 seconds.
The reason why ruby and python have grown in dominance in the last few years is because you don't actually have to worry about this any more.
I don't generally ask about linked lists, but I will say that if you can't reason about them, it's a pretty big red flag. They're an extremely simple data structure.
With that said, a lot of startups (and web apps in general) are effectively consumer CRUD apps w/ nice images and transitions. Perf generally isn't important until scale becomes an issue (and until then your main perf bottleneck is some DB and/or network code that someone who cared about perf wrote).
So I agree. It makes sense to use effectively a domain-specific language/framework like RoR when you're in that domain. And linked lists may not be applicable. But I'd still be wary of hiring people for whom reasoning about them is off-limits.
Well, to start with, the issue here isn't that Ruby doesn't efficiently support the "modify-in-the-middle" access pattern. It's just that Ruby does this with the Array class. Similarly, even if you intend to insert into the middle, you probably still want to use NSMutableArray in Objective-C.
Even on fairly large graphs --- say, graphs at the scale of basic blocks in a program, but (obviously) not on the scale of "recommendation graph at Amazon" --- you shouldn't be burning huge numbers of cycles seeking through reference links. In naive reference-link implementations, Ruby sorely increases the constant factors for graph analysis, but computing a minimum cost spanning tree isn't going to kill you (especially because the intermediate data structures you'd use to do it would be native-code Arrays and Hashes). On the other hand, linked lists more or less pessimize Ruby, forcing it to do nothing but expensive interpreter looping, pointer chasing, and object management.
Having said all that, for serious work, I'd keep my data structures outboard, probably in Redis.
Other people would probably answer "just write a graph library in C; for instance, you could write a C wrapper around void-star-specialized boost::graph", then bridge it to Ruby with FFI.
Ruby is as good as C for I/O-bound problem subsets. This is a lot of problems, and so having a language as pleasant to write in as Ruby is a win (you could say the same for Node.js and maybe even Python). It is obviously not the only language you'll ever need, though.
My first guess would be that a FIFO queue is more efficient as a list than an array, but maybe there's something weird about ruby that makes that untrue. I'd love to hear why.
Unless you have an extraordinarily clever implementation of "linked list in Ruby" (in which case this is still a terrible interview question), no. Arrays are more efficient for every reasonable access pattern. If the links of your list are references to Ruby objects (which every online "linked list in ruby" page uses), they're not so much "more efficient" as they are "tractable" compared to linked-list's "intractable".
Something like that. rbx takes about 0.5 secs to start up, but after that it generally runs in the neighborhood of 10x faster, assuming you're spending enough time in actual ruby code to make that possible.
Can you show me your list implementation? I just installed rbx and tried this; the Array implementation ran in single-digit seconds, but I gave up and Ctrl-C'd the list one (which works fine for smaller lists).
Why do I get the sinking feeling I'm about to embarrass myself by implementing a linked list wrong? Anyway, here's the test case I used, where I'm emulating a FIFO.
    class Link
      attr_accessor :next, :v
    end

    i = 2000
    j = 10000
    k = 100

    if true
      p "linked list"
      j.times {
        head = Link.new
        tail = head
        i.times {
          n = Link.new
          n.v = 12
          tail.next = n
          tail = n
        }
        (i-k).times {
          n = head
          head = head.next
        }
      }
    else
      p "array"
      j.times {
        arr = []
        i.times {
          n = Link.new
          n.v = 13
          arr << n
        }
        (i-k).times {
          n = arr.pop
        }
      }
    end
1. Make a 1,000,000 node linked list of integers (I used doubly linked lists but I don't think it matters).
2. 1,000 times, insert into the middle of the list; I wrote a trivial O(n) stateless insert and called it with an offset of 500,000 1,000 times.
3. Make a 1,000,000 element Array of integers --- I just did "1000000.times.map".
4. 1,000 times call "insert" with an index of 500,000.
Step (2) takes so long I kill the process. Step (4) takes a barely perceptible amount of time. (Both in Rubinius).
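For the curious, a sketch of that benchmark (scaled down from 1,000,000 elements / 1,000 inserts so it finishes in seconds; `build_list` and `list_insert` are my names for the steps described above):

```ruby
require "benchmark"

Node = Struct.new(:value, :next_node)

# Step 1/3: build an n-node singly linked list (head returned) and an
# n-element Array.
def build_list(n)
  head = nil
  n.times { |i| head = Node.new(i, head) }
  head
end

# Trivial O(n) stateless insert: walk `offset` links from the head,
# then splice in a new node.
def list_insert(head, offset, value)
  node = head
  offset.times { node = node.next_node }
  node.next_node = Node.new(value, node.next_node)
end

size, inserts, offset = 200_000, 200, 100_000

list = build_list(size)
arr  = Array.new(size) { |i| i }

# Steps 2/4: repeated middle inserts into each structure.
t_list = Benchmark.realtime { inserts.times { list_insert(list, offset, 0) } }
t_arr  = Benchmark.realtime { inserts.times { arr.insert(offset, 0) } }

puts format("list inserts: %.3fs, array inserts: %.3fs", t_list, t_arr)
```

Even at this reduced size, the seek-heavy list inserts dominate the runtime while the array inserts barely register.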
It looks to me like:
* You're using much smaller data structures than I am
* Your "Array" case is still building Link objects, so still incurs the object management overhead
* You're inserting at the head of the list every time, which doesn't incur seek time. But inserts at the head or tail of a contiguous array don't need to seek or reallocate either.
If you have to seek into the list, of course it's going to suck, that's not a good application for a list. Your test would blow just as bad in C. I'm not sure that says anything interesting about lists vs arrays in ruby in particular.
I am using the same data structure, I guess it would be a little smaller without the next attr. Still seems fair. Building an array of objects is going to require allocating them. I build hand rolled linked lists by directly linking the objects of interest. Calling the class Link may have been a misnomer, it could have been called AnyClass.
It's easy to insert at the tail of an array (I'm actually inserting at the tail of the list, btw), but popping from the front means having to shift everything down. That's what makes the FIFO case interesting. If we're going to prove that ruby is too slow for linked lists to be viable, we need to be testing a scenario where linked lists generally are viable.
The most naive possible C implementation takes 2.5 seconds to run this same test. I just tried it.
The point of a list is "insert and delete from the middle", so I don't know how to respond to the idea that actually inserting and deleting from the middle of the list is a bad benchmark.
It's difficult to respond to your last point about FIFOs for a different reason. Popping 1000 times from a 1,000,000 Array is so fast that I'd have to write code to benchmark it. And Ruby doesn't even optimize for that case; in real code, I'd use a ring buffer so that pops are just pointer adds. Ruby is, to the best of my knowledge, actually copying every single time. Copies are just way way way faster than you seem to expect them to be.
The reason manipulating linked lists in the middle is faster than arrays is that you save the O(n) work of moving elements. But you lose that if you still do O(n) worth of seeking. There is no big-O difference between inserting and deleting in the middle of an array and a plain linked list if the linked list is forced to seek.
I'm sure this is as obvious to you as it is to me, but perhaps the perspective it comes from explains why people are thinking that your described operations on the list are a dodgy benchmark for judging linked lists vs arrays; absent constant factors, they should perform the same. (And yes, for reasonable input on modern machines, the constant factors will dominate.)
I'm sure I agree with everything you're saying here, but it carries a whiff of tautology. You lose the benefit of middle-insertion in a list if you have to seek, but part of the point of using an array is not having to seek ever.
Anyways the only thing that moved me to comment is the general inferiority of linked list data structures compared to arrays, which are what Ruby (sensibly) uses.
I'm reading this thread and kind of tearing my hair out.
One theoretical case where lists beat arrays is seeking to the middle (once) and inserting 1000000 items in that one spot.
If you want to test if lists can ever be useful in Ruby then you want to test the case that they are theoretically useful. Your test is a theoretical dead heat as arrays and lists both perform O(n) in it. All you learn is that arrays are faster in Ruby for the same complexity, which we knew already.
I probably shouldn't have replied and dragged it out, and I apologise for my tone but you seem to be almost wilfully missing the point.
There are problems where a linked list is theoretically better suited than an array. Your test is a problem where they are theoretically evenly matched. The interesting question is: do the benefits of native code etc, that you get when using an array outweigh the costs that you incur for using a non-optimal data structure for a given problem? To answer this you would need to come up with a situation where a list should be faster if the array and list were both native or both higher level ruby implementations. Then compare the actual running times to see what impact native code vs ruby code has.
All your test shows is that in a problem where neither data structure has an advantage in complexity, i.e. they both perform in O(n), the one backed by native code is faster. My assertion is that is a pretty boring thing to discover and "we", or most people, would guess that anyway.
I think we're not communicating well because we're deep in a tangential subthread.
I take your point that repeated insertion at a specific held reference to the middle of a list is faster than insertion into the middle of an array.
I'm just saying that in Ruby, where every list node incurs object overhead and where list iteration is done by tree walking and array referencing is done by pointer dereferencing in the background, lists underperform arrays even in cases where you'd expect the algorithmic complexity of a list to yield a big win.
Of course blitting half of an array will be much faster than dereferencing an equal number of list nodes. I don't think you would want to use a linked list in a situation that resembles your benchmark. If you were going to insert many more elements at a time than 1 (like, say, all 1,000), or if you had to traverse the entire list and also update some parts of it, then I would expect it to do significantly better.
The point of doing 1000 list middle-inserts isn't to see how fast 1000 list middle-inserts are; it's to capture how much faster those inserts are (or aren't) than the same number of Array middle-inserts.
As it turns out here: Arrays way faster than lists.
This whole thread has gone off the rails a bit (albeit in the most enjoyable possible way --- the kind that makes us write code to test assumptions). The real point is: linked lists in Ruby are pretty silly, and an even sillier interview question.
All I meant to say is that on MRI, you might as well give up on reference-based linked lists; they just don't work at scale.
I think that, sadly, this is true of almost any well engineered data structure. MRI's overhead is so vast that even if you perform an operation at, say, O(n) with the logical solution, using a naive built-in or iterative technique with retrieval of O(n * n) will be faster up until the often rather distant point where the lines cross.
It's just as bad on Rubinius and just as bad on JRuby. I think this has more to do with tree-walking (and using a data structure that intensively requires that) than simply MRI overhead.
Are these kinds of interview questions really that prevalent? From the hiring side of the table, I can understand wanting to quickly know whether somebody knows some of the "basics" like data structures and whatnot. But I can't help feeling that you're needlessly limiting your applicant pool to only those who got a CS degree (and paid attention during the process).
Why is so much emphasis placed on the theory of how linked lists work (for example) instead of the practical application of where they'd be used?
Edit: Big O notation is great and all, but shouldn't the emphasis be more on shipping than optimizing? You can optimize your MVP to death, but if you never finish it what good does that do?
The problem with "practical application" questions is that the answer almost always comes down to "it depends". The correct optimization or fix is specific to the actual thing you are doing; that's what makes it practical.
In order for an interview question to be useful, both the interviewer and the interviewee must be able to answer it. The applicant doesn't know the interviewer's product well enough to make accurate technical diagnoses. The person doing the interviewing doesn't know about the applicants' products either. Bogging the interview down in specifics to fill in the background is usually not helpful - plus, it gives lots of opportunities for the applicant to bullshit the interview by listing a bunch of technical details that might not actually have been accurate or important.
Theory questions are favored precisely because they are not dependent on nuts-and-bolts practical details. "Reverse a linked list, writing your code on this whiteboard" is general enough that most applicants can answer.
I understand that people who lack CS degrees might not know the answer. Interviewing means accepting that you don't get a 100% accurate result. I would only switch away from the "basic CS question" problems if I was convinced that some other question has a higher accuracy rating, and I'm not sure of that.
> The problem with "practical application" questions is that the answer almost always comes down to "it depends."
Agreed. Talking about application of theory is way too vague and leaves way too much room for bs. But I think my original point was unclear. I'm not suggesting that interview questions be "When would you use a linked list?" but rather "Let's hack on something using a linked list (something beyond just simply reversing it)." Like the article mentions, once you've built a linked list before and have optimized (and reversed) it to death, there's little to be gained from doing it again for somebody else.
Yes, most interviews are like this. "Here, solve problem X on a binary tree on the whiteboard, you have 15 minutes, go!"
I'm not good at it, I have to study for a while before going on interviews. The whole "study ahead of time to look like you are a genius coming up with the best answer thinking out loud" routine seems to work, although it feels dirty to me.
When I am the interviewer, I ask people simple questions about what mistakes they have made and how they worked through them, because in my own experience the things I do to solve problems I have created for myself are usually the most enlightening. My take is if you have some obscure bug you spent a few days/weeks solving and you can relate it to me, I tend to believe you, but if you don't have stories about this then you probably weren't really doing much programming.
Big O is a little odd, because it can be very important or not very important, depending on what you are doing. The ability to say "I know this has bad runtime" and then deciding if it's either worth fixing now or noting for later is important.
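To illustrate that judgment call, here is a hypothetical sketch (my example, not from the thread) of the kind of trade-off being described: a known-quadratic solution you might ship now, next to the linear one you would note as a fix for later:

```python
def has_duplicates_quadratic(items):
    # Known O(n^2): fine for small inputs; worth flagging for later.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_linear(items):
    # O(n) time, O(n) extra space; requires hashable items.
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False
```

Being able to say which one you wrote, and why, is exactly the "I know this has bad runtime" conversation.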
> But I can't help feeling that you're needlessly limiting your applicant pool to only those who got a CS degree (and paid attention during the process).
> Why is so much emphasis placed on the theory of how linked lists work (for example) instead of the practical application of where they'd be used?
> Edit: Big O notation is great and all, but shouldn't the emphasis be more on shipping than optimizing? You can optimize your MVP to death, but if you never finish it what good does that do?
I feel the third paragraph here conflicts with the second one. In effect, understanding Big O would help you identify where linked lists are applicable. You seem to want to be given an exhaustive list of where they should be used. Sorry, but that's not how programming works.
Secondly, I take issue with the "limiting your applicant pool to only those who got a CS degree". If you're going to work in the CS field, I think it's only normal that you are asked questions to see if you've got a grasp of the foundations of your job. You don't need a CS degree for that; there exists such a thing as self-study, and I think it's a very reasonable thing to require if you're trying to transition between fields.
And I take issue with "If you're going to work in the CS field." You're assuming that all programming/coding/software engineering/whatever jobs are in the CS field. There's a lot more out there.
I think this is more typical of positions being filled in Silicon Valley, where it appears that more programmers tend to have CS degrees. Or there are enough people with CS degrees that the rest of the community feels compelled to know and understand at least second year level CS theory (Big O and Data Structures).
I work in Texas, where this is not the case, and I've never been asked to implement a Linked List in any language. I have a CS degree from 10 years ago, and I can count on one hand the number of CS graduates that I've worked with in the past 6 years.
Because these things are immensely useful, but only if you know about them and can recognize when they will help in solving a problem. For example, I ended up implementing a linked list recently to efficiently allow editing of an ordered, traversable set of lines on a google map (a drawn route). There are other ways to solve the problem, but a linked list was the simplest as well as most performant.
Anyway, moral of the story is fundamental CS concepts come in handy if you know them enough to recognize how they will make a problem easier. Bonus points if you can whip it out in a few hours, which may be difficult if you have never done it before.
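The route-editing example above might be sketched like this (a hypothetical reconstruction with made-up names, not the commenter's actual code): a doubly linked list of waypoints, where inserting next to a clicked waypoint is O(1), versus the O(n) shifting an array would need.

```python
class Waypoint:
    def __init__(self, point):
        self.point = point
        self.prev = None
        self.next = None

class Route:
    """Doubly linked list of waypoints; editing mid-route is O(1)
    once you hold a reference to the neighboring node."""
    def __init__(self):
        self.head = None
        self.tail = None

    def append(self, point):
        node = Waypoint(point)
        if self.tail is None:
            self.head = self.tail = node
        else:
            node.prev = self.tail
            self.tail.next = node
            self.tail = node
        return node

    def insert_after(self, node, point):
        # Splice a new waypoint in after `node`, e.g. where the user clicked.
        new = Waypoint(point)
        new.prev, new.next = node, node.next
        if node.next:
            node.next.prev = new
        else:
            self.tail = new
        node.next = new
        return new

    def points(self):
        out, cur = [], self.head
        while cur:
            out.append(cur.point)
            cur = cur.next
        return out
```

The structure stays traversable in order while individual edits never touch the rest of the route.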
It's legal ageism. Asking questions whose answers will be fresh in the minds of recent CS graduates is a way to legally screen out everyone else, no matter how irrelevant the Big-O runtime of merge sort is to the job.
Personally, I always have to wonder how often someone programs a computer if they don't know the asymptotic complexity of a merge sort. It seems like something someone would either pick up at some point in their career, or be able to figure out on their own at the interview.
(Come to think of it, I never took an algorithms class in school, and I know the answer to this question.)
What's your definition of "programming a computer"? Personally, I always have to wonder how often someone programs a computer if they don't know how a four-state branch predictor helps their CPU.
The key insight that relates to the analysis of merge sort is that solving problems with computers always goes better if you divide them in half a few times. This comes up again and again; binary trees, binary search, and all the n log n sorts. Branch prediction is a little different; it's an optimization engineered into modern processors. Splitting problems into two is a fundamental property of the Universe. Therefore, it's less excusable to not realize that you do that a lot.
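The merge sort behind that n log n claim can be sketched in a few lines (a standard textbook version in Python): halving gives log n levels of recursion, and merging each level costs O(n).

```python
def merge_sort(xs):
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2                       # divide the problem in half...
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged, i, j = [], 0, 0                  # ...then merge in O(n)
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```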
I concede that your example is more applicable. Perhaps something relating to L1/L2/L3 caches and memory might have been a more suitable example than mine.
I'm still not fully onboard with your general claim though. Someone building the most brilliantly simple, computationally trivial Android calendar/planning/todo application might not care about dividing in half when all they have to do is Collections.sort(). Someone building a CRUD web app might see their most significant performance issue in HTTP throughput or latency. You'll be wondering how often they program a computer.
It's hard for me to imagine knowledge of Big O notation being a major limiting factor. I gleaned at least a superficial knowledge of it cruising Gamedev.net in high school, well enough that they didn't throw me out when I had to use it for an internship interview. It seems reasonable to expect people to have at least seen it. But perhaps I was an aberration.
We use these types of questions to allow an applicant to show us that they can, in fact, write a simple program (see http://www.codinghorror.com/blog/2007/02/why-cant-programmer...) and we use programming puzzles to make it more interesting for the bright candidates.
Are there any good sites out there that have a lot of these types of problems that are asked in interviews? Sites like http://spoj.pl seem to have problems that are longer-form. But it would be fun to start practicing solving the shorter ones that are asked, without the hassle of interviewing to get them.
I guess that one of the advantages of such an approach is standardization, in particular if your company is big.
Once you get a significant number of candidates, it may become more important to have some "objective" and quantitative way to rank your candidates by performance than to get a true understanding about how likely they are to get things done (how would you "measure" that?).
At that point, it becomes more important to ensure uniformity of judgement across the pool of interviewers. Isn't that why they often ask programming puzzles: so that they can compare performance?
If the price to pay for that is that some valuable candidate won't get an offer, they can live with that.
At least, that's how I see it.
Of course the argument for startups will be different :)
I'm not sure standardization of interviews is desirable in a large company. My experience is that large companies hire by team, so interview candidates get widely varying experiences depending on what team interviews them. This way candidates can be considered not solely based on technical knowledge but also on how well they fit with a particular team.
Few of these types of questions ask anything beyond the first year of a decent CS curriculum. If interviewers wanted to limit themselves to CS students, they would ask about virtual memory and require candidates to write proofs of algorithms concepts on the whiteboard.
I certainly agree that asking for proofs of algorithm concepts is a heavy bias towards CS grads. The issue there isn't so much with the concepts involved in the particular problem as with the structure of a proof and the approach to those problems in general. Once you've learned about induction formally, it's much easier to use it in a proof, even if you could understand the idea informally anyhow.
On the other hand, I'm not sure virtual memory is such a concept. I'm under the impression that virtual memory is a topic tons of people are talking about and that it's hard to get away from it even if you're a programmer without a CS degree. Of course, this could just be the result of my being a Linux enthusiast.
I use a similar approach. I spent 20 years as a consultant, so I did a lot of interviews. I would send out lots of emails. I didn't do many tailored emails, but I did write a carefully tailored cover letter designed to be amusing to the HR ladies. You have to get by them first.
I figured the first few interviews were throw-aways, just to get my interviewing skills back up to speed and to find out what this year's interview questions were. Yes, they go in fads. One year the programming task of choice was to reverse the words in a string. See it a couple of times and you get to be pretty proficient.
I keep a stock of questions to push back with. I haven't issued programming challenges, but I have asked, "What is your pain point? What technical issue is causing you problems?" Usually we get into a long discussion of the challenges they are facing.
A lot of discussion here about memorizing the programming questions, but I think the most interesting part of this is when he turned the tables at the Do You Have Any Questions For Me stage and really interviewed them.
I haven't asked anyone a programming question like he did, but I did ask some tough questions about the team, work environment, management style, and company goals (looking for a quick exit? in for the long haul? etc.) at my last job interview (after some good friends gave me the advice). It both gave me a lot of insight into the company and team that I was considering working for, and I think it impressed them that I was thinking about those things.
In short, know what you want from a team, manager, and company, and take time to interview the people interviewing you to make sure they have what you are looking for.
I don't understand why there is such a strong emphasis on these kinds of interview questions. I think showing a piece of nontrivial code which the candidate has created in the past tells a lot more about the candidate.
For example, this tells more about me than the interview questions I usually got: http://codeclamp.com/ccui.js , because it shows how I think long-term and how I solve relatively hard and relatively complex problems, not how I survive a quick interview (solving relatively easy algorithmic problems, but under very high time pressure). (By the way, if you have an interesting well paying job for me, I might be interested...)
I think a much better indicator of how valuable a person will be in your company is not whether he can solve puzzles on the fly but how much time he puts into coding. If a person is coding in his free time and keeping up with technology and experimenting even though his current job doesn't require it, that person is more likely to have a breadth of knowledge that will pay dividends over time. Something like that cannot be gauged by him/her solving a "reverse-linked-list" problem.
Furthermore, a person involved in the community is more likely to have in-roads with other strong developers, which again might pay dividends in the future in terms of hiring.
In my experience interviewing candidates, the two are very strongly correlated. A person who codes a lot, including in his free time, is more likely to have implemented a wide variety of things, and will be much more hands-on in solving these kinds of riddles/exercises, than a person with only book knowledge who has coded the same kind of stuff for years on end.
I had a similar experience interviewing after being off the job market for quite a few years: You will be rusty initially, but then you will pick up steam.
Just like in your "real life", solving interview problems is not about reinventing the wheel with a 100% custom solution; it's about pattern recognition, where the pattern is a problem you know a solution for. The more problems you solve (and possibly fail at), the more data your internal pattern-recognition algorithm (your brain) has to help you with the solution.
I don't see it as gaming the system; I see it as being able to prove that you are capable of problem solving.
Yeah, it's a positive signal that someone was able to pick up from past failed interviews. It's a weak filter, but it's fine to have. From there, it's best to work with the person on short projects, and then hire them as a contractor to find out if they're a culture match and can solve the problems you're facing at an acceptable rate and in a maintainable way before hiring them on as an employee.
I can't post to the site, could somebody who can (or can contact the author in some other way), leave a note to the author, notifying him of a typo in "For people hiring, remember that anyway can pass yours tests with enough experience interviewing, even someone as bad at coding as myself." (It should be `anyone' not `anyway'.) Thanks!
This article really indicates to me the fatuousness of this sort of problem/solution interview approach.
I agree with timr in that a more intelligent approach is to throw a problem at the candidate, then drill down into their thinking about the problem.
It's a bit silly if you're hiring based on regurgitation of algorithms in my opinion.
I certainly agree. However, I haven't actually seen any pure problem/solution interviews--and this is just for internships where you wouldn't expect the companies to be as thorough. I had questions ranging from "find and fix the problems in this code" to "how would you design a library to do..." to "propose a solution to this problem and prove it works". This might be because I mostly talked to smaller companies.
I had a similar experience. This past Fall, I was interviewing with multiple companies, and I received questions that were very similar... some being the exact same thing. I was nice, however, and told them up front that I had seen the problem before and asked for another.
I don't regret doing that; some may have passed the interview by keeping quiet even though they had seen the problem, but those people won't go far in the world because they cheat. They will eventually get called out for it and lose their reputation.