Interview Programming Problems Done Right (cforcoding.com)
92 points by cletus on Jan 5, 2012 | 79 comments



Neither this nor similar whiteboard questions make a good interviewing problem. Not because the problem is particularly difficult or uninteresting, but because no matter how much the interviewer wants these questions to be just 'signals' of your ability, the candidate's performance at the whiteboard will likely be the determining factor for landing the job. The whiteboard gets too much weight, because it's too hard for an engineer (or likely anybody) to ignore somebody having a bad moment and flailing on something that seems so trivial to the interviewer.

And this is why Google's interviewing process and reliance on these problems is terrible.

You are put in five one-hour sessions where you are asked Pascal's-Triangle-esque whiteboard problems. Get a bit intimidated? Stressed because of the interview? Miss your morning coffee? Start to panic a little bit? Suddenly you stumble on one or two of the problems and make an embarrassing impression.

At this point, it does not matter how strong your portfolio is, how many recommendations from fellow Googlers you have received, past performances, or your samples of past code (all of which are much stronger signals than a whiteboard problem). You will not be getting the job.


I hear this complaint a lot, but I kind of don't get it. I just don't know any way to make job interviews fair for people who are having a bad day. Can we just stipulate that any interview technique will fail to recognise the potential of some candidates? False negatives of that sort are kind of inevitable. And from the point of view of a hiring company, a few false negatives aren't the end of the world. A false positive is way more costly.


Because a candidate with a mediocre background yet a strong showing on the whiteboard will get the job. The problem isn't not hiring somebody having a bad day. The problem is the over-reliance on the whiteboard signal, despite a candidate's history of contradictory evidence (either positive or negative). If you are going to use typical whiteboard problems as a signal, it is really, really hard not to weigh the results heavily, especially when they're sour.


Your argument is that using coding tests under interview conditions is bad because a poor interviewer will overweigh it in their assessment. But the same applies to everything that takes place in an interview. Some interviewers will make up their mind on the candidate's handshake, so it's probably not fair to have handshakes in an interview. Others might place too much weight on the vocabulary and language an interviewee uses in answering questions, so we should probably not let them hear the interviewee speak. And asking candidates to write code risks the interviewer placing too much emphasis on the interviewee's ability to demonstrate their coding skills, so we should prohibit that too.


"Because a candidate with a mediocre background yet a strong showing on the whiteboard will get the job."

Is there evidence that this happens a lot? Does this happen at Google, where whiteboard programming during interviews is common?

"The problem is the over reliance on the whiteboard signal, despite a history of contradictory evidence (either positive or negative). If you are going to use typical whiteboard problems as a signal, it is really really hard to not heavily weigh the results, especially when sour."

What history of contradictory evidence? Do you have any suggestions for other signals that have proven effective in your experience?

I have very, very little experience as an interviewer and no experience dealing with and evaluating the success of new hires in the long term, so I definitely don't claim to have the answers here.


Sorry, the 'history of contradictory evidence' should have read the 'candidate's history of contradictory evidence' (e.g. having a really strong portfolio and work history to contradict a poor showing on the whiteboard).

I try to interview a candidate like he is a friend I haven't caught up with in many years. I want to have as honest and interesting a conversation as possible, hear all his opinions, and learn all about what he has done and can do. If I feel the conversation was honest, the personalities were compatible, and there is evidence of good work and reliability in the past, I will likely consider the candidate a hire.

The problems with this method can be:

1) It relies on the candidate having some sort of tangible history (referrals, solid portfolio or resume, open source projects, etc...), so new grads are tougher for example.

2) Having an 'on the level' conversation with somebody who knows they are being interviewed is not always possible. Nor are all interviewers capable of extracting useful information this way.


We hire based on personality. Before we bring an interviewee in we have looked at their bitbucket, github, portfolio, etc. By the time they get to the interview we know they are good enough to work for us, so we just talk technology with them and gauge based on that.

If the person is inexperienced or does not have a very public profile (no opensource projects), then we will drop back and throw them some random problems to solve but never a puzzle, since most people just memorize the solutions to puzzles or learn the solutions after doing enough interviews.

We prefer giving them problems like writing a text-based guess-the-number game or tic-tac-toe: a mindlessly easy-to-solve project. They could spit it out in the fewest lines possible, which is what the bad puzzle questions encourage, but we are looking for how well they design/architect code. So the person who creates a good class structure and writes tests for his/her game is the one who gets hired, not the person who writes the fewest lines of code.


Well that is unfair. I did not get an interview at a financial company for this reason. Unless stated explicitly, I'm not going to gold-plate interview code, because my assumption is that you are weeding out people who cannot code.

If you tell me to write it as OO as possible, then I'll do that. But if I can write it in a shorter, more efficient way using only arrays, I'll do that.


We tell them "Write this how you would write a real world application". We don't care if it is OO or functional or whatever... We just care if it is well designed. We don't tell them to write tests but if they don't then that means they don't write tests in the real world either.


There's just too much context missing about what this "real world application" is supposed to be or where it's supposed to run for the concept to be meaningful.

What size inputs am I expected to handle? What platforms must this run on? What environment can I assume is available? What performance is required? What kind of tests does your company use?

I think you're making too many implicit assumptions that everyone is programming in an environment similar to yours if you state it that way.


I described it in my initial comment:

"We prefer giving them problems like writing a text based guess the number game or tic tac toe. Mindless easy to solve project."

Language, environment, etc. don't matter. If you are interviewing people on that stuff then you aren't finding the best candidates.

They get to choose everything outside of the problem to solve and it is an extremely dumbed down project like I said in the original comment. The "real world" part is in the implementation... How well is the code architected? Did they write tests? Are there performance issues? Bugs?

We don't expect people to be perfect either; we learn more from the discussion afterwards. Since it's not a puzzle problem, we get to focus on software design and not the problem itself, which we feel is more important.


Why would I write tests for Tic-Tac-Toe? It's insanely easy to exhaustively hand-test, or just look at the code to see its validity. If you expect tests on something that simple, then you're expecting people to waste 1/3 of their time testing code that is sure to work.


Can you be more specific about what you mean by personality and how you measure it?


Our interviews are never in the office. They are at a restaurant or bar. We just have drinks and food with them and chat. Talk about technology, them, us, etc... If we can have lunch and drinks with someone and they are talented then they are a good fit for the team.

The lunch/bar setting also loosens the interviewee up and they are always less nervous than if we had brought them into the conference room.

We don't care how talented you are if we can't stand being around you. If we had to choose between a mediocre programmer who has a lot of passion and is fun to be around or a brilliant programmer who is arrogant, we'll take the mediocre guy and just train him up into being a brilliant programmer.


You're selecting for extroverts at the expense of introverts. Which may be desirable for your company, but I would (somewhat selfishly) not want that to be the common case.

we'll take the mediocre guy and just train him up into being a brilliant programmer

If you actually have a successful process for doing that, you could be owning many islands.


It's not about introvert vs. extrovert, it's about passion. I don't think any of the people on my team would do well socially at a bar, but ask us something about Linux, Python, etc. and we could talk for hours.

"If you actually have a successful process for doing that, you could be owning many islands."

Don't pull a Fox News on me, you can't rip out the most important part of that sentence ;)

Someone _who has a lot of passion_


Hrm, I don't like this. I consider myself a pretty competent programmer, and I like doing programming puzzles for fun. I've spent a decent amount of time on 4clojure, project euler, and similar sites, and I can generally come up with solutions I'm happy with (elegant enough for my own liking).

That said, from the stress free environment of my couch, some of these puzzles take me quite a while. I'm fairly confident that if asked to hash out a solution to this on a whiteboard in an interview situation, the stress of being put on the spot would impede my ability to perform up to par, and I'd probably come across as inept.

Ironically, I'm one of the principal interviewers at the medium-sized company I work at. Many of the interviews I conduct delve into fairly technical discussions of previous projects the candidate has worked on, and I can generally walk away with a fairly firm gut feeling of how competent someone is.

Candidates that give a solid initial impression are always asked to submit additional code samples (sometimes several rounds thereof), and occasionally asked back for one additional followup interview if we need any additional details.

Through this gradual feeling-out process, I feel like we've made some excellent hires that we might otherwise have overlooked if we based our process strictly on brain-teaser-type screenings.


I'm fairly confident that if asked to hash out a solution to this on a whiteboard in an interview situation, the stress of being put on the spot would impede my ability to perform up to par, and I'd probably come across as inept.

There's a mistaken assumption here that because you're a competent programmer you're expected to solve these exercises with ease [1]. If you're interviewing with that expectation, you'll have a huge false negative rate and only hire people with nerves of stainless steel. Even experienced programmers can make very silly mistakes when coding under pressure in front of an audience.

What you're expecting to see is that the person can reason, knows the basic syntax of the language he claims to be familiar with, and doesn't make any mistakes that show a total misunderstanding or unfamiliarity of how it works. (Put differently, have they read a book/website of examples of the language, or have they actually coded in it?)

If you point out a few of his mistakes to the experienced programmer who's very nervous, you expect him to show sheer terror and panic as he realizes what he's done [2]. If the candidate is showing you a blank stare instead, you might have a problem.

If you're only verbally interviewing and asking for code samples, you can get fooled easily. Some candidates think that if they can bullshit their way through the interview, they'll stay hired. And sometimes, they're even right, because most people don't like to fire.

At some point, we were having serious problems filling positions and started de-emphasizing bad/mediocre performance on the coding part of the interview and focusing more on experience and the verbal interview, on the assumption echoed in many posts here that serious coders might not necessarily be able to show their ability in a coding interview. All of those hires turned out to be bad.

You'll have a hard time convincing me now that coding interviews aren't effective.

[1] The example here is FizzBuzz. Why does something so trivial work, when many people can't believe it does? Because even good coders will fuck up more complicated questions in interviews.

[2] Just as an observation and nothing more: experienced coders also tend not to be able to bear the sight of those mistakes, and insist on being allowed to correct them even if told it really doesn't matter.


I agree.

I do think it's valid to ask for code produced just-in-time, but I don't think it's okay to observe the candidate while they write it, for the reasons already pointed out.

If you must ask for code, give the candidate a computer and leave the room, or have the candidate leave the room with a computer and leave him to it.

You should be at least partially trying to emulate real working conditions. Otherwise it's just torture.


This is an awful interview question. It falls into the same trap that a lot of questions fall into: this is not something that would likely turn up in everyday work. Why are you asking a candidate questions about work they will never do?

It's also too Computer Science. They're not interviewing for a CS course. They're interviewing to write, debug, maintain and ship code. What you should be exploring is code style, correctness, error catching, and the thought process that goes into it. Expand the question in that manner.

Code golf is a very poor signal for a good engineer.

Edit: For example, ask "Will this work for all input values of n?". Negative values might have issues, and as 'n' increases it either takes too long to complete or runs out of memory. It's surprising how many candidates don't understand that - and the way this question is asked would not discover it.


Typically we'll start with an even simpler problem before dropping them into something like this. Over time, we've found that a good third of candidates can be eliminated with just FizzBuzz (print the numbers 1-100; for numbers divisible by 3, print fizz; for numbers divisible by 5, print buzz; for both, print fizzbuzz). Even if they do stellar on a phone interview, a large number of candidates still can't get past this simple problem. It's also easy enough that you can give it to people at job fairs on the spot. Any programmer that you'd want to hire can finish this in less than 5 minutes, which is an easy time investment to make.
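
For reference, a minimal sketch of what we'd accept (Python here, but the language is up to the candidate):

    # print 1-100, substituting fizz/buzz/fizzbuzz for multiples of 3/5/15
    for n in range(1, 101):
      if n % 15 == 0:
        print "fizzbuzz"
      elif n % 3 == 0:
        print "fizz"
      elif n % 5 == 0:
        print "buzz"
      else:
        print n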


One of the advantages that proponents of reviewing the candidate's code before the interview have is that you don't have to spend any time doing FizzBuzz kind of problems, which are just time wasters.

If you're interviewing people who can't print the numbers 1-100 then something in your process is just wrong. Why did they get that far?


Asking FizzBuzz is a lot easier than reviewing code, which may or may not be theirs.


If they have a github/bitbucket, this will show more than answering silly questions.


Until you have someone who just gives you the link to someone else's github account. You'd be amazed at what people do.


Who said it was supposed to be easy?


that's simple: people lie on their resumes and can talk a good game. Until you actually ask them to write code (or they have public code available), it's impossible to tell if they used PHP for a week per employer (making up 5 years of 'experience') or if they work with it day to day.


This! I can't believe how many people are ditching the coding interviews here. You're guaranteed to hire a few lemons if you don't do them.

The inability to write simple routines in a language you claim to be familiar with is a NO HIRE. Be it at a whiteboard (it's not like one cares about syntax errors) or at a PC/laptop.

Unless you accidentally have in-depth knowledge of the exact work an interviewee did in his last job, assessing their skill in what they did can be very difficult.


My main issue with this type of question is that people will take this (or other popular questions) as the catch-all type of problem to ask in any interview for any candidate.

If you are hiring a generalist or you are in a company that writes code in everything under the sun (Experience in Java, Ruby, Python, iOS, and Javascript preferred!), these questions are probably appropriate.

But when you are hiring a web developer to work on your Rails app - the interview question should not revolve around optimizing space complexity or a recursive algorithm that exploits symmetry. The question should be more targeted to what the day in and out work will be - write some RSpec tests, fill in some controller methods, create a simple model. Use that as the filter.

I don't think anyone applying to a Rails gig is going to balk at doing that kind of interview, and I think it will be a better predictor than whether they can find the longest palindromic substring in O(n).


You'd rather test for trivial mechanical skills rather than fundamental understanding?


As a filter? Yes.

I'm not saying hire anyone that can do a fill-in-the-blank Rails controller action - but how they handle that mechanical task should help you figure out what direction to proceed in (a deeper dive into the fundamentals, a "Thanks, we'll be in touch").

I'd rather test trivial mechanical skills than puzzles.


If you ask me to do anything on a whiteboard I assure you it will take me a very long time and probably not work when transferred to an executable environment.

At least let me type it in VIM and have an interpreter handy so I can test as I go. If I'm not expected to code on a whiteboard on the job, having me write code on a whiteboard during an interview is retarded.

I think that it would be appropriate to have me, say, model a database by drawing an ERD on a whiteboard as part of an interview question.

If I went for a job interview and was asked to hand write code or write on a whiteboard I'd probably leave.


That seems at least slightly unreasonable. I think it's appropriate to ask the candidate to write pseudo-code (or real code, depending on the problem), especially if you are just examining how the candidate thinks through problems. I'm perfectly comfortable doing this during an interview, but, then again, I often sketch things out on paper. But, different strokes for different folks.

In the past, I have been asked to sit at a machine and write code, which was far better than doing it on a whiteboard.


I actually had an interview where I not only had to code the solution on the whiteboard but the developers then gave me parameters that they wanted passed in and I had to execute the function in my mind and give the correct output of the function.


That's what efficient developers do. Relying on the compiler/interpreter for seeing if an algorithm works is a crutch. Mentally stepping through an algorithm, including keeping track of e.g. state variables or stacks, is a litmus test of whether you really understand the code you just wrote.

(preemptive nitpicker rebuke: getting the exact numeric output if that output depends on calculating e.g. the root of 34252 in your head is not what I mean by this)


I think some companies like Pivotal labs actually have you pair-program with your interviewer. I haven't applied there so all my knowledge is second-hand.

Would you like that sort of interview?


Sure I'm more than happy to code with someone assessing me.


Problems on a whiteboard don't test programming skills simply because the whiteboard isn't executable. You don't get the feedback that a REPL gives you, so you can't build up your solution - and your understanding of the problem - iteratively.

Seems to me that holding a code retreat and pairing with interviewees would be more realistic. It would eliminate at least some of the pointless stress of interviewing and would get closer to a real programming environment.

It's not like nobody has enough computers so you have to use whiteboards.


Problems on a whiteboard don't test programming skills simply because the whiteboard isn't executable. You don't get the feedback that a REPL gives you, so you can't build up your solution - and your understanding of the problem - iteratively.

Strange argument. What are people who write in compiled languages supposed to do? Particularly if their code is part of a product with a long build cycle?

It's not like nobody has enough computers so you have to use whiteboards.

On this I agree. We switched from whiteboards to giving the candidate a laptop, their editor of choice, and access to a compiler and the manpages and this was a major improvement.


Look up docs - edit - compile - test - repeat is pretty close to an interpreted language's REPL cycle. But then I started on punch cards, so maybe my perspective is different. [One to three compile & test cycles per day makes anything interactive look good.]

Appreciate the comment though. Gives me food for thought.


I'm not sure how this is any better than any other annoying CS question asked by 90% of companies nowadays. My best interviewing experience was with a fairly large startup for an iOS programming position. Instead of a phone screening interview, they sent me a small iOS app description and asked me to code it and send it in. The in-person interview was me and the interviewer, at a computer, coding sample iOS applications and looking through code.

This is directly related to the job and through asking questions about the code and how to accomplish different things, the employer knows all they need to about the candidate's abilities.

I've been asked to write the Fibonacci algorithm so many times that I can literally code every version of it with my eyes closed. Does that mean I'm a good programmer? I'm pretty sure anyone can memorize a dozen lines of text.


I personally don't care how simple a question is or how well it's explained. If I walk into an interview with a portfolio full of samples and completed projects, and a resume full of references and qualifications, being asked these questions is insulting and leaves me with a bad impression of the employer.


At one point in one of my first hiring experiences I was asked at the last minute to come sit in on an interview of a candidate, and handle the technical part of it. This was for low-level driver development work. I hadn't seen his resume before, so I got a pretty bad shock when this candidate said at one point "I'm the maintainer of subsystem X in the Linux kernel". With a laptop on hand, "cd ~/git/linux-2.6; grep -R candidate_name" made it easy enough to verify he wasn't joking.

I bailed out of that interview, as I considered it grossly inappropriate to interview someone who was obviously much my senior in the relevant subject matter.

The more senior interviewers informed me afterwards that this was a mistake. If the candidate is who he claims he is, it will be immediately obvious during the questions, you apologize profusely and all are happy. If that doesn't happen, you might have saved the company from a major screw-up. And that does happen.

TL;DR easier to apologize to a candidate than to fire someone


When Robert Metcalfe calls his ISP with a connectivity problem, they're still going to tell him to power cycle the modem first.


I'm being nitpicky, but the iterative and "dynamic programming" solutions are both dynamic programming if you follow the strict definition.

Both are exploiting optimal substructure and the recursive nature of the problem. The best solution, labeled "dynamic programming" in this article, is actually just the best-optimized DP solution.


While we're at it, the last one could work purely in-place.
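
For illustration, a purely in-place sketch (not the article's code, just the idea): keep one row and update it from right to left, so each entry is read before it's overwritten:

    # print the first n+1 rows of Pascal's triangle using a single list
    def pascal_rows(n):
      row = [1] * (n + 1)
      for r in xrange(n + 1):
        # update right to left so row[i-1] still holds the previous row's value
        for i in xrange(r - 1, 0, -1):
          row[i] = row[i] + row[i - 1]
        print row[:r + 1]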


"Any solution trumps no solution but the quality of the solution will give you a useful signal (IMHO)."

I hate how it sends a negative signal in a programming interview to do the obvious recursive solution first instead of leaping directly to the dynamic programming one. I'm already nervous enough for Christ's sake!


As an interviewer, I would not mind seeing a "naive" recursive solution first, though I would want the interview candidate to either mention this before implementing the recursive solution or later when they "discover" the inefficiencies. Showing your work (and describing your thought process) can provide more insight to an interviewer than just popping out an optimal solution without explanation!
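
For concreteness, the "naive" first pass might be something like this (exponential without memoization, which is exactly the kind of inefficiency I'd want the candidate to point out):

    # entry i of row r of Pascal's triangle, straight from the recurrence
    def f(r, i):
      if i == 0 or i == r:
        return 1
      return f(r - 1, i - 1) + f(r - 1, i)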


This is still a terrible problem. It's simple exactly because it is familiar to the interviewer. It's easy for them to believe that anyone with half a brain should be able to answer it because they've practiced it a hundred times and have forgotten exactly what it is like to not know the answer.

I think coding problems are terrible for almost any interview because the candidate is going to be struggling to find the right answer that will please you. They're looking to avoid causing you to press the big red button that gives them the electric shock and sends them home feeling like they're the most stupid person on Earth for not solving the easiest algorithm problem known to human kind.

I get it though: as people who have to interview potential hires it gets tedious wading through the muck. Personally I don't understand how you can interview someone for a senior position without reviewing their prior work. I also find it rather insulting to assume that we should just be separated from the wheat like all the other chaff. It's just disrespectful IMO.

I'd done a lot of interviews about a year ago. All but one of them used a programming test of some sort. From fizzbuzz to fibonacci to splitting words, finding palindromes and implementing FIFO queues. I did it all. Some of them I got right. Some I didn't. Even the ones I had practiced and came to know really well... sometimes even then I would blow it. It was a really disheartening experience and I almost gave up programming forever when I started to believe that rigmarole about how any programmer worth their salt should be able to solve these problems without thinking about it.

Which got me to thinking. I can solve these problems easily. I've got my algorithms books, I practice code kata, and I can generally golf some problems off the top of my head and they'll often compile. Why can't I do this in front of other people?

The psychology of the situation is what gets in the way. If you give a candidate a problem they're going to immediately begin to think about what you want them to say. You say it's not trivia, but that's the exact mental process that will happen in most people. Given such a problem I'm certain a good number of candidates I interview would be able to solve it on their own. They might even be able to whiteboard it once we start working together and they get to know me. But while we're in the interview room I'm the gatekeeper with the big red button and they're doing everything they can to keep me from pressing it. That doesn't work for me because I know that's not what I want from them.

When I hire someone I look for initiative, experience, and intuition. I look for people who have taken what they've been given and made something more for themselves. If at their last job they weren't happy with something I'll ask what they did to change it. I also like to test their experience. I ask design-type questions, ask them what they like about their tools, what they don't like, patterns they've seen, how they practice... things like that. And finally with intuition I like to throw up some source code of some problem I'm working on or interesting ones I've come across (and solved) in the past and discuss it with them. I give them an overview of the code, my expectations of it, and then ask them to read it. We walk through it and I get to see how well they can spot bugs, code smell, what they focus on when they see code. IMO, these are the sort of things that I think demonstrate a person's knowledge of programming.

And from my experience I think it's a better way to find good candidates.


Personally I don't understand how you can interview someone for a senior position without reviewing their prior work.

Or what they claim is their prior work. And it really isn't reasonable to only consider candidates with lots of code on GitHub.

I think coding problems are terrible for almost any interview because the candidate is going to be struggling to find the right answer that will please you.

To some extent that's going to be the case for any interaction during an interview. At least with a coding question there's an objective fact as to whether your solution works, as compared to trying to decide whether the interviewer wants an honest or BS answer for what your greatest weakness is.

And finally with intuition I like to throw up some source code of some problem I'm working on or interesting ones I've come across (and solved) in the past and discuss it with them. I give them an over view of the code, my expectations of it, and then ask them to read it. We walk through it and I get to see how well they can spot bugs, code smell, what they focus on when they see code.

That sounds like a fine approach, but it can suffer from the same drawbacks. For example it's very easy for a bug to be "obvious" to you and not to somebody else.


Your first sentence is way off the mark. Obviously an interviewer needs to have intimate familiarity with the problem that they give to potential employees, and have hands-on experience implementing it in many variations. But that does not mean that it is unfair to ask this question of someone who does not already know the answer.

This is a lot like fizzbuzz because the instructions for writing the code are simple, straightforward, and should be understandable by any programmer. This is not a question with an answer. This is a SPECIFICATION for a program. If you know how to code, it should be easy to write some code that meets this simple specification.

P.S. most interviewers will give you bonus points for explaining your work as you do it, particularly why you chose to use a certain approach, and what you assumed even though it was not explicitly specified. This is true even if you botch the end result and provide code with severe errors.

After all, that's what good coders do most of the time, write incorrect code with severe errors. The rest of the time they fix their mistakes.


It's not far off the mark. The problem with familiarity is that it's easy to conflate it with easiness. Once you see the solution to the problem, it's hard to believe that it was hard to find. The solution is so easy, how could anyone worth their salt not see it?

As an example of what I mean: studying math. The majority of the time, after reading or listening to an explanation of the problem and walking through an example exercise or two, I, the student, am left to bash my head against a set of problems. Despite the wishes of the teacher and their best efforts to illuminate the way, the solutions to the problems, and why they work, are not clear. You have to just throw your mind against the wall until you break on through. Then when you do, you see that the answer really is simple. And you wonder why it was so hard to see it. Years later you might wonder why anyone cannot recognize how simple it is. You've forgotten what it's like to be a beginner on the other side of the wall.

The other half of the problem I see with using puzzles in interviews is the adversarial nature of the situation. By posing the problem you're triggering a response in the candidate to find an answer. They want the correct answer because they want the job. Explaining to the candidate that they can think aloud and walk through the problem might help to switch them off that track, but I find only a handful of people respond to that encouragement.

I did upvote your comment though because you're right -- most of the time we write incorrect code. Despite our best efforts to write our tests up front and walk through our best laid plans, we still make mistakes. Developing software is a process and becoming a good programmer, to me, means mastering that process and cultivating the wisdom and experience to see the bigger picture.

Trivial problems trigger the trivia response. We learn trivia by rote. Once I got over the initial depression when I had to go through this style of interview process I figured out a way around it. I wrote software to train myself on trivial programs and practiced walking through problems I was unfamiliar with. I started practicing code kata and as a result have become more comfortable with these sorts of problems... but is that really a good indicator of competence? Is it even a useful signal? And if so, of what?


Mathematician's solution: the entries in Pascal's triangle are combinations (n-over-k), which can be calculated directly: n-over-k = n!/((n - k)!k!). (If you want to be even faster, use Stirling's approximation to the factorial.)
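
A quick sketch of that in code (exact integer factorials rather than Stirling):

    from math import factorial

    # n-over-k computed directly
    def choose(n, k):
      return factorial(n) / (factorial(n - k) * factorial(k))

    def row_via_factorials(n):
      return [choose(n, k) for k in range(n + 1)]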


Yes, neatly circumventing all of the clever algorithmic complexity optimisation that the interviewer is apparently looking for, and making this an even worse interview question.


> The second optimization one could make is that since f(r, i) == f(r-1, i-1) + f(r-1, i), then to calculate f(r, i) you only need to calculate up to the i'th element of each row.

You actually only need to calculate up to the i'th element of the row you want.

The Wikipedia page about Pascal's triangle [1] teaches us that it's possible to calculate a row directly (without any knowledge of other rows).

Therefore we can do this:

    #!/usr/bin/python
    for row in xrange(200):
      curr = [1]
      vc = 1
      for c in xrange(1, row/2+1):
        vc = int(vc * (row+1-c)/float(c))
        curr.append(vc)
      # mirror the first half; skip the middle element on even-numbered rows
      curr.extend(curr[-2::-1] if row % 2 == 0 else curr[::-1])
      print curr
(And of course since each line is symmetrical we only need to compute half of it).

This version is only a little faster than the last one provided by the author, but for some reason it's much more consistent, time-wise. For 200 rows it stays between 250-330ms between runs, whereas the author's last version is between 300-550ms.

And with a little modification, this version will let us compute any row much more efficiently than starting at the first row every time.

[1]: http://en.wikipedia.org/wiki/Pascals_triangle

PS: There may be a more efficient way to calculate vc (Cell Value) than using int() and float(), but I don't really know Python so that's the best I can do ;-)


Extra bonus points if they reduce quadratic to linear time with something like

    def pascal_row(n):
      row = [1]
      for i in range(1,n+1):
        row.append(row[-1]*(n+1-i)/i)
      print row
(and more if they can talk intelligently about additions versus multiplications and divisions, the cost of [].append(), when if ever linear-versus-quadratic actually matters, bignum arithmetic, etc., etc., etc.).


Yeah, I finished the article still wondering when he was going to get to one of the good solutions. A programmer with good math should be able to write a nice linear time solution right away.

And that's one more reason why it's a terrible question. If you're looking for good programmers and not math whizzes then the solutions in the article tell you more. But they're all badly suboptimal if you know what you're doing. If you don't know what you're doing, the best answer is that you'd Google for it on Wikipedia.


A good test of whether the interview questions you are posing to a potential candidate are good ones is to allow the candidate to pose a similar number of interview questions to you. If they can pose problems that by your own standards you should be able to answer, but can't, then you are not doing it right (unless you yourself are not a fit for the position under consideration, which may very well be because you are in a different position).

For some reason, some interviewers start with a belief that they are smarter than the candidate. While any reasonable person would feel that they do not fall in this category, think about how you would actually know if you did (this is not to say that you do fall in this category). The job of a good interviewer is as much to not reject a good candidate as it is to not accept a bad one; in both cases with the position under consideration in mind. If you start with this mindset, and by default subject yourself to as much doubt as you subject the candidate to when her answer does not match your thinking, you are doing great.


Let me point out a slightly different whiteboarding approach I've used. I give you the problem. I give you the solution too! I will even explain the solution patiently, line by line, until you are absolutely convinced.

Then I simply grow the problem - just one tiny step. I ask for the solution for this new problem. And guess what ? Most people still flunk!

Here's the problem I use (we are a quant outfit):

Say I have $100 in 1-dollar bills. How do I make the most money if I have to split it between google and apple?

Solution:

    import scala.math.sqrt

    val apple = (0.015, 0.1)   // 1.5% return/month, 10% risk
    val google = (0.008, 0.05) // 0.8% return/month, 5% risk
    val rho = 0.4              // google-apple correlation is 40%

    // returns are weighted means
    val myreturns = (0.0 to 1.0 by 0.01).map(x => x * apple._1 + (1 - x) * google._1)

    // risks are standard deviations
    val myrisks = (0.0 to 1.0 by 0.01).map(x =>
      sqrt(x * x * apple._2 * apple._2 + (1 - x) * (1 - x) * google._2 * google._2 +
           2 * x * apple._2 * (1 - x) * google._2 * rho))

    // sharpe = my return over my risk
    val sharpes = (0 to 100).map(x => myreturns(x) / myrisks(x))

    // goal is to make the most money, i.e. max sharpe
    println("Most money has sharpe: " + sharpes.max)

Now let me grow this problem, by just 1 more stock.

I have $100 in 1-dollar bills. I give you 3 stocks: google, apple and amazon. How do you make the most money?

This stumps so many people I have lost count! The ones who do well have usually taken a semester of undergraduate combinatorics, so they see instantly what needs to be done.


I don't really know anything about this, and I'm finding the code as written hard to understand (what language is this?).

If I had to guess, you need to add val amazon = (foo, bar), adjust myreturns and myrisks to calculate over 3 arguments (or generalize them), then modify the maps to map over 2 variables instead of one (like nested loops). The allocation is the x, y in sharpes() that gave sharpes.max, though I have no idea how to get that out of this language.

Edit: Ok, so you need to define more rhos, and the problem is that a) not all mappings over x, y (0->1) are valid allocations, and b) the stocks are discrete. So that's where the combinatorics come in, as you need to consider all possible combinations of allocating 100 dollars over 3 stocks, and then map over that instead. I'd have absolutely no idea how to express that in this language.
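
In a language I do know, a rough brute-force sketch of that enumeration might look like this (the amazon numbers and the extra correlations are made up purely for illustration, and this obviously isn't how a quant shop would really optimize a portfolio):

    from math import sqrt

    # (monthly return, risk) per stock -- illustrative numbers only
    stocks = {'apple': (0.015, 0.10), 'google': (0.008, 0.05), 'amazon': (0.010, 0.08)}
    # assumed pairwise correlations (hypothetical values)
    rho = {('apple', 'google'): 0.4, ('apple', 'amazon'): 0.3, ('google', 'amazon'): 0.5}

    def corr(a, b):
      return 1.0 if a == b else rho.get((a, b), rho.get((b, a)))

    names = ['apple', 'google', 'amazon']
    best = None
    # every way to split 100 one-dollar bills across the three stocks
    for x in range(101):
      for y in range(101 - x):
        w = dict(zip(names, (x / 100.0, y / 100.0, (100 - x - y) / 100.0)))
        ret = sum(w[n] * stocks[n][0] for n in names)
        var = sum(w[a] * w[b] * stocks[a][1] * stocks[b][1] * corr(a, b)
                  for a in names for b in names)
        sharpe = ret / sqrt(var)
        if best is None or sharpe > best[0]:
          best = (sharpe, w)

    print best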

Also, what are the stock prices? Google, Amazon and Apple all trade at >100USD/share. You can't buy fractional shares, can you?

PS. One thing I like in this approach is that you're requested to understand code that's already written. This is what happens in real life, too.


That's a terrible programming question. You might as well just ask "Hey, did you take combinatorics in school? Ok, cool."


If that's important for his company, why not? It's like the people that apply for a programming job and complain they get O(n) questions. Also, it doesn't just test if they took it. It also tests whether they understood it and can apply it. (You would expect the schools to do that, but...)

Of course, if this question requires combinatorial knowledge, and it's irrelevant or not critical for the job, then that's just bad.


Is there any way for somebody with 10+ years of C++ experience but little to no 'real mathematics' skills (meaning: I know what sin and cos are and can multiply two matrices, which probably puts me in the 90th percentile of the general population when it comes to math, but I have no idea what 'combinatorics' is) to work in some capacity at a hedge fund (well, technical capacity, not sweeping floors ;) )?


The problem with tech interviews is that we tend to debug candidates rather than interview them, which leads to this confrontational, almost antagonistic pop-quiz process. An interview should be a conversation - sure you're trying to ascertain whether the candidate is smart and, most importantly, whether you could stand working with them but it should still be a two-way, human interaction.

If you genuinely do need to test their tech skills, then I've found the best approach is to give them some simple exercise to do in their own time, which you can then talk through in the interview. In this way you can set a realistic exercise (as opposed to 'generate me Pascal's triangle') and dedicate more time in the interview to the interesting stuff - 'why did you do this this way?', 'what's the complexity of this approach?', 'can we improve that?' etc. How the candidate answers those questions is a much better signal of intelligence than how adept he is with a whiteboard.


Done right? Sorry, you bored me to death as soon as you started blabbering about Pascal's Triangle. I'm sure I learned about that at some point, but I've forgotten it. You asking me that question would result in:

1) Me feeling stupid because I don't remember what Pascal's Triangle is.

2) Me stumbling through some code while I try to understand the explanation you give me.


I think a lot of people here are missing the point that this is only one interview question, not the entire interview. You still need to ask icebreaker questions about the candidate's experience to get them comfortable before breaking out a whiteboard question, and you should still follow up a "stock" question like this with a more specific question taken from your domain (if they make it that far).

I'd also add that you need to have several of these "stock" whiteboard questions on hand. If the interviewee seems to get the solution too easily, they may have heard the question before. Everyone knows FizzBuzz by now, and I gave three answers to "compute the nth Fibonacci number" at my last interview. These questions are no good anymore if you're trying to see how a candidate thinks on their feet.


The biggest issue with these puzzle questions is that they are easy to memorize and encourage solving in the fewest lines of code possible.

I don't know why people rely on them at all, because most of the candidates they hire have probably just gone on pythonchallenge or interviewstreet and learned all the solutions to popular problems before the interview, or even learned the solutions by going to enough interviews.

I want someone who can think on their own (not memorization) and can write good code (read: not shortest lines possible).


I think this problem is a great interview question. It's easy to describe, and it's easy to get started solving it by writing the first few lines on the whiteboard. From there, the pattern should be obvious, and once you've recognized the pattern, you only need to write a few lines of code. The disadvantage is that someone might remember Pascal's triangle from high school math, and will try to "remember" the answer instead of actually solving the problem incrementally. This is worthless because the question is about problem solving ability rather than your memory of exam questions like "Expand (x+3)^27" or "Factor x^3 + 3x^2y + 3xy^2 + y^3". (I do remember writing Pascal's triangle on these sorts of exams, incidentally!)

Anyway, reading the other comments in the thread, it seems I am alone in liking this sort of question. I think these types of questions are good indicators of programmer quality. Anyone can read a Ruby on Rails tutorial and make an application that stores some records in a database. It doesn't take understanding to do that, it takes practice. That's all very good, as long as you're sure that your work will only be repeating what you've practiced rather than developing new skills. I think that programming work is very rarely like that. Yes, your job involves Ruby on Rails, but it's largely about solving some sort of complicated problem and then hooking the answer into your UI. Your experience with Rails will speed up the UI work, but if you aren't willing to understand and solve problems, you'll never have anything to put in the UI. (The people that can only do the UI part are "code monkeys" and don't get paid very well.)

If you think that you'll never need to understand problem X, it's simply because you are unwilling to understand problem X. Once you understand graphs and algorithms and data structures, you'll see how you can apply your understanding to write better software. You might not think they're very important, but that's because, right now, you have no idea of how those techniques might be important. Learn them and then come back and say "everything in CS class is garbage". You won't, because it's not. Your job will be easier and you won't be navigating the waters blind.

So I guess my point is: algorithm questions are good. There are a lot of programmers who can't solve problems (but can monkey code), and if you want the best programmers, you want the ones who can both solve problems AND write code.


No, this isn't "right". It may be right for a specific job with specific skill requirements.

I've been programming for over 25 years, and I've never had to solve any problem even remotely like it. I've also worked with dozens of programmers who could easily solve a problem like that, yet couldn't deliver a working application if their lives depended on it.

Different programming jobs require different skill sets. There is no universal test.


I agree. I think the attribute most important to my job is being able to turn a complex specification into an app with clean architecture, with no two-way coupling between components, generally useful parts abstracted away into reusable code, use of existing open-source modules instead of rolling my own where appropriate, and so on. I don't think this test would reveal much of that about me.

That said, for the first time in my career I was given a test to do and that seemed to be specifically designed to see how well I could architect code and yet I still screwed it up. That might show that I'm a bad programmer, or it might show that it's dangerous to use a short test to generalise about someone's ability.

I like to take jobs on a contract basis; that way there's less risk for the clients, as they can cut me loose whenever it isn't working. As a result, the interview process is usually pretty quick and informal, and I can show on a day-to-day basis that I am good at what I do.


Why don't people pay candidates for a day's work and see how they do? It would be cheaper and a far better indicator of actual potential.


The administrative overhead of doing this would be immense, if it is even legal (at least in my country).


You're all wrong.

Firstly, for the crowd arguing about whether this is a good/bad/ugly question, your initial premise is wrong, because you pre-suppose that there are any good programming questions to ask in an interview. Since waffling about this sort of crap is entirely orthogonal to actual coding skill, there are no good questions, they are all bad. It's like trying to find the fastest marathon runner before the race by interviewing everyone and deciding who will win based on who gives the 'best' answer.

Secondly, for the crowd debating the algorithms and offering better programming methods, or smug in your assurance of quadratic/linear/O(n) time or whatever, you're all wrong because your initial premise is wrong. You assume this is a programming problem, but programming is the wrong tool for the job. A much better general purpose tool for 'solving' this problem (but not even necessarily the 'best' tool) would be a spreadsheet for instance. Using a spreadsheet you could have finished (including testing - e.g. eyeballing the result for correctness) before the 'optimisers' have even finished typing in their code. In terms of effectiveness, you silly coders might as well be using pea-shooters in an ICBM competition.


I'm with the camp that thinks that having to code on a whiteboard for an interview is just INANE. Worse than that, I find it to be borderline abusive. The whole concept just makes me quite mad. I graduated from MIT, and in my first programming class there (SICP), which I took first semester freshman year, I scored in the top 10 students out of 200 in the class. I've written software to control a space telescope and adapted brain imaging software to visualize astronomical data. I wrote a tutorial on how to do coordinate frame transformations, after first deriving the math myself (which isn't rocket science, but I hadn't done any linear algebra in 20 years at the time, so it took some thought).

Clearly I know how to code! But put me at a whiteboard and ask me to code Hello World, and I'm likely to start sweating profusely and then get very dizzy, and then just ask to leave. I managed to make it through a first round of interviews at Google, but I had to take tranquilizers before the interview to make sure that the aforementioned scenario would not happen. I don't think that people should have to be subjected to this, and I certainly wouldn't subject myself to this for almost any other company. I'd just say, "Screw you--I can find a job elsewhere." (I didn't end up doing the second round of Google interviews because I was offered a job elsewhere, and Google said that it would take at least an additional month to make a hiring decision.)

Where I work now, the interviewing process is much more civilized. We send the specs for a small program along with some unit tests, and give the applicant however long they'd like to complete it. It should take a page or two of code and a couple of hours to complete at most. One might worry that people would cheat, but that hasn't been my experience. Most applicants never submit a solution at all. I suspect this is because they couldn't get their solution to pass the unit tests. (What we ask them to do, is not difficult. Anyone who passed a college course in software engineering with a grade better than a C should have absolutely no problem completing the assignment.)

So, then some fraction of applicants actually complete the assignment. These submissions show that the vast majority of so-called software engineers -- or at least those who are looking for jobs -- can't code their way out of a paper bag. I.e., most of the submissions are grossly inefficient and hugely over-engineered or under-engineered. Finally, a few of the submissions are passable (rarely do we get a truly excellent solution, which is kind of sad), so we have them come in, and as part of the interview, we'll have them go over their code a bit and explain why they made certain design decisions, etc.

If you ask me, this process gives us a much better idea of how someone would perform in the real world of being a software engineer than any amount of coding on a whiteboard ever could.


I really, really like this approach.

It beats the whiteboard quiz in every aspect.

It is not only a far more pleasant experience for the interviewee, but it also represents the candidate's real-world programming skills more accurately.


You can learn a lot about a programmer and the way he thinks by asking very simple questions... my favorites being reversing a string and counting the frequency of characters in a given sentence.
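
For concreteness, the sort of thing I'd hope to see (a quick Python sketch with a made-up input):

    s = "hello world"

    # reverse a string
    print s[::-1]

    # frequency of characters
    freq = {}
    for ch in s:
      freq[ch] = freq.get(ch, 0) + 1
    print freq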


But that's just another popular interview question that all developers have memorized. So in Python every developer you ask will just say s[::-1]; they aren't even going to think about it.

I prefer hiring based on Github or having them design a small project.


> Bonus points if the candidate correctly states this as "memorization" (rather than the more generic "caching").

Probably an autocorrect mishap there.


An unfortunately placed one at that.



