Google Translate is sufficient. They’ll do it pretty carefully, complete with instructions like “pretend you get stuck at this specific point, and if you’re asked why you’d use a hashmap, act baffled for a moment, and then say X.”
It’s ridiculously obvious when people have seen the question before.
The way we do it is like this: we have 3 or 4 small variations on each question, such that the solution is measurably different, in quite telling ways, but the given problem looks almost identical.
In one specific case the given problem is identical, but there are 3 variations to the question based on how the candidate asks questions about it (if they don’t ask questions, the problem is not possible to solve, as we don’t give all of the information they need).
We started doing it this way precisely because we kept running into people who would have 3 nearly perfect interviews and one “hard fail”, and we eventually realized it was because they were so good at disguising that they’d seen the question before, but when they got one they hadn’t seen, they bombed it hard.
So now that we have the “variations”, at least once a month someone will “hard fail” the interview because they’re obviously cheating and will quite literally give the right answer to the wrong question, having just rote-memorized it.
Your candidates show that after learning how to solve a problem, they can demonstrate they're able to solve it.
Have you considered just hiring candidates and then training them, or expecting them to learn approaches that are new to them?
Right now, you're pretending that your company needs random puzzles solved, and they're pretending that they're able to solve random puzzles without looking them up in an algorithms book.
What's the point of this whole theatre?
I get that your ego is enjoying that, but is that really providing your company any value?
Along with what the sibling said, I'm not interested in tricks and puzzles, but I am interested in how people take in and handle new information. I'm not going to pretend like my job is bleeding edge or remotely novel tech. I do CRUD with SQL.
It's impossible for anyone to be an expert in every application that my team handles. The key for us is that we try to keep our applications relatively simple in how data moves from point to point. Orienting yourself in new environments and applications significantly increases productivity here. It's always good to have people who can recognize and apply logic to patterns, but knowing how to ask questions is important. It isn't about the "gotchas". It's about what happens after the person is stuck. We try to make sure our applicants can make reasonable assumptions or ask clarifying questions about ambiguous portions.
Being able to implement a solution after having already been shown how to do it is sufficient for many roles. But not all. Some roles require that you are also able to figure out solutions to problems you have never seen before.
If your company requires only the former that's fine. But if you also require the latter, that's fine too and it's ok to test for it in your interviews.
If a company generally requires candidates to be overqualified for their intended role, that's a bit dumb. But I imagine that such a problem would eventually be fixed by the free market (supply of workers at various qualification levels vs. demand for said qualifications).
> Your candidates show that after learning how to solve a problem
The way I read it, they’ve shown they can learn the solution to a problem.
It’s like asking “what’s 43 × 57?”, getting “2451” as a reply and, from there, assuming they’ll be able to calculate “42 × 58” or “41 × 59”, too. If they memorized just “43 × 57 = 2451”, that conclusion may be (and likely is; most people who know how to multiply won’t memorize such products) incorrect.
"Your candidates show that after learning how to solve a problem, they can demonstrate they're able to solve it."
The parent of your post is talking about a situation where they've demonstrated they've memorized a solution to a particular problem, not that they can solve it. And that it was the wrong solution, which they didn't notice. It's that last bit in particular, combined with not being able to adapt or create a solution for the actual problem, that is the hard fail.
I can't imagine my personal interview questions & style are common enough to have shown up on any of these sites, but I have personally witnessed two people (out of a few dozen) who knew only and exactly what was on their school curriculum, but were completely incapable of stepping outside of it. I come at interviews from the point of view that I'm trying to discover what they can do, not what they can't, so when I say that, bear in mind that I didn't hand them a problem and smugly watch them fail for 30 minutes; I actively tried to formulate a problem they could solve, and failed.

I've also witnessed someone persistently applying a book solution that was the wrong solution to the problem. Perfect depth-first search, but it should have been breadth-first; they couldn't seem to understand my explanation of that, and they shouldn't have put it in a tree in the first place. (It would have worked if the correct search was applied, but it was massive overkill. It was something like finding a substring in a string... yes, you can represent that as a tree problem, but you're only making things worse.) They nailed the definition of a graph and basically just wrote out the depth-first algorithm as quickly as they could write... but it was wrong, and the moment they stepped off the path they were completely lost.
I also don't do brainteasers like this; I focus a lot more on very practical things, so we're talking more like "failing to write code that can take the string 'abcd,efg' and split it on the comma without hardcoding sizes, either handwritten or by calling library code". I really want to start an interview on some kind of success, but every once in a while a candidate gets through for whom I simply can't find a place to start building successes at all.
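For concreteness, a minimal sketch of the comma-splitting task (Python purely for illustration; the candidate can use any language, and the function name is just a placeholder):

    def split_on_comma(s):
        # No hardcoded sizes: lean on the standard library (or scan for ',' in a loop).
        return s.split(",")

    print(split_on_comma("abcd,efg"))  # prints ['abcd', 'efg']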
(You have to remember the "evaporative cooling" effect when reading about interview stories. The good candidates who even go through an interview process do one round of interviews and get snapped up immediately. People who have good-looking resumes but, alas, graduated with insufficient actual skills, interview over and over again. The end result is that the average interviewee, anywhere, is not very good. One of the preceding interviews I'm referring to emotionally wrecked my day; I felt that bad for someone who had clearly put in immense amounts of (all the wrong kinds of) work but whom I couldn't even remotely hire. But there's nothing I could do about it.)
I should also mention I run a style of interview where if I ask you a question and you blast it out of the park in a few seconds, great, you (metaphorically) get full points and then I move on to the next harder variant on it. And I can always make up a harder variant of something we can talk about for an hour or two. If I can't find something you can't blast out of the park in an hour or two, well, unless you're completely insufferable or your salary expectations are something I can't meet, expect the offer shortly. But what you'll be "blasting out of the park" will be something like a full round-trip delivery of a service with code-as-infrastructure and a database and user logins and a solid security design and so on and so on, not solutions to Project Euler problems.
Having been on both sides of interviews, but fortunate enough to not have to do leetcode interviews, I have a genuine question for those that do.
Why do them?
Are you really facing those problems frequently enough at FAANG to need to know them? Is it uppity engineers? Gatekeeping? Or are you just getting so many applicants that you have to filter somehow, and leetcode interviewing has some nice properties (easy to apply remotely, can be done by engineers, strong pass/fail criteria)?
Genuine interest. Ignore the negative subtext, that's just me on this subject.
It's probably partially gatekeeping. "I had to invent a unique sorting algo in this interview so you will too."
But also, it's a good measure of someone's ability to take an abstract problem and solve it. There are lots of small moments in an LC problem that require critical thinking. Like you said, we need a filter, and it's really easy to use LC as that filter.
That said, I've worked at FAANG with co-workers who had trouble using iterators or properly assessing complex Boolean logic (and I'm not talking about needing de Morgan), so sometimes LC skills are needed on the job. Getting a signal that "this person can't write loops" means "we don't trust this person not to write an infinite loop", however rarely that day comes.
There are enough programmers who want FAANG jobs, it's easy enough to apply, and the pay is high enough that you should be free to gatekeep on whether someone understands intro-to-Java-level data structures and algos. Maybe leetcode-hard is unnecessary, but easy should be doable.
Because if you otherwise like the candidate you can help them along and if you don't like them you can just watch them sweat, but then claim your interview process has some level of technical rigor.
I really don't get shareholders at big tech companies.
They could hire a few teams of talented engineers and a trove of cheap developers and forget all of this.
Almost nobody needs leetcoding engineers; 0.1% of their tech is hard, and the rest is all forms and APIs, as is most of the industry.
The lack of self-awareness shown by this interviewer is staggering. You're right, they seem to relish wasting their (paid) and applicants' (unpaid) time. I won't hold my tongue. The people that do this are total assholes, if not downright sociopaths.
Why do you call it cheating? Did the candidate actively copy a solution by deception or were they just very well prepared - to the point that they quickly recognised and solved the problem which they had seen before?
Perhaps I should throw out my CLRS and Skiena and invent everything in those books from scratch?
It's an arms race because people like you have turned it into one. You're not solving Nobel-prize-winning problems, so stop expecting people to magically invent novel algorithms on the spot for things they've never seen before.
As someone who has studied and passed before in this manner, and is now an interviewer, I have a simple solution that other companies should follow:
for at least one round of interviewing, let me (the interviewer) use my own custom question, where the goal is not so much to solve it but rather to reason out loud collaboratively about many different aspects of the question.
I like to use 3D graphics as a domain that candidates most likely haven't seen before, but sufficiently motivated/smart ones can hold their own in. If someone doesn't quickly and intuitively grasp that a shape is a collection of faces, and a face is a collection of points, and a point is a collection of vertices, I'm not sure that they have what I am personally looking for (even though we don't do any 3D graphics in our project).
There's a certain irony to exploiting an ethically questionable method to obtain a job and, once you've obtained that position, using your authority as a gatekeeper to attempt to prevent others from entering the same way.
I can't speak for them, but I would assume that jsiaajdsdaa still considered themselves qualified for their positions - even if they employed ethically questionable methods for passing the interviews. In the end isn't that all that matters given that's what these interviews are supposedly trying to measure?
What part of jsiaajdsdaa's process do you think is more efficient? To me it sounds less objective ("the goal is not so much to solve it") which seems like it would make the process less efficient when assessing candidates.
I think, as a general rule, giving someone a problem they are unlikely to solve because they are not completely familiar with the problem space does measure some things well: specifically, how one performs with something new and unknown (a critical programming skill), and their ability to reason abstractly as opposed to having memorized some information.
For other things related to the job, though, it's not that useful.
That said, I was talking about your comment that it was somewhat unethical; they didn't so much think that they had gamed the system as that the system was so inefficient at doing what it should do that someone had to fit themselves to the system to get in.
Or tyrants gatekeeping future tyrants, or mature bears killing/eating younger bears to make their own life easier in the future (food and access to sex).
This is the first time I’ve come across this definition of a point. Geometrically, a point is defined as a zero-dimensional shape (or something similar), if I recall correctly.
Besides, I don’t see how it’s intuitive at all! It isn’t to me at least.
Larger point being, if you pick such a random domain without calibration, you will run into such arguments/discussions during an interview. I’m not sure what data point could be derived from such a discussion when you are looking for someone who can write decent, maintainable code.
I must say that I’m terrible with geometry/graphics which hasn’t stopped me from creating value through software development in my domain, online payments.
If your intention is to gather signal about collaboration, then I suggest picking something you are likely to collaborate on day to day. Let’s say code review or architecture review. You could discuss why such and such an architectural pattern is useful, under what conditions, what pitfalls to watch out for, etc.
A vertex is a point. So a point being a collection of vertices doesn’t make sense. A point in the sense you mean is a collection of real numbers equal to the number of dimensions of the space it’s in.
> A point in the sense you mean is a collection of real numbers equal to the number of dimensions of the space it’s in.
Not necessarily. In 3D graphics, it is common to represent points with homogeneous coordinates, where points in N-dimensional space are represented by N+1 real numbers. Using 4x4 matrices [0] to describe affine transformations of 3D points is very convenient.
(Agreed with your overall point though. Just goes to show how different some fundamental perceptions/definitions can be.)
> The extra real isn’t really part of the definition of the point in space though
It actually is. It's generally assumed to be equal to one, but it need not be.
> isn’t necessary to store to apply a 4x4 matrix
...if you assume it is equal to one, yes.
However, actually representing the fourth component is both mathematically sound and occasionally useful. For example, the midpoint of two affine points, such as (1, 2, 3, 1) and (3, 6, 9, 1) is actually just their sum: (4, 8, 12, 2), which represents the same 3D point as (2, 4, 6, 1). The fourth component can also be zero, in which case you describe a point at infinity in the given direction.
But yes, if you only use homogeneous coordinates for chaining 3D transformations, storing the extra component is pointless.
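To make the midpoint example concrete, here's a tiny sketch using plain Python tuples as homogeneous coordinates (the helper name is mine, just for illustration):

    def to_cartesian(p):
        # Divide by the fourth (w) component; assumes w != 0, i.e. a finite point.
        x, y, z, w = p
        return (x / w, y / w, z / w)

    a = (1, 2, 3, 1)
    b = (3, 6, 9, 1)
    mid = tuple(u + v for u, v in zip(a, b))  # (4, 8, 12, 2)
    print(to_cartesian(mid))  # (2.0, 4.0, 6.0) -- same 3D point as (2, 4, 6, 1)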
I don't understand how a point is a collection of vertices. Are you talking about X, Y, and Z coordinates?
This is a good example of how ambiguity kills an interview and reduces it to quickly figuring out what the interviewer is talking about so I might have a decent chance of solving the problem with the time remaining.
My experience with interviewing at Google, Facebook, and Amazon can be reduced to "What the hell are you talking about?"
> If someone doesn't quickly and intuitively grasp that a shape is a collection of faces, ..., I'm not sure that they have what I am personally looking for
I was doing the same to weed out the bad candidates - asking them something they should know, something logical and basic - but I got bad feedback once and was asked to instead focus my questions on the strong points described in the CV. I mean, for the practical part, the candidate wasn't able to count the unique words in a text file in 30 minutes. I thought opening files, reading strings, and splitting were so basic that anyone should know them.
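Roughly the level of answer I expected, sketched in Python (the whitespace tokenization and the file name are my own assumptions, not part of the exercise as given):

    def count_unique_words(path):
        # Read the whole file, split on whitespace, count distinct tokens.
        with open(path) as f:
            return len(set(f.read().split()))

    print(count_unique_words("input.txt"))  # "input.txt" is a placeholder path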
I program in C++ daily and wouldn't know the currently accepted way to read lines from a text file in it off the top of my head. It's simply not something I ever have to do. A good candidate should still manage to figure it out in 30 minutes, but your programming experience is most likely a lot less universal than you think.
They didn't specify needing to read the file line-by-line. You could read the whole file at once. There might not even be any new line characters in the file. You invented a requirement.
They specified "strings", which I interpreted as lines in a text file. But it doesn't actually matter, because I could make fundamentally the same comment no matter how words (or "strings") are separated.
Are you very knowledgeable in 3D graphics yourself? Some of the replies here suggest that you may have a superficial knowledge (which is what I would have). If you choose an example domain that you have some knowledge about, but not deep knowledge, what happens if by accident your interviewee has significantly deeper knowledge than you? I worry that person might end up sounding overly technical or even like they're BS'ing their way through the interview.
There is also another way of creating 3D graphics that does not involve the use of faces and vertices, so developers with that background would be at a loss (or at an advantage, depending on your viewpoint).
This is exactly the type of question that is the worst for interviews. It's a completely uncalibrated, completely subjective, esoteric type of question where you can't say exactly why you liked a candidate or why you didn't like her. There's no data underneath it except for "I liked how the conversation went."
It completely gives an advantage to candidates who know 3D and completely gives a disadvantage to candidates that know nothing about it. Even worse, they are relying on YOU to explain it to them. Who's to say you are sufficiently qualified to teach people the basics of 3D graphics enough such that they can answer your questions? Have you been calibrated or judged on your ability to teach even the basics of 3D graphics? Or are you assuming that you're good enough?
It's completely inappropriate and relying completely on you to determine a candidate's qualifications based on nothing except your feelings. It's a horrible question and I seriously hope this is not entertained at all at your company.
You have some good points, but can you please make your good points without crossing into personal attack? I'm sure you didn't mean to, and I'm sure you have good reason to feel strongly, but what you posted here is already aggressive and a step away from what the HN guidelines call for.
I do not believe the hard data you are looking for really exists.
Interviewing is assessing what value an individual will bring to a project. Ideally we want this assessment to not depend on the interviewer, and to be a one-dimensional real number so it's easy to do comparisons.
But the reality is:
- future performance depends on the team (and more largely on everything else in the business), and that varies unpredictably and is usually not part of the evaluation anyway.
- future performance also depends on how the project itself is going to evolve, which is hard enough to evaluate in itself (not just the timeline but oftentimes the involved technologies)
- assessing the value of anything is a whole can of worms in itself (it's intrinsically subjective; you can't measure a physical "value" in SI units)
I prefer data over opinions like everyone else, but this may be a case where it is ultimately safer to rely on as many individual opinions as possible and weigh them, with just enough process in place to avoid common biases (friends-of-friends, judging on looks, country of origin, gender, ...)?
I've seen big companies that take hiring very seriously rely equally on some preset metrics (based on prewritten questions, thus easily gamed) and on the gut feelings of several interviewers who are free to ask whatever additional questions they want, and I think it's the correct approach.
So I'm not a part of this world at all and I'm super fascinated by this perspective. I hear your point entirely. How do you counter the issue with pre-structured interviews where the questions get distributed and you end up with candidates who learn how to answer your exact question (but lack the skills to actually be dynamic in their job, or even do their job)?
I used to interview for a big company, and I could tell some candidates already knew the questions, but I'm a bad interviewer myself as I tend to rate most candidates as 7+/10, I very rarely found coding-illiterate candidates.
When I detected the candidate knew the questions, I'd switch order, introduce new questions I didn't usually ask.
The interview we did was quite extensive on the java basics and if the candidate somehow managed to learn all that stuff by knowing the questions, that was a pretty good sign anyway.
There was just this one time I found a candidate who aced every question I asked, even OOP and design patterns. I made up a code-design challenge on the spot, and the guy suddenly was not so brilliant, but he still managed to hold his own. I didn't penalize him for my impression that he knew the questions; he was way better than the usual candidate anyway.
Perhaps the stakes were not so high as in the USA, but I can say I've never found someone who would have failed without knowing the questions in advance; I did fail people who were trying to cheat on the phone interview, but those were rare cases.
My impression is that technical interviews just filter out programming-illiterate programmers and people who don't give a f#ck -- as a general rule.
There's also the 30% of cases where the interview is set up to show the candidate he's not really as senior as he thinks and to lowball his salary request.
And some companies have very high technical interview standards because of some cultural inferiority complex (we are just like Google, you see...).
If I could answer that question, I would be a billionaire already.
Programming interviews are almost exactly akin to actor's auditions. Just because you flunk an audition doesn't mean that you're a bad actor. Also, auditioning takes a special skill and it's not very much like being a real actor. But they still do it to this day. Programming interviews are similar.
The best hiring model I can come up with is the Netflix model. Pay top of the market, hire and fire people quickly if they don't meet expectations, with a generous severance. Have high expectations, reward the ones that can fulfill those expectations, and quickly get rid of those that don't. It's ruthless, but the Netflix engineers I know love working there.
Yeah, I would say a much bigger issue than algorithm questions in interviews are interviewers who assume all programmers follow a particular path (usually the one they followed) and discriminate against those-that-did-not-follow-particular-path.
Computer science is a massive field that people enter through many different and unique ways. If you're trying to gatekeep and force everyone to enter through the same gate that you entered, you should not be an interviewer.
3D graphics is not a good moat, I agree.
There are much better moats -- such as recursion. There's no way I'd let someone in my team if they don't get recursion, even though we almost never use it :)
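To illustrate the bar I mean, a throwaway example in Python (the exercise itself is made up, not a question we actually ask):

    def flatten(xs):
        # Recursively flatten arbitrarily nested lists.
        out = []
        for x in xs:
            if isinstance(x, list):
                out.extend(flatten(x))  # recurse into the sublist
            else:
                out.append(x)
        return out

    print(flatten([1, [2, [3, 4]], 5]))  # [1, 2, 3, 4, 5]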
Interesting. I had never expected to see 1P3A mentioned on HN. It's mainly a forum for overseas Chinese to discuss their lives and so on. Around this topic they have also developed side features like COVID-19 statistics and a system for school applicants.
Is this common among immigrant communities? For example do Vietnamese or Indian or Nigerian communities exist to give each other exclusive support on finding jobs or other such advantages?
Yes. And American immigrant communities in other countries. The size and formality will generally correlate with the size of the immigrant community. From real-world social networks to Facebook groups to dedicated sites and forums.