Hacker News

The tech industry has unfortunately adopted the methodology of centralized hackable tests as the canonical gatekeeping method in the form of programming interviews.

Most big tech companies don't care about how good you have been at delivering value through creating software: they want to see you deliver a very specific type of performance at a whiteboard. Interviewers are given specific math puzzle questions to ask. Interviewees are explicitly told by the same companies' hiring departments that they should aim to hack the system by studying books like "Cracking the Coding Interview".

This is an industry that prides itself on supposedly making data-driven decisions through A/B testing. When it comes to hiring people to make those decisions, everybody just plays along to a decades-old script.



Being a hiring manager at a big company, I can tell you this is just as frustrating for me as it is for candidates. I hate “leet code” and frankly find algorithmic interviews to be very low signal compared to more practical, open-ended, domain-specific problems.

I will say though, the problem is one of “standardization” across an organization where it’s too big for everyone to fit in a room.

Suppose you give each team high autonomy to hire whoever they like using whatever “good” process they come up with. 90% of the time this results in good hires. But as you grow, that ten percent of underperforming people becomes large in absolute numbers, and is very painful to deal with.

It becomes a real problem when relatively lower performing people end up concentrated on a team, and then start being the hiring gatekeepers for that team, thus multiplying the number of lower performing hires.

Later you start having institutional problems when everyone starts to perceive that the engineers in Department A are generally better than the engineers in Department B. Engineers in Department A are more likely to leave if they perceive the company is getting worse at engineering - it becomes a self-fulfilling prophecy.

Then you get enormous pressure to come up with standardized testing - aka algorithms on the whiteboard, or some other academic-inspired exercise - imposed by higher-level leadership that wants to address a genuine problem (skill disparity across the org) but does not know any better way to do it.

I think, as PG points out, there may be a real opportunity to innovate here, and probably a big financial opportunity if anyone can figure out how to productize and scale a solution.

I struggle to see an easy answer, though. In a utopian universe (for a hiring manager) I’d do something like pay candidates to come on site and work for a week, then make a hire/no-hire decision based on that. But I think that is far too onerous for candidates (and a big company) to have legs.


I was a hiring manager at a big company for years. We never did coding tests, and I like to think that I made good choices every time. I kept a high-functioning team together, under fairly humble pay, and stressful, sometimes demoralizing, conditions, for decades.

I'm mediocre, at best, at these tests. I don't come from a traditional CS background (started as an EE). I tend to take unusual, hybrid approaches to solving problems; often incorporating elements of new-fangled tech with patterns that have been around for thirty years, and I've found that people get VERY uncomfortable with "thinking outside the box."

I also tend to take some time arriving at the final release, going through iterations of improvement. My first bash is usually fairly naive and clumsy. It works, but not so well. I make each iteration improve on it, maintaining that working software throughout; so there's always something working and testable.

Despite all the aspiration to "disruption," it seems that people don't like to step out of their comfort zones.

Making software that SHIPS is a learned and earned skill, and one that I believe, can be most effectively demonstrated with portfolios.

When a designer goes for a job, they bring with them a large black case. It's filled with their designs and working drafts. Much of the interview consists of the designer going through this portfolio with the hiring manager; discussing each example, and talking about why they did this, or why they didn't do that, etc.

No design manager in their right mind would ignore that portfolio; instead, throwing a matchbook on the interview table, and asking the applicant to "Draw Spunky."

I'm actually shocked that there is so little importance placed on software portfolios. A portfolio represents fairly substantial work product. That's why I find open-source contribution work to be so attractive.


I've often wondered as well why tech companies seem to go out of their way to ignore portfolios. First I thought it's a misguided but honest attempt to make the process look more like a blind meritocracy, to basically turn the interview into a modern version of the Imperial Examination of ancient China.

Then I read "Soul of a New Machine" by Tracy Kidder, a book that documents the design process of a minicomputer in the late '70s. The idolized manager of that project wants to hire young engineers fresh out of school so that he can mold them according to his wishes and have them working all-nighters. If that's your hiring principle, you're not going to get anyone with portfolios.

So I suspect it's an industry sickness that's been turned into a virtue over time.


The benefit of a portfolio for picking a designer is that the designer was most often the only designer on the project, so the design can be said to be theirs, whereas programmers are seldom the only programmer on a team, and a portfolio can be unclear about what they actually contributed.


I don’t think that’s true for typical commercial design work. Clearly explaining what you did as part of the creative team is a major element of presenting the portfolio.


I suppose you're right, I made the mistake of thinking the role of designer in web design was the same all around.


Also there are a lot of good people who can't show you their portfolios beyond a few sentences on a resume. And most places are hiring for group projects, not solo projects where you could make portfolio projects in your spare time.

But everyone can do a whiteboard interview!


Not everybody. I can DO them, but I pretty much guarantee that I won't do them WELL.

After a couple of these fiascos, I just gave up, and decided to start my own gig. I'm fairly good at what I do, and it really is too bad that it doesn't show in half-hour theory tests.

I do have a pretty massive portfolio, though. Comes from doing open-source software for over twenty years. I was paid to manage, not develop, so I needed to keep my tech chops up in other projects.

I also realize that this is not a typical thing. It's just my journey.

I thought it was funny that I mentioned this once, and someone insisted that I could have faked the portfolio.

If someone can fake ten years' worth of commit history, in dozens of repos, and tens of thousands of lines of code, as well as hundreds of pages of documentation, and dozens of articles, you should hire them immediately, because they are a genius.


Your username checks out here.

Maybe I won't become a specifically-software person.


Portfolios don't play nicely with NDAs.


an NDA doesn’t prevent you from taking credit for specific features


Unfortunately portfolios would make finding jobs very difficult for myself and my colleagues. We aren’t really allowed to talk about what we work on outside of what could easily fit in a vague, 1-sentence summary on LinkedIn.

This is actually frustrating me quite a bit because as I look for new jobs people like to ask, in detail, what I worked on, and I can’t really tell them.


I completely understand.

That was a big reason that I learned to draw people out when they interviewed. I asked them to "tell me stories" about projects they worked on, without getting detailed about specifics. In my experience, they were always able to demonstrate plenty of enthusiasm and creativity without revealing the family jewels. A half-hour whiteboard test would have been completely useless. I was usually able to establish a fairly comfortable level of tech knowledge fairly quickly, and the bulk of the interview was really about how well I thought they'd fit the team (NOTE TO SELF: Don't go for homogeneity).

However, this isn't a problem that's unique to our industry. My father was in the CIA, and I never found out until just a few years before his death. Lawyers have legal cases they can't discuss, doctors can't discuss medical cases in detail without violating HIPAA, etc.

Somehow or another, these folks are able to interview for jobs that often have a far greater risk than a line programmer on a major initiative.

True, some are screw-ups, and that doesn't become apparent until after they are hired, but the same goes for an engineer who can ace every problem on HackerRank but goes all to pieces when presented with 100 KLoC of spaghetti code and told to fix a problem. That's exactly what happened to me on one of my first programming jobs: 100 KLoC of 1970s-vintage FORTRAN. I fixed the bug, but had to hold my nose while doing it. I later learned that this was what everyone did.

What bothers me most is that a chief reason I'm given for these tests is that companies want to find people who can come up with innovative and unique solutions, yet the tests seem to have the opposite effect: filtering for people who only come up with standard solutions.

I remember once doing a "take home" test that asked me to apply a third-party library. I did what I always do with dependencies; I encapsulated it, and that seemed to totally freak out the interviewer. To this day, I have no idea why they lost their bottle so badly over it. My solution worked extremely well, and it also gave the problem tremendous "future-proofing." That's why I encapsulate dependencies.
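
The pattern described above, wrapping a third-party dependency behind an interface you own, might look like this minimal Python sketch. All names here are invented for illustration, and zlib merely stands in for the external library:

```python
import zlib
from abc import ABC, abstractmethod


class Compressor(ABC):
    """The interface the rest of the codebase depends on."""

    @abstractmethod
    def compress(self, data: bytes) -> bytes: ...

    @abstractmethod
    def decompress(self, data: bytes) -> bytes: ...


class ZlibCompressor(Compressor):
    """The only module that touches the dependency directly."""

    def compress(self, data: bytes) -> bytes:
        return zlib.compress(data)

    def decompress(self, data: bytes) -> bytes:
        return zlib.decompress(data)


def archive(payload: bytes, compressor: Compressor) -> bytes:
    # Application code is written against Compressor, so the
    # underlying library can be swapped without touching callers.
    return compressor.compress(payload)
```

The "future-proofing" mentioned above comes from the fact that only `ZlibCompressor` would change if the dependency were replaced.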


Re: your point about interviewers looking for their expected/standard answer vs an actually innovative one...

I take that as part of my job in interviewing the company/interviewers. I will very deliberately give a more radical, non-standard but correct answer precisely to see if they fit into that broken/fixated mindset, and to see how they deal with being challenged. How they handle that shows everything about not just how clever they are but how intellectually honest, curious, and emotionally mature they really are.


Sure you can. You can talk about the hard problems that have nothing to do with the fact that it was part of a missile guidance system or whatever. If you worked on anything non-trivial there are always some interesting problems you had to deal with be they algorithmic, engineering, human, or whatever. If people press for inappropriate details when you've given them plenty of other interesting bits then that's telling you more about them (and whether you want to work with them).


I consult some (maybe a lot). My work product belongs to the client. I am not legally allowed to share.

Some of it I would like to share (and some of it not!), but it really isn't my choice.

(Also - because I am fairly cross-disciplinary, the code for a controller to stand up a 100-ton rig is not the same as the code to monitor the growth/feed/heat/light for algae in vertical tubes. And neither is likely to be asked of me for projects having to do with creating controllers for missions related to diabetic control or scooping up space debris. The ability to analogize only goes so far.)


The big problem with using a software portfolio is that plenty of competent engineers don't have impressive work that they can share. If you don't (or can't, because of other obligations like parenting) code in your free time and aren't a recent graduate, you probably don't have much to show.

Of course, difficult whiteboard interviews that require studying to pass probably have the same problems, and give less useful information. So, :shrug:.


I know. I'm fortunate to have one. Like I said, I completely realize that it isn't usual for people to have portfolios; especially ones with the scope of mine.

I just feel that it's rather self-destructive to ignore those rare instances where they exist.


> I was a hiring manager at a big company for years. We never did coding tests, and I like to think that I made good choices every time.

I don't doubt that you did, but I doubt that everyone at your company puts in the same effort as you do. At my current company, 90% of interviewers half ass their interviews. They don't have much incentive to care, so you want interview questions that have consistent standards and produce some signal with minimal effort. Algo questions fit this bill perfectly.


> 90% of interviewers half ass their interviews

Good interviewing is difficult; it may not necessarily be because they don’t care.


> But as you grow, that ten percent of underperforming people becomes large in absolute numbers, and is very painful to deal with.

I feel that beyond hiring, our entire society has a problem with tails of distributions. There is this strong movement behind standardization of everything everywhere to minimize the variance, but that not only eliminates both tails, it also lowers the mean. So e.g. here, in order to avoid exceptionally bad hires, companies not only forgo the chance of getting exceptionally good hires, they also lower the quality of a typical hire.


> But as you grow, that ten percent of underperforming people becomes large in absolute numbers, and is very painful to deal with.

I think you could reduce the 10% to 1% simply by actively firing bad employees, and the easiest way to do that is with a probationary hiring period (let's say 6 months, maybe a year).

I am fully aware of how systemic rot can be, and how hard it can be to remove. The real problem is when you hire a bad manager, because they will not only let the rot fester but encourage it to grow.

I think you could wildly relax hiring standards for ICs and be incredibly successful. What we need are better practices around hiring managers more than anything.


Alas, for all of the focus on interview processes to "prevent" false positive hires, those same companies suck at actually getting rid of the bad hires. Some of it is rational risk aversion to lawsuits and the like, but that's clearly BS when you see just how fast people who aren't liked by someone high enough up the food chain are shown the door.

The effect on morale and real productivity of getting rid of the toxic and worst performers is amazing.


The first job I had there was a probationary period - it was good and bad. It was good because you knew everybody there was extremely talented and you could rely on your coworkers. There was a bit of an understanding about the cut throat nature of the place though, and obviously that's not for everyone. But at the end of the day, when it came time to just get shit done, there was never any problem.

I can't name that place without doxing myself, but it sounds like Netflix has a similar culture. Some people love it. Some people hate it. But I don't think many people would say they have an issue with employing less-than-stellar employees.

Of course this is only "solving" (it's certainly not perfect) the problem of hiring for engineers, which most of us are on HN (at least I assume, but certainly more engineers than engineering managers). It does nothing to address the issue of hiring bad managers. I have no solution for that, other than to make sure the first manager is excellent. A players hire A players because they want to do the best work they can, and will go out of their way to hire people better than themselves. B players hire C players because they just want to make themselves look good. Once you have a bad manager, the assumption is everyone beneath him will be as bad or worse (obviously there are exceptions).


As I mentioned in a sibling comment, the toxic (and incompetent) manager problem is fundamentally a leadership problem. The single biggest responsibility of the founders is the culture of the organization -- everything stems from that, one way or another.

Leadership, from the top down, is absolutely responsible for the mentoring, training, and environment created by management. None of the systemic problems with, and induced by, management are new. It comes down to: does leadership actually care enough to do anything about it? Companies institute all sorts of formal stuff like internal surveys and NPS scoring and so forth, but it always comes down to trust and communication, and then follow-through. And follow-through is the clearest way to create trust. All of the things that even well-intentioned leaders spout, if not followed through on, quickly erode and ultimately destroy trust. So, just like with kids, don't make promises you won't keep.


Just out of curiosity, why do you make multiple replies instead of one larger reply?


Because you made a variety of specific points and it's easier to address them specifically.


Note that I'm also not talking about companies doing forced rankings and nuking the bottom X%. That's a different form of systemic toxicity and, alas, plays into the technical industry's huge problem with toxic "geek machismo" culture.


Unless you work in a crazy place like Germany, with the extremes driven by the works councils, probationary periods are an HR gimmick. If people aren't working out, they can be let go. Have simple, clear, legal policies and simple documentation, PIPs, and most importantly managers who actually care, communicate, and help their team members succeed. The fact is that most companies and managers don't fit those criteria, and so create their own hells.


And let's not forget how scant real mentorship and training of managers is in this industry. Nor how we've set up the power/money hierarchy so that many people feel the only career growth path is to become a people manager (even if they aren't any good at it). :-(


Those "bad" managers are correctly following their incentives:

- More headcount is always better for the manager's prestige and career prospects, as long as those people contribute to project success at least a little bit (and most low performers do, just less than you'd hope). Managing infinitely many people each producing epsilon above breakeven is much better than having 1 rockstar.

- Headcount availability ebbs and flows with the business cycle. It's important to take everything you can while the money is flowing in case you might need it during a contraction. It's important to have nonessential staff on hand during a contraction so you can satisfy your X% cut requirement without compromising effectiveness too badly.

- As a manager of a high-performing division at a growing company, you will be given an integer multiple of your current headcount, and expected to deliver proportional results with it. If your strategy was "employ only high performers" you will fail, someone will be brought in immediately above or immediately below you to "help out," and within six months you'll be "spending more time with your family."

- The hiring process is hard to mess with. Recruiters put your ICs onto the panels directly; systematically leaning on them to change their votes will probably get the ethics hotline involved. But the decision to retain year over year is entirely yours.


Stack ranking was invented for this exact case, cutting the rot out when it's already metastasized.

It has a bad reputation as it's often used year on year - it's meant to be done just once or twice to get back on track after years of over-hiring and reluctant firing.


I’m a pretty good engineer, and I’m never going to join any job with a probationary period. It could turn out to be a bad manager or something and I would end up with a messed up resume. Why would I accept some kind of second-class status when Google etc. are willing to hire me with normal employment terms? I think you’d have a serious adverse selection problem if you only hire people who are willing to take such a job.


I can honestly say the place I worked that had a probationary period had the most talented team of anywhere I have ever been, and I have worked with FAANG before, as well as unicorns.

Like I said, it's not for everyone. A bad manager can ruin your experience anywhere and fire you regardless. That place was just very forthcoming about the fact that you need to demonstrate value if you want to keep your job.


On the manager side, I can't upvote that enough.

Toxic systems are always the responsibility of the managers and that's from the top down. The systemically perverse issues created by people "managing up" is one of the most critical problems that people higher up the food chain need to correct for.


> the easiest way to do that is with a probationary hiring period

How often do you actually boot someone though? For juniors we have pretty low expectations so they never fail this.


People got booted from my first place of work a lot - probably something like 1 in 5 didn't survive the probation period. The idea is to make the probation period long enough that the employee can demonstrate value. If they can't demonstrate value in that time period, they are let go.

This was not the practice where I worked, but you could have different probation periods for different roles. So someone who is primarily focused on technical debt might have a longer probation period than someone focused on rapid feature development.


"do something like pay candidates to come on site and work for a week" by doing this you've essentially cut your hiring pool down to 3 categories: new graduates; mediocre engineers who were fired from their last job; or terrible engineers who can't find a job.


Or freelancers who probably aren't looking for full time work anyways! But yeah this is exactly why a weeklong trial won't work


The other option is taking leave. In many countries people might have a week saved up, or maybe they work for one of those fancy companies with Unlimited Leave*

* conditions apply.


I would like to see data on whether these interview systems actually achieve their goal of rejecting bad candidates.

I worked at a FAANG company and saw several people get let go for underperforming. I have no idea if they could have been productive doing something else but they were clearly not productive at the tasks assigned them or that they chose to do. I also saw several other people I thought were a candidate for going down the same path before I left. Whether they got moved to something they were more productive on or whether they were showing value in other ways I have no idea.

The point being, the hiring system didn't reject these people. My guess is they either got lucky like me with easy questions, or they are good at answering the questions but not good in actual production.

For example one person that was let go was clearly smart and could make the code but they would refactor forever looking for perfection and end up taking 4x - 8x longer than others around them. Maybe in some place that's a plus but for our team we needed to ship and this person was not able to prioritize shipping and after several attempts they were let go.

Another had a task that they were taking a long time on. Others on the team gave them the benefit of the doubt that it was harder than it looked but when they were finally let go it turned out it was not harder than it looked and was finished rather quickly.

Another wrote bloated, obfuscated code that seemed to be 6x the number of lines it needed to be. Maybe they could have been helped via code review.

Yet another was one I interviewed and they passed the "can write code test" but in the entire time I was there I don't think I saw them submit a single PR. I have no idea what was filling their time.

I don't know if that ratio of unproductive candidates is better or worse than anywhere else I've worked on average. Pretty much every place I worked before FAANG was 100 people or less and my team 30 people or less and those teams were 10 engineers or less. There were always 1 or 2 engineers in the company known for being slow or writing bad designs. Sounds about the same but I didn't measure.


> I will say though, the problem is one of “standardization” across an organization where it’s too big for everyone to fit in a room.

I think you've got a lot of this right (disclaimer: we've built the product I think you're describing)

I don't think the most important problem is standardisation though; it's observability/instrumentation, i.e. if you don't measure what's working, you can't improve things.

The very best tech companies measure quite a lot, and often look back at their hiring processes in the event of a mis-hire to figure out what went wrong and how they can avoid the same happening in future... but even then they only do that in exceptional cases because it's done fairly manually. That means they have low statistical significance and a stuttering cycle of learning.

I believe they should be constantly looking at what's working well, for every hire. So that's what we built.

Once your hiring pipeline is trivially visible, a lot of these questions go away. You can see what's working well and try new things in safety, you can optimise with your eyes wide open.

One thing we did straight away was to deprioritise CVs and replace them with written scenario-based questions relevant to the job. If managed properly that takes your sift stage from a predictive power around r=0.3 to a performance we find typically above r=0.6. Far fewer early false negatives makes your hiring funnel (a) less leaky, (b) more open to pools of talent previously ruled out by clumsy CV sifting, and (c) potentially shorter, as the improved sift accuracy allows companies to consider dropping their phone interview stage(s).

Our NPS rating for HR teams is currently running at 85, and MRR churn is under 1% so there's clearly some value to the approach.


If you can't stop bad people from accumulating, you will have a problem no matter how selective your hiring process is.


Can't you just unhire them?


Yes but there's all sorts of silliness that people and companies need to unlearn so they can do this effectively and efficiently.


Firing is too hard for humans, and too easy for bureaucracies.


Hesitation to fire is real.


>But I think that is far too onerous for candidates (and a big company) to have legs.

Actually I did exactly this as a candidate at one place, for an afternoon. I left after 2.5 months. :-) In the end I felt a bit sorry for them, since they put a lot of effort into the process, being a really small company. On the other hand I really didn't want to stay there. It was technically really interesting and I could contribute, but the business was run really badly IMHO.


It's a really simple problem, honestly. Make a standardized test, but don't put an hour time limit on it. If you gave the exact same algorithm test with a 24-hour time limit instead of a 1-hour limit, then had a 1-hour interview in which the candidate explains their solution, you would be testing for something closer to programming acumen than memorization. I have no idea why tech companies find this so challenging.


The key bit is getting people to explain the answer, not the answer itself.

I have had interviews where the company has done that. I would rate all of those interviews to be very high quality. No nonsense trick/leading questions. Just simple: can this person actually program? What does he understand? What doesn't he understand? I felt like it was obvious what I could and could not do...there was nowhere to hide.

Btw, in terms of investment, this seemed far cheaper for the companies too. What is cheaper: arranging an hour-long interview where you go through a simple, focused problem that will clearly identify knowledge, or an hour-long interview where you probe someone randomly about their life and projects the interviewer has no idea about? (Funnily enough, no matter how good you are at communicating ideas, interviewers almost always get it wrong.)


Cheating? Even with phone screens, some candidates still cheat at these interviews (have some engineer with them answering questions, or looking up similar questions online).

I agree a full blown project over 24 hours would be better, but it's more costly to create questions and score them (if you want to do it in a way where cheating is hard/impossible). I've seen startups do this, and it works well for them, but might not be scalable for companies that hire thousands of employees each year.


What even is cheating on an algorithm problem? Looking up an answer online? Asking someone in your social network to help you work through the problem? Because that's what you actually do in the real world. As long as you can explain and defend your solution, what does cheating even mean?


That's an idealistic way of looking at things.

People would look up solutions to that exact problem; it's very hard to come up with a unique problem that was never asked before, and questions leak pretty quickly (especially at larger companies).

The goal of a test is to evaluate whether you'd be a good employee; I agree algorithmic questions are not representative of day-to-day work, but evaluating your friend's ability to help you is out of scope. Companies want to hire someone that has decent programming skills; you can't rely on other people to solve all your problems, you need to have a minimum level of skill.

[again, playing devil's advocate here; we all agree the process is suboptimal, but let's not ignore the negatives of some of the alternatives suggested here]


How many things did you learn by seeing or being taught the solution, and how many did you re-invent, as you're basically asked to do at an interview? To give maybe a bad example: I don't know how to do a quicksort off the top of my head. I've written it maybe twice in my life, either after being taught by a teacher or by looking it up on Wikipedia. I didn't have to think of the solution on the spot. I'm not saying I can't think of solutions, but still, the majority of my knowledge comes from having been taught and then using that taught knowledge over and over, not from re-invention.

Note I agree we have the same goal: to hire people who can do the work, not people whose friends can do the work. But I'm not convinced that the typical interview puzzle (find the longest segment of an array that contains 3 values) has anything to do with actual work skills. Maybe it would be better with more typical tasks? If the person claims to be a back-end engineer, ask them to write a query to select all people between the ages of 30 and 35 in Nebraska. If they are front end, ask them to implement a select/option UI.
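
The back-end task suggested above could be as small as this Python/sqlite3 sketch; the schema and sample rows are invented for illustration:

```python
import sqlite3

# Toy version of the suggested screening exercise.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT, age INTEGER, state TEXT)")
conn.executemany(
    "INSERT INTO people VALUES (?, ?, ?)",
    [("Ann", 32, "NE"), ("Bob", 40, "NE"), ("Cy", 31, "CA")],
)

# BETWEEN is inclusive on both ends, so this covers ages 30 through 35.
rows = conn.execute(
    "SELECT name FROM people "
    "WHERE age BETWEEN 30 AND 35 AND state = 'NE'"
).fetchall()
```

A five-minute task like this shows whether a candidate can actually write the queries the job requires, which is the point being made above.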


> Cheating? Even with phone screens, some candidates still cheat at these interviews (have some engineer with them answering questions, or looking up similar questions online).

The only difference between these people and those who pass normally is that the latter group looked up the answers to the problem before you even asked. Nobody comes up with things like mods of DFS or n-pointer problems on the spot. These problems were the subjects of doctoral dissertations the first time they were solved. We pass them because we've seen problems just like them. The system has been gamed and doesn't work.


My solution is smaller companies - that way the underperforming ones can be eliminated more easily.


At Amazon, perhaps one of the largest and most successful employers, the solution to this is that when interviewing, everyone's main goal is to hire people better than the current average of the team.

To ensure this, someone experienced with interviewing from outside the team/organization, a so-called "bar raiser", is always added to the interviewing panel. This person is there to ensure high-quality hires and has hiring veto power.

The rest of the interviewers are from within the team, likely under pressure to compromise on quality when they're desperate to hire someone, but they can't do it unless the bar raiser agrees.


Amazon phone screens you by saying: "Get out a notebook and a pencil. Write down the code in C to flatten a binary tree into a sorted list, and then read it back to me over the phone."

If this is raising the bar, no fucking thank you.


Seriously.


> At Amazon, perhaps one of the largest and most successful employers, the solution to this is that when interviewing, everyone's main goal is to hire people better than the current average of the team.

At this point it's Kool-aid. In practice bar raisers have so much more interview experience that you can very easily pick them out, but so much of their behavior results in simply weeding out the bad.

I have received LOTS of offers from Amazon over more than a decade (only one accepted, at one point), and my friend used to be a bar raiser. In his first interview at a new company he was both rude to an interviewee and flat-out technically wrong on multiple points.

You can't raise the bar if you can't pick the bar raisers.


FWIW this wasn't my experience when interviewing with Amazon; I felt the bar was much lower than other companies I interviewed with, even when I wasn't that good at algorithmic problems.

That's just one data point, and maybe what you're describing is for more senior hires?


I've found a generally higher bar than places like Microsoft but definitely a much lower bar than other FAANG companies, unicorns, and even tiny-but-potentially-elite startups run by people who know what they're doing.


Agreed, Microsoft was an even lower bar... but in a good way: I felt they just wanted someone with basic CS knowledge, whereas the Amazon interviewers seemed to think they were some sort of elite group of engineers.


A federation.

To be honest, if a large company has multiple departments that essentially govern themselves, perhaps they could just give them proper names, as if they were subsidiary companies? Because it seems to me that the mechanism behind the "best performers leave when they feel the average performance of workers in the company goes down" is really bounded by the company name. I.e. people in department of frobnification leave because of the people in department of foobing bars. But people in Frobnificator perhaps wouldn't leave because of people in FooBars.io, even if legally/administratively the two situations were equivalent.


No, it's not that. It's that often, to make a new product you have to have multiple parts of the business work together. Then you will find yourself working with the other department and they'll frustrate you.

Some places attempt to solve this by saying that you can use whatever vendors you need to solve the problem but often the best place to solve it is the other department since the solution is infeasible without them.

The problem is that often the solution is infeasible with them because the guys who made them good have left and now you only have the lameos.

As to why they can't operate as independent business unit? Well, there's the Theory of the Firm to tell you why that doesn't work often.


The tech industry has unfortunately adopted the methodology of centralized hackable tests

I think the opposite is the case. Every industry has some sort of test to get into it. But the other industries are the most centralized and hackable ones. The tech industry is doing the best job of any of the top industries of having an application process that is open to skilled people who don't match the "centralized" standard.

Your grades in college, the experience on your resume, those are "centralized" tests. You have one resume and you send it out to everyone.

When companies give programming tests to applicants, each company is free to measure the applicants' skill in any way they see fit. One might ask math puzzle questions, but another company might give take-home Rails projects. And companies are free to just do a "soft" interview and look at your resume if they want.

Facebook hires far more software engineers with no software engineering education than Chevron hires chemical engineers with no chemical engineering education.

Plus, starting a company is open to anyone with a credit card to open an AWS account. No interview required. Just make something people want.


Haha here is the email i have from my facebook interview thats coming up.

"Your initial Facebook interview is coming up and we want you to ace it!

Here are some tips and resources so you know what to expect and can prepare adequately. Preparing is key.

Our initial interview determines whether to continue with a full series of onsite interviews. This initial interview is primarily a coding interview that will take place between you and a Facebook engineer.

It's important for any engineer to brush up on their interview skills, coding skills and algorithms. Practice coding on a whiteboard or with pen and paper, and time yourself. Preparing increases your odds significantly!

Below are two helpful links to a Facebook Interview Prep Course led by Gayle Laakmann McDowell, author of “Cracking the Coding Interview”. Use password FB_IPS to access the videos.

- Cracking the Facebook Coding Interview - The Approach

- Cracking the Facebook Coding Interview - Problem Walk-Through

This article provides much more advice about how to prepare: Preparing for your Software Engineering Interview at Facebook

Here's a sample problem to get started.

Sample Problem

Write a function to return if two words are exactly "one edit" away, where an edit is:

- Inserting one character anywhere in the word (including at the beginning and end)

- Removing one character

- Replacing exactly one character

Most importantly, you can view this and other Facebook sample interview questions and solutions here.

Want more sample questions? Try HackerRank, LeetCode, and CodeLab. Be sure to practice questions in a variety of subjects and difficulty levels."


> Write a function to return if two words are exactly "one edit" away, where an edit is: inserting one character anywhere in the word (including at the beginning and end), removing one character, or replacing exactly one character

I've been writing code my entire life, and I've been employed as a software developer for longer than I'd care to admit. I don't have a quick answer to that problem. I know there's an algorithm for computing the "distance" between two strings, so this is probably about that. I have no idea how to write it, and it would never come up in my job, and if it did I'm sure I'd import a library to do it.

After searching it, this is what they're talking about https://en.m.wikipedia.org/wiki/Levenshtein_distance

That means my only chance of getting this problem correct was either:

1) I prepared for it by studying the bank of questions and memorized the formula for Levenshtein distance.

or

2) I considered this problem for the first time during the interview, was immediately struck by the same insight that mathematician Vladimir Levenshtein had in 1965, and developed the algorithm on a white board in real-time.

At least they're transparent that if I do lots of HackerRank, LeetCode, and CodeLab then I'll be able to pass their test. I suppose that's better than only those "in the know" realizing how to be employed at FB.

I really hope I never have to do another job interview again.


You don’t need edit distance. The solution is literally a for loop and three if statements. The fact that you can’t stop and think for a few seconds about how you might solve this means you’ve been on autopilot for many years now. This is a proxy test to weed out people who work on autopilot and never think.


Are you proposing brute forcing it? Depending on requirements that might be fine, but I doubt FB would like that kind of solution. If they did, they'd have a different kind of question.


This isn't "brute force." It's solving the specific problem posed rather than a generalization, which in this case is both easier and more efficient (since there are cases in which you could terminate the loop early).


Thanks! I completely misunderstood what the commenter I replied to was saying. Not understanding how to solve it well, I assumed they were referring to the first option with loops and tests that popped into my head: generate every possible one-edit string and see if the other word is one of them.


First loop over the first word building a character-count map, then a second loop over the second word subtracting counts from that map. See what's remaining at the end: either one char left in the map, or one char left over from the second word. So O(m + n).

abc, adc

a - 1

b - 1

c - 1

after the second word's loop:

b - 1, plus d left over


I might be misunderstanding, but doesn't that treat 'abc' and 'cba' as distance 0?

Regardless, I was also thinking Levenshtein is sub-optimal, and that you can probably solve it in O(n+m).

Even if you go for Levenshtein first, you should identify a simple modification to stop calculating the matrix early if the distance is necessarily greater than 1. Get a bit fancier and you can just 'go down the diagonal' and greatly optimize the algorithm.
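To illustrate (my own sketch, not the commenter's code): the early-termination tweak is just the standard Levenshtein DP computed row by row, abandoning the matrix as soon as every entry in the current row exceeds 1, since entries never decrease as rows are added. I'm assuming here that identical strings (distance 0) count as within the bound.

```python
def within_distance_one(s: str, t: str) -> bool:
    """Levenshtein DP with early termination for the 'distance <= 1?' question.

    Entries in the DP matrix never decrease down the rows, so once every
    entry in a row exceeds 1, the final distance must too, and we can stop.
    """
    prev = list(range(len(t) + 1))          # row 0: distance from "" to t[:j]
    for i, cs in enumerate(s, start=1):
        curr = [i]                          # column 0: distance from s[:i] to ""
        for j, ct in enumerate(t, start=1):
            curr.append(min(prev[j] + 1,                 # delete cs
                            curr[j - 1] + 1,             # insert ct
                            prev[j - 1] + (cs != ct)))   # replace or match
        if min(curr) > 1:                   # early exit: distance can only grow
            return False
        prev = curr
    return prev[-1] <= 1
```

The same trick generalizes to any fixed bound k by comparing against k instead of 1; since only a band of roughly 2k+1 cells around the diagonal can stay within the bound, you can also skip the rest of each row entirely, which is the "go down the diagonal" optimization.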


ah yea, sorry i misread the question.


That only works up to permutations.


Many of these sorts of problems seem intimidating at first, but if you’ve had any training / exposure to implementing recursive solutions (i.e. a bit of SICP), the solutions come much easier because you are trained to think in terms of “what is the tiniest amount of work I can do to advance the problem by one step?”

In this case, all we need at each step is to look at one character plus one lookahead character from each string. We’ll call them a and b for string 1 and c and d for string 2.

If a and c are equal, advance.

If a and d are equal, delete c and advance.

If b and c are equal, delete a and advance.

If b and d are equal, change one of a or c so they match, then advance.

Once you use up an edit, the algo changes to “a and c must match” for the rest of the string.

Edit: terminate the algo when one of a or c is a NULL char. If they are both NULL, the string passes. If only one is NULL, then your one edit allowance was not enough to make the strings equal.

Also, rather than “deleting” a char, it is sufficient to simply advance that pointer an extra step.
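The walk above can be folded into a short two-pointer routine. A sketch of that idea (my own illustration, not the parent's exact pseudocode: it merges the four cases into a single mismatch branch, and it treats identical strings as zero edits, hence a fail for "exactly one edit"):

```python
def exactly_one_edit(s: str, t: str) -> bool:
    """True iff t can be made from s by exactly one insert, delete, or replace."""
    if s == t:
        return False                    # zero edits away, not one
    if abs(len(s) - len(t)) > 1:
        return False                    # one edit changes length by at most 1
    if len(s) > len(t):
        s, t = t, s                     # make s the shorter (or equal) string
    i = j = 0
    used_edit = False
    while i < len(s) and j < len(t):
        if s[i] == t[j]:
            i += 1
            j += 1
            continue
        if used_edit:
            return False                # a second mismatch needs a second edit
        used_edit = True
        if len(s) == len(t):
            i += 1                      # replace s[i] with t[j]
        j += 1                          # or skip t[j] (the inserted char)
    # A single unmatched trailing char in t is the one allowed insert.
    return j == len(t) or not used_edit
```

Because both pointers only move forward, this runs in O(min(m, n)) time and O(1) space, and it naturally terminates early on a second mismatch.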


Look up "edit distance"... it is one of the first problems an Algorithms course will discuss that can be solved by dynamic programming. But ya, no way anyone is coming up with that unless they have seen it before.


I 100% would be able to solve this problem without seeing it before. Yes, this is because I've practiced a lot of algorithm problems (and I enjoy them, frankly).

Yes, this is hackable, but relying on resumes like every other industry (which is implicitly relying on hoop-jumping, college prestige, and GPA) is a much worse form of hackable. This levels the playing field and is a signal for (1) how smart someone is and (2) how much they prepared for the interview, both of which are weakly related to the job. Further, people who excel at algorithms are very unlikely to be bad at the kind of work done at big tech.

It's not perfect, but finding something better at scale is hard.


I don't think it is a proxy to how smart someone is. I will take point (2) tho.


An algorithm for "do these strings differ by an edit distance of at most 1?" is much simpler, and should be easy to figure out. No dynamic programming required.


This is something I would expect a freshman CS major to be able to solve.

It only requires knowledge of looping, conditionals, and how to work with strings in your language (e.g, indexing, finding length, maybe substrings, etc.).

This is like a less insulting version of fizzbuzz.

People are way overcomplicating this. Any professional programmer could solve this.


This is an initial phone screen, and in this context it's a valuable approach. This is one level past fizzbuzz. 'Can you code something past "hello world"?' is what they're looking for. There's nothing worse than an onsite with somebody who just cannot code at all.


The particular example is bad, though, because there's room for a massive algorithmic insight. There's an obvious-enough but terribly inefficient way to do it, and a fancy efficient algorithm that either you know or you don't. If given this question I would muddle through the inefficient way, aware that there must be a better algorithm out there that's eluding me, and I'd probably get a "vaguely pass" mark, but the kid fresh out of college who has spent the last three months doing leetcode study will regurgitate the correct solution and pass with flying colours.

(Once in a while maybe a complete genius takes the test and reinvents the good algorithm on the spot, but only gets the same grade as the diligent student, for his effort.)

Too many basic coding screens seem to rely on some algorithmic insight. I would rather give a candidate a complicated-but-straightforward problem (like a super-fizzbuzz that takes half a page to describe, with a bunch of complicated rules and exceptions to those rules) because that's closer to what a real world programming task looks like. No room for a clever insight: just translate the problem statement, carefully, into code, and test whether it runs correctly.


> because that’s closer to what a real world programming task looks like

To make it even closer, you'd have to have people coming in at various points with contradictory updates to the task, impossible requests, an overbearing managerial diktat, etc.



For the second one:

> Please solve it without division

This is just an absurd constraint…


That's good; they're trying to put everyone on a level playing field given the system they use.


I think I blame Google, because it was started by a couple of graduate students.

I don't know how it was at Stanford, but at Berkeley in Physics there was a series of big honking tests ("Qualifying Exams") that dominated our attention the first two years. You were expected to know all of undergraduate Physics. Fail it, and you're out.

Grad students don't know any better. "How do we hire the best people? I know - they'll have to pass a big hard test, just like we did."

Then everyone else goes, "How do we hire the best people? I know - we'll do it like Google does, they seem pretty successful."

Ugh.

No, I don't know that it really went like that. I just suspect it.


Stanford physics PhD student here. We actually don't have qualifying exams, but my unpopular opinion is that we should. Physics is a lot of fun to study. The undergraduate and core graduate curriculum show off all its greatest hits. Why not learn that by heart? I spent two years doing that, it pays off every day in research, and I'd have been happy even if it didn't.


There are alternatives but they are not necessarily better. E.g. academia seems to rely on references and publications for hiring. This seems closer to your wish of caring about "how good you have been". But consider the downsides - it is inherently more insular (to work for a famous professor you need a reference from someone, preferably also a famous professor) and also your thesis advisor can easily ruin your whole career.

Contrast that with our industry, where a bright kid out of nowhere can study for the (imperfect, hackable, dehumanizing - I agree!) interview and have a realistic shot at that job at FAANG. At least I hope it still works this way, although of course it helps to be a Stanford graduate!


When I started preparing for interviews, I spent most of my time studying general data structures & algorithms and practicing applying them to different problems. I don't think this is so bad, and it can lead to a lot of learning you wouldn't do otherwise.

The frustrating thing is how easy it is to hack the test. After one particular interview, I remember talking to a friend about how hard the technical interview was. He told me he had already seen the problem and knew it would come up, because he had bought LeetCode premium for the interview. Kind of frustrating to spend hours and hours learning data structures and algorithms when the real key to success is getting lucky and memorizing the problem beforehand.


Leetcode premium doesn't mean you'll see the problem. I suspect the probability of that is low, given these problems are constantly changing (there's a banned problem list at Google) and the interviewer pool is as well.


This is exactly what I said in my retweet to PG's essay. Having hackable bad tests in the very tech industry proves the point that artificial tests are still the way of thinking for many and something we collectively need to unlearn. The question is how.


When I see a company conducting these kinds of tests during interviews, it's a signal to me that they are probably not a very forward-thinking company, and probably not a good fit for someone like me who tends to think outside the box. If it's another cog in their machine that they want, these kinds of interview tests are probably pretty good. But if they're an innovative company trying to change the world, they'd be much better off with a different, more pragmatic approach designed to attract the independent thinkers.

Beyond that, in my opinion, nearly anyone with average programming experience can hack a typical programming/whiteboard interview, so these kinds of interviews are definitely not the best way to find the best fit for particular roles. It's a good way to filter out the inexperienced, for sure, but it's a pretty low bar to set, especially if it's a company looking for top tier engineers, people with the ability to see the bigger picture.

It's also likely that a great engineer would pass the test with ease, but because their skillset and abilities were inaccurately assessed by the poor interview process, the company misses the chance to place them in a much more effective role where both the company and the employee would benefit to a much greater extent. Overall, relatively speaking, the "hackable test" approach wastes time and hurts everyone in the long run, both employers and employees. A little investment up front in more personalized, specific interviews can go a long way.

I think we should start refusing these impractical interview processes, or at the very least, from an interviewee standpoint, turn down jobs (if you can) and let the potential employer know that their interview process is the reason.


> Beyond that, in my opinion, nearly anyone with average programming experience can hack a typical programming/whiteboard interview

I agree with your conclusion that all it does is filter out inexperienced/incompetent candidates. This is valuable, but to me it's the "first question" - can they code?

I don't think there's really much value between an "amazing" coder and a "can get the job done" coder. Very rapidly you're trying to answer much softer questions like "does this person work well with others?" or "how fast can they learn new systems?" and there isn't a good way to do that.

The interview question I usually give is algorithmically very simple. There's no linked lists, no graph theory, none of that. What I actually look for is mostly how they interact with me and how they go about solving it.

There's a pretty straightforward gotcha that more or less everyone hits. If the candidate hits the gotcha and then keeps piling on special cases without ever taking a step back, that's a big minus. If they keep doing it even after I suggest there's a simpler solution, that's worse still. I much prefer somebody who takes a step back but can't quite figure it out without a little more help than one who just remains confident in their solution no matter what.

Yeah, it's inherently shitty that I have 45 minutes to figure out if this person will benefit the org for years to come, but there's a lot more signal there than just "did they open their CS 201 algorithms textbook in the last few weeks?"


> it's a signal to me that they are probably not a very forward-thinking company

Can you give me an example of a forward thinking company according to you?


>When I see a company conducting these kinds of tests during interviews, it's a signal to me that they are probably not a very forward-thinking company,

Can you list all 'forward thinking' companies for the benefit of all of us?


huh?


Sorry, I responded to the wrong thread.


I agree with all you said. However, 99% of major tech companies employ exactly the same process for hiring software engineers. I've been on hundreds of interview loops, and unfortunately just one bad interview in a loop of 5-6 can make the candidate look like a "bad" fit and thus receive no offer. That particular candidate might in fact be a great fit, but the way they were tested could not reveal that.

I don't know the solution to this problem. Eliminating tests altogether and just talking with the candidate about their experience and probing their knowledge on different topics is not efficient either. There are a lot of talking heads around who, when given a simple task, fail miserably.

We as a tech community need to come up with better ways to assess other people's competency while also making sure those people fit within our company's culture, work efficiently with others and after all create value. This is a hard problem. So, we try to simplify the problem by imposing the "proven" way of finding such people -- give them arbitrary tests and hope they pass them.


loh, can you list all the 'forward thinking' companies that you have dealt with in the past? I'm sure other people will be interested too.


I work at a very large company (IBM) and I refuse to interview people that way. My track record for great hires (both full-timers and interns) is arguably impeccable. No whiteboard or brain teaser quizzes involved. I wrote a little about my interviewing approach in my latest "hiring" post: https://programmingzen.com/new-ibm-internship-positions-in-m...


Software engineering interviews are the worst offenders of this "test for grades" approach to hiring. On the other hand, programmers are spending huge amounts of time, money and energy on hacking these coding interviews just to get a job. The situation is horrible and needs immediate disruption.


As PG implies, the work environment at big companies consists of more hackable tests - so (ironically) that type of interview might be a proper test.


I loathe all types of coding or design challenges when interviewing! I once designed and coded a five-page website... it took a while, yet I never heard back from the company. So I'm going to seek out opportunities that don't force me to waste my time, especially on design, as it's subjective. If you liked my portfolio well enough to consider me, let's chat and see if we jive / I'm a good fit for you/your team/company!

So far I've been very fortunate that recruiters reach out on LinkedIn and I rarely have to do such time wasting activities as they vouch for me.

Recently I dealt with a company that pays parity (everyone makes the same; no woman, man, etc. can negotiate their worth), had five to six interviews, and gave two to three coding/design challenges. Wow, I guess they are looking only for the subset of talent who would put up with all that. People might do that for a FAANG company or when jobs in the field are scarce, but this was no FAANG company, and thankfully there is still a good amount of demand!


There was a time when computers weren't as fast and libraries were not as high-level/user-friendly, and you were required to know these things in order to get things done.

Today, unless you are doing things at scale (tiny fraction of startups), you don't need to know how to make things run in the most optimized way possible.


I think testing algorithmic questions is certainly evil, but it's a necessary evil, because all other methods are either too time-consuming or unrealistic. You wouldn't believe how many clearly unqualified applicants a big company gets, and there has to be some quick-and-dirty rule to filter them. Perhaps it's the aura of FAANG companies that makes unqualified applicants try, because they have nothing to lose. My experience is that just about 1% of interviewees can actually confidently write a depth-first search and explain its time complexity. If they can't even do that (which is basically among the first things schools teach in a CS curriculum), how do you trust that they have the skills to design and implement even more complicated systems?
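For reference, the baseline being described really is small. A generic sketch (assuming a graph represented as a dict mapping each vertex to a list of neighbors; nothing here is specific to any company's question bank):

```python
def dfs(graph, start):
    """Iterative depth-first search over an adjacency-list graph.

    O(V + E) time: each vertex is expanded once, and each edge is scanned
    once when its source vertex is expanded.  O(V) extra space.
    """
    visited, order = set(), []
    stack = [start]
    while stack:
        node = stack.pop()
        if node in visited:
            continue                # already expanded via another path
        visited.add(node)
        order.append(node)
        for nbr in graph.get(node, []):
            if nbr not in visited:
                stack.append(nbr)   # a node may be pushed twice; the check above dedupes
    return order                    # vertices in DFS visit order
```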


> I think testing algorithmic questions is certainly evil, but it's a necessary evil because all other methods are either too time-consuming or unrealistic.

> write a depth first search and explain its time complexity.

Part of our test goes "here's a function, in plain English what does it do?", and then "there's a few bugs in this implementation, what are they and how can they be fixed?"

I think there's a lot of alternatives to traditional algorithmic questions that just don't usually get considered. This one I liked because it involves things you'd actually do day-to-day: Reading other people's code, figuring out intent, and finding/fixing bugs.


I've never done that (self-taught dev), but I've set up SSO, terraformed production systems, and written ETLs for system-critical data... I'm not an amazing developer, but I get things done. That test would filter me out simply because I've never tried it before.


I totally understand. Different job expectations. Whenever I tell people I actually need to write dynamic programming in my job people are always incredulous. To many, dynamic programming is a technique they learn only to pass interviews and not use in real life. It's really that if you don't know certain algorithmic stuff, you can't actually work on the lowest levels of the stack, but not all companies expect their developer to do that.


> My experience is that just about 1% of interviewees can actually confidently write a depth first search and explain its time complexity. If they can't even do that (which is basically among the first things schools teach in a CS curriculum), how do you trust that they have the skills to design and implement even more complicated systems?

My experience with former BigCo engineers I've had to work with is that 1% of interviewees can actually confidently design a relational data model from scratch for a weekend MVP of basically any major web service you could think of. If they can't even do that (which is basically among the first things you do as an engineer standing up a service from scratch), how do you trust that they have the skills to design and implement even more complicated systems?

See, I just took what you said and replaced a couple of key phrases. But the truth is, when it comes to designing and implementing even more complicated systems, my passage (while reductive and still arguably wrong) makes a lot more sense than yours. I'm not saying this to pick on you, just to say that you probably want to revise what you say at the very end. Instead of "how do you trust that they have the skills to design and implement even more complicated systems?" you should really be phrasing what you're looking for as "how do you trust that they have the skills to maintain and build efficient modules at Google scale?", because there are plenty of "even more complicated systems". The truth is that the grunt-work architectures of scaling a company from $0 to $100M, or $100M to $1B, or $1B to $10B, are far more important than a depth-first search. The fact that you leapt to DFS rather than something more widely applicable shows exactly what's wrong with this mentality. It's not that you're incorrect; you're not even wrong. You never got to the point of knowing what the right question is.


> which is basically among the first things schools teach in a CS curriculum

I'm pretty sure that didn't come up until at least my 2nd or 3rd year - which was 25+ years ago (and I've never had to implement it by hand since.) You seem to be biasing your selection criteria towards "new/very recent graduates"?


Sure, but what's the alternative?

I know big companies do a lot of A/B testing with interviews and change their approach over time (Google stopped asking why manholes are round and stopped looking at SAT scores because the data showed those things had almost zero correlation with performance).


> Most big tech companies don't care about how good you have been at delivering some value through creating software: they want to see you deliver a very specific type of performance at a whiteboard

PG specifically points out that success within a large tech company is predicated on bogus hackable tests. So it makes sense that their selection criteria should also be a hackable test. The better you do on that artificial selection criterion for admission, the better you're likely to do on the artificial selection criteria of internal career progression. So in this particular case, the hackable admission test is a good proxy.


I've been curious for a while about the inflection point between the more informal startup programming interview style and the bigco interviewing style, and, while there's obviously variability between orgs, whether it comes with headcount, customer profile, fundraising milestones, etc.



