Maybe I’m a slow reader, but reading, understanding, and paraphrasing the response seems like it would take enough time to be awkward and obvious as well.
I’m not sure why anyone would want a job they clearly aren’t qualified for.
As an interviewer, if a candidate can use chatGPT to give a better answer than other candidates, I'm not gonna mark them down for use of chatGPT.
It's a tool, and if they can master it to make it useful, then credit to them.
Alas, ChatGPT seems to be a jack of all trades but master of none, which is gonna make it hard to pass my interviews, which test very specific technical skills.
> As an interviewer, if a candidate can use chatGPT to give a better answer than other candidates, I'm not gonna mark them down for use of chatGPT.
Tool usage is what separates us from animals and is generally ok where tools are available/expected, but in this case I think you misunderstand which tool we're talking about. The tool involved isn't actually chatGPT, it's more like strategic deception. Consider the structurally similar remark "as a voter, if a candidate can use lies to represent themselves as better than other candidates, I'm not gonna mark them down for use of dishonesty".
The rest of this comment is not directed at you personally at all, but the number of folks in this thread who are extremely eager to make various excuses for dishonesty surprised me. The best one is "if dishonesty works, blame the interviewer". I get the superficial justification here, like "should have asked better questions", but OTOH we all want fairly short interview processes, no homework, job-related questions without weird data-structures-and-algorithms pop quizzes, etc., so what's with the double standards? Hiring/firing is expensive, time-consuming, and tedious, and interviewing is also tedious. No one likes picking up the slack for fake coworkers. No one likes being lied to.
> the number of folks in this thread who are extremely eager to make various excuses for dishonesty surprised me
Not me. I see it all the time, online and offline. I suspect they think it confers status on themselves, but what actually happens is honest people wind up shunning them.
I assume you know this is just an expression, and you know that I know that animals indeed use tools. So I'll refer you to community guidelines https://news.ycombinator.com/newsguidelines.html
Apologies, I had no idea that you used this as an "expression". I have only heard this from people who believed it. Also, I don't think this is a good "expression"; at the very least it's misguided, but mainly it's scientifically constraining.
As for the guidelines, I think I could quote it back at you.
The problem is that a good interview is only vaguely related to getting a good employee. Anyone can ace an interview and then slack off once they have the job.
If someone aces the interview using an LLM and then does good work using that same LLM, then why should the employer or other employees care? The work is getting done, so what's the problem?
Compare a shitty worker to a deceptive one using an LLM. They both passed the interview and in both cases the work isn't being done. How are those two cases different?
Your hypotheticals are all extremely unlikely. People who ace interviews are usually good, and people who lean on stuff like ChatGPT aren't. I'd also rather not have someone dumping massive amounts of ChatGPT output into a good codebase.
>what's the problem?
Using an LLM is akin to copy/pasting code from random places. Sure, copy/paste can be done productively, except ChatGPT output comes completely untested and unseen by intelligent eyes. There are also unsolved copyright-infringement issues via the training data, and a question as to whether the generated code is even copyrightable, as it is the output of a machine.
People who ace interviews are people with practice. That means they're either at the tail end of a long string of unsuccessful interviews, or they're perpetually interviewing and will leave you as fast as they came in.
Find someone with a great resume and horrible interview skills. Chances are they have been working for years and are re-entering the job market for the first time in a long while. You are one of the first stops in their interview process. Grab them right away, because once they start getting even slightly good at interviewing, someone else will snap them up and realize they got a 10x (whatever that means to that company).
You'll never find that 10x if you are looking at interview performance unless you can compete on price and reputation.
You don't have to guess if someone is entering the job market for the first time. You can just look at their resume.
Interview skill is not some monotonically increasing quantity. It very much depends on how the question hits you and what kind of a day you've had. Also, it somewhat depends on the interviewers' subjective interpretation of what you do. If you're more clever than them, your answer may go over their head and be considered wrong. They might also ask a faulty question and insist it is correct.
I'm not great at interviews myself. My resume is decent, but the big jobs usually boil down to some bs interviews that seem unnecessarily difficult to pass. I don't practice much for them, because I feel like it mostly depends on whether I've answered a similar question before and how I feel that day. I also often get a good start and just run out of time. I've found that sometimes interviews are super hard when the interviewers have written you off, as in you presented poorly in an earlier session and they are done with you. Also, when there is zero intention of hiring you generally, like someone else already got the job in their minds.
> does good work using that same LLM then what should the employer or other employees care?
Maybe I'm wrong, but I find it very hard to believe that anyone thinks the "good work" part here is actually a practical possibility today. Boilerplate generation is fine and certainly possible, and I'm not saying the future won't bring more possibilities. But realistically anyone that is leaning on an LLM more than a little bit for real work today is probably going to commit garbage code that someone else has to find and fix. It's good enough to look like legitimate effort/solutions at first glance, but in the best case it has the effect of tying up actual good faith effort in long code reviews, and turns previously productive and creative individual contributors into full-time teachers or proof-readers. Worst case it slips by and crashes production, or the "peers" of juniors-in-disguise get disgusted with all the hand-holding and just let them break stuff. Or the real contributors quit, and now you have more interviews where you're hoping to not let more fakers slide by.
It's not hard to understand that this is all basically just lies (misrepresented expertise) followed by theft. Theft of both time & cash from coworkers and employers.
It's also theft of confidence and goodwill that affects everyone. If we double the number of engineers because expectations of engineer quality are getting pushed way down, the LLM-fakers won't get to keep enjoying the salary they scammed their way into for very long. And if they actually learn to code better, their improved skills will be drowned out by other fakers! If we as an industry don't want homework, 15 interviews per job, a strong insistence on FOSS portfolios, lowered wages, and lowered quality of life at work... then low-effort DDoS, in both interviews and code reviews, should concern everyone.
The premise of my comment was: if a person passes an interview using some tool and then uses that same tool to do the job, then didn't the interview work?
You found a person (+ tool combo) that can do the job. If that person (+ tool combo) then proceeds to do the job adequately, is there a problem?
If you present a scenario in which a person passes the interview and then doesn't do the job, then you are answering a question I didn't ask.
To your scenario I would respond: the interview wasn't good enough to do its job. The whole point of the interview process is to find people (+ tool combos, if you allow them) that can do the job.
That's not the point I was making. The full quote is:
>Anyone can ace an interview and then slack off once they have the job.
That is, a person can pass an interview, get hired, and then not do the job. An interview will never tell you with 100% accuracy whether you'll get poor job performance.
I don't think you are getting my point. You can totally ace an interview and then slack off. That's it, that's my point. Not the opposite, not something else, just that.
Ok. I see. This is theoretically possible. But in practice, I haven't seen it. That's not something I really care about spending effort filtering for in an interview.
>As an interviewer, if a candidate can use chatGPT to give a better answer than other candidates, I'm not gonna mark them down for use of chatGPT.
I think that makes you an incompetent interviewer, unless your questions are too hard for ChatGPT. In any case, solving the question without ChatGPT is more impressive than using it. Just like most other tools, like search engines or IDEs.
Would you also say that, "as an interviewer, if a candidate can use their buddy to give a better answer than other candidates, I'm not going to mark them down for using their buddy"?
Even if you don't mind that situation, shouldn't you get the buddy's contact information and offer them the job?
That's not a great analogy: your buddy won't be there to do the job with you, whereas some interviewers are OK with you using GenAI on the job daily, and may even expect it. It depends on the interviewer and the job expectations.
A better analogy is an interview where you can use a calculator (and not be detected). If the interviewer were only to ask you simple arithmetic questions with numeric answers then sure you'd seem to do well. So interviewers adjust to not doing that.
Sure, and also split the dental and other benefits, vacations, and share one building fob, parking pass, cubicle and computer. Also, split the food at the company dinner. :)
To use a slightly more extreme example .. if you were hiring someone to maintain a nuclear power plant, and when you asked them a question about what actions to take to avoid a meltdown, and they had to ask ChatGPT to figure it out, would you really be OK with hiring that person to maintain your nuclear plant? When they don't actually have the knowledge they need to succeed, but instead have to rely on external tools to decide things? If they need to ask ChatGPT for the answer, how do they know if the answer is right? You really think that person, who relies on tools, is just as good of a hire as someone that fully internally knows what they need to know?
Yeah, hiring someone to code a website isn't the same as maintaining a nuclear plant, but it's the same concept of someone that knows their craft vs. someone that needs to rely on tools. There's a major difference in my mind.
I hope your statement is hyperbolic, because we're all doomed if you expect one person to know how to operate a nuclear power plant. Normally, you're testing whether they can follow operational procedures that were created by the people who designed the power plant in the first place.
Similarly, it is unreasonable, bordering on negligent, to assume a person has a skill set unique to your situation.
If the job at your nuclear power plant were so simple you only needed the employee to follow operational procedures, then you'd be better off scripting it instead, or training a monkey.
Consider e.g. being a pilot, or a surgeon - two other occupations known for their extensive use of operational procedures today. People in those jobs are not being hired for their ability to stick to a checklist, but rather for their ability to understand reasons behind it, and function without it. I.e. the procedures are an important operational aid, not the driver.
Contrast with stereotypical bureaucrats who only follow procedures and get confused if asked something not covered by them.
Now, IMHO, the problem here is that, if you're hiring someone who relies on an LLM to function, you're effectively employing that LLM, with its limitations and patterns of behavior. As an employer, you're entitled to at least being made aware of that, as it's you who bears responsibility and liability for fuckups of your hires.
Like a university diploma is a signal of being able to learn or at least comply, use of a chatbot is a signal of not bothering enough to learn or comply.
I can see how an applicant who cheats the interview with a chatbot would later not bother to internalize the operating instructions for the job.
I’d like to believe the common line that ChatGPT is “just a tool” and that it can actually be used to learn/comply just as much as a university degree can be obtained by mere compliance or demonstration of learning (or merely giving the appearance of such).
My experience with ChatGPT ranges from “it’s really good for rapidly getting a bearing on a certain topic” to “it’s a woeful substitute for independently developing a nuanced understanding of a given topic.” It tends to do an OK job with programming and a very poor job with critical theory.
> a university degree can be obtained by mere compliance or demonstration of learning
Exactly. It “only” shows you're able & willing to at least understand the requirements, internalize them well enough, and comply with them. It shows your capability of understanding & working together with other humans.
Which is key.
In my impression, the knowledge you receive at uni is almost never really pertinent to any actual job, and anyone can have a PhD-level understanding of a subject without having finished high school.
It is the capability of understanding and working in a system that matters.
Similarly with a chatbot. Using it to game interviews in ways described does not mean candidate is stupid, or something like that. It is, though, a negative signal of one’s willingness and intrinsic motivation to do things like internalizing job responsibilities & procedures, or just simply behave in good faith.
Mental capacity to do mundane things is often important when it comes to, say, maintaining a nuclear reactor.
> just a tool
> it’s really good for rapidly getting a bearing with a certain topic
Perhaps. Personally I prefer using Google, so that I at least know who wrote what and why rather than completely outsourcing this to an anonymous team of data engineers at ClosedAI or whatnot, but if it is efficient to get some knowledge then why not?
It’s when you use it to blatantly cheat and have it do the key part for you that it becomes questionable.
ChatGPT, like all transformer language models, depends on how well you prime the model: it can only predict the next series of tokens over a finite probability space (the dimensions it was trained on), so it is up to you as the prompt creator to prime the model so it can be used as a foundation for further reasoning.
Normally, people who get bad results from it would get similar results if they asked a domain expert. Similarly, different knowledge domains use different corpora of text for their core axioms/premises, so if you don't know the domain area or its keywords, you're not going to be able to prime the model to get anything meaningful from it.
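To make that concrete: priming is really just front-loading domain context before the actual question. A rough sketch, assuming the current OpenAI Python client (openai >= 1.0); the model name, prompts, and domain are placeholders I made up:

```python
# Rough sketch of "priming": front-load domain context before the question.
# Assumes the OpenAI Python client (openai >= 1.0); model and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        # The system message carries the domain keywords that steer what the model predicts next.
        {"role": "system", "content": (
            "You are a senior distributed-systems engineer. "
            "Answer in terms of consensus, quorums, and failure modes."
        )},
        {"role": "user", "content": "Why might leader election flap during a network partition?"},
    ],
)
print(response.choices[0].message.content)
```

If you don't know the domain vocabulary to put in that system message, you can't steer the model toward anything useful.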
in terms of tools, I absolutely want the nuclear power plant engineer to use a wrench and pliers and tongs and a forklift and a machine while wearing a lead-lined safety suit, instead of wandering over to the reactor in a t-shirt to pull out the control rods with their bare hands. You could be Edward Teller and know everything there is to know about nuclear physics, but you're not getting anywhere without tools.
to your point though, a person needs both. all of one and none of the other is useless. You don't want someone who doesn't know what they're doing to play around disabling safety systems so you don't get Chernobyl, but for the everyday crud website you can just hire the coding monkey at a reduced cost.
That's like being okay with a candidate Googling the answer during an interview. Not unheard of, but unusual. It seems hard to test someone's knowledge that way.
At my company we tell people that they should feel free to google or consult references during practical coding challenges.
> It seems hard to test someone's knowledge that way.
I don’t really want to test knowledge but skill. Can you do the thing? At work you will have access to these references so why not during the interview?
Now that doesn’t mean that we are not taking note when you go searching and what you go searching for.
If you told us that you spent the last 8 years of your life working with python and you totally blank on the syntax of how to write a class that is suspicious. If you don’t remember the argument order of some obscure method? Who cares. If you worked in so many languages that you don’t remember if the Lock class in this particular one is reentrant or not and have to look it up? You might even get “bonus points” for saying something like that because it demonstrates a broad interest and attention to detail. (Assuming that using a Lock is reasonable in the situation and so on of course :))
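To stick with the Lock example, since it's exactly the kind of detail that's cheap to look up and easy to forget across languages: in Python, for instance, threading.Lock is not reentrant while threading.RLock is. A minimal sketch:

```python
import threading

lock = threading.Lock()
rlock = threading.RLock()

def uses_rlock():
    # RLock is reentrant: the same thread can acquire it again without deadlocking.
    with rlock:
        with rlock:
            return "fine"

def uses_lock():
    # Lock is not reentrant: acquiring it a second time from the same thread blocks forever.
    with lock:
        # with lock:  # this nested acquire would deadlock
        return "also fine, as long as you don't nest it"
```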
I do want to understand their knowledge. I'll preface questions with the disclaimer that I am not looking for the book definition of a concept, but to understand if the candidate understands the topic and to what depth. I'll often tell them that if they dont know, just say so. I'll start with a simple question and keep digging deeper until either they bottom out or I do.
I'm okay with them googling too. And I tell them that at the start. But if they take ages to look up the answer when others just know it, it's gonna hurt their chances.
Sure, they can search it live but you have to assess if they understand what they found. Usually, if they really know their stuff, whatever they find is just gently pushing their working memory to connect the dots and give a decent answer. Otherwise it's pretty easy to ask a follow up question and see a candidate struggle.
It's like in college when you're allowed to take textbooks to an exam. You can bet the professor spent more time crafting questions that you can't answer blindly.
That being said, I think both types of questions have their place in an interview process. You can start with the no searching allowed questions in the beginning to assess real basic knowledge and, once you determine the candidate has some knowledge, you start probing more to see if they can connect the dots, maybe it's architecture decisions and their consequences, maybe it's an unexpected requirement and how they would react, etc.
The knowledge we're testing is related to how well you can do your job. Work isn't closed book - if you can quickly formulate a good query to grab any missing information off the internet then more power to you. I've worked with extremely stubborn people who were very smart and would spend a week trying to sort out a problem before googling it, there are some limited situations (highly experimental work) where this is valuable but... I no longer work with these people.
I remember the days when Greybeards would look down on me for using Google in my first IT job, they would harp on about how real Sysadmins use man pages and O’Reilly books to solve problems, and if you tried to Google something you were incompetent. I had college professors that told me you can’t use the Internet for research because the Internet is a not a legitimate source of information, only libraries can have real information.
What happened to all those folks? They retired, and turned into Boomers who are now unable to function in society at a basic level and do things like online banking or operate a smartphone.
On the other hand, they knew how their hardware worked. And if LLMs keep improving, we're going to reach the last generation that knew how software worked.
We’re pretty close. I’m not sure that 51% of the people I work with understand what DNS is, what a call stack is, what the difference between inheritance and polymorphism is, or what a mutex is
When I'm retired, sitting on the beach with my beer and a good book, please don't come bothering me that your smartphone banking and GPT arse-wiping assistant has gone berserk.
It will take 3 to 6 months to determine that a new hire is incompetent, especially if you're required to document their incompetence before firing them.
I've never had a job without a probation period where you can let someone go without cause within the first 90 days with nothing more than two weeks pay in lieu of notice. It definitely doesn't take 6 months to identify someone who only got their job because they used AI in the interview.
> I’m not sure why anyone would want a job they clearly aren’t qualified for.
Well, I suck at interviewing and/or leetcode questions, but have so far done perfectly fine in any actual position.
I can totally see how you’d resort to ChatGPT to give the interviewers their desired robotic answers after 3 months of failing to pass an interview the conventional way.
> give the interviewers their desired robotic answers
As someone who has interviewed a lot of people – robotic answers are specifically not what I (we?) look for. The difference between hands-on experience and book knowledge is exactly what we're trying to tease out.
It's very obvious when someone is reciting answers from a book or google or youtube or whatever vs. when they have actually done the thing before.
For the record: ChatGPT is very good and the answers it gives are exactly the kind of answers that people with book knowledge would give. High level, directionally correct, soft on specifics.
I mostly interview seniors, you obviously wouldn't expect experience from an entry-level candidate. Those interviews are different.
I understand that you have no control over who you're interviewing with but... if you're a good fit and the interviewer leaves thinking you're a terrible fit, that's a sign of a bad interviewer. Obviously there are non-proficiency things you can do to skew that perception (bad hygiene, late, obviously disinterested), but a good interviewer (especially one used to working with developers) should be good at getting past all the social awkwardness to evaluate your problem solving.
And yes, most large companies have terrible interviewers.
I refuse to believe that all the interviewers I had over the course of 6 months were all terrible. It must be something about the process that is pathologically broken (especially when getting hired at larger companies)
I mean... if the interview process is even a little broken then doesn't that mean that over time worse and worse interviewers will get hired, making for worse and worse interviews meaning that worse and worse interviewers get hired...
"Algorithms are taking over much of the human work of hiring humans. And, unless they are programmed to seek out currently undervalued and difficult-to-track factors, they may tend to find that the more robot-like a human is the best she or he will be at doing most jobs. So, it could be that the robots are most likely to hire the most robotic humans."
I find the whole gamified system to be bizarre and disheartening no matter which side of the table you're on.
To me, looking at modern tech interviewing is like comparing the gold standard OCEAN and the emergent HEXACO in personality surveys. Take the former on a bad day and it may leave the test taker feeling bad about themselves. The latter, much kinder and gentler in messaging around strengths and weaknesses.
That "by design" quality strikes me as missing from the entire tech interview system. If it weren't broken, this would not be a 7-year conversation updated yesterday:
> I’m not sure why anyone would want a job they clearly aren’t qualified for.
Money, obviously.
Software jobs in particular are magic in this way - the pay is way above the average, and performance metrics are so poorly defined that one can coast for months doing nothing before anyone starts suspecting anything. Years, even, in a large company, if one's lucky. 80% of the trick is landing the first gig, 15% is lasting long enough to be able to use it as a foundation of your CV, and then 5% is to keep sailing on.
No, really. There's nothing surprising about unqualified people applying for software companies. If one's fine with freeloading, then I can't think of easier money.
(And to be fair, I'd say it's 10% of freeloaders, 10% of hard workers, and in between, there's a whole spectrum of varying skills and time and mental makeups, the lower half of that is kind of unqualified but not really dishonest.)
Just because I can’t recite rabin-karp off the top of my head or some suffix tree with LCA shit for some leetcode question about palindromes doesn’t mean I’m unqualified to do the work of an engineer.
I’ve gone public, been acquired by Google, and scaled solutions to tens of millions of users. I’m probably overqualified for your CRUD app.
Consider a situation where you’re applying for a job that you’re 50% qualified for and then using chatgpt to cheat on the interview. Would be much more difficult to catch is my guess.
If you slide from 50% to 99%, how do people feel about using ChatGPT? Which is more honest? Many people here were hired when they were less than 100% qualified, and did very well in their new role. It has happened to me more than once.
>I’m not sure why anyone would want a job they clearly aren’t qualified for.
Easy. They have nothing to lose because the jobs they are qualified for don't even pay enough to survive. You probably could have figured this out yourself.