Hacker News | josephg's comments

I have no doubt plenty of smart engineers at tech companies would rather reinvent the wheel than read a book on theatre. But if anyone’s interested, there are plenty of great books talking about the philosophy of comedy, and why some things work on stage and some don’t. I highly recommend Keith Johnstone’s “Impro”. He’s the guy who invented modern improv comedy and Theatresports.

He says things are funny if they’re obvious. But not just any obvious. They have to be something in the cloud of expectation of the audience. Like, something they kinda already thought but hadn’t named. If you have a scene where someone’s talking to a frog about love, it’s not funny for the talking frog to suddenly go to space. But it might be funny to ask the frog why it can talk. Or ask about gossip in the royal palace. Or say “if you’re such a catch, how’d you end up as a frog?”.

If good comedy is obvious, you’d think LLMs would be good at it. Honestly I think LLMs fall down by not being specific enough in detail. They don’t have ideas and commit to them. They’re too bland. Maybe their obvious just isn’t the same as ours.


> Maybe their obvious just isn’t the same as ours.

Or maybe they're just stochastic parrots and are devoid of intelligence, a necessity to make other intelligent beings laugh with novel jokes ;)


> devoid of intelligence, a necessity to make other intelligent beings laugh with novel jokes ;)

I don't think you need to be that smart to make people laugh. Study actual comedy. Keith Johnstone is right. The funniest things are almost never the cleverest things. We have this idea of jokes in the west as being the height of intelligence. But I think the genius of great improvisers and comedians isn't in their cleverness. It's how they're more in tune with their inner "stochastic parrot" than the rest of us.

Billy Connolly used to play banjo on stage before he became a standup comedian. In one show, he'd just come on. He's in a crowded auditorium, just about to play his first song. He strums the very first note on his banjo and the string snapped! There's silence in the room. Billy looks down at the banjo. He thinks for a minute. And then he looks up at the audience and says "Well that's just gone and F-ed it, hasn't it?". And the crowd erupted in laughter.

I don't know if it translates in text, but that story gets a laugh even in the retelling. I had a couple people ask if I thought it was planned. (As if!)

Why is that funny?

It's certainly not funny because that was a clever line. Or because he's highly intelligent. He just said the obvious thing! And he said it slowly enough that we could watch him think in real time. If you watch basically any comedy, you'll find almost everything that gets a laugh is the same.

I don't think that kind of humour is beyond the grasp of chatgpt. Far from it. With the right training data, I think LLMs could be better comedians than almost anyone. Most people are far too nervous and clever to let themselves react honestly and obviously. That's why most people aren't as good as Billy Connolly.


I landed right in the middle: (-1, -2). Which seems weird because I’m very opinionated about a lot of this stuff. I like a lot of the questions but a lot of my answers felt like I was arbitrarily picking something. That’s probably why.

Eg, for testing, do I want “whatever finds bugs most effectively” or “property based testing”? Well, property-based testing is usually the most time-efficient way to find bugs. So, yes, both of those. Debugging: do I use print statements, or a debugger, or logically think it through? Yes, all of those. But if I arbitrarily said I use a debugger in a multiple choice test, I don’t think that tells you much about how I code!
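(For the curious: by “property based testing” I mean something like the sketch below - Rust with the proptest crate, where my_sort is a hypothetical stand-in for whatever you’re actually testing. You state properties the output must always satisfy and the framework hunts for counterexamples:)

    use proptest::prelude::*;

    // Hypothetical function under test - stands in for your real code.
    fn my_sort(mut v: Vec<i32>) -> Vec<i32> {
        v.sort();
        v
    }

    proptest! {
        #[test]
        fn sort_returns_a_sorted_permutation(input in any::<Vec<i32>>()) {
            let out = my_sort(input.clone());
            // Property 1: output is in non-decreasing order.
            prop_assert!(out.windows(2).all(|w| w[0] <= w[1]));
            // Property 2: output is a permutation of the input.
            let mut sorted_input = input;
            sorted_input.sort();
            prop_assert_eq!(sorted_input, out);
        }
    }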

I do - controversially - think some of the answers are naming bad practices. Like, abstraction first is a bad idea - since you know the least about your problem before you start programming it up. Abstraction first inevitably bakes whatever bad assumptions you walked in the door with into your code. Better to code something - anything - up and use what you learned in the process to iterate on your software architecture decisions. I also generally hate modern OO and these days I prefer static types over dynamic types.

But yeah. Interesting questions! Thanks for putting this together.


Same. I am dead center but this did not really give me any hard questions. For example I controversially believe that user applications do not benefit from unit testing and that manual testing is both faster and superior in quality. Similarly, I believe that for most situations Python’s optional type system is a waste of time and mental load. It is not what catches bugs.

I think both are appropriate for well-scoped library code. But application code is just not well defined enough in most circumstances to get any benefit from it. But this quiz didn’t ask that and I suspect this would swing the score quite strongly.


I forget who said it, but "I don't truly understand a program until the 6th time I've written it."

That's such a good quote. I can't find it anywhere, so I'll attribute it to you.

I can't find it, but I believe Joe Armstrong said something along those lines (but I think his number was ten).

Yeah, I can't find it either, but I'm fairly sure it was him.

Sounds like something Chuck Moore would have said. I have no idea if he did, but it made me think of him.

Same. I got dead centre, even though I feel like I have strong biases, and rarely agree with my coworkers on design and style choices.

Maybe your preferences are so contradictory that they cancel each other out :)

I got very close to centre also, just slightly on the "concrete" and "human friendly" sides. But who wouldn't want to be concrete or human-friendly?


I likewise got very close to the centre, and was surprised.

If you had shown me the diagram only, and asked me to position myself on it I would have placed myself on the middle of the perimeter of the second quadrant (135 degrees along the circumference), to indicate that I strongly prefer human friendly and concrete over computer friendly and abstract respectively.

And even as I was answering the questions I felt that I was leaning heavily towards that, with answers like starting simple, documenting well and so on.

I think some of the pull in the opposite direction comes down to interpretation as well.

And actually I see in the repo for the quiz there is a JSON file that contains scores for each question that one could have a look at to see if the answers are scored the same way that you think they would be.

For people who haven’t done the quiz yet, don’t look at the json file until after taking the quiz.

https://github.com/treeform/devcompas/blob/master/questions....


Also, the ranges of possible values are not equal in each direction so the resulting compass is biased a bit in favour of abstract and human friendly over concrete and machine friendly respectively.

abstract: min=-25, max=38

human: min=-27, max=33

Which means that the circle diagram showing the result can give a bit of a wrong impression imo.

Edit to add: In a frequency plot you can also see specifically how the possible score additions and subtractions are a bit unevenly distributed:

    Abstract frequencies (ASCII bar chart)
     -2: ##########                               6
     -1: ##############################           18
      0: ######################################## 24
      1: ####################                     12
      2: #################################        20
         ---------+---------+---------+---------+
         (max = 24)
    
    Human frequencies (ASCII bar chart)
     -2: ##################                       11
     -1: #####################                    13
      0: ######################################## 25
      1: ###########################              17
      2: ######################                   14
         ---------+---------+---------+---------+
         (max = 25)
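(If you want to check my numbers: a rough sketch of how you could compute the ranges and frequencies from that JSON, in Rust with serde_json. The field names here are guesses - adjust them to match the actual questions.json.)

    use std::collections::BTreeMap;

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        let data = std::fs::read_to_string("questions.json")?;
        let questions: serde_json::Value = serde_json::from_str(&data)?;

        let (mut min, mut max) = (0i64, 0i64);
        let mut freq: BTreeMap<i64, u32> = BTreeMap::new();

        for q in questions.as_array().unwrap() {
            // Guessed structure: each question has answers, and each answer
            // carries a per-axis score. Swap "abstract" for "human" as needed.
            let scores: Vec<i64> = q["answers"].as_array().unwrap()
                .iter()
                .map(|a| a["abstract"].as_i64().unwrap_or(0))
                .collect();
            // The extremes you can reach on a question are its min/max answer.
            min += scores.iter().min().copied().unwrap_or(0);
            max += scores.iter().max().copied().unwrap_or(0);
            for s in scores {
                *freq.entry(s).or_insert(0) += 1;
            }
        }
        println!("abstract: min={min}, max={max}, freq={freq:?}");
        Ok(())
    }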

Ok. But then I took it 4 more times. I tried to maximize in each direction, and always stayed within the bullseye, center-right. One time I was more computer, but I never made it out of the center 20% radius.

Maybe the message is that none of us are extremists, because we care? I like it


+1 abstract and 0 neutral.

I thought the imperative vs object oriented question was strange, since they are the same thing.


Mmm, I don’t think I agree. The way I structure code in C or Rust is subtly different from how I’d write the same program in Java. OO Python or Ruby looks different from data oriented Python or Ruby.

They’re all imperative programs though. “OO vs Imperative” isn’t the right name for that design choice.


This is an excellent talk explaining how procedural (imperative) programming came to be OO - objects being abstractions over procedural statements:

https://www.youtube.com/watch?v=mrY6xrWp3Gs

To summarize: Block scope is procedural. Wrapping that up in other abstractions like objects, which are fancy closures, eventually ends up in OO.
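(The “objects are fancy closures” point is easy to show in code. A minimal sketch - Rust here, but it looks the same in any language with closures: a closure capturing mutable state is an object with one method and one private field.)

    // make_counter returns a closure that owns its captured state.
    fn make_counter() -> impl FnMut() -> u32 {
        let mut count = 0; // the private "field"
        move || {
            count += 1;
            count
        }
    }

    fn main() {
        let mut counter = make_counter();
        println!("{}", counter()); // 1
        println!("{}", counter()); // 2 - state persists between calls
    }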

When talking about OO, people often conflate the data design with the program abstractions. Ofc you program differently in languages that have dedicated data structures and idioms around domain driven design^1, which take advantage of imperative execution within those mechanisms as well.

eg Java Spring - Spring has a lifecycle. It's imperative. Beans and other annotations have a lifecycle. It's also imperative.

^1 Domain driven design is still the norm and any efficiency observed from circumventing (or ignoring) the design is considered novel, which should inform the industry that there might be a better way. Good enough wins out again.


The way I was taught, "imperative" encompasses both "object-oriented" and "procedural", much like "declarative", the opposite of imperative, captures both functional and logic programming.

It’s in the word. “Imperative” means something like “urgent demand to action”. Imperative code is code where each line is a command for the computer to do something. Like functions in C - which are made of a list of statements that get executed immediately when they get visited.

C++ and Java are imperative languages because functions are expressed as a list of imperative statements. But there’s nothing inherently imperative about structs and classes. Or any of the concepts of OO. You could have encapsulation, inheritance and polymorphism in a functional language just fine if you wanted to. Haskell fits the bill already - well, depending on your definition of OO.
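(To make that concrete, here’s a sketch of polymorphism and encapsulation with no classes at all - just records of functions, which is all a vtable is underneath. Rust syntax for illustration, but the shape is the same in Haskell or OCaml:)

    // A "shape" is just a record holding behaviour, not a class.
    struct Shape {
        name: &'static str,
        area: Box<dyn Fn() -> f64>,
    }

    fn circle(r: f64) -> Shape {
        // r is encapsulated in the closure - nothing can read it directly.
        Shape { name: "circle", area: Box::new(move || std::f64::consts::PI * r * r) }
    }

    fn square(side: f64) -> Shape {
        Shape { name: "square", area: Box::new(move || side * side) }
    }

    fn main() {
        // Polymorphic dispatch, no inheritance hierarchy in sight.
        for s in [circle(1.0), square(2.0)] {
            println!("{}: {:.2}", s.name, (s.area)());
        }
    }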


Yes! The number of lousy articles and blog posts I've seen that talk about "imperative, oo and functional programming"...

> So I think "no gc but memory safe" is what got people to look at Rust, but it's 1990s ML (ADTs, pattern matching, etc.) that keeps them there.

Yeah; this is my experience. I've been working in C professionally lately after writing Rust full-time for a few years. I don't really miss the borrow checker. But I really miss ADTs (eg Result<>, Option, etc), generic containers (Vec<T>), tuples, match expressions and the tooling (Cargo).

You can work around a lot of these problems in C with grit and frustration, but Rust just gives you good answers out of the box.
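(A small sketch of what I mean - a hand-rolled ADT plus exhaustive matching. In C this would be a tagged union plus the hope that every caller remembers to check the tag; in Rust the compiler enforces it. Lookup/find here are made-up names for illustration.)

    enum Lookup {
        Found(i32),
        Missing,
    }

    fn find(haystack: &[i32], needle: i32) -> Lookup {
        match haystack.iter().find(|&&x| x == needle) {
            Some(&x) => Lookup::Found(x),
            None => Lookup::Missing,
        }
    }

    fn main() {
        // match forces us to handle both cases - there's no forgetting the tag.
        match find(&[1, 2, 3], 2) {
            Lookup::Found(x) => println!("found {x}"),
            Lookup::Missing => println!("not found"),
        }
    }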


I’ve had the same experience with old rust packages. But nothing quite so old - at least not yet!

Of course, lots of people are employed despite giant holes in their knowledge of CS fundamentals. There’s more to being an effective developer than having good fundamentals. A lot more.

But there’s still a lot of very important concepts in CS that people should learn. Concepts like performance engineering, security analysis, reliability, data structures and algorithms. And enough knowledge of how the layers below your program work that you can understand how your program runs and write code which lives in harmony with the system.

This knowledge is way more useful than a lot of people claim. Especially in an era of chatgpt.

If you’re weak on this stuff, you can easily be a liability to your team. If your whole team is weak on this stuff, you’ll collectively write terrible software.


Also, what people fail to realize is that the whiteboard coding interview was never about testing skills that are necessary for your day to day work.

Most fighter pilots don't fly missions that require superhuman reaction time or enduring 9.5g acceleration either.

Whiteboard coding exercises are just a proxy for certain thinking skills, a kind of je ne sais quoi that successful engineers tend to have.


Alternatively, whiteboard coding exercises are a hazing mechanism to weed out candidates who have certain forms of anxiety or can't perform well under extreme scrutiny and time pressure. Which could be valid selection criteria for certain jobs. But let's be honest and admit that whiteboard coding exercises aren't actually a proxy for anything else, or at least we have no scientific evidence on that point.

First, whiteboard interviews were a great selection criterion for about 30 seconds, before it became public knowledge that being able to pass them was a ticket to a Google salary. Subsequently, they functioned at all while under the pressure of literally billions of people knowing that they were the ticket to a Google salary. A criterion surviving this second challenge is extremely impressive.

To put it another way: I can hire based on open source contributions instead of credentials and interview performance. If Google decided tomorrow to start hiring based on open source contributions, then their new criteria would leak on Monday, and on Tuesday the pull request queues of every major project would simultaneously splatter like bugs on windshields.


> But let's be honest and admit that whiteboard coding exercises aren't actually a proxy for anything else, or at least we have no scientific evidence on that point.

Nah. Whiteboard interviews test a bunch of traits that are important in a job. They aren't designed to be a baroque hazing ritual.

More generally, we could make a list of desirable / necessary qualities in a good hire based on what they'll spend their time doing. Imagine you're hiring someone to work in a team writing a web app. Their job will involve writing javascript & CSS in a large project. So they need to write code, and read and debug code written by their coworkers. They will need to present their work regularly. And attend meetings. The resulting website needs to be fast, easy to use and reliable.

From that, we can brainstorm a list of skills a good applicant should have:

- Programming skills. JS + CSS specifically. Also reading & debugging skills.

- Communication skills. (Meetings, easy to work with, can explain & discuss ideas with coworkers, etc).

- Understanding of performance, UX concepts, software reliability, etc

- Knowledge of how web browsers work

- Capacity to learn & solve unexpected problems

And so on.

Now, an idealised interview process would assess a candidate on each of these qualities. Then rank candidates using some weighted score across all areas based on how important those qualities are. But that would take an insane amount of time. The ideal assessment would assess all of this stuff efficiently. So you want to somehow use a small number of tasks to assess everything on that big list.

Ideally, that's what whiteboard interviews are trying to do. They assess - all at once - problem solving skills, capacity for learning, communication skills and ideally CS fundamentals. That's pretty good as far as single-task interviews go!

> we have no scientific evidence

There's a mountain of evidence. Almost all of it proprietary, and kept under lock and key by various large companies. The data I've seen shows success at whiteboard interviews is a positive signal in a candidate. Skill at whiteboard interviews is positively correlated with skill in other areas - but it's not a perfect correlation. The problem really isn't whiteboard interviews. It's that people think whiteboard interviews give you enough signal. They don't. They don't tell you how good someone is at programming or debugging. A good interview for a software engineer must assess technical skills as well.

Speaking as someone who's interviewed hundreds of candidates, yes. There are some people who will bomb a whiteboard interview but do well at other technical challenges you give them. But they are nowhere near as common as people on HN like to claim. Most people who are bad at whiteboard interviews are also bad at programming, and I wouldn't hire them anyway.

The reality is, most people who make Homebrew get hired. There's plenty of work in our industry for people who have a track record of doing great work. Stop blaming the process.


> proprietary, and kept under lock and key by various large companies.

I trust that about as much as I trust secret proprietary encryption algorithms.


The new automerge is apparently much faster than it was before. (I haven't run benchmarks though, just been told that by the core developers.)

I'd love some performance benchmarks.


There's a good reason for that. Almost all strings ever created in programs are either very small, immutable or append-only. Eg, text labels in a user interface, body of a downloaded HTTP request or a templated HTML string, respectively. For these use cases, small string optimisations and resizable vecs are better choices. They're simpler and faster for the operations you actually care about.

The only time I've ever wanted ropes is in text editing - either in an editor or in a CRDT library. They're a good choice for text editing because they let users type anywhere in a document. But that comes at a cost: Rope implementations are very complex (skip lists have similar complexity to a b-tree) and they can be quite memory inefficient too, depending on how they're implemented. They're a bad choice for small strings, immutable strings and append only strings - which as I said, are the most common string types.

Ropes are amazing when you need them. But they don't improve the performance of the average string, or the average program.
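(For anyone wondering what a rope actually looks like: at its simplest it's just a binary tree of string chunks, indexed by length. A minimal sketch - real implementations balance the tree and use much bigger leaves:)

    enum Rope {
        Leaf(String),
        Node {
            left: Box<Rope>,
            right: Box<Rope>,
            left_len: usize, // total chars in the left subtree, for indexing
        },
    }

    impl Rope {
        fn char_at(&self, i: usize) -> Option<char> {
            match self {
                Rope::Leaf(s) => s.chars().nth(i),
                Rope::Node { left, right, left_len } => {
                    if i < *left_len {
                        left.char_at(i)
                    } else {
                        right.char_at(i - left_len)
                    }
                }
            }
        }
    }

    fn main() {
        let r = Rope::Node {
            left: Box::new(Rope::Leaf("hello ".into())),
            right: Box::new(Rope::Leaf("world".into())),
            left_len: 6,
        };
        assert_eq!(r.char_at(7), Some('o'));
    }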


Yes. But also, the overwhelming consensus is that complex indirect data structures just don’t end up performing well on modern hardware, due to cache and branch prediction.

Only use them when the theoretical algorithmic properties make them the only tool for the job.


They have their place. Certainly B-tree data-structures are tremendously useful and usually reasonably cache friendly. And if std::deque weren't busted on MSVC, there are times where it would be very useful. Linked lists have their place as well; a classic example would be an LRU cache, which is usually implemented as a hash table interleaved with a doubly linked list.

But yeah. Contiguous dynamic arrays and hash tables, those are usually what you want.


That's not at all true though?

If you have a small dataset, yeah, memcpy will outperform a lot of indirect pointer lookups. But that doesn't stay true once you're memcpying around megabytes of data. The trick with indirect data structures on modern hardware is to tune the size of internal nodes and leaf nodes to make the cache misses worth it. For example, binary trees are insanely inefficient on modern hardware because the internal nodes have a size of 2. If you give them a size of 64 or something, they perform much better. (Ie, make a b-tree). Likewise, a lot of bad tree implementations put just a single item in the leaf nodes. It's much better to have leaves store blocks of 128 items or something. And use memcpy to move data around within the block when needed.

This gets you the best of both worlds.

I spent about 18 months optimising a text based CRDT library (diamond types). We published a paper on it. By default, we store the editing history on disk. When you open a document, we reconstruct the document from scratch from a series of edits. After a while, actually applying the stream of edits to a text document became the largest performance cost. Ropes were hugely useful. There's a stack of optimisations we made there to make that replay another 10x faster or so on top of most rope implementations. Using a linear data structure? Forget it. For nontrivial workloads, you 100% want indirect data structures. But you've gotta tune them for modern hardware.
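(To make the leaf-block idea concrete, here's a rough sketch - sizes illustrative, not tuned. Inserting into a block is a small memmove within one cache-friendly array, instead of chasing a pointer per item:)

    const LEAF_CAPACITY: usize = 128;

    struct Leaf<T> {
        len: usize,
        items: [T; LEAF_CAPACITY],
    }

    impl<T: Copy> Leaf<T> {
        fn insert(&mut self, index: usize, value: T) {
            assert!(self.len < LEAF_CAPACITY && index <= self.len);
            // Shift the tail right by one slot - a memmove within the block.
            self.items.copy_within(index..self.len, index + 1);
            self.items[index] = value;
            self.len += 1;
        }
    }

    fn main() {
        let mut leaf = Leaf { len: 0, items: [0i32; LEAF_CAPACITY] };
        leaf.insert(0, 10);
        leaf.insert(1, 30);
        leaf.insert(1, 20); // shifts 30 right within the block
        assert_eq!(&leaf.items[..leaf.len], &[10, 20, 30]);
    }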


My comment is an observation about how this gets tried every few years in major libraries and is usually reverted. I agree, there are use cases where these are better. But the pattern tends to be to revert to simpler data structures.

There are plenty of great TV shows and movies set in London at least.

It's weird - I know about little American towns like Boulder, Colorado. I've never been there. But I know what it looks like because it's featured - or at least mentioned - in plenty of movies and shows.

But the population of Boulder is just 100k. Australia has lots of way bigger cities - like Brisbane, Queensland (population 2.8 million) or Perth, WA (2.4 million) - that are never depicted on screen. Even on Australian TV, I basically never see Brissie or Perth shown at all. I only know what they look like because I've visited.

But maybe that's normal in the English-speaking world - at least outside the US. We've gotta raise our game and make more good content.


Part of the problem is selling into America - as an American, I can recognize London (smog and Sherlock Holmes!), Paris (Eiffel Tower), Sydney (Seashell Opera House), and New Zealand (Middle Earth).

I can't recognize Brisbane (and visiting it would feel like visiting Bluey).

Producers are SCARED of using unrecognizable areas (and/or for live-action, just film near where everyone is located).

If it makes you feel better, the USA has tons of large cities - far north of 100k, north of 1 million (especially if considering urban areas) - that rarely if ever get featured in TV or movies; and if they do, it's often older ones.

Which is sad, mind you. Every city should have its own feel (too many places now feel like suburbs of Los Angeles, even in Europe or Asia), its own beer, its own food, its own media and music.


I don’t think it’s just unrecognisable places, it’s non-American culture. Australia has made a bunch of really good shows. But it’s often quite Australian. I think it’s hard to break through on a meaningful level.


>I can't recognize Brisbane

Eh, lumping the Gold Coast in with Brisbane is easy enough. Tanned bodies, barrel waves, way more tourists than you expect... it's basically California except it faces east, not west.


Boulder's metro area is around 330k - not quite "small town". That 100k is people inside one of the local government boundaries of the area. The US Census considers 5k to be the upper limit of a small town.


Brisbane is often called a "big country town" by other Australians, and it's 2.8 million people, so don't take that phrase too strongly :)

That said, agreed with the GP - places like Boulder, Pittsburgh, Baltimore, or New Orleans are places that we know about through culture and are internationally recognised, while being much smaller than Australian cities. That's mostly a factor of a huge amount of English-speaking media being from the US.

Australia attempts to counter this through laws requiring a certain quota of Australian content in the media, but that hasn't really worked - and is one of the factors which spawned many Australian reality TV shows.


New Orleans is tiny by global scale and not very large even in the US. It is, however, culturally unique (there is nothing else even close) and strategically insanely important.


Plenty of towns and cities in the UK are also totally unique culturally.

I visited Edinburgh a couple years ago and was blown away by the city and its people. God, everyone I met was so funny and interesting. But it’s almost never depicted on TV, outside of the occasional BBC crime drama or something. And those usually don’t get much air time outside of the UK. Peaky Blinders has done an amazing job telling some of the history of Birmingham. I want more of that! The world is just so big and interesting. Far bigger than Hollywood will ever bother to portray.


Do Americans know what Stoke-on-Trent looks like? Or Derby? U.K. towns of similar size.

You might have heard of Aberdeen I guess. But have you heard of Geelong in Australia?


> Do Americans know what Stoke-on-Trent looks like

American here. Literally the only thing I know about Stoke-on-Trent is that Messi would struggle there on a cold rainy night.


Vibe coding can make it a lot harder to learn programming though, depending on how you use an AI. If you're a beginner and you can't read code very well, you're going to struggle a lot more when you have thousands of lines of the stuff written badly by an AI.


Which means real experience still takes years. But you need to consider the speed at which coding agents improve. Maybe next year they will be more reliable to use without domain experience. Today? You can get a small app or POC without knowing how to code.


> A certificate confirming he did learn the job is enough for companies to employ them.

This is hilariously out of touch with real world hiring.

If you put up a job ad, there are so many people who will apply with all the certifications you can name. And if you ask them to write code, even something quite simple, they will fail utterly and completely.

I've interviewed a bit over 400 people at this point. When I was doing it as a full time job, people only talked to me after they passed a screening test - which already screens out the majority of applicants. Even then, about 3/4 of the people I've interviewed were terrible. So bad they could barely write hello world in the half hour we give them. This is on their own computer, in their favorite programming language. They did not have to talk to me during the process at all. A lot of the people who fail have graduated from decent universities. Some said they had 30 years of professional experience.

I'm sure some of that is due to nerves. But a lot of it is simply because programming is hard, and most people don't have the right kind of brain for it. Lots of people - probably the majority if we're honest - bluster their way through certification programs or degrees. Many of them learn the theory but struggle with the practical skills. Sometimes they gather together in low performing teams at large companies where management doesn't know any better.

If you graduate from a music conservatory, you can probably play an instrument. But that isn't true of most computer science degrees. Lots of people graduate without being any good at programming.

It's also a numbers thing. Good programmers don't stay on the job market for long. Great people will send 1-3 applications and be hired. Or more likely be hired through word of mouth. Bad applicants might send hundreds. As a result, most job applications are written by dedicated people who get turned down over and over again by employers.

There's a reason fizzbuzz has become a cliche in our industry. If you put up a job ad, most people who send in an application won't be skilled enough to program it up.


Fizzbuzz would be completely fine. Companies out there ask devs to solve highly niche cryptic tasks like the knapsack problem for no reason.


Yeah I think the fundamental reason this gets debated so much is that "whiteboard interviewing" encompasses both fizzbuzz, and leetcode dynamic programming nonsense. Some people are saying "fizzbuzz is great!" and others are saying "no you're totally wrong, leetcode is terrible".

Even this article falls into this trap. The first thing he quotes is fizzbuzz-level, but then the research paper he uses to argue against fizzbuzz actually used a much harder leetcode style problem.

IMO fizzbuzz-level problems are totally fine. You can't really argue against them. They filter out tons of people who I wouldn't want to hire, and nobody who should be hired can fail them, even under ridiculous pressure.

It's more debatable when you get to actually difficult algorithm problems but that's another argument.

(Also fizzbuzz itself is a pretty terrible "simple" problem because it feels like there should be an elegant "trick" solution but there actually isn't. Don't actually use fizzbuzz. The example in this article - filter out odd numbers from a list - is totally fine though.)
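(For scale, the article's screener - filtering out odd numbers - is about this much code. A sketch in Rust; anyone who can program at all should produce something like it in a few minutes:)

    // Keep only the even numbers, i.e. filter out the odd ones.
    fn evens(nums: &[i32]) -> Vec<i32> {
        nums.iter().copied().filter(|n| n % 2 == 0).collect()
    }

    fn main() {
        assert_eq!(evens(&[1, 2, 3, 4, 5]), vec![2, 4]);
    }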


Meta phone screens currently involve 2 medium/hard leetcode problems AND 20 minutes of behavioral questions. The inmates (i.e. we programmers) are running the asylum.


Who can pass this except recent college grads?

I wouldn't have any time to prepare for this unless I was laid off unexpectedly; then grinding leetcode would be a full time job for a while.


> who can pass this except recent college grads?

I can. So can a lot of the better candidates I interviewed. Most googlers and fb engineers can pass this stuff too.

There's a much better way to test for senior engineers though, and it’s this: prepare a small program (200 lines or so) with some bugs in it. Write a few test cases which highlight the bugs. Then give the candidate the code and the tests and ask them to fix the buggy code.

Good senior engineers - in my experience - aren’t better than smart young grads at programming. But they’re soooo much better at reading code and debugging it.

I interviewed this insanely good Ruby engineer once. He had the body of a bear, and a bushy, greying beard to match. During the interview he gave me a masterclass in debugging. Before he even looked at our tests, he read the code and named out loud the assumptions he was making about it. Then he started writing his own test suite to validate those assumptions during the interview. He fixed a few of our bugs before he even looked at our tests, in the first 10 minutes of the assessment. And he fixed another bug we didn’t know about. I can’t remember how he did at the programming section of our interview. But I’d hire him in a heartbeat from watching him debug.

The one thing to keep in mind if you do this is that most people will overestimate how much debugging you can do in half an hour or an hour and overcomplicate the program you give candidates. If you make a test like this, make the code simpler than you think and actually calibrate its difficulty with your coworkers.
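(A toy illustration of the shape - the real exercise should be a couple hundred lines, but the idea is the same. A hypothetical function with a planted bug, plus a test that exposes it; the candidate's job is to find and fix it:)

    // Planted bug: mean of an empty slice divides by zero and returns NaN.
    fn mean(xs: &[f64]) -> f64 {
        let mut sum = 0.0;
        for x in xs {
            sum += x;
        }
        sum / xs.len() as f64
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn mean_of_empty_is_zero() {
            assert_eq!(mean(&[]), 0.0); // fails as written: mean returns NaN
        }
    }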


Me. Because, for about 2 weeks during lay-off gardening leave, I did study it.

On one hand this demonstrates that it isn't useful for testing innate skill.

On the other hand, it was the best paid work I've ever done. It was an essential part in me getting a job (not at Meta) that paid multiples better than previous roles.

This work allowed me to place myself in a group of people who can pass the test. Some of the people who are outside this group would make fine employees but don't want to or think to put in this work, for understandable reasons. But all of the lackadaisical applicants will fall outside this group. I get to signal that I'm not one of them.

I spent 4 years in higher education for a non-essential positive signal to employers: 2 weeks is nothing.

In the past I hated Leetcode, now I've begun to think of it as an opportunity. While as an individual my opportunity to shift the consensus on this form of interview is low, my opportunity to benefit from the arbitrage is high. Don't hate the player, etc.


Hah totally. When I was interviewing professionally, we emailed candidates a document describing our interview process. We told them in that document what we’d ask and told them how they could prepare.

Almost nobody bothered to read that document before the interview. And the people who did were usually the better candidates. We joked internally about skipping the entire interview process, and just passing everyone who read our study notes.


I once got asked to write a Levenshtein edit distance calculator for a “15 minute” SRE phone screen


I'm not sure I could write that even with a full day.

Even if I did, the solution would probably end up running in O(n!) time.
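(For what it's worth, the textbook dynamic-programming answer runs in O(n*m), not O(n!) - it's about a dozen lines once you've seen the trick, and brutal to derive cold in 15 minutes. A sketch:)

    fn levenshtein(a: &str, b: &str) -> usize {
        let b_chars: Vec<char> = b.chars().collect();
        // prev[j] = edit distance between the current prefix of a and b[..j].
        let mut prev: Vec<usize> = (0..=b_chars.len()).collect();
        for (i, ca) in a.chars().enumerate() {
            let mut curr = vec![i + 1];
            for (j, &cb) in b_chars.iter().enumerate() {
                let cost = if ca == cb { 0 } else { 1 };
                // Min of substitute/match, delete from a, insert into a.
                curr.push((prev[j] + cost).min(prev[j + 1] + 1).min(curr[j] + 1));
            }
            prev = curr;
        }
        prev[b_chars.len()]
    }

    fn main() {
        assert_eq!(levenshtein("kitten", "sitting"), 3);
    }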

