It's not just you, that does grate on me sometimes as well. I get that he needs to move the show on, but I often feel like the guest is trying to make quite an important point when he interrupts. Still love it, though.
As I've listened to more and more of it, I start to gravitate not to particular topics, but to particular experts. There are many guests who are regulars when topics in their field come up, and the good ones make any topic in that field interesting. For instance, if there's an episode about religious history and Martin Palmer is on it, it's bound to be a banger (listened to one on Augustine's Confessions recently, for instance, and it was great). Same with Ancient Greece and Paul Cartledge and Angie Hobbs. If I'm looking for something to listen to, I just put one of those three into the search field of my podcast player, and I'm never disappointed.
The thing that makes it work (aside from Melvyn's excellent hosting) is that they have an unspoken but fundamental assumption about the audience, which is that the listeners are intelligent. Like, it's ok to have nuance, to dig deep into topics, it's even ok for listeners not to follow every point precisely. But the listeners are smart people that appreciate hearing from people who know what they're talking about.
That's a very rare assumption in modern media, where most mainstream things seem to be aimed at some sort of lowest common denominator.
Imo this works well, especially in the podcast format, because it doesn't feel like they're trying to make every episode for every listener. You dip in on the episodes that catch your curiosity, so if you're listening there's a decent chance you're interested in the topic and paying attention.
I must not be smart, because I can't stand Victoria Coren-Mitchell. As annoying as her husband, but without being hilariously funny at the same time. About as insufferable as her brother too.
A piece of advice I read somewhere early in my career was "a boolean should almost never be an argument to a function". I didn't understand what the problem was at the time, but then years later I started at a company with a large Lua code-base (mostly written by one or two developers) and there were many lines of code that looked like this:
serialize(someObject, true, false, nil, true)
What do those extra arguments do? Who knows; it's impossible to tell without looking at the function definition.
Basically, what had happened was that the developer had written a function ("serialize()", in this example) and then later discovered that they wanted slightly different behaviour in some cases (maybe pretty-printing or something). Since Lua allows you to change the arity of a function without changing call-sites (missing arguments are just nil), they had just added a flag as an argument. And then another flag. And then another.
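To make that concrete, the signature had ended up as something roughly like this (the parameter names here are invented for illustration; the real ones were only discoverable by reading the function definition):

    -- hypothetical reconstruction, names made up for illustration
    function serialize(obj, prettyPrint, sortKeys, indent, skipFunctions)
        -- ...
    end

    -- which is exactly what makes a call like this unreadable:
    serialize(someObject, true, false, nil, true)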
I now believe very strongly that you should virtually never have a boolean as an argument to a function. There are exceptions, but not many.
But this isn't really a boolean problem - even in your example there is another mystery argument: nil
And you can get the same problem with any argument type. What do the arguments in
copy(objectA, objectB, "")
mean?
In general, you're going to need some kind of way to communicate the purpose - named parameters, IDE autocomplete, whatever - and once you have that then booleans are not worse than any other type.
You're correct in principle, but I'm saying that "in practice", boolean arguments are usually feature flags that change the behavior of the function in some way rather than being pure values. And that can be really problematic, not least for testing, where you're no longer testing a single function; you're testing a combinatorial explosion's worth of functions with different feature flags.
Basically, if you have a function that takes a boolean in your API, just have two functions instead with descriptive names.
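Something like this, for the serialize() example (serializePretty and doSerialize are names I'm making up here; the flag only survives inside one private helper):

    -- Instead of serialize(obj, true), split the flag into two entry points.
    function serialize(obj)
        return doSerialize(obj, false)   -- shared internal implementation
    end

    function serializePretty(obj)
        return doSerialize(obj, true)
    end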
Tons of well-written functions have many more potential code paths than that. And they're easy to reason about because the parameters don't interact much.
Just think of plotting libraries with a ton of optional parameters for showing/hiding axes, ticks, labels, gridlines, legend, etc.
The latter is how you should use such a function if you can't change it (and if your language allows it).
If this was my function I would probably make the parameters attributes of a TurboEncabulator class and add some setter methods that can be chained, e.g. Rust-style:
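Roughly this shape - sketched here in Lua to match the rest of the thread rather than actual Rust, with TurboEncabulator and the setter names made up:

    local TurboEncabulator = {}
    TurboEncabulator.__index = TurboEncabulator

    function TurboEncabulator.new()
        return setmetatable({ pretty = false, sortKeys = false }, TurboEncabulator)
    end

    function TurboEncabulator:withPretty(value)
        self.pretty = value
        return self          -- return self so calls can be chained
    end

    function TurboEncabulator:withSortKeys(value)
        self.sortKeys = value
        return self
    end

    function TurboEncabulator:serialize(obj)
        -- ... use self.pretty and self.sortKeys here ...
    end

    -- usage:
    TurboEncabulator.new():withPretty(true):withSortKeys(false):serialize(someObject)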
Hopefully you could refactor it automatically into 1024 functions and then find out that 1009 of them are never called in the project, so you can remove them.
True, but I think it's worth noting that inferring what a parameter could be is much easier if it's something other than a boolean.
You could of course store the boolean in a variable and have the variable name speak for its meaning, but at that point you might as well just use an enum and do it properly (roughly as sketched below).
For things like strings you either have a variable name - ideally a descriptive one - or a string literal, which still carries much more information than a bare true or false.
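For example, Lua has no enum type, but a table of named constants gets you the same readability (the Format table and the serialize call are invented for illustration):

    local Format = { COMPACT = "compact", PRETTY = "pretty" }

    serialize(someObject, Format.PRETTY)   -- readable at the call site
    -- versus: serialize(someObject, true) -- true... what?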
Named arguments are a solution to precisely this issue. Combined with optional arguments that have default values, you get to do exactly what was being done in your Lua code, but with self-documenting code.
I personally believe very strongly that people shouldn’t use programming languages lacking basic functionalities.
Named arguments don't stop the deeper problem, which is that N booleans have 2^N possible states. As N increases it's rare for all those combinations to be valid. Just figuring out the truth table might be challenging enough, then there's the question of whether the caller or callee is responsible for enforcing it. And either way you have to document and test it.
Enums are better because you can carve out precisely the state space you want and no more.
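A toy example (names invented): say sorted output only makes sense when pretty-printing. Two booleans give you four combinations, one of them nonsense; an enum of the three valid modes removes the illegal state entirely:

    local Mode = {
        COMPACT       = "compact",        -- not pretty, not sorted
        PRETTY        = "pretty",         -- pretty, not sorted
        PRETTY_SORTED = "pretty_sorted",  -- pretty and sorted
    }

    serialize(someObject, Mode.PRETTY_SORTED)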
That's not a problem per se. It may very well be that you're configuring the behavior of something with a bunch of totally independent on/off switches. Replacing n booleans with an enum with 2^n values is just as wrong as replacing a 5-valued enum with 3 booleans that cannot be validly set independently.
Lua doesn't directly support keyword arguments, but you can simulate it using tables:
serialize(someObject, { prettyPrint = true })
And indeed that is a big improvement (and commonly done), but it doesn't solve all problems. Say you have X flags: then there are 2^X different configurations you have to check and test and so forth. In reality, not all 2^X configurations will be used, only a tiny fraction will be. In addition, some configurations will simply not be legal (e.g. if flag A is true, then flag B must be as well), and then you have a "make illegal states unrepresentable" situation.
If the tiny fraction is small enough, just write different functions for it ("serialize()" and "prettyPrint()"). If it's not feasible to do it, have a good long think about the API design and if you can refactor it nicely. If the number of combinations is enormous, something like the "builder pattern" is probably a good idea.
It's a hard problem to solve, because there are all sorts of programming principles in tension here ("don't repeat yourself", "make illegal states unrepresentable", "feature flags are bad"), all getting in the way of solving a practical problem. It's interesting to study how popular libraries do this. libcurl is a good example: it has a GAZILLION options for how to do a request, and you do it "statefully" by setting options [1]. libcairo for drawing vector graphics is another interesting example, where you really do have a combinatorial explosion of different shapes, strokes, caps, paths and fills [2]. They also do it statefully.
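To show the shape of that stateful style (this is not libcurl's actual API, just an invented Lua imitation of the pattern):

    -- create a handle, poke options into it, then perform the operation
    local req = newRequest()                       -- hypothetical constructor
    setOption(req, "FOLLOW_REDIRECTS", true)
    setOption(req, "TIMEOUT_MS", 5000)
    setOption(req, "USER_AGENT", "example-agent/1.0")
    perform(req)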
So 1 line of C/C++ becomes 5 lines of Java/C#? That sounds about right! :-) Though I'm sure we can get to 30 if we squeeze in an abstract factory or two!
You can do the above in C#, I haven't written Java in a decade so can't comment on that. I don't really understand your argument though - the options approach is extremely readable. You can also do the options approach in C or C++. The amount of stuff that you can slap into one line is an interesting benchmark to use for languages.
It's always crazy to see languages like C being able to beat high-level languages at some ergonomics (which is usually their #1 point of pride) just because C has bitfields and they often don't.
"Best way" is often contextual and subjective. In this context (boolean flags to a function), this way is short, readable and scoped, even in C which doesn't even have scoped namespaces.
Maybe there are better ways, and maybe you have a different "best way", but then someone can legitimately ask you about your "best way": `Why is that the "best" way?`
The only objective truth that one can say about a particular way to do something is "This is not the worst way".
It's simple, efficient, and saves space in memory. While not as big a deal these days where most systems have plentiful RAM, it's still useful on things like embedded devices.
Why waste a whole byte on a bool that has one bit of data, when you can pack the equivalent of eight bools into the same space as an uint8_t for free?
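The trick isn't C-specific either; here's a rough sketch of the same packing with Lua 5.3's bitwise operators (constant names invented for illustration):

    -- each flag gets one bit; eight of them fit in a single byte's worth of integer
    local PRETTY    = 1 << 0
    local SORT_KEYS = 1 << 1
    local SKIP_NILS = 1 << 2

    local flags = PRETTY | SKIP_NILS       -- several booleans packed into one value

    if (flags & SORT_KEYS) ~= 0 then       -- test an individual flag
        -- sort the keys here
    end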
Sure, that works when you're trying to conserve memory to the degree that a few bytes matter, but the downside is that it's more complex and less obvious.
I've done exactly what you propose on different projects but I would never call it the "best" method, merely one that conserves memory but with typical trade-offs like all solutions.
I'm surprised nobody has suggested this yet. Just use a different name for the function. In your example, the new function should be prettyPrint(). No booleans required. No extra structures required.
I don’t remember exactly where I read this, but I think it was some internet forum of some kind. It makes sense that whoever wrote it got it from there. Never read it myself.
> I now believe very strongly that you should virtually never have a boolean as an argument to a function. There are exceptions, but not many.
Really? That sounds unjustified outside of some specific context. As a general rule I just can't see it.
I don't see what's fundamentally wrong with it. What's the alternative? Multiple static functions with different names corresponding to the flags and code duplication, plus switch statements to select the right function?
It's interesting to compare this with the Post Office Scandal in the UK. Very different incidents, but reading this, there is arguably a root assumption that people made in both cases, which is that "the software can't be wrong". To developers this is a hilariously silly assumption, but non-developers looking at it from the outside don't have the background or training to understand that software can be this fragile. They look at a situation like the Post Office scandal and think "Either this piece of software we paid millions for and was developed by a bunch of highly trained engineers is wrong, or these people are just ripping us off". Same thing with the Therac-25: the software had worked on previous models, and the rest of the company just had this unspoken assumption that nothing could possibly be wrong with it, so testing it specifically wasn't needed.
No, this is not a "hilariously silly thing" for developers. In fact, I'd say that most developers place way too much trust in software.
I am a developer and whatever software system I touch breaks horribly. When my family wants to use an ATM, they tell me to stand at a distance, so that my aura doesn't break things. This is why I will not get into a self-driving car in the foreseeable future — I think we place far too much confidence in these complex software systems. And yet I see that the overwhelming majority of HN readers are not only happy to be beta-testers for this software as participants in road traffic, but also are happy to get in those cars. They are OK with trusting their life to new, complex, poorly understood and poorly tested software systems, in spite of every other software system breaking and falling apart around them.
[anticipating immediate common responses: 1) yes, I know that self-driving car companies claim that their cars are statistically safer than human drivers; that is beside the point here. One, they are "safer" largely because they drive so badly that other road participants pay extra attention and accommodate their weirdness, and two, they are still new, complex and poorly understood systems. 2) "you already trust your life to software systems" — again, beside the point, and not quite true, as many software systems are built to have human supervision and override capability (think airplanes), and others are built to strict engineering requirements (think brakes in cars), while self-driving cars are not built that way.]
> but also are happy to get in those cars. They are OK with trusting their life to new, complex, poorly understood and poorly tested software systems
Because the alternative isn't bug-free driving -- it's a human being. Who maybe didn't sleep last night, who might have a heart attack while their foot is on the accelerator, who might pull over and try to sexually assault you.
You don't need to "place confidence in these complex software systems" -- you just need to look at their safety stats vs e.g. regular Uber. It's not a matter of trust; it's literally just a matter of statistics, and choosing the less risky option.
I wonder if this is a desired outcome of fuzzing, the puncturing of the idea that software doesn't have bugs. This goes all the way back to the very start of fuzzing with Barton Miller's work from ~1990.
> there is arguably a root assumption in both cases that people made, which is that "the software can't be wrong"
I think in this case, the thought process was based on the experience with older, electro-mechanical machines, where the most common failure mode was parts wearing out.
Since software can, indeed, not "wear out", someone made the assumption that it was therefore inherently more reliable.
I think the "software doesn't wear out" assumption is just a conceivable excuse for the underlying "we do not question" assumption. A piece of software can be like a beautiful poem, but the kind of software most people are familiar with is more like a whole lot of small automated bureaucracies.
Bureaucracy being (per Graeber 2006) something like the ritual where by means of a set of pre-fashioned artifacts for each other's sake we all operate at 2% of our normal mental capacities and that's how modern data-driven, conflict-averse societies organize work and distribute resources without anyone being able to have any complaints listened to.
> Bureaucracies public and private appear—for whatever historical reasons—to be organized in such a way as to guarantee that a significant proportion of actors will not be able to perform their tasks as expected. It also exemplifies what I have come to think of as the defining feature of a utopian form of practice, in that, on discovering this, those maintaining the system conclude that the problem is not with the system itself but with the inadequacy of the human beings involved.
In most places where a computer system is involved in the administration of a public service or something of that caliber, has it been a grassroots effort - hey, computers are cool and awesome, let's see what they change? No, it's something that's been imposed in the definitive top-down manner of XX century bureaucracies. Remember the cohort of people who used to become stupid the moment a "thinking machine" was powered on within line of sight (before the last uncomputed generation retired and got their excuse to act dumb for the rest of it)? Consider them in view of the literally incomprehensible number of layers that any "serious" piece of software consists of; layers which we're stuck producing more of, when any software professional knows the best kind of software is less of it.
But at least it saves time and the forest, right? Ironically, getting things done in a bureaucratic context with less overhead than filling out paper forms or speaking to human beings, makes them even easier to fuck up. And then there's the useful fiction of "the software did it" that e.g. "AI agents" thing is trying to productize. How about they just give people a liability slider in the spinup form, eh, but nah.
Wanna see a miracle? A miracle is when people hype each other into pretending something impossible happened. To the extent user-operated software is involved in most big-time human activities, the daily miracle is how it seems to work well enough for people to be able to pretend it works any good at all. Many more than 3 such cases. But of course remembering the catastrophic mistakes of the past can be turned into a quaint fun-time activity. Building things that empower people to make fewer mistakes, meanwhile, is a little different from building artifacts for non-stop "2% time".
I'd consider the Post Office Scandal to be far more malicious. The higher ups in the post office were getting bonuses IIRC according to how much money was "recovered" (defrauded) from the subpostmasters. Also there was a lot of lying to the courts and ministers about the reliability of the software.
As far as I know, the Therac-25 incidents were reasonably honest mistakes.
I agree, that is very true. Therac-25 was incompetence; the Post Office was incompetence with a heavy dose of malice. This aspect just struck me as similar: the unquestioning belief in the infallibility of software.
As a single developer, you have very little weight against Google. The same is true of a single developer in the US.
What does have weight is the European Union, which Croatia is a member of. If the EU parliament makes a law that Google is not allowed to have these kinds of rules and do business in the EU, then Google will listen. Given the horrible state of the US government, the EU is just about the only force left in the world able and willing to stand up against these tech giants in a way that forces them to pay attention and act responsibly.
The chances are higher that the EU makes a law mandating this sort of thing than demanding dropping this requirement in the EU.
The only thing you can expect from the EU is that it requires that apps in the EU market are signed with keys signed by the EU which you will only be able to get if you provide your ID or business registration.
Between Google and the EU I think I would rather be governed by the devil.
Several! You're correct that the Lisps are the most famous, and there are also languages like Erlang that have this as core functionality. But it's also used in things like game engines written in C/C++. You put your "updateAndRenderFrame()" function in a dynamic library and have it take a pointer to the full game state as an argument. When you want to reload, you recompile the dynamic library, the main loop swaps out the implementation, and the game keeps running with the new code. I don't see a reason why you couldn't do this in Rust, though I imagine it's trickier.
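Lua itself makes a crude version of this easy to see, as long as the state lives outside the module (the "game_logic" module name and the reload trigger below are invented; the C/C++ version swaps a recompiled dynamic library instead):

    local state = { frame = 0 }
    local logic = require("game_logic")

    local function reload()
        package.loaded["game_logic"] = nil  -- drop the cached module
        logic = require("game_logic")       -- recompile and reload from disk
    end

    while true do
        logic.updateAndRenderFrame(state)   -- state survives across reloads
        if reloadRequested() then           -- hypothetical: a key press or file watch
            reload()
        end
    end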
I don't know about "shouldn't", I think it's fine if they do. But I basically agree: at some fundamental level, you have to have some trust in your coworkers. If someone says "This fixes X" and they haven't even tried running it or testing it, they shouldn't be your coworker. The purpose of code reviews shouldn't be to answer "is this person honest?" or "is this person totally incompetent?". If someone is dishonest or totally incompetent, that's a much bigger issue, one that shouldn't be dealt with through code reviews.
Very different situation if it's open source or an external contribution, of course.