The ability to make complex things simple is what sets a great programmer apart from an average one. So why do most programmer interviews at companies such as Google focus on writing performant code instead?
Most professional programmers only occasionally need to look into performance issues, but need to take complex things and simplify them with every line of code they write. And yet most programming interviews don’t evaluate this ability. I think this should change.
Because it's an optimal solution for letting talent flow through BigCos in SV. There is no domain- or tech-stack-specific stuff, so studying for the interview process allows the applicant to apply to multiple companies. At the same time it's a pretty hardcore process, so the company knows they are getting engineering talent that is a) smart and b) obedient.
For the rest of the world it absolutely doesn't make sense and that's why where I live, you don't see that style of interviews at all.
One thing I'd add: One possible function of an interview process is to find qualified interviewees. Another is to make interviewers feel smart. I think typical BigCo interview processes are optimized for the latter.
David Epstein's new book Range talked about academic research splitting domains into kind vs wicked. Kind learning domains are ones where "feedback links outcomes directly to the appropriate actions or judgments and is both accurate and plentiful", while wicked ones are "situations in which feedback in the form of outcomes of actions or observations is poor, misleading, or even missing". [1]
The hard parts of real-world software development are generally in the "wicked" bucket. Schoolwork and puzzle questions are both generally "kind" in the sense that there's a right answer and you're expected to figure it out. It's impossible to be too smug working on "wicked" problems because you get your ass kicked often enough to stay humble. But in "kind" domains it's quite easy to indulge one's desire to feel superior by dragging people through things you know well.
Personally, when I interview people I try to set things up so that there's no right answer; the goal is to see how well they get to good answers, and how well they collaborate during that process. I'd love to see more people do that.
> I am reluctant to believe someone has mastered simplicity until after they've mastered complexity, hence the complex interview does have value.
While I buy the premise, this seems like a strange conclusion. If a mastery of simplicity requires a mastery of complexity first, then testing for an ability to be simple tests for both kinds of knowledge, whereas testing for an ability to handle complexity tests only one–so what's the point?
I think you're over-indexing on the notion that someone has "mastered simplicity", when what's truly valuable is the ability to reduce that which is complex to something that is simple. Inherently, that requires the ability to grasp the complex, and so you interview by presenting something that is complex, and judge based on the simplicity of the solution.
I like very much the 'over-indexing' terminology. I agree that I may be improperly confusing the ability to write simple code with the ability to solve simple problems.
I think that's my point: if you write the simple code, then you've probably demonstrated both that you can write complicated code, and that you can simplify it.
If I may add, simplicity is also an expression of experience/wisdom.
At first I had a distorted view of what is complex. I thought that what was complex was mostly what was foreign, a different perspective. With the right viewpoint everything aligns onto a small line.
The more you read/see, the more you get accustomed to that fact. The more you see that more != more complex; quite the opposite.
The math fields, not always I believe, but very often, run after minimized models of everything. Even recursion is a way to reduce the infinite into a small/finite set of rules.
All kinds of ideas are apt to come when you are in the shower... ideas for simplification, ideas for improving performance, ideas for new features. That doesn't mean we should incorporate a shower in the interview schedule! We can simply review the interviewee's code, prioritizing simplicity over things like whether he/she considered every corner case.
> We can simply review the interviewee's code, prioritizing simplicity over things like whether he/she considered every corner case.
That sounds like a recipe for disaster, though. Simplicity that doesn't account for every corner case in the domain of the code[0] is false simplicity, a bad abstraction. The challenge of writing simple code is this: threading a single, unifying concept through all corner cases.
--
[0] - I.e. when writing a function, you don't necessarily have to account for e.g. the off chance of heap getting corrupted externally in the middle of execution of your code. But you'd better account for all the values your function might be called with, in all combinations.
>threading a single, unifying concept through all corner cases
Some corner cases influence the core algorithm, some don't. Checking for degenerate cases and so on may not influence the core algorithm; if so, it need not be done in an interview.
Re: performant code, the developer writes the code once, it is read many times, and it is run many many many times. Several orders of magnitude more.
Yes, making code that is easy to read and less bug-prone is good. But at the end of the day the customers are going to be running your code millions of times a day, and if you need to make the code slightly harder to read to improve performance, then by all means do so.
If your code is only going to be run once and must be reliable, then you can make a different trade-off.
What is premature and what is not? Is choosing data structures that fit the job a premature optimization? I don't think so. But I've seen people argue against it.
First you make it work, then you benchmark it. Then you see if that particular part is a bottleneck and whether there is a business case for optimisation.
I know it's fun and exciting to optimise a function to perform at maximum efficiency, but people tend to forget that someone has to read that piece of code in the future and understand it.
All the fancy tricks might've given a 2% increase in performance, but made it 200% less understandable by anyone except the codegolfing optimizer trying to be clever. =)
Spectrum of performance:
LO |---*-------*--------*------------*-------| HI
       ^       ^        ^            ^
       |       |        |            |_root of all evil if premature
       |       |        |_you should be here
       |       |_you can be here if you don't do stupid things
       |_you are here
--
> All the fancy tricks might've given a 2% increase in performance, but made it 200% less understandable by anyone except the codegolfing optimizer trying to be clever.
This applies to hairy, last-ditch effort optimizations. The kind of your average programmer isn't even capable of doing. It's nothing like the optimizations most real-world code needs.
It's why I consider the "premature optimization" adage to be actively harmful, as it legitimizes lack of care and good craftsmanship.
From what I've seen, a lot of code can be trivially optimized with no net loss to readability (and sometimes a gain!), by simply removing dumb things, mostly around data structures. Fixes involve using vectors instead of lists or hash tables, depending on size and access and add/delete patterns. Using reference equality checks instead of string comparisons. Not recalculating the same value all over again inside a loop.
The kind of things above are ones that bleed performance all over your application, for no good reasons. I consider it a difference between a newbie and a decent programmer - whether or not they internalized how to code without stupid performance mistakes, so that the code they write is by default both readable and reasonably performant.
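To make the "trivially optimized" category above concrete, here's a rough TypeScript sketch (invented names, not from anyone's codebase) of the two most common fixes: hoisting invariant work out of a loop, and replacing linear membership scans with a set.

    // Before: an invariant recomputed on every iteration, plus an O(n) scan per item.
    function slowFilter(users: string[], banned: string[]): string[] {
      const result: string[] = [];
      for (const user of users) {
        const normalized = banned.map(b => b.toLowerCase()); // recomputed every pass
        if (!normalized.includes(user.toLowerCase())) {      // linear scan per user
          result.push(user);
        }
      }
      return result;
    }

    // After: hoist the invariant and use a Set for O(1) lookups - arguably more readable too.
    function fastFilter(users: string[], banned: string[]): string[] {
      const bannedSet = new Set(banned.map(b => b.toLowerCase()));
      return users.filter(user => !bannedSet.has(user.toLowerCase()));
    }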
Then you go to actual optimizations, the kind that benefit from a benchmark - not because doing them elsewhere is wrong in principle, but because they take time and noticeably alter code structure. Using better algorithm, and/or using a better data structure, both come here. They don't have to impact readability, as long as you isolate them from the rest of the system behind a simple interface.
(Like, e.g. one day I achieved a 100x performance boost in an application component by replacing a school-level Dijkstra implementation with a two-step A*-based algorithm and a data structure specifically designed for the problem being solved, and easily managed to wrap it in an even simpler interface than the original. Since the component was user-facing, it pretty much single-handedly changed the perception of the application from sluggish to snappy. The speedup itself probably saved many people-hours for users, who were a captive audience anyway (this was an internal tool).)
Only then you get to the "premature optimization is a root of all evil" part, which is hairy tricks and extreme levels of micromanagement. Making sure you don't cons anything, or more than absolutely necessary. Counting cycles, exploiting cache-friendly data layouts, etc. This can have such a big impact on a system and surrounding code that it does really benefit from not being done until absolutely needed (except if you know you'll need it from the start - e.g. in some video games).
>changed the perception of application from sluggish to snappy
.. so, you measured the performance (sluggish), saw the need for improvement and improved it (snappy). That is not premature optimization. It would be premature optimization if it happened without measurement and without need.
I agree with your examples above. If you can choose the right data structures/algorithms/patterns without sacrificing readability or development speed, by all means do so. But don't spend hours improving something which doesn't need improvement.
That's true. What I personally advocate is: first, learn enough about programming and the language to not do stupid things - so your code is already somewhat performant by default, at zero cost to readability. Second, when you're designing, think a little bit about performance and, given two designs of similar complexity but different performance characteristics, pick the more performant one. Third, when refactoring, if you see something stupid performance-wise, fix it too. All these things cost you little time and make your application overall snappier and less likely to develop performance problems in the future.
Beyond that, measure before you optimize, as such interventions will require a larger amount of effort, so it makes sense to do them in order of highest impact first.
(Also note that "performance", while usually synonymous with "execution speed", is really about overall resource management. It's worth keeping memory in mind too, in particular, and power usage if your application could be used on portable devices - which is really most webapps nowadays.)
What? I guess some systems design interviews talk about performance but most interviews are algorithms / data structures and have nothing to do with writing performant code.
Any interview involving data structures and algorithms will certainly expect you to be able to categorize the big O of your solution in both space and time. Your interviewer will probably also challenge you to at least speculate on how you could improve on the big O if needed. What is that about if not writing performant code?
If you consider “write code that doesn’t timeout or OOM on pathological inputs” to be writing performant code, sure.
But as it is normally understood, writing performant code is more about writing for fast performance with low time constant on everyday small inputs - something which algorithms and data structures interviews never touch on. Big O complexity is not directly related to performance, except for pathological inputs.
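A toy illustration of that distinction (my example, not the parent's): both functions below are O(n), but memory layout and pointer chasing give them very different constant factors on everyday inputs, which is what "performant" usually means in practice.

    interface ListNode { value: number; next: ListNode | null; }

    function sumArray(xs: number[]): number {
      let total = 0;
      for (let i = 0; i < xs.length; i++) total += xs[i]; // contiguous, cache-friendly
      return total;
    }

    function sumList(head: ListNode | null): number {
      let total = 0;
      for (let n = head; n !== null; n = n.next) total += n.value; // one pointer hop per element
      return total;
    }

An interviewer asking for big O treats these as identical; a profiler on everyday inputs usually does not.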
The concept of clarity is why I don’t like the consequence of SOLID, where you have lots of tiny classes.
It’s easier to understand complex functions and a simple class structure than the other way around. Because jumping between multiple files/classes incurs a high understandability cost, whereas complex functions fit on your screen typically.
I typically find it very difficult to understand complex functions (100+ lines of code, ~3 or more nesting levels deep), and even simpler complex functions (mixing 2-3 concepts into a 15-line function).
I've only just started doing TDD the "Growing Object-Oriented Software, Guided by Tests" way, and I find it incredibly helpful that each and every class does just _one_ thing; even splitting up those 15-line functions into two or three separate classes implementing an interface -- single responsibility -- helps me a lot in reasoning about the code.
I _have_ experienced the dependencies issue myself already though, it's very annoying to click on a method in my IDE, and then get shown the interface definition of that particular method. I'll then have to trace my way through a couple of files to find the dependency, very annoying.
This is widely believed and repeated, but empirical evidence actually runs the other way: according to studies cited in the book Code Complete, functions in the range of 100 to 150 LOC are more maintainable than shorter ones.
Code complete speaks about subroutines if I understood correctly.
I think that in functions, or even objects, the results would be very different.
I usually find the shortest functions more powerful and clear.
For example, the pipe operator in F# is nothing more than:
    let (|>) x f = f x
while it has huge benefits for the overall readability of the language.
At the end of the day, a simple measure like LOC can never capture readability, good or bad, and you get eaten by Goodhart's Law if you focus too much on it.
No one would argue otherwise. Indeed, you can trivially take any readable function and transform it into an unreadable one of exactly the same length. But this doesn't seem like a valid reason to dismiss specific findings of specific studies. Don't you think it's interesting that such research as we have contradicts the most often-repeated claim about this aspect of programming?
I don't know. I've learned over the years that there is always a study confirming or contradicting whatever point you want to make. "Beware The Man Of One Study".
I say that without even looking at those studies, which is perhaps unfair. But there are So Many Studies...
My personal experience is that when I was exposed to shorter simple and (so important!) well named functions, my work became so much better. And that is now the school I subscribe to.
That's not - at all - to say you can't also find very good practices doing different things. But that's not where I found it.
Still, one study is still an important piece of evidence to consider when all you had before is no studies and a gut feeling.
My personal experience differs from yours somewhat. I believe it's not the length or the number of methods that matters, but what language (i.e. abstraction) they create. You try to subdivide the function into functions that are a natural fit for the task being done, but no further. If you still end up with a long block of code - as you very well might - consider comments instead. A comment telling what the next block of code will do is kind of like an inlined function, except you don't have to jump around in the file and you don't lose the context. Much easier to read.
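For what it's worth, a small sketch of that "comment as inlined function" style (invented example): the block comments name the steps in place, so you read top to bottom without jumping.

    function importOrders(raw: string[]): number {
      // Parse the lines and discard malformed ones.
      const parsed = raw
        .map(line => line.split(","))
        .filter(fields => fields.length === 3);

      // Convert to typed records.
      const orders = parsed.map(([id, qty, price]) => ({
        id,
        qty: Number(qty),
        price: Number(price),
      }));

      // Total value of the import.
      return orders.reduce((sum, o) => sum + o.qty * o.price, 0);
    }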
I used to write code where essentially every piece of code longer than 3-5 lines got broken out into its own private function. The amount of jumping I had to do when reading the code, and the amount of work maintaining and de-duplicating small private functions, was overwhelming.
When I was shown that you can break out a function that's only used once, just in order to name it (2005, or so), it was one of the greatest revelations in my career.
It also serves as a way to tell you what that code does, without you having to know details of how it does it, until the rare day when it's important.
But I only do it when that code is genuinely hard to follow, not because my function is "over 10 lines, and that's our policy".
agumonkey's mention of cyclomatic complexity in a parallel comment made me remember yet another realization wrt. breaking functions out: if you work with languages without local functions and start breaking your large function into smaller functions or private methods, you run into a readability/maintenance problem. The next time you open the file and start reading through all the little helper functions, you start wondering - who uses which one? How many places use any one of them?
With IDE support, answering the question for a single function is just a matter of a key combination, but that still adds friction when reading. I found that friction particularly annoying, and a file with lots of small helper functions tends to be overwhelming for me to read (it's one reason I like languages with local functions). Whereas if you didn't break the code out, and only maybe jotted a comment inline, you can look at it and know it's used only in this one place.
> I typically find it very difficult to understand complex functions
It seems to me like "complex" and "ability to understand" mean the same thing, so this phrase doesn't have much meaning.
It's difficult to define "ability to understand" / "complex" without using either of those words in the definition. For example, you mention lines of code, nesting and multiple concepts.
I tend to agree with your examples, however not necessarily the lines of code. I've seen a single large function that represents an algorithm in a way that's easier to understand than an implementation that breaks it up into tens of little functions. It made liberal use of comments to explain each section of code in the function. I believe its advantage was that when reading, you could simply scroll down the function line by line rather than having to jump all over the file.
Sure, all I mean is that I get frustrated and lose unnecessarily much time being distracted by keeping a model of how the function works and how variables interact in my head. I have ADHD so my working memory isn't fantastic for these kinds of functions.
I suppose you could take my definition of complexity to be approximate to cyclomatic complexity
Yeah, but I would say it depends on context; clarity is very context-dependent. Do what makes the code easiest to read, taking into account human limitations.
Sometimes splitting out code makes sense by making the underlying structure clearer, sometimes it makes it bothersome to find the actual important details. For instance, with logic that filters what jobs to run based on conditions, it might make sense to abstract the logic for the conditions into one class per condition, to make the filtering logic clearer.
> Sometimes splitting out code makes sense by making the underlying structure clearer, sometimes it makes it bothersome to find the actual important details.
Yeah, that seems to be something I'll have to spend some time learning about. Right now I'm just mindlessly splitting off everything, and it kind of works, but it's annoying to navigate all over the place just to find some detail somewhere
> it's very annoying to click on a method in my IDE, and then get shown the interface definition of that particular method. I'll then have to trace my way through a couple of files to find the dependency, very annoying.
The IDE should be able to show you the implementations. When using Java and Eclipse, for example, you just hold Ctrl and hover with the mouse over a method call. In the dialog that appears you can then select "Go to implementation" instead of "Go to definition" (or similar), and Eclipse searches for and shows you all classes that contain an override for that method; you can then click on one of them to view that specific overriding method.
I have a theory about why this functional atomization has become so popular: small design topics (functions, classes) are easy to talk about so the various wannabe OOP gurus like Robert Martin keep talking about them the most, becoming more and more extreme to the point that we end up with hundreds of one line functions which can only really be put together in one way so that they fulfill the arbitrary criteria of reading like prose.
This is an excellent example of local optimizations - i.e. local design optimizations - which end up harming the overall design and maintainability. You've discovered this yourself, but still cling to this dogma of tiny functions.
I've never really understood why those particular 5 design principles became a sort of "top 5". Then I grokked dependency inversion and it really changed how I write code.
That article bashes (among other things) IOC containers and mocking frameworks. I absolutely hate Spring IOC and mockito. But dependency inversion is not dependency injection. It wasn't until I got sufficiently fed up with Spring being everywhere in my Java code that I figured out how to hide Spring as an implementation detail.
These are just a bunch of ideas that have been cobbled-together by Robert Martin and accidentally ended up as folklore in a world without rigor and order.
Kevlin Henney basically refuted the idea that SOLID has any meaning or use in software development.
SOLID is rather old, and never really grabbed me that much. Well, rather, I kinda agreed with it (but I had been in OOP for about a decade before it came along) for a bit, but no longer. Most of it is rather obvious, and I find the terms just buzzwords. "Dependency Injection" particularly drives me crazy, as we've been passing in references to objects forever, and suddenly we need a Special Term for it. What next, instead of pointers we call them Dependency Actors?
I also agree that I'm tired of tiny classes and tiny functions. I went on a rant to my coworkers the other day about them, after spending an afternoon figuring out how some code works that fully believed in small functions.
Nothing makes 100 lines of code more readable than splitting them across 100 functions in 10 classes in 10 files! /s
I agree with another poster, though, that VisualWorks Smalltalk made it a lot more palatable. I'd honestly really like to try a system that treats a program as a database of functions rather than a directory structure of files. I don't think the directory structure adds anything that a tagging system couldn't do.
> "Dependency Injection" particularly drives me crazy, as we've been passing in references to objects forever
Might have been old to you, but not to everyone. I have seen plenty of codebases which inject nothing and where all objects create their dependencies themselves. Lower-level code is especially susceptible to this - the amount of C code that I've seen that made use of some kind of DI tends towards zero. And with that, often also the amount of tests that are available for the codebase (with claims like: this is not testable).
I find the term "Dependency Injection" also still unnatural and academic, but I really like the core idea, which isn't that hard to teach (pass dependencies instead of creating them yourself).
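For anyone who hasn't seen it spelled out, a minimal sketch of "pass dependencies instead of creating them yourself" (Clock and Report are invented names, not any framework's API):

    interface Clock { now(): Date; }

    // Without DI: the dependency is created inside, so a test can't control time.
    class HardwiredReport {
      generate(): string {
        return `generated at ${new Date().toISOString()}`;
      }
    }

    // With DI: the dependency is passed in, so a test can hand over a fixed clock.
    class Report {
      constructor(private clock: Clock) {}
      generate(): string {
        return `generated at ${this.clock.now().toISOString()}`;
      }
    }

    const prod = new Report({ now: () => new Date() });
    const test = new Report({ now: () => new Date("2019-01-01T00:00:00Z") });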
Whether one also needs DI frameworks is another topic, on which I have no strong opinion.
This. It makes it sound like more than it is, and while the basic idea should be taught (briefly), it isn't some amazing concept worthy of enshrining in some holy Acronym.
The complexity of tests tends to go up faster than the complexity of the functions under test.
One of the thought experiments I use for teaching testing is to remind them that nobody wants to read your code. The next person to read your test is probably going to be reading it because it’s failing. They are in effect already having a bad day. Don’t make it worse. They will assign that negative emotion to you.
This has started to affect all of my design thinking. I’m probably only reading your code because I’m trying to hunt down a bug (or my attempt to add new functionality has failed spectacularly). Every time I run into a function that contains code smells, I have to stop long enough to figure out if those smells are the bug I’m looking for or something else. In a particularly bad codebase, like the one I inhabit now, by the time I finally find what I’m looking for, I no longer recall why I was looking for it in the first place.
This is not good code. It’s shitty code written by people who aren’t emotionally secure enough to write straightforward code. Or are running from one emergency to the next, all day every day. Or both. Or are young developers just copying what they see around them.
All interesting points, the reason I hate class clutter is especially because of debugging/understanding. I don't want to have to spend a long time building a mental hierarchy of classes and what they do just to figure why a link doesn't have the right URI. Essentially, people use classes to achieve magic code. Code that's very, very convenient when it works, but is monstrous to debug when it doesn't. Especially if the code isn't something you're familiar with.
This is a subjective opinion. I’ve had this debate with countless coworkers as I take the exact opposite stance as you. Neither of us is correct, neither of us is wrong.
The problem with declaring simplicity and clarity to be the final goal is that’s not an objective truth
This is the very issue I have with a lot of SE dogmas and those that subscribe to one particular set, not just the one described here. Objective truth is an afterthought in most cases.
Most of these dogmas are anecdotal at best, with very little to no empirical backing within their surrounding context, but people will stand by certain approaches as if they were well-tested theories like general relativity, Newtonian mechanics, or QED.
The concept of clarity is why I don’t like the consequence of SOLID, where you have lots of tiny classes.
tl;dr - Context!
The problem with a lot of the principles developed from the Smalltalk days is that different environments have differing cost/benefit results for different tasks. This means that many practices which are awesome in a dynamic, programmer-is-god-of-runtime environment like Smalltalk are going to bog down in other environments. (see below)
It’s easier to understand complex functions and a simple class structure than the other way around. Because jumping between multiple files/classes incurs a high understandability cost, whereas complex functions fit on your screen typically.
What this reveals about the above observation, is that reading and conceptualizing large chunks of operation of the code has a higher cost/benefit in the accustomed environment. My day job is in C++, and because of the compile times, that's certainly the case in that environment. However, in an environment like VisualWorks Smalltalk, where the debugger is 10X nimbler and has literally made students in my Smalltalk classes cry out, "This debugger is GOD!" the cost/benefit trade-offs are very different. Most of the time, code is read and edited in the debugger, and flipping between multiple classes in that context happens automatically by navigating. Also, the work for navigating relationships in the object model in a clean code base is an entirely different order of magnitude. Instead of O(2^n) on n == distance by references, it's O(n), because there's no fan-out for different kinds of references. It's all done by "message sends" or calls of a method.
A lot of damage over the past 3 decades has been done by very smart OO people who tried to transplant methodologies from Smalltalk to C++ and Java. If one wants to be a step above, then look at the largest 7 or so cost/benefit trade-offs that affect the methodology. Then adjust accordingly.
Similarly, probably the main reason I really like JS... I can apply different paradigms to different areas of code where a given approach makes more sense in terms of understanding or cognitive overhead. JS debugging is relatively nice as well, though in some cases the tooling leaves a bit to be desired.
When I'm in C# by contrast, I often get irritated that I can't just have a couple of functions that I can reuse; no, I have to create a static class, ideally in a namespace, and do a bit more. I do try to minimize the complexity of my classes and separate operational classes that work on data from data classes that hold values.
I wish we named all kinds of ECS-es. It's a total confusion of terms now. Your view is one kind of ECS. Another view is "structuring data to be cache-friendly". Another is "data feels better when modeled and accessed in relational fashion". Yet another is "composition over inheritance taken up to 11". People mix and match these views, which is why every single article on ECS seems to be talking about something different from every other article.
Your view is one kind of ECS. Another view is "structuring data to be cache-friendly". Another is "data feels better when modeled and accessed in relational fashion". Yet another is "composition over inheritance taken up to 11".
I believe you can, though I've not been there myself. But all the articles I've read and all the ECS implementations I've studied so far usually focus on one, maybe two of those aspects at a time, so every one looks different from another.
(In my current side project, I have relaxed performance requirements, so I'm experimenting with taking the relational aspect up to 11.)
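To illustrate what "relational" means here (my own sketch, not the parent's code): components live in separate tables keyed by entity id, and a system is essentially a join over those tables.

    type EntityId = number;
    interface Position { x: number; y: number; }
    interface Velocity { dx: number; dy: number; }

    const positions = new Map<EntityId, Position>();   // one "table" per component type
    const velocities = new Map<EntityId, Velocity>();

    // The movement system is effectively an inner join of the two tables.
    function move(dt: number): void {
      for (const [id, vel] of velocities) {
        const pos = positions.get(id);
        if (pos) {                       // only entities present in both tables move
          pos.x += vel.dx * dt;
          pos.y += vel.dy * dt;
        }
      }
    }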
Maybe I'm not so sure what you mean by "taking the relational aspect up to 11." My side project is in golang, so composition over inheritance isn't really an issue. I guess that leaves me with just 2 of the aspects.
Classes and methods are abstractions. They hide code. It's like putting your dishes in the cupboard instead of leaving them in a mess on the counter. There is a small initial payment to know where they are located, but then for the rest of time you benefit from the mental space of having them out of mind until you need them.
With modern IDEs, going to class/method definitions is a breeze. In my experience people who write big walls of code are often those who don't know how to leverage modern IDEs.
What you should really aim for is a simple class structure and simple functions within. Complexity at either level is not your friend and certainly not something the next developer to maintain the code will thank you for. That’s the person you should be designing for.
> whereas complex functions fit on your screen typically
Are the complex functions unit-testable? Do they depend on other units of work or other libraries? Do they have multiple responsibilities? You are probably following most of what SOLID entails.
I find it funny that HN consistently bashes SOLID, I feel like SOLID has been misrepresented. They are _guidelines_ for development, they do not dictate everything. They might influence or support a decision.
Those who bash SOLID: have you worked on gigantic projects that are in active development for decades? I advocate for SOLID because I have witnessed first-hand its great benefits. I have built and worked on plenty of projects that apply these principles, and I have seen wonderful open-source projects that embrace them. And of course I have seen adverse effects from it (eg. your linked article complains of innumerable, non-sensical interfaces) but that is mostly due to inexperienced developers that don't get it. And of course there are some devs/architects that go overboard, introducing premature abstractions, etc. To those people I say YAGNI. The point being: following SOLID doesn't guarantee nice design. It is easy to produce shit code following SOLID, but it is even easier without it.
At the end of the day, there are trade-offs. I think using SOLID as development guidelines produces a scalable codebase divided cohesively into units of work.
I find it funny that HN consistently bashes SOLID, I feel like SOLID has been misrepresented. They are _guidelines_ for development, they do not dictate everything. They might influence or support a decision.
And of course I have seen adverse effects from it...but that is mostly due to inexperienced developers that don't get it.
Whether SOLID is seen to pay off in the medium term, or the long term, or the very long term, is dependent on environment. In some environments, the payoff is apparent sooner. In others, it's only longer term. This is why inexperienced developers may not get it.
Of course, that brings up the question, "How can we better communicate the benefits?" Can we document and present those, from the actual history of the project?
EDIT: To better relate this to my other comment in this thread: a problem with SOLID in environments where there's a lot of bookkeeping for the compiler's sake, and where compile times slow down the edit-test cycle, is that less experienced developers are going to first notice, "Hey, this stuff makes me flip back and forth between files!" If they never see the benefits, they're naturally going to conclude it's a bad thing.
I'm a big fan of SOLID too. It actually makes a great deal of sense.
Re: your comment about "And of course I have seen adverse effects from it (eg. your linked article complains of innumerable, non-sensical interfaces) but that is mostly due to inexperienced developers that don't get it"
While this is true, there is a deeper implication here. Assume programming talent is normally distributed. Now, ask yourself above what percentile do you have to be in that distribution to truly grasp the how/why of SOLID and to be able to wield it to solve problems. Now, ask yourself what percentile do you have to be below where you just go crazy with creating non-sensical interfaces and thousands of awfully named classes with single responsibilities such as "CustomerCommandMapEmbelisherConverter"?
The real problem is that code bases tend to be horrible (also normally distributed!) because a lot of talent doesn't meet the bar and can't actually produce programs that aren't rubbish. In any org you'll find the quality of the code base is somewhere on a normal distribution. And you will find all the engineers somewhere on a normal distribution. You'll have a couple of brilliant people, a couple of horrible people, a lot of average people.
The only time you truly see exceptional code bases that everyone stops and goes "wow, this is nice!" are the rare times the stars aligned.
SOLID is contradictory and flawed when used with Object Oriented Programming.
1. Single-responsibility Principle. Objects should have only 1 responsibility
Objects by default often have two responsibilities: changing their own state and holding their own state.
2. Open-closed Principle. Objects or entities should be open for extension, but closed for modification.
The very concept of a setter or update method on an object is modification. Primitive methods promoted by OOP immediately violate this principle.
3. Liskov substitution principle. Subtypes can replace parent types.
This principle represents a flaw in OOP typing. In mathematics all types should be replaceable by all other types in the same family, otherwise they are not in the same type family. The fact you have the ability to implement a non-replaceable subtype that the type checker identifies as correct means that OOP or the type checker isn't mathematically sound... or in other words the type system doesn't make logical sense.
4. Interface Segregation Principle. Instead of one big interface have functions depend on smaller interfaces.
I agree with this principle. Though many composable types leads to high complexity. I don't think it's an absolute necessity.
5. Dependency Inversion principle. High level module must not depend on the low level module, but they should depend on abstractions.
This is a horrible, horrible design principle. Avoid runtime dependency injection always. Modules should not depend on other modules or abstractions, instead they should just communicate with one another.
If you are creating a module that manipulates strings, do not create the module in a way such that it takes a Database interface as a parameter and then proceeds to manipulate whatever the database object outputs.
Instead create a string manipulation module that accepts strings as input and outputs strings as well. Have the IO module feed a string into the input of the string manipulation module. Function composition over dependencies... Do not build dependency chains.
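A rough sketch of what that looks like in practice (invented names, not any real API): the string module only ever sees strings, and the IO layer does the feeding.

    // Pure string-in, string-out module; it knows nothing about databases.
    function slugify(title: string): string {
      return title.trim().toLowerCase().replace(/[^a-z0-9]+/g, "-");
    }

    // The IO module fetches the string and feeds it in - composition, not injection.
    async function publishLatest(db: { latestTitle(): Promise<string> }): Promise<string> {
      const title = await db.latestTitle(); // IO stays out here
      return slugify(title);                // only a string crosses the boundary
    }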
It seems you interpreted SOLID with a functional mindset and then turned around and found OO lacking.
1. Single-Responsibility Principle: Whether or not an object can change its own state has nothing to do with how many responsibilities it has. Even a pure function that takes a single argument can have multiple responsibilities. To give a silly example, a spell-check-and-update-wordcount function/object would violate SRP.
2. Open-Closed Principle is about modification of the code. It means the function/object should do its thing so well, you never have to touch its code. But if you want to modify the behavior of your program you should have a way to insert your new function/object so the new behavior is added.
3. Liskov Substitution Principle: "the type checker isn't mathematically sound" No type checker is mathematically sound. Obviously correct statement, since even math itself cannot be automatically proven. However, what LSP basically warns against is saying: "a square is a special type of rectangle". It's not, because if you take this 'rectangle' and multiply its width by 2 and its height by 3, you either end up with a 'not a square', which is unexpected, or you don't end up with 2xwidth by 3xheight, which is also unexpected. (See the sketch after this list.)
4. Interface Segregation Principle: agreed
5. Dependency Inversion Principle: "modules should just communicate with one another" is exactly what DIP warns against. Your monthly-activity-calculator shouldn't 'just' communicate with the user-database module. It should take a user-collection interface and let another part of the program that is responsible (SRP!) for setting up that system provide it. That way this program-setup can decide based on configuration / the environment to pass it a redis-user-collection instead of an oracle-user-database.
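A small sketch of the square/rectangle problem from point 3 above (invented code, assuming the classic mutable-setter design):

    class Rectangle {
      constructor(public width: number, public height: number) {}
      setWidth(w: number): void { this.width = w; }
      setHeight(h: number): void { this.height = h; }
      area(): number { return this.width * this.height; }
    }

    // The type checker accepts Square as a Rectangle, but it can't honor the contract:
    // changing the width silently changes the height too.
    class Square extends Rectangle {
      constructor(side: number) { super(side, side); }
      setWidth(w: number): void { this.width = w; this.height = w; }
      setHeight(h: number): void { this.width = h; this.height = h; }
    }

    function stretch(r: Rectangle): number {
      r.setWidth(r.width * 2);
      r.setHeight(r.height * 3);
      // For an s-by-s input, a plain Rectangle yields 6*s*s; a Square yields 36*s*s.
      return r.area();
    }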
1. Depending on what layer you analyze things at, in OOP, it may not be possible to maintain single responsibility. In OOP SOLID refers to the business layer. However with FP you can take single responsibility all the way down to types. A function returns One TYPE. It interfaces with the universe through a single type and that is one responsibility.
2. Why does the open and closed principle only have to apply to code? What if it could apply to everything. You gain benefits when you apply this concept to code... what is stopping the benefits from transferring over to runtime structures. SOLID for OOP is defined in an abstract hand wavy way, for FP many of those guidelines become concrete laws of the universe.
>No type checker is mathematically sound.
3. A type checker proves type correctness. Languages can go further with automated provers like Coq or Agda. They are mathematically sound. Your square example just means that types shouldn't be defined that way. It means that the type checker isn't compatible with that method of defining types.
4. -
5. I highly disagree. There should only be communication between modules, NEVER dependency injection. The monthly activity module should not even accept ANY module, or module interface, as a parameter. It should only accept the OUTPUT of that module as a parameter. This makes it so that there are ZERO dependencies.
For example, don't create a Car object that takes in an engine interface. Have the engine output joules as energy and have the car take in joules to drive. Function Composition over Dependency Injection. (Also think about how much easier it is to unit test Car without a mock engine; see the sketch after this list.)
If you get rid of dependency injection, you get rid of the dependency inversion principle. DIP builds upon a very horrible design principle which makes the entire principle itself horrible.
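And a sketch of the Car/engine example above (my code, numbers illustrative only): the car consumes the engine's output, not an Engine interface, so testing it needs no mock.

    // Engine: fuel in, joules out (very rough figure for petrol).
    function engineOutput(fuelLiters: number): number {
      return fuelLiters * 34_000_000;
    }

    // Car: joules in, metres out. It never sees an Engine.
    function distanceTravelled(joules: number, joulesPerMetre: number): number {
      return joules / joulesPerMetre;
    }

    // Composition instead of injection:
    const metres = distanceTravelled(engineOutput(40), 2_000);

    // Unit testing the "car" needs no mock engine, just a number:
    console.assert(distanceTravelled(10_000, 100) === 100);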
...I agree with your comments on Liskov Substitution Principle, that the fact it's even possible is a weakness in OOP type systems.
But the rest of your examples are very far off base.
I somewhat agree with Single Responsibility being perhaps not quite right as "single" isn't always desired, appropriate or possible. But the general philosophy is absolutely on point. It's an instruction to carefully consider whether a component should be responsible for something or not and if not then think about where else that responsibility should lie. It gets pretty gnarly when you see things that just have way too many responsibilities. They become unwieldy. An object being responsible for holding and manipulating its state isn't what I would class as a responsibility. That is below the line. That's thinking far too granularly about what a responsibility is.
Same for open/closed. There is a great picture that represents open/closed as a human body (the closed system) that you can put different layers of clothes on (open for extension), which I think beautifully captures the essence of the principle. When this is done right it's an absolute blessing. You mostly find it in frameworks that have a life-cycle: at certain points (say, before anything happens or after everything has happened) they provide an overridable method with no behavior. That method allows you to insert logic the framework designers didn't think to cater for, while keeping the framework life-cycle intact.
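A minimal sketch of that life-cycle-hook idea (invented framework and names): run() stays closed for modification, while the empty hooks leave it open for extension.

    abstract class JobRunner {
      run(): void {               // closed: the life-cycle itself never changes
        this.beforeRun();
        this.doWork();
        this.afterRun();
      }
      protected beforeRun(): void {}          // open: empty hooks with no behavior
      protected afterRun(): void {}
      protected abstract doWork(): void;
    }

    class CleanupJob extends JobRunner {
      protected beforeRun(): void { console.log("acquiring lock"); }
      protected doWork(): void { console.log("deleting stale rows"); }
    }

    new CleanupJob().run();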
The example of dependency inversion just doesn't make sense. If you're creating a string manipulation library it should take strings and nothing else. It doesn't need anything else. If you're creating a string manipulation library in the first place you probably should just use the standard library. Maybe that's just a bad example, but I still don't agree with your sentiment of always avoiding runtime dependency injection.
Forgetting the string manipulation example - I'm curious what you have in mind when you say "modules should instead communicate with one another". How does this communication take place? What language are we talking about and what does some code look like? Mostly what comes to mind when I think of that are either newing up an instance of a class or calling a static method, or perhaps making some kind of http/tcp request?
For the first two principles... in OOP they are just guidelines operating at the layer of business logic. There are programming languages/styles that implement these "principles" as laws all the way down to primitive components.
See what I wrote about composition. I also have an example about a Car and engine class later in the thread.
Function Composition > Dependency Injection.
> ...I agree with your comments on Liskov Substitution Principle, that the fact it's even possible is a weakness in OOP type systems.
It's not actually a weakness in the type system. It's the weakness in the language. The language should never allow for such types to be constructed. Basically Inheritance is not compatible with the type checker. You get rid of inheritance, you get rid of this problem.
>I think using SOLID as development guidelines produces a scalable codebase divided cohesively into units of work.
The problem with Object oriented programming is not any of these things. The problem is that an object is a bad choice for a unit of work. A good analogy is bricks and construction. If a brick represents a unit of work to construct a wall, object oriented programming represents a brick with jagged faces.
This is why, no matter how deeply you follow these guidelines you will always have to build custom "interface bricks" (aka glue code) to compose jagged bricks together.
GoLang solves the problem with objects by getting rid of objects all together, but the fundamental procedural function that it uses as a primitive of composition is also jagged in a way. GoLang procedures do not compose very well.
There is a deeper primitive that programmers should model their code around that gets rid of the usage of misshapen bricks as the building block of programs. Bricks that compose with other bricks without glue. I leave it to you to find out what this primitive is, as you use it everyday to build misshapen objects.
The original article talks about readability and simplicity. It does not talk about compose-ability and modularity. Both of the aforementioned traits have a strange relationship with readability and clarity. More modularity does not necessarily mean less readability in all cases but it certainly changes readability.
> Those who bash SOLID: have you worked on gigantic projects that are in active development for decades?
Yes, and by far the worst part of them is the dependency hell problem of a sufficiently mature front-end. It gets sand in your cornflakes during development, testing, and debugging.
Imagine you're writing a front-end in this mature codebase. What injection bindings do you need to instantiate a FooUIWidget, which contains a BarUIWidget and BazUIWidget, and a few new data types, relevant to the business logic of FooFeature?
Who the fuck knows! You have a rabbit's nest of nested dependencies, you have no idea what part of the system owns which data change, or what cascading effects that data change has. Oh, and when you decide to move FooUIWidget out of ParentUIWidget into UncleUIWidget, good luck figuring out which dependencies it needs, which need to be removed from Parent, which need to be added to Uncle, which need to have alternative bindings added (Because Uncle already provides them, but they are not what Foo needs - your code compiles, and gets no run-time Dependency Injection errors, but your values are silently bound wrong behind the scenes.[1])
Unless, of course, you do something sensible, and instead of having each bit of your system depend on 20 things provided by dependency injection, just build the bloody thing right the first time, by using event listeners and MVVM.
[1] Oh, and of course, neither your compiler, nor your DI framework is mathematically capable of telling you that half of the dependencies you're providing for Parent are no longer used for anything. Go get your coal miner's hard-hat, finish up your will, sign the waiver about black lung, and go delving through your dependencies.
Front-end (web?) is a more narrow domain than what I was speaking to. It sounds like you are pointing out specific issues that you encountered when working on a particular project.
How is SOLID responsible for these issues? The acronym represents guidelines.
What you are describing sounds awful. IoC can get nasty when developers are inexperienced and off-the-leash.
Keep in mind that everyone is ignorant. We all have different experiences with different technologies on different codebases.
> Front-end (web?) is a more narrow domain than what I was speaking to.
It's the domain where proper architecture matters the most, because it's hard to get it right.
> It sounds like you are pointing out specific issues that you encountered when working on a particular project.
If by 'particular project', you mean every single FE project that I've worked on, that made unopinionated use of dependency injection, sure.
> How is SOLID responsible for these issues?
'LI' doesn't do any value add for these problems (you don't use all that much inheritance, or define very many interfaces, when working on front-ends), and 'D' is actively harmful, because it paves the road to dependency injection. I find posting events to a bus a lot easier to deal with than dealing with a spaghetti of objects interacting with injected dependencies.
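For comparison, a bare-bones sketch of the event-bus style (hand-rolled, not any particular framework): widgets publish and subscribe by event name instead of receiving a graph of injected dependencies.

    type Handler = (payload: unknown) => void;

    class EventBus {
      private handlers = new Map<string, Handler[]>();
      on(event: string, handler: Handler): void {
        const list = this.handlers.get(event) ?? [];
        list.push(handler);
        this.handlers.set(event, list);
      }
      emit(event: string, payload?: unknown): void {
        for (const h of this.handlers.get(event) ?? []) h(payload);
      }
    }

    const bus = new EventBus();
    bus.on("customer-selected", id => console.log("widget renders customer", id));
    bus.emit("customer-selected", 42);   // no binding graph to trace, just listeners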
> [1] Oh, and of course, neither your compiler, nor your DI framework is mathematically capable of telling you that half of the dependencies you're providing for Parent are no longer used for anything. Go get your coal miner's hard-hat, finish up your will, sign the waiver about black lung, and go delving through your dependencies.
Yes they are?
If it's not referenced, it's not needed. That's pretty straightforward?
The compiler can tell if something isn't referenced - but it can't tell if a provider that goes into a DI framework is never invoked.
The DI framework can tell (at run-time) that you're asking for something that is missing a provider. It, quite obviously can't tell (at run-time) that you're never going to ask for something in the future.
> The compiler can tell if something isn't referenced - but it can't tell if a provider that goes into a DI framework is never invoked.
It sort of can, though. It depends on the circumstance. If there is an interface with one implementation, or even multiple implementations, and that interface isn't referenced anywhere, nor are any of its references, then you can reason that those dependencies might be provided to the DI container but will never be requested, as they can't be. In that case - delete them.
In the case where you have one interface which has multiple implementations and the interface is referenced, I agree. Nothing will tell you if there is one implementation sitting there entirely unused forever.
If you wanted to solve that problem you probably could. In practice I don't find it a big issue.
1. The interface may not be referenced within the particular scope of an injector. Scopes (and the modules that make up an injector) are determined at run-time, so the compiler has no idea whether or not the injector that generates ParentWidget needs FooModule, or not. As long as an object sharing an interface with something that FooModule produces is injected anywhere else in your application, for any reason, you can't statically figure out that you should remove it from the injector that creates ParentWidget.
2. The interface is referenced, the implementations might not even be bound to it, depending on run-time conditions.
Even trivially scoped dependency injection is a fantastic way to make it impossible for your compiler, and very hard for a human, to reason about your dependencies.
The issue is that in Go speak, "clever" means any programming language technique that was invented after the 1970's. We are fortunate that the creators of Go felt comfortable with structured programming, or otherwise they might have felt that function calls with their compiler maintained stacks and local variables were "clever" and that we should all use goto statements since those are much clearer as to what is actually happening.
The difficulty of giving examples to illustrate what "clarity" means really shows in this article.
For the most part I evaluate software clarity by how many times I had to hit "goto definition" to see what was actually happening. But this takes us away from what the author was attempting to say. In my opinion, 95% of clarity comes down to writing good abstractions, and it is next to impossible to articulate what a good abstraction is.
Because "good", just like "clear", is not universal. But we can actually estimate how good abstractions can be in specific circumstances for specific users if we think of them in terms of familiarity, simplicity, consistency, flexibility and universality.
There was that recent post about innovation being a limited resource. Clever is a limited resource. Save your clever for good, insightful architecture, or for when you really need the algorithmic chops. Don't expend it just to impress your coworkers.
And for the love of god when you do actually need to be clever, document the hell out of why and what the trade offs/benefits were. The first time I had to be clever to get performance I put a nearly half screen comment of why the hell I did it.
Another trick to manage cleverness (or even just messiness) is to hide the code in a well-named function, so readers can understand what it's doing in the context of the rest of the code and only have to delve in when they need to.
Move the code that should be clever to a separate function, then add the clever in a single commit.
People tend to tolerate clever code that is out in leaf functions they don’t have to step through, and knowing they could revert the change makes them tolerate the clever longer.
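A tiny example of that (mine, not the parent's): the call site reads plainly, and the bit trick is isolated behind the name with a comment explaining it.

    // Clever bit, hidden here: a power of two has exactly one bit set,
    // so n & (n - 1) clears it to zero (works for 32-bit positive integers).
    function isPowerOfTwo(n: number): boolean {
      return n > 0 && (n & (n - 1)) === 0;
    }

    if (isPowerOfTwo(1024)) {
      console.log("bucket size ok"); // readers here only need the name, not the trick
    }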
My favorite recently is a class named CustomerCommandMapEmbelisherConverter.
It's not a good name. Nor is it concise, informative or anything vaguely useful.
But I get the strong feeling it comes from someone spouting off a line along the lines of "we need to name things for what they do" and then someone else just coming up with that.
In your case I'm going to guess doBillCustomer() is about 2.5k lines long with a copy of its entire logic duplicated and it branches based on annual or monthly billing and subtle bugs have been fixed in one implementation but not the other and now they are diverged such that they are only 93% the same but that means all bets are off. There are 15 levels of nesting and it interacts with at least 5 external systems during all of that.
> Clever is a limited resource. Save your clever for good, insightful architecture
I'd sooner say clever is a skill, that gets better with practice. Compare it to solving tricky math problems - the more you solve them, the more clever you 'expend', the better you get.
I'd sooner say clever is a skill, that gets better with practice.
Exactly! However, keep in mind the difference between rehearsal and performance, and act according to the cost/benefit. It's very analogous to what stand-up comedians do. They do a form of practice, where they try material out with friends in private. There's another level of practice when they're on the road, but in small, obscure venues. Those are the times when they go "courageous," take risks, and try new things out. It's a different matter entirely, when it's their big HBO special filmed in some huge famous theater. That filmed show is going to be a permanent record which affects their reputation for years after. Think on this, when coworkers check code into production. Code in production might be executed and read years down the road.
Some of A, some of B. But there's plenty of ways to expend clever that don't make the code harder to read as well (clever architectures that aren't obvious before you create them but once they are built are still readable). Clever tricks that require being clever to read as well as to write are the dangerous ones.
Clever tricks that require being clever to read as well as to write are the dangerous ones.
There's a level of clever, where things seem complex and abstruse on the surface. There's another level of clever, where things seem clear and simple on the surface, but deep insight went into making things that way. (Then there's a level of faked deep-clever that relies on "automagic," but which isn't as clever as it seemed on the surface and costs a ton of extra debugging time.)
Over the years, I have encountered dozens of programmers in their 20's and 30's who seem to prioritize impressing fellow programmers over the clarity of the code base as a whole. In fact, I'd say there's something about programming education which seems to produce these attitudes.
Let us say "clever" is problematic when "hard to understand": complex and unintuitive. (The best "clever" is simple and obvious).
The issue is whether it is sufficiently general to become a standard technique.
If so, you're right. Familiarity with it makes you "cleverer", as it becomes intuitive, and less complex (as you push details down into long-term memory).
But, in that case, IMHO, it's even cleverer to "turn over the detail to the machine", i.e. create an abstraction, to hide the detail.
> There need be no real danger of it ever becoming a drudge, for any processes that are quite mechanical may be turned over to the machine itself. https://wikiquote.org/wiki/Alan_Turing
I'm not making any claim regarding the relation between clever and clear/understandable code. Just that writing clever code, however defined, doesn't expend much of a cleverness resource - if anything, it strengthens it.
That is, looking at each piece of clever code in isolation. If the original claim was meant more that a code base has a limit to how many clever tricks it can contain before it becomes unmaintainable, I'd be more inclined to agree.
Just that writing clever code, however defined, doesn't expend much of a cleverness resource - if anything, it strengthens it.
Walking strengthens the legs and increases endurance. However, no Roman Legionnaire would command his men, as if more marching would only increase capability, and they could therefore march forever and as much as they want. Instead, it's best to reserve the fast marches for when a goal is attainable which gives a tactical or strategic advantage.
There's only 24 hours a day to be clever, and there's only a limited number of hours per day a given person can muster the concentration to be clever.
> That is, looking at each piece of clever code in isolation.
Which only applies to an isolated problem, as in a coding interview. In a programming project or a startup, it's more like a military campaign, where there will be many, many interrelated problems over many years.
I think one reason people like to write supposedly clever code, especially in software development, is that many developers want acknowledgement for what they produce. Hearing someone say "Oh, this is a very clever implementation" satisfies that inherent need to be recognized. I haven't heard (particularly in corporate environments) praise along the lines of "wow, this was a very clear and simple implementation" trump what managers and peers deem superior once the word "clever" is attached.
I've challenged quite a lot of implementations where understanding a piece of functionality required the developer to jump between more than 23 files across 8 different projects to follow a very domain-specific feature. Splitting code into small independent parts introduces simplicity if and only if you are reading each part by itself; once you layer it all together to get the functionality it delivers and it becomes a tangled web of code, that clever solution was not really clever after all.
I have an intuition about library design that I’ve been slowly trying to formulate into a set of guidelines. I have extremely high standards in this area and not being able to state them concretely makes communicating them a struggle.
One of the ways I complain about particularly bad decomposition (the sort of practices that lead to parodies like Enterprise FizzBuzz) is the ridiculousness of stacktraces for errors in these systems.
We tell people to use delegation, but many have trouble differentiating delegation from indirection. You know things have gotten particularly bad when you have traces with the same sequence of three or more functions appearing three times. Debugging this is a nightmare. It's literally a maze of logic. This type of code has to be memorized to be understood, which turns a saner person's attempts to refactor it into an existential threat: moving things around to be discoverable and debuggable comes at a cost to the people who already memorized it.
There is also DAMP vs DRY and “desertification” of code, which is related to the good versus bad indirection problem.
When you get a prolific “clever” person who suffers from these problems, the whole team suffers with them (which is why I need a new job...).
Someone above mentioned flame graphs, which are a visualization of the calls in a system, typically used to show where the CPU spends its time. Thinking about this thread, I now want to look into using them as a measure of where the reader spends theirs.
My overall philosophy on code is that we should use our best days to protect ourselves from our worst days. I expend most of my clever on trying to make things look easy, which is a bit of a challenge come review time because one of the hallmarks of really clever reasoning is that people react by saying things like, “well of course it works that way”.
"Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?" - Brian Kernighan, "The Elements of Programming Style"
Writing "for (int i=0; i<limit; i+=step) {}" is easy. Debugging the off-by-one error requires that you understand the alignment issue you ignored in the first place. That would make debugging harder, not because debugging is inherently hard, but because that's the place you're forced to deal with the hard bit you skipped earlier. I wonder how often that applies, compared to well-understood-but-implementation-mistake code?
> Debugging code is more difficult than writing it.
It's not. It's just possible not to know how to do it well, since it's something you have to experiment with: learning various approaches, instrumentation, and so on.
I'm interested to hear what your rationale for this is.
When you are writing code, you generally know what it is you are trying to achieve. When debugging code, you're frequently trying to find out why a problem is happening in the first place; often in code that someone has written or that you wrote months or even years ago.
I'm not saying that debugging isn't a skill you can learn, but it's a superset of writing code, so it's by definition harder.
I disagree that debugging is a superset, it's a different skill. It's possible not to know how to design and implement an algorithm, but know how to dig into somebody else's implementation and vice versa.
But being harder is not even about the skills themselves; it's about the mental effort it takes to do something. And designing and implementing things is certainly much, much harder than digging into something already designed and implemented.
My rule of thumb has always been "the amount of comments I leave is directly proportional to how clever my code is".
Code that is trivial really needs no elaboration, but occasionally I feel like I gotta go crazy with a bunch of hashmaps of lambdas and all that jazz, and I don't think that there's inherently anything wrong with that.
However, when I do that, I make sure I document it like crazy with comments, so that when I have to look at the code two weeks later, I at least can figure out what I was doing.
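For what it's worth, a small Go sketch (mine, not the parent's code) of the "hashmaps of lambdas" style where the comments end up carrying their weight:

    package main

    import (
        "fmt"
        "strings"
    )

    // ops dispatches by name through a map of functions. Denser than a switch,
    // so each non-obvious entry gets a comment explaining what it really does.
    var ops = map[string]func(string) string{
        "upper": strings.ToUpper,
        // reverse flips runes rather than bytes, so multi-byte characters survive.
        "reverse": func(s string) string {
            r := []rune(s)
            for i, j := 0, len(r)-1; i < j; i, j = i+1, j-1 {
                r[i], r[j] = r[j], r[i]
            }
            return string(r)
        },
    }

    func main() {
        fmt.Println(ops["upper"]("clever"))   // CLEVER
        fmt.Println(ops["reverse"]("clever")) // revelc
    }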
I will agree with that but only if you put air quotes around “clever”.
There is a point beyond which accurate documentation is more difficult than improving the code to negate some of that documentation. That makes the code cleverer still (without air quotes). This is not far off from Antoine de Saint-Exupéry's comment that perfection is achieved when there is nothing left to take away.
Have to really disagree with the "comp" function example. No need to have "else" statements. The one with early returns and only "ifs" is succinct and good.
Also, what is up with comparator functions being used this way in most articles nowadays? If I am not mistaken, "return a-b" is the much better solution - and don't say that it is considered too clever :)
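For readers wondering why "return a-b" sometimes gets called too clever: on fixed-width integers the subtraction can overflow and flip the sign. A quick Go illustration (mine, not from the article):

    package main

    import "fmt"

    // compSubtract is the terse version: fine for small values, but for 32-bit
    // ints the subtraction can wrap around and report the wrong order.
    func compSubtract(a, b int32) int32 {
        return a - b
    }

    // compExplicit spells out the three cases; longer, but correct for all inputs.
    func compExplicit(a, b int32) int {
        switch {
        case a < b:
            return -1
        case a > b:
            return 1
        default:
            return 0
        }
    }

    func main() {
        a, b := int32(2_000_000_000), int32(-2_000_000_000)
        fmt.Println(compSubtract(a, b)) // negative because of overflow, even though a > b
        fmt.Println(compExplicit(a, b)) // 1, as expected
    }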
The author states it's just an example and also says
> This is reasonable when you’re dealing with functions which fit on a slide, but in the real world complicated functions– the ones we’re paid for our expertise to maintain–are rarely slide sized, and their conditions and bodies are rarely simple.
I agree with the sentiment of the article, though I don't really like the example given. To me, using a switch statement instead of if-else-if is not any clearer.
Code is written and read by humans, therefore it should be clear and concise.
Cleverness should be reserved for constrained situations like performance (fast inverse square root [1] comes to mind), and comments explaining the cleverness are important.
I should note [1] did a terrible job at commenting.
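For reference, the trick in [1] fits in a handful of lines once ported to Go, and it is a good test case for the point about comments; this is my transcription, not the original Quake III source:

    package main

    import (
        "fmt"
        "math"
    )

    // fastInvSqrt approximates 1/sqrt(x). Reinterpreting the float's bits as an
    // integer lets a shift and a magic constant produce a rough first guess;
    // one Newton-Raphson step then sharpens it.
    func fastInvSqrt(x float32) float32 {
        i := math.Float32bits(x)
        i = 0x5f3759df - (i >> 1) // the famous magic constant
        y := math.Float32frombits(i)
        y *= 1.5 - 0.5*x*y*y // single refinement step
        return y
    }

    func main() {
        fmt.Println(fastInvSqrt(4))   // roughly 0.499
        fmt.Println(1 / math.Sqrt(4)) // 0.5, for comparison
    }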
I originally wrote O(n^4) vs. O(n^2), which is something that actually happened to me two days ago, but thought I'd go more basic. I should've gone with the real-world example.
I don't disagree though. Even though in much of my work performance is usually not a concern, I still run into plenty of situations where it really is. But I run into more situations where I probably spend more time optimizing when I don't need to. YMMV, obviously.
Ha, I spend a decent amount of time trying to explain to clients how it's worth their while to decrease a page's load time from 20+ seconds to < 5 in just a few hours of work...
If you have only a final “else” instead of a bare “return” in a function that returns a value, you are making it worse. Now when I read the code, my first instinct is “seems to be a bug, undefined behavior” and I have to read more carefully.
This is a terrible example and a code change I would never approve. The clarity-over-cleverness goal is good but not with these kinds of cases.
Creating anything is a craft, and creating software programs is no different. While everyone should strive to learn how to write programs well so the intent isn't obfuscated, it ultimately boils down to two factors: the programmer's experience and talent. Most programs, like most works of art, will be utter crap and nonsense, as most artists are - with rare notable exceptions. This is why I heavily support frameworks and a prescriptive style of programming, or "opinionated" systems as some would call them. They are usually invented by people much smarter than the average Joe and ultimately generate better long-term results. It would benefit our productivity much more if we invested effort into translating the insights of these brilliant minds into compiler features, so the compiler checks for style as well, not just "spelling". We need Grammarly for code.
This put me in mind of a passage from the (excellent) 1982 book Inside the Soviet Army, by the Soviet defector Vladimir Rezun (which can be read in English in its entirety here: http://militera.lib.ru/research/suvorov12/index.html), which explained why ammunition for Soviet weapons hadn't been standardized across a common set of calibers:
The calibre of the standard Soviet infantry weapon is 7.62mm. In 1930, a 7.62mm `TT' pistol was brought into service, in addition to the existing rifles and machine-guns of this calibre. Although their calibre is the same, the rounds for this pistol cannot, of course, be used in either rifles or machine-guns.
In wartime, when everything is collapsing, when whole Armies and Groups of Armies find themselves encircled, when Guderian and his tank Army are charging around behind your own lines, when one division is fighting to the death for a small patch of ground, and others are taking to their heels at the first shot, when deafened switchboard operators, who have not slept for several nights, have to shout someone else's incomprehensible orders into telephones-in this sort of situation absolutely anything can happen. Imagine that, at a moment such as this, a division receives ten truckloads of 7.62mm cartridges. Suddenly, to his horror, the commander realises that the consignment consists entirely of pistol ammunition. There is nothing for his division's thousands of rifles and machine-guns and a quite unbelievable amount of ammunition for the few hundred pistols with which his officers are armed.
I do not know whether such a situation actually arose during the war, but once it was over the `TT' pistol-though not at all a bad weapon-was quickly withdrawn from service. The designers were told to produce a pistol with a different calibre. Since then Soviet pistols have all been of 9mm calibre. Why standardise calibres if this could result in fatally dangerous misunderstanding?
Ever since then, each time an entirely new type of projectile has been introduced, it has been given a new calibre...
[West Germany and France] have excellent 120mm mortars and both are working on the development of new 120mm tank guns... [W]hat happens if, tomorrow, middle-aged reservists and students from drama academies have to be mobilised to defend freedom? What then? Every time 120mm shells are needed, one will have to explain that you don't need the type which are used by recoilless guns or those which are fired by mortars, but shells for tank guns. But be careful-there are 120mm shells for rifled tank guns and different 120mm shells for smoothbore tank guns. The guns are different and their shells are different. What happens if a drama student makes a mistake?
The Soviet analysts sit and scratch their heads as they try to understand why it is that Western calibres never alter.
Nice. I said exactly this for the same reason yesterday. In that case it was a thread about Lisp. Here's the comment:
> Even the open source Common Lisp compilers, written by arguably the lispiest of Lispers, don’t have a lot of “cleverness”.
To which I replied my agreement:
> Don't write clever. Write clear.
It's a sentiment that you find attached to Lisp programming style fairly often, although ironically there is a whole lot of barely readable Lisp code out there.
Personally, I think the code is (nearly) worthless crap if someone with skill has to spend as much time parsing it as the writer did writing it.
That switch-case transformation at the end is just syntactic sugar for the equivalent if-else-if chain, so I am not sure it is actually an improvement.
That said, I still agree with the basic idea. An if-else-if chain is easier to reason about than if-return, and representing information with an enum or an algebraic data type can be more robust than using a combination of booleans.
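For readers without the article open, the shape of the transformation being debated is roughly this; a reconstruction in Go, not the article's exact code:

    package main

    import "fmt"

    // ifChain is the if-else-if shape.
    func ifChain(a, b int) int {
        if a > b {
            return a
        } else if b > a {
            return b
        }
        return 0
    }

    // switchForm expresses the same logic as a bare switch: arguably just
    // syntactic sugar, but every case and the default sit in one visual block.
    func switchForm(a, b int) int {
        switch {
        case a > b:
            return a
        case b > a:
            return b
        default:
            return 0
        }
    }

    func main() {
        fmt.Println(ifChain(3, 5), switchForm(3, 5)) // 5 5
    }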
As a fiction writer, this is a strange read. The lesson is learned quite early in prose writing, precisely because written works must be read. Maybe because of this, I've never felt the need to write 'cleverly' in my programming either. Can other prose writers corroborate?
I believe that reducing writing, including programming, to this is reductive.
At its base level, this is missing an important thing: context. Clear to who? Something can be very clear to one person, yet opaque to another. In writing, and in programming, you need to decide which audience that you're talking to, and write something that they will understand.
This often comes up in discussions about jargon. Jargon is a way to increase the density of communication. This is often perceived as a loss of clarity, but the question again is, clarity for who? For two experts, discussing complex things in their field of expertise, jargon can increase clarity, by referring to shared context. Higher bandwidth communication allows for more discussion of more complex topics, because you're not wasting time and mental energy re-explaining things from first principles.
Put another way, there is always some shared context going on; that's what language actually is in the first place. I have used a number of words in writing this comment, but I haven't set out any definitions; that's because I'm assuming that you know English in order to read my comment. If I were trying to communicate to a child, I wouldn't be using all of the words that I'm using here, because it is too complicated for them to comprehend. But, trying to explain the topic of this comment to that child would take much longer, and be much more difficult.
So yeah, that's just one way in which discussions like these tend to frustrate me. Writing is a rich, wonderful thing, that has a huge variety of uses. Pigeon-holing it in this way makes me feel, well, dispirited. Or should I say "sad"...
(I do believe that, for both commercial development of software, as well as commercial development of writing, "keeping things simple" can be important, for various reasons. But not everything we do in life must be in the service of business needs.)
> This often comes up in discussions about jargon. Jargon is a way to increase the density of communication. This is often perceived as a loss of clarity, but the question again is, clarity for who? For two experts, discussing complex things in their field of expertise, jargon can increase clarity, by referring to shared context. Higher bandwidth communication allows for more discussion of more complex topics, because you're not wasting time and mental energy re-explaining things from first principles.
> Put another way, there is always some shared context going on; that's what language actually is in the first place. I have used a number of words in writing this comment, but I haven't set out any definitions; that's because I'm assuming that you know English in order to read my comment. If I were trying to communicate to a child, I wouldn't be using all of the words that I'm using here, because it is too complicated for them to comprehend. But, trying to explain the topic of this comment to that child would take much longer, and be much more difficult.
Thanks for this. I always struggle to articulate it.
Good prose often involves misdirection. Making the reader believe something to be one way, while it was something else all along. Then, in a built up climax, undo the knot in a single pull to blow the reader's mind. That, to me, is the antithesis of the article's argument.
I think the literary quote is "kill your darlings" and I often find myself thinking of that when I write something that impresses me, and I often revert to something simpler when I can.
> The competent programmer is fully aware of the strictly limited size of his own skull; therefore he approaches the programming task in full humility, and among other things he avoids clever tricks like the plague. https://wikiquote.org/wiki/Edsger_W._Dijkstra
> Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it? https://wikiquote.org/wiki/Brian_Kernighan
This is being quoted several times in the thread; but what about the experience of finding that something feels "clever", and then, after a while of using it, finding that it no longer feels clever but instead feels normal?
> Every weightlifter is fully aware of the strictly limited size of his own muscles; therefore he approaches the weight lifting task in full humility and among other things he avoids heavy weights like the plague.
As more people become familiar with a clever trick, it becomes an idiom, which others master. Now it is in the realm of "clear", graduating from the purgatory of clever tricks.
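One Go-flavoured example of that graduation (my example, not the commenter's): using a map with empty-struct values as a set once read as a trick, but it has become a recognized idiom and now scans as clear.

    package main

    import "fmt"

    func main() {
        // map[string]struct{} as a set: zero-byte values, membership via the
        // comma-ok form. Once "clever", now ordinary Go.
        seen := map[string]struct{}{}
        for _, w := range []string{"clear", "clever", "clear"} {
            if _, ok := seen[w]; ok {
                fmt.Println("duplicate:", w)
                continue
            }
            seen[w] = struct{}{}
        }
    }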
Your analogy assumes that the goal of code is to make your code more and more clever over time, just as a weightlifter seeks to lift heavier and heavier. The goal of code, however, is simply to communicate a process to a computer. Or rather, when that process is subject to change over time, to communicate a process to a computer, simply.
Then why aren't programs written on paper, in English? Because that's how people read things back when the A&S quote was written, in 1979. And natural language is still how people read things today, even if on screens. People don't code as if code were primarily for people to read.
> Then why aren't they written on paper, in English?
On the other hand, why aren't programs all written in machine language, in hex or octal? Why invent assembly language? Why invent macro assemblers? Why invent high-level languages?
Programmers are not unique in this regard. Mathematicians and logicians do not write all their dealings in English. They've developed a highly specialized notation for writing compact and precise descriptions of their ideas.
Furthermore, many layfolk might even say that the language of jurisprudence isn't quite English, despite how it looks. The jargons of many fields, like “legalese”, serve the same purpose as mathematical notation, which is itself the same purpose as programming languages: to enable ease, brevity, exactness, and precision in their respective domain-specific communications.
You can see a little of all that in the same preface by Abelson & Sussman, which goes on to say:
These skills are by no means unique to computer programming. … We control complexity by establishing new languages for describing a design, each of which emphasizes particular aspects of the design and deemphasizes others. ¶ Underlying our approach to this subject is our conviction that “computer science” is not a science and that its significance has little to do with computers. The computer revolution is a revolution in the way we think and in the way we express what we think. … Mathematics provides a framework for dealing precisely with notions of “what is.” Computation provides a framework for dealing precisely with notions of “how to.”
> Because that's how people used to read things when the A&S quote was from, in 1979.
Clearly it's not how people always read things back then, as it's not how people always read things now. People read programs, sometimes on screens, sometimes on paper, just like they read mathematical formulas. In some cases, programs have been written on paper in some formal language that hadn't actually been implemented, simply because that language was seen as an effective means to communicate them. We usually identify it as pseudocode, ranging from “pidgin algol” to “plausibly python” to the M-expressions of the early LISP manuals.
M-expressions are still used in the LISP 1.5 manual of late 1962, despite the fact that 2.5 years after the LISP 1 manual, the LISP system was still incapable of reading M-expressions: the programmer had to translate them to S-expressions by hand before entering them. Appendix B of the 1.5 manual gives the code for the interpreter, as well as some rationale:
This appendix is written in mixed M-expressions and English. Its purpose is to describe as closely as possible the actual working of the interpreter and PROG feature.
(It turns out to be possible to get an even closer description with a formal notation for the semantics, as was done with the definition of Standard ML, but such formalism has yet to catch on).
This emphasis on the importance of notation for the exact expression of thoughts and precise description of “ideal objects” is not particularly new, and it certainly predates the invention of the computer:
… I found the inadequacy of language to be an obstacle; no matter how unwieldy the expressions I was ready to accept, I was less and less able, as the relations became more and more complex, to attain the precision that my purpose required. This deficiency led me to the idea of the present ideography. …
I believe that I can best make the relation of my ideography to ordinary language clear if I compare it to that which the microscope has to the eye. Because of the range of its possible uses and the versatility with which it can adapt to the most diverse circumstances, the eye is far superior to the microscope. Considered as an optical instrument, to be sure, it exhibits many imperfections, which ordinarily remain unnoticed only on account of its intimate connection with our mental life. But, as soon as scientific goals demand great sharpness of resolution, the eye proves to be insufficient. The microscope, on the other hand is perfectly suited to precisely such goals, but that is just why it is useless for all others. ¶ This ideography, likewise, is a device invented for certain scientific purposes, and one must not condemn it because it is not suited to others.
(from the preface of «Begriffsschrift» by Gottlob Frege, 1879, translated by Stefan Bauer-Mengelberg).
In 1882, Frege further explained: “My intention was not to represent an abstract logic in formulas, but to express a content through written signs in a more precise and clear way than it is possible to do through words.”
> People don't code as if code was primarily for people to read.
I agree. I am often guilty of this too, although I usually forget about it until I try to read a program I'd written some time ago and discover that it requires some careful study to figure it out.
It's a shame, really, because we should be writing readable code. But after I'd read this statement, I was thinking: how do people code, then? And I was reminded of this little bit from Paul Graham's essay “Being Popular”:
One thing hackers like is brevity. Hackers are lazy, in the same way that mathematicians and modernist architects are lazy: they hate anything extraneous. It would not be far from the truth to say that a hacker about to write a program decides what language to use, at least subconsciously, based on the total number of characters he'll have to type. If this isn't precisely how hackers think, a language designer would do well to act as if it were.
It is a mistake to try to baby the user with long-winded expressions that are meant to resemble English. Cobol is notorious for this flaw. A hacker would consider being asked to write `add x to y giving z` instead of `z = x+y` as something between an insult to his intelligence and a sin against God.
While I generally agree with your sentiments here, and this is a somewhat pedantic response, with regard to the following:
> The goal of code, however, is simply to communicate a process to a computer. Or rather, when that process is subject to change over time, to communicate a process to a computer, simply.
I would tend to disagree. Code is not about communicating with a computer. It is about communicating with humans. The computer does not care how the code is written - it is the humans that have difficulty with it. In a way, programming is the translation of a language that the computer understands into a form that humans can comprehend. Not so much the other way around. In this regard, clever is fine for a computer, but it is not always understandable to a human.
Still would not deploy anything from codegolf to any production environment. Fun, but definitely could undermine understanding. So there is a need for balance in my opinion.
For code that you only use yourself? Maybe do it, but I would guess that after one or two years you would not immediately understand what your former self fabricated.
> The goal of code, however, is simply to communicate a process to a computer
But once that simple process was first communicated to a computer in 1965, what next? More complex processes, surely? And more and more complex processes as the computing power increases?
I worked for a company that asked me not to use hash tables, because no one else at the company knew how to use them. Finding your way around an arbitrary-length array of arbitrary-length arrays was apparently "easier" for them than learning the full capabilities of their chosen language (ColdFusion).
Point being, a little bit of "cleverness" in one place may save a lot of effort down the line. I've also never been too fond of that quote.
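A hypothetical Go sketch of the trade-off (not ColdFusion, and not the actual code in question): the nested-array lookup everyone already understood versus the hash table they were avoiding.

    package main

    import "fmt"

    type row struct{ key, value string }

    // findByScan is the "array of arrays" approach: walk every row until the
    // key turns up. Easy to follow, but O(n) per lookup.
    func findByScan(rows []row, key string) (string, bool) {
        for _, r := range rows {
            if r.key == key {
                return r.value, true
            }
        }
        return "", false
    }

    func main() {
        rows := []row{{"de", "Germany"}, {"fr", "France"}}

        // The "clever" alternative: build a hash table once, after which every
        // lookup is a one-liner and roughly constant time.
        index := make(map[string]string, len(rows))
        for _, r := range rows {
            index[r.key] = r.value
        }

        scanned, _ := findByScan(rows, "fr")
        fmt.Println(scanned, index["fr"]) // France France
    }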
I often cite this quote: "Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?" (B. Kernighan)
> verbiage: excessively lengthy or technical speech or writing.
> synonyms: verboseness, padding, superfluity, redundancy, long-windedness, protractedness, digressiveness, convolution, circuitousness, rambling, meandering; waffling, wittering, "there is plenty of irrelevant verbiage but no real information"
I guess the downvoter didn't like me pointing out that an int is a shitty way to answer the question "which is bigger?" in an article supposedly selling clarity over cute tricks.
> If software cannot be maintained, then it will be rewritten; and that could be the last time your company invests in Go.
Maybe they're on to something there - the golang code bases I've seen are a complete mess because of how "simple" the language is. Hopefully more people start realizing this and moving on to better languages.
In my experience Golang code bases are messy because of two things: OO and CSP. People all too often resort to goroutines, channels, objects and interfaces where a simple side-effect free function could do.
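A toy Go illustration of that point (mine, and deliberately exaggerated): the channel version below does nothing the plain function does not, yet forces the reader to think about concurrency.

    package main

    import "fmt"

    // sum is the boring, side-effect-free version.
    func sum(xs []int) int {
        total := 0
        for _, x := range xs {
            total += x
        }
        return total
    }

    // sumCSP does the same job with a goroutine and a channel. It works, but the
    // concurrency buys nothing here and becomes one more thing to reason about.
    func sumCSP(xs []int) int {
        out := make(chan int)
        go func() {
            total := 0
            for _, x := range xs {
                total += x
            }
            out <- total
        }()
        return <-out
    }

    func main() {
        xs := []int{1, 2, 3, 4}
        fmt.Println(sum(xs), sumCSP(xs)) // 10 10
    }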