Man, I miss making a living as a C programmer. My happiest days as a dev were when my "stack" was Linux, C, Makefiles, and some shell scripts. Only thing I'd change is that source control was svn instead of git.
Sure, there was a lot more to type. Debugging was harder. But there was a beauty to that. A simple mental model. A sense of control.
These days all jobs seem to be sort of online crap. Piles and piles of layers and complexity and heterogeneous frameworks and tools. Being on call. Never being able to truly master anything because everything changes all the time.
Oh man, I feel exactly the same my friend. People who have never gotten into the C world find it frightening, but it's a beautifully simple language loaded with (at times dangerous) power. It's so much closer to the hardware that you can't use a declarative approach easily (which now that I've drunk the functional programming kool-aid, I do love), but in many ways it's actually much simpler to understand. You can also be pretty declarative if you are just smart about breaking things into functions.
I still glance longingly at my dusty paper copy of The Linux Programming Interface (https://nostarch.com/tlpi)
> People who have never gotten into the C world find it frightening...
I got into C over 25 years ago. Didn't find it frightening back then, but I sure do now.
Still use it pretty often for firmware and kernel driver development, but I want to replace it with something safer. Then again, I also use assembler for the same reasons... sometimes C just doesn't cut it for interrupt handlers when every cycle and/or byte of code size counts.
No doubt. It's amazing too how some code that was never expected to be exposed to untrusted/unsanitized data gets refactored into a new spot or called from somewhere else, and fails to sanitize its input, expecting that the callee will do it, or simply forgetting altogether (easy when under pressure to deliver). I coded a pretty bad security hole myself once by doing something like that, and I am a security specialist who knows what to look for lol!
I love C, but it really is a security nightmare full of footguns.
> Still use it pretty often for firmware and kernel driver development, but I want to replace it with something safer. Then again, I also use assembler for the same reasons... sometimes C just doesn't cut it for interrupt handlers when every cycle and/or byte of code size counts.
Of course there are those who claim it is actually frightfully complex, when it is those same people who are re-interpreting the standards to actually create that very complexity.
Not sure if this is what you meant, but a lot of the stuff they added to C in the latest standards really turn me off. C89 has a special place in my heart.
I still only code in C. I didn't start in the stone ages, but I love it. C and asm give me the feeling I'm programming my computer, and really, for low-level stuff, trying to find a good alternative which is as good and simple (yes, simple :D C/asm is just pure logic!) is difficult for me.
I don't code professionally though, since I can't for the life of me find a job in C which doesn't already have tons of guys like you guys, with a century of experience in the language, lining up to take it :D
Honestly, if you use one of the available GCs out there (like Boehm's), give up on static typing, and heavily rely on function pointers, you can write C similar to how you'd write something like Haskell. Yes, it won't be as fast as the most idiomatic C, and you can't really build an operating system if you have a GC, but really, how often do most of us actually write code that can't use a GC these days? Even with a GC, it'll still probably perform better than 90% of languages.
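For what it's worth, here's a minimal sketch of the style I mean (names made up for illustration), assuming the Boehm GC headers are installed and the program is linked with -lgc:

    #include <gc.h>      /* Boehm GC: GC_INIT, GC_MALLOC */
    #include <stdio.h>

    /* A generic "map" built from a function pointer plus GC_MALLOC, so the
       result array is collected automatically instead of being freed by hand. */
    static int *map_int(int (*f)(int), const int *xs, size_t n) {
        int *ys = GC_MALLOC(n * sizeof *ys);
        for (size_t i = 0; i < n; i++)
            ys[i] = f(xs[i]);
        return ys;
    }

    static int square(int x) { return x * x; }

    int main(void) {
        GC_INIT();
        int xs[] = {1, 2, 3, 4};
        int *ys = map_int(square, xs, 4);   /* never freed explicitly */
        for (size_t i = 0; i < 4; i++)
            printf("%d ", ys[i]);
        putchar('\n');
        return 0;
    }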
I'm not a fan of adding GC to C. I've had my fair share of stress caused by GC issues. It's great 99% of the time, but when you run into performance issues caused by the GC it becomes a very, very leaky abstraction.
I'm OK with it not being '70s-era C, to be honest; even with the extra stuff, I find the language fairly simple to pick up compared to C++.
I haven't had any problem with performance with the Boehm GC personally, though what I've used C for is non-real-time video processing stuff. I found I typically got better throughput using Boehm than I did when I was manually managing my memory, but for what I was using it for, a small pause wasn't really a problem as long as the throughput wasn't affected.
If you are doing that, you might as well use Go and get green threads and little or no undefined behaviour in a modern actively developed language for free. Go is practically C without the undefined behaviours.
If you have a GC and no static typing, is it better than other functional languages (many of which have benefits of both GC and static typing)? Not a rhetorical question, I have never used a GC with C.
For what I was doing, which was video processing stuff, I'm not sure that it was faster than if I had written it in Haskell. For this job, I was required to use C or C++, and so I never attempted to port it to Haskell or OCaml or something. I'm more of a wannabe-academic and certainly not a systems programmer, so I did what I could.
If I were to guess, the C version would have a bit better performance in a single-threaded context due to the fact that I would occasionally use tricks to avoid reallocations/frees, and the Boehm GC is opt-in, so when I was reasonably certain I could handle the memory correctly, I would do it myself, minimizing how much was actually being done by the GC.
I do feel that a static functional language like Haskell might perform well (maybe even better-on-the-average-case?) in a multi-threading context, since I personally find dealing with locks in C (and C++) to be very difficult to do correctly, so utilization of some of the cool tricks in Haskell to avoid manual locking might benefit. I was too much of a coward to use threads and locks much at that job, so I haven't had much of a chance to test it.
Each time a new "hotness" language, framework, pattern, or dev/devops process comes out you have a rush of professional gurus working hard to build their business (e.g., speaking fees/books/blogging/training) by teaching that this is the new true way.
In the late 90's I began to notice how the new kids kept advocating for the newest so-called best practices to do things that the bad-old practices could handle just fine. Seeing the writing on the wall, I dipped out of the game a few years later.
Unfortunately, the unhelpful tech churn has worsened. Similarly, the quality of the product produced has worsened, or at best, not improved.
Note, hyper-scale advertisement services, global human tracking, and digital Skinner boxes are not an improvement to anything.
Meanwhile 50 year-old programmers with deep general development knowledge cannot find jobs. I guess it is easier for young founders to justify using frothy tech if there are fewer old-timers around to suggest otherwise.
There's a lot of BS, but there are some real improvements as well. Many innovations that are considered best practices today actually came out in the 1990s. The Java programming language became incredibly popular, being a machine-independent, memory-safe language and system that could handle concurrency and networking out of the box -- the Go of its day, to some extent (and indeed it had a whole lot in common with Go's predecessors, Alef and Limbo). Functional programming was popularized around that time. The C++ standard introduced us to the notion of zero-overhead abstractions combined with strong static checking, which Rust is refining today.
And of course, the Web became widespread around that time, as well. Whatever you might think about "hyper-scale advertisement services, global human tracking, and digital Skinner boxes", Amazon first became prominent in the dotcom era, and it's quite massive today.
I have been reacquainting myself with some so-called best practices as I muddle through my recent side-projects.
While some newer languages are interesting, the dev stacks today are a mess.
Also, while we have safe pointers and GC everywhere, the lack of technical discipline/professionalism in the industry is worse than ever. I recognize that the C-suite and VCs share in the blame for this, but devs are the ones building things and evangelizing the newest hotness that comes onto the scene.
But I do have to remind myself that compared to traditional engineering tracks, software engineering is still in its infancy.
> ...the lack of technical discipline/professionalism in the industry is worse than ever.
What concerns me quite a bit is the overt opposition we're now seeing to professionalism in the industry. The whole 'post-meritocracy' shtick, regardless of the best intention of the "useful innocents" who came up with that particular phrasing, is really a way of saying: "Professionalism? What professionalism? There's no such thing, we know better than that! Brogrammers r00lz FTW!" Again, this is clearly not what the proponents were seeking-- but in some sense, it's what the phrase actually means, out there in the real world.
This is one reason I wonder whether there is room in the world for a better C. Low complexity programming languages with a simple machine mental model along the lines of Go (or perhaps in future, Zig and Jai?) for doing systems programming, with a strong static type system, and a rock solid build system. Early in my career I did a lot of bare metal and embedded systems programming and the one thing I miss about C is the predictable assembly output. I primarily use Rust for this purpose right now but I wonder if there's a place for something simpler for doing really low level stuff (i.e. programming hardware directly, device drivers) that's better than C.
> I wonder if there's a place for something simpler for doing really low level stuff (i.e. programming hardware directly, device drivers) that's better than C.
Maybe a cleaned-up Pascal would do the trick? It was a great teaching language back when I was a student. Low complexity, strong static typing, compiled language, no GC, pretty fast. No pointer arithmetic, harder to shoot oneself in the foot, but still easy access to pointers and easy ability to manage memory.
edit: What I meant by "cleaned-up" Pascal was addressing some of Kernighan's criticisms as seen in https://www.lysator.liu.se/c/bwk-on-pascal.html
(also, the Pascal syntax is a bit bloated)
Freepascal pretty much is the cleaned up version you describe. Fast, free, multiplatform, and just plain sensible. Overdue for a resurgence of use. Maybe the foundation in charge could rename it Cpascal and it would suddenly lift in popularity.
Pascal was a language I learnt in 1982, and I love it for its elegance. The only thing I dislike, and it's still around, is the begin... end, and only because I'm a lazy typist and lazy reader. For me it's hard to find begin-end blocks... harder than looking for the stupid squiggles used in other languages. Go figure.
Pssst, come over to embedded. We’re in C all day long using kilobytes not gigabytes.
I don’t need to chase the latest framework... but I’m also on my own for almost everything. Pros and Cons, but I wouldn’t leave it for anything web related.
The Rust Evangelism Strike Force is gunning for the embedded space, too. As soon as the tooling becomes widespread enough to support the most-used microcontrollers, C will be a niche language even in embedded.
Yea... We'll see. Rust has had quite a while to make an attempt at embedded, and unless you count drivers for a couple of STMs and a few other (mostly outdated) chips, I haven't seen a single thing that says progress.
I'd like to see Rust happen, because I don't see C++ as the embedded future. At least Rust had the good sense to leave garbage collection out.
However... I'll believe it when I see it. And by that, I mean when STM and Nordic and NXP and others are pushing out their own Rust device support files on their sites. When Keil or IAR or Rowley or Atollic pushes out a full featured IDE that uses a Rust compiler. When Rust is not only supporting the latest run of chips but there is a way to debug them with code-overlay. But until then...
mbed is a toy, at the same professional level as Arduino, although ARM does a better job of marketing it as something much more serious.
However, AutoSAR is a fair point. There are a lot of hands in the pot on that and them coming to an agreement to use C++ is rather surprising - counterpoint that Automotive ECU code is absolutely HORRIBLE right now and no one seems to know it. I work in Auto and all the mfgs have moved to a fix in the field model. It’s really bad, and I wouldn’t touch a first year new model for any reason.
So while AutoSAR C++ is interesting, this isn’t exactly the group that I want to model myself on.
The biggest area of progress has been using those devices as a test bed to stabilize the features needed for embedded work generally. You can now develop for those devices on stable Rust, which is a huge advantage.
> ...But there was a beauty to that. A simple mental model. A sense of control.
There's something to that, I think. And it's still an open question whether the newer languages that are in development at present will succeed in bringing some of that underlying "beauty" back. It's an important issue if we want lower-level programming to be more widely accessible.
(To clarify - the underlying machine model of C is beautifully simple, and newer low-level languages tend to be built on something very much like it anyway. The C language itself is frightfully complex - albeit less so than that other language in the C family that's often conflated with it.)
I wonder if, now that processor speeds are increasing at a much slower rate, there will be a return to that lower level programming to focus on speed improvements through more efficient code.
People said it about assembly when it came out. The actual (binary or octal) machine codes gave you an intimacy with the hardware that assembly took away. (People actually said that.)
If you like "intimacy with hardware" you can drop down one level below machine code and design a processor on an FPGA with Verilog. Or, to go even deeper, design a custom circuit with SPICE.
C might be the optimal point on the abstraction ladder as far as the trade off between (exposed) complexity and control.
This is why I feel that the solution to most of the modern ailments in development is to just put Lua everywhere. All the great stuff of C, and all the new-school shit too.
If you do this, it'll seem soon enough that the Javascript nightmare was just a dream. Takes balls though.
I do the same thing with a twist [0], since I'm not very fond of some of Lua's design choices. Lack of a decent type system, the table mess, etc. And I think Forth and Lisp make better glue languages.
Can't stop smiling these days when I see people fighting over which language will rule them all. I spent tens of years searching for that language myself. Time I would rather have spent solving real problems using the best tools.
It's not Lua specifically addressing modern ailments, it's the attitude that taking full control to put the same common codebase on as many of the target platforms as possible can be profitable, in light of the vendor mess which is, presumably, what we're talking about here. It can be a very disturbing thing to realise what a few tweaks here and there to package.json might do to one's love life.
Lua is a great, easy-to-use, easy-to-apply language -- with a healthy framework ecosystem -- and it is very easy to put it to use in a legacy codebase, since it's C-based, and we all know that C still makes the world go around. However, it's not the fact of Lua, but the fact of 'put our own VM everywhere' that wins, imho.
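To make the 'put our own VM everywhere' point concrete, here's a rough sketch of how little C it takes to embed a Lua VM in a host program, assuming Lua 5.x headers are available and the binary is linked against the Lua library:

    #include <lua.h>
    #include <lualib.h>
    #include <lauxlib.h>
    #include <stdio.h>

    int main(void) {
        lua_State *L = luaL_newstate();   /* one VM instance, owned by the host */
        luaL_openlibs(L);                 /* load the standard Lua libraries */

        /* Scripts can live in files, resources, or strings shipped with the app. */
        if (luaL_dostring(L, "print('hello from the embedded VM, ' .. _VERSION)")) {
            fprintf(stderr, "lua error: %s\n", lua_tostring(L, -1));
            lua_close(L);
            return 1;
        }

        lua_close(L);
        return 0;
    }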
The GC and goroutines make it more abstracted from the metal than it looks, while lacking the conveniences of the zero-cost abstractions available in lower-level languages like Rust. The only upside of taking the opposite side of the Rust tradeoff is compile time.
> The only upside of taking the opposite side of the Rust tradeoff is compile time.
Not so, there are a few specialized domains where having tracing GC easily available is genuinely useful. (Pretty much anything having to do with general graphs - the sort of stuff you'd traditionally use LISP for!) Go is a great fit for these use cases.
Funny. I looked through the Go book and walked away feeling justified in being diligent with C. Maybe that's because I dislike jumping through other people's hoops when I know better.
Same here. I worked quite a lot in C on Unix and some on Windows (including working on a successful database middleware product on Windows, which was used in some VB+Sybase/Oracle client-server projects) before I got into other languages like Java, Ruby (and Rails) and now Python for quite a while. Great fun working in C, although of course frustrating at times too, debugging weird issues.

Also, somehow, I never found working with pointers (at least for simple to intermediate uses of them) confusing, like some do. (I once taught a C programming class at a large public sector company; while I was explaining the int main(int argc, char **argv) stuff, and the pointer-to-pointer stuff, the head of their IT dept. who was in the class said "now it is 'overhead transmission' :)".) Maybe I didn't have trouble with pointers because I had some hobbyist assembly language programming background from earlier, including learning about different addressing modes, such as direct, indirect, indexed indirect (or vice versa) (on the 6502 processor), etc., plus I used to read a lot on my own about microprocessors, computer architecture, and suchlike related areas, even though they were not directly relevant to my higher-level application programming work (hint hint, to kids learning computers today). Working close to the machine is good.

Also, a bit off topic, I kind of independently discovered the concept of refactoring. I was in a nice relaxed state one afternoon, at work, after a good lunch (but not a heavy one, so not drowsy), working on some Unix C program (likely a CLI one), and over a period of time, I noticed that I had been making small (behavior-preserving) incremental improvements to the code, one after another. In a very relaxed way, without any tension, taking my time for each small change, so I was fairly sure that each small refactoring change did not change the meaning or correctness of the program. Thought that was a good technique after I realized that I had been doing it. Unfortunately I did not keep doing it with other code. It was only some years later that I read about the term "refactoring" and about Martin Fowler's book on the subject. I'm sure others must have discovered the concept similarly. Anyway, interesting incident.
>In a very relaxed way, without any tension, taking my time for each small change, so I was fairly sure that each small refactoring change did not change the meaning or correctness of the program.
Unlike the rushed, tense way in which some (many?) projects are conducted these days (and plenty earlier too), with people playing whack-a-mole with bugs introduced by said rush, "because we have to ship last week".
Well said. And often buggy or rapidly changing (or both) dependencies too - because the authors are keen on showing they are keeping up with the Joneses (er, times) and so their project is the latest and greatest - never mind if stuff doesn't work, the next version will be even more awesome!!! [1]
[1] (Frequent use of) exclamation marks is obligatory or you're not a (team) player, go home. /s
I've gotten into several arguments with people about why I like C more than C++, and that's in no small part because I find C to be a lot simpler than C++. This is an example; I feel you can learn enough C to actually start doing stuff from a 45-page manual, whereas with C++ I did it for about a year and never really felt I had a real handle on all the idioms.
I know there are probably a lot of objective reasons to why C++ is safer or something, but I've always felt that if you embrace GCC's extensions, and glibc (and use of Boehm's GC for parts that need to be a bit safer), you end up with a language that's simple to learn, and has a lot of the features I actually use in other languages.
That said, this is coming from a very-much-not systems programmer, and I mostly do Lispey stuff nowadays.
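As one concrete illustration of the kind of GCC extension I mean, the cleanup attribute (supported by GCC and Clang) gives you scope-based resource release without leaving C; a small, hypothetical sketch:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Called automatically when the annotated variable leaves scope. */
    static void free_charp(char **p) { free(*p); }

    int main(void) {
        /* GCC/Clang extension: a lightweight, RAII-ish pattern in plain C. */
        __attribute__((cleanup(free_charp))) char *buf = malloc(64);
        if (!buf)
            return 1;
        strcpy(buf, "hello from GCC-extended C");
        puts(buf);
        return 0;   /* buf is released here without an explicit free() */
    }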
> whereas with C++ I did it for about a year and never really felt I had a real handle on all the idioms.
With all the Modern C++ changes happening, it seems like the standards committee is actively making it harder for people who want to understand the language completely. I much prefer C to C++ for the same reason, although I think some features from C++ are genuinely useful, like classes.
I found that when I wrote C++, I almost exclusively ended up using features that were already in C; obviously there are no classes in C, but I was happy enough using structs and functions that just take a pointer of that struct type for the first argument.
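That pattern is simple enough to show in a few lines; a tiny, made-up sketch of the "struct plus functions taking a pointer to it" style:

    #include <stdio.h>

    typedef struct {
        int count;
    } Counter;

    /* "Methods" are just functions whose first argument is the struct pointer. */
    void counter_init(Counter *c)       { c->count = 0; }
    void counter_add(Counter *c, int n) { c->count += n; }
    int  counter_get(const Counter *c)  { return c->count; }

    int main(void) {
        Counter c;
        counter_init(&c);
        counter_add(&c, 3);
        printf("%d\n", counter_get(&c));
        return 0;
    }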
No argument there, but how is that radically different than embracing GHC Haskell, Racket, rustc, or Go, or any of the other de-facto compilers for a language?
I'm personally not opposed to breaking from ANSI/ISO in regards to using GCC or clang; these platforms are incredibly well-tested and supported, produce fast executables, and work on basically every platform out there.
Sure, there are a lot of compilers for C, but my point is that I'm happy enough writing GCC C; a large chunk of the nonstandard GCC stuff actually works across other compilers (like clang and Intel), and my point was that I'm happy enough, at least for the work that I do in C occasionally, to limit myself to compilers that support the GCC extensions.
Yes, the world isn't composed of FOSS UNIX clones entirely, but GCC works on a lot of platforms now that aren't Unix. MinGW has typically worked fine for me on Windows (not even counting Cygwin or WSL). I don't do systems programming, so I don't know how much (if at all) GCC is used on stuff like micro-controllers.
I suppose I didn't make my argument clear enough, but as I said, if you view GCC C as its own language, I don't view that as different than using rustc.
> C takes the middle road -- variables may be declared within the body of a function, but they must follow a '{'. More modern languages like Java and C++ allow you to declare variables on any line, which is handy.
I know this is not the case in C11 for example, but is there a compile-time speed-up when declaring variables in this way, or any other benefit?
Old C compilers would first parse the declarations, then allocate space on the local stack (usually by subtracting the number of bytes needed from the stack pointer), and go on to compile code, knowing that, from that point on, there's a well-defined stack structure (and very often in those days, even a constant offset from the "frame pointer" or "base pointer", which would be copied from the stack pointer at that point).
Introducing additional variables later means that if you emit code "as you go", you'll be less efficient - which makes no difference today, but was a big thing in the days C was designed; most compilers back then were single-pass, emit-as-you-go. There are relatively simple ways to deal with that even under the single-pass constraint, but in those days and with the common compiler construction techniques prevalent at the time, it was considered harder.
There was always an issue of fixups with scope-exiting control transfer like break and goto - however, they are simpler, and don't harm emit-as-you-go compilation to the same extent.
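For anyone who hasn't seen the older style, here's a small sketch of the difference being discussed: C89-style declarations grouped right after the opening brace versus the declare-anywhere style allowed since C99 (example values are arbitrary):

    #include <stdio.h>

    int main(void) {
        /* C89 style: every local is declared right after the '{', so an old
           single-pass compiler knows the whole stack frame before it emits
           any code for the statements below. */
        int i;
        int sum = 0;

        for (i = 0; i < 10; i++)
            sum += i;

        /* C99 and later: declarations may appear anywhere, which is handier
           but means new locals can show up after code has been emitted. */
        int doubled = sum * 2;
        printf("%d %d\n", sum, doubled);
        return 0;
    }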
On the other hand, one nice thing about C compilers is that they are _fast_. I always inwardly shudder when I see a C++ file including Boost, because I know the compiler is going to have to chew on it for several seconds every time there's a change.
In assembler, the function prologue has to set up the stack frame with enough space to store local variables. Having all local variable declarations come at the beginning of the function was an extension of that - keeping C closer to assembler and perhaps making compilation conceptually easier. With modern optimizing compilers, though, we're not necessarily directly translating to assembler line for line, and might optimize out some variables entirely, so it doesn't matter as much.
EDIT: Looks like beagle3 beat me to it while I got coffee.
I suspect it may have helped very slightly with the parsing in early compilers to not mix declarations and other statements, but with SSA-based compilation which is quite common now, it doesn't matter where you declare variables as long as you do so before you use them.
I've always pronounced it "car". I've never made the connection with my northern English accent pronunciation of character i.e "caractah" - though the Mancunian glottal stop 'er' is hard to articulate :)
Home Counties accent (or even Received Pronunciation) do not pronounce "car" and the opening syllable of "character" the same way, since the 'a' in "car" is elongated (I'm not sure what the correct linguistic term for elongation is).
As much support as there has been built into processors and compilers to get better performance out of the simple machine model of C, I must wonder if a new low level language aimed at exposing things like SIMD, large caches, SMT, and all to the programmer might catch on.
When I used to program in C decades ago, I felt the need to read literature, write small test programs etc - before launching into the actual problem solving/solution building exercise. That is, I wanted some sort of internal "armed with the knowledge now, I can get down to brass tacks..." feeling. But I see younger devs these days using advanced frameworks as lego blocks and getting right into "making it work/prototyping" mode, with bits and pieces from Stackoverflow etc., whereas I find myself hesitating to even start. I'm jealous.
Yes, and these devs come up with the worst form of spaghetti dependencies and nightmare operational environments. This and mega scale deployment has brought us to the container, automation and orchestration stage where no one is competent to actually deal with anything other than bundling code into where it worked once and pushing it everywhere.
I don't understand why HN does not see the previous link as the same and give the points to that earlier thread, instead of creating a new one and then forcing a member to post the old link with the old discussion, and me to post this silly comment...
However, links to the original discussion of an old article might be useful if people are interested in the topic but no discussion occurs on the current posting. Or, the previous discussion may have some interesting comments that are worth revisiting.
I’ve been reading through a lot of your comments and you’ve made me want to review my existing code bases and simplify things! I’ve been that intern who has had a large legacy C/C++ codebase dropped in my lap, and I would have certainly appreciated it had the original developers had your mindset going into their development. It makes sense to me why we, as developers, should be writing performant but readable-first code that allows those in the future, given the inherent assumption that the project we’re working on will succeed, to easily maintain, debug, and improve the codebase. I think if a language-specific construct offers a significant advantage that is not achievable in another, more straightforward, language-agnostic manner, then its intention and function should be clearly annotated in comments from the original developer (with reasoning, and pitfalls to avoid when attempting to debug, refactor, extend, etc.).
When I started taking undergraduate CS courses at Stanford in the late '80s, there were some wonderful instructors teaching intro courses, including Stuart Reges, Mike Cleron, and later Nick Parlante. It's great to see that Nick is still at it and has had a distinguished teaching career at Stanford.
Remember: don't write stuff like a = b[i++] or a ^ c | d, or use all these ambiguous C-specific tricks that make it harder for everyone to read your code.
At the risk of being downvoted again (for which I don't much care, but it does say much about the mindset...), is it so very unreasonable to ask of programmers using a language, and more importantly a language they will be using very frequently, to just learn the language!?!?
The amount of dumbing-down that I've seen happen to programming is already beyond ridiculous. You should definitely look at APL, Lisp, or some of the other more expressive languages out there if you think anything beyond the equivalent of glorified Asm (one statement per line, one operation per statement, one use per variable...) is "dense and unecessary[sic] shortcut".
You're absolutely right that developers should know this stuff. But that's to some degree the wrong point to make. When considering the maintainability and correctness of the codebase, "clever" hacks are often not desirable. Nowadays, there's no performance gain to be had by indulging in such tricks; the optimiser will do the right thing for both cases.
The problem for me here is not that it's "too complex", but that it's ambiguous even for experts with a casual glance. It makes perfect sense right now after you've written it, but months or years later, the person reading this while doing some maintenance or debugging might glance over and not see the subtlety of pre- vs post-increment while they are busy with other tasks and deadlines. It does make sense to avoid such pitfalls, where possible.
This has nothing to do with what I'm saying. You obviously write code without thinking about people reading it later. It doesn't matter if the reader knows a[i++] and a[++i]. What matters is that you can easily make a mistake here, and the reader can easily not see the mistake or misunderstand what you meant to write. Compactness is the enemy of clarity.
Neither of those are hard to understand if you know C. Write C like it's C, not like it's some other language.
Edit: the first example is clearly stepping through values in an array, and the other is flipping some bits and setting others... what's hard to understand about that?
Quick: does a ^ c | d mean (a ^ c) | d or a ^ (c | d)? The answer is unambiguous, and can easily be found by consulting the standard or any decent reference, but I've been using C for decades and I couldn't reliably answer without looking it up.
I dislike some overuse of parentheses, but in this case I'd definitely use them.
Chances are you haven't been using C much for hardware/low-level stuff, because the ^ operator is seldom seen outside of that context.
On the other hand, the precedence of & and | (and likewise, && and ||) should be general knowledge: the former has the higher precedence, in analogy with multiplication vs addition.
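A quick sketch of how that precedence plays out (values chosen arbitrarily): in C, & binds tighter than ^, which binds tighter than |, and all three sit below == and != in precedence, which is the classic trap:

    #include <assert.h>
    #include <stdio.h>

    int main(void) {
        unsigned a = 0x0F, c = 0xF0, d = 0x3C;

        assert((a ^ c | d) == ((a ^ c) | d));   /* ^ binds tighter than | */
        assert((a & c | d) == ((a & c) | d));   /* & binds tighter than | */

        /* The trap: == binds tighter than &, so this tests a & (c == 0xF0). */
        if (a & c == 0xF0)
            printf("probably not what was intended\n");

        return 0;
    }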
a = b[i];
i++;  // or even better, i = i + 1,
      // and unequivocally rejecting any
      // compiler for which it might matter.
What you save in typed characters means nothing compared to the clarity of separating the two operations (especially side-effectful operations).
a = b[i++] is certainly unambiguous to the compiler, but unlike the other example it should be unambiguous to the reader as well, assuming the reader is reasonably familiar with C. i++ yields the unmodified value of i, and as a side effect causes i to be incremented. There are no other references to i within the expression, so it doesn't matter when i is incremented. It's a common idiom for stepping through the elements of an array.
If you're going to be reading C code, you have to be familiar with certain idioms that might be unfamiliar to someone who doesn't know C.
I would certainly reject any compiler for which these:
a = b[i++];
a = b[i]; i++;
a = b[i]; i = i + 1;
are not equivalent, since the language requires them to be. But I find the first more readable.
While this is a common idiom, I do think it is bad from a maintainability standpoint. Let’s say in the future you need to make a change, and the value of a becomes the element-wise sum of the vectors b and c.
a = b[i++] + c[i++]
Errors like this are easier to avoid if you stay away from this sort of syntactic compression.
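For what it's worth, a hypothetical version of that example with the index update kept as its own statement; note that the compressed form above increments i twice, and modifying i twice in one expression is undefined behavior in C anyway:

    #include <stdio.h>

    int main(void) {
        int b[] = {1, 2, 3}, c[] = {10, 20, 30};
        int i = 0;

        while (i < 3) {
            int a = b[i] + c[i];   /* element-wise sum, intent is obvious */
            i = i + 1;             /* one increment, in one visible place */
            printf("a = %d\n", a);
        }
        return 0;
    }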
This is the classic mistake of thinking that whatever familiarity of the standards or language quirks are “ubiquitous” in your mind somehow ought to be acceptable as standards for assumptions about what other people, people who may never have written a line of C code before, are going to think while reading.
As a writer, whether of code or natural language, you are responsible for the thought patterns your writing actually induces in the mind of the reader, the practical side effect of someone reading your code, and emphatically not whatever you intended them to think.
In this sense your defense of b[i++] just fails. It has nothing whatsoever to do with the ubiquity of the known value of i++, because that is already assuming something (familiarity with C syntax) that is dangerously foolish to assume of a code reader.
Especially when an effortless alternative exists (i = i + 1) that meets a much, much wider natural expectation about assignment and order of operations.
I disagree somewhat. The i++ vs ++i behavior is a classic C idiom and something that any experienced C programmer should understand. The order of operations for bit operators is something only a subset of experienced programmers would likely have memorized.
I’m not advocating for using it, per se, but I’d be shocked to find any C programmer who knows the latter but not the former.
In my current job, I write production machine learning code often in Scala. I had never written a line of Scala before that, had no idea what the idioms were nor even what some of the inheritance and object privacy syntax meant.
I read through mountains of legacy Scala code while working on my first several projects, and have assisted dozens of other new developers (most of whom also have never once read or thought about Scala previously).
The assumptions people made in that codebase still give me nightmares, mostly over how foolish and lazy they are in little ways, little unexplained “self documented” things that create multi-hour stumbling blocks but which could have been written in a more “elementary” style in the first place (without sacrificing any necessary language features).
When I see things like, “any C programmer should know ...” I just slap my head in stupor. It just misses the point entirely. It’s of zero importance whatsoever to write code in language X in a style that benefits mostly only “people who know language X.”
Pretend you’re writing the code for a freshman intern who will look at it in 5 or 10 years with no prior knowledge of the language it’s written in, and now you’re in the correct ballpark for how to write professional software.
> “Someone who can't understand the code they've been assigned to work on”
Nobody said anything like this! I don’t understand how you make this conflation mistake.
Before I ever read one single character of Scala code, I had written hundreds of thousands of lines of code in C, Python and Haskell, among others.
It was easy to understand Scala code despite having never read it except for places where people tried to rely on needless shorthand or obtuse ways of doing things.
I can’t understand your response. It’s so easy to write C code that a veteran C expert has to spend hours to understand, or to rewrite the same code in a way where a freshman who only ever used Java could equally easily understand it.
Familiarity with how to unlock meaning from badly written code is not the same thing as experience with a language.
Software Engineering must be the only engineering discipline where mastery of tools and barrier to entry in terms of skills are in many ways inversely correlated with market demands. Every clown can write software these days and in fact a lot of clowns get paid six figure salaries to do so. When the domain is largely composed of clowns, standards become extinct and one finds oneself calibrating for a circus. If / when software liabilities become real, this fiesta of the clueless will come to a rapid stop. Let us hope that the damage done by that point will not be too great.
I actually find comments like this to be some mix of funny & infuriating because it is so dramatically false. There are all these myths about “one bad hire” or how a bad engineer can ruin a team, but they are mostly overblown bravado by coder cowboys who think they know how to do everything right and should just be left to code in a vacuum, and that anyone who dares challenge them or demand compromise with real world limitations, deadlines, customer demands, must be a “clown” (as you put it). It’s a sad state of arrogance and I think it leads directly to the joke we have today for software interviews, code tests, TripleByte / CoderPad garbage using language esoterica and whiteboard trivia to haze out anyone with common sense about business software.
Most code that creates value to people is messy, written in a hurry with vague specifications & unclear understanding of what the end user wants or would pay for, and it gets iterated by disjoint teams of people with competing timelines, politics, credit mongering and resource constraints, and most of it is for reporting something to someone.
If you don’t step back and realize that bare metal performance and esoteric data structures and algorithms generally don’t matter and can be picked up quickly by just about any engineer, and that real value comes from putting careful scaffolding and comments around hacked up business logic that needs to be future proofed against highly volatile business circumstances, you’re just going to be miserable and probably should quit and try to find a financially viable open source job on a compiler team or something where the exceedingly rare assumption you want to enforce (that most people reading your code are above average at the idioms of the language or tool you’re using) has some prayer of being relevant or applicable.
> ...Most code that creates value to people is messy, written in a hurry with vague specifications & unclear understanding of what the end user wants or would pay for, and it gets iterated by disjoint teams of people with competing timelines, politics, credit mongering and resource constraints, and most of it is for reporting something to someone. ...
Let's not conflate the essential complexity of a problem domain with what's purely a result of grossly sub-standard development practices. And let's not pretend that the clowns are any better at coping with "real world limitations, deadlines, customer demands" - they're not.
Nobody’s conflating anything.. these are just the ubiquitous standards imbued by Mother Nature onto the human sociological phenomenon known as “software engineering.”
To call it “substandard development practices” is like saying that wealthy people shirking their civic taxation responsibilities through offshore accounts is “substandard economic practices.” It just is.
Scala is a kitchen sink language, much like C++, one of the reasons I’m unfond of both. The number of things a programmer must know about C is much more constrained, and i++ is absolutely on that list.
I don’t see how the specifics of Scala are related to this discussion. The same thing happened to me in C in fact back in the 00s when I was doing scientific computing for a government research lab that had a large and critical collection of C programs written between the late 80s and late 90s.
It happens to all programming languages when engineers don’t prioritize future-proof readability as a first class part of maintainability along with testing and judicious choices about when to invest in extensibility designs vs when to ignore extensibility.
> when engineers don’t prioritize future-proof readability
What does it say about the competence of "future" engineers if they can't understand code written by past ones?
One of my favourite sayings is "the code is unreadable to you, because you are not qualified to understand it yet". Perhaps we should not encourage the degradation of our craft.
> “What does it say about the competence of "future" engineers if they can't understand code written by past ones?”
It doesn’t say much about those future engineers at all.
It’s easy to write inscrutable code that even veteran engineers can’t understand, yet still “gets the job done.”
Also, you act like this would imply that all future engineers are the same, but that is not true and not related to my points.
You’re writing code for the inexperienced future engineer, some poor soul tossed into a legacy codebase without much help. It’s not their fault they were put in that position. It’s sink or swim. Good code helps _that_ person swim.
As for the quote you mention,
> “One of my favourite sayings is "the code is unreadable to you, because you are not qualified to understand it yet".
That is a horrible way of thinking, with built in condescending attitudes and everything. If code is unreadable to someone, beyond basic syntax definitions, that is the author’s fault and not at all the reader’s fault for being inexperienced.
The reason 5th graders can’t read Infinite Jest is because David Foster Wallace tried to make it complicated, not because, in some skewed perspective, the writing is perfectly simple but 5th graders just aren’t experienced enough yet. It’s a complexity property of the writing not of the reader’s brain.
With literature you can get away with this because there are extenuating arguments about artistic merit.
For business software, not so. In that case, if you write something more like DFW and less like Hemingway, it’s a sign of laziness and lack of self-discipline.. and not at all a sign of skill advancement.
That seems like a completely backwards attitude. If you set out to write code to be maintainable and easily readable, you won’t think this way. Instead you’ll think, if I can write the code just slightly differently, slightly outside my own comfort zone or with less assumption about what burden should be on the reader, and it makes things much clearer for virtually no cost, why wouldn’t I? There’s no cost to me to write a separate assignment statement “i = i + 1” in many cases when I might otherwise write “i++”. The idea that it’s ok to put extraneous readership burden on the reader to nest “i++” in the middle of some other operator / indexing syntax / control flow logic is incompatible with writing good code. It has to start with prioritizing elementary clarity pedantically in cases when there’s no objective cost to disfavor it.
I'm going to argue that there is an objective cost, and that is the length of the code. To me, there's a massive win in being able to take in the whole of an algorithm on a single screen or sheet of paper, and using two statements to do something that can be idiomatically expressed in one is in direct opposition to that. I get the feeling that there are (at least) two camps here that can very easily talk past one another. In my experience, "succinct-code" advocates care about readability just as much as "avoid-language-specific-constructs" advocates. They just have different experiences of what maximally-readable code looks like.
I agree with you there are multiple interpretations, but there’s an important distinction.
For me personally or likely for other experienced engineers, code compactness is a nice thing.
But this is unimportant, because nobody should be writing the code for me or with me in mind. Code compactness for the sake of other experienced engineers is an extremely stupid thing to prioritize above the simplicity of reading straightforward, separated operations that rely on as few concepts as possible.
Code compactness is nowhere near as important as concept compactness, because the latter appeals to all engineers, not merely to other experienced ones.
Anything that requires or introduces a new concept or mixes multiple concepts together is more costly in terms of hurting readability and maintainability than corresponding “longer” code that relies on fewer concepts.
I think I see where you're coming from, although I find it rather dispiriting, in that this point of view seems to be arguing against exploring and understanding the universe of different programming techniques.
It would perhaps be instructive if you could enumerate or point to what you consider to be a good set of concepts to work from. Clearly there are some absurd answers ("it's all NAND gates in the end"), but I don't imagine that's what you're thinking. Would you include functors? applicative functors? Different forms of memory management? APL/R/Numpy-style arrays? setjmp/longjmp?
I don’t see how it argues against exploring and learning about many programming techniques / language specifics / etc. By all means any practicing programmer should be investing to learn about these things.
It’s kind of like martial arts. You are encouraged to learn a huge variety of skills, mostly fundamentals but also esoteric advanced things.
But when it comes to actually using them in real life, you should try to avoid needing to use them at all costs, and even when you are absolutely forced to use the knowledge, use the simplest, most straightforward way to address only the problem at hand, never in excess or for personal interest.
It’s about self-discipline, to orient your code for the mind of an inexperienced novice despite the fact that your simple code might be solving state of the art problems.
The problem is that a lot of C devs have been coding in C and only in C all their lives. They cannot accept that people coming from other languages want to be able to read their code, or that a security audit needs to be done over thousands and thousands of their lines and that these C hacks are an awful thing to read for someone who wants to quickly review the code.
Replace C with Japanese or Chinese and you'll see just how absurd your argument is.
If you want to read something in a language you don't know, you learn the language. Every profession has its own terms of art, its own abbreviations and lingo, and to suggest that those don't have any value is just ridiculous.
I seriously don't understand this attitude. You don't see other professions saying how they should be understandable by everyone who hasn't studied them. Why should programming be any different? I blame the "everyone can code" charlatans...
It seems like your analogy with other natural languages is just a false comparison. You say,
> “If you want to read something in a language you don't know, you learn the language”
but this means you’re missing the point. If you are writing something like a tech manual in Japanese, and you know ahead of time that people totally unfamiliar with nuanced Japanese (but with high skill at picking up basic ideas in any language) will sometimes have to rely on your manual, then how should you write it?
The foolish answer is to say you’ll write it with advanced Japanese language constructs and then turn around and say, “oh well, novices who want to rely on the manual should have gone and learned advanced Japanese.”
The smart answer is to say you’ll use self-discipline and restrain yourself from nuanced Japanese, and instead write in a way that novices have a good chance of decoding with minimal extra effort, and of course advanced Japanese readers will also still be able to get what they need too.
This is such a self-centered idea, approaching it like, “I can write whatever code I want using whatever idioms and it’s someone else’s fault if they didn’t learn the language sufficiently ahead of time before they found themselves needing to quickly read my code.”
Yes, I agree with you. That is the driving force, even with other languages besides C too, and influences many things, from code readability to interview techniques.
IMO all these features of C make programming, and therefore system design, anti-fragile. (Anecdotal experience, based on interactions with a few C programmers who work on embedded systems.)