"First a punch is just a punch,
Then a punch is not only a punch,
And finally a punch is just a punch" (heard from Bruce Lee)
Basically it means that in the beginning we punch however we can. Then a martial arts student learns the proper and different ways to punch, so a punch is no longer just a punch; it's a lot of instructions.
Then, through sheer practice and repetition, the right way to punch becomes second nature. The right way to punch is the only way, so it's just a punch again.
So at first coding is just ifs and loops... Then it's OOP and Functional and Logic... But once you transcend all those paradigms, you understand it's all ifs and loops again.
Before one studies Zen, mountains are mountains and waters are waters; after a first glimpse into the truth of Zen, mountains are no longer mountains and waters are no longer waters; after enlightenment, mountains are once again mountains and waters once again waters.
I started off writing a lot of 68000 machine code. I was always amazed what you could accomplish with a lot of loops and branches. I never lost that sense that at whatever higher level I was at later on, it was all loops and branches.
He called the redirection an abstraction: it makes things easier to build, at a cost in performance, and in many ways that's good. But there's a point where it isn't, and solutions like the idea of needing an AI to explain everything to us shouldn't be required.
I like how he talked about running games on bare metal without needing an OS, relying only on an ISA, and how games were easier to make at one point, before excessive abstractions.
One very memorable comment: when asked what he thought about the M1 chip running software faster, his response was that we could make current hardware run software 100x faster if the software were well written. A huge part of the many reasons I switched to Linux. ;)
ICE has thousands of uses, they aren't just for cars.
An example? Up above the treeline, there are communities relying on diesel for power.
Solar is a no go, with months of darkness. Wind turbines are hard to maintain at -50C, and snow and ice can impede them.
Even electric cars can only cover some of car use-cases, both due to range and refuel time issues.
Expect ICE in 2050 still.
And in most places, electric cars take petrol out of the car, and instead, see things like coal or natural gas burned, to make the electricity to charge them.
> And in most places, electric cars take petrol out of the car, and instead, see things like coal or natural gas burned, to make the electricity to charge them.
This is true, but also a good thing, while you present it as at best a wash and possibly a negative.
Coal powered EVs are better than gasoline ICEs in most situations, and nobody has a 100% coal grid anymore so the benefits are even greater. There's lots of reasons why but the main one, which is relevant to this conversation as well, is the inefficiency of ICE compared with electric motors.
What is more green? Coal or natural gas plants, AC lines that lose some power, building transformers, using AC to DC converters for all our power supplies, refining rare earth minerals to make decaying chemical cells, and using brushless motors (I think?) versus oil drilling with huge machines, refineries, transport and risks of oil spills, and building combustion engines? ICE looks like it can be way more efficient.
Yes. It's actually one of the things that makes EVs greener. A lump of valuable metals in one relatively easy to access and pure form is a lot greener thing to have than a decade of combusted fossil fuels.
I have a feeling that Rule 90 is the most efficient algorithm: the Sierpiński Triangle, self-sacrifice, infinite recursion to a perfect pattern. I'm not mathematically talented enough to prove that though.
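For reference, Rule 90 is the elementary cellular automaton where each new cell is simply the XOR of its two neighbours; a minimal sketch (the width, step count and single seed cell are arbitrary choices of mine) prints the Sierpiński triangle:

    // Rule 90 sketch: each new cell is the XOR of its two neighbours.
    // Starting from a single seed cell, the printed rows trace out the Sierpinski triangle.
    public class Rule90 {
        public static void main(String[] args) {
            int width = 63, steps = 32;
            boolean[] row = new boolean[width];
            row[width / 2] = true;                      // single seed cell in the middle
            for (int step = 0; step < steps; step++) {
                StringBuilder line = new StringBuilder();
                for (boolean cell : row) line.append(cell ? '#' : ' ');
                System.out.println(line);
                boolean[] next = new boolean[width];
                for (int i = 0; i < width; i++) {
                    boolean left = i > 0 && row[i - 1];
                    boolean right = i < width - 1 && row[i + 1];
                    next[i] = left ^ right;             // Rule 90
                }
                row = next;
            }
        }
    }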
Please come to chat about A New Kind of Science on Facebook! A couple of weeks ago I noticed that the page didn't exist, so created it. Hopefully Wolfram doesn't mind. We stand on the shoulders of giants.
I'm no physicist, but doesn't the Standard Model fit on a few pieces of paper? That the universe has rules that are a lot simpler than the behavior that emerges from those rules doesn't seem like a novel claim. Wolfram's idea to examine the universe by exploring CA-space is the novel part, I think.
Yea, the fuzziness comes from protons shuttling about, tunneling around, electrons moving in cycles with phosphates. It's all just vibrating, bumping and bumbling about. I remember how blown my mind was when I first learned that all reactions catalyzed by enzymes would have happened anyway given enough time and the enzymes just tune the systems to work in concert in a time frame conducive to life.
The organization is only needed for the inevitable bloat you add later to justify the rewrite.
It's something like: the sage uncovers the boiler, points at it and announces: Look! It is a beautiful marvel of technology! Why would you want to cover it up? Look! You might learn something!
Coding is not "ifs and loops, then $foo". That's a false premise. If you want to be fundamentalist about it (which I don't advise), a CPU is just a state machine.
assembly instruction + machine state => new machine state
Our ideas of "jumping around code" and "control flow", as in for loops or if statements, are themselves abstractions of control state (an instruction pointer) and data state (perhaps a CPU register).
So coding is really "the execution of a notional interpreter (aka semantics), then $foo." That gets to the absolute bottom of what "code" even is: instructions for an interpreting device (be it physical or mathematical).
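To make that concrete, here is a toy sketch of such a notional interpreter (the instruction set, SET/ADD/JMPZ/JMP/HALT, is made up for illustration). A "for loop" is nothing more than an instruction pointer being updated against some data state:

    // Toy interpreter sketch: a program is just instructions, and "control flow"
    // is only the rule for updating the instruction pointer against the data state.
    public class ToyMachine {
        static void run(String[][] program) {
            int[] regs = new int[4];   // data state
            int ip = 0;                // control state: the instruction pointer
            while (ip < program.length) {
                String[] ins = program[ip];
                switch (ins[0]) {
                    case "SET":  regs[Integer.parseInt(ins[1])]  = Integer.parseInt(ins[2]); ip++; break;
                    case "ADD":  regs[Integer.parseInt(ins[1])] += Integer.parseInt(ins[2]); ip++; break;
                    case "JMPZ": ip = regs[Integer.parseInt(ins[1])] == 0 ? Integer.parseInt(ins[2]) : ip + 1; break;
                    case "JMP":  ip = Integer.parseInt(ins[1]); break;
                    default:     return;   // HALT
                }
            }
        }

        public static void main(String[] args) {
            // A "for loop" written directly as jumps: set r0 = 3, then decrement it to zero.
            run(new String[][] {
                {"SET", "0", "3"},
                {"JMPZ", "0", "4"},   // if r0 == 0, jump past the loop
                {"ADD", "0", "-1"},
                {"JMP", "1"},         // back to the test
                {"HALT"}
            });
        }
    }

The little program in main is exactly the test-jump-back pattern that a for statement packages up.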
Oh, but CPU instructions can be built up from a series of μops which are then executed out of order or in an overlapping fashion, making both "instruction" and "state" boundaries in your equation fuzzier than they look. So at the absolute-absolute bottom of "code", it is instructions for an abstract model of an interpretative device.
(I'm not sure if your "but" is a retort, or an addendum. Assuming retort...) We can consider one more level of implementation machinery, but I fail to understand how uops aren't another form of instruction, and CPU internal gobbledygook like register files, caches, etc. aren't another form of state. It doesn't seem so fuzzy.
The difference is that you don't get to access the uops machine, subsets/sequences of that instruction set are offered as (macro-)instructions instead. Programming happens on the idea of a CPU.
That's more or less shu-ha-ri https://martinfowler.com/bliki/ShuHaRi.html. Very useful concept, I often refer to it: for example to argue why "just be agile" probably won't work when there are juniors or when the team is new-ish. That's skipping to ri; start with shu (scrum or whatever set process).
That's what disappoints me in modern Java. There are practically no ifs. It makes it inaccessible to beginners on the project, and the streams are about twice as slow… just because devs think they are more clever if they pile up 7 ".map()" calls to transform a list.
list.stream().filter(Objects::nonNull).map(User::getUsername).filter(Objects::nonNull).filter(name -> name.contains(searchString)).map(name -> "We have found your user: " + name).orElse("We haven't found");
Basically unreadable.
for (User user : list)
    if (user != null && user.getName() != null && user.getName().contains(searchString))
        return "Found it!";
That’s a failure of Java’s language design, not a failure of the functional/declarative paradigm.
Your for loop can do all kinds of damage to the list, and you have to read it all to find out what it does. Saner languages make the functional version more expressive.
I’m not saying JS is a sane language; it has an anemic standard library without a robust “first()” or “contains()”. But that code gets you the first user where the name matches, without resorting to indexes or null checks or worrying about how a for loop can mutate lists that it should not.
EDIT: I should have used filter(list fn) here; I was trying to write something plausible in JS instead of a purer functional language and missed that translation.
Hard disagree. For loops do not have guarantees about in-place mutation; that’s why they require a full read. Anything that starts with a map/filter is guaranteed to only apply to the thing that it is assigned to. That’s more than worth its weight in grey hairs.
EDIT: “full read” includes “everywhere else this array is used, which can have far-reaching consequences if it was passed into the containing function by reference.”
Most temporary variables are single-assignment, and many newer procedural languages do have variable declaration syntax for single-assignment variables, such as JavaScript's `const`.
Another widespread syntax feature is for-loops over collections, like `for item in list`, again being extremely easy to read and not requiring any assignments. So accidental in-place mutation is not an issue in practice. But in-place mutation can be very useful to express logic in an easy-to-understand way.
Coincidentally my main language does not have any of these features, and in-place mutation is like 0.1% of all my problems. (And notably, memory management is < 5% of my problems).
These are all just theoretic problems, what really matters is developer experience. You can write terrible bug-ridden convoluted code in any language, ESPECIALLY with many filters and lambdas and callbacks.
The issue with the functional style is that it becomes harder to see the control flow and run-time characteristics. I'm sure functional style is great to express set operations (like DB queries) but IME they tend to optimize for source code brevity and pessimize for runtime performance.
DB queries are a good analogy, especially if we qualify them as atomic operations. I’m hard-pressed to find a situation where procedural mutation-in-place makes more logical sense than an atomic update of the whole list, so why not make atomic updates the default?
It’s different if you’re working in lower-level code. I don’t intend to trivialize performance; the option to “go procedural” should be available. I just think it’s the wrong default. Operating systems and some kinds of Big Data need those perf boosts.
But for your run-of-the-mill CRUD app, map/filter is both clearer and safer than a for loop. Strong language support encourages this both algorithmically and syntactically—unlike what Java did, which is what I originally replied to.
>For loops do not have guarantees about in-place mutation;
Every modern language has a `for elem in range` construct, giving exactly that guarantee, as long as the element isn't a pointer.
Besides, I neither want nor need such a guarantee. There are many scenarios where in-place mutation is exactly what I want to do in a loop, and I don't want to fight the language, a.k.a. my craftsman's tool, to do it, just because it imposes an arbitrary restriction on what I can or cannot do.
Is this a potential source of bugs? Of course it is. That's why I have tests, code reviews and debuggers.
And besides, nothing prevents me from doing the functional approach in languages like Golang, Python or Julia *if I want to*. I simply refuse to use languages that force a paradigm on my code. This is true not only for "pure functional" languages, but also Java, which wants me to use OOP everywhere.
To me, paradigms are tools.
They need to be there when I need them, and not get in my way when I don't.
The caveat there is that you cannot call it without a null check on “u.name” first. contains(null, “whatever”) would just return false instead of throwing a runtime exception. Hence my edit: users.filter() has the same problem.
Searching for "Jo" returns all three of them in the first example; it stops when the first match has been found in the second.
Start including the tedious bits about adding found items to the list and the wasteful intermediate variables, and your "clear" code ends up wrapped in a lot of repetition that only adds noise.
It just happens that you are more familiar with the second style, but pipelines are better in many other ways, clarity of intentions being one of them.
The code he posted isn't actually valid because you can't "orElse" a list. That being said, I would presume it was meant to include a "findFirst". Something like
list.stream().filter(Objects::nonNull)
    .map(User::getUsername).filter(Objects::nonNull)
    .filter(name -> name.contains(searchString))
    .findFirst()
    .map(name -> "We have found your user: " + name)
    .orElse("We haven't found");
The snippets actually do the same thing, as long as you add a .findFirst() to the first example to make it valid Java code.
Intermediate stream operations like map or filter are always lazy. And .findFirst() is a short-circuiting terminal operation, that does not need to consume all stream elements.
with the benefit of being lazy and only inspecting list items up to the first one that matches (and if you don't mutate any data, easily parallelizable)
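A quick way to see that short-circuiting in action (a sketch; peek is there purely to show which elements get touched):

    import java.util.List;

    public class LazyDemo {
        public static void main(String[] args) {
            String result = List.of("alice", "bob", "joanna", "joe", "zoe").stream()
                    .peek(name -> System.out.println("inspecting " + name))
                    .filter(name -> name.contains("jo"))
                    .findFirst()
                    .orElse("not found");
            System.out.println(result);
            // Only alice, bob and joanna get "inspected": findFirst() stops at the first match.
        }
    }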
Modern Java looks horrible to be honest. I create abominations like this in JS pretty often, but it isn't good code. Best if wrapped in a huge try/catch where the error case just prints "didn't work for some reason...".
I don't write java, but assuming it has a null coalescing operator you can likely do all that in a single map in a way that is (in my opinion) cleaner and easier to read than both of your examples.
Java has never really liked to do in 10 characters anything that could be done in 30 characters instead, especially if it obfuscated things a bit more at the same time.
I just started using Java's functional side a few weeks ago. Lambdas are really nice and often make the code clearer. I like writing more declaratively.
But the stream interface is just horrible. I had to read a lot before I understood how to just filter a list and collect the results. And even after that, it still somehow didn't scream out what it was doing to me. I guess you get used to it.
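For anyone else in the same spot, the basic filter-and-collect incantation ends up looking roughly like this (a sketch with made-up data, assuming a reasonably recent Java):

    import java.util.List;
    import java.util.stream.Collectors;

    public class FilterCollect {
        public static void main(String[] args) {
            List<String> names = List.of("alice", "", "bob");
            // Keep only the non-empty names and gather them into a new list.
            List<String> nonEmpty = names.stream()
                    .filter(name -> !name.isEmpty())
                    .collect(Collectors.toList());
            System.out.println(nonEmpty); // [alice, bob]
        }
    }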
When getting started with Zen, I see hill as hill, water as water.
When experienced with Zen, I see hill as more than hill, water as more than water.
When finally mature with Zen, I once again see hill as hill, and water as water.
And the monkeys reading the code to see what it does. I don’t recall who it was exactly but some early computing heavyweight, maybe Von Neumann, thought that any language which requires a compiler was a waste of resources.
Computers used to be so expensive that it made more sense to hire a “programmer” to convert an “analyst’s” flowchart into machine code using only a keypunch.
As with so many other things, through mastery and practice of the “right ways” you will often discover flaws in them and that is when that which you are practicing begins to become a true art form.
New developers write simple but shortsighted code. It gets the job done, but it will be painful to work with in the future.
Intermediate developers, bitten by their past mistakes, decide to future-proof their work. But they don't just look one or two steps ahead; rather, they try to look five steps ahead and identify problems that do not and may never exist. Consequently, they over-engineer, over-abstract, and over-complicate everything. Carmack's recent "But here we are" speech resonated with me.
Advanced developers identify the right balance for each task between simple and complex, concrete and abstract, and pragmatic and idealistic. In my experience, they favor simple and pragmatic solutions, use abstraction effectively, and can satisfy near-term goals quickly without painting themselves into a corner. “As simple as possible, but not simpler.”
I try to avoid working for tech leads stuck in the second phase, which is not uncommon. If you suggest taking the “ifs and for loops” approach of solving a simple problem with a simple solution, they’ll assume you’re on the wrong side of the bell curve.
Had a boss once who insisted that all if statements should be pushed out to factory classes, and all control logic should be done by constructing a different instance of an interface. It was a pretty rigid system, but at least it did force me to write small focused classes that were easy to test.
Debated for a long time whether that methodology was stuck in the second phase or if it was actually the third. Still don't have an answer, but these days I think having a plan is better than just letting engineers run roughshod, as long as the conventions are easy to follow.
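For readers who haven't met that style, it looks roughly like this (a made-up sketch, not the actual codebase): the branch is taken once, when choosing which implementation to construct, instead of at every call site.

    // "No if statements" style: behaviour is selected by constructing a different
    // implementation of an interface, so the control logic lives in the factory.
    interface DiscountPolicy {
        double apply(double price);
    }

    class NoDiscount implements DiscountPolicy {
        public double apply(double price) { return price; }
    }

    class MemberDiscount implements DiscountPolicy {
        public double apply(double price) { return price * 0.9; }
    }

    class DiscountPolicyFactory {
        static DiscountPolicy forCustomer(boolean isMember) {
            return isMember ? new MemberDiscount() : new NoDiscount();  // the one remaining branch
        }
    }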
Programming alone vs programming in a team are very, very different occupations. A lot of what applies to one doesn’t apply to the other. I’m still painfully learning this, after all these years.
I think it comes down to there being two kinds of codebases...
In the first kind, all relevant developers have deep expertise in the system, or build towards having deep expertise. There's an expectation that flexible abstractions will be used, not abused, unless it's one of those scenarios where the use outweighs the abuse. The abstractions are tomato cages, and they're there to support the system as it grows, provide some structure, but not to strangle it.
In the second kind, the default expectation is that a developer will have little to no familiarity with the system, they will be isolated from it as much as possible, and they will be given such a tightly constrained sandbox that they can't break anything outside it. You will write your little plugin, or whatever, get in and out, and you're done.
These can both be useful kinds of systems/codebases in orgs of any size. The first kind of codebase can enable an experienced team to move really fast and be extremely productive. The second kind of system can help lots of different teams of different skill levels jump in and leverage your system with little required knowledge, and thus be productive that way. So there's really no way to say one of these patterns is good or bad.
But in general, if you churn in and out a bunch of replaceable-cog code monkeys, probably low-paid, the second kind of system just ends up working better. Giant "enterprise" software shops like the parent poster alluded to typically end up in this kind of high-turnover scenario after enough finance/MBA people have been brought in, hence their bad rap.
I have much more experience working alone, but: Git alone is a breeze. I don't need to impose some arbitrary constraint on myself because I know (or think I do) that I won't abuse it. I can use the stack I consider best / am most productive on, instead of what's fashionable this year. No linting, coding conventions, etc. Just the pure joy of problem solving.
On the other hand, under the right conditions, the amount you learn on a good team is ridiculous compared to what you'd learn alone. Weeks vs years kind of thing.
The key insight to "at least destroying your architecture makes it easy to unit test" is that being able to unit test is not actually that important. There's other kinds of testing out there!
Often doing the stupid thing is the right thing to do, instead of thinking hard to do the smart thing that ultimately won't be needed and will be harder to understand.
So juniors think twice before doing stupid simple stuff. Intermediates think twice and do smart stuff. Seniors only do smart stuff where it is needed and do the stupid simple stuff without thinking in most places.
Funny, I took over a modern python service and I was pretty shocked at what I inherited. Long gone are the days of "There's one way to do things".
Instead, this thing would give the most "enterprisey" Spring JEE application a run for its money with its endless annotations, dependency injection magic, all sorts of pseudo-types - both the "built-in" Python 3 ones like Set and List, but also the libraries like Pydantic. But unlike Java, these types aren't even really guaranteed by the language at compile time, so even if your IDE plugin can successfully detect them, things will still (silently) slip through at runtime.
The async functionality that's been bolted on to the language is worse than even the old and confusing Java multi-threading primitives, and the funny thing is it still doesn't actually run things in multiple threads. For that, your simple Rest API is running on layers of C programs like Uvicorn which itself is then wrapped by another few processes running Gunicorn which in turn is probably running behind NGINX. LOL, and we thought the Servlet stuff with Tomcat and JBoss was clunky - this is insane.
To be honest, if there ever was a sweet spot for Python, it would have been for smaller code bases that weren't complex enough for big "enterprisey" langs like .Net or Java, but were more permanent and complex than shell scripts or (back in the day) Perl could handle.
But nowadays, I don't think modern Python fits any use case real well. It's still dynamically typed, slow, single-threaded, and has a poorly managed ecosystem and hodge-podge of tools like virtualenv, pyenv, poetry, etc. that never quite become standardized and stable.
So unless you've got a bunch of Python experts who aren't interested in moving to a better lang, I'd find it hard to go with Python for new projects.
> ...endless annotations, dependency injection magic, all sorts of pseudo-types
This sounds like what happens when a bunch of Java/C# developers jump over to python without learning the "python way" - this is more related to the developers than the project
> But nowadays, I don't think modern Python fits any use case real well
Python has effectively taken over the data science / machine learning space. For most use cases, the algorithms are massively more important than the language.
> poorly managed ecosystem and hodge-podge of tools like virtualenv, pyenv, poetry, etc. that never quite become standardized and
This is true, but Java and C# also have many issues in this respect. The move from Java 8->11 is particularly painful - many fundamental libraries related to security or connection handling were not backwards compatible. Many libraries now require multiple branches for different JDK levels. Maven and Nuget are about as good as pip - they all have weird edge cases.
I use both Java and Python on a daily basis - each has their strengths and weaknesses. Java is great if you need long-running processes with weeks/months of uptime, Python is great for backend data manipulation and analysis.
Long gone are the days when Python was a hacker tool. It’s been an enterprise thing since startups got wind of the fact that it could be used to attract programmers, to attract VCs.
I'm currently working with a team of Advanced developers for the first time in my career. Nobody is padding their CV with fancy shit. Everyone has gotten the "new shiny" out of their system.
Everything is as simple as it can be, no simpler and no more complex. Sometimes a bunch of flat JSON files in an S3 bucket is enough of a database, you don't need a 42 machine Aurora cluster.
All fancy "Machine learning" stuff really is just a bunch of ifs and for loops internally :D
Despite being a joke, I know it's the "Ha Ha Only Serious" [0] sort. I can't help but think this is severely biased by the trends of "enterprise software," where you eventually "give up", and clock your 9–5 writing if+for making a fine living, but erroneously pass that off as a mature, Jedi-like way of thinking about programming, like the meme suggests. (And, consequently, you spend no less time debugging hairy, nested, imperative conditionals with for-loops that are off-by-1.)
I have no beef with if+for, but a large part of the reason they're "goto tools", if you will, is because industry is slow to assimilate many state-of-the-art ideas, sometimes by as much as 40 years.
Simpler building blocks do not necessarily mean a simpler solution. If only!
> you eventually "give up", and clock your 9–5 writing if+for making a fine living, but erroneously pass that off as a mature
This comment sure indicates to me where you most likely are on the curve.
In all seriousness, I think this is considerably off the mark. After enough experience you realize that expressivity and convenience are antipatterns and don't actually simplify things but are harbingers of complexity, bugs, tech debt, even the downfall of organizations and products.
Saying it is all ifs and for-loops is completely true. Everything else, all the abstractions and high level features, are just sugar.
I try to keep a healthy and balanced diet, myself.
> industry is slow to assimilate most state-of-the-art ideas, sometimes by as much as 40 years.
Most of those ideas are terrible. The industry is so incredibly young and has experienced so much change over those 40 years that I have a hard time accepting the notion that the industry is slow to adopt. The reason the old building blocks are still popular is because they are a thin abstraction over how computers work, and ultimately that is at the root of everything we do.
Really, there are no if+for, just compare and jump. Why don't we use what the metal uses, instead of these "expressive abstractions"?
If+for have no deeper foundational significance in the construction of programs or computations, literally, than say a lambda function. But because the latter is unfamiliar, it's spoken about in the same manner you present: as if it is some highly abstract, complicating, high-level feature (when truly that take is just baloney).
> Why don't we use what the metal uses, instead of these "expressive abstractions"?
Because the "expressive abstractions" are much easier to reason about and save programmers lots of mental effort. And, as I have commented upthread, ifs and for loops are by no means the only such abstractions.
> because the latter is unfamiliar, it's spoken about in the same manner you present: as if it is some highly abstract, complicating, high-level feature
If expressing your program in the lambda calculus is easy for you to reason about and saves you enough mental effort, go for it. But don't expect your code to be readable or understandable by many other people. The reason why ifs and for loops (and other "expressive abstractions", since as I have said, those are by no means the only ones) are ubiquitous in programs is that they are easy for lots of programmers to reason about. Whereas the lambda calculus is only easy for a very small subset of programmers to reason about.
I'm not suggesting people "express programs in the lambda calculus", but instead that incorporating a philosophy of functions and their composition is not at all a bizarre concept.
Loops and ifs work miserably with any hierarchical data, compared to recursion, for example. A lot of the world's data is hierarchical.
We now have a chicken-egg problem. I can freely admit that for+if is easy for programmers to understand solely because of how we are educated, and not due to any Bruce Lee hocus pocus about simplicity or fundamentalism, as so many others here suggest.
A programmer who, say, learned from SICP first would find a for loop awkward and bizarre when you could "just" tail-recurse.
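To make the hierarchical-data point concrete, a hypothetical sketch (Java 16+ records, made-up names): the recursive method mirrors the shape of the data, while an iterative version would have to manage an explicit stack of pending nodes.

    import java.util.List;

    // Hypothetical tree node: summing sizes over the whole tree is one recursive expression.
    record Node(int size, List<Node> children) {
        int totalSize() {
            return size + children.stream().mapToInt(Node::totalSize).sum();
        }
    }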
> incorporating a philosophy of functions and their composition is not at all a bizarre concept
Sure, functions and their composition are very often another highly useful abstraction that is easy to reason about and saves the programmer a lot of mental effort.
> We now have a chicken-egg problem.
No, just a recognition, which was not there in the original claim, that ifs and for loops are not the only useful abstractions.
> A programmer who, say, learned from SICP first would find a for loop awkward and bizarre when you could "just" tail-recurse.
Perhaps, but I think such a programmer who needed his code to be readable and understandable by a lot of other people in a large project would end up inventing a "for-loop" macro or something similar that, while it might use tail recursion under the hood, would express the construct in a way that programmers could understand regardless of how they learned programming.
> as if it is some highly abstract, complicating, high-level feature
But symbol calculus is a highly abstract, complicating, high-level system assembled out of more reality-based systems beneath it. If it seems simple to you, you're just under the curse of knowledge.
I'm not sure what a "symbol calculus" is. Do you mean "lambda calculus"? I think that's a lot less complicated and abstract than a fabled machine with an infinite tape that's controlled by a transition graph of symbolic states. :)
And I don't know what a "reality-based system" is.
Oof, and to think one could helpfully inform the other! :) To be clear, I am a programmer, not a computer scientist, so my opinions are based off writing code and managing teams to write code that works, and less so about abstract machinations of computer scientific thinking.
For the record, I generally think that people exaggerate the complexity of Functional styles.
I think the reason that stateful, imperative code is the default is because that is how we tend to "think". Example, if I wanted to get a list of items for the grocery store, I get a piece of paper (instantiate an array), go through the list of things I might need (loop through a source array) and write down the ones I definitely need (append onto the destination array). If I run out of space, I get a new piece of paper and continue the process.
With that algorithm, I can stop and test each individual statement; I don't need to "think about it" because I can write and execute a tiny part of it (even if I'm only executing it in my mind).
In a functional style, I have to think more declaratively. Given all my items, filter out the ones that need replacing and give me that list. It's much easier to reason about abstractly, but I have no details about how that's actually happening.
I think this kind of thinking is superior most of the time: I would rather read map.filter.reduce more than a for loop with branch statements any day of the week, but I am trusting the implementations of the mapping and filtering and reducing functions. Of course, for any non-trivial algorithm one is still passing a function into the map/filter/reduce anyway so I can still reason about those smaller stateful sections without worrying about the map/filter/reduce piping.
Perhaps I've never worked in a truly pure functional style so I may not be dealing with the mountains of abstractions others seem to complain about
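The grocery-list example in both styles, as a sketch (Item and needsReplacing are made-up names):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.stream.Collectors;

    record Item(String name, boolean needsReplacing) {}

    class GroceryList {
        // Imperative: build the list step by step, like writing items onto paper.
        static List<String> shoppingListImperative(List<Item> pantry) {
            List<String> toBuy = new ArrayList<>();
            for (Item item : pantry) {
                if (item.needsReplacing()) {
                    toBuy.add(item.name());
                }
            }
            return toBuy;
        }

        // Declarative: state what the result is, trusting filter/map to do the stepping.
        static List<String> shoppingListDeclarative(List<Item> pantry) {
            return pantry.stream()
                    .filter(Item::needsReplacing)
                    .map(Item::name)
                    .collect(Collectors.toList());
        }
    }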
Which, without side effects, can be very heavily optimized (probably even better than imperative code)? Hell, the pipelining inside GPUs and CPUs is perhaps closer to lambda calculus than to Turing machines.
It’s not like the machine code will look much closer to your C code either. That’s also a spell of “compiler writers and hardware vendors trying to uphold the view that C programmers are so close to hardware and that memory access is flat”.
Personally I disagree. We should be using state machines and pure functions. If+for loops are just what's easiest to express in the major languages of today. they are no more or less computationally expensive but due to lack of tooling they are often cheaper to write.
In languages and libraries that allow FSM and pure functional kernel based designs you can get just as clear logic that is expressible not just to the programmer but also to business personnel. It's counter-intuitive to a certain extent because so much of programming is built around imperative programming but FSM based logic is and will continue to be easier to understand long term because you can trivially visualise it graphically. This ultimately is what a lot of the functional paradigm is built around. Use the mathematical and graphical representations we've used to understand systems for decades. They are well understood and most people can understand them with little to no additional education past what they learned in their business or engineering bachelors degrees.
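As a sketch of what an explicit state machine can look like in a mainstream language (a made-up turnstile example, assuming Java 14+ switch expressions), the whole behaviour is the transition table, which is also trivial to draw as a diagram for non-programmers:

    // Tiny explicit finite state machine: a turnstile with two states and two events.
    enum TurnstileState { LOCKED, UNLOCKED }
    enum TurnstileEvent { COIN, PUSH }

    class Turnstile {
        private TurnstileState state = TurnstileState.LOCKED;

        TurnstileState handle(TurnstileEvent event) {
            state = switch (state) {
                case LOCKED   -> event == TurnstileEvent.COIN ? TurnstileState.UNLOCKED : TurnstileState.LOCKED;
                case UNLOCKED -> event == TurnstileEvent.PUSH ? TurnstileState.LOCKED   : TurnstileState.UNLOCKED;
            };
            return state;
        }
    }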
In a very real way, it's all conditional jumps in assembly, and everything you've learned to make programming easier by letting you more directly express your high-level intent is just sugar. It might even help some or most of the time. But what you're actually doing is creating a bunch of branches and loops, and as much as the high-level stuff might help, you really shouldn't forget this is the medium you actually work in.
Most professions have a healthy respect for the base materials they work with, no matter how high the abstractions and structures they build with them go. Artists know their paints, stone, metal, etc. Engineers know their materials as well. They build by taking the advantages of each material into consideration, not assuming that it's no longer relevant to their job because they get to just work in I-beams. Programmers would do well to adopt a healthy respect for their base materials, and it seems like often we don't.
I disagree with your use of "just" here. It's common for programmers to dismiss the importance of syntax, but syntax and notation are the interface and UX between the language semantics and your brain. It's no less important to get this right. There's a half-joke that Europe was able to rapidly advance in calculus beyond Britain due to the superiority of Leibniz's notation.
> healthy respect for their base materials
What's unique about computers is the theoretical guarantee that the base does not matter. Whether by lambda calculus, register machines or swarms of soldier crabs running from birds in specially designed enclosures, we're fine as long as we appropriately encode our instructions.
> bunch of branches and loops
You could also easily say it's just a bunch of state machines. We outsource tedious complexity and fine details to compiler abstractions. They track things for us that have analogues in logical deduction so that, as long as we follow their restrictions, we get a few guarantees. When writing, say, asynchronous non-deterministic distributed programs, you'll need all the help you can get.
Even designing close to the machine (which most programs will not need) by paying attention to cache use, memory layout, branch divergence or using SIMD remain within the realm of abstractions.
Agree with this a lot. In other words, don’t be too clever. That leads to an unmaintainable codebase. There is value in simplicity and not overly using abstractions that take you farther and farther away from bare metal.
> There is value in simplicity and not overly using abstractions that take you farther and farther away from bare metal.
This is a contradiction. Simplicity is obtained through abstractions. As an example, fifty years ago, 'goto' reigned supreme. Then structured 'if/else/for' took over, as the superior abstraction over 'goto'. Now use of 'goto', while being far simpler to implement and closer to the bare metal, is commonly derided by many programmers.
The long term trend in software is constantly increasing abstraction hiding mountains of complexity, which increases both simplicity and complexity. Writing print('hello world') is simple, but the compiler, OS, silicon, etc. that makes that simplicity possible is extremely complex.
Programming is primarily about managing complexity. Other than perhaps mathematics, there is no other field that must, should and can apply the amount of abstraction on top of abstraction as software engineers do.
It is a must, because decades of business requirements built on top of each other, without anyone understanding the whole, are complex. Writing a JIT compiler that can routinely switch between interpreting code and executing it natively, a database optimizing queries, a mathematical library using some fancy algorithm - these are all complex in a way that is not reducible.
Complexity easily outgrows even the whole of our mathematics; we can't prove non-trivial properties of programs in general (the halting problem, etc.).
So all in all, no: we can only respect our "base materials" by finding the proper abstraction for the problem, as our base material is complexity itself. It might be for loops and ifs, but it may very well be a DSL built on top of who knows how many layers, because only at that abstraction level can we even start to map the problem domain to human-consumable ideas.
> Personally I disagree. We should be using state machines and pure functions. If+for loops are just what's easiest to express in the major languages of today. they are no more or less computationally expensive but due to lack of tooling they are often cheaper to write.
In my experience, programming with primitives and basic flow-control operations frequently tends to be at least an order of magnitude faster than more complex state-management paradigms. Compilers are very good at optimizing that style of code. Loops often get unrolled, and the branch predictor is kept happy. A good compiler may use vector instructions.
In many cases with cold code it flat out doesn't matter, the CPU is plenty fast, but when it does matter, explicit if-and-for code absolutely mops the floor with the alternatives.
Will your manually written imperative code beat an SQL database for the same task? Because it uses a very very high level description on what it has to do and chooses an appropriate algorithm for that for you.
You can optimize one specific query (quite painstakingly, I believe) to beat a general db, but it is very far from clear that “for loops will beat higher level abstractions”, imo.
In designing my search engine, I ended up building a custom index instead of using an off-the-shelf database solution. Frankly, it's so much data that a SQL database can't even load it when run on the same server that runs my entire search engine.
With my own set-up I can have my data in pre-calculated immutable tables that are memory mapped off disk. That means I can write lock-free code. My updates are large hour-long batch operations off a written journal.
For the lexicon I'm also rolling a special one-way compression scheme/hash function that guarantees uniqueness but only works due to quirks with the data.
General purpose databases can't do these things, because then they wouldn't be general purpose databases.
SQL databases perform really well on average given your query is relatively optimized, but they do not represent the pinnacle of performance. Any time you want the fastest most efficient code to deal with a particular use case, it pays enormous dividends to roll your own implementation that has domain awareness. It means you can cut corners general solutions simply can't.
At least in the C++ space, stuff like boost-sml is able to produce assembly that is often as fast or occasionally faster than handwritten if or switch based FSM logic.
> FSM and pure functional kernel based designs you can get just as clear logic that is expressible not just to the programmer but also to business personnel
I've yet to see a secretary who could "return a new table state such that documents become a right fold binding together a map of the composition of signing and copy routines over documents" instead of "get these documents from the table, copy, sign and put them back in a binder". This is nonsense some of us want to believe in, but it is not true.
I inherited a piece of code that was designed as a state machine and the state machine design basically made the project become pure technical debt as requirements increased over time and engineers had to make it work with the state machine model that had been chosen when the project started.
If the project had instead been designed to have less unnecessary state and “transitions” it would have been a lot easier to make changes.
All those ideas sound good by themselves but they are really bad for “defensive” coding. Good luck selling a project to refactor something when it’s going to take multiple people-years. Once you’ve made the mistake of committing to an inflexible design it’s either that, replace/ignore the thing, or deal with low productivity as long as the thing exists.
I've written many SM implementations, starting from ones used in low-level protocols and up to business-process middleware, so I have experience and know how incredibly useful and powerful they are when used in the right place. But using them everywhere, especially in some math algos, would be an insanity worse than GoTo.
>They are well understood and most people can understand them with little to no additional education past what they learned in their business or engineering bachelors degrees.
Imperative code:
Take A and B.
Add C ml of Water.
Stir for 2 minutes.
If it thickened, add D grams of flour, else go back to stirring.
This is easily understood by everyone, degree or no.
Once someone figures out loops and the difference between statement and expression, they can essentially understand imperative code.
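And it translates almost word for word into code (a sketch; Pot and its methods are made-up helpers):

    // The recipe above as imperative code; Pot and its methods are hypothetical.
    interface Pot {
        void add(String ingredient);
        void addWaterMl(int ml);
        void stirMinutes(int minutes);
        boolean hasThickened();
        void addFlourGrams(int grams);
    }

    class Recipe {
        static void cook(Pot pot, int waterMl, int flourGrams) {
            pot.add("A");
            pot.add("B");
            pot.addWaterMl(waterMl);
            do {
                pot.stirMinutes(2);          // "stir for 2 minutes"
            } while (!pot.hasThickened());   // "else go back to stirring"
            pot.addFlourGrams(flourGrams);   // "if it thickened, add D grams of flour"
        }
    }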
I've always felt like explicit state machines are the sledge hammer you break out when you can't find any good abstraction to encapsulate the logic. As an intermediate step for parsers it's pretty powerful, but it's not something I want in my hand written code if I have any alternatives.
> Saying it is all ifs and for-loops is completely true. Everything else, all the abstractions and high level features, are just sugar.
You could just as well say that ifs and for loops are just sugar for gotos and all programming is just gotos.
The reason ifs and for loops are used instead of gotos is that they are very useful abstractions that are easy to reason about and save the programmer lots of mental effort. But they are not the only such abstractions.
To the extent that other abstractions can create problems, it's not because they're really just sugar for ifs and for loops, it's because they are not well crafted abstractions so they are not easy to reason about and don't really save the programmer any mental effort. But there are plenty of abstractions other than ifs and for loops that are well crafted and do save the programmer mental effort, in many cases lots of it.
I completely agree with this. A lot of people use the term "too much abstraction" when they mean "bad" or "ineffectual abstraction." For-loops can be a wonderful abstraction, but they're certainly no basis for all abstractions.
> After enough experience you realize that expressivity and convenience are antipatterns and don't actually simplify things but are harbingers of complexity, bugs, tech debt, even the downfall of organizations and products.
Suggesting that experience leads to jettisoning expressivity is at odds with my direct observations of experienced software engineers working in large teams. The more experience, the _better_ the engineer gets at picking the right level of abstraction to write code that can be maintained by others. Picking a single point on the abstraction spectrum (just above goto but not below it!) is far too rigid for the diversity of tasks that software engineers need to solve.
Math today does a lot of things you couldn't do with math back then. It isn't just syntactic sugar, there were simply lots of problems they didn't know how to solve since the math they had wasn't powerful enough to solve it. Calculus for example solved a lot of problems they have, and calculus isn't syntactic sugar for anything, it is a new instruction set.
However the code abstractions people used 50 years ago could still implement all programs we have today. We added a lot of sugar on top, but the core of coding remained the same. For example, you could implement Map, Reduce etc in C, they are just functions and not novel features of programming.
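For example, a sketch of map as a plain function (nothing here beyond generics, a function argument, and a for loop):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Function;

    // "map" is not a language feature here, just an ordinary function with a loop inside.
    class MapAsAFunction {
        static <A, B> List<B> map(List<A> input, Function<A, B> fn) {
            List<B> output = new ArrayList<>(input.size());
            for (A element : input) {
                output.add(fn.apply(element));
            }
            return output;
        }
    }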
It’s just so sad that the lowest common denominator has become the standard now. When I first learnt Clojure it entirely changed the way I think and solve problems. The code really was elegant.
Obviously, it can only be read by someone who can also understand programming beyond ifs and fors. That’s a non-starter in most environments - enterprise or otherwise.
Funny enough, I see most innovations coming from consultants who do the same work for multiple companies and realise the repeating patterns and extract an abstraction.
Ifs and fors are the easiest concepts to explain to non-developers, so it makes sense to start there.
I wouldn't say that they are the standard now, but using and mastering all features in a language is hard.
Add to that design patterns, classes and code layout it becomes a full-time job to keep up.
I have been in contact with code most of my professional life, but I'm still not comfortable writing large amounts of code. The simple reason is that I don't do it full-time.
Here are the features in C# just to illustrate how complex a programming language is.
I agree that modern software development for non-full time developers is brutal, several of my data scientist colleagues are remarkably brilliant people and yet they struggle with some more advanced programming concepts.
However, most of those features are relatively standard and are more conceptual than syntactical in nature. Bashing people because they don't know stuff is stupid and counterproductive, but I shouldn't be forced to code in the programming equivalent of roman numerals just because someone else can't be properly fucked to understand lambdas or interfaces or generics, all stuff that's literally covered in your run-of-the-mill CS 101 course.
It all boils down to having enough humility and empathy to understand that other people are not, in fact, the same as us.
That's what I mean. Each language has a different syntax and it takes a while to gain mastery over it, and that's fine. But there are concepts that are immediately portable to multiple languages.
>I have no beef with if+for, but a large part of the reason they're "goto tools", if you will, is because industry is slow to assimilate many state-of-the-art ideas, sometimes by as much as 40 years.
For assimilation to happen, the state-of-the-art solution also has to result in a net gain over the existing solution, and the higher the differential in complexity between the two, the bigger that gain has to be.
And, these days, "net gain" in an industrial context is typically tied to almost no aspect of the quality of the code, but more to the management of large groups of people, as well as stability and growth of the business.
What this really means that once you get to a certain level of experience and seniority the actual code you write in the small is pretty much irrelevant. What matters is the overall architecture of what you’re building: the data structures and APIs. The challenge becomes about working together as a team, and with other teams within your ecosystem. Sophisticated language constructs don’t actually help you solve those problems, and imo their benefit is marginal where they do help.
I view map/filter as better abstractions than for loops, and do not consider them to be sophisticated language constructs. They correlate with descriptions of a program’s goals much more naturally than for loops do: get the name of each user, remove everyone under 21, only show accounts with positive balances, etc. Reduce is more arguable, but I think it also applies: show the total amount of money in all of this user’s accounts.
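As a sketch of that correlation (Account and its fields are made up), the code reads almost like the requirement:

    import java.util.List;
    import java.util.stream.Collectors;

    record Account(String ownerName, int ownerAge, long balanceCents) {}

    class Report {
        // "Only show accounts with positive balances, remove everyone under 21,
        //  get the name of each remaining owner."
        static List<String> eligibleOwners(List<Account> accounts) {
            return accounts.stream()
                    .filter(account -> account.balanceCents() > 0)
                    .filter(account -> account.ownerAge() >= 21)
                    .map(Account::ownerName)
                    .collect(Collectors.toList());
        }
    }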
With an additional level of abstraction you could say "goto jumps", but "ifs and loops" give a commonly understandable logic for everyone; deeper abstractions increase reading complexity, while higher abstraction is achieved via functions and overall architecture.
Scaling up those "ifs and loops" is the challenge, as a team or as an individual, with the common goal being to keep the software under control.
Meh, most business logic really is "if" and "foreach". That doesn't mean it's not complicated, as you say. But all that category theory stuff, at the end of the day, really is just an attempt to manage the complexity of lots of conditional branching and iteration.
Sure, but the word "just" is doing a lot of work. It seems to be where a code base of uncomplicated ifs and fors leads to asymptotically, because both of those constructs don't prohibit you in any way from sneaking in "just another line" to either of them.
There are a lot of sophisticated problems dealing with enterprise software even in higher languages and even in situations where things like performance or resource usage is not a primary concern.
For example, how do you handle authorization, logging, and how do you make the code maintainable? That's a really tough problem that requires a lot of thought about the overall system design.
And of course it's always a lie to say that performance and resource usage aren't a concern -- they're not a concern until they are.
> is because industry is slow to assimilate many state-of-the-art ideas, sometimes by as much as 40 years
How convenient that the software industry is about 40 years old. So these ideas should "break through" this invisible arbitrary corporate waiting area into the limelight any day now, right?
They are breaking through. For instance, Python just got (a very limited form of) pattern matching. It has been A Good Idea since at least the 1970s. Garbage collection has been known since the 1950s but only became "mainstream" in Java.
The really useful 'print' debug lines might be kept at additional 'debug' flag levels. This is particularly useful for tracing the behavior of programs that no longer have debug symbols and are running in real environments.
In rare cases I pull out a real debugger, but most of the time the right prints in the right places are just as good. I can also iterate much faster because I'm not jumping between the code and the debugger, or pulling the debugger out of the loop it's stuck in.
I've come to the conclusion that it's a good skill to have since it's (or logging which is basically the same) the only debugging method that's always guaranteed to be available. For example there's lots of build and CI tools out there that have no Real Debugger.
I'd never seen that meme before, but there's a Bruce Lee quote (maybe apocryphal) that has had a lot of meaning for me ever since I got over the same hump myself.
“Before I learned the art, a punch was just a punch, and a kick, just a kick. After I learned the art, a punch was no longer a punch, a kick, no longer a kick. Now that I understand the art, a punch is just a punch and a kick is just a kick.”
Anyone who has ever written any software has felt like the unenlightened half-person in the middle of that distribution at least once -- for example, when learning how to code with a different approach in a new language.
I have felt like that more than once. Everyone here has, I suspect.
I may be biased because I spent too much time arguing about this,
but you hear those $fancy_principles / FP / hard OOP / "clean code" evangelists, and then you go to any repo of real-world software - Linux, compilers, Kubernetes, Git, blablabla - and everywhere you see for loops, gotos, and ladders of if statements.
I mean, you cherry-picked by quite some criteria there. It's all C and Go, and they somewhat lack higher-level abstraction capabilities. On the other hand, compilers are often written in C++, or are bootstrapped in the very same language, though. Also, what about OpenJDK, Elasticsearch, all the web servers running the whole cloud? Linux might be the underlying OS, but that's not the program actually doing business logic. If anything, it's just another layer of abstraction.
Also, let's be honest, C does all this "virtual method" magic on a per-project basis, in a way that will not be understood by any tool ever (all those function pointers to whole new implementations, passed from God knows where, with barely any typing). At least FP and OOP knowledge somewhat transfers and is queryable by tools.
I gravitate towards useful abstraction. I've written the same loops (find an element, find a maximum, do something to every element, filter elements, etc.) 10 000s of times by now. It got old after the first 100.
This is basically the progression in any field or craft. As one becomes more experienced, one basically figures out the optimized stuff needed to successfully solve the problem in hand, successful in the sense that it both meets current requirements and enables future changes or evolution.
I describe this path of discovery as:
beginner: function over form
intermediate: form over function
transcendence: form is function
However, I will disagree that coding is just about ifs and for loops. To me, coding, programming, software development, or whatever you want to call it is about three things: how to use a computer to do something, communication between people (including your future self), and how to think about a domain. “ifs and for loops” does not capture this.
Except it never can be that simple because many systems have millions of ifs when the entire system is considered. So architecture and parallel evolution of those millions of ifs becomes an entire field of study :)
The problem with this is that most software is absolute garbage and most code is an awful, unmaintainable mess. The middle of that curve are all the people who are trying to improve that situation. The right side of the curve are the people who have given up. But maybe that does make them the smarter ones.
Do you write your code in notepad or pico, or use brainfuck as your primary programming language, or copy your code to a new folder for version control? Those things are all the simplest in their tool category.
George Hotz said something once that most modern developer jobs are depressing because you're not doing any actual programming. I.e. you're not given a problem to solve with code, you're just taping together frameworks and pieces of code that someone else wrote to order. It's a bit like studying to be a chef for five years and then having to put together one of five types of burgers.
Like everything Hotz says it's spiced up of course, but there's a kernel of truth to it.
I love George, but he's a bit of a reactionary to a fault and this anecdote is a perfect example. A person with deep knowledge and thoughtfulness will make almost the exact same point with much more nuance, aka Jim Keller: https://www.youtube.com/watch?v=Nb2tebYAaOA&t=1363s
The choice quote in contrast to Hotz is "executing recipes is unbelievably efficient -- if it's what you want to do"
There's definitely an assumption by Hotz that programming and solving "real" problems is what everyone should aspire to, and that anything else is just meaningless. Like anything in life, what's meaningful is of course completely subjective, all the way from some people actually finding it fulfilling to others just not being interested in putting in that much effort into their career and preferring to do other things with their time.
The irony of this point is if you ever watch a livestream from Hotz, he is, at an amazing level, literally taping together frameworks and systems to graduate to a point where he can begin to express solutions to a real problem. It's one of his great strengths -- nothing cannot be accomplished through hours dedicated to a problem with the extant tools we have. If he wants to impugn the flawed systems we have at our disposal it's just because nothing exists that is in congruity with what is happening in his head.
He is, however he may dislike it, good at taping together frameworks and stands as a success case for the systems he might look down on.
That's why I am seeing DB schemas without indexes lately. At the end of the day, people with this kind of thinking make others fix the broken code they leave behind.
As a senior software engineer I had to spend a lot of time at night fixing code written by junior devs and interns.
The code that the company and devs (the "just ifs and loops" gang) were proud of was a pain in the ass for me, so I quit the job entirely and do freelancing these days.
I tried to explain what was wrong and why, but no one would listen; they all believed they were 10x developers. Life lesson learned: never ever try to correct an idiot.
Here are some of the practices they followed
* No indexes on tables, not even unique
* No DRY, NO KISS
* Switching to new shiny framework/library every week
* no tests
* Keeping entire codebase in a single file (4k LOC)
* no versioning
* Exposing DB credentials to the entire internet and not acting when being warned
I think there's a general lesson to be learned here, which is that you should be wary of any colleague within any branch of IT that honestly claims the things you're working on are "simple". I'll take an insecure colleague that wants to do a good job over a confident colleague that refuses to listen any day.
Writing code that works is simple. Writing code that doesn't break is not simple.
Isn’t this just a rant that the senior management/devs failed to set up an environment[1] to stop this sort of thing happening? I don’t really understand how it’s related to the linked tweet or the subtext of it.
[1] either a policy like strict code review or perhaps a cultural change about quality or responsibility or mentorship.
I was brought in to work on a two-year-old project built by junior devs and interns.
Companies thought they were saving money and devs thought they were born coders, so nobody bothered to read a book on software architecture/engineering, and hence I had to deal with the big pile of ....
It’s a pretty fun and enlightening exercise to sit down and work out how you’d implement all the usual instructions when all you have is Subtract And Branch If Negative.
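For example, here's a rough sketch (in Python, with a made-up instruction encoding, memory layout, and halting convention) of a tiny subtract-and-branch-if-negative machine, and ADD built out of three of those instructions:

# Toy "subtract and branch if negative" machine: each instruction is
# (a, b, target), meaning mem[b] -= mem[a]; jump to target if the result < 0,
# otherwise fall through. Running off the end of the program halts it.
def run(program, mem):
    pc = 0
    while pc < len(program):
        a, b, target = program[pc]
        mem[b] -= mem[a]
        pc = target if mem[b] < 0 else pc + 1
    return mem

# ADD: compute mem[Y] = mem[Y] + mem[X], using a scratch cell Z.
X, Y, Z = 0, 1, 2
mem = [7, 5, 0]          # X = 7, Y = 5, Z = scratch
program = [
    (Z, Z, 1),           # Z -= Z  -> Z = 0
    (X, Z, 2),           # Z -= X  -> Z = -X
    (Z, Y, 3),           # Y -= Z  -> Y = Y + X
]
print(run(program, mem)[Y])   # 12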
I started in GW-BASIC when I was a kid, and all I needed was IF and GOTO. You could do anything with that. No loops, no functions, no nothing. IF's and GOTO's!!!
If you want to go even lower than that, coding is basically just saying yes or no in response to a yes or a no.
Sure, that's oversimplifying it, but that's the smallest unit of information being changed during computation.
But yes, once you learn the basics that are shared between most programming languages and don't get distracted by the nuances, it doesn't take that long to pick up a different language. Being proficient is of course another question, but achieving a basic understanding shouldn't take all that long when you just focus on how to if-else, how to set up loops, and how to assign variables.
All machines that satisfy a test of whether they can do yes-no, store the result of that yes-no, and then go to another yes-no are verifiably considered yes-no machines, that is, they are yes-no complete.
A marathon is just putting one foot in front of the other, after all. What’s the big deal? I mean a two year old can do that, and they can’t even handle logic yet.
As a marathon runner, people sometimes ask "How does a person run for 3 hours O.O" and well it's about the same as running for 5 minutes except you don't stop.
Honestly that's not a great example given that you can't understand ZFC until you already know enough set theory to understand the motivations for ZFC.
UGH. Back in my day the only language was BASIC and we only had IF and GOTO. Dijkstra has made these children SOFT and I'd piss on his grave if I could be arsed to get out of my rocking chair.
Oh, I can't say that it didn't. But I was only 7, and more amused by things like
10 PRINT "POOP"
20 GOTO 10
than, what, reading a non-existent manual? My parents weren't programmers, so I learned by stumbling in the dark.
But, you've got me curious. I recall using BASICA, GWBASIC and QBASIC -- reading over their respective histories, I'd have gotten my start on BASICA on a hand-me-down 286. I'm not finding good docs on the language, but a familiar program [1] uses FOR loops -- so they were supported. But I distinctly recall hand-rolling a for-equivalent loop with GOTO and IF, counting down to zero.
Young you must have arrived at the concept of a for loop without knowing there was a keyword in the language designed to support that, which is pretty cool!
In my twenties, I wanted to use all the cool PL techniques: meta-programming, reflection, monads, macros, type-level programming, etc. I'm getting older and now I want my programs to be 90–95% functions, arrays, structs, enums, and loops and only parsimoniously throw in more advanced language features.
There’s a joke in the fp community I can’t find right now that describes the evolution of programs from imperative side-effectful statements to a for comprehension, with exception catching, that looks nearly identical.
Probably not the right one, but "The Evolution of a Haskell Programmer"[0] sounds like a similar idea which goes from a Freshman Haskell programmer's simple factorial to the abstract heights of a Post-doc Haskell programmer, then back down to a Tenured professor's simple factorial.
It's not "the" monoid in the category of endofunctors though, just "a" monoid. That also describes Applicative (less powerful) and Arrow (more powerful).
A monad is like when your cat poops on the carpet, so you wrap your cat in toilet paper. Then your cat has kittens, and the kittens are born with toilet paper wrapped around them!
No here’s a better explanation:
A monad is like when you have a clock, but it’s broken and goes too fast. So you rewind it, and the next morning you wake up and it’s 5000 BC.
Or here’s a better explanation:
A monad is like when Donald Trump won the election, and then stacked the Supreme Court, and now we’re totally fucked!
They look like concise programming for mathematicians, which in my opinion, is tending towards set theory notation. https://xkcd.com/2545/
Personally I prefer more names, more XML tags, more comments, more parables, more hyperlinks, more different ways of expressing the same thing, to make it less ambiguous and easier to communicate and agree what we're all talking about.
Good code explains the problem to the reader and implements the solution as a side effect.
You can often express the problem quite well using a combination of if, for, comments, variable names, function names, list/set and map data structures.
Sometimes (<10%) you encounter a problem that's interesting enough to define a custom type, and sometimes you make tree data structure out of it (<1%). On very rare occasions (<0.1%), you need a more complex data structure.
Code that uses thousands of classes is very hard for a reader to approach, but so is a single file with a never-ending ifs and for loops. Always write code for another person. If nothing else that other person might be you in a few months (or much sooner).
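A contrived sketch of what I mean (the domain and every name here are invented, just to show the shape):

# Hypothetical example -- the names carry the problem statement, the
# implementation is just a loop, an if, and a set.
OVERDUE_DAYS = 30

def customers_to_remind(invoices):
    """Customers with at least one unpaid invoice older than OVERDUE_DAYS."""
    overdue_customers = set()
    for invoice in invoices:
        if invoice["paid"] is False and invoice["age_in_days"] > OVERDUE_DAYS:
            overdue_customers.add(invoice["customer_id"])
    return overdue_customers

invoices = [
    {"customer_id": "acme", "paid": False, "age_in_days": 45},
    {"customer_id": "acme", "paid": True,  "age_in_days": 90},
    {"customer_id": "zeta", "paid": False, "age_in_days": 10},
]
print(customers_to_remind(invoices))  # {'acme'}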
Coding is basically just writing for humans in a language a computer can execute.
Those mean nothing without the protein machinery that translates codons into other proteins, though. DNA and RNA without ribosomes is like a language without a compiler.
Codons are an (effectively arbitrary) grouping of 3 consecutive nucleic acid bases that ribosomes can translate into an amino acid when building a protein. Technically any combination of 3 consecutive bases could be considered a codon; however, biologists usually only group bases based on how they are normally read by a ribosome. Note that this means it is possible for a single sequence of nucleic acids to code for multiple proteins, with the only difference being the starting offset.
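If it helps, here's a rough code analogy (only a handful of codons from the standard RNA table, so treat the mapping and the example sequence as illustrative, not a real translator):

# Tiny, incomplete slice of the standard RNA codon table -- illustrative only.
CODON_TABLE = {
    "AUG": "Met",  # also the usual start codon
    "UUU": "Phe",
    "AAA": "Lys",
    "GGG": "Gly",
    "UAA": "STOP",
}

def translate(rna, offset=0):
    """Read the sequence three bases at a time, starting at `offset`."""
    protein = []
    for i in range(offset, len(rna) - 2, 3):
        amino_acid = CODON_TABLE.get(rna[i:i + 3], "?")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

rna = "AUGUUUAAAGGGUAA"
print(translate(rna, offset=0))  # ['Met', 'Phe', 'Lys', 'Gly']
print(translate(rna, offset=1))  # different reading frame, different "protein"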
Proteins are made of amino acids not nucleic acids. Nucleic acids are polymers whose elements contain a phosphorus group. Amino acids contain an nitrogen group (an amino group to be specific).
They are distinct biopolymers. DNA and RNA are polymers of just 5 different types of nucleotides. Proteins are polymers of some 20 radically different amino acids.
A codon is just a triplet of nucleotides. But proteins are made out of amino acids; they have "nothing" to do with DNA in itself. It just happens that DNA, transcoded via the codon table into proteins, encodes some useful ones that in certain environments can bootstrap the whole process and do some useful work.
I like the phrasing "software engineering is programming integrated over time." It involves things like rollouts, migrations, deprecations, adapting to new requirements, and designing in such a way as to make all those easier.
The part that does stuff is just ifs and + signs. But making a computer do stuff isn't hard; that's what they're for. The problem in software engineering is stopping it from doing the wrong things.
(Hot take: the way we do this is wrong; we should be adding superpowerful not-doing-stuff features to languages, vs doing-stuff features.)
I like "Coding is to Software Engineering, as building a chair is to running a furniture factory". You need to know a fair amount about the former to excel at the latter - but the latter also requires project management, process and product design, collaboration, and communication skills.
I remember explaining recursion to an aspiring programmer to apply to some tree node walking or something, and at some point it clicked! I saw the second it worked in the reflection in her eyes, they got big and lit up and there was this palpable sense of "a-ha" in the room! It was one of the coolest moments of my professional life.
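Something like this, I imagine -- a generic sketch of that kind of recursive tree walk (the Node class and example tree are made up, not her actual exercise):

# Minimal recursive tree walk: the moment it "clicks" is usually realizing
# the function can trust itself to handle each subtree.
class Node:
    def __init__(self, value, children=None):
        self.value = value
        self.children = children or []

def walk(node, depth=0):
    print("  " * depth + str(node.value))
    for child in node.children:
        walk(child, depth + 1)   # same problem, one level smaller

tree = Node("root", [Node("a", [Node("a1"), Node("a2")]), Node("b")])
walk(tree)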
But yeah, my kids (one of whom is picking up programming) would be right behind the "ifs and loops" statement.
Somewhere in the 90s a colleague asked why we had to use those pesky types when we had void *. We fought, I won. (But somehow we still ended up with JavaScript 20 years later.)
The way I interpret this is that the 'just ifs and for loops' is like Matrix rain code. In the beginning it looks like gibberish scrolling down the screen. When you master it, it's still gibberish scrolling down the screen, but it's simultaneously something else on another level as well.
I often find myself writing simple things with a compact but high-level conceptualization, and when they're edited by someone else, it's clear that person only saw the rain.
Ifs and for loops are trash. Real programmers just write massive linear algebra operations that they can throw on a cluster of GPUs to get 50,000x parallelism. ;)
This was my first reaction too. Now I work in a (Swift) codebase where I could probably count the number of for loops we have on one hand. Pretty much everything is done through map/reduce/filter, etc.
At first, I thought it wasn't as readable, but now that I'm used to the syntax, I think it's much easier to parse what happening. I know what transformation is happening, and I know there shouldn't be any side-effects.
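Roughly this kind of difference (a Python sketch rather than our actual Swift code, and the order data is invented):

orders = [{"total": 120, "shipped": True},
          {"total": 80,  "shipped": False},
          {"total": 45,  "shipped": True}]

# Loop version: you have to read the body to see what's accumulated and mutated.
shipped_totals = []
for order in orders:
    if order["shipped"]:
        shipped_totals.append(order["total"])

# map/filter version: the transformation is explicit and there are no side effects.
shipped_totals = list(map(lambda o: o["total"],
                          filter(lambda o: o["shipped"], orders)))
print(shipped_totals)  # [120, 45]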
We used to have. Now we mostly chain map-functions. But there are those rare instances where we don't operate on iterables and still need to do something many times. I've seen people so weirded out by this that they prefer to generate an iterable to work on rather than use a for-loop.
Lol no. It is a whole set of skills, from extracting just the right abstractions from the problem domain and defining adequate sets of interfaces, to arriving at just the right (clear, concise, mostly-functional) modular implementation.
Almost no one nowadays possesses these skills. All we have is node_modules crap and J2EE-like design bullshit.
This is like saying that poetry is just words and commas.
I'm one of a small number of people at my FAANG company who can work on a legacy MM-LoC Perl project.
Many of them are much smarter than me, but the insight I remember getting around 6 months in is that it's just code: in many places hard to read and follow, but ultimately it just executes a logical set of instructions.
Ah, keen observation, young grasshopper! But nota bene: just as man cannot live on bread alone, one's understanding does not arrive merely from the consideration of a collection of atoms.
Just to be clear: a language with only if() conditionals and for() loops with a known number of iterations (so, for instance, all for-each loops) isn't actually Turing complete. This is called primitive recursion, and an easy-to-prove example of something it cannot compute is the Ackermann function.
Also, you can easily see that something is fishy because all such code must terminate (where's the halting problem here?)
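For reference, here's the Ackermann function in Python (the nested recursion on m is exactly what bounded for loops can't express):

def ackermann(m, n):
    # Total and always terminating, but it grows faster than any primitive
    # recursive function, so no program built only from ifs and for loops
    # with a known iteration count can compute it.
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))  # 9 -- keep the arguments tiny; the values explode quickly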
Joke aside, we know a language is Turing-incomplete if it has only ifs and bounded for loops. And if 90% of programmers' tasks can be done in a Turing-incomplete language, are there any benefits we can get from that?
I think it's funny that it's kind of like a blockchain, and as soon as the business people and MBAs realize this, they're going to "evangelize this amazing application of blockchain to all code!" to death.