No, that seems like language bloat. Better to make your language have internal iterators, as then data structures can add their own logic that matches their need for iteration. Then special cases don't need additional syntax.
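To make the internal-iterator idea concrete, here's a minimal Python sketch (names like `Tree.for_each` are hypothetical, not from any particular language): the structure owns its traversal logic, so callers never pick an order.

```python
class Node:
    def __init__(self, value, children=()):
        self.value = value
        self.children = list(children)

class Tree:
    def __init__(self, root):
        self.root = root

    def for_each(self, fn):
        """Internal iterator: the tree decides how it is walked."""
        def walk(node):
            fn(node.value)
            for child in node.children:
                walk(child)
        walk(self.root)

tree = Tree(Node(1, [Node(2), Node(3, [Node(4)])]))
seen = []
tree.for_each(seen.append)  # caller supplies behavior, not traversal
```

The point is that a balanced tree, a trie, and a B-tree can each implement `for_each` differently without any new language syntax.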
Trees aren't really a "special case". Sequences, trees and graphs are core abstractions to most programming, why try to flatten all trees and graphs to sequences?
Because your code is actually running serially, so no matter what there is an iteration order, and that order matters for performance even if your code doesn't care.
For example if you literally don’t care about the order then your code can map over a default iterator that is efficient (DFS).
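As a sketch of that idea (assuming Python; `dfs`/`bfs` are illustrative names, not a standard API): the library can expose a cheap depth-first generator as the default, with other orders opt-in for code that does care.

```python
from collections import deque

class Node:
    def __init__(self, value, children=()):
        self.value = value
        self.children = list(children)

def dfs(root):
    """Default iterator: iterative depth-first, low memory overhead."""
    stack = [root]
    while stack:
        node = stack.pop()
        yield node.value
        stack.extend(reversed(node.children))

def bfs(root):
    """Opt-in alternative for code that does care about level order."""
    queue = deque([root])
    while queue:
        node = queue.popleft()
        yield node.value
        queue.extend(node.children)

root = Node(1, [Node(2, [Node(4)]), Node(3)])
scaled = list(map(lambda v: v * 10, dfs(root)))  # map over default order
```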
Not true due to parallelism. Also, SQL demonstrates that you can describe the desired result declaratively without being concerned about iteration order. I see SQL's CTEs as a good example of the kind of primitive the article is talking about.
Also a lot of trees have fixed arity, like binary trees. But to model a general tree, you'd need a dynamically allocated list of children for each node, adding a layer of indirection that can noticeably worsen performance.
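A rough sketch of the contrast (Python objects are heap-allocated anyway, so this only illustrates the shape, not the C-level memory layout): fixed arity lets you drop child pointers entirely by storing the tree in one contiguous array, while a general tree needs a resizable child list per node.

```python
# Implicit binary tree in a flat array: children of index i live at
# 2*i + 1 and 2*i + 2, so the whole tree is one contiguous block.
flat = [1, 2, 3, 4, 5, 6, 7]

def child_indices(i, size):
    return [j for j in (2 * i + 1, 2 * i + 2) if j < size]

# A general tree needs a resizable child list on every node -- one
# extra allocation, and one extra indirection per traversal step.
class GeneralNode:
    def __init__(self, value):
        self.value = value
        self.children = []  # separately allocated; may move when it grows
```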
Trees are often considered too diverse to include even in the standard library, let alone as a primitive. Even Python doesn't have trees in the standard library. I'm sure it's been proposed and rejected at some point.
> But to model a general tree, you'd need a dynamically allocated list of children for each node, adding a layer of indirection that can noticeably worsen performance.
I hate to be a broken record, but this again goes back to the assumption of a specific sequential computation model. Modelling relations as in SQL automatically supports N-ary child relations. There are other computation models! e.g. logic, relational, etc.
I'm talking about cache optimization. Fundamentally, you simply cannot guarantee data locality while also allowing resizing. Growing the child list may necessitate moving it in memory. This is true no matter what your computation model is.
Those other models lack native mechanical sympathy with the hardware.
Let's be practical about it: the reason you might want to write map_tree is so it could potentially be optimized to run more quickly, e.g. via automatic parallelization.
But to even parallelize a tree read operation, we'd have to pull in other threads and split the work; we'd want metadata on each node (e.g. child counts); and at the end of the day, within each worker, you'd still have a serial execution order.
For map_tree to have practical benefit your language would need to make a ton of opinionated choices about trees and parallel programming. Feasible? Yes. Worth making it a basic primitive? Well, even LISPs don't offer map_tree, despite building the whole language on top of car/cdr.
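For illustration, here is a minimal map_tree sketch in Python (names and the naive one-task-per-subtree splitting are assumptions, not a proposal for how a language primitive would actually work):

```python
from concurrent.futures import ThreadPoolExecutor

class Node:
    def __init__(self, value, children=()):
        self.value = value
        self.children = list(children)

def map_tree(fn, node):
    """Structure-preserving map: same shape, transformed values."""
    return Node(fn(node.value), [map_tree(fn, c) for c in node.children])

def map_tree_parallel(fn, node, executor):
    # Naive work splitting: one task per top-level subtree. A real
    # primitive would need policies for chunking, depth cut-offs,
    # node counts, error handling -- the "opinionated choices" above.
    futures = [executor.submit(map_tree, fn, c) for c in node.children]
    return Node(fn(node.value), [f.result() for f in futures])

root = Node(1, [Node(2), Node(3)])
with ThreadPoolExecutor() as ex:
    doubled = map_tree_parallel(lambda v: v * 2, root, ex)
```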
> Sequences, trees and graphs are core abstractions to most programming, why try to flatten all trees and graphs to sequences?
Because "trees" and "graphs" aren't actually single concepts, they're families of concepts that can vary along many dimensions. Most programming languages offer one or a small handful of sequence types, because most sequence use patterns are the same. Most languages don't offer a tree or graph type, and as soon as you try to implement one you see why: there are multiple different kinds of trees and graphs, and multiple different representations that are appropriate for different use cases, and any one you might pick would be unsuitable for most programs people want to write.
> Most languages don't offer a tree or graph type, and as soon as you try to implement one you see why: there are multiple different kinds of trees and graphs, and multiple different representations that are appropriate for different use cases
I think it's more because the abstract interfaces for trees and graphs that can support multiple representations aren't as well known [1]. An iterator/sequence interface has a simple abstract structure that everyone immediately understands, but the structure needed to abstract over graphs and trees is trickier.
The point wasn't that it should be treated as a sequence or not, but that with good base abstractions you can move the decision of what to do, and how, into the data structure itself, rather than having the language decide for it.
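One way to sketch that in Python (the `Mappable` protocol and the `Leaf`/`Branch` types are hypothetical, just to show the shape of the idea): generic code works against a small interface, and each structure implements it however suits its representation.

```python
from typing import Any, Callable, Protocol

class Mappable(Protocol):
    def map(self, fn: Callable[[Any], Any]) -> "Mappable": ...

class Leaf:
    def __init__(self, value):
        self.value = value

    def map(self, fn):
        return Leaf(fn(self.value))

class Branch:
    def __init__(self, left, right):
        self.left, self.right = left, right

    def map(self, fn):
        # The structure decides how to recurse; callers never do.
        return Branch(self.left.map(fn), self.right.map(fn))

def increment_all(structure: Mappable) -> Mappable:
    """Generic code: knows the interface, not the representation."""
    return structure.map(lambda v: v + 1)
```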