> The essence of make is this: make is an implementation of constructive logic programming
To be perfectly pedantic, the essence of make is that it is an expert system for traversing a directed acyclic graph (DAG).
It works by constructing a graph of “filenames” (vertices) and the commands required to create those files (directed edges). Using that as its knowledge base, make performs a reverse topological sort from the target filename to determine the graph traversal necessary to create its target file, and runs each command in the traversal so as to arrive at that target. Since it has a list of all intermediate vertices that must exist before its target can exist, make is further able to determine whether it needs to run a command or whether it can use the cached result.
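A toy model of that traversal might look like the sketch below. This is not make's actual implementation; the rule/mtime representation is invented for illustration, but it shows the demand-driven recursion and the freshness check that decides between running a command and using the cached result:

```python
def build(target, rules, mtime, runner):
    """Rebuild `target` on demand (a toy model of make's DAG traversal).

    rules:  target -> (list of prerequisites, command label)
    mtime:  filename -> timestamp, for files that currently "exist"
    runner: called with the command label when a rule must run
    """
    if target not in rules:
        return  # a source file: nothing to build
    prereqs, command = rules[target]
    for p in prereqs:  # bring every prerequisite up to date first
        build(p, rules, mtime, runner)
    # Run the command only if the target is missing or older than a prereq
    stale = target not in mtime or any(mtime[p] > mtime[target] for p in prereqs)
    if stale:
        runner(command)
        mtime[target] = 1 + max((mtime[p] for p in prereqs), default=0)
```

A second invocation with unchanged inputs runs nothing, which is the "cached result" behavior described above.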
I don’t think the author is doing us any favors by trying to fit make’s behavior into a constructive logic system. It’s missing operators for doing that, as the author states: “Note that the form of compound propositions allowed is extremely restricted, even by the standards of logic programming.”
I find the author’s conclusions, then, a mixed bag. They would be good conclusions if make were a constructive logic system, but it’s not; it’s an expert system for traversing a DAG. I think of make as a “workstate” system: do what you need to do to put my build in a particular state, whereas most of the author’s conclusions center around “workflow”: move this unit of work through its lifecycle.[1] Make only performs that work incidentally to putting your build in a particular state.
1: The workflow/workstate distinction is not my idea. It’s explored in “Adaptive Software Development” by James A. Highsmith III.
To build on the pedantry, make would be traversing a restricted form of directed acyclic hypergraph, no? Each edge goes from a set of vertices to a single vertex.
I found these structures cropping up alongside Horn clauses and Context-Free Grammars in interesting ways. In this case, viewing make rules as Horn clauses lets us use a variant of Horn-SAT to find the minimum set of files we have to produce to satisfy a rule, which is interesting to think and reason about.
This article felt as though it was food for thought and not really trying to push any particular idea too hard. In any case, I found it interesting as it found parallels to the sorts of structures I found when exploring various algorithms for Context-Free Grammars (such as pruning, or detecting erasable non-terminals, which can both be described in terms of Horn-SAT).
Well, you have a large graph defined with multiple end points, so the DAG itself doesn't necessarily point to any single vertex. Only once you define which endpoint you want does it collapse.
Well, if you want file `X` then you can ensure you find the minimal set of files to produce `X` from your given rules using Horn-SAT by adding `and X` to the Horn Formula representation of the build rules. Pretty neat!
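As a sketch of that idea: each rule `target <- prerequisites` is the Horn clause (p1 ∧ … ∧ pn) → target, and files on disk are unit clauses. Assuming, for simplicity, exactly one rule per target (with alternative rules you would need genuine Horn-SAT-style propagation to pick a minimal model), backward chaining from the goal collects the minimal set of files to produce:

```python
def needed(goal, rules, sources):
    """Minimal set of files that must be produced to obtain `goal`.

    rules:   target -> list of prerequisites (one rule per target)
    sources: files that already exist (unit clauses / facts)
    """
    if goal in sources:
        return set()  # already on disk: nothing to produce
    if goal not in rules:
        raise ValueError("no way to produce %s" % goal)
    out = {goal}  # the goal itself must be produced
    for p in rules[goal]:
        out |= needed(p, rules, sources)  # plus everything its prereqs need
    return out
```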
It's very nice, but the cost of changing from Make means that most people don't use it. The Ninja design docs reference Tup, so they're aware it exists.
Solving the LaTeX problem of “Label(s) may have changed. Rerun to get cross-references right.” is hard. The assumption of make is that the only state is in the file system, i.e. running the same inputs should produce the same outputs. The above message shows that LaTeX violates this constraint.
The simple solution is to (ugh) write a wrapper script which runs LaTeX, and creates another output file if the labels are wrong. Only after the labels are right does the output file stop changing.
Which means the dependencies are a circular loop. Which Make doesn't handle. <sigh>
The limitations of the 1970s design are apparent. But Make is so astonishingly powerful that anything new has to be much better than Make. So far that doesn't seem to have happened.
Just as a practical tip: use latexmk, a utility that runs all the necessary tools the necessary number of times to get everything right (labels, cross-references, indexes, etc.). It works really nicely with Make.
I found make too tightly coupled to certain programming practices (implicit extension-based rules) and not lazy enough; hence the dual design of Tup. Tup felt a little too restrictive, even though the smaller scope is a relief. The truth lies in the middle; I'm sure make will have a prodigal son sooner or later.
I think that at this point, with 37-some years' worth of Makefiles on the planet, a competitive replacement for make with a different syntax or format would need to provide tools to translate between the old and new formats, as Perl did with the (uni-directional) a2p and s2p tools for awk and sed scripts, so that people considering a transition could make the leap more easily and with greater confidence.
hehe, seems like I didn't RTF-make-M deeply enough. Indeed, a migration plan would be of immense value. But I fear that, more than the syntax, it's make's semantics that would hurt this plan, since AFAIK it has non-trivial scoping rules for variables. Translating complex Makefiles might prove intractable.
> The simple solution is to (ugh) write a wrapper script which runs LaTeX, and creates another output file if the labels are wrong. Only after the labels are right does the output file stop changing.
The other alternative is to treat the rerunning of latex until it reaches a fixed point as a single build step, instead of multiple ones.
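As a sketch, that single build step could be a loop that re-runs the tool until its observable state (e.g. the contents of the `.aux` file) stops changing, with a cut-off so a non-converging document fails loudly. The callable interface here is invented for illustration:

```python
def run_to_fixed_point(step, read_state, max_iters=10):
    """Run `step` until `read_state()` stops changing.

    Returns the number of iterations used; raises RuntimeError if no
    fixed point is reached within `max_iters`.
    """
    prev = None
    for i in range(1, max_iters + 1):
        step()
        state = read_state()
        if state == prev:
            return i  # output stopped changing: fixed point reached
        prev = state
    raise RuntimeError("no fixed point after %d iterations" % max_iters)
```

For the LaTeX case, `step` would invoke the `latex` binary via a subprocess and `read_state` would hash the `.aux` file; make then sees one rule that either produces a stable PDF or fails.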
And? That's no different than the build script calling any external executable. It may stop, it may not stop, it may format the hard drive. Make doesn't (and shouldn't) care.
Again: so what? The build system doesn't need to care whether it will stop. It will just run it; if it doesn't stop, it doesn't stop and will require human intervention, but you have the same problem when running LaTeX manually. At some point you have to decide whether another iteration makes sense, and if, without a build system, you would decide after 10 iterations that you should probably stop, then you can also tell your build system to give up after 10 iterations.
C is also Turing-complete, so I cannot reliably determine whether my program will stop. That doesn't preclude me from running it.
If anyone's interested, I generalized make to work beyond files (e.g., if the tarball doesn't exist on the website already, then it must be uploaded; before uploading, the tarball must have been tested). Also to add parameters to rules, which work a bit like .c.o: rules in make, but fully general with multiple parameters.
To be fair I haven't worked hard to make this very usable, but I do use it myself a lot for all sorts of interesting things:
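I haven't seen the tool itself, but the generalization described reads roughly like replacing make's "does this file exist and is it fresh?" check with an arbitrary predicate. A minimal sketch of that idea (all names invented for illustration):

```python
class Rule:
    """A build rule whose 'target exists' test is an arbitrary predicate,
    not a stat() call, so targets can be uploads, test runs, etc."""
    def __init__(self, exists, produce, deps=()):
        self.exists = exists    # () -> bool: is the target already satisfied?
        self.produce = produce  # () -> None: bring the target into existence
        self.deps = deps        # names of rules that must be satisfied first

def satisfy(name, rules):
    """Depth-first: satisfy dependencies, then produce the target if needed."""
    rule = rules[name]
    for dep in rule.deps:
        satisfy(dep, rules)
    if not rule.exists():
        rule.produce()
```

With `exists` checking the website and `produce` doing the upload, "upload the tarball after it has been tested" becomes an ordinary dependency edge.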
Personally I've found make incredibly useful as an init system (though it takes a static make in /bin). I have targets like mount_a, %.start, %.stop (those both turned out to be overkill so I just wrote the start/stop targets by hand, but it works fine), halt, etc.
Works great, is very lightweight, does away with the antiquated idea of integral runlevels, and is very easy to debug. Plus you can probably set it up in a single afternoon and then just boot with init="/bin/make -f /etc/init.mk multi_user".
I believe that one crucial feature of Make is that it does not plan the construction of a target from start to finish; construction and planning are interleaved. Make does not look at the file system and build a plan which it then executes; rather, it executes the next step and then looks at the file system again. This precludes using Make to generate a shell script (much like a compiler would) which is then executed. I'm not sure this interleaving is present in the description of Make in the article being discussed.