Qake: GNU Make-based build system with a different approach (github.com/mkpankov)
54 points by nkurz on Nov 4, 2014 | 30 comments



I've been recently using Tup [0] with great success in a project. It is interestingly different from other build systems in the sense that it defines dependencies in the opposite direction (from source file to product, not the other way around). It is incredibly fast and simple.

[0]: http://gittup.org/tup/
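
For a flavour of the direction reversal, here's a minimal Tupfile sketch (file names made up): each rule reads "inputs |> command |> outputs", so every source points forward at what it produces.

    # compile each .c into a .o, then link the objects into a program
    : foreach *.c |> gcc -c %f -o %o |> %B.o
    : *.o |> gcc %f -o %o |> hello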


Tup came to my mind too when reading this. It must be doing a similar kind of dependency tracking to only rebuild what is necessary.


I did not know about tup. It looks excellent. Time to try some experiments...


I'm finding that, for compiling code, it is usually the configuration that makes life hard rather than the build-system part. I've been using cmake [1], which makes the rules to build object files, libraries, and executables pretty simple, although determining and adapting to the eccentricities of any arbitrary system's environment is painful.
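
To illustrate how little boilerplate the rule side needs, here is a minimal CMakeLists.txt sketch (project and file names made up); the painful part is everything the configure step has to discover about the host system, not these declarations.

    cmake_minimum_required(VERSION 2.8)
    project(hello C)
    add_library(util STATIC util.c)        # static library from one source file
    add_executable(hello main.c)           # the program itself
    target_link_libraries(hello util)      # link the program against the library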

I am also liking snakemake [2] these days for running arbitrary chains of jobs that have dependencies (in addition to using it as a build system). It has a nice syntax, makes it easy to build and run dependency graphs of different jobs, and has built-in multithreading support and cluster integration if you want to scale up to bigger data sets. It's a very nice middle ground: much easier to maintain and rerun than a pile of shell scripts, but much lighter weight than a whole Hadoop "big data" setup.

[1] http://www.cmake.org [2] https://bitbucket.org/johanneskoester/snakemake/wiki/Home
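
To give a feel for it, here's a minimal Snakefile sketch (rule, file and command names are made up): rules declare their inputs and outputs, the first rule is the default target, and something like `snakemake -j 8` walks the resulting dependency graph in parallel.

    # default target: request the final outputs for two hypothetical samples
    rule all:
        input: expand("aligned/{sample}.bam", sample=["a", "b"])

    # one job per sample; {threads} is capped by whatever -j allows
    rule align:
        input: "reads/{sample}.fastq"
        output: "aligned/{sample}.bam"
        threads: 4
        shell: "aligner -t {threads} {input} > {output}"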


Yeah, previously we had a no-configure setup (just a configuration file). I have some thoughts on how this could be implemented properly in Qake, but I haven't gotten to it yet.


I've always wondered why Shake [0] isn't more popular. Does anyone dislike Shake, or know why others don't use it, or is this just a case of it not being well known?

0: https://github.com/ndmitchell/shake


The manual [0] opens with an intro that isn't particularly compelling. It contains things like:

   phony "clean" $ do
        putNormal "Cleaning files in _build"
        removeFilesAfter "_build" ["//*"]
Which would be the make equivalent:

  clean:
    rm _build/*
Now that's obviously the trivial example. But it's not trivial to drop make entirely for something completely different. A make solution can slowly get more complicated over time. It looks like shake might be useful for something really big, but I can't quite tell where the investment of learning a new tool and re-developing something (that works well enough) would pay off.

0: https://github.com/ndmitchell/shake/blob/master/docs/Manual....


Yes, but that 'make clean' will stop working if there happens to be a file named 'clean' in the directory. You need to mark clean as a .PHONY target.

Make is complicated. By the time you get all *.c files to rebuild based upon a change to any header they include, your makefile is going to look like a magic spell. And it only gets worse from there if you're doing any code generation. Make is concise, but it's weird and painful.
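
For reference, the usual incantation is something like the sketch below (file names made up): gcc's -MMD -MP flags emit .d fragments listing the headers each object actually includes, and -include pulls those fragments back in so header edits trigger rebuilds. It works, but it is exactly the kind of magic spell described above.

    SRCS := $(wildcard *.c)
    OBJS := $(SRCS:.c=.o)

    prog: $(OBJS)
    	$(CC) $^ -o $@

    # -MMD writes a .d file next to each object as a side effect of compiling;
    # -MP adds dummy targets so a deleted header doesn't break the build
    %.o: %.c
    	$(CC) -MMD -MP -c $< -o $@

    -include $(OBJS:.o=.d)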

I've not actually tried shake, but it looks a lot less magical. I'm willing to type a little more if it means I can actually understand it when I look at it again tomorrow. Or, that my coworkers can understand me today.

Though, on an open-source project I'd probably use CMake just for ubiquity. It has the added bonus of being able to spit out either makefiles or ninja build files (which can be used by shake).


As someone who has recently been getting into Haskell, I can say this: sure, that example is rather more verbose for the simple case, but the more complex stuff is where it starts to shine.

That, and without knowing a little Haskell, a lot of that will look like needless chatter. But, as with rake, once you realize the DSL is just Haskell it starts to fall into place.


Which is fine if you're using Haskell. I'm not sure I see the case for something like this if you're not using Haskell. Adding another package management system (cabal) into the mix seems like a lot of work.

Of course I don't think language-agnostic build systems in general make a whole lot of sense unless you have a huge enough project to have a team dedicated to the build system.


In my case, I was entirely unaware of Shake's existence. The no-nonsense support for file oriented workflows is the most compelling feature of Makefiles for me.

In that regard, Shake seems really compelling. It supports the file-oriented nature of my build system. It's backed by an actual programming language, with lexical scoping, functions, a module system... all the things Makefiles lack.

Most importantly, it solves the one thing that drives me up the wall with Makefiles: supporting custom command line arguments sanely. I detest typing `make web TARGET=prod`, and `shakeArgsWith` seems to provide the flexibility of specifying targets and flags as proper command-line arguments.
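
A rough sketch of what that might look like (the flag name and build logic here are hypothetical, not taken from any real project): shakeArgsWith takes a list of GetOpt option descriptions and hands the parsed flags plus the remaining command-line targets to your rules.

    import Development.Shake
    import System.Console.GetOpt

    newtype Target = Target String

    -- one hypothetical flag, so `./build --target=prod web` replaces `make web TARGET=prod`
    flags :: [OptDescr (Either String Target)]
    flags = [Option [] ["target"] (ReqArg (Right . Target) "NAME") "deployment target"]

    main :: IO ()
    main = shakeArgsWith shakeOptions flags $ \opts targets -> return $ Just $ do
        let tgt = last ("dev" : [t | Target t <- opts])
        want (if null targets then ["web"] else targets)
        phony "web" $
            putNormal ("building web for target: " ++ tgt)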

Thanks for this, I'm going to try converting a few non-trivial Makefiles to Shake. Any cons I should be aware of?


For me, it was always more about infrastructure: successfully distributing a project when it's built with something less-than-mainstream is a problem, even if the majority of the install base is Ubuntu systems.

In this particular case, we would need to either build GHC from source for every user, or provide a compiled binary for every Ubuntu version, to keep providing source-level access to the build system.


  Goal
  
  The user is supposed to be never needing to call clean goal.
Does this track build tools also? If you upgrade a compiler and some intermediate binary format changes, will it automatically clean up those stale files?


Currently not. I can imagine how this could be done, but it's a fair amount of work.


This is similar to what I wrote at work, on and off, over the last couple of months, replacing a build system based on recursive Makefiles. Due to the way our product is composed, I ended up adding support for building static libraries, programs, RPMs and documentation, with full support for dependencies: if a source file for a library is changed, the library will be rebuilt, programs that link against it will be re-linked, if a program is part of an RPM the RPM will be rebuilt, and so on. Documentation will also be rebuilt when source files that embed documentation change. Another great bonus is of course that with one GNU Make instance and proper modeling of dependencies, "make -j" works great, every time. I guess we have a couple of hundred source files, and "make -j" will happily start compiling them all. Read Peter Miller's paper about recursive Makefiles for why the above is preferred.

Makefiles are of course a bit limited compared to shell scripts, but you can do a lot with implicit rules, static pattern rules, second expansion, call and eval, etc.
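
As a small taste of the call/eval style (library names made up): a template gets expanded once per library, with $$ escaping whatever has to survive the first expansion, which is where the dollar-counting starts.

    LIBS := foo bar

    # template: the static library for directory $(1), built from all .c files in it;
    # everything that must be expanded later, not now, is written with $$
    define LIB_template
    lib$(1).a: $$(patsubst %.c,%.o,$$(wildcard $(1)/*.c))
    	$$(AR) rcs $$@ $$^
    endef

    $(foreach lib,$(LIBS),$(eval $(call LIB_template,$(lib))))

    all: $(LIBS:%=lib%.a)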


I ended up using $(eval ...) a lot, and it turns out Make's support for it is... suboptimal.

As for paper - I believe it's "Recursive Make Considered Harmful" - I read it, it's great. Was one of the main motivators during build system rewrite.


Three or four levels down in $(eval ...) and $(call ...) still makes me stop and think how many $ I should have.

Yes, that's the paper; I was also heavily motivated by it. And when I read that the JDK had also switched [1] to something similar, it just cemented my belief that it was the right way to go, for the same reasons as theirs.

[1] - http://openjdk.java.net/jeps/138


    wget --no-check-certificate https://... - | sh 
Might as well not have that https in there at all... sigh.


It's a self-signed cert - and just as encrypted as it would be with a traditionally signed cert.

This is the half of SSL that I care about - I really don't care if you handed your money over to some organization that verified you have a working phone number.

--

Actually, it doesn't appear to be a self-signed cert in this case - nor does the flag appear necessary. That cert works fine with both Safari and GNU Wget 1.14.


When you're installing software from https, you're not trying to make sure nobody can see the contents of the message (it's publicly available), you're trying to ensure that there's no man in the middle tampering with your software en route.

A self-signed cert which you can't independently verify is entirely worthless in this context. A man in the middle could simply substitute his own self-signed cert and you'd be none the wiser.

You use signed certificates so that a vendor can prove their identity reliably. I care that the software I'm downloading actually comes from the owner of the domain I'm downloading it from. I can't do that with a self-signed cert.


So the argument here is that we are trusting GitHub, not the author of the software, and that way we can trust that the code we audit on GitHub is the same as the downloaded software. So we don't have to use our own tools to do that audit; we can look at the code on GitHub.

I can see that being a valid argument for GitHub; however, for self-hosted, non-famous authors, the fact that they are who they say they are means nothing to me°. And as such, I'm going to have to audit the software on my box regardless. (Or just forget about auditing and trust that the world is a safe place - which is what most people do anyhow - and if you are doing that, you don't believe in MITMs anyhow.)

°Also, I would argue that a signed certificate doesn't prove that anyhow. And state actors can forge these, so we are now talking about people who control your pipes, but aren't the government, and who haven't hacked the endpoint.


The point is, if you don't verify SSL certificates, you might as well use http. Https with self-signed certs provides no security in any circumstance when downloading public software.

Self-signed certificates and http connections are trivially intercepted and forged (ever used wifi in a public place?).

Signed certificates provide only limited proof of identity, true, but they can't be forged by jokers hijacking the wifi in a coffee shop.


Older versions of wget didn't like wildcard certs (IIRC). I remember this being an issue around the time that github.com went https-only a few years ago.

It's possible that this is necessary here (on older versions of wget), or that using '--no-check-certificate' is a bit of cargo-cultism by the author (who learned to use it, but doesn't know why and when its use makes sense).


I guess you won't mind when I put my own self-signed cert there, then.


A self-signed cert is worthless since it can be trivially MITM'd.


Well, I'm going to admit I blatantly stole that piece from another project I use... Never considered that :)


Another alternative to make is mk [1], originally written for Version 10 Unix before becoming the standard build tool in Plan 9 and Inferno, and later ported to Linux/BSD/OS X as part of plan9port [2].

[1] http://doc.cat-v.org/plan_9/4th_edition/papers/mk

[2] http://swtch.com/plan9port/man/man1/mk.html


Surprised no-one has mentioned redo[0]. I much prefer it to `make` for personal projects these days.

[0] https://github.com/apenwarr/redo


I've always wondered why rake isn't more popular.

It's very make-like and simple, but with the advantage of a fully-featured scripting language.

https://github.com/ruby/rake
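
For anyone who hasn't seen it, a minimal Rakefile sketch (file names made up): `file` tasks carry prerequisites just like make rules, but the body is plain Ruby.

    # rebuild hello.o whenever hello.c is newer, make-style
    file "hello.o" => "hello.c" do
      sh "gcc -c hello.c -o hello.o"
    end

    task :clean do
      rm_f "hello.o"     # FileUtils helpers are available in the rake DSL
    end

    task default: "hello.o"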


In some environments, even upgrading Make to a newer version is a hassle. Let alone installing some different tool.



