Please – A Cross-Language Build System (please.build)
121 points by nikolay on Feb 28, 2018 | 118 comments


I'd say meson is the most direct competitor. They don't mention it in the FAQ though.

http://mesonbuild.com/


>supported languages include C, C++, D, Fortran, Java, Rust

So it isn't general purpose?


Cross language but not cross platform -- no windows support, and none planned.



Based on its lack of entry, am I to understand that Nix is the way to go?


There are some other distro-specific build systems missing, so I wouldn't read much into the omission. But here would be my point against Nix: cross-compiling isn't covered by QA, so it doesn't work, even though it should be supported in principle.


I like this list! But the "Yocto" entry should be fixed. The build system is called bitbake not "Yocto".

There are other projects that use the bitbake build system, not just OpenEmbedded and the Yocto Project.


You include "Yocto" (which someone already pointed out should be bitbake) but not kconfig, which is used by Linux, Buildroot, Busybox, uClibc-ng and a couple more projects.


Nice list! CMake is missing a few dozen entries though!

"Stringly typed" - even lists are just semicolon-separated strings.

Insane quoting rules. I recently tried to work out exactly how the argument parsing works and the source code has undocumented behaviour. It's ridiculous.

Accessing undeclared variables works fine. They're just empty. Good luck tracking down silently ignored typos!
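
A tiny sketch of both complaints (mine, not from the thread): lists are just semicolon-joined strings, and a typo'd variable name silently expands to nothing.

```cmake
set(SRCS a.c b.c)           # SRCS now holds the single string "a.c;b.c"
message(STATUS "${SRCS}")   # prints: -- a.c;b.c
message(STATUS "${SRSC}")   # typo: undeclared, expands to empty, no warning
```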

Scratching the surface...


Tup: No external dependencies

redo: build rules spread across many files


No love for D's DUB on your list?


I left out package managers from all languages and focused on generic build systems. The line is blurry though. Feel free to send a pull request.


"no hate", surely (:


Consider adding the fact that waf requires a binary blob in every god damn repo as a drawback to it.


waf continues to be the underappreciated gem in this space: high level build language supporting many languages and tools, cross platform, implemented and extensible in Python, nothing to install (except Python), and fast.


It is quite fast and Python is a nice language for specifying the build, but I found waf to be difficult to extend.

I had some tests that needed a binary file. That binary file was generated from a text file by a utility program. That utility program had to be built from source.

After a week of trying to get that correctly specified, I gave up. I thought I'd found the right way of doing it several times, but each time I discovered the dependency chain was broken in some way. I looked into how official modules that dealt with generated files were implemented, but I found them to be surprisingly complex.

By comparison, it was only mildly annoying to do that sort of thing in CMake (the build system I moved to after a year or two with waf). The relative simplicity of CMake was what convinced me to switch.

That being said, it's been several years since I last used waf. It may have improved during that time.


I have found the learning curve to be very steep. There are several extension points, subtle and powerful, leveraging heavily not only normal OOP but also Python metaprogramming. It is not out of the question to go beyond all that and monkey-patch some core class; I've seen this recommended by maintainers in some circumstances.

The docs are up front that waf is not a build system but a build system toolkit. For example, there is a demo that uses makefiles as the build language and runs them in waf. Another demo requires no build file but just finds the source files and builds them. But I have yet to see any external project released that says, “This is a build system... built on the waf build system toolkit,” like the Software Construction Toolkit was built on scons. Perhaps the toolkit is so close to being a build system that people just use it as is and require only small customizations, which are kept in their projects.


Wow. How is a system not supporting the most widespread desktop platform on the planet claiming to be cross-platform?

Having said that, there might be a way. From their FAQ:

> It might well be possible to make it work using Cygwin or MinGW / MSYS, or the recent Ubuntu on Windows development though. We aren't doing this work ourselves because we don't have any Windows machines, but we're happy to accept PRs in that direction.

Not sure I want to use a platform which the dev team can't test (and there are other alternatives), but it might work after all.

A side remark, the link posted below in this thread contains an awesome "awesome" list. For those interested, here is a (truly) awesome list of dataset resources: https://github.com/awesomedata/awesome-public-datasets.


Cross-platform just means more than one platform, nothing more. There are segments of the developer market that don't need Windows support: those targeting Linux/BSD, Mac desktops plus Linux-based backends (i.e. cloud), multiple Unixes (e.g. Linux + Solaris), embedded development on Linux boxes, etc.

I encourage Windows versions of cross-platform apps wherever the app could justify it due to all the developers and tooling on Windows. Yet, non-Windows apps can be cross-platform.


They could be cross-platform, but that's not really what people think. When someone says cross-platform, I definitely expect Windows support.


If someone says cross platform and it supports Linux, Mac, and BSDs, that is good enough for me. I have not done any development work on Windows in my entire life just like I have not done any development work on my Android phone.

Just offering a counterviewpoint.


I for one do not expect Windows support in a build system. Recently, someone argued that adding support for spaces in filenames in make adds complexity we can avoid. I'd argue adding Windows support adds complexity we can avoid; I'd rather they didn't support Windows. We can revisit the issue if enough people still use Windows on the server ten years from now.


Thanks for letting me know not to bother!


How about support for multiple target architectures?


[flagged]


Hi, please consider reading https://news.ycombinator.com/newsguidelines.html for the future (snark and flamewar baiting is discouraged here).


So, it's like a worthless version of Bazel?


No Windows support is more of a feature than a downside.


I Googled to find the repository of a build system I wanted to mention and stumbled upon something else that I found interesting: https://shakebuild.com/

> Shake is a library for writing build systems. Most large projects have a custom-written build system, and developers working on the project are likely to run the build system many times a day, spending a noticeable amount of time waiting for the build system. This document explains why you might pick Shake over alternative tools for writing build systems (e.g. make, Ant, Scons).

https://shakebuild.com/why

Shake is written in Haskell and is open source.


One of the larger public projects working to migrate to shake over time is ghc. The current ghc build system uses a recursive make setup, and all that such entails. (:


Why are the examples in some horrible slide set instead of just on the website?


Since this thread has long since devolved into a discussion about build systems in general: if you don't need a build system with Windows support, go with tup, which is otherwise awesome. If you don't care about bloat in your dev tools, go with meson (a modern scons replacement). If you need cross-platform support and don't like requirements on needless runtimes, I'm still searching for a good replacement for cmake (BSD makefiles accomplish much of the same with a saner syntax, just not as portably, unless you need the ninja/msvc/qt/xcode intermediate build files).

Premake was promising but I fear it fell victim to early hype syndrome.


What's wrong with tup on windows? (I've never used it on windows, but its docs say that windows is supported).


Tup requires fuse because they use fuse to monitor the file system for changes instead of kqueue/inotify/FindFirstChangeNotification.

Fuse support on Windows is a no-go, very unstable, broken, and unavailable by default.


Why do all these build systems have magic built-in rules and special syntax? Custom languages, etc.

makefiles without any magic are comprehensible, but people quickly add magic :(


Both gnumake and bmake have lots of tooling, partly magic, at least for compiling C code.


gnumake has at least a half-dozen magic built-in rules, no?


A bit more than a half dozen, but mostly inconsequential:

https://www.gnu.org/software/make/manual/html_node/Catalogue...
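
You can inspect that catalogue locally: `make -p` dumps GNU make's internal database, built-in rules included (a sketch; `-f /dev/null` just avoids needing a makefile).

```shell
# Dump make's internal database without running any build, then pull
# out the classic built-in pattern rule for compiling C.
make -p -f /dev/null 2>/dev/null | grep '^%.o: %.c'
```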


I said, makefiles without the magic :)

If you only use plain rules and maybe simple pattern rules, avoid configure step, etc. Makefiles are simple, pretty and very generic.

Why do all other build systems have so much magic?

Why not declare rules and input/output in a pure functional manner?


I have used please, on a project also written by the please developers, of which I am not.

While my experience of `plz` may have been marred by the project that used it, I can't say I found it added anything special.

It arguably doesn't meet the needs of the `plz` developers themselves, as evidenced by the custom build rules in the project that overrode the built-ins. And at one point, it was suggested to us that we drop plz and move to Buck/Bazel/Pants, which was said to be a relatively simple replacement.

Sure, it built stuff, but the general impression was that it got in the way while, due to the way it was configured, being essential to keeping the project running.

And why are there 20 releases on the 25th of November?


> Also we chose the domain name before almost anything else (priorities!).

I'm glad I'm not the only one with this problem :P


This looks so similar to Bazel..

What's the advantage of using Please?


There's a section in the FAQ titled "WHY USE PLEASE INSTEAD OF BAZEL, BUCK OR PANTS?" https://please.build/faq.html


I read that part, but the explanation is poor at best. Basically, they don't like the JVM, and that's really all they said.


It's tuned to the preferences of another company; if you prefer their taste on the finer points, that's one reason. Secondly, it's written in Go, so installation should be simpler.


What's hard about "brew install bazel"?


Well, for one, Linux/Windows/Solaris don't "brew"...


Apparently Windows doesn't "please" either! :)


nix for Linux, and maybe Solaris. Unfortunately, on Windows you have to go through the manual instructions.


Why should I have to install the jdk for a build tool when I’m not using Java myself?


Why should I have to install dependencies when I don't directly use them!


I don't notice people complaining about needing Ruby, Python, Perl or other things installed for a given tool that depends on them nearly as much as Java. Interesting.


I actually do (internally) complain when some tool uses Python/Perl/Ruby. It's usually painful to install and maintain compared to tools written in C or Go.

Especially virtualenvs are an axe I like to grind.


Python is generally preinstalled on all distributions I know of, because it's needed for some utility applications.

Ruby, on the other hand, gets quite a bit of flak as well, because most distributions are so far behind the official releases. That's probably why embedding a Ruby binary has become a thing.


Are you opposed to java, or to installing being more than one step?


I’m opposed to tools bringing along the equivalent of an entire operating system, with a massive attack surface, versioning issues, separate package management, and everything else along for the ride.

A self-contained binary has everything it needs to function and is typically much smaller. If I’m not using Java for my work, then I don’t want to be encumbered with having to think about it, or stay up to speed with releases and everything else.

I like to see projects with as few dependencies as possible, even when statically linked. Bringing in the JDK when it’s not needed is tiresome.


Perhaps OpenJDK can be installed locally by Please when not detected in $PATH?


I mean simple in terms of the theoretical possibility of a single binary vs. any form of local compilation, packaging, or dependencies. In practice, I didn’t check, maybe they depend on other native libraries and installation is more than downloading a single executable.


Downvote? Both points were stated in the FAQ. I did extrapolate from Go to the installation point.


No runtime dependency on the JVM is huge.


Why?


The JVM is okay if you need to build the build-tool (or whatever tool you're interested in). Ideally, to use such a tool, one shouldn't have to install an entire programming language and development environment. Personally, if I work in languages other than Java, I don't want to install the JDK just to use a cool tool. The tool should stand on its own.


When it comes to a build system with Python-like syntax, I prefer Waf (https://github.com/waf-project/waf) to the rest of the competing solutions. It's written in Python, provides excellent support for many build scenarios, can be extended with Python scripts without any restrictions, and has minimal dependencies (literally, you can ship the packed Waf binary with your project and it'll work on all major platforms that have Python).


Meson and Ninja being adopted by GNOME is game-changing.


I dislike that the only way they list to install is "curl https://get.please.build | bash". I know it's fast and easy but it really leaves your computer at their mercy.


How? It's over HTTPS, and you're already trusting them to execute code on your system... and it's not even root...

I see no way in which this "leaves your computer at their mercy" more than any other process of purposefully executing code they control on your system.


Luckily it's not root. Now it can only delete my entire $HOME!


I agree with the point you’re making and you clearly know what you’re talking about but:

I would caution you to use the phrase “no way in which” when discussing security - the less informed may read this and believe it.

While it's an edge case requiring a malicious targeted attack, in this case there's at least the possibility of being MITM'd.

The problem, as you're probably aware, with using absolute terms in a case like this is that it's easy to extrapolate this sense of safety to something that may lead to an attack requiring much less precision than convincing the browser that the MITM proxy is "please.build".


A glance inside their shell script shows they don't protect against something as simple as a broken connection; curl | bash is vulnerable to partial execution.

For the inner downloads in the script, they use the -fsSL flags, which would protect against such broken behaviour. But not their user-facing script.

More to the point, the install just downloads:

https://get.please.build/${GOOS}_amd64/${VERSION}/please_${V...

then unzips and links it to PATH. No checking the source isn't corrupt, no checking if the tar archive successfully expands. (And the var GOOS seems to depend on an environment variable I don't think is guaranteed to exist. It certainly doesn't on my Mac.)

If that's the case... Why not just provide a download link? It won't have the same issue as a broken install if the connection drops, and is just as easy. The only technical bit, linking to PATH, is something the end audience could be expected to know.
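
For what it's worth, the usual guard against partial execution is to wrap the whole script in a function that is only called on the last line; a truncated download then dies on a syntax error instead of running half the commands. A hypothetical sketch (the URL is made up):

```shell
#!/bin/sh
set -eu  # abort on any failing command or unset variable

main() {
    # -f makes curl fail fast on HTTP errors instead of piping an
    # error page into tar (the flags their inner downloads already use).
    echo 'would run: curl -fsSL https://example.com/tool.tar.gz | tar xz'
}

# Nothing above executes until this line, so a partially
# downloaded script is inert rather than half-run.
main "$@"
```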


>the less informed may read this and believe it.

What they should be doing is to understand what's actually going on.

Once you download and run software from a TLS enabled website, you're putting trust in that website. It doesn't really matter if you are doing 'curl https://example.com | bash' or downloading a binary. They can't be MITMed any more so with the curl way than downloading a binary.

That's all there is to it. I realize there are many security people who advise against doing the curl thing, but I feel people should realize that advice is a rule of thumb aimed at unsecured websites.


Completely agree that they should understand what’s going on however:

It’s sometimes easy to forget that there are various reasons people are less informed. For people early in the career for example, reading security advice on HN and believing it is not unheard of especially when you have real experts on here who know what they’re talking about.

Something that is “all there is to it” for you is not necessarily the same for someone else. I’m not saying it’s anyones job to inform them; just that it’s better not to speak in absolutes.

If security folks say something, I assume there’s a good reason until I become a security folk myself - it’s one of those fields where being sceptical about everything third-party helps.

My point really wasn't about the MITM, but that there's always a possibility that a proxy on a public/compromised network can intervene between you and a "secured" website.

Binaries can be verified with checksums to make sure that the artifact hosted on the repository is indeed what you received. You are 100% correct that you are still trusting the code from the third party developer though!


>My point really wasn't about the MITM, but that there's always a possibility that a proxy on a public/compromised network can intervene between you and a "secured" website.

Same can happen to a binary.

>Binaries can be verified with checksums to make sure that the artifact hosted on the repository is indeed what you received

Here's the problem. The checksum is usually on the same page the binary is located. Which pretty much defeats the purpose.


I see, and this was your original point.

I guess the only threat there (that I can see) is that the install script can be malicious, but your point was that you’re trusting the owners of the website anyway by downloading their binary and executing their code locally. It can be argued that it’s probably easier to ship malicious code outside the main repository (e.g. in an install script) but I do not have a good counter besides this weak argument.

The checksum is indeed usually on the same page and does make it useless in my hypothetical MITM.


Are there any serious comparisons with package managers versus curl-to-bash? Seems as if there are a lot of trade-offs.


this seems like a 'perfect is the enemy of good' framing.

in any event, it's more surface area. their web server being compromised and serving a bad shell script is just more that can go wrong.


> in any event, it's more surface area. their web server being compromised and serving a bad shell script is just more that can go wrong.

If they were serving up a binary you would have the same exact threat that you mentioned.

The threat model barely, barely changes when talking about curl | sh vs downloading and manually executing a binary. Barely.


The name is not very google friendly. It took me a bit longer than I would have liked to find examples for my specific use-case.


I could see them using "plzbuild" as their version of "golang", for the same problem.


Very cool site!


Apologies in advance for shitting on this, but PLEASE STOP BUILDING BUILD SYSTEMS.

We already have a serious incompatibility problem with projects using autotools vs CMake vs Meson vs gyp vs Boost.Build vs SCons vs BUCK vs... and now we throw Please onto the pile.

It sucks when you find a smallish library and discover it uses an esoteric build system whose dependencies dwarf the library themselves (cough Yoga). The OSS community needs build system consolidation, not tacking on a 15th wheel to the cart.

Read the FAQ, it's just bananas:

> It [Bazel] is a great system but we have slightly different goals

> We preferred [Buck] to other options available, but again we're focused on different goals

> we didn't think it [Pants] was the ideal fit for us at the time

"All other systems had slightly different goals or weren't the exactly ideal fit, so rather than contribute to or extend them we poured an enormous amount of energy into rebuilding them in a slightly different way."

Relevant: http://www.rojtberg.net/1481/do-not-use-meson/


Could you please not use uppercase for emphasis, regardless of how much build systems annoy you? It's basically yelling, and the site guidelines ask you not to: https://news.ycombinator.com/newsguidelines.html.


Some might think, myself included, that some of the problems you state are caused by build-system dogma, e.g. "stop building build systems; other projects aren't moving, so doing your own makes you esoteric". Current build systems suck. It also sucks that there are so many. It's unfair that everyone has to hear that they contribute to the latter just because they want to solve the former.

What sucks more to me than finding a lib using its own build system is finding a lib that won't adopt more modern ones, or languages that refuse to fix their de facto language-specific build systems, both caused by the sentiments you state about having too many and/or it being too hard for builders to change their ways. There is a middle ground here, and it starts with asking software to move forward, not asking it to stop.


>PLEASE STOP BUILDING BUILD SYSTEMS.

This could also be said about many other things, for example package managers and test suites. Even worse is that every language feels obligated to build its own, because its users are obviously the best programmers on the planet (their using that language is proof of that, obviously). What results is dozens upon dozens of restrictive and sub-par build systems, package managers and test suites that are all used by only a small group of people and are just an extra dependency for everyone else.

>It sucks when you find a smallish library and discover it uses an esoteric build system whose dependencies dwarf the library themselves (cough Yoga). The OSS community needs build system consolidation, not tacking on a 15th wheel to the cart.

Developers generally don't think of the build system as a dependency because they already have it installed on their own machine. Though it should also be mentioned that a big portion of developers (especially those using languages that encourage installing everything with their own package manager) have a very low or no barrier for adding new dependencies.


> Relevant: http://www.rojtberg.net/1481/do-not-use-meson/

Meson is fantastic.


Except it’s a build system for C/C++ that brings in a dependency on the entire Python kitchen sink just to build a 10kb otherwise dependency-free binary.


The systems that I use it for all have Python installed by default, so really it's just: 'pip install meson', which takes all of a few seconds to run.

The binary it produces is still dependency free.


I’m (honestly) glad it works for you.

Distributing binaries outside of package managers is a) a rarity in the Linux world, and b) not in keeping with the open source philosophy. Requiring end users to have python so they can compile a c/c++ application will never make sense for a large portion of the user base build systems target.


It's no more egregious than requiring cmake or autotools.

And like I said, many Linux systems (and almost any used as a developers machine) will have Python installed by default.

And the simplicity of meson's build files far outweighs any issues installing it.


>The binary it produces is still dependency free.

But the project now depends on Python, pip and Meson.


As opposed to cmake or autotools?

I mean it's not like Python is a rarity on Linux systems, or a difficult thing to install (assuming it wasn't installed by default)


Python might not be rare but Meson for sure is. Cmake and autotools are very common compared to that.


Rare yes, but no more effort to install, and significantly more pleasant to use.


Well, and frankly, python people shouldn't use this anyway.


>contribute to or extend

I don't think it's that simple. Sure, I could spend time figuring out the GNU Make source code. I could somehow shove a C preprocessor into it and make it automatically generate dependencies based on include directives. I could make it depend on clang. Will the make maintainers accept my patch if I send it their way, though? Somehow I doubt it. What if I changed the make language to make it easy and unambiguous to write paths with spaces and other problematic characters in them? Do you think the maintainers are going to accept a patch that will most definitely break existing makefiles that work perfectly fine just because I fixed a major limitation of the program? We're talking about a tool whose grammar requires that commands be prefixed with a tab just because the author experimented with that rule and then couldn't change it because about a dozen friends were already using it. Fixing make is simply not possible at this point. If you try, it will just become yet another incompatible tool, some kind of make derivative that will have to be maintained independently. At this point, why limit yourself to make? Might as well rethink the whole thing.

As far as GNU/Linux distributions go, make and autotools are the default. If you're using anything else, it's gonna be a build time dependency that people are going to have to install in order to build your program. So you look at the other tools and you find they don't quite do what you want either. They only do about 80% of what you want, and getting it to do the remaining 20% makes you really wish the tool was an actual programming language instead of a limited build script. Is it any wonder people roll their own?

Also, I believe people should work on what they personally like. There is no obligation to contribute to some project just because it's "standard". It's an interesting problem that's difficult to solve, so it's not weird to see lots of people offering their executable opinions on how things should be done. If programmers don't like what's available out there, they are welcome to try and come up with something better. Who knows what will happen? Maybe the new wheel will turn out to be better than the current one. Maybe it won't. It might introduce new ideas that will influence new designs. These ideas might even make their way into other systems.


> I could somehow shove a C preprocessor into it and make it automatically generate dependencies based on include directives.

But the C compilers already provide tools for that, for example `gcc -MMD`.


PRs welcome

This is awfully demanding. Not entirely unexpected in tech, though.

“Why don’t others fix my frustration in open source projects!” they shout with no hint of irony


Please don't make such low-effort, insinuating, cliche comments on Hacker News. It makes the reading experience very poor.

Consider that the parent comment author might already be making pull requests to the projects of his interest. The snarky "PRs welcome" is therefore unnecessary and unwelcome. Everyone here knows PRs are welcome in an open source project. Some of us do send PRs. But whether someone sends PRs or not has no bearing on whether one can appreciate or criticize an idea as long as it is done in a substantive manner.

In fact, whether the parent comment author sends PRs or not is orthogonal to his complaint that we have too many build systems that is increasing the burden on users and maintainers. If you have something to say against this point, please do so on its own merit in a substantive manner without insinuations or ad-hominem attacks.


I hear you. Building this was really hard work. Open sourcing it is an achievement: the team had to navigate fraught corporate politics, untangle internal dependencies, etc. Technically it's excellent, no doubt. The site is appealing and convincing: legit kudos to whoever designed and built it.

Maybe Please is so dramatically better that it pulls tons of projects into its orbit! What a great outcome: I'd contribute PRs for every missing use case, accelerating the consolidation.

But if (as the site says) the goal is building something at parity but with a slightly different focus, then it can only produce further fragmentation. Projects that adopt Please inject dependencies on random S3 buckets and .bashrc edits. I can't accept that in the software I maintain, so any Please-built projects are de-facto inaccessible. That's bad.

By all means do what's best for ThoughtMachine, but publishing and promoting this makes building OS software harder, not easier.


I'm unclear on your message. Are you saying "PRs welcome" is demanding? Are you saying the creators of Please should have contributed to Bazel/Buck/Pants/x/y/z?


The FAQ basically says Windows can be supported, but only if the community manages it.


Why would I use this instead of Nix?


Do people use Nix for building and deploying project dependencies? Example project? Can you consume the result in a format that isn't all Nix-y symlinked?


nixpkgs contains tools for building Docker-compatible container images with `dockerTools`, and also AppImage-style standalone executables with `nix-bundle`.

My project, as well as many other projects, ships its own default.nix for builds. Linking an example would deanonymize this account a little too much, but there's plenty of examples out there.


Please, Bazel, Buck or Pants?


Since Bazel is a version of the thing everybody is copying (Google’s internal build system Blaze) I would say Bazel is most likely the best of the bunch.

I’ve used Buck; it’s slow. I haven’t tried Pants.


I've used Blaze for 2.5 years; best build system ever. But I haven't used Pants, Please or Buck at all.


So err... where is C# / Mono in all this?


Nice work, I will try using it on my next project. 404 Not Found WinLover at localhost.


Is the 404 thing humor? The meaning is a bit unclear.


This looks very interesting! Has anyone used it before?


>well I tried but they dont have samples so I gave up, no point in spending hours in reading docs and setting up something for a build system that may be useless. It also seemed suspiciously identical to bazel with really no features that may be special in any way? I mean really, why use this and not just bazel. The webpage doesnt even say

I agree with this guy. It shouldn't be that hard to include an example of an advanced script.

https://www.gnu.org/software/make/manual/make.html#toc-Compl...


My favorite thing about it is their use of the PragmataPro font on the website.


Is there a problem with cmake? Why are people still inventing new build systems?


cmake cons (IMHO):

- the archaic custom scripting language (that's its main problem)

- there are many ways to do the same thing, resulting in each (non-trivial) build script looking different

- for the above reason, importing dependencies written by somebody else is non-trivial, and its often better to rewrite the cmake scripts completely

- it lacks some 'integration features' found in more modern systems like Rust's cargo: (1) proper dependency/package management, (2) run targets, (3) easier switching between build configurations (not just release/debug, but also different target platforms)

cmake pros (also IMHO):

- can generate IDE project files (that's what most new build systems completely ignore)

- very good cross-compilation support

- describing a simple build is reasonably simple

- broad support for build tools and IDEs
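
On the "reasonably simple" point, a complete CMakeLists.txt for a one-file program really is only three lines (a generic sketch, not from the thread):

```cmake
cmake_minimum_required(VERSION 3.10)
project(hello C)
add_executable(hello main.c)
```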


Microsoft (both VS Code and Visual Studio) seems to be moving toward IDE adaptation — parsing output from other compilers, supporting compilation databases and folder-only (non-solution) projects. Perhaps this, combined with Clang's viability for the MSVC ABI, will reduce the impetus for cross-platform projects to use cmake just to support Windows. Of course, there's added inertia in favor of cmake due to LLVM, Boost, and others moving over. Not to mention that Microsoft now also supports the "cmake server" concept (there will be some deep irony if, after 20 years of "we need cmake to support Microsoft", the argument for the next N years becomes "we should use cmake because Microsoft supports it").


Another disadvantage of cmake, when generating VS solution and project files, is that they are not standalone - just like when it generates makefiles [1]. It makes it nearly impossible to reason about your build dependencies from the VS IDE.

[1] https://github.com/qznc/annoying-build-systems#cmake


Because make is awkward and not very developer friendly, and it relies on shell scripting, which is arguably even more awkward, hard to learn, and developer-unfriendly.

Also, make is very barebones and "close to the metal". Good modern build systems provide libraries of common tasks so you don't have to reinvent the wheel; they can handle parallelism well; and can fall back on makefiles where needed.

They let you get more done in less time.


I think you missed the fact that he's talking about CMake, not make. While CMake can generate makefiles, it can also create IDE projects and some other outputs like Ninja files.


Have you used CMake?? Remind me again how argument parsing works? How do I know if a variable is a list or not? What's that? Lists are strings? Everything is strings???

CMake is firmly in the "filenames don't contain spaces" era, along with make, Autotools, etc. Granted, it is more or less the standard at the moment, but look at Meson to see by contrast how insane CMake is.



