> Yocto is totally the opposite. Buildroot was created as a scrappy project by the BusyBox/uClibc folks. Yocto is a giant industry-sponsored project with tons of different moving parts. You will see this build system referred to as Yocto, OpenEmbedded, and Poky, and I did some reading before publishing this article because I never really understood the relationship. I think the first is the overall head project, the second is the set of base packages, and the third is the… nope, I still don’t know. Someone complain in the comments and clarify, please.
Bitbake is the generic build system around which the OpenEmbedded recipes (for particular packages) were implemented. Anyone could create a distro using those recipes; this dates to the early-to-mid 2000s.
Yocto was/is a project of the Linux Foundation, basically a working group, that looked at the state of embedded Linux and said, "We want to contribute to this project with documentation and more recipes", starting externally but with hopes of getting the work mainlined. Poky was/is their reference distro for this effort.
Nowadays all the packages contributed by the Yocto project have been consolidated into OpenEmbedded, but Poky remains the reference distro.
tl;dr: Yocto is first and foremost an organization of people. Bitbake is the build system. OpenEmbedded is a community of distro-agnostic build system recipes. Poky is a distro maintained by the Yocto organization utilizing OpenEmbedded recipes.
> But here’s where Yocto falls flat for me as a hardware person: it has absolutely no interest in helping you build images for the shiny new custom board you just made. It is not a tool for quickly hacking together a kernel/U-Boot/rootfs during the early stages of prototyping (say, during this entire blog project).
Let me suggest looking into the `devtool` utility. It's a Yocto utility that enables the kind of on-the-fly work the author enjoyed with Buildroot. For instance, running `devtool modify virtual/kernel` will place the configured kernel source in a workspace directory where you can grep, modify, and patch to your heart's content; I might work for weeks in this state while bringing up a new board, patching drivers or developing patches that carry out-of-tree code over the mainline kernel. When I'm happy with my changes, I add them back into my recipe, disable the temporary workspace with `devtool reset virtual/kernel`, and test by building my recipe from scratch again.
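Roughly, that loop looks like this (the layer name is hypothetical; `devtool finish` is the variant that exports your commits as patches into the layer for you, while `devtool reset` just drops the workspace as described above):

    devtool modify virtual/kernel       # checks kernel source out into workspace/sources/
    # ...hack on drivers, commit changes with git as you go...
    bitbake virtual/kernel              # builds from the workspace tree
    devtool finish virtual/kernel meta-my-layer   # write patches back into the recipe
    bitbake virtual/kernel              # verify a clean build without the workspace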
Yocto has other amenities that ease iteration on existing boards. For one, it is straightforward to cross-compile my Python 3 extension modules in a recipe in one base layer for my product family. Later, when I'm spinning up a derivative project, I can set up a product-specific layer to override the CPP flags or configuration, or patch the source to better target my board.
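The override mechanism here is a .bbappend in the product layer; a minimal sketch, with all file and flag names being illustrative rather than from my actual layers:

    # meta-my-product/recipes-devtools/python/python3-myext_%.bbappend
    FILESEXTRAPATHS:prepend := "${THISDIR}/files:"
    SRC_URI += "file://0001-target-my-board.patch"
    TARGET_CPPFLAGS:append = " -DMY_BOARD_QUIRK=1"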
The Yocto learning curve may be steeper, but the benefits of proper dependency tracking and layers far outweigh the drawbacks. At this point, if I use a board from a vendor that ships a Buildroot BSP, I'll take a day to port it to Yocto before moving further.
I agree completely. Yocto/OE gets a bad reputation as overly complicated, especially because a lot of people are doing this with hobbyist boards where they just want blinkenlichten. Yocto is definitely not easier for quick spinup on weekend projects.
However, if you're doing this full-time, and you want to do anything remotely complicated (you will), and especially if you have multiple products (you will), you start yearning for OpenEmbedded. It takes some time to learn, and the abstractions are hard to understand outside-looking-in, but it's well worth the effort.
Building an SDK (the populate_sdk task) will output a self-extracting archive in your deploy directory. You can install this mostly self-contained SDK to a directory of your choosing, then source the environment file it installs to get the correct build variables set up (CC, CXX, CFLAGS, LD, LDFLAGS, PATH, etc.). From there, the variables will be set up to use your embedded image's sysroot. I do almost all of my userspace development using the SDK. If you're using CMake to build your native code, it will pick up the variables for you.
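Roughly (the image name and install path are illustrative; the installer prints the actual locations):

    bitbake core-image-minimal -c populate_sdk
    ./tmp/deploy/sdk/poky-*-toolchain-*.sh -d ~/sdk/my-board
    . ~/sdk/my-board/environment-setup-*    # exports CC, CXX, CFLAGS, ...
    $CC --version                           # cross-compiler from the SDK sysroot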
There are some gotchas with it; in particular, some modifications to your configuration may be necessary to get the SDK to include more of the things relevant to your build. Probably the most notable is that static libraries are not included in the SDK by default.
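For the static-library case, the usual fix (as I understand it) is to pull the staticdev packages into the SDK's target sysroot; the package name below is hypothetical:

    # in local.conf or the image recipe
    TOOLCHAIN_TARGET_TASK:append = " libfoo-staticdev"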
I've used bare crosstools, Buildroot, and now Yocto in production projects spanning 14 years. Personally I find it a lot faster to move with Yocto once you grasp its principles and structure, especially if you have several related product lines that can be expressed as combinations of layers.
I've used Buildroot for the last 8 years and Yocto/OE for the last year.
There is a significantly steeper learning curve for Yocto when compared to Buildroot. Buildroot is faster for the initial build, but often slower than Yocto after the initial build.
Here's what I like about Yocto:
1. It forces you to be organized, everything has a home and things can't conflict with each other.
2. By using Yocto's shared state cache, you can have one central build server and automatically fetch pre-built artifacts on other computers (see the sketch after this list). With this I can get a developer hooked up with everything they need to build a full system image on their computer in just a few minutes -- and completely build the image at that time.
3. I am confident that incremental changes are built correctly. If you change a build-time parameter of a package in Buildroot, things which depend on that package are not rebuilt. This is not the case with Yocto. This can also result in unfortunate rebuilds of many packages just because of a minor change to, say, glibc -- I may know that they do not need to be rebuilt, but Yocto does not.
4. Buildroot puts built items in a single staging directory. Package install order differences mean that you can overwrite files accidentally. Consider /usr/include/debug.h in two different packages, or something like that.
If you are not explicit with dependencies, the build may actually succeed but it may not be deterministic. If package A happens to be built before package B, you're golden. This does not always happen, and sometimes this is not found until you do a clean and a rebuild. Yocto forces you to be explicit -- the build tree only includes artifacts for recipes which have explicitly been defined.
5. Yocto can use the same tree and shared state cache to build multiple images for a given product without having to clean the world.
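The shared-state fetch from point 2 is basically one line of configuration on each developer machine (the server URL is hypothetical):

    # local.conf on each developer machine
    SSTATE_MIRRORS = "file://.* http://build-server.example/sstate/PATH;downloadfilename=PATH"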
I loved buildroot -- it was fast, nimble, and easy to use. It also lets you cut corners and find yourself in situations where builds would unexpectedly fail after a clean. I am also very happy that I took the time to learn how to effectively use Yocto.
These are all excellent points, it just saddens me that embedded has still not moved past the "recursive Makefile" phase.
Part of that fault lies with the hardware manufacturers. They are invariably hardware companies that don't value software. They pick an open-source project like OpenWRT or Buildroot and literally hack at it until the resulting monster can build an image for a reference system that can stay up just long enough to pass a few end-to-end tests. And the damage is incredible. Nothing is spared mutilation at the hands of their incompetent developers: the entire software stack, from U-Boot, over to Linux, across essential system services and concepts, all the way up to, say, the LuCI interface OpenWRT ships, is modified, mostly haphazardly, to support one specific configuration. The resulting garbage is frozen in time, zipped up and thrown over the fence to their partner companies trying to turn their hardware into a portfolio of consumer products increasingly defined by software first. It's hard to describe the level of stupidity; they will base their shitty proprietary Linux modules on LTS versions of the kernel, then never update anyway! They adopt "standardized" upstream things like nl80211, then require you to use all the proprietary interfaces they previously had and just stuffed into some side-channel.
The other problem is using something like OpenWRT or Buildroot in the first place. This is not to disparage these projects; obviously these are mostly driven by hobbyists who are free to use their time however they want. But there is certainly a tendency in these projects toward adherence to arbitrary, mostly terribly old and shitty Linux standards and 'practices' grossly unfit for what you would want in a reliable embedded system. There is a focus on breadth, expansion and freedom instead of relying on robust building blocks. They try to collect the entire history of open-source software and bend their build systems to make and package the original .tar.gz downloaded from some FTP server. Shell scripts rule supreme, not just in the build but often on the resulting firmware images. A lot of these choices are supremely unfit for the purpose of making long-term supported firmware for embedded devices.
Lots of praise here for Android. Sure, they started with the same recursive-Makefile stuff in their original startup roots. But they iterated. They saw the problems. It is a monumental achievement in the field to have a build system that will first draw up a plan, then go about executing it with considerable chance of success, instead of failing randomly in the middle of some lazily recursed Makefile. They critically look at all the pieces that build and end up running on the device; they standardized on the Clang toolchain, they don't try to give you a choice of three compilers and four standard libraries. They didn't shy away from the long haul of pushing that singular toolchain across the entire stack; being able to compile the Linux kernel with Clang is the result of foundational Android work. They revolted at the sight of glibc or uclibc and instead build and maintain their in-house libc, on a tight feature leash. Their focus with bionic isn't to be truthful to some obscure corner of a POSIX standard circa 1983, it's to enable things like a safe allocator or uniform crash handling and report generation across all of userspace. Any sort of shell is intentionally hamstrung and scripts absent. No patience for oldschool crap like SysV init here.
Just as a data point: Google WiFi is built with Qualcomm WiFi radios, but it uses none of Qualcomm's proprietary software. They preferred to use the open-source upstream drivers. Zero confidence in any of Qualcomm's "software".
Bitbake is a bit further than 'recursive makefile', it's much more along the lines of a package manager like nix or portage (though it's less well designed in most aspects, the build file syntax is insane and debugging it is a nightmare. I think it's grown the necessary features instead of stepping back and understanding the problem). And it's important to realise it's mostly focused on building packages like a linux distribution, even if the systems are rarely managed by installing/uninstalling packages. This is where the whole idea of taking an upstream tar or repo and patching it together comes from, and it makes perfect sense when 90% of your workflow is 'take someone else's code and modify it slightly to integrate into your system', especially when that code gets updated (it's still not painless, but you have some hope of doing it). When you're google and can afford to rewrite huge parts of the system and have no need for compatibility then you can make a more nicely integrated system, but most embedded applications cannot afford this.
IMHO, with modern architectures these days, and especially those from the last couple of years, if you start feeling the need for something like Yocto I'd rather use a full-blown distribution like Debian or Arch Linux if it's available for your platform.
I have tried Yocto through the years and deployed a bunch of projects with it. Although it gives you the sense that it has a more coherent environment than Buildroot, I find that it is difficult to maintain since it has too much unnecessary complexity for embedded targets, and sooner or later that ends up getting in the way and biting you back. Not enough visibility, which is crucial for embedded systems. More so if the systems have network connectivity and need to be properly secured.
It could be that I am too accustomed to Buildroot's pretty flat, spartan, no-frills architecture. With highly constrained devices that is an advantage. Automating configurations takes little effort, builds are quite fast, and you can maintain testing and deployments easily with your own bash or Python scripts. There are few places to fiddle with variables, and you can easily assemble images on the fly or through overlays. Many times you just need to modify the configuration file and rebuild.
In recent years I have been progressively using Buildroot along with Docker. They complement each other well: you can create base systems and then grow them tailored to your application, taking advantage of Docker's incremental build system. I regularly maintain cross-compilation and QEMU-based Docker toolchain images for my target architectures that way. They can be recreated and tracked reliably, 100% of the time. I use them to build the application images, or just augment them with a Buildroot overlay containing the extra files, and then deploy them remotely through the Docker facilities.
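A rough sketch of that kind of loop (the image tag, Dockerfile, and defconfig names are made up):

    # Bake the Buildroot toolchain into a reusable Docker image once...
    docker build -t br-armv7-toolchain -f Dockerfile.toolchain .
    # ...then use it to produce target images reproducibly
    docker run --rm -v "$PWD:/work" -w /work br-armv7-toolchain \
        sh -c "make myboard_defconfig && make"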
Using a slow-release distro such as Debian (either directly or as a reference) also has the benefit of compatible versions, i.e. no need to figure out how to get both $A and $B to compile against a common library.
FWIW, I still use ltib (originally a Freescale open-source build environment) and it's mostly great. Fewer layers of abstraction, builds all the source modules using essentially the "linux-from-scratch" system, with .rpm intermediaries that get installed to a rootfs.
I played with Bitbake, but the learning curve seemed much worse than ltib.
Does anyone have experience using either Buildroot or Yocto to build a virtual appliance? That is, a VM image that runs on a typical x64-based hypervisor. I'd be particularly interested in experience building an immutable image with an A/B setup for updates, in the style of Chromium OS or the old CoreOS (now Flatcar Linux).
I was able to load Yocto's standard VMDKs into Hyper-V on my Windows desktop, but AWS's import tool barfed on it. I build an embedded Linux image and wanted to use AWS as a hypervisor for larger-scale testing. It was fairly easy to set up a custom machine config in Yocto (require tune-core2.inc and you're off to the races). I use the variable overrides system to tweak build arguments on some packages to have them stub out hardware in the virtual images.
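Something along these lines (the machine name, features, and package bits are illustrative, and the tune-core2.inc path has moved between releases):

    # meta-my-layer/conf/machine/my-vm-x86-64.conf (hypothetical)
    require conf/machine/include/tune-core2.inc
    MACHINE_FEATURES = "pci efi"
    KERNEL_IMAGETYPE = "bzImage"
    IMAGE_FSTYPES += "wic.vmdk"

    # and a per-machine override in some recipe's .bbappend:
    PACKAGECONFIG:remove:my-vm-x86-64 = "hw-sensors"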
I've found though that making your own image classes is the most direct way of formatting your image if you need something exotic. The image classes system is kind of fun -- you can keep stacking on more reusable classes that act as a kind of pipeline.
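A skeletal custom image type, as I understand the mechanism (the type name and packaging tool are made up, and the exact output filenames vary by release):

    # meta-my-layer/classes/flat-cloud-image.bbclass (hypothetical)
    IMAGE_TYPEDEP:flatcloud = "wic"
    IMAGE_CMD:flatcloud () {
        # wrap/convert the wic image into whatever the hypervisor wants
        my-packaging-tool ${IMGDEPLOYDIR}/${IMAGE_NAME}.wic \
            -o ${IMGDEPLOYDIR}/${IMAGE_NAME}.flatcloud
    }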
To get it working on AWS I ended up doing what I needed to do by spawning a VM with an off-the-shelf AMI, attaching a second disk, dd-ing my image on, and then capturing that as an AMI. (I'm kind of annoyed that I couldn't find a proper API for this, but maybe I'm not looking in the right place. Their import tools all want to understand things about your image and mess with it, and I just wanted to send them a flat disk image.) This process was an annoying enough slowdown for testing that I ended up making an image that would boot up just enough to get networking up, then fetch the real image from S3 based on directions in user-data, overwrite itself, and reboot. If going with newer AWS instance types, don't forget to include their kernel modules, and have fun debugging when it doesn't boot :)
Yes, Buildroot comes with a bunch of configs you can use to build images that run on x86_64 and qemu.
`make list-defconfigs` will list everything that is available.
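E.g., from a Buildroot checkout (the qemu_x86_64 defconfig ships with Buildroot):

    make list-defconfigs        # shows all shipped configurations
    make qemu_x86_64_defconfig  # select the x86_64 QEMU reference config
    make                        # build the kernel + rootfs images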
The A/B update thing typically works by having two disk partitions, using your boot loader to switch between them, and only updating one at a time. You can probably write a U-Boot script to track failed boots and switch between images, or something like that, though I've never ventured down that road.
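U-Boot does have stock machinery for the failed-boot part (CONFIG_BOOTCOUNT_LIMIT); a hedged sketch of the environment, where everything beyond the standard bootcount/bootlimit/altbootcmd variables is made up:

    # bootcount is incremented by U-Boot on each boot and cleared by the OS
    # once userspace comes up healthy; after bootlimit consecutive failures
    # U-Boot runs altbootcmd instead of bootcmd.
    setenv bootlimit 3
    setenv bootcmd 'setenv slot A; run boot_slot'     # boot_slot is hypothetical
    setenv altbootcmd 'setenv slot B; run boot_slot'
    saveenv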