My build process is divided into three parts:
- Toolchain
- Userspace
- Kernel (and modules)
The toolchain is a mess of bash scripts that handle downloading tarballs, extracting them, patching their source trees, and then building them. Part of this is the native toolchain - gcc, binutils - but the standard build also cross-compiles a number of libraries - newlib, zlib, libpng, Cairo, Mesa. Ideally, this gets run once when setting up a development machine, but updating external libraries and making patches means re-running at least parts of it - newlib in particular gets rebuilt so often that I wrote a script just for rebuilding it on its own.
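All of those scripts follow the same general shape. A minimal sketch of one is below - the package name, version, URL, prefix, and target triplet are all placeholders, not the contents of any real script:

```bash
#!/bin/bash
# Sketch of the fetch/extract/patch/build dance each toolchain script does.
# Package name, version, URL, prefix, and target triplet are placeholders.
set -e

PKG=newlib
VERSION=2.1.0
TARBALL="$PKG-$VERSION.tar.gz"
PREFIX="$HOME/osdev/toolchain"
TARGET=i686-elf            # hypothetical cross target

# Download and unpack the source tarball (skip the download if cached).
[ -f "$TARBALL" ] || wget "https://example.org/pub/$TARBALL"
tar xzf "$TARBALL"

# Apply local patches to the pristine source tree.
patch -p1 -d "$PKG-$VERSION" < "patches/$PKG.patch"

# Configure out of tree, build, and install into the toolchain prefix.
mkdir -p "build-$PKG"
cd "build-$PKG"
"../$PKG-$VERSION/configure" --prefix="$PREFIX" --target="$TARGET"
make
make install
```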
The userspace build process is managed mostly by a Makefile, aided by a Python script that automatically detects dependencies between applications and libraries. That Python script used to manage the whole build until I moved that job into the core Makefile. The resulting binaries are placed within a template directory (which also includes a bunch of static content: default configs, resources for the UI, etc.).
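The hand-off between the script and the Makefile can be pictured roughly like this - a sketch with made-up script, directory, and library names, not the actual rules:

```make
# Sketch: the Python script scans application sources for the libraries
# they pull in and emits a plain makefile fragment of extra prerequisites,
# e.g. "template/bin/editor: template/lib/libtoolkit.so".
# Script, directory, and library names here are illustrative only.
deps.mk: $(wildcard apps/*.c) util/find-deps.py
	python util/find-deps.py > $@

# Including the generated fragment lets the ordinary link rules pick up
# per-application library dependencies without maintaining them by hand.
-include deps.mk
```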
The kernel is also built by the same Makefile, but without any additional magic. I use a single Makefile for everything, mostly with automatic targets (%.o from %.c). The same process also builds kernel modules, which end up in the template directory under a "mod" subdirectory.
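The "automatic targets" are ordinary GNU Make pattern rules - something along these lines, with placeholder compiler name, flags, extensions, and directory layout:

```make
# Sketch of the pattern-rule approach; the compiler name, flags, module
# extension, and linker script are placeholders, not the real settings.
CC     = i686-elf-gcc
CFLAGS = -O2 -std=c99 -ffreestanding

KERNEL_OBJS = $(patsubst %.c,%.o,$(wildcard kernel/*.c))
MODULES     = $(patsubst modules/%.c,template/mod/%.ko,$(wildcard modules/*.c))

all: kernel.bin $(MODULES)

# One generic rule turns any .c into a .o.
%.o: %.c
	$(CC) $(CFLAGS) -c -o $@ $<

# Modules are compiled the same way and dropped into template/mod/
# (the real module build step almost certainly differs in the details).
template/mod/%.ko: modules/%.c
	$(CC) $(CFLAGS) -c -o $@ $<

# The kernel binary is linked from the object files with a linker script.
kernel.bin: $(KERNEL_OBJS)
	$(CC) -T kernel/link.ld -ffreestanding -nostdlib -o $@ $(KERNEL_OBJS) -lgcc
```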
After building the userspace and the kernel, a disk image is generated from the template directory. The standard setup doesn't bother installing a bootloader, since in a development environment I just run qemu directly with its built-in multiboot support. For distribution purposes, I have a cobbled-together script that uses mkfs.ext2, some crazy Linux disk magic, and grub-install.
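Day to day, "running qemu directly" amounts to something like `qemu-system-i386 -kernel template/kernel.bin` (the kernel path is a placeholder), since qemu can load a multiboot kernel on its own. The distribution script is only needed for a standalone image; a rough sketch of that kind of dance, with placeholder sizes, paths, and options:

```bash
#!/bin/bash
# Rough sketch of turning the template directory into a bootable ext2 image.
# Sizes, paths, and options are placeholders; the real script also deals
# with partition tables and offsets, which is where the disk magic lives.
set -e

IMAGE=disk.img
MOUNT=/tmp/image-root

# Create an empty image file and format it as ext2.
dd if=/dev/zero of="$IMAGE" bs=1M count=128
mkfs.ext2 -F "$IMAGE"

# Loop-mount the image and copy the template contents into it.
mkdir -p "$MOUNT"
sudo mount -o loop "$IMAGE" "$MOUNT"
sudo cp -r template/* "$MOUNT"
sudo mkdir -p "$MOUNT/boot"

# Point grub-install at the loop device backing the image so the result
# boots outside of qemu's multiboot loader.
LOOP=$(losetup -j "$IMAGE" | cut -d: -f1)
sudo grub-install --boot-directory="$MOUNT/boot" --force "$LOOP"

sudo umount "$MOUNT"
```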
You can take a look at a sample run of the userspace and kernel steps of the build process through Travis CI, or curl a log into your terminal. Travis also performs a simple test over a serial console.
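In spirit, that test is about as simple as it sounds - boot headless, capture the serial output, and look for a marker. A sketch, with the kernel path, timeout, and marker string all made up:

```bash
#!/bin/bash
# Sketch of a CI smoke test: boot the kernel headless with the serial
# port on stdout, then look for a success marker in the captured output.
# The kernel path, timeout, command line, and marker are placeholders.
timeout 60 qemu-system-i386 -kernel template/kernel.bin \
    -display none -serial stdio -append "run-tests" | tee serial.log

grep -q "TESTS PASSED" serial.log
```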