This series of posts has diverged quite a bit from the original question (maybe split it into another thread?), but I think it's worth discussing.
First, a note on the claim that "Linux loads fs modules in order to be able to probe file systems": that is completely wrong. The reason some distributions include many fs modules in their default kernels is that they want the boot process to "just work" when a user switches to a different file system; they don't want to build an individual initrd for every single configuration.
Loading kernel modules on a modern Linux system is not the kernel's responsibility; it is done by the user-space udev daemon. It would be easy to write a file-system-recognition udev rule in user space and load the required modules at run time.
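A minimal sketch of such a rule, to make the point concrete (the file path is hypothetical, and this assumes the module name matches what blkid reports, which happens to be true for most common file systems):

```
# /etc/udev/rules.d/99-fs-autoload.rules  (hypothetical path)
# Let udev's built-in blkid prober read the superblock and export ID_FS_TYPE.
SUBSYSTEM=="block", ACTION=="add", IMPORT{builtin}="blkid"
# If a file system was recognized, load the kernel module of the same name.
SUBSYSTEM=="block", ENV{ID_FS_TYPE}=="?*", RUN+="/sbin/modprobe %E{ID_FS_TYPE}"
```

Note that the probing itself (blkid) also runs entirely in user space; the kernel only gets involved once modprobe is invoked.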
Brendan wrote:Note that for GPT, to auto-detect the correct type of file system you only need to look at the 16-byte "partition type GUID" in the partition table entry.
That is not entirely true: GPT's partition type GUID tells you the intended use (e.g. "this is a root partition" or "this is a generic data partition") but not the file system type (e.g. "this is ext2" or "this is ntfs"). You still need the "probe the superblock" magic.
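To illustrate what "probe the superblock" means: even on a GPT disk whose partition carries the generic "Linux filesystem data" type GUID, you still have to read the partition's first sectors and match magic numbers. A toy sketch in Python (the function name is made up; the offsets are from the ext2/3/4 and NTFS on-disk formats):

```python
import struct

def probe_fs(f):
    """Guess the file system on an open partition image / block device.

    Checks only two signatures, as an illustration:
      - NTFS: OEM ID "NTFS    " at offset 3 of the boot sector
      - ext2/3/4: superblock starts at offset 1024; the little-endian
        magic 0xEF53 sits at offset 56 within it (absolute offset 1080)
    """
    f.seek(3)
    if f.read(8) == b"NTFS    ":
        return "ntfs"
    f.seek(1024 + 56)
    data = f.read(2)
    if len(data) == 2 and struct.unpack("<H", data)[0] == 0xEF53:
        return "ext2"  # could be ext2/3/4; feature flags disambiguate
    return None
```

A real prober (libblkid, for instance) checks dozens of such signatures and verifies secondary fields to avoid false positives, but the principle is the same: the answer lives inside the partition, not in the partition table.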
Brendan wrote:While that description could be applied to other OSs (with wildly varying degrees of applicability); Linux is unique and earns that description more than any other OS for multiple reasons; where the largest reason is their inability to effectively coordinate disparate groups of developers (the "herding headless chickens" problem).
I basically agree with that statement, but I don't think it is necessarily a bad thing. Linux generally prefers working implementations over abstract design processes. This often leads to sub-optimal designs and APIs that have to be revised and rewritten after limitations are hit. However, it also opens up the development process: device and CPU manufacturers can get their code into the kernel with relatively little coordination. The "herding headless chickens" style of development arguably led to Linux's success over other OSes like the BSDs, Solaris or Windows.
Brendan wrote:Note that I am mostly reacting to assumptions (that are false assumptions more often than not) that take the form "Linux does it like this, therefore ..." (e.g. "therefore any other way won't work", or "therefore that must be the best way", or "therefore I'm not going to bother to think for myself", etc).
Citing Linux as an optimal design choice is almost always wrong. However, Linux implementations are often practical: the way Linux does something is usually not completely braindead, and looking at Linux is often a good starting point if you want to implement a new feature. Linux also has quite a few subsystems that are genuinely well thought out: the VFS layer, for example, is pretty good. Another remarkable algorithm that Linux popularized is the RCU subsystem's quiescent-state-based deferred reclamation.
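The core idea behind quiescent-state-based reclamation, roughly: a writer that unlinks an object may not free it until every reader has since passed through a quiescent state (a point where it holds no references). A toy single-threaded simulation of that bookkeeping (all names invented; real RCU is per-CPU and essentially free on the read side, which this sketch does not capture):

```python
class ToyQSBR:
    """Toy model of quiescent-state-based deferred reclamation."""

    def __init__(self, reader_ids):
        # Per-reader counter; a reader bumps its own counter each time
        # it passes through a quiescent state.
        self.counters = {r: 0 for r in reader_ids}
        # Unlinked objects waiting for a grace period, each paired with
        # a snapshot of the counters taken at retire time.
        self.retired = []

    def retire(self, obj):
        """Writer side: obj is unlinked; free it after a grace period."""
        self.retired.append((obj, dict(self.counters)))

    def quiescent_state(self, reader):
        """Reader side: announce a quiescent state; returns freed objects."""
        self.counters[reader] += 1
        still_pending, freed = [], []
        for obj, snap in self.retired:
            # Every reader must have advanced past its retire-time count.
            if all(self.counters[r] > snap[r] for r in snap):
                freed.append(obj)
            else:
                still_pending.append((obj, snap))
        self.retired = still_pending
        return freed
```

For example, with two readers "a" and "b", a retired node is only reclaimed once *both* have announced a quiescent state after the retire call; one reader alone is not enough.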
Yet for us hobby OS developers the most valuable feature of the Linux source code is its large collection of drivers, hardware quirks and errata that would otherwise not be available to the public. I have lost count of how many discrepancies between official documentation and Linux's implementation I have encountered where Linux did the right thing and the docs were just plain wrong.