oh yeah
But before you start thinking about how not to get it, think about why you got it in the first place. It was not done as a simple way to save some bits in the file tables. It was done so that if your file grew, it would have room to grow without allocating another block of disk space.
Now that you know why, attack the part where the scheme falls short. For files that do not grow, it's a waste. Nowadays, with hundred-gigabyte hard disks, there are lots of files (thousands to possibly millions) that will never be written to again, yet they all waste disk space. For one of the worst examples, try unpacking a source tree onto a 2 GB FAT16 hard disk or something similar.
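To put a rough number on that waste: a 2 GB FAT16 volume uses 32 KiB clusters, so each file wastes on average about half a cluster at its end. A quick back-of-the-envelope sketch in C; the file count is a made-up assumption for illustration, not a measurement:

#include <stdio.h>

/* Rough slack-space estimate for a FAT16 volume with 32 KiB clusters.
 * The number of files is an assumed value, e.g. an unpacked source tree. */
int main(void)
{
    const unsigned long cluster_size = 32 * 1024; /* 32 KiB clusters on a 2 GB FAT16 volume */
    const unsigned long files        = 50000;     /* assumed file count, for illustration only */

    /* On average, each file wastes about half of its last cluster. */
    unsigned long long wasted = (unsigned long long)files * (cluster_size / 2);

    printf("estimated slack: %llu MiB\n", wasted / (1024 * 1024));
    return 0;
}

With those assumed numbers you end up with roughly 780 MiB of slack on a 2 GB disk, which is the kind of waste the rest of this post is about.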
Then how do you avoid it efficiently? Not the simple way of allocating by the byte; that could increase the number of sector reads you need for a file.
But if you store all but the last (incomplete) block in a normal set of blocks, possibly large ones (64K? 1M?), and then use one of the predefined end-section blocks (which can be shared between files) for the tail, you no longer have the slack-space problem. Also, because the end section is forced to fit in a single block (if it doesn't fit, allocate a new block for it), you still get the same performance as with normal filesystems.
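A minimal sketch of what that layout could look like; the structure and its field names are invented for this example, not taken from any real filesystem, and the 64 KiB block size is just one of the sizes suggested above:

#include <stdint.h>
#include <stdio.h>

#define BLOCK_SIZE (64u * 1024u)  /* the large "normal" blocks; 64 KiB is an assumed size */

/* Hypothetical per-file metadata (names invented for this sketch):
 * the file body lives in full-size blocks, and the leftover end section
 * is packed into a block that can be shared with other files' tails. */
struct file_extents {
    uint64_t size;        /* file size in bytes */
    uint32_t full_blocks; /* number of completely filled BLOCK_SIZE blocks */
    uint64_t tail_block;  /* block holding the shared end sections */
    uint32_t tail_offset; /* where this file's tail starts inside that block */
    uint32_t tail_len;    /* tail length, always < BLOCK_SIZE by construction */
};

/* Split a file size into full blocks plus a tail; the tail is exactly the
 * part that would otherwise be slack space at the end of the last block. */
static struct file_extents layout_file(uint64_t size)
{
    struct file_extents fe = {0};
    fe.size        = size;
    fe.full_blocks = (uint32_t)(size / BLOCK_SIZE);
    fe.tail_len    = (uint32_t)(size % BLOCK_SIZE);
    /* A real allocator would now find (or create) a shared block with
     * tail_len free bytes and fill in tail_block / tail_offset. */
    return fe;
}

int main(void)
{
    struct file_extents fe = layout_file(3ull * BLOCK_SIZE + 1234);
    printf("full blocks: %u, tail bytes for the shared block: %u\n",
           fe.full_blocks, fe.tail_len);
    return 0;
}

Because the tail always fits in one block, reading the end of a file costs at most one extra block read, which is why the performance stays comparable to a normal block-based filesystem.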
My favorite filesystem? Based on what?
On functionality: NTFS
On usability: ext3
On speed: ReiserFS
On everything but availability: my own
BTW, there already is an FS concept thread in quicklinks; look there and post there. Pype, could you move these there too?