I'm currently writing the Linux mkfs.simplefs program (for Windozers: the format utility for simplefs), but I could use a bit of imaginative thinking: I'm looking for an algorithm to determine the initial size of the index area.
Currently I've got a linear progression: 10 entries per MiB. That means a 1.38 MiB floppy would start with 14 unused entries (896 bytes), while a 200 GiB hard disk would initially have 2,048,000 entries (125 MiB). That yields a roughly constant starting index-area fraction: about 0.06% of the disk.
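For concreteness, the linear rule could be sketched like this (the helper name is mine; it just rounds the 10-entries-per-MiB rule up to the next whole entry):

```c
#include <stdint.h>

#define ENTRY_SIZE 64   /* bytes per index entry */

/* Sketch of the current linear rule: 10 index entries per MiB of
 * disk, rounded up.  Not the actual mkfs.simplefs code. */
static uint64_t linear_index_entries(uint64_t disk_bytes)
{
    return (disk_bytes * 10ULL + (1ULL << 20) - 1) >> 20;
}
```

A 1.38 MiB floppy (1,447,034 bytes) gives 14 entries and a 200 GiB drive gives 2,048,000, matching the numbers above.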
However, I reckon this is an ineffective approach: with it, floppies will almost surely need to expand their index area, while hard drives (or flash cards) will probably waste index-area space. So I'm thinking about sub-linear progressions, but... should I use sqrt? log2? log10? ln? I'm really puzzled...
If anyone has an idea, these are the constraints I'm working with:
- The initial index area (which contains 64-byte entries, one or more per file depending on the file name length) must not span more than 0.25% of the drive
- For floppies and other media around 1 MiB, the algorithm should not give fewer than 20 or 25 entries
- For a 200 GiB drive, the algorithm should not give more than 50k entries, i.e. about 3 MiB of index area
- If possible, I'd prefer a "smooth" mathematical expression to a "chunked" algorithm or a lookup table
Edit: oho, I slipped one zero too many into the 200 GiB max entry count