rdos wrote:
  bloodline wrote:
    rdos wrote:
      I think FAT works fine, and so no reason to implement anything more complex.
I can think of several reasons why FAT is an abomination from the third circle of Hell:
1. It’s clear from the way FAT works that it was never meant to have directories; the design is fundamentally suited to a single flat root directory with no subdirectories. The implementation of subdirectories is a horrible hack.
2. The long file name kludge is a hideous bastardisation of the directory table, and it makes traversing a directory slow and complex, requiring many in-memory buffers (and extra logic) to handle.
3. It is non-extensible by design: I have had to write detection code for both FAT16 and FAT32, and two separate code paths to handle the different-sized FAT entries. If the implementations are incompatible anyway, it would have made more sense to design a totally new system free of the above kludges.
4. It has far too much redundant data; the on-disk structures are a mess of meaningless fields.
I will point you to the Original Amiga File System (https://wiki.osdev.org/FFS_(Amiga); the page title says FFS, but it actually describes the Original Amiga file system, not the Amiga Fast File System, which was compatible but optimised away a lot of legacy cruft) as an example of a very simple but properly implemented hierarchical file system. It too suffers from problem 4 above, being full of legacy cruft, but at least the overall structure is logical and very easy to parse in memory.
The Amiga FS doesn't seem very efficient. You need to follow long linked lists, which is highly inefficient since those cannot easily be cached. FAT allows the FAT table to be partially or completely cached, and when you read one FAT sector you get links to many blocks, which is much more efficient than following links.
Well, neither FAT nor AmigaFS is particularly special when it comes to efficiency, but the only time AmigaFS is less efficient than FAT is when listing a directory's contents, which does require jumping around the disk. But how often is listing a directory a time-critical event? The designers added directory caching in the second version when hard drives became popular, and modern SSDs don't really care. And if you already know the name of the entry you want, AmigaFS uses a hash table to find it, so it is MUCH faster.
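For the curious, the name hash is trivial. A minimal sketch, assuming 512-byte blocks (which give 72 hash-table slots per directory block) and with plain toupper() standing in for the FS's own upper-casing rules:

[code]
#include <ctype.h>
#include <string.h>

/* Hash a file name to a slot in a directory block's hash table.
 * 72 slots = BSIZE/4 - 56 for 512-byte blocks. */
unsigned amiga_hash(const char *name)
{
    unsigned h = strlen(name);
    for (; *name; name++)
        h = (h * 13 + toupper((unsigned char)*name)) & 0x7FF;
    return h % 72;
}
[/code]

A lookup reads the block number stored in that slot and then follows each entry's hash-chain pointer until the name matches, so a named open only touches a handful of blocks.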
FAT32 has the cluster number of the root directory in the boot record, so it doesn't suffer from the problem of a fixed-size root directory.
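For reference, that is the 32-bit BPB_RootClus value at byte offset 44 of the FAT32 boot sector (little-endian):

[code]
#include <stdint.h>

/* First cluster of the FAT32 root directory (BPB_RootClus, offset 44). */
static uint32_t fat32_root_cluster(const uint8_t *bootsector)
{
    return (uint32_t)bootsector[44]
         | (uint32_t)bootsector[45] << 8
         | (uint32_t)bootsector[46] << 16
         | (uint32_t)bootsector[47] << 24;
}
[/code]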
I do like the simplicity of this part of the FAT design, but it's fatally flawed for SSDs, as every write operation requires hitting the same FAT sectors... With AmigaFS, each file essentially has its own FAT.
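To illustrate what I mean, here is a rough sketch of walking a file's own block table, with offsets for 512-byte blocks as on the wiki page above; read_block() and read_be32() are assumed helpers:

[code]
#include <stdint.h>

#define BTABLE 72   /* data-block pointers per 512-byte header block */

extern void read_block(uint32_t blk, uint8_t buf[512]);  /* assumed helper */
extern uint32_t read_be32(const uint8_t *p);             /* Amiga is big-endian */

/* Visit every data block of a file: the file header block carries its own
 * pointer table, and an extension block chains in more when that fills up. */
void walk_file(uint32_t header_block)
{
    uint8_t buf[512];
    for (uint32_t blk = header_block; blk != 0;
         blk = read_be32(buf + 512 - 8)) {    /* extension pointer, 0 = end */
        read_block(blk, buf);
        /* the table at offset 0x18 is filled from the end:
         * the last slot holds the first data block */
        for (int i = BTABLE - 1; i >= 0; i--) {
            uint32_t data = read_be32(buf + 0x18 + 4 * i);
            if (data == 0)
                break;                        /* table not full: done here */
            /* ...process data block `data`... */
        }
    }
}
[/code]

No shared FAT sectors are touched; extending one file only rewrites that file's own header or extension block.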
The obvious comeback you can make here is that AmigaFS has an allocation bitmap... I dislike this part of the design, so I used a "freefile" as a backwards-compatible solution to reduce flash wear when I used the file system on SD cards (in an audio sampler design I was working on, which needed to write to disk a lot).
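The gist of the freefile, as a purely hypothetical sketch (every name below is made up for illustration; this isn't my actual code): free blocks are owned by an ordinary hidden file, so old drivers still see a valid volume, and an allocation pops a pointer from its table instead of read-modify-writing the same shared bitmap block every time.

[code]
#include <stdint.h>

/* Hypothetical freefile head: shaped like any file's block table. */
struct freefile {
    uint32_t table[72];   /* pointers to free blocks */
    uint32_t count;       /* how many are still free in this table */
    /* ...extension blocks chain in more pointers, as for normal files... */
};

/* Allocate one block by unlinking it from the freefile. */
uint32_t alloc_block(struct freefile *ff)
{
    if (ff->count == 0)
        return 0;   /* volume full (or refill from an extension block,
                       which is omitted here) */
    return ff->table[--ff->count];
}
[/code]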
Also, if you want to support FAT, you need to support FAT12, FAT16 and FAT32. FAT12 is pretty messy given that 1.5 bytes are used as FAT links. OTOH, it is basically only the FAT table code that is different between FAT versions, and so most of the other code can be shared.
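For reference, the 1.5-byte decode looks roughly like this; two consecutive 12-bit entries straddle three bytes, so the unpacking depends on the parity of the cluster number:

[code]
#include <stdint.h>

/* Fetch the FAT12 entry for `cluster` from an in-memory copy of the FAT. */
uint16_t fat12_next(const uint8_t *fat, uint16_t cluster)
{
    uint32_t off = cluster + cluster / 2;   /* cluster * 1.5 */
    uint16_t v = fat[off] | (uint16_t)(fat[off + 1] << 8);
    return (cluster & 1) ? (v >> 4) : (v & 0x0FFF);
}
[/code]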
I only need to support FAT16 and FAT32... and yes, I've reduced the FAT type issue to a single getCluster() function which handles that.
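Something along these lines; a minimal sketch where the struct, the fully cached FAT, and the little-endian host are assumptions for illustration rather than my actual code:

[code]
#include <stdbool.h>
#include <stdint.h>

#define CLUSTER_END 0xFFFFFFFFu

struct fatfs {
    bool fat32;     /* detected at mount time */
    uint8_t *fat;   /* cached copy of the FAT (little-endian on disk) */
};

/* Return the next cluster in the chain, or CLUSTER_END at end-of-chain. */
uint32_t getCluster(struct fatfs *fs, uint32_t cluster)
{
    if (fs->fat32) {
        /* FAT32 entries are 32 bits wide, but only the low 28 are valid */
        uint32_t v = ((uint32_t *)fs->fat)[cluster] & 0x0FFFFFFF;
        return v >= 0x0FFFFFF8 ? CLUSTER_END : v;
    }
    uint16_t v = ((uint16_t *)fs->fat)[cluster];
    return v >= 0xFFF8 ? CLUSTER_END : v;
}
[/code]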
As for the long filename kludge, I agree that this was a pretty horrible design, especially the idea of putting wide character codes in the extended directory entries.
OK... I've managed to write some code which sort of works: if the attribute byte == 0x0F, use the entry's data to build the long name for the immediately following normal entry; if an entry starts with 0xE5, it has been deleted, so ignore it.
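In case it helps anyone else, the part-collecting step looks something like this; a simplified sketch with the checksum and sequence-order validation omitted:

[code]
#include <stdbool.h>
#include <stdint.h>

/* Byte offsets of the 13 UCS-2 characters inside a 32-byte LFN entry */
static const uint8_t lfn_off[13] = { 1, 3, 5, 7, 9,
                                     14, 16, 18, 20, 22, 24,
                                     28, 30 };

/* If `e` is a long-file-name entry, copy its 13 characters into place in
 * `name` (sized for 20 * 13 = 260 UCS-2 units) and return true; the caller
 * applies the finished name to the next normal entry it reads. */
bool lfn_collect(const uint8_t *e, uint16_t *name)
{
    if ((e[11] & 0x3F) != 0x0F)
        return false;                  /* not an LFN entry */
    if (e[0] == 0xE5)
        return true;                   /* deleted: skip it */
    unsigned seq = (e[0] & 0x1F) - 1;  /* parts are stored in reverse order */
    for (int i = 0; i < 13; i++)
        name[seq * 13 + i] = e[lfn_off[i]]
                           | (uint16_t)(e[lfn_off[i] + 1] << 8);
    return true;
}
[/code]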
The problem I have now is that the macOS FAT driver seems intent on littering the drive with 4K metadata files whose only identifying feature is a leading underscore.
