Re: Why not FAT file systems?

Posted: Tue Oct 06, 2020 2:54 pm
by bloodline
I'm currently implementing my first FAT32 file system driver... I thought the old Amiga File System was a mess, but compared with FAT32 it is a towering masterpiece of exquisite logical consistency.

Trying to traverse directory entries is surprisingly difficult, as the on-disk structures do not map well to memory... Not to mention all the "junk entries" that seem to fill the table!? I get the 8.3 entries, and I understand the long file name entries... but there appear to be other types as well :(
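For reference, the entry types can be told apart by the first byte and the attribute byte at offset 11. A minimal, untested sketch (the 0x00/0xE5 markers and attribute bits are from the FAT spec; the enum and function names are made up):

Code: Select all

```c
#include <stdint.h>

/* Untested sketch: classify one raw 32-byte FAT directory entry. */
enum entry_kind { ENTRY_END, ENTRY_FREE, ENTRY_LFN, ENTRY_VOLUME, ENTRY_FILE };

enum entry_kind classify_entry(const uint8_t *e)
{
    if (e[0] == 0x00) return ENTRY_END;           /* no more entries follow */
    if (e[0] == 0xE5) return ENTRY_FREE;          /* deleted entry */
    if ((e[11] & 0x3F) == 0x0F) return ENTRY_LFN; /* long file name part */
    if (e[11] & 0x08) return ENTRY_VOLUME;        /* volume label */
    return ENTRY_FILE;                            /* normal 8.3 file or dir */
}
```

The volume label and the 0x00 end marker are the usual "other" entries you'll run into beyond 8.3 and LFN parts.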

I'm really not enjoying having to use it; I'm only using it for practical reasons (GRUB support and easy tooling on other operating systems).

Re: Why not FAT file systems?

Posted: Wed Oct 07, 2020 2:01 pm
by rdos
I think FAT works fine, so I see no reason to implement anything more complex. I couldn't care less about access rights, but it would be nice to build a driver for NTFS or ext* just to be able to access everything by ignoring the access rights. After all, ACLs don't solve the problem of malicious software ignoring access rights. Undocumented FSes or encryption might, but not some extra metadata.

Besides, FAT is pretty stable if implemented in a way that just ignores any errors. Sure, clusters can get lost, but that's not a big deal. The worst thing probably is to end up with crosslinked files. I create two partitions on my targets. One is the boot partition and the other the data partition. If things go really wrong, I can just repartition the data partition.

Re: Why not FAT file systems?

Posted: Thu Oct 08, 2020 8:15 am
by bloodline
rdos wrote:I think FAT works fine, so I see no reason to implement anything more complex.
I can think of several reasons why FAT is an abomination from the third circle of Hell:

1. It’s clear from the way FAT works that it was never meant to have directories; the design is fundamentally suited to a single flat root directory with no subdirectories. The implementation of subdirectories is a horrible hack.

2. The long file name kludge is a hideous bastardisation of the directory table, and it makes traversing a directory slow and complex, requiring many in-memory buffers (and logic) to cope.

3. It is non-extensible by design: I have had to write detection code for both FAT16 and FAT32, plus two separate code paths to handle the different FAT entry sizes. If the implementations were going to be incompatible anyway, it would have made sense to design a totally new system free of the above kludges.

4. It has far too much redundant data; the structures are a mess of meaningless fields.

I will point you to the Original Amiga File System (https://wiki.osdev.org/FFS_(Amiga); the link says FFS, but it actually describes the Original Amiga File System, not the Amiga Fast File System, which was compatible but optimised away a lot of legacy cruft) as an example of a very simple but properly implemented hierarchical file system. It too suffers from problem 4 above, being full of legacy cruft, but at least the overall structure is logical and very easy to parse in memory.

Re: Why not FAT file systems?

Posted: Thu Oct 08, 2020 12:26 pm
by rdos
bloodline wrote:
rdos wrote:I think FAT works fine, so I see no reason to implement anything more complex.
I can think of several reasons why FAT is an abomination from the third circle of Hell:

1. It’s clear from the way FAT works that it was never meant to have directories; the design is fundamentally suited to a single flat root directory with no subdirectories. The implementation of subdirectories is a horrible hack.

2. The long file name kludge is a hideous bastardisation of the directory table, and it makes traversing a directory slow and complex, requiring many in-memory buffers (and logic) to cope.

3. It is non-extensible by design: I have had to write detection code for both FAT16 and FAT32, plus two separate code paths to handle the different FAT entry sizes. If the implementations were going to be incompatible anyway, it would have made sense to design a totally new system free of the above kludges.

4. It has far too much redundant data; the structures are a mess of meaningless fields.

I will point you to the Original Amiga File System (https://wiki.osdev.org/FFS_(Amiga); the link says FFS, but it actually describes the Original Amiga File System, not the Amiga Fast File System, which was compatible but optimised away a lot of legacy cruft) as an example of a very simple but properly implemented hierarchical file system. It too suffers from problem 4 above, being full of legacy cruft, but at least the overall structure is logical and very easy to parse in memory.
The Amiga FS doesn't seem very efficient. You need to follow long linked lists, which is highly inefficient since those cannot easily be cached. FAT allows the FAT table to be partially or completely cached, and when you read one FAT sector you get links to many blocks, which is much more efficient than following scattered links.

FAT32 stores the first cluster of the root directory in the boot record, so it doesn't suffer from the problem of a fixed-size root directory.

As for the long filename kludge, I agree that this was a pretty horrible design. Especially the idea to put wide character codes in the extended directory entries.

Also, if you want to support FAT, you need to support FAT12, FAT16 and FAT32. FAT12 is pretty messy given that 1.5 bytes are used as FAT links. OTOH, it is basically only the FAT table code that is different between FAT versions, and so most of the other code can be shared.

Another issue that is a bit troublesome is the 4G file size limitation.

Re: Why not FAT file systems?

Posted: Fri Oct 09, 2020 8:51 am
by bzt
rdos wrote:The Amiga FS doesn't seem very efficient. You need to follow long linked lists, which is highly inefficient since those cannot easily be cached.
I don't see why not. Both are using long linked lists, why couldn't you cache the links for AmigaFS as well?
rdos wrote:FAT allows the FAT table to be partially or completely cached, and when you read one FAT sector you get links to many blocks, which is much more efficient than following scattered links.
Little correction: technically FAT uses exactly the same links; the only difference is that FAT collects the links into a table instead of mixing them with the data (which makes reading all the links at once easier).
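To illustrate: with the whole FAT cached in memory as an array, following a FAT32 chain is just array indexing. An untested sketch (the 28-bit mask and the >= 0x0FFFFFF8 end-of-chain value are from the FAT32 spec; the function name is made up):

Code: Select all

```c
#include <stdint.h>
#include <stddef.h>

/* Untested sketch: count the clusters in a FAT32 chain using an
   in-memory copy of the FAT. One 512-byte FAT sector holds 128 of
   these 32-bit links, so a single cached read resolves many lookups. */
size_t chain_length(const uint32_t *fat, uint32_t cluster)
{
    size_t n = 0;
    cluster &= 0x0FFFFFFF;                /* only the low 28 bits are the link */
    while (cluster >= 2 && cluster < 0x0FFFFFF8) {  /* >= 0x0FFFFFF8 ends the chain */
        n++;
        cluster = fat[cluster] & 0x0FFFFFFF;
    }
    return n;
}
```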
rdos wrote:Also, if you want to support FAT, you need to support FAT12, FAT16 and FAT32. FAT12 is pretty messy given that 1.5 bytes are used as FAT links.
Yeah, but you'll probably never need FAT12 these days. I agree it looks ugly, but you can actually handle the 1.5 bytes with a single ternary operator, so it doesn't complicate the code much.

Code: Select all

offset = 3 * cluster / 2;
nextcluster = cluster & 1 ?
    ((fat[offset+1] << 4) | (fat[offset] >> 4)) :
    (((fat[offset+1] & 0xF) << 8) | fat[offset]);
(note, I haven't tested this code, just wrote it from memory to give you the idea, fat[] should be an unsigned char array.)
rdos wrote:Another issue that is a bit troublesome is the 4G file size limitation.
That's why exFAT was added to the family. It is much more complex than the other FAT file systems though. And yes, the file size limitation is another reason why FAT isn't recommended as your root file system.

Cheers,
bzt

Re: Why not FAT file systems?

Posted: Fri Oct 09, 2020 8:53 am
by nexos
Does anyone know of tools for HPFS? I have been wanting to try using that.

Re: Why not FAT file systems?

Posted: Fri Oct 09, 2020 9:20 am
by bzt
nexos wrote:Does anyone know of tools for HPFS? I have been wanting to try using that.
If you mean the OS/2 file system, then here are some links for the tools (like mkfs.hpfs), and you can compile an HPFS driver kernel module for Linux (to support mounting).

Cheers,
bzt

Re: Why not FAT file systems?

Posted: Fri Oct 09, 2020 1:26 pm
by rdos
bzt wrote:
rdos wrote:The Amiga FS doesn't seem very efficient. You need to follow long linked lists, which is highly inefficient since those cannot easily be cached.
I don't see why not. Both are using long linked lists, why couldn't you cache the links for AmigaFS as well?
The caching is somewhat automatic for FAT, since you can just cache disc contents and then use that memory to read and update links. With Amiga FS, the links are scattered across many different sectors, and some might be cached while others might not. In the worst case, you need to read & cache a sector for every single link access. Even when they are cached, you need to look up the buffer for each link.

OTOH, with modern hardware, and the probable size limitations of Amiga FS, you could possibly read & cache the entire disc and that way create an efficient implementation.
bzt wrote: Yeah, but you'll probably never need FAT12 these days. I agree it looks ugly, but you can actually handle the 1.5 bytes with a single ternary operator, so it doesn't complicate the code much.

Code: Select all

offset = 3 * cluster / 2;
nextcluster = cluster & 1 ?
    ((fat[offset+1] << 4) | (fat[offset] >> 4)) :
    (((fat[offset+1] & 0xF) << 8) | fat[offset]);
(note, I haven't tested this code, just wrote it from memory to give you the idea, fat[] should be an unsigned char array.)
I don't use any C code in my FS drivers. They are all pure assembly. :-)
bzt wrote:
rdos wrote:Another issue that is a bit troublesome is the 4G file size limitation.
That's why exFAT was added to the family. It is much more complex than the other FAT file systems though. And yes, the file size limitation is another reason why FAT isn't recommended as your root file system.
Having the possibility to create files larger than 4G is nice, but it's hardly a must. The only time I think I've had an issue with this is when I had ADC sample buffers with close to 100G of data, and then realized I couldn't save them as a file on disc. OTOH, I could write the data to fixed sectors, or allocate a couple of 128G partitions and write it there. That would be much more efficient, given that the overhead of file links and metadata would be non-existent and I could just read/write n consecutive sectors.

Re: Why not FAT file systems?

Posted: Mon Oct 12, 2020 5:58 am
by bloodline
rdos wrote:
bloodline wrote:
rdos wrote:I think FAT works fine, so I see no reason to implement anything more complex.
I can think of several reasons why FAT is an abomination from the third circle of Hell:

1. It’s clear from the way FAT works that it was never meant to have directories; the design is fundamentally suited to a single flat root directory with no subdirectories. The implementation of subdirectories is a horrible hack.

2. The long file name kludge is a hideous bastardisation of the directory table, and it makes traversing a directory slow and complex, requiring many in-memory buffers (and logic) to cope.

3. It is non-extensible by design: I have had to write detection code for both FAT16 and FAT32, plus two separate code paths to handle the different FAT entry sizes. If the implementations were going to be incompatible anyway, it would have made sense to design a totally new system free of the above kludges.

4. It has far too much redundant data; the structures are a mess of meaningless fields.

I will point you to the Original Amiga File System (https://wiki.osdev.org/FFS_(Amiga); the link says FFS, but it actually describes the Original Amiga File System, not the Amiga Fast File System, which was compatible but optimised away a lot of legacy cruft) as an example of a very simple but properly implemented hierarchical file system. It too suffers from problem 4 above, being full of legacy cruft, but at least the overall structure is logical and very easy to parse in memory.
The Amiga FS doesn't seem very efficient. You need to follow long linked lists, which is highly inefficient since those cannot easily be cached. FAT allows the FAT table to be partially or completely cached, and when you read one FAT sector you get links to many blocks, which is much more efficient than following scattered links.
Well, neither FAT nor AmigaFS is particularly special when it comes to efficiency, but the only time AmigaFS is less efficient than FAT is when listing a directory's contents, which does require jumping around the disk... But how often is listing a dir a time-critical event? The designers added directory caching in the second version when hard drives became popular, and modern SSDs don't really care... But if you know the name of an entry in a directory, AmigaFS uses a hash table to find it, so it is MUCH faster.
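For reference, the AmigaDOS name hash is tiny. An untested sketch (the multiply-by-13 algorithm, the 0x7FF mask, and the 72-slot table for 512-byte blocks are as described in the AmigaFS documentation; the function name is mine):

Code: Select all

```c
#include <ctype.h>
#include <string.h>

/* Untested sketch of the AmigaDOS directory-name hash. The result
   indexes the hash table in a directory header block, so a lookup
   starts at one known slot instead of scanning every entry. */
unsigned amiga_hash(const char *name, unsigned table_size)
{
    unsigned hash = (unsigned)strlen(name);
    for (const char *p = name; *p; p++)
        hash = (hash * 13 + toupper((unsigned char)*p)) & 0x7FF;
    return hash % table_size;   /* table_size is 72 for 512-byte blocks */
}
```

Collisions are chained through a link field in each entry's header block, so the worst case degrades to a short list walk.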
FAT32 stores the first cluster of the root directory in the boot record, so it doesn't suffer from the problem of a fixed-size root directory.
I do like the simplicity of this part of the FAT design, but it's fatally flawed for SSDs, as all write operations require hitting the FAT sectors... with AmigaFS, each file essentially has its own FAT.

The obvious comeback you can point out here is that AmigaFS has a bitmap... I dislike this part of the design, so I used a "freefile" as a backwards-compatible solution to reduce flash wear when I used the file system on SD cards (in an audio sampler design I was working on, which needed to write to disk a lot).

Also, if you want to support FAT, you need to support FAT12, FAT16 and FAT32. FAT12 is pretty messy given that 1.5 bytes are used as FAT links. OTOH, it is basically only the FAT table code that is different between FAT versions, and so most of the other code can be shared.
I only need to support FAT16 and FAT32... and yes, I've reduced the FAT type issue to a single getCluster() function which handles that.
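Something along these lines; this is only my untested guess at what such a single dispatch point looks like (names are made up, the FAT is assumed to be a raw byte image in memory, and the FAT12 case is included for completeness):

Code: Select all

```c
#include <stdint.h>

/* Untested sketch: all the FAT-type-specific logic in one "next link"
   function so the rest of the driver can be shared. */
enum fat_type { FAT12, FAT16, FAT32 };

uint32_t get_cluster(enum fat_type type, const uint8_t *fat, uint32_t cluster)
{
    uint32_t off;
    switch (type) {
    case FAT12:
        off = cluster + cluster / 2;          /* 1.5 bytes per link */
        return (cluster & 1)
            ? (uint32_t)(fat[off] >> 4) | ((uint32_t)fat[off + 1] << 4)
            : (uint32_t)fat[off] | ((uint32_t)(fat[off + 1] & 0x0F) << 8);
    case FAT16:
        off = cluster * 2;                    /* little-endian 16-bit link */
        return (uint32_t)fat[off] | ((uint32_t)fat[off + 1] << 8);
    default:                                  /* FAT32: low 28 bits are the link */
        off = cluster * 4;
        return ((uint32_t)fat[off] | ((uint32_t)fat[off + 1] << 8) |
                ((uint32_t)fat[off + 2] << 16) | ((uint32_t)fat[off + 3] << 24))
               & 0x0FFFFFFF;
    }
}
```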
As for the long filename kludge, I agree that this was a pretty horrible design. Especially the idea to put wide character codes in the extended directory entries.
Ok... I've managed to write some code which sort of works... If the attribute byte == 0x0F, use the entry data to build the name for the immediately following normal entry; if a normal entry starts with 0xE5, ignore it.
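The fiddly part is that the 13 UCS-2 characters in each 0x0F entry are split across three byte ranges (1-10, 14-25, 28-31, per the LFN layout). An untested sketch of pulling them out of one entry (the function name is made up):

Code: Select all

```c
#include <stdint.h>

/* Untested sketch: extract the 13 UCS-2 characters from one 0x0F
   (long file name) directory entry. Entries arrive on disk in reverse
   order, so the caller slots each batch of 13 into the full name
   using the sequence number in byte 0 (bit 6 marks the last entry). */
void lfn_chars(const uint8_t *e, uint16_t *out)   /* out holds 13 values */
{
    static const int off[13] = {1, 3, 5, 7, 9, 14, 16, 18, 20, 22, 24, 28, 30};
    for (int i = 0; i < 13; i++)
        out[i] = (uint16_t)e[off[i]] | ((uint16_t)e[off[i] + 1] << 8);
}
```

Unused character slots are padded with 0x0000 then 0xFFFF, which is why the in-memory buffers get messy.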

The problem I have now is that the MacOS FAT FS driver seems intent on littering the drive with 4K metadata files, whose only identifying feature is a leading underscore :-x

Re: Why not FAT file systems?

Posted: Mon Oct 12, 2020 3:16 pm
by Octocontrabass
bloodline wrote:I do like the simplicity of this part of the FAT design, but it's fatally flawed for SSDs, as all write operations require hitting the FAT sectors... with AmigaFS, each file essentially has its own FAT.
SSDs perform wear leveling, so this isn't an issue as long as you follow reasonable SSD usage patterns (e.g. don't completely fill the disk, don't power the disk down when it's idle, use ATA TRIM/SCSI UNMAP when deleting data).
bloodline wrote:The obvious comeback you can point out here is that AmigaFS has a bitmap... I dislike this part of the design, so I used a "freefile" as a backwards-compatible solution to reduce flash wear when I used the file system on SD cards (in an audio sampler design I was working on, which needed to write to disk a lot).
The SD specification requires FAT on SD cards, and some cards will misbehave if you format them differently. All but the cheapest SD cards perform wear leveling anyway, so the best thing you can do for them is following the write patterns suggested in the SD specification.

Re: Why not FAT file systems?

Posted: Tue Oct 13, 2020 3:57 am
by bloodline
Octocontrabass wrote:
bloodline wrote:I do like the simplicity of this part of the FAT design, but it's fatally flawed for SSDs, as all write operations require hitting the FAT sectors... with AmigaFS, each file essentially has its own FAT.
SSDs perform wear leveling, so this isn't an issue as long as you follow reasonable SSD usage patterns (e.g. don't completely fill the disk, don't power the disk down when it's idle, use ATA TRIM/SCSI UNMAP when deleting data).
Like so much in the world of computing, most of our systems account for Microsoft and Intel's crappy design decisions :lol:
bloodline wrote:The obvious comeback you can point out here is that AmigaFS has a bitmap... I dislike this part of the design, so I used a "freefile" as a backwards-compatible solution to reduce flash wear when I used the file system on SD cards (in an audio sampler design I was working on, which needed to write to disk a lot).
The SD specification requires FAT on SD cards, and some cards will misbehave if you format them differently. All but the cheapest SD cards perform wear leveling anyway, so the best thing you can do for them is following the write patterns suggested in the SD specification.
Typical, I didn't read the spec beyond interfacing with it via SPI, and just used FFS as I already knew how that worked. I'm king of reinventing the wheel, but perhaps we all are, and that's why we are here :wink:

Re: Why not FAT file systems?

Posted: Tue Oct 13, 2020 5:40 am
by clementttttttttt
Fragmentation. Have you ever run a defragmentation program on Linux (with ext*) before?

Re: Why not FAT file systems?

Posted: Tue Oct 13, 2020 9:54 am
by bloodline
clementttttttttt wrote:Fragmentation. Have you ever run a defragmentation program on Linux (with ext*) before?
Very few filesystems are immune to fragmentation. The problem is somewhat academic now, with very large disks that allow mitigation strategies, and SSDs that are largely unaffected by fragmentation.

Re: Why not FAT file systems?

Posted: Tue Oct 13, 2020 11:49 am
by Octocontrabass
bloodline wrote:
Octocontrabass wrote:SSDs perform wear leveling, so this isn't an issue as long as you follow reasonable SSD usage patterns (e.g. don't completely fill the disk, don't power the disk down when it's idle, use ATA TRIM/SCSI UNMAP when deleting data).
Like so much in the world of computing, most of our systems account for Microsoft and Intel's crappy design decisions :lol:
Actually, putting wear leveling inside the SSD instead of exposing it to software allows manufacturers more flexibility to do clever things like using multiple write densities within a single flash chip to provide a high-speed write cache for a chip that's otherwise very slow to write. (I bought a SSD from Intel that does this. I don't do anything that would fill up the cache faster than the drive can empty it, so it works just as well as a more expensive SSD for me.)

Re: Why not FAT file systems?

Posted: Tue Oct 13, 2020 12:53 pm
by nexos
bloodline wrote:Like so much in the world of computing, most of our systems account for Microsoft and Intel's crappy design decisions
I have just learned to accept the fact that Microsoft and Intel know what they're doing, despite what some people here say. x86 may be somewhat of a mess, but IBM is as much to blame for that. I personally find x86_64 to be a great architecture, UEFI a slightly bloated but still good standard, and ACPI to have its own sort of interesting beauty. I am not going to question Intel, who we have to thank for USB, PCI, cheap and affordable PCs, and standardized power management. Linus Torvalds and I differ on this :D :wink: .