Candy wrote:
I count the driver plus the physical layout as the file system itself.
On a dual-boot Windows / Linux system with a shared FAT32 partition, that would mean you're using a different "file system" on the same partition depending on which OS (and driver) you booted.
In most (if not all) current filesystems that "support" undeleting files, files are normally deleted and can only be brought back in special circumstances, among them that the file happens not to have been overwritten yet and that it is stored contiguously on disk.
Hmpf. Sometimes I feel like I'm unable to make myself clear.
Yes, when the "delete" function of your OS is to write some zeroes to some crucial metadata structure, and the "undelete" function of your OS is to hope that nothing bad has happened in the file system yet, you're in deep horse droppings.
That's why your OS should be smart enough to hold "deleted" files in a meta-state from which they can be safely and reliably recovered for as long as they're in this state (i.e., a "trashcan", or whatever you'd like to call it).
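To make that concrete, here is a rough sketch of what such a meta-state "delete" could look like when done above the FS driver. The trash_delete() name, the /var/trash location, and the index file are made up for illustration; it assumes a C++17 std::filesystem environment and a same-volume rename:

```cpp
// Hypothetical sketch: "delete" as a move into a trash area, not a destructive unlink.
// trash_delete() and the trash location are illustrative assumptions, not a real API.
#include <filesystem>
#include <fstream>
#include <string>
#include <system_error>

namespace fs = std::filesystem;

const fs::path kTrashDir   = "/var/trash";          // assumed trash location
const fs::path kTrashIndex = kTrashDir / "index.txt";

// Instead of destroying metadata, move the file into the trash and remember
// where it came from. The file stays fully intact until it is expired.
bool trash_delete(const fs::path& victim)
{
    std::error_code ec;
    fs::create_directories(kTrashDir, ec);

    // Pick a unique name inside the trash so colliding basenames don't clash.
    fs::path target = kTrashDir / (std::to_string(fs::hash_value(victim)) +
                                   "-" + victim.filename().string());

    fs::rename(victim, target, ec);                 // cheap on the same volume
    if (ec)
        return false;                               // cross-device etc.: caller falls back

    // Record original path -> trash name, so an undelete can restore it later.
    std::ofstream(kTrashIndex, std::ios::app)
        << victim.string() << '\t' << target.string() << '\n';
    return true;
}
```

The point of the sketch: nothing destructive happens at "delete" time; data and metadata stay intact until the entry is explicitly expired.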
However, if you /DO/ put it in the filesystem, where you'd only delete files off the physical disk when you need the space they take, you can save deleted files for the entire content of the disk (no lame 4% / 2GB limit or something like that)...
That's what I claim: this handling can be done just as well in the OS, or in the File Server if you're in microkernel land, for any on-disk data structure. You can employ such a scheme on a FAT32 that everyone can read, on an XFS that can handle huge partitions, or on an AmigaFFS you keep around out of nostalgia.
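As an illustration of why this doesn't need to live in the on-disk format, here is a sketch of the "only really delete when you need the space" policy from the quote above, written entirely against generic filesystem calls. The reclaim_space() name and the /var/trash layout are, again, assumptions carried over from the previous sketch:

```cpp
// Hypothetical sketch: reclaim trash space only when the volume actually runs low.
// It uses nothing but generic filesystem operations, so the same code sits on top
// of a FAT32, XFS, or AmigaFFS driver without touching the on-disk format.
#include <cstdint>
#include <filesystem>
#include <map>
#include <system_error>

namespace fs = std::filesystem;

const fs::path kTrashDir = "/var/trash";            // same assumed trash location as above

// Permanently remove the oldest trashed files until at least `needed` bytes are free.
void reclaim_space(const fs::path& volume, std::uintmax_t needed)
{
    std::error_code ec;
    while (fs::space(volume, ec).available < needed)
    {
        // Rank trashed files by last-write time (a rough proxy for how long ago
        // they were deleted) and evict the oldest one.
        std::map<fs::file_time_type, fs::path> by_age;
        for (const auto& entry : fs::directory_iterator(kTrashDir, ec))
            if (entry.is_regular_file() && entry.path().filename() != "index.txt")
                by_age[entry.last_write_time()] = entry.path();

        if (by_age.empty())
            break;                                  // nothing left to evict

        fs::remove(by_age.begin()->second, ec);     // only NOW is the data really gone
    }
}
```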
Seeing as it is very easy to put in this place, it logically works in this place...
You need OS (or tool) support if you want to select a file you deleted last week for undeletion, unless you want to undo every file operation you did since then, one by one. (Hey, you even need OS / tool support for that!)
That means you need an "undelete selector" and an "undo" in your user interface anyway, in addition to the file system driver. You have just needlessly spread the "undelete" logic across two layers.
...it is unreliable if put in a different layer...
How can placing it into data structures (which are hard to change / improve once published) and disk driver code (which runs in rather critical places of your OS) be more reliable than implementing it in user space?
...and it cannot even be made to work decently for most cases...
What the hell are you referring to? Microsoft screwed it up, and KDE screwed it up likewise. Is that proof enough for you that "it cannot be made to work decently"? Damn, we should all stop working on our OSes immediately, since it can't be done any better...
If you want to undelete files, that's OK, but don't use the crap left behind by some programs as "recovery information" or "a feature".
Not "by some programs", but your OS. The OS decides what std::remove() or DeleteFile() do and what not, right? Regardless of the layer you implement it in, the OS has to also provide ListDeletedFiles() and Undelete(), right? The OS is the level where you can cleanly abstract those functions from the underlying FS driver(s), making the concept more flexible, and allowing you to have Undelete() working even on partitions you share with mainstream operating systems.
Instead, you are proposing to work out a new filesystem, throwing away tons of tested, proven, and well-supported concepts and driver code, just because the existing ones don't provide metadata you could just as well hold elsewhere?
Sorry but I'm bewildered. I'd like to have a brain dump from you so I can backtrace where that idea comes from... ???