Hi,
IMHO the main function of the VFS layer is to handle requests to open and close files. This often involves handling mount and unmount, and maintaining a list of which file systems are mounted where. For example, if an application asks to open the file "/foo/bar" and the VFS layer finds that "/foo" is where a file system is mounted, then the VFS layer would ask the file system driver for the file "/bar". Anything more than this (e.g. caching) is optional.
[note: an example of a VFS layer that doesn't use mount/unmount would be DOS, where "C:/foo" gets redirected to the filesystem driver assigned to "C:" (a device list would be used instead of a list of mount points)].
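To make that concrete, here's a minimal sketch (in C, with hypothetical structure and function names) of how a VFS might keep that list and resolve a path to the right file system driver. Longest-prefix matching lets a mount like "/foo/bar" win over "/foo":
Code: Select all
#include <stddef.h>
#include <string.h>

struct fs_driver;                         /* provided by each file system driver */

struct mount {
    const char *path;                     /* where it's mounted, e.g. "/foo" */
    struct fs_driver *driver;             /* driver responsible for this mount */
    struct mount *next;
};

static struct mount *mount_list;          /* list of mounted file systems */

/* Find the mount whose path is the longest prefix of 'path'; the remainder
   (e.g. "/bar" for "/foo/bar" mounted at "/foo") is returned via 'rest'.
   Handling for the root mount ("/") is omitted for brevity. */
static struct mount *vfs_resolve(const char *path, const char **rest)
{
    struct mount *best = NULL;
    size_t best_len = 0;

    for (struct mount *m = mount_list; m != NULL; m = m->next) {
        size_t len = strlen(m->path);
        if (len > best_len && strncmp(path, m->path, len) == 0 &&
            (path[len] == '/' || path[len] == '\0')) {
            best = m;
            best_len = len;
        }
    }
    if (best != NULL)
        *rest = path + best_len;
    return best;
}
A DOS-style VFS would do the same lookup against a device list keyed by drive letter instead of a list of mount points.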
Poseidon wrote: I now have a pretty good idea of what a VFS does, but should every device have its own file system driver (so the file system driver can be loaded multiple times, which could be handy for writing data to memory and flushing it all to the hard disk every 30 seconds), or should one file system driver be shared by multiple block devices (which wastes less memory, but makes a write-to-memory-then-flush-every-30-seconds scheme more complex, since multiple DMA buffers are needed)?
I'd be tempted to use multi-threaded file system drivers, where a file system driver's code is loaded into memory once, and multiple threads are used for multiple devices (e.g. one thread per file system). This is a design issue though - how you decide to do it is up to you.
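As a rough sketch of that option (hypothetical request queue API, and pthreads-style threads for illustration), the driver's code exists once while each mounted volume gets its own state, queue, and worker thread:
Code: Select all
#include <pthread.h>

struct fs_request;                        /* open/read/write/close, etc. */
struct request_queue;

struct volume {
    struct request_queue *queue;          /* this volume's request queue */
    pthread_t worker;                     /* thread servicing this volume */
};

/* Hypothetical helpers a real driver would provide. */
extern struct request_queue *queue_create(void);
extern struct fs_request *queue_pop(struct request_queue *q);   /* blocks */
extern void fat_handle_request(struct volume *vol, struct fs_request *req);

/* Each mounted volume runs this loop; the code is shared, the state isn't. */
static void *fat_worker(void *arg)
{
    struct volume *vol = arg;
    for (;;) {
        struct fs_request *req = queue_pop(vol->queue);
        fat_handle_request(vol, req);
    }
    return NULL;
}

/* Called when another FAT volume is mounted: the driver's code is already
   in memory, so only per-volume state and one more thread are added. */
int fat_mount(struct volume *vol)
{
    vol->queue = queue_create();
    return pthread_create(&vol->worker, NULL, fat_worker, vol);
}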
Poseidon wrote: I now get the idea of storing the data that is written to the hard disk in the VFS driver's memory and then writing it every 30 seconds to the file system drivers, so the file system driver doesn't have to do that.
I'd make the VFS layer write data to the file system driver every N seconds, where N can be set differently for each file system. For removable media you'd want N to be less than 30 seconds because you can't be too sure when the user will remove it. You'd also want the option of setting N to 0 (effectively providing a "write-through" option).
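A sketch of how that per-file-system setting might look (hypothetical names; the timer callback is assumed to be called once per second):
Code: Select all
struct dirty_data;                        /* some block of written data */

struct mount_cache {
    unsigned flush_interval;              /* N seconds; 0 = write-through */
    unsigned ticks_since_flush;
};

/* Hypothetical helpers the cache would provide. */
extern void cache_store(struct dirty_data *data);
extern void fs_flush_now(struct dirty_data *data);
extern void fs_flush_all(struct mount_cache *mc);

/* Application writes land here. */
void vfs_write(struct mount_cache *mc, struct dirty_data *data)
{
    cache_store(data);                    /* keep the new data in RAM */
    if (mc->flush_interval == 0)
        fs_flush_now(data);               /* N = 0: write-through, no delay */
}

/* Assumed to be called once per second by the VFS's timer. */
void vfs_timer_tick(struct mount_cache *mc)
{
    if (mc->flush_interval != 0 &&
        ++mc->ticks_since_flush >= mc->flush_interval) {
        fs_flush_all(mc);                 /* push dirty data to the FS driver */
        mc->ticks_since_flush = 0;
    }
}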
To begin with I'd get the VFS working without any caching though (if the OS design allows it).
You may also want to consider how storage device drivers work. For example, on some OSs a storage device driver creates one or more files in the VFS (e.g. a hard disk driver might create the files "/dev/hda", "/dev/hda0" and "/dev/hda1") and the file system drivers can mount any file.
In this case, if an application asks for the file "/foo/bar" the VFS would find that "/foo" is a file system and ask the associated file system driver for the file "/bar". The file system driver would read data from the file "/dev/hda0" to find/access the file "/bar". To do this, the file system driver would ask the VFS to read (and write) data in the file "/dev/hda0", and the VFS would redirect those requests to the hard disk driver.
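To make that layering concrete, here's a hedged sketch (hypothetical names): the FAT driver never talks to the disk directly, it reads its backing file through the same VFS interface that applications use, and the VFS forwards those reads to the hard disk driver.
Code: Select all
#include <stddef.h>
#include <stdint.h>

struct file;                              /* a VFS file handle */

struct fat_volume {
    struct file *backing_file;            /* e.g. the handle for "/dev/hda0" */
    /* ... FAT-specific state ... */
};

/* Hypothetical VFS entry point and FAT helper. */
extern int vfs_read(struct file *f, uint64_t off, void *buf, size_t len);
extern uint64_t cluster_to_byte_offset(struct fat_volume *vol, uint32_t cluster);

/* FAT driver fetching file data for a request like "/bar". */
int fat_read_cluster(struct fat_volume *vol, uint32_t cluster,
                     void *buf, size_t len)
{
    uint64_t off = cluster_to_byte_offset(vol, cluster);
    /* Second trip through the VFS: this request is redirected to the
       hard disk driver that created "/dev/hda0". */
    return vfs_read(vol->backing_file, off, buf, len);
}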
With the above, the VFS layer would be involved twice for each request (if no caching is used). When caching is used you don't want to cache data for the file "/foo/bar" and also cache the same data for the file "/dev/hda0" - it would be best to cache "/foo/bar" only and leave "/dev/hda0" cache-less. Better still would be to have separate file IO functions (one set for cached IO and the other set for no cache). This is because a file system driver can mount any file.
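As a sketch of that split (hypothetical names again), applications would use the cached set while file system drivers use the uncached set for their backing files, so the same data is never cached twice:
Code: Select all
#include <stddef.h>
#include <stdint.h>

struct file;

/* Hypothetical cache and driver dispatch helpers. */
extern int cache_lookup(struct file *f, uint64_t off, void *buf, size_t len);
extern void cache_insert(struct file *f, uint64_t off, const void *buf, size_t len);
extern int driver_read(struct file *f, uint64_t off, void *buf, size_t len);

/* Used for application requests like "/foo/bar": cached at this level. */
int vfs_read_cached(struct file *f, uint64_t off, void *buf, size_t len)
{
    if (cache_lookup(f, off, buf, len) == 0)
        return 0;                         /* cache hit, no driver involved */
    int err = driver_read(f, off, buf, len);
    if (err == 0)
        cache_insert(f, off, buf, len);   /* cache the top level only */
    return err;
}

/* Used by file system drivers for backing files like "/dev/hda0":
   always goes to the underlying driver, never caches. */
int vfs_read_uncached(struct file *f, uint64_t off, void *buf, size_t len)
{
    return driver_read(f, off, buf, len);
}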
Consider this:
Code: Select all
# mount the hard drive as "/foo"
mount -tFAT /dev/hda0 /foo
# mount a file on the file system "/foo" as "/foo/bar"
mount -tFAT /foo/image.bin /foo/bar
# mount a file on the file system "/foo/bar" as "/temp"
mount -tFAT /foo/bar/image.bin /temp
Now, see if you can work out what happens when an application asks to read from the file "/temp/hello.txt"...
Cheers,
Brendan