Filesystem horror scenario handling
I'm currently working on my first writable filesystem and whilst developing it I've bumped into all sorts of intriguing scenarios such as:
- What should happen when a file that is opened by one process is deleted by another?
- What should happen when a directory that is the current directory for one process is deleted by another?
- What should happen when the root directory for a process is removed (or unmounted)?
What I'm wondering is what is the behaviour you guys implemented for these actions? My current approach for all but the first of these is a variant of crash and burn, which obviously ain't gonna cut it long term.
Re: Filesystem horror scenario handling
"What should happen when a file that is opened by one process is deleted by another"
I would not allow it to be deleted (return an error from the delete system call), unless the owner of the deleting process is a superuser or root.
"What should happen when a directory that is the current directory for one process is deleted by another"
Same as above.
"What should happen when the root directory for a process is removed (or unmounted)"
I wouldn't allow this either.
In all of these cases I'm pretty sure both Linux and Windows (Vista) prevent the file/directory from being removed, and most other operating systems do the same.
Re: Filesystem horror scenario handling
You can't categorically block those from happening, as the unmount might be forced, for example by removable media being ejected, in which case it is not a bad idea to have some backup plan. Also, removing the current directory of another process is allowed under Linux; the kernel keeps enough state alive to get you back to non-removed territory.
Re: Filesystem horror scenario handling
I think when a process's working directory is removed, the working directory should be moved up through its parent directories until it reaches one that still exists, and any open files that were in that directory are closed. As for files, maybe let the process delete it, but if the other process writes to it again, recreate it; otherwise you might have to close it.
And if the removal of a file or filesystem is forced, then I would just mark all the open files of the processes as closed. There really isn't much else you can do, as the files are no longer accessible. But if you are unmounting a filesystem and you have enough time beforehand, you probably should write any cached changes to disk before removing the filesystem.
As a test, I tried removing a USB pen-drive while some of its files and directories were in use or otherwise open. These were the results under Gentoo Linux with GNOME: the file manager (Nautilus) closed. The archive manager remained open and I was able to traverse directories, but any attempt to open or extract a file was met with an error saying the file no longer existed. With other programs, I was able to modify the file in memory, but when I tried to save to disk I got errors.
Re: Filesystem horror scenario handling
The typical answers usually have to do with reference counting. When a file or directory has zero current references, it gets deleted. When you open a file or directory, you increase its reference count. When a program tries to delete a file or directory, you decrease its reference count. When a program closes a file or directory, you decrease the reference count. That way, files and dirs that are in use remain in existence until all apps that have them open do a close on them. Then they are deleted as part of the last "close" operation.
Programs running on filesystems that are being unmounted can be issued "kill" signals, to close all their files, if you like. Programs trying to access open or new files on filesystems that have been unmounted should get error returns, of course.
Re: Filesystem horror scenario handling
I keep file handles (and buffers) until their reference count goes to zero. The file will be physically deleted on the drive, but the handle will exist as long as somebody has it open. As for directory functions, I have an API that takes a lock on the filesystem and then builds a list of files. This list is kept until the directory handle is closed, which also means that changes made after the directory handle was opened will not be visible. A delete operation first locks the filesystem, which solves the issue.
The tricky thing is to handle removable media and file system caches. I'm not sure if my implementation will handle this gracefully in every possible case yet.
Re: Filesystem horror scenario handling
berkus wrote: Wrong. In Linux you're allowed to delete any files or directories at any time as long as you have the rights to do so. Only when your process issues an open/read/readdir or similar sort of system call will it receive a nice error.

Actually, while the directory entry is removed, so you'll have a hard time calling open() on it any longer, reading and writing a deleted file works just fine. It's a common idiom for temporary files to open them and then immediately unlink them. You can still use them without restrictions until you close them.
Re: Filesystem horror scenario handling
berkus wrote: Thank you, but I know that already.

Sure, you knew that already, but perhaps the OP didn't, and you kind of just glossed over the possibility of this anyway in your previous explanation.
Also your attitude of late seems to have taken a turn for the worse.
- Brynet-Inc
Re: Filesystem horror scenario handling
berkus wrote: Definitely, I come here mostly for teh lulz, and I'm becoming somewhat annoyed by the incapability of the general populace to figure out the most trivial things. It's kind of frustrating.

While I share your concern for humanity, we have always, as a species, been collectively stupid; there has never been a time when the educated have outnumbered the uneducated.
A computer illiterate brain surgeon for example, would be infinitely more capable of performing surgery than you, and they might hold your incompetence in the same regard.
Fortunately I've taken to whining about humanity to more productive places, like IRC.. people are always at their bestest there.
- JackScott
Re: Filesystem horror scenario handling
berkus wrote: Forum trolling is just a little vent I use from time to time.

Please don't.
Re: Filesystem horror scenario handling
berkus wrote: It gets too boring to answer 99% of the questions with "Use google."

Then don't answer. Problem solved.