Filesystem horror scenario handling

davidv1992
Member
Posts: 223
Joined: Thu Jul 05, 2007 8:58 am

Filesystem horror scenario handling

Post by davidv1992 »

I'm currently working on my first writable filesystem and whilst developing it I've bumped into all sorts of intriguing scenarios such as:

What should happen when a file that is opened by one process is deleted by another
What should happen when a directory that is the current directory for one process is deleted by another
What should happen when the root directory for a process is removed (or unmounted)

What I'm wondering is what is the behaviour you guys implemented for these actions? My current approach for all but the first of these is a variant of crash and burn, which obviously ain't gonna cut it long term.
Tosi
Member
Posts: 255
Joined: Tue Jun 15, 2010 9:27 am
Location: Flyover State, United States

Re: Filesystem horror scenario handling

Post by Tosi »

What should happen when a file that is opened by one process is deleted by another
I would not allow it to be deleted (return an error from the delete system call), unless the owner of the deleting process is a superuser such as root.
What should happen when a directory that is the current directory for one process is deleted by another
Same as above.
What should happen when the root directory for a process is removed (or unmounted)
I wouldn't allow this either.

In all of these cases I'm pretty sure both Linux and Windows (Vista) prevent the file/directory from being removed, and most other operating systems do the same.
davidv1992
Member
Posts: 223
Joined: Thu Jul 05, 2007 8:58 am

Re: Filesystem horror scenario handling

Post by davidv1992 »

You can't categorically block these from happening, as the unmount might be forced, for example when removable media is ejected, in which case it's not a bad idea to have some backup plan. Also, removing the current directory of another process is allowed under Linux; the kernel keeps enough alive to get you back to non-removed territory.
Tosi
Member
Posts: 255
Joined: Tue Jun 15, 2010 9:27 am
Location: Flyover State, United States

Re: Filesystem horror scenario handling

Post by Tosi »

I think when a process's working directory is removed, the working directory should be moved up through its parent directories until one that still exists is reached, and any open files that were in that directory are closed. As for files, maybe let the other process delete the file, but recreate it if the process holding it open writes to it again. Otherwise you might have to close it.
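The walk-up-to-a-surviving-parent idea can be sketched as a small helper. This is an illustrative user-space function, not a real kernel API; the name is made up.

```python
import os

def nearest_existing_dir(path):
    """Walk a stale working directory up toward the root until a
    directory that still exists is found."""
    path = os.path.abspath(path)
    while not os.path.isdir(path):
        parent = os.path.dirname(path)
        if parent == path:   # reached the filesystem root
            break
        path = parent
    return path
```

A kernel would do the equivalent on its in-memory directory objects rather than on path strings, but the fallback logic is the same.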

And if the removal of a file or filesystem is forced, then I would just mark all the open files of the processes as closed. There really isn't much else you can do, as the files are no longer accessible. But if you are unmounting a filesystem and you have enough time beforehand, you probably should write any cached changes to disk before removing the filesystem.

As a test, I tried removing a USB pen-drive while some of its files and directories were in use or otherwise open. These were the results under Gentoo Linux with GNOME: the file manager (Nautilus) closed. The archive manager remained open and I was able to traverse directories, but any attempt to open or extract a file was met with an error saying the file no longer existed. With other programs, I was able to modify the file in memory, but when I tried to save to disk I got errors.
bewing
Member
Posts: 1401
Joined: Wed Feb 07, 2007 1:45 pm
Location: Eugene, OR, US

Re: Filesystem horror scenario handling

Post by bewing »

The typical answers usually have to do with reference counting. When a file or directory has zero current references, it gets deleted. When you open a file or directory, you increase its reference count. When a program tries to delete a file or directory, you decrease its reference count. When a program closes a file or directory, you decrease the reference count. That way, files and dirs that are in use remain in existence until all apps that have them open do a close on them. Then they are deleted as part of the last "close" operation.
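The reference-counting scheme bewing describes can be sketched in a few lines. This is a toy in-memory model (the class and field names are invented for illustration), showing deletion deferred to the last close.

```python
class File:
    def __init__(self, name):
        self.name = name
        self.refs = 0          # number of open handles
        self.unlinked = False  # a delete has been requested
        self.on_disk = True    # blocks still allocated

class Fs:
    def __init__(self):
        self.files = {}

    def create(self, name):
        self.files[name] = File(name)

    def open(self, name):
        f = self.files[name]   # KeyError models ENOENT
        f.refs += 1
        return f

    def unlink(self, name):
        # Remove the directory entry immediately; reclaim storage
        # only if nobody has the file open.
        f = self.files.pop(name)
        f.unlinked = True
        if f.refs == 0:
            f.on_disk = False

    def close(self, f):
        f.refs -= 1
        if f.refs == 0 and f.unlinked:
            f.on_disk = False  # the last close reclaims the blocks
```

The same pattern applies to directories: a deleted-but-open directory keeps its in-memory object alive, it just can't be looked up by name any more.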

Programs running on filesystems that are being unmounted can be issued "kill" signals, to close all their files, if you like. Programs trying to access open or new files on filesystems that have been unmounted should get error returns, of course.
qw
Member
Posts: 792
Joined: Mon Jan 26, 2009 2:48 am

Re: Filesystem horror scenario handling

Post by qw »

rdos
Member
Posts: 3347
Joined: Wed Oct 01, 2008 1:55 pm

Re: Filesystem horror scenario handling

Post by rdos »

I keep file handles (and buffers) until their reference counts go to zero. The file will be physically deleted on the drive, but the handle will exist as long as somebody has it open. As for directory functions, I have an API that takes a lock on the filesystem and then builds a list of files. This list is kept until the directory handle is closed, which also means that changes made after the directory handle was opened will not be visible. A delete operation will first lock the filesystem, and thus this solves the issue.
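The snapshot-on-open approach can be sketched as follows. This is not rdos's actual API, just an illustration of the idea: the directory handle copies the entry list under the filesystem lock, so later deletions are invisible through that handle.

```python
import threading

class DirHandle:
    def __init__(self, entries):
        # Private copy of the listing, frozen at open time.
        self.entries = entries

class Volume:
    def __init__(self, names):
        self.lock = threading.Lock()  # serializes listing and delete
        self.entries = set(names)

    def open_dir(self):
        with self.lock:
            return DirHandle(sorted(self.entries))

    def delete(self, name):
        with self.lock:
            self.entries.discard(name)
```

The trade-off is memory for consistency: the snapshot can grow large for huge directories, but readers never see a half-modified listing.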

The tricky thing is to handle removable media and file system caches. I'm not sure if my implementation will handle this gracefully in every possible case yet.
Kevin
Member
Posts: 1071
Joined: Sun Feb 01, 2009 6:11 am
Location: Germany

Re: Filesystem horror scenario handling

Post by Kevin »

berkus wrote:Wrong. In linux you're allowed to delete any files or directories at any time as long as you have the rights to do so. Only when your process issues a open/read/readdir or similar sort of system call it will receive a nice error.
Actually, while the directory entry is removed (so you'll have a hard time open()ing it again), reading from and writing to a deleted file works just fine. It's a common idiom for temporary files to open them and then immediately unlink them. You can still use them without restrictions until you close them.
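The temp-file idiom Kevin describes looks like this on a POSIX system (it relies on POSIX unlink semantics and won't behave the same on Windows):

```python
import os
import tempfile

# Create a file, unlink it immediately, and keep using the fd:
# the directory entry is gone, but the open fd keeps the inode alive.
path = os.path.join(tempfile.mkdtemp(), "scratch")
fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
os.unlink(path)                      # name removed from the directory
os.write(fd, b"still works")         # reads and writes still succeed
os.lseek(fd, 0, os.SEEK_SET)
data = os.read(fd, 64)
os.close(fd)                         # storage reclaimed on last close
```

The nice property is automatic cleanup: if the process crashes, the kernel's close on exit drops the last reference and the space is reclaimed with no stale temp file left behind.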
Developer of tyndur - community OS of Lowlevel (German)
quok
Member
Posts: 490
Joined: Wed Oct 18, 2006 10:43 pm
Location: Kansas City, KS, USA

Re: Filesystem horror scenario handling

Post by quok »

berkus wrote:
Kevin wrote:
berkus wrote:Wrong. In linux you're allowed to delete any files or directories at any time as long as you have the rights to do so. Only when your process issues a open/read/readdir or similar sort of system call it will receive a nice error.
Actually, while the directory entry is removed (so you'll have a hard time open()ing it again), reading from and writing to a deleted file works just fine. It's a common idiom for temporary files to open them and then immediately unlink them. You can still use them without restrictions until you close them.
Thank you, but I know that already.
Sure, you knew that already, but perhaps the OP didn't, and you kind of just glossed over the possibility of this anyway in your previous explanation.

Also your attitude of late seems to have taken a turn for the worse.
Brynet-Inc
Member
Posts: 2426
Joined: Tue Oct 17, 2006 9:29 pm
Libera.chat IRC: brynet
Location: Canada

Re: Filesystem horror scenario handling

Post by Brynet-Inc »

berkus wrote:
quok wrote:Sure, you knew that already, but perhaps the OP didn't, and you kind of just glossed over the possibility of this anyway in your previous explanation.

Also your attitude of late seems to have taken a turn for the worse.
Definitely, I come here mostly for teh lulz and becoming somewhat annoyed by incapability of the general populace to figure out the most trivial things. It's kind of frustrating.
While I share your concern for humanity, we have always, as a species, been collectively stupid; there has never been a time when the educated outnumbered the uneducated.

A computer-illiterate brain surgeon, for example, would be infinitely more capable of performing surgery than you, and they might hold your incompetence in the same regard.

Fortunately I've taken to whining about humanity to more productive places, like IRC.. people are always at their bestest there.
Twitter: @canadianbryan. Award by smcerm, I stole it. Original was larger.
JackScott
Member
Posts: 1036
Joined: Thu Dec 21, 2006 3:03 am
Location: Hobart, Australia
Mastodon: https://aus.social/@jackscottau
Matrix: @JackScottAU:matrix.org
GitHub: https://github.com/JackScottAU

Re: Filesystem horror scenario handling

Post by JackScott »

berkus wrote:Forum trolling is just a little vent I use from time to time.
Please don't.
qw
Member
Posts: 792
Joined: Mon Jan 26, 2009 2:48 am

Re: Filesystem horror scenario handling

Post by qw »

berkus wrote:It gets too boring to answer 99% of the questions with "Use google."
Then don't answer. Problem solved.