file system

elias

file system

Post by elias »

I have an idea for a file system, and I got it from a Unix book I read describing its file system, but I want to know if it's good. The disk is divided up into sectors, each consisting of 512 bytes, so the minimum file size is 512 bytes. If a file is bigger than that, it can use a double indirect pointer, which contains the addresses of the different sectors of the file. Since the double indirect block is 512 bytes and can only hold so many addresses, if a file needs more than that, you can have a triple indirect block, which holds the addresses of double indirect blocks. This system wastes very little disk space. Is this ext fs? I've never found documentation on ext so I don't know, but if Unix used this, Linux could too. Are there any changes I should make?
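
Roughly what I mean, as a C sketch (the field names and counts are made up to illustrate the idea, not taken from the book):

[code]
/* Sketch of an on-disk inode with classic Unix-style block pointers.
 * All field names and counts here are illustrative, not from any real FS. */
#include <stdint.h>

#define BLOCK_SIZE     512
#define PTRS_PER_BLOCK (BLOCK_SIZE / sizeof(uint32_t))  /* 128 block numbers fit in one 512-byte block */

struct inode {
    uint32_t size;             /* file size in bytes                             */
    uint32_t direct[10];       /* first 10 data blocks, addressed directly       */
    uint32_t single_indirect;  /* block of data-block numbers    -> +128 blocks  */
    uint32_t double_indirect;  /* block of single-indirect nums  -> +128*128     */
    uint32_t triple_indirect;  /* block of double-indirect nums  -> +128*128*128 */
};
[/code]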
jrfritz

Re:file system

Post by jrfritz »

Well, the file info...if it contains strings...it'll probably be bigger than 512 bytes.
elias

Re:file system

Post by elias »

Then it'll take up the necessary number of blocks, plus one block for the double indirect block used to access those blocks.
Curufir

Re:file system

Post by Curufir »

Sounds very much like the Minix filesystem, which I assume is based on the traditional Unix filesystem. Operating Systems Design and Implementation gives a very good accounting of it.

Note that 512 bytes isn't the minimum size of a file, it's only the minimum amount of disk space that can be allocated to a file if you insist on using the sector size as the lowest allocation unit. Then again, changing that would need a whole new filesystem.
jrfritz

Re:file system

Post by jrfritz »

What if...a file system's files were expandable down to the smallest byte...no need for sector stuff...like this:

1st byte of file - some hex number showing that this is the start of a file.

Stuff just before the last byte - 0x<end of file hex>.
elias

Re:file system

Post by elias »

The problem with that is that when that file is deleted, you will have to keep track of that one byte of space, and it will quickly become a mess. I also forgot to mention that in my FS there is a super block telling which blocks are free in a certain range. Does anyone see any flaws in my idea? Can someone explain ext, ext2 and ext3 to me? For some reason I think this is the same thing.
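
For the super block bookkeeping, what I have in mind is something like a bitmap of free blocks (again just a sketch, the names are invented):

[code]
/* Sketch of a free-block bitmap: one bit per 512-byte block,
 * 1 = in use, 0 = free. Purely illustrative. */
#include <stdint.h>

static int block_is_free(const uint8_t *bitmap, uint32_t block)
{
    return (bitmap[block / 8] & (1u << (block % 8))) == 0;
}

static void mark_block_used(uint8_t *bitmap, uint32_t block)
{
    bitmap[block / 8] |= (1u << (block % 8));
}

/* Find the first free block in [0, nblocks), or return -1 if the disk is full. */
static long find_free_block(const uint8_t *bitmap, uint32_t nblocks)
{
    for (uint32_t b = 0; b < nblocks; b++)
        if (block_is_free(bitmap, b))
            return (long)b;
    return -1;
}
[/code]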
jrfritz

Re:file system

Post by jrfritz »

I don't see how it will become a mess...regular FSes keep track of the sectors.

I will put the file system I described into FritzOS.
Tim

Re:file system

Post by Tim »

elias is right. The trouble with file systems is that files get fragmented. Imagine you've got three files, each 1 byte in length. In jrfritz' system, they will each come one after another. Now imagine what will happen if you expand the middle file to two bytes -- where will the second byte go? You'll have to account for every byte in the file. Accounting for every byte is far too inefficient. Most of the time, accounting for every sector is still too inefficient, which is why ext2 etc. have the concept of blocks, and FAT and NTFS have clusters.
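
To make the accounting point concrete: with block-sized units the file system only records which block each part of a file lives in, and a byte offset maps to a block with simple arithmetic (just a sketch with an assumed 512-byte block size, not how any particular FS lays things out):

[code]
/* Sketch: with fixed-size blocks, per-byte accounting is unnecessary.
 * A byte offset inside a file maps to a logical block index plus an
 * offset within that block. The block size is an example value. */
#include <stdint.h>

#define BLOCK_SIZE 512

static uint32_t offset_to_block(uint32_t file_offset)
{
    return file_offset / BLOCK_SIZE;   /* which block of the file */
}

static uint32_t offset_in_block(uint32_t file_offset)
{
    return file_offset % BLOCK_SIZE;   /* where inside that block */
}
[/code]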
jrfritz

Re:file system

Post by jrfritz »

OK then...I'll think about that.

But look at this idea:

An idea for a non-fragmenting filesystem:

If the file ahead of the one that needs to grow is not too large, the FS automatically moves it to another part of the disk, giving the growing file room. If the file ahead is too large, the growing file is moved instead. And if the file that is growing in size is itself too big to move, then the system fragments.

How about that?

Oh, and by the way, my name is Tom, just changed my screenname.
Curufir

Re:file system

Post by Curufir »

It's just not that big a problem. You only ever lose an average of half an allocation unit from the last allocated cluster in the file.

Using NTFS as an example, this means an average waste of 2048 bytes per file, since it allocates in 4 KB clusters by default AFAIK (Win2k/XP/NT let you alter this). Considering the advantages of allocating in big units (faster reads and writes, easier handling of fragmentation, etc.), this just isn't that big a loss when you consider that modern hard drives are usually over 10 GB in capacity and file sizes are rarely this small. E.g. 2 million files will waste 4 GB of hard disk space, but considering that this is an extremely high number of files, it's not going to affect the average user.
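
Spelled out, with the 4 KB default cluster size and the two-million-file example from above (just a quick back-of-the-envelope program):

[code]
/* Back-of-the-envelope slack-space calculation: on average each file
 * wastes half of its last cluster. Numbers match the example above. */
#include <stdio.h>

int main(void)
{
    const unsigned long cluster_size = 4096;             /* default NTFS cluster size */
    const unsigned long avg_waste    = cluster_size / 2; /* 2048 bytes wasted per file */
    const unsigned long files        = 2000000UL;        /* two million files */

    unsigned long long wasted = (unsigned long long)avg_waste * files;
    printf("average waste: %lu bytes/file\n", avg_waste);
    printf("total waste for %lu files: %llu bytes (~4 GB)\n", files, wasted);
    return 0;
}
[/code]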
jrfritz

Re:file system

Post by jrfritz »

What about my idea?
Tim

Re:file system

Post by Tim »

Sounds slow. What if you had 10 consecutive files, each 512 bytes in length, and you wanted to grow the first one? The first time you wrote a byte, the system would move the second file; then, after 512 bytes, it would move the third one, and so on.

I think a file system with a low-priority defragmenting thread would be a better way. Then again, fragmentation isn't a big problem, and development time would be better spent on writing a good space allocation algorithm in the first place.
jrfritz

Re:file system

Post by jrfritz »

:-\ I think I'll just add an automatic low-priority defragger...but I am looking into BeFS to see if I like it.
Pype.Clicker

Re:file system

Post by Pype.Clicker »

Note that fragmentation is mainly a nuisance if you put all your files on the disk with a "bottom-up" policy. It is much less of one if you scatter them more across the disk (i.e. you split your disk into chunks of N cylinders and, when a new file has to be created, you apply a sort of round-robin scheme over the chunks...).

HDD controllers (IIRC) can handle sectors that fall on the same cylinder very quickly, but have more trouble when a head displacement (a cylinder change) is involved, so there's no problem if your file is fragmented *within* a cylinder, provided that file.which_cylinder(offset) is a monotonically increasing function...
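
Something in the spirit of this sketch (chunk count and size are invented numbers, purely for illustration):

[code]
/* Sketch of a round-robin "chunk" (cylinder-group) allocator:
 * each newly created file starts in the next chunk, so files are
 * scattered across the disk instead of all piling up at the bottom.
 * Everything here is illustrative, not a real allocator. */
#include <stdint.h>

#define N_CHUNKS          64      /* disk split into 64 cylinder groups   */
#define BLOCKS_PER_CHUNK  16384   /* blocks in each group (made-up value) */

static uint32_t next_chunk = 0;   /* round-robin cursor */

/* Pick the chunk where a newly created file gets its first block. */
static uint32_t chunk_for_new_file(void)
{
    uint32_t c = next_chunk;
    next_chunk = (next_chunk + 1) % N_CHUNKS;
    return c;
}

/* First block number of a chunk; later blocks of the same file are
 * allocated inside the same chunk as long as it has room. */
static uint32_t chunk_first_block(uint32_t chunk)
{
    return chunk * BLOCKS_PER_CHUNK;
}
[/code]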
Tim

Re:file system

Post by Tim »

It's also interesting to note that files written to a FAT volume by NT end up less fragmented than those written by DOS, because NT can afford to be much more clever with its space allocation. So fragmentation doesn't depend so much on the file system in use as on the software that's allocating the space.

And remember that fragmentation is a fact of life for any file system running on a disk of finite size.