
Re: About LeanFS

Posted: Tue Mar 23, 2021 7:43 pm
by BenLunt
Ethin wrote:I'm confused about why you'd need perms, especially if the FS is abandoned, as it seems to be here. Why not take over the project entirely?
Hi,

Salvatore is the author of the original specs. I contributed quite a few ideas and enhancements to the current release of 0.6, so I guess I could be considered a co-author.
However, it is originally his project: he started it, and it has his name on it. Out of respect for that, I feel that I should have his permission before I release the next version.

With that said, I have been in fairly frequent communication with him in the past. Unfortunately, for no known reason, he has not responded to any of my recent requests.
I don't feel that it would be right for me to release the next version, with his name on it, co-author or not, without first getting his approval.

If a proper amount of time passes and I do not hear from him, I may change the new specification (0.7) so that it still gives him credit for the original design, but does not hold him or his name accountable in any way for anything that may or may not happen if someone uses the next version (0.7).

This only seems right to me, so I am going to give him a little while to respond, and then I will decide what to do if he does not.

Thank you,
Ben

Re: About LeanFS

Posted: Wed Mar 24, 2021 10:10 pm
by BenLunt
With Salvatore's permission, I have updated the LEAN specification to version 0.7.0rc0.

The overview can be found at: http://www.fysnet.net/leanfs/index.php
The specification can be found at: http://www.fysnet.net/leanfs/specification.php

This is a pre-release, request-for-comments release.

Structurally, except for two new entries in the Superblock and a member size change in the Indirect Block, nothing has changed as long as you are using 512-byte sectors. However, the specification now allows an arbitrarily sized block, the idea being that less bookkeeping access is needed when it is used on larger-sectored media; i.e., fewer Indirect Blocks are needed.

I welcome your comments, good and bad.

Once the rc process is over and the official 0.7 is released, I will update/enhance the utilities and will include some example images.

Thank you,
Ben

Re: About LeanFS

Posted: Thu Mar 25, 2021 1:58 am
by bzt
Hi Ben,

I've checked the spec, updated my code and the links to your site in my repo.

Notes:
  • I see you haven't added inodeSize to the inode; makes sense, since it's constant anyway.
  • You haven't changed the extents' behaviour; while it seems extremely odd that for empty files sectorCount and especially extentCount aren't zero, it makes sense to keep backward compatibility.
  • If I were you, I would fix those minor portability issues in mleanfs and lean_chk and I would add them to the downloads.php page as the number one examples. That would be a big advantage, having dependency-free, portable reference implementations (but you should update the wxWidgets version too to v0.7 if you can).
  • That's all, otherwise everything looks good!
Cheers,
bzt

Re: About LeanFS

Posted: Thu Mar 25, 2021 5:26 pm
by BenLunt
bzt wrote:I see you haven't added inodeSize to the inode; makes sense, since it's constant anyway.
Agreed. It was a quick thought that didn't last too long, though I forgot to remove it from the test suite you were using.
bzt wrote:You haven't changed the extents' behaviour; while it seems extremely odd that for empty files sectorCount and especially extentCount aren't zero, it makes sense to keep backward compatibility.
Without expecting an answer: how many zero-length files are you going to have? I thought about your earlier comment about extents only pointing to data, not to bookkeeping. My conclusion came back to simplicity. All extents should remain the same, whether there is data or not (a zero-length file).
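In other words, the invariant looks like this (a sketch; it assumes, as in 0.6, that the first extent covers the block holding the inode itself, which is why the counts never reach zero):

Code: Select all

/* Even a zero-length file keeps one extent, because that extent
   covers the block holding the inode itself:
     fileSize    = 0
     extentCount = 1    (the inode-holding extent)
     sectorCount = 1    (at least the inode's own sector)  */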
bzt wrote:If I were you, I would fix those minor portability issues in mleanfs and lean_chk and I would add them to the downloads.php page as the number one examples. That would be a big advantage, having dependency-free, portable reference implementations (but you should update the wxWidgets version too to v0.7 if you can).
Absolutely. After I verify a few other items and commit to the new version, I will most definitely update my utilities. I will leave the wxWidgets version to Salvo's discretion.

Thank you again,
Ben

Re: About LeanFS

Posted: Mon Oct 31, 2022 7:24 pm
by BenLunt
Hi guys,

I have been working on my implementation of this LeanFS and would like to comment on some older posts.
bzt wrote:LeanFS spec is pretty good, that's why I'm interested in it. But in my experience, the spec alone isn't enough to implement a read-write driver.
BenLunt wrote:So, why the LeanFS? There are no other utilities or driver code. I have to make everything myself. From scratch. It is the enjoyment of doing so.
I agree. The problem here is, there must be a way to validate whether your generated image (or partition) actually complies with the spec and validates as a correct LeanFS. Without one, you can't be sure that your driver is correct.
I have updated the Check function of my Ultimate utility to do a much more detailed check. It now checks Link Counts, phantom bitmap entries, indirects, etc., creating a report you can copy to the Windows clipboard.
bzt wrote:So my question is, if you're recommending this file system, do you have a simple, small, portable image creator for it? Because I can't even find one. (Proprietary, closed-source, or Windows-only are out of the question.) I'd be happy with the 0.6 version of mkfs.c too, even though it isn't a tool, just a toy (as it compiles dependency-free, is written in portable C, and is simple. Simple is good.)
As you are probably far ahead of this request by now, I will simply comment for those who might be reading for the first time. An older utility, with C source, is at https://github.com/fysnet/FYSOS/tree/ma ... ls/mleanfs. However, the new Windows-only utility (sorry, though it is open source) creates Lean images as well.
bzt wrote:Any working driver example would be nice too (a FUSE driver, GRUB module, Linux kernel module, Dokan driver, whatever).
This is something I agree would be desirable. I will have to work on this request.
bzt wrote:
BenLunt wrote:- An inode must not be less than (32 + size of the first bitmap). LBAs 0 through 32 are reserved for the boot code and the Superblock.
Are you sure? The specification doesn't say anything like that. It says

Code: Select all

The superblock must be stored in any one sector in the range from 1 to 32, inclusive.
and about the bitmapStart field:

Code: Select all

uint64_t bitmapStart	The address of the sector where the bitmap of the first band (that is, band 0) starts. This is usually the sector right after the superblock.
Now if the superblock can be at sector 1, and the bitmap starts at the sector right after the superblock, this means the lowest root inode could be at sector 3.
BenLunt wrote:The Bitmap Start LBA must not be less than 33. You have it as 2.
I haven't seen this written anywhere in the spec. It only says that the bitmap should start at the sector right after the superblock.
At the time I wrote this, I wasn't clear about this at all. However, as you commented, and now that I have done more work with it, you are correct. The Superblock must be within blocks 1 and 32, inclusive. That is a fact. However, the bitmap and root can, and usually do, follow it. Therefore, if the Superblock is at LBA 1 (which would be typical for an EFI partition), the bitmap and root could, and usually would, follow starting at block 2. So sorry for the confusion. It was my error.
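In code, the corrected rule reduces to something like this (a minimal sketch; sb_block and bitmap_start stand for the Superblock's LBA and the bitmapStart field):

Code: Select all

#include <stdint.h>

/* Placement per the spec, as clarified above: the Superblock lives in
   some block 1..32; the bitmap (and then the root) usually follow it. */
int placement_ok(uint64_t sb_block, uint64_t bitmap_start)
{
    if (sb_block < 1 || sb_block > 32)
        return 0;                    /* Superblock out of range */
    return bitmap_start > sb_block;  /* typically sb_block + 1  */
}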
bzt wrote:I've checked the spec, updated my code and the links to your site in my repo.
As always, your work is impressive. Congrats to you.
bzt wrote:Notes:
  • If I were you, I would fix those minor portability issues in mleanfs and lean_chk and I would add them to the downloads.php page as the number one examples. That would be a big advantage, having dependency-free, portable reference implementations (but you should update the wxWidgets version too to v0.7 if you can).
These two files have lagged behind considerably. I can't say that my Ultimate app is top-notch by any means, but since it does what I need it to, I haven't gone back to these two files. Something I will have to put on my (very large) todo list.

Anyway, with the new function, I am wondering if anyone, including yourself, might have an image file that includes a Lean partition that I may download and run my tests on. I have tried most things, but other minds think differently than I do, so if you have a test image, I would like to see it.

Thanks again. I am always impressed by the work that a lot of you have done. Some projects I see here are just amazing.

Ben
- https://www.fysnet.net/osdesign_book_series.htm

Re: About LeanFS

Posted: Sat Nov 05, 2022 7:04 pm
by thewrongchristian
I've been looking over the LeanFS spec, and while I'm mostly positive about it, and seriously considering implementing it as the native FS for my OS, I have a few reservations around the directory entries:
A directory entry is a variable length structure, containing a fixed 12-byte header and a variable length part containing the file name. Every directory entry must be aligned to a 16-byte boundary, thus the size of the whole structure must be a multiple of 16 bytes.
But
struct DirEntry
...
uint8_t recLen - This is the total size (thus including the fixed length header) of the directory entry, in 16-byte units. It must be at least 1. This field must be valid even for deleted or empty directory entries, as it must be used as a link to the next directory entry.
uint16_t nameLen - The length in bytes of the file name stored in the name field. It must be greater than zero.
With a single-byte length, we're limited to directory records of 4096 bytes, but the filename length limit uses a 16-bit field.
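For reference, here is the record layout as I read it in C (a sketch; the inode and type member names are my guesses from the spec's 12-byte fixed header):

Code: Select all

#include <stdint.h>

struct DirEntry {
    uint64_t inode;    /* the entry's inode block (assumed name)      */
    uint8_t  type;     /* file type; 0 = empty/deleted (assumed name) */
    uint8_t  recLen;   /* total record size in 16-byte units, >= 1    */
    uint16_t nameLen;  /* name length in bytes, > 0                   */
    uint8_t  name[];   /* name, padded out to a 16-byte boundary      */
};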

This seems a bit of a waste. Could these fields perhaps be split 12/12 bits instead, or the sizes even be reversed, making recLen a byte count? After all, is a 255-byte file name limit an onerous one?

I also see the 16-byte alignment as quite wasteful. Why not 8-byte?

Why must nameLen be greater than zero? An unused entry has no name, surely, so shouldn't it be zero?
A single directory entry may span across different blocks.
I think this is a mistake. I prefer the ext2 restriction of not crossing block boundaries, with each directory block being self-contained.

With a bigger recLen field, directory entries can claim all the space in a directory block, up to block sizes of 64K, even if it's not used for the file name. Directory blocks can then be initialised with a single, large, unused directory entry.

Creating a new entry then becomes a simple matter of finding an existing entry with sufficient free space, and splitting it between the existing and the new entry.

Removing an entry involves simply merging the space of the entry being deleted into the recLen of the previous entry.

In both cases, there is no need to separately track the tail of the directory. Directories will be allocated and sized block by block.
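A sketch of the insert path under that scheme (recLenBytes is the hypothetical byte-count field proposed above, replacing the spec's 8-bit recLen-in-units; alignment rounding omitted):

Code: Select all

#include <stdint.h>

/* The modified entry proposed above, with a byte-count record length. */
struct DirEntryX {
    uint64_t inode;
    uint8_t  type;
    uint8_t  nameLen;      /* now one byte, max 255      */
    uint16_t recLenBytes;  /* total record size in bytes */
    uint8_t  name[];
};

/* Carve a new entry out of an existing record's slack space.
   'used' = bytes rec actually needs; 'need' = bytes the new entry needs. */
struct DirEntryX *split_insert(struct DirEntryX *rec,
                               uint16_t used, uint16_t need)
{
    if ((uint16_t)(rec->recLenBytes - used) < need)
        return NULL;                             /* not enough slack here     */
    struct DirEntryX *e = (struct DirEntryX *)((uint8_t *)rec + used);
    e->recLenBytes   = rec->recLenBytes - used;  /* new entry takes the slack */
    rec->recLenBytes = used;                     /* shrink the old record     */
    return e;
}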

All in all, though, I like the look of LeanFS.

Using separate blocks for inodes is something I've looked at in the past, with a view to using tail packing to further reduce the space usage of small files, but extended attributes are also a noble use of such space. Perhaps a file tail could be an extended attribute itself; best of all worlds.

One problem with inodes as blocks is the loss of indirection, making the block location fixed once set in the directory entry, and mandating the use of in-place updates (perhaps mitigated by a journal).

The other problem I foresee is inodes can no longer be found other than via directory entries. So the loss of a directory block or entry to corruption makes the file contents lost as well, with little hope of fsck finding and linking it back into "/lost+found".

The use of extents is also a win, much more compact (and simpler IMO) than direct and indirect block pointers.

Re: About LeanFS

Posted: Sun Nov 06, 2022 10:42 am
by BenLunt
thewrongchristian wrote:I've been looking over the LeanFS spec, and while I'm mostly positive about it, and seriously considering implementing it as the native FS for my OS, I have a few reservations around the directory entries:
Absolutely, and thank you for the comments.
thewrongchristian wrote:With a single-byte length, we're limited to directory records of 4096 bytes, but the filename length limit uses a 16-bit field.

This seems a bit of a waste. Could these fields perhaps be split 12/12 bits instead, or the sizes even be reversed, making recLen a byte count? After all, is a 255-byte file name limit an onerous one?

I also see the 16-byte alignment as quite wasteful. Why not 8-byte?

Why must nameLen be greater than zero? An unused entry has no name, surely, so shouldn't it be zero?
The main goal is simplicity. With 16-byte alignment, it is a simple task to "jump" to the next entry, and each entry is paragraph-aligned. A later comment will explain this a little better.
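That is, walking a directory is a one-liner (a sketch, using the recLen field quoted above and the record layout sketched earlier):

Code: Select all

/* Advance to the next directory record; recLen is in 16-byte units. */
struct DirEntry *next_entry(struct DirEntry *cur)
{
    return (struct DirEntry *)((uint8_t *)cur + cur->recLen * 16);
}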

nameLen must be greater than zero when the entry is in use. If the entry's FileType field is zero, the now-unused parts of the entry are undefined. However, if a driver is concerned about undeleting an entry, the nameLen field becomes relevant in order to undelete the file. Undelete capabilities, though, are not specified within the specification and are driver-specific. Therefore, in theory, all entries, used and unused, will have a filename and need a length for that name. See a later comment for why this is true.
thewrongchristian wrote:
A single directory entry may span across different blocks.
I think this is a mistake. I prefer the ext2 restriction of not crossing block boundaries, with each directory block being self contained.
With a bigger recLen field, directory entries can claim all the space in a directory block, up to block sizes of 64K, even if it's not used for the file name. Directory blocks can then be initialised with a single, large, unused directory entry.

Creating a new entry then becomes a simple matter of finding an existing entry with sufficient free space, and splitting it between the existing and the new entry.

Removing an entry involves simply merging the space of the entry being deleted into the recLen of the previous entry.

In both cases, there is no need to separately track the tail of the directory. Directories will be allocated and sized block by block.
In theory, there should not be any records after the last used record. A directory is simply a file, nothing more.

For example, if there are only '.', '..', and a single file within a directory, the directory's file will only have a length of 16 + 16 + 12 + the length of the filename + padding to a paragraph boundary. The fileSize field in the Inode indicates this length. Only when adding another file to this directory will the driver add another entry, in turn increasing the size of the file. Therefore, in theory, there will be no empty entries. When a file is deleted, a driver may consolidate the used entries, in turn removing the now-unused entry; in theory, there is never an unused entry.
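As a sketch, the size of one record in the arithmetic above works out to:

Code: Select all

#include <stdint.h>

/* One directory record: 12-byte header + name, padded to a paragraph
   (16-byte) boundary; recLen stores the count of 16-byte units.
   (Assumes the name is short enough for the result to fit in 8 bits.) */
static uint8_t rec_units(uint16_t nameLen)
{
    return (uint8_t)((12u + nameLen + 15u) / 16u);
}
/* '.' and '..' are 1 unit (16 bytes) each; a 20-byte name needs 2. */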

As for entries crossing a block boundary: again, a directory is simply a file, and a file should not know about block boundaries.
thewrongchristian wrote:All in all, though, I like the look of LeanFS.
Using separate blocks for inodes is something I've looked at in the past, with a view to using tail packing to further reduce space usage of small files, but extended attributes is also a noble use of such space. Perhaps a file tail could be an extended attribute itself, best of all worlds.

One problem of inodes as blocks is the loss of the indirection, making the block location fixed once set in the directory entry, and mandating the use of in-place updates (perhaps mitigated by a journal.)

The other problem I foresee is inodes can no longer be found other than via directory entries. So the loss of a directory block or entry to corruption makes the file contents lost as well, with little hope of fsck finding and linking it back into "/lost+found".

The use of extents is also a win, much more compact (and simpler IMO) than direct and indirect block pointers.
Again, the goal is simplicity. A directory is simply a file, made up of file records, with hopefully, though not mandatorily, no empty records, and especially no trailing empty records. With the capability of preallocating extra blocks when the file (directory) is created, in theory no allocation is needed to add a new record to the file, which makes for a quick and simple task. The driver only needs to allocate a new extent when the current preallocated extents are consumed, again preallocating extra blocks. See the Superblock's preallocCount entry and the Inode's iaPrealloc attribute.

With a block size of 4096 and a preallocCount of 1 (allocate 2 blocks when creating the directory), an average filename length of 20, and the unused tail of the inode's block used for file contents, this allows approximately 250 directory entries to be added before a new extent is needed. Since there is no Inode in the first block of this new allocation, the next preallocation will allow 256 entries to be added before preallocation is again needed. Simply increasing preallocCount to 3 (4 additional blocks) doubles the count of entries allowed until allocation is needed.
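Rough numbers behind that estimate (assuming the 176-byte inode and 32-byte average records, i.e. a 12-byte header plus a ~20-byte name, padded):

Code: Select all

/* first block:  (4096 - 176) / 32 = 122 entries (inode takes 176 bytes)
   second block:  4096 / 32        = 128 entries
   preallocCount = 1  ->  ~250 entries before a new extent is needed   */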

Therefore, in theory, using preallocation, a file (in this case a directory) allocates more space than needed at file creation. When a file is added to the directory, only the Inode's checksum, fileSize, and time fields are modified, along with the relevant extent(s). There is no need to access the Superblock, the bitmap, or any other part of the volume outside the Inode and its already-allocated extents.

So, to (hopefully) answer your questions: a directory should have no empty entries, though this is not mandatory. Allocating large unused entries and splitting them in two when adding an entry is perfectly allowed. However, if no empty entries are included, it is a simple task to append to the end of the directory: you already know the offset within the file (fileSize), you should already have preallocated extents (it is a very simple task to find out whether you do or not), and you simply append an entry to the file. With the example above, you will only need to allocate more space on every 256th entry added, a very small percentage. A block size of 512 and a preallocCount of 7 (8 total blocks) will need an allocation on every 128th entry added; a similarly small percentage.
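The append path then reduces to something like this (a sketch; ensure_extents and write_at are hypothetical driver helpers, not spec functions, and the Inode struct is trimmed to the one field used here):

Code: Select all

#include <stdint.h>

struct Inode { uint64_t fileSize; /* ...checksum, times, extents... */ };

int  ensure_extents(struct Inode *dir, uint64_t bytes);  /* hypothetical */
void write_at(struct Inode *dir, uint64_t off,
              const void *buf, uint32_t len);            /* hypothetical */

/* Append one record to a directory file. */
int dir_append(struct Inode *dir, const void *rec, uint32_t recBytes)
{
    uint64_t off = dir->fileSize;              /* append at end of file */
    if (!ensure_extents(dir, off + recBytes))  /* usually preallocated  */
        return -1;
    write_at(dir, off, rec, recBytes);         /* write the new entry   */
    dir->fileSize = off + recBytes;            /* plus checksum/times   */
    return 0;
}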

As for the "/lost+found" aspect, having a list of Inodes, this was once discussed between Salvo and I. However, we came to the conclusion that if added, we would be simply reinventing Ext2. Not what we were intending to do. We wish to keep simplicity within the filesystem.

I do appreciate the comments and don't hesitate to continue. If you have more questions while implementing this filesystem, feel free to post.

Thank you,
Ben

Re: About LeanFS

Posted: Sun Nov 06, 2022 11:37 am
by thewrongchristian
BenLunt wrote: I do appreciate the comments and don't hesitate to continue. If you have more questions while implementing this filesystem, feel free to post.
Actually, I have a question on the space after the inode in the first block:
Actual file data starts right after the inode structure or, if the iaInlineExtAttr attribute is set, at the beginning of the subsequent data block (perhaps in the same extent).
So, if iaInlineExtAttr is not set, the file data follows immediately after the inode. This is good for small files that can fit in a single block alongside the inode.

But what about files that don't fit in the block with the inode? Does this space contain the start of the file data, or the tail of the file data? The spec implies the former, which would make "block"-based navigation of the file substantially sub-optimal.

Consider: OSes typically cache file contents in the VM layer. This caching is done by VM pages, typically 4096 bytes long. For a file on LeanFS, the first page of a file takes 3920 bytes from the inode block, then 176 bytes from the second block. Every page-aligned file access will require access to at least two LeanFS blocks (assuming 4K blocks).
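Concretely, the skew looks like this (a sketch; data_block_addr() is a hypothetical extent lookup returning the byte address of the file's nth data block):

Code: Select all

#include <stdint.h>

uint64_t data_block_addr(uint64_t n);  /* hypothetical extent lookup */

/* File offset -> device offset when the head is stored after the
   inode (iaInlineExtAttr clear), 4096-byte blocks, 176-byte inode. */
uint64_t file_to_dev(uint64_t off, uint64_t inode_block_addr)
{
    if (off < 4096 - 176)
        return inode_block_addr + 176 + off;  /* inside the inode block */
    off -= 4096 - 176;                        /* skew of 3920 bytes     */
    return data_block_addr(off / 4096) + off % 4096;
}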

Some of this will be mitigated by read-ahead from an I/O point of view, but all page-based access of such a file will require copying data to/from multiple LeanFS block buffers to deal with this skew, instead of using the page memory directly as the buffer.

Is this what is intended? Or is the data after the inode intended to be the file's tail, so that once the file exceeds 3920 bytes, that data migrates to the next block, with the 3920 bytes in the inode block then holding the new tail as the file grows beyond the last data block?

Re: About LeanFS

Posted: Sun Nov 06, 2022 7:01 pm
by BenLunt
thewrongchristian wrote:if iaInlineExtAttr is not set, the file data follows immediately after the inode. This is good for small files that can fit in a single block alongside the inode.

But what about files that don't fit in the block with the inode? Does this space contain the start of the file data, or the tail of the file data? The spec implies the former, which would make "block"-based navigation of the file substantially sub-optimal.
Your first statement above is absolutely correct. Your second question is (mostly) answered by the first as well.

If iaInlineExtAttr is clear, no matter the file length, the head of the file immediately follows the Inode. If iaInlineExtAttr is set, the file data starts at the next block defined by the extents.

This feature is designed mainly for small files. With a 4k block, you can fit almost any average small file within the first block. However, for any file larger than ~4k, the ideal is to start with the next block defined in the extents, leaving the room after the inode for Extended Attributes, or simply vacant if none are given. Note that when iaInlineExtAttr is set, there must be at least one extended attribute: an 'empty' attribute occupying the remaining space.

This feature is per-Inode and can be used either way on any (regular file/directory) Inode, independently of any other. Therefore, if the driver knows the size of the file at creation time, it can set or clear the flag accordingly. If at any time the file exceeds the space after the Inode, the driver can set the iaInlineExtAttr flag and move all of the file data to the next block in the extent(s), and vice versa if desired.

If the iaInlineExtAttr flag is clear and the Fork field is zero, there are no Extended Attributes, not even the 'empty' one. Also, a Fork must not have Extended Attributes (iaInlineExtAttr and Fork must be zero).
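In driver terms (a minimal sketch; blockSize, inodeBlock, and nextDataBlock are placeholders, and 176 is the size of the Inode structure):

Code: Select all

#include <stdint.h>

/* Byte address where the file data begins, per the rules above. */
uint64_t data_start(int iaInlineExtAttr, uint64_t blockSize,
                    uint64_t inodeBlock, uint64_t nextDataBlock)
{
    if (iaInlineExtAttr)
        return nextDataBlock * blockSize;  /* next block in the extents */
    return inodeBlock * blockSize + 176;   /* right after the Inode     */
}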

Ben

Re: About LeanFS

Posted: Mon Nov 07, 2022 12:40 pm
by thewrongchristian
BenLunt wrote:
thewrongchristian wrote:if iaInlineExtAttr is not set, the file data follows immediately after the inode. This is good for small files that can fit in a single block alongside the inode.

But what about files that don't fit in the block with the inode? Does this space contain the start of the file data, or the tail of the file data? The spec implies the former, which would make "block"-based navigation of the file substantially sub-optimal.
Your first statement above is absolutely correct. Your second question is (mostly) answered by the first as well.

If iaInlineExtAttr is clear, no matter the file length, the head of the file immediately follows the Inode. If iaInlineExtAttr is set, the file data starts at the next block defined by the extents.

This feature is designed mainly for small files. With a 4k block, you can fit almost any average small file within the first block. However, for any file larger than ~4k, the ideal is to start with the next block defined in the extents, leaving the room after the inode for Extended Attributes, or simply vacant if none are given. Note that when iaInlineExtAttr is set, there must be at least one extended attribute: an 'empty' attribute occupying the remaining space.

This feature is per-Inode and can be used either way on any (regular file/directory) Inode, independently of any other. Therefore, if the driver knows the size of the file at creation time, it can set or clear the flag accordingly. If at any time the file exceeds the space after the Inode, the driver can set the iaInlineExtAttr flag and move all of the file data to the next block in the extent(s), and vice versa if desired.

If the iaInlineExtAttr flag is clear and the Fork field is zero, there are no Extended Attributes, not even the 'empty' one. Also, a Fork must not have Extended Attributes (iaInlineExtAttr and Fork must be zero).
Great. So my implementation can just enable extended attributes and know that my file blocks will be aligned with data blocks.

So that brings me back to (from my original reply):
Using separate blocks for inodes is something I've looked at in the past, with a view to using tail packing to further reduce the space usage of small files, but extended attributes are also a noble use of such space. Perhaps a file tail could be an extended attribute itself; best of all worlds.
So it sounds like this is something that might be worth doing, but doing it myself would obviously tie it to my implementation. Perhaps there could be some sort of capability flags listed in the superblock to indicate optional capabilities an implementation might provide (a tail-packing extended attribute being an example)?

Of course, this might also go against the simplicity of the FS, but I'd counter that the current idea of having the start of the data in the inode block is itself contrary to that simplicity. I think it'd be simpler to discard the extra data space in the inode block at the point the data spills into another block, and potentially waste that space as a result.

For very small files that fit in the inode, we won't be wasting the space.

For small files that don't fit into the inode, we'll be wasting the spare inode space, but in a diminishing proportion as the file gets bigger.

For big files, the wasted space in the inode is noise.

But going back to storing the file tail in the inode block (as opposed to the file head): you get the double-whammy benefit that appending to the file in small amounts (as might be done with a slowly growing log file) writes not only the new data but also the inode, in a single inode-block update (where the tail fits in the remaining space of the inode block).

So tail packing overall sounds like a better idea than storing the head of the file in the inode block, and is as optimal as the current solution, with major benefits.

To start with, I'll ignore both, and just have empty extended attributes (for files I write). But I'd very much urge this change to the use of the spare space in the inode block.

Re: About LeanFS

Posted: Mon Nov 07, 2022 4:06 pm
by BenLunt
thewrongchristian wrote:...storing the file tail in the inode block (as opposed to the file head), you get the double-whammy benefit that appending to the file in small amounts (as might be done with a slowly growing log file) writes not only the new data but also the inode, in a single inode-block update (where the tail fits in the remaining space of the inode block).
Which is a very appealing idea: appending to the file is all done with a single block read and a write to the same block, appending the file as well as updating the Inode in one write.
thewrongchristian wrote:So tail packing overall sounds like a better idea than storing the head of the file in the inode block, and is as optimal as the current solution, with major benefits.
The only reason to store the head of the file in the same block as the inode is if the whole file will fit in the space after the inode. As soon as the file is larger than that space, the file should be started on the next block in the extents, freeing the space after the inode. I can see no other benefit to having the head of the file just after the inode when the file is larger than that space. It is more efficient to have the Extended Attributes after the inode, since the alternative would create a new Inode anyway (as a fork).
thewrongchristian wrote:To start with, I'll ignore both, and just have empty extended attributes (for files I write). But I'd very much urge this change to the use of the spare space in the inode block.
It is duly noted and will be thought upon, due to its appeal. However, I can see drawbacks to it. For example:

Let's say the file has allocated 16 consecutive blocks of space on the media, but only occupies 10 at the moment. Isn't it just as fast, or faster, to read all 16 blocks (which include the inode), append the file, update the inode, then write all 16 blocks back? As for the disk access, this is extremely fast. As for the offset calculation:

Code: Select all

base + fileSize = offset to append data
If the tail were in the first block just after the inode, a few calculations would have to be made, and possibly some data would have to be moved to another block to make room for the new tail.

Don't get me wrong: your idea is appealing and I will give it some thought. A simple flag in the inode, plus a few qualifications when that flag is set, and it could easily be implemented.

However, I have targeted this file system at 4k blocks (you have given a fine example earlier of why this size is desirable). Most small files can fit within this first 4k block: icons, small .txt files, .bat files, etc. However, the appeal factor is consecutive blocks within an extent, (in theory) having to read and write no more than two extents for any write access to a file, with these extents being very close to each other on the physical media. Even better if all blocks were in a single extent. That makes for extremely fast media access.

Anyway, again, your comments are greatly appreciated. Once you have a working driver, I would love to see an image file with (many) files in (many) directories so that I can test my work as well.

Thank you,
Ben

Re: About LeanFS

Posted: Mon Nov 07, 2022 8:06 pm
by BenLunt
For your information, I have just updated the Lean specification to version 0.8.0, adding undelete capabilities, changing the "Hidden" attribute function, and adding a bit more clarification to existing information.

A version 0.7 driver should require very minimal modification to be version 0.8 compliant.

Thanks again for your comments; I look forward to any more that you or anyone else might have.

Ben

Re: About LeanFS

Posted: Thu Nov 10, 2022 4:58 pm
by thewrongchristian
BenLunt wrote:For your information, I have just updated the Lean specification to version 0.8.0, adding undelete capabilities, changing the "Hidden" attribute function, and adding a bit more clarification to existing information.

A version 0.7 driver should require very minimal modification to be version 0.8 compliant.

Thanks again for your comments; I look forward to any more that you or anyone else might have.
Cool, thanks.

My plan is to implement this as a FUSE driver:
  • As that seems to be something that's missing.
  • I plan to make FUSE an option on my OS, perhaps even the main FS option.
  • So I need to learn how to do it.
I haven't decided on a license for any of my stuff yet, but I notice that libfuse itself is GPLv2, while your license doesn't look GPL-based.

I'm talking about the license at: https://github.com/fysnet/FYSOS/blob/master/license.txt

So I don't think I'll be using any of your code, but if I do, is your license compatible with GPLv2? My inner lawyer is weak.

But it all might take a bit of time, as I'll also be porting my kernel utility library to user land so I can use the same facilities in both.

Re: About LeanFS

Posted: Fri Nov 11, 2022 12:59 am
by BenLunt
Hi,

I don't mind if you use my code. That "license" is just to keep me safe in case something happens. Just make sure that anyone reading your code can find the original specs and maybe give me credit somewhere.

Thanks again,
Ben

Re: About LeanFS

Posted: Tue Dec 06, 2022 4:44 pm
by BenLunt
As an announcement and for your information, I have released version 1.0.0-rc0 of the Lean File System specification.

It is in "request for comments" stage. This is a major release and I don't plan on changing it much after this, unless someone finds a flaw in my implementation :-). Hopefully the way I have written it, function can be added without breaking existing implementations (those starting with version 1.0).

Thanks again for all the comments I have received. My Ultimate app is version 1.0.0 compliant and should be uploaded to GitHub within a day or so.

Thanks,
Ben