Algorithims

Questions about which tools to use, bugs, the best way to implement a function, etc. should go here. Don't forget to see if your question is answered in the wiki first! When in doubt, post here.
elderK

Algorithims

Post by elderK »

Hey all, I'm just curious.

What kind of structure are you guys using to track allocated/free blocks?

Lists?
(singly linked, doubly linked?)

Trees?
(Heap?)

Just wondering :)

Also, what's the granularity of your allocations?
Like, the minimum block size? Is it a byte? Half a kilobyte?

~Zeii.
Pype.Clicker

Re:Algorithims

Post by Pype.Clicker »

<rant> i'm always amazed by how much the word "algorithm" can be tortured by keyboards </rant>

Well ... you _really_ want to know ... well.

1) I have a bitmap-based allocator for physical pages. At least for a part of the kernel; other parts (drivers, mainly, for ISA buffers iirc) use a bare "watermark" allocator. It has been somewhat "tweaked" so that we do not need to sweep the whole table every time a new page is requested: instead it keeps the lowest and highest free page indices and the position of the last page found (optimistically assuming that the next free page will be found nearby, and that pages found busy lately will still be busy in the near future). 4KB granularity.
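
For illustration, a minimal sketch of that kind of hinted bitmap allocator (all names and sizes made up, not my actual code) might look like:

[code]
/* Minimal sketch of a bitmap frame allocator with a "last found" hint.
 * All names and sizes are made up for illustration.                     */
#include <stdint.h>
#include <stddef.h>

#define FRAME_COUNT   32768u                 /* 128 MB of 4 KB frames    */
#define BITS_PER_WORD 32u

static uint32_t frame_bitmap[FRAME_COUNT / BITS_PER_WORD];  /* 1 = used  */
static size_t   last_hint;                   /* where the last search ended */

static int frame_is_used(size_t i)
{
    return (frame_bitmap[i / BITS_PER_WORD] >> (i % BITS_PER_WORD)) & 1u;
}

static void frame_set_used(size_t i)
{
    frame_bitmap[i / BITS_PER_WORD] |= 1u << (i % BITS_PER_WORD);
}

/* Returns a free frame number, or (size_t)-1 if physical memory is full.
 * The search starts at the hint, so we do not rescan frames that were
 * found busy a moment ago.                                               */
size_t alloc_frame(void)
{
    for (size_t n = 0; n < FRAME_COUNT; n++) {
        size_t i = (last_hint + n) % FRAME_COUNT;
        if (!frame_is_used(i)) {
            frame_set_used(i);
            last_hint = i + 1;    /* optimistic: next free frame is nearby */
            return i;
        }
    }
    return (size_t)-1;
}

void free_frame(size_t i)
{
    frame_bitmap[i / BITS_PER_WORD] &= ~(1u << (i % BITS_PER_WORD));
    if (i < last_hint)
        last_hint = i;            /* a freed frame pulls the hint back     */
}
[/code]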

2) I have a sorted-linked-list allocator for "kmalloc", with 16-byte granularity. It uses so-called "boxes" to slice the list and quickly identify which 64KB region still has free memory and what the largest list element starting in that "box" is. That way you can "jump" into the middle of the list, skipping small cells when you're looking for a place to keep large stuff.
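
A rough sketch of the "box" index (again, names made up for illustration) might be:

[code]
/* Sketch of the "box" index over a sorted free list (all names invented).
 * Each box summarizes one 64 KB slice of the heap, so kmalloc can skip
 * slices whose largest free chunk is too small for the request.          */
#include <stddef.h>

struct free_chunk {
    size_t             size;        /* bytes in this free chunk            */
    struct free_chunk *next;        /* free list kept sorted by address    */
};

struct box {
    struct free_chunk *first;       /* first free chunk in this 64 KB slice */
    size_t             largest;     /* size of the largest chunk in slice   */
};

extern struct box box_table[];      /* one entry per 64 KB of heap          */
extern size_t     box_count;

/* Pick a starting point for the free-list walk: the first box that could
 * possibly hold 'size' bytes.  The walk from there is ordinary first-fit. */
struct free_chunk *find_start(size_t size)
{
    for (size_t b = 0; b < box_count; b++)
        if (box_table[b].largest >= size && box_table[b].first)
            return box_table[b].first;
    return NULL;                     /* no slice can satisfy this request   */
}
[/code]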

3) I have a double-AVL tree for "vmalloc", that is, for bookkeeping the use of virtual memory in the "user process" area. I say "double AVL" because the same elements are organized in two distinct trees, one sorting them by address and the other by size. Pretty handy when you're looking for a place to load that 7-MB shared library, and equally handy when you're trying to find out who the hell is responsible for that page fault at 0xcafebabe.
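
Sketched in C, the double-indexed descriptor could look roughly like this (field names invented, AVL rebalancing omitted):

[code]
/* Sketch of a region descriptor that lives in two AVL trees at once
 * (type and field names invented, rebalancing code omitted).           */
#include <stdint.h>
#include <stddef.h>

struct avl_node {
    struct avl_node *left, *right;
    int              balance;
};

struct vm_region {
    uintptr_t       base;        /* key of the "by address" tree         */
    size_t          size;        /* key of the "by size" tree            */
    struct avl_node by_addr;     /* links into the address-sorted tree   */
    struct avl_node by_size;     /* links into the size-sorted tree      */
};

/* A page fault at some address is resolved by a lookup in the by_addr
 * tree; "find room for a 7 MB library" is a lookup in the by_size tree.
 * Every insert/remove has to update both trees so they stay in sync.    */
[/code]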

4) The above "regions allocator" rather sucks when you need to allocate only a few pages ... be it for temporary message/buffer mapping or whatever. For that purpose I have something I could describe as "in-place buddy lists" embedded in the page tables. That is, when a page is "free", its page table entry indicates the next free element of the same "size class", its actual size, etc. It cannot handle areas spanning more than a single page table, though, so you're limited to 4MB for mapping temporary stuff.
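
Roughly, a not-present page table entry can be reused as a free-list node, since the CPU ignores everything except the Present bit in such entries. A sketch (the bit layout here is invented for illustration):

[code]
/* Sketch of reusing not-present page table entries as free-list nodes
 * (bit layout invented for illustration).  When the Present bit is 0,
 * the CPU ignores the remaining bits of the entry, so they are free to
 * hold allocator bookkeeping.                                            */
#include <stdint.h>

#define PTE_PRESENT      0x001u
#define PTE_NEXT_SHIFT   1          /* bits 1..10: index of the next free   */
#define PTE_NEXT_MASK    0x3FFu     /*             run in this page table   */
#define PTE_CLASS_SHIFT  11         /* bits 11..15: log2 of the run length  */
#define PTE_CLASS_MASK   0x1Fu

static inline uint32_t pte_make_free(uint32_t next, uint32_t size_class)
{
    return ((next & PTE_NEXT_MASK) << PTE_NEXT_SHIFT) |
           ((size_class & PTE_CLASS_MASK) << PTE_CLASS_SHIFT);
}

static inline uint32_t pte_next_free(uint32_t pte)
{
    return (pte >> PTE_NEXT_SHIFT) & PTE_NEXT_MASK;
}

static inline uint32_t pte_size_class(uint32_t pte)
{
    return (pte >> PTE_CLASS_SHIFT) & PTE_CLASS_MASK;
}

/* One page table covers 1024 entries * 4 KB = 4 MB, which is where the
 * 4 MB limit on these temporary mappings comes from.                     */
[/code]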

I think that's all ... oh, no, wait

5) There's the "cell allocator", which takes 4KB blocks (or larger), slices them into smaller items (as small as 4 bytes iirc) and can allocate/free them quicker than kmalloc. It looks somewhat like the Linux slab allocator, but not quite. It has tunable strategies to keep 'blocks' that have more free cells at the front of the list, so that you maximize the probability of finding a free cell while minimizing the probability that you'll have to reorder the list. It also allocates cells _without headers/footers_; that is, the memory consumed by your two-pointer cells is exactly 8 bytes, while kmalloc puts a 4-letter tag, the size, the current weather or whatever else in a 'header' before the 'useful' data. Still, it is capable of telling whether the address you try to free does indeed belong to the cell provider or not.
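
A sketch of such a headerless cell block (names invented, list reordering omitted) might be:

[code]
/* Sketch of a headerless "cell" block (names invented): free cells carry
 * the free-list link inside themselves, so a used cell costs exactly its
 * own size and the only header is the per-block one.                      */
#include <stddef.h>

struct cell_block {
    struct cell_block *next;        /* blocks with free cells stay in front */
    size_t             cell_size;   /* e.g. 8 bytes for a two-pointer cell  */
    size_t             free_count;
    void              *free_list;   /* first free cell; each free cell's
                                       first word points to the next one    */
    /* the cells themselves fill the rest of the 4 KB block                 */
};

void *cell_alloc(struct cell_block *b)
{
    if (!b->free_list)
        return NULL;
    void *cell   = b->free_list;
    b->free_list = *(void **)cell;  /* pop the in-cell free list            */
    b->free_count--;
    return cell;
}

void cell_free(struct cell_block *b, void *cell)
{
    *(void **)cell = b->free_list;  /* push back onto the in-cell list      */
    b->free_list   = cell;
    b->free_count++;
}

/* Whether an address belongs to the cell provider at all can be checked by
 * rounding it down to its 4 KB block and verifying that that block is one
 * of the allocator's registered blocks.                                     */
[/code]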

Did I mention it was supposed to be a microkernel?

::)
paulbarker

Re:Algorithims

Post by paulbarker »

For allocating n contiguous units from some resource (e.g. malloc() or allocating DMA pages for old-style DMA), doubly-linked lists of either single units (so each list entry describes one unit) or ranges of units (described by start and limit, or start and end) work well, as do buddy allocators when most allocations will be a power of 2.
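
For example, a minimal sketch of first-fit over a doubly-linked list of ranges (names are made up, and coalescing of adjacent ranges on free is omitted) could be:

[code]
/* Sketch of a doubly-linked list of free ranges with first-fit allocation.
 * Names are invented; a "unit" could be a page, disk block, inode, etc.    */
#include <stddef.h>

struct range {
    size_t        start;    /* first free unit in this range                */
    size_t        count;    /* number of contiguous free units              */
    struct range *prev, *next;
};

extern struct range *free_ranges;       /* head of the free-range list      */

/* Allocate n contiguous units; returns the first unit or (size_t)-1.       */
size_t range_alloc(size_t n)
{
    for (struct range *r = free_ranges; r; r = r->next) {
        if (r->count < n)
            continue;
        size_t first = r->start;
        r->start += n;                   /* carve units from the front       */
        r->count -= n;
        if (r->count == 0) {             /* unlink a now-empty range         */
            if (r->prev) r->prev->next = r->next;
            else         free_ranges   = r->next;
            if (r->next) r->next->prev = r->prev;
            /* the descriptor itself would be returned to its own pool here */
        }
        return first;
    }
    return (size_t)-1;
}
[/code]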

For allocating a single unit, bitmaps and lists work well. Bitmaps are smaller and use a constant amount of space and so are simpler, but lists use no resources if all units are free.

I use the word "unit" as this discussion could apply to physical pages, inodes, disk blocks, virtual addresses or even printers on a network.

It really comes down to personal preference. The important considerations are bootstrapping the allocator (avoiding a chicken-and-egg problem) and making sure that an allocator which uses a variable amount of memory (e.g. list-based) does not recurse indefinitely if it needs an extra page internally.

As far as granularity goes, for physical pages I'd say a single page; for malloc I'd say 8 bytes, as a purely arbitrary value which seems to work well. It all depends on how much memory the allocator uses internally per block.

Lastly, I'd suggest you read up on slabs and object caches if you haven't already done so. Even if you don't intend to implement them, it's a good mental exercise. The paper should be somewhere in the FAQ or quicklinks.
evincarofautumn

Re:Algorithims

Post by evincarofautumn »

Hmm...

Well, I have a very basic system that runs as a kernel-level subsystem called [tt]sensory/ms[/tt]: the "memory system". At the head of an arbitrary area of memory (beginning after the kernel) is the "zero block", which contains the location and size of a descriptor table, plus a few values that indicate:
  • the minimum size of a block, which defaults to either 512 bytes, the size of a sector on the boot disk, or the size of a physical page
  • the maximum memory index
  • a "partition table" that defines areas of memory that shouldn't be used by the allocator (e.g. VGA memory, BIOS area, etc.).
Also indexed in the partition table are areas that require or have requested copy-on-write to another location.
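
A rough C sketch of what such a zero block might look like (field names and widths are guesses for illustration, not the real [tt]sensory/ms[/tt] layout):

[code]
/* Rough sketch of a "zero block" in C (field names and widths are guesses
 * for illustration only).                                                  */
#include <stdint.h>

struct ms_partition {
    uint64_t base;                /* start of an area the allocator avoids  */
    uint64_t length;
    uint32_t flags;               /* reserved, copy-on-write requested, ... */
};

struct ms_zero_block {
    uint64_t desc_table_addr;     /* location of the descriptor table       */
    uint64_t desc_table_size;
    uint64_t min_block_size;      /* 512 bytes, sector size, or page size   */
    uint64_t max_mem_index;       /* highest usable memory index            */
    uint32_t partition_count;
    struct ms_partition partitions[];  /* areas not to be used, COW areas   */
};
[/code]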

The descriptor tables are implemented exactly the same way as [tt]sensory/fs[/tt] so that the ramdisk (currently a skeleton ;D) and paging (not yet implemented :P) can be handled without any time-consuming conversions. The structure is a doubly-linked tree of ranges.
Seven11

Re:Algorithims

Post by Seven11 »

For page (both physical and virtual) allocation/freeing I use a stack approach; for device buffers and similar things I use a bitmap approach. For the malloc and free part I'm now using (currently implementing) the next-fit algorithm.
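
A sketch of the stack approach for page frames (names made up) is about as simple as it gets:

[code]
/* Sketch of a free-page stack (names invented): freeing pushes a frame
 * number, allocating pops one, both in O(1).                             */
#include <stddef.h>
#include <stdint.h>

#define MAX_FRAMES 32768

static uint32_t free_stack[MAX_FRAMES];
static size_t   top;                     /* number of frames on the stack  */

void stack_free_frame(uint32_t frame)
{
    free_stack[top++] = frame;           /* assumes top < MAX_FRAMES       */
}

/* Returns a frame number, or UINT32_MAX if no frames are free. */
uint32_t stack_alloc_frame(void)
{
    if (top == 0)
        return UINT32_MAX;
    return free_stack[--top];
}
[/code]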

I'm thinking about using a tree approach for device buffers, but for now bitmaps are enough.
elderK

Re:Algorithims

Post by elderK »

Ah, I showed my enthusiastic lecturer all the concepts for Citadel.
IPC, PMM / VMM, etc...

He made sense. He said that once a reasonable number of allocations had been made using a linked-list structure for blocks (VMM block allocation), searching for a free block and so on would begin to seriously slow down the system.

He recommended I read up on the heap data structure.
So, that's what I'm doing. :)
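
For example, a rough sketch of a size-ordered (max-)heap over free blocks, so the biggest free block is always on top (all names made up for illustration):

[code]
/* Sketch of a binary max-heap over free blocks, keyed by size (all names
 * invented).  The largest free block is always at index 0, so "is there
 * any block big enough?" is an O(1) check and removal is O(log n),
 * instead of walking a whole linked list.                                 */
#include <stddef.h>
#include <stdint.h>

struct free_block {
    uintptr_t addr;
    size_t    size;
};

#define MAX_BLOCKS 1024
static struct free_block heap[MAX_BLOCKS];
static size_t            heap_len;

static void swap(size_t a, size_t b)
{
    struct free_block t = heap[a];
    heap[a] = heap[b];
    heap[b] = t;
}

void heap_insert(struct free_block b)       /* assumes heap_len < MAX_BLOCKS */
{
    size_t i = heap_len++;
    heap[i] = b;
    while (i > 0 && heap[(i - 1) / 2].size < heap[i].size) {   /* sift up   */
        swap(i, (i - 1) / 2);
        i = (i - 1) / 2;
    }
}

/* Pop the largest free block (caller splits it and reinserts the rest).
 * Assumes heap_len > 0.                                                    */
struct free_block heap_pop_max(void)
{
    struct free_block max = heap[0];
    heap[0] = heap[--heap_len];
    size_t i = 0;
    for (;;) {                                                 /* sift down */
        size_t l = 2 * i + 1, r = 2 * i + 2, big = i;
        if (l < heap_len && heap[l].size > heap[big].size) big = l;
        if (r < heap_len && heap[r].size > heap[big].size) big = r;
        if (big == i)
            break;
        swap(i, big);
        i = big;
    }
    return max;
}
[/code]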

Thanks guys for your input.
And yeah, I slaughtered the word Algorithim. ;) Sorry Pype.

~Zeii.
distantvoices

Re:Algorithims

Post by distantvoices »

Zeii, you are such a shameless slaughterer of words! I'm amazed. *rofl* It's still "algorithm", no way around that, even though one might perceive a hint of an 'i' between the 'th' and the 'm' when saying the word.

sweet gosh ...

As for your search for algorithms: for vmem allocations in the user process address space I also use AVL trees. For stuff which is to be allocated quick & dirty, I use preallocated & prepared data structures (e.g. the reverse mapping stuff & the list of available/dealt-out pages; that's a preallocated stack). Oh, and yes, it's a microkernel.

stay safe
... the osdever formerly known as beyond infinity ...
BlueillusionOS iso image