What is the proper way to set/map up user-space app memory?

kashimoto
Posts: 8
Joined: Wed Nov 25, 2020 4:00 pm

What is the proper way to set/map up user-space app memory?

Post by kashimoto »

How do people usually load a user-space app into memory?

So, like, the kernel has its own page directory and the user-space app also has its own (inheriting the kernel's page-directory entries, etc.).
I load the binary, read the headers, figure out the segments, etc., all in kernel memory.

How am I going to map/copy the .text segment into the user-space app's address space when my kernel has no access to the user-app address space?

Do I reserve the needed number of frames in kernel space (at some unused kernel virtual address), load the data into them, and then remap them into the user-app's address space?

Thanks for any answer, cheers! I just can't get my head around it :/
iansjack
Member
Posts: 4671
Joined: Sat Mar 31, 2012 3:07 am
Location: Chichester, UK

Re: What is the proper way to set/map up user-space app memo

Post by iansjack »

Loading a user-space program is typically done as a result of an exec system call, or its equivalent. A system call doesn't involve a change of page tables; it would be pretty inefficient if it did. So the kernel has access to the calling process's memory space and can create additional mappings in the current page table if necessary.
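
To make that concrete, here is a rough sketch of mapping one segment for the new image from inside the exec handler, without ever switching page tables. All the helper names (alloc_frame, map_page) and flag macros are invented for illustration, not any particular kernel's API:

    /* We are still running on the calling process's page tables, so new
     * user mappings created here are immediately visible to the kernel. */
    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    #define PAGE_SIZE    4096u
    #define PAGE_PRESENT 0x1
    #define PAGE_WRITE   0x2
    #define PAGE_USER    0x4

    uintptr_t alloc_frame(void);                  /* hand out a free physical frame */
    void map_page(uintptr_t vaddr, uintptr_t paddr,
                  unsigned flags);                /* edit the *current* page tables */

    /* Copy `len` bytes of segment data to user virtual address `vaddr`. */
    void load_segment(uintptr_t vaddr, const void *data, size_t len)
    {
        for (size_t off = 0; off < len; off += PAGE_SIZE) {
            map_page(vaddr + off, alloc_frame(),
                     PAGE_PRESENT | PAGE_WRITE | PAGE_USER);
            size_t n = (len - off < PAGE_SIZE) ? len - off : PAGE_SIZE;
            memcpy((void *)(vaddr + off), (const char *)data + off, n);
        }
        /* For a read-only segment such as .text, tighten the permissions
         * afterwards (drop PAGE_WRITE) once the copy is done. */
    }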
nullplan
Member
Posts: 1744
Joined: Wed Aug 30, 2017 8:24 am

Re: What is the proper way to set/map up user-space app memo

Post by nullplan »

kashimoto wrote:How am I going to map/copy the .text segment into the user-space app's address space when my kernel has no access to the user-app address space?
See, this is where you're wrong. When executing a new program, the kernel has the run of the entire userspace memory.

The basic abstraction here is the "virtual space", in which file segments (identified by file and offset) get mapped to addresses. When the kernel executes a new program, it creates a new virtual space which is empty. It then maps the program segments in. Then it checks how big arguments, environment, and possibly aux vectors are going to be, and finds a nice place to fit the stack. It may need to find and map more memory (e.g. Linux will also map a VDSO and VVAR page. Also a heap for brk() somewhere). If any of that fails, the kernel has to return the failure to the already executing program, so none of this can happen in a way that destroys the existing virtual space.
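
One possible shape for that abstraction, as a sketch only; every type and field name here is invented for illustration:

    #include <stdint.h>
    #include <stddef.h>

    struct file;                       /* a VFS node, or an anonymous file that is pure memory */

    struct vm_mapping {
        uintptr_t    vaddr;            /* where the segment lives in user space     */
        size_t       length;           /* length of the mapping, page aligned       */
        struct file *file;             /* what backs it                             */
        uint64_t     file_offset;      /* offset of the segment within that file    */
        unsigned     prot;             /* read/write/execute bits                   */
        struct vm_mapping *next;
    };

    struct vm_space {
        struct vm_mapping *mappings;   /* all of user space is a list of these      */
        uintptr_t          brk;        /* current end of the heap, if you have brk() */
        uintptr_t          page_dir;   /* arch-specific part, filled in lazily
                                          from the list by the page fault handler   */
    };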

If you have a sufficiently general understanding of a "file", namely one which allows for anonymous files, which have no name or backing store (or node in the VFS) and only consist of memory, then all of user space consists of file mappings. (If even your anonymous files have a reference counter, then you can implement full fork() semantics.)

Anyway, all of the above is arch-independent. The OS only needs to turn it all into something the CPU can understand when it has to load the virtual space. The CPU doesn't know about files, however, so a common way to handle files that are not currently in memory is to mark those pages as non-present. Access to those parts of the address space then generates a page fault, and the handler for that can load the requested parts of the file. This may entail first clearing out some other file from memory to make space.
kashimoto wrote:Do I reserve the needed number of frames in kernel space (at some unused kernel virtual address), load the data into them, and then remap them into the user-app's address space?
When the page fault handler notices it has to load a page into memory, it allocates a buffer the size of a page, reads a page of the file into it, and maps the buffer into user space. Whether you need a kernel address or not depends on circumstances, but usually not. For one thing, in 64-bit mode you can map all of physical memory, so you can always access all of it. For another, if the file is on a device supporting DMA, you still only need a physical address to tell the device where to load the data. If you don't have DMA, then yes, you need a virtual address, and likely a kernel one, but you can remap the pages afterwards.
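
To make the mechanics concrete, here is a rough sketch of that fault path for a 32-bit kernel that has no permanent mapping of all physical memory. Every helper name (find_mapping, alloc_frame, kmap_temp, file_read, map_user_page) is an assumption, not a real API:

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    #define PAGE_SIZE 4096u

    struct file;
    struct vm_space;
    struct vm_mapping {
        uintptr_t vaddr; size_t length;
        struct file *file; uint64_t file_offset; unsigned prot;
    };

    struct vm_mapping *find_mapping(struct vm_space *vs, uintptr_t addr);
    uintptr_t alloc_frame(void);
    void *kmap_temp(uintptr_t frame);               /* temporary kernel window */
    void  kunmap_temp(void *va);
    long  file_read(struct file *f, uint64_t off, void *buf, size_t len);
    void  map_user_page(struct vm_space *vs, uintptr_t vaddr,
                        uintptr_t frame, unsigned prot);

    int handle_user_fault(struct vm_space *vs, uintptr_t fault_addr)
    {
        struct vm_mapping *m = find_mapping(vs, fault_addr);
        if (!m)
            return -1;                              /* nothing mapped here: real fault */

        uintptr_t page  = fault_addr & ~(uintptr_t)(PAGE_SIZE - 1);
        uintptr_t frame = alloc_frame();

        void *buf = kmap_temp(frame);               /* or go through a direct physical map */
        long n = 0;
        if (m->file)
            n = file_read(m->file, m->file_offset + (page - m->vaddr), buf, PAGE_SIZE);
        if (n < 0) {
            kunmap_temp(buf);
            return -1;
        }
        memset((char *)buf + n, 0, PAGE_SIZE - (size_t)n);  /* anonymous page or short read */
        kunmap_temp(buf);

        map_user_page(vs, page, frame, m->prot);    /* now user space can see it      */
        return 0;                                   /* retry the faulting instruction */
    }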
Carpe diem!
Barry
Member
Posts: 54
Joined: Mon Aug 27, 2018 12:50 pm

Re: What is the proper way to set/map up user-space app memo

Post by Barry »

kashimoto wrote:So, like, the kernel has its own page directory and the user-space app also has its own (inheriting the kernel's page-directory entries, etc.).
I load the binary, read the headers, figure out the segments, etc., all in kernel memory.
This entirely depends on your kernel design, but when a system call or interrupt occurs you don't need to switch back into the kernel's page directory. Since the user program's page directory inherits your kernel's page tables (I assume the kernel doesn't use the entire address space), the kernel can operate perfectly well while still using the program's address space. It just runs from the kernel tables that are linked in, and the user's tables are still there.
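
As a rough sketch of that inheritance on 32-bit x86 without PAE; the helper names and the kernel_page_directory master copy are assumptions, not a prescribed layout:

    #include <stdint.h>
    #include <string.h>

    #define PD_ENTRIES       1024
    #define KERNEL_PDE_FIRST (0xC0000000u >> 22)    /* 768 for a 3 GiB user / 1 GiB kernel split */

    extern uint32_t kernel_page_directory[PD_ENTRIES];  /* master copy, kept in kernel memory */

    void     *alloc_page(void);                     /* page-aligned, mapped in kernel space */
    uintptr_t virt_to_phys(void *va);

    uintptr_t create_process_page_dir(void)
    {
        uint32_t *pd = alloc_page();
        memset(pd, 0, KERNEL_PDE_FIRST * sizeof(uint32_t));          /* empty user half    */
        memcpy(&pd[KERNEL_PDE_FIRST], &kernel_page_directory[KERNEL_PDE_FIRST],
               (PD_ENTRIES - KERNEL_PDE_FIRST) * sizeof(uint32_t));  /* shared kernel half */
        return virt_to_phys(pd);                    /* physical address to load into CR3  */
    }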

On a Unix-like OS, the exec() family of calls is used to load a new program, and it gets loaded in the same address space (same page directory, although a lot of the mappings may be changed). When the currently running program issues the exec() system call, the kernel takes over while still executing in that program's context, and so has access to that program's memory. From there, all you need to do is read the new executable, parse it for sections, then place them in user-space memory. The only difference between user memory and kernel memory is the protections on the pages, and possibly that the kernel tables are global. When the program is loaded, just return from the exec() call to its entry point.

On other operating systems, you may have a spawn() system call or similar that will run an executable as a new process. For that call, the kernel will create a new address space (with the kernel tables mapped in), switch to it, then load the executable sections there, again putting them in user memory.

For actually loading the executable you have a couple of options (a rough sketch of where each option plugs in follows below):
  1. Read the content of the sections into the correct memory location during the exec()
    This means that the program will take up physical memory space, and is ready to be run completely, although it may allocate more memory at runtime.
    The advantage of this method is that you won't run out of physical memory while the program is running (which would prevent it from running); that error will occur at load time instead.
  2. Use your kernel's virtual memory structures to describe the file mappings in memory, letting the file get loaded lazily by page-fault handling as the program runs (lazy loading)
    This has several advantages, among them:
    • Loading is quicker as you don't need to actually read the file, which is potentially quite large
    • Fits seamlessly with other memory mappings, such as for the heap and stack areas
    • Makes implementing shared libraries easier
    • Physical memory will only be used for parts of the file that run, as any parts that don't run (e.g. parts that are only used on other architectures/operating systems) don't get loaded
    The only downside is that you may encounter out-of-memory errors when loading the file at runtime, so you need a way to resolve them. Alternatively, you could reserve the correct number of pages at load time, ensuring that error never arises.
    This method requires more functionality from your kernel (or its services, in the case of a micro-kernel), so it is harder to implement.
If you haven't got much of your virtual memory or virtual file system implemented yet, consider the first option as a placeholder until you're equipped for the second.
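
As a very rough sketch of where the two options plug into exec() for an ELF binary: the simplified program-header struct loosely follows ELF32, and everything else (read_segment_now, record_lazy_mapping) is a made-up placeholder:

    #include <stdint.h>
    #include <stddef.h>

    #define PT_LOAD 1

    struct elf_phdr {                 /* simplified ELF32 program header */
        uint32_t p_type, p_offset, p_vaddr, p_paddr;
        uint32_t p_filesz, p_memsz, p_flags, p_align;
    };

    struct file;
    struct vm_space;

    /* Option 1: allocate, map, and read the bytes right now. */
    int read_segment_now(struct vm_space *vs, struct file *f, const struct elf_phdr *ph);
    /* Option 2: just record "this range of the file belongs at this address"
     * and let the page fault handler do the work later. */
    int record_lazy_mapping(struct vm_space *vs, struct file *f, const struct elf_phdr *ph);

    int load_elf_segments(struct vm_space *vs, struct file *f,
                          const struct elf_phdr *phdrs, size_t count, int lazy)
    {
        for (size_t i = 0; i < count; i++) {
            if (phdrs[i].p_type != PT_LOAD)
                continue;
            int rc = lazy ? record_lazy_mapping(vs, f, &phdrs[i])
                          : read_segment_now(vs, f, &phdrs[i]);
            if (rc < 0)
                return rc;            /* report the failure to the exec() caller */
        }
        return 0;
    }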

In an older kernel I wrote, I used separate page directories for every thread, but threads of the same process shared virtual memory mappings. So if one thread tried to access an area of memory, whether it was the heap or part of the executable, it would cause a page fault; the kernel would check the memory mappings, see that those pages were already in memory (in use by the sibling threads), and map them into that thread's page directory. The nice thing about doing it this way was that each thread could have a stack at the same position but with different stack content, which made creating new threads much easier. I also used this to put kernel stacks outside of global memory (still supervisor-only), and to support TLS easily. This design fell short when I wanted to share data that was on a thread's stack, as it involved copying it to the heap. But having the page fault handler take care of it was a lot easier than manually linking the pages in the exec()/clone()/fork() calls, so I'd definitely recommend the lazy loading approach if your kernel can support it.
kashimoto
Posts: 8
Joined: Wed Nov 25, 2020 4:00 pm

Re: What is the proper way to set/map up user-space app memo

Post by kashimoto »

Ah right, I also thought it was a problem with my concept :)
Thank you very much!
Barry
Member
Posts: 54
Joined: Mon Aug 27, 2018 12:50 pm

Re: What is the proper way to set/map up user-space app memo

Post by Barry »

While it's probably sensible not to switch the page directory for every system call, you could still load a program into the "wrong" page directory. Even if the kernel uses a different virtual address space, it should have access to the process's structures and thereby all of the virtual memory information (page tables and/or kernel structures). From there it should be trivial to map the parts of the file into its address space.
kashimoto
Posts: 8
Joined: Wed Nov 25, 2020 4:00 pm

Re: What is the proper way to set/map up user-space app memo

Post by kashimoto »

Hello back again!

A little feedback: my OS runs quite smoothly now, thanks to you! :D

But I have a related question, in order to improve it.

I have a 32-bit kernel that simply pre-allocates the page tables for the kernel region of the "main" page directory (0xC0000000-0xFFFFFFFF, 128 pages), so processes never need to worry about the kernel region's page tables: they are always mapped and never change.
This way I do not need to switch page directories during syscalls.
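
Roughly, that pre-allocation can look like this at boot time (the helper names here are just placeholders, not actual code):

    #include <stdint.h>
    #include <string.h>

    #define PD_ENTRIES       1024
    #define KERNEL_PDE_FIRST (0xC0000000u >> 22)    /* first kernel slot in the directory */
    #define PDE_PRESENT      0x1
    #define PDE_WRITE        0x2

    extern uint32_t kernel_page_directory[PD_ENTRIES];  /* the "main" page directory */

    void     *alloc_page(void);
    uintptr_t virt_to_phys(void *va);

    /* Allocate every kernel page table once, so these directory entries
     * never change again and every process directory can share them. */
    void preallocate_kernel_page_tables(void)
    {
        for (unsigned i = KERNEL_PDE_FIRST; i < PD_ENTRIES; i++) {
            if (kernel_page_directory[i] & PDE_PRESENT)
                continue;                           /* already set up by early boot */
            uint32_t *pt = alloc_page();
            memset(pt, 0, 4096);
            kernel_page_directory[i] =
                (uint32_t)virt_to_phys(pt) | PDE_PRESENT | PDE_WRITE;
        }
    }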

Now, how else can a process's page directory be kept up to date with the kernel's?
Is there no other way than to switch to the kernel page directory at some point in the syscall?

I am thinking about this because it would be nice to move on to 64-bit, but first I would like to see my options.

Thank you!
Octocontrabass
Member
Posts: 5452
Joined: Mon Mar 25, 2013 7:01 pm

Re: What is the proper way to set/map up user-space app memo

Post by Octocontrabass »

kashimoto wrote:Now, how else can a process's page directory be kept up to date with the kernel's?
Keep a version number in each context. Any time the context-specific version number is different from the global "current" version number, refresh the kernel page directory entries. Any time you modify the kernel page directory entries, increment the global version number.

Keep a flag in each context. Any time the flag is set, refresh the kernel page directory entries and then clear the flag. Any time you modify the kernel page directory entries, set the flag for every context.

Use PAE. The kernel (0xC0000000-0xFFFFFFFF) will have its own page directory, shared by all address spaces.

There might be other options too; this is just what I could think of off the top of my head.
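
A rough sketch of the first option, with all names invented; the refresh would be done somewhere the context is about to be activated, e.g. just before loading its directory into CR3:

    #include <stdint.h>
    #include <string.h>

    #define PD_ENTRIES       1024
    #define KERNEL_PDE_FIRST (0xC0000000u >> 22)

    extern uint32_t kernel_page_directory[PD_ENTRIES];  /* master copy of the kernel half */
    extern uint32_t kernel_pd_version;                  /* bumped on every change to it   */

    struct address_space {
        uint32_t *page_dir;           /* this context's directory, mapped in kernel space */
        uint32_t  kernel_version;     /* version its kernel half was last copied from     */
    };

    /* Call before loading ctx->page_dir into CR3 (e.g. on a context switch). */
    void sync_kernel_half(struct address_space *ctx)
    {
        uint32_t v = kernel_pd_version;              /* read the version first */
        if (ctx->kernel_version == v)
            return;                                  /* already up to date     */
        memcpy(&ctx->page_dir[KERNEL_PDE_FIRST],
               &kernel_page_directory[KERNEL_PDE_FIRST],
               (PD_ENTRIES - KERNEL_PDE_FIRST) * sizeof(uint32_t));
        ctx->kernel_version = v;                     /* a change during the copy leaves
                                                        the version stale, so the next
                                                        sync simply copies again       */
    }

    /* Whoever edits the kernel page directory entries does it in the master
     * copy, under its lock, and then bumps the version. */
    void publish_kernel_pde_change(void)
    {
        kernel_pd_version++;
    }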
kashimoto
Posts: 8
Joined: Wed Nov 25, 2020 4:00 pm

Re: What is the proper way to set/map up user-space app memo

Post by kashimoto »

Yeah, but what if I check the version in a syscall, and right after the check another syscall interrupts and changes kernel space? Then my syscall thinks we are good, but actually it is out of date.

It can easily happen on a multi-core system (and even on a single-core one).
thewrongchristian
Member
Posts: 417
Joined: Tue Apr 03, 2018 2:44 am

Re: What is the proper way to set/map up user-space app memo

Post by thewrongchristian »

kashimoto wrote:Yeah, but what if I check the version in a syscall, and right after the check another syscall interrupts and changes kernel space? Then my syscall thinks we are good, but actually it is out of date.

It can easily happen on a multi-core system (and even on a single-core one).
There is nothing stopping you from having the kernel page tables as a single shared resource, referenced by all your address space contexts. A new mapping added to these shared page tables will be picked up automatically by the paging hardware on first use. You would only have to sync each address space context when a new kernel level page table is created, so the new shared page table can be added to each address space page directory.

You should already have mechanisms to invalidate changes to existing mappings anyway if you're in a multi-core system. Changes to existing pages should not be creating new page tables.
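
Roughly, with invented names and the address-space list reduced to a simple linked list:

    #include <stdint.h>
    #include <string.h>

    #define PDE_PRESENT 0x1
    #define PDE_WRITE   0x2

    struct address_space {
        uint32_t             *page_dir;
        struct address_space *next;            /* global list of all address spaces */
    };

    extern struct address_space *all_address_spaces;
    extern uint32_t kernel_page_directory[1024];    /* master copy */

    void     *alloc_page(void);
    uintptr_t virt_to_phys(void *va);
    void      lock_address_space_list(void);
    void      unlock_address_space_list(void);

    /* Only called when a kernel mapping needs a page table that doesn't exist
     * yet; mappings added inside an existing shared table need none of this. */
    void add_kernel_page_table(unsigned pde_index)
    {
        uint32_t *pt = alloc_page();
        memset(pt, 0, 4096);
        uint32_t pde = (uint32_t)virt_to_phys(pt) | PDE_PRESENT | PDE_WRITE;

        lock_address_space_list();
        kernel_page_directory[pde_index] = pde;               /* master copy   */
        for (struct address_space *a = all_address_spaces; a; a = a->next)
            a->page_dir[pde_index] = pde;                     /* every process */
        unlock_address_space_list();
    }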

If the worst comes to the worst, and you take a page fault in kernel mode that you were not otherwise expecting, you should have standard VMM structures backing kernel virtual memory, so any missing translations can be fixed up on demand, just as you do for regular user demand paging. That way, the only mappings you absolutely need to keep available and in sync are the ones that map the page fault code and anything it relies on (typically the VMM subsystem).

In my kernel, all the VMM structures are maintained on the heap. Referencing them could trigger page faults in the heap, as the heap is virtually mapped, so I have a special VMM structure describing the heap that does not rely on the heap itself and is given fixed mappings at kernel bootstrap. That way, any kernel-address page fault in my kernel will ultimately land at VMM code that uses fixed mappings.

Handling kernel virtual memory essentially the same as user virtual memory, using the same structures, reduces the amount of special case code, which can only be good.
nullplan
Member
Posts: 1744
Joined: Wed Aug 30, 2017 8:24 am

Re: What is the proper way to set/map up user-space app memo

Post by nullplan »

thewrongchristian wrote:You would only have to sync each address space context when a new kernel level page table is created, so the new shared page table can be added to each address space page directory.
You can also do this part lazily. Leave the highest-level page table unfilled in most processes, and when a page fault happens, copy the entry from the master page table. This way, you only need a sync point when you remove memory mappings from kernel space, so avoid that. It is pretty simple to write an allocator that only expands its memory, for example, and also pretty simple to keep larger resources (like thread stacks) lying around in garbage heaps until needed.
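
A rough sketch of that lazy fix-up for 32-bit non-PAE paging; the names are invented, and the same idea applies one level up with 64-bit page tables:

    #include <stdint.h>

    #define KERNEL_BASE 0xC0000000u
    #define PDE_PRESENT 0x1

    extern uint32_t kernel_page_directory[1024];    /* master copy */
    uint32_t *current_page_dir(void);               /* directory of the current task */

    /* Called from the page fault handler for faults at kernel addresses.
     * Returns 1 if the fault was only a missing kernel PDE and has been fixed. */
    int fixup_kernel_pde(uintptr_t fault_addr)
    {
        if (fault_addr < KERNEL_BASE)
            return 0;
        unsigned idx = fault_addr >> 22;
        uint32_t master = kernel_page_directory[idx];
        uint32_t *pd = current_page_dir();
        if ((pd[idx] & PDE_PRESENT) || !(master & PDE_PRESENT))
            return 0;                               /* some other kind of fault */
        pd[idx] = master;                           /* copy the shared entry    */
        return 1;                                   /* retry the access         */
    }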

If you want it to be super awesome, you can attempt to copy Linux's cache mechanism, which separates memory allocation from object lifetime. Then you have a generic way to free memory if you do come under memory pressure.
Carpe diem!