Why does UNIX use fork to create a new process? I am just wondering what the reasoning is behind this. I have been studying JamesM's tutorial on multitasking, and looking at his code for creating a new process. It makes sense as to what he is doing, but my first inclination for implementing this in my own kernel was to do something like this:
Code: Select all
pde_t* create_user_space(u32 start, u32 end) {
    u32 i;
    pde_t* user_pde = (pde_t*)prim_malloc_a(sizeof(pde_t));
    mem_set((u32*)user_pde, 0, sizeof(pde_t));

    // round the address range out to page boundaries
    start &= 0xFFFFF000;
    end = (end + 0x0FFF) & 0xFFFFF000;

    // allocate a frame for every page the process will occupy
    for (i = start; i < end; i += 0x1000)
        alloc_frame(get_page(i, 1, user_pde), 0, 0);

    // link (not copy) the kernel's page tables into the new directory;
    // each directory entry covers 4 MiB, so the index is i >> 22
    for (i = KERNEL_START; i < KERNEL_END; i += 0x400000) {
        user_pde->tables[i >> 22] = kernel_pde->tables[i >> 22];
        user_pde->tables_phys[i >> 22] = kernel_pde->tables_phys[i >> 22];
    }
    return user_pde;
}
So basically, when you want to create a process, you call this function to get a page directory for it and create its address space. This function just creates page tables for the given address range you expect the process to occupy, and copies in the kernel page tables as well. I admit this function isn't working 100% yet, but the basic idea is that you create a pde, create and allocate tables for the address range the executable expects to execute in, and just link in kernel tables, as opposed to cloning a directory. Is there something I'm missing here that makes this a bad idea, or would this work for creating processes and switching to them? I don't plan to implement fork, but rather just set up processes from scratch.