User memory access in disk driver

JohnB
Posts: 3
Joined: Tue Oct 06, 2020 1:25 am
Location: London, UK

User memory access in disk driver

Post by JohnB »

Hello, I'm working on my first operating system kernel, and after a weekend of setting up basic booting etc. I got to thinking about disk drivers for it.

Apologies if this is discussed elsewhere but I couldn't immediately find anything. If there are any other discussions or documents I missed please direct me there!
Although I am writing what I hope is "production standard" code for this, it's very much a learning exercise, so I'm looking for the simplest way to do things even where they are slow or have long-term issues; my plan is to do this again "properly" once I've learned enough.

I have a basic 32-bit x86 kernel working, written in C++ and a little assembly. I boot up using multiboot in QEMU and set up the GDT, IDT, TSS, paging, etc.
I can run tasks in kernel mode or user mode and schedule them using the timer interrupt.
I have implemented a few basic system calls. After many many crashes it's all good!

Now it's time to implement reading and writing disk.
I want to use PATA as it appears to be the simplest interface supported by QEMU.

My plan is this :-
1) My disk driver will keep a queue of requests where each request is {read/write, disk block number, memory address} (see the sketch after this list)
2) I will have a (temporary) system call that can be called from user mode that adds a request to this queue. (*)
3) The task that issued the request will be moved to a "blocked" queue so it won't be scheduled

4) Issue requests to read/write to the disk hardware based on the item at the top of the queue and set up an interrupt to be called when it's complete. If it was a write request then send the data and wait for interrupt.
5) When the interrupt happens, if it was a read op, then use PIO to read the contents into memory.
6) After the above, if there are any more requests in the queue, issue the next one as in stage 4)
7) Unblock whichever task was blocked in stage 3 so it can continue next time it is scheduled.

- (*) I know this isn't a good interface for security etc. but it seems an easy way to test things.
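
Roughly, I picture the queue from steps 1-3 as something like the following. This is just a sketch with made-up names (DiskOp, DiskRequest, RequestQueue, Task); locking around the queue and the block size are left out:

Code:
#include <cstdint>

struct Task;  // whatever your scheduler uses to represent a task

enum class DiskOp { Read, Write };

struct DiskRequest {
    DiskOp       op;
    uint32_t     block;      // disk block (LBA) number
    void*        buffer;     // caller's address, only valid in its own address space
    Task*        requester;  // task to unblock when the request completes
    DiskRequest* next;       // simple singly linked queue
};

struct RequestQueue {
    DiskRequest* head = nullptr;
    DiskRequest* tail = nullptr;

    void push(DiskRequest* r) {          // called from the system call (step 2)
        r->next = nullptr;
        if (tail) tail->next = r; else head = r;
        tail = r;
    }

    DiskRequest* pop() {                 // called when issuing the next request (steps 4/6)
        DiskRequest* r = head;
        if (r) {
            head = r->next;
            if (!head) tail = nullptr;
        }
        return r;
    }
};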

All this looks reasonable to me but I'd welcome any comments.

I have one problem though.
The read and write requests to the hardware will mostly occur during interrupt processing.
This means that the task that issued the request probably isn't running, which means CR3 will be different, so I can't just read and write data to/from the address I stored in step 1.

As I see it I have three choices.
1) In the driver, switch my CR3 to that of the requesting task and access the memory, then switch back to the CR3 of the interrupted task. That seems slow and ugly though.
2) Have a buffer to store the data in my request queue, and on system call entry/exit check the buffers and, if needed, copy the data into the user-space location. That seems inefficient and ugly.
3) In the driver, when the time comes, look at the task and the virtual address stored in the request, manually go through the page tables to work out the physical location of the memory the user mentioned, and then temporarily map that physical location into a fixed place in the current task and access it there. That seems fast and elegant but rather complicated.
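
For reference, here is roughly what I imagine the page-table walk in option 3 looking like, assuming plain 32-bit paging without PAE and 4 KiB pages. phys_to_kernel_ptr is a placeholder for however the kernel reaches arbitrary physical memory (identity map, recursive mapping, temporary mapping, ...), and a buffer can of course span several pages, so this would need doing per page:

Code:
#include <cstdint>

uint32_t* phys_to_kernel_ptr(uint32_t phys);  // placeholder, see above

constexpr uint32_t PAGE_PRESENT = 0x1;
constexpr uint32_t PAGE_4MIB    = 0x80;       // PS bit in a PDE

// Translate vaddr in the address space identified by task_cr3 into a
// physical address. Returns false if the page isn't mapped.
bool virt_to_phys(uint32_t task_cr3, uint32_t vaddr, uint32_t& phys_out) {
    const uint32_t pd_index = (vaddr >> 22) & 0x3FF;   // top 10 bits
    const uint32_t pt_index = (vaddr >> 12) & 0x3FF;   // next 10 bits

    const uint32_t* page_dir = phys_to_kernel_ptr(task_cr3 & ~0xFFFu);
    const uint32_t pde = page_dir[pd_index];
    if (!(pde & PAGE_PRESENT))
        return false;

    if (pde & PAGE_4MIB) {                             // 4 MiB page, no page table
        phys_out = (pde & 0xFFC00000u) | (vaddr & 0x3FFFFFu);
        return true;
    }

    const uint32_t* page_table = phys_to_kernel_ptr(pde & ~0xFFFu);
    const uint32_t pte = page_table[pt_index];
    if (!(pte & PAGE_PRESENT))
        return false;

    phys_out = (pte & ~0xFFFu) | (vaddr & 0xFFF);
    return true;
}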

What do people do? And are there any better options?

Thanks!
Octocontrabass
Member
Posts: 5568
Joined: Mon Mar 25, 2013 7:01 pm

Re: User memory access in disk driver

Post by Octocontrabass »

JohnB wrote:And are there any better options?
Use DMA to pass data between the drive and the appropriate location in memory without mapping anything.

If you're sticking with PIO, I think your best bet is option 3. However, IDE is a bit of an exception. Most other disk interfaces don't even support PIO, so you might not want to put in the effort for something you'll stop using once you get DMA to work.
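
For what it's worth, the polled PIO read path on the primary channel is only a handful of port accesses. A rough sketch, where inb/outb/inw are assumed to be your own port I/O wrappers; in your design the data loop would run from the IRQ 14 handler rather than a polling loop, but the register programming is the same:

Code:
#include <cstdint>

uint8_t  inb(uint16_t port);            // your port I/O wrappers
void     outb(uint16_t port, uint8_t value);
uint16_t inw(uint16_t port);

// Read one sector (LBA28) from the primary channel, drive 0, by polling.
bool ata_pio_read_sector(uint32_t lba, uint16_t* buffer /* 256 words */) {
    outb(0x1F6, 0xE0 | ((lba >> 24) & 0x0F));   // drive 0, LBA mode, LBA bits 24-27
    outb(0x1F2, 1);                             // sector count = 1
    outb(0x1F3, lba & 0xFF);                    // LBA bits 0-7
    outb(0x1F4, (lba >> 8) & 0xFF);             // LBA bits 8-15
    outb(0x1F5, (lba >> 16) & 0xFF);            // LBA bits 16-23
    outb(0x1F7, 0x20);                          // READ SECTORS

    uint8_t status;
    do {                                        // wait for BSY clear, DRQ set
        status = inb(0x1F7);
        if (status & 0x01)                      // ERR
            return false;
    } while ((status & 0x80) || !(status & 0x08));

    for (int i = 0; i < 256; ++i)               // one 512-byte sector
        buffer[i] = inw(0x1F0);
    return true;
}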
JohnB
Posts: 3
Joined: Tue Oct 06, 2020 1:25 am
Location: London, UK

Re: User memory access in disk driver

Post by JohnB »

Thank you. I'll look into DMA.
My goal with this kernel was to do everything the simplest way as a learning exercise so that in my next kernel I could do a good job after mastering the basics.

But it seems like DMA might actually be the simplest way here and doesn't look that difficult.

I guess it means I have to somehow deal with things like the requesting process going away before the DMA completes, but that doesn't seem impossible.
linguofreak
Member
Posts: 510
Joined: Wed Mar 09, 2011 3:55 am

Re: User memory access in disk driver

Post by linguofreak »

JohnB wrote:Thank you. I'll look into DMA.
My goal with this kernel was to do everything the simplest way as a learning exercise so that in my next kernel I could do a good job after mastering the basics.

But it seems like DMA might actually be the simplest way here and doesn't look that difficult.

I guess it means I have to somehow deal with things like the requesting process going away before the DMA completes, but that doesn't seem impossible.
Keep in mind that if you do DMA, you'll have to do the following from your OP:
3) In the driver, when the time comes, look at the task and the virtual address stored in the request, manually go through the page tables to work out the physical location of the memory the user mentioned, and then temporarily map that physical location into a fixed place in the current task and access it there. That seems fast and elegant but rather complicated.
This is because your DMA controller and the devices it's talking to are only going to use physical addresses. They have no idea how the CPU has things mapped, so if you're DMAing directly into a process's address space, you'll have to do the address translation yourself ahead of time so you can hand the hardware a physical address.

That's *not* to say that you should use PIO instead, though: in addition to other objections that might be raised to its use, PIO is also slow and CPU intensive (so you have to worry about the consequences for scheduling, etc., from spending a significant amount of time in your disk interrupt handler).
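
To make that concrete, here's roughly what handing a physical address to the Bus Master IDE engine (the usual DMA interface for PATA) looks like. The PRD entry layout and the register offsets relative to the controller's BAR4 come from the Bus Master IDE spec; bmide_base, the port helpers and how you obtain the physical addresses are placeholders for your own code:

Code:
#include <cstdint>

uint8_t inb(uint16_t port);             // your port I/O wrappers
void    outb(uint16_t port, uint8_t value);
void    outl(uint16_t port, uint32_t value);

// One Physical Region Descriptor: a physical buffer address, a byte count
// (0 means 64 KiB) and a flags word whose bit 15 marks the end of the table.
struct PhysRegionDescriptor {
    uint32_t phys_addr;
    uint16_t byte_count;
    uint16_t flags;
} __attribute__((packed));

// Single-entry PRD table. It must be dword-aligned, must not cross a 64 KiB
// boundary, and the controller sees it (and the buffer) by physical address.
alignas(4) static PhysRegionDescriptor prdt[1];

void start_dma_read(uint16_t bmide_base,   // primary channel: BAR4 + 0
                    uint32_t prdt_phys,    // physical address of prdt[]
                    uint32_t buffer_phys,  // physical address of the data buffer
                    uint16_t bytes)
{
    prdt[0] = { buffer_phys, bytes, 0x8000 };         // one region, end of table

    outl(bmide_base + 4, prdt_phys);                  // PRD table address register
    outb(bmide_base + 2, inb(bmide_base + 2) | 0x06); // clear error + interrupt bits
    outb(bmide_base + 0, 0x08);                       // direction: write to memory (disk read)

    // ...program the drive's ATA registers with the LBA and issue READ DMA (0xC8)...

    outb(bmide_base + 0, 0x08 | 0x01);                // set the start bit
}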
JohnB
Posts: 3
Joined: Tue Oct 06, 2020 1:25 am
Location: London, UK

Re: User memory access in disk driver

Post by JohnB »

Thank you, this makes a lot of sense.

I need to create some types in my code for storing physical and linear addresses, as I currently just use pointers and this is bound to lead to confusion at some point, as well as a proper API for converting between them when needed.
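
Something like this is what I have in mind; the names are provisional and the conversion functions are just declarations for now:

Code:
#include <cstdint>

// Distinct wrapper types so the compiler rejects accidental mixing of the
// two kinds of address.
struct PhysAddr {
    uint32_t value;
    explicit constexpr PhysAddr(uint32_t v) : value(v) {}
};

struct VirtAddr {
    uint32_t value;
    explicit constexpr VirtAddr(uint32_t v) : value(v) {}
    void* as_ptr() const { return reinterpret_cast<void*>(value); }
};

// Conversions live in one place, so searching for these names finds every
// point where the two address spaces meet.
PhysAddr virt_to_phys(VirtAddr v);   // walk the page tables
VirtAddr phys_to_virt(PhysAddr p);   // e.g. via a higher-half direct mapping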
thewrongchristian
Member
Posts: 426
Joined: Tue Apr 03, 2018 2:44 am

Re: User memory access in disk driver

Post by thewrongchristian »

JohnB wrote:Thank you, this makes a lot of sense.

I need to create some types in my code for storing physical and linear addresses, as I currently just use pointers and this is bound to lead to confusion at some point, as well as a proper API for converting between them when needed.
Just a suggestion: you'll need a filesystem interface and a filesystem anyway, so why not write that to start with?

You can use a RAM disk to simulate disk I/O; multiboot can load your RAM disk when you boot. Then your disk driver is a simple memcpy to/from the RAM disk.

That will allow you to model your disk hardware interface without having to do the nitty-gritty of hardware I/O, as well as integrate it with your virtual memory system.
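
A rough sketch of what I mean, with made-up names (BlockDevice, RamDisk); the multiboot module holding the disk image is handed in as a base pointer and size:

Code:
#include <cstdint>
#include <cstring>

// The interface your filesystems program against.
struct BlockDevice {
    virtual bool read_block(uint32_t block, void* buffer) = 0;
    virtual bool write_block(uint32_t block, const void* buffer) = 0;
    virtual ~BlockDevice() = default;
};

// RAM disk backed by a multiboot module: "disk I/O" is just memcpy.
class RamDisk : public BlockDevice {
public:
    static constexpr uint32_t kBlockSize = 512;

    RamDisk(uint8_t* base, uint32_t size)
        : base_(base), blocks_(size / kBlockSize) {}

    bool read_block(uint32_t block, void* buffer) override {
        if (block >= blocks_) return false;
        std::memcpy(buffer, base_ + block * kBlockSize, kBlockSize);
        return true;
    }

    bool write_block(uint32_t block, const void* buffer) override {
        if (block >= blocks_) return false;
        std::memcpy(base_ + block * kBlockSize, buffer, kBlockSize);
        return true;
    }

private:
    uint8_t* base_;
    uint32_t blocks_;
};

Later on, a real IDE driver just becomes another BlockDevice and nothing above it has to change.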

I've managed to write USTAR and FAT filesystems, exec ELF files, read/write files from user space, and integrate it all with my paging system, all without writing a single hardware disk driver (though I do have a basic IDE driver now).