Hi,
how have you solved the problem where you have to copy a buffer read from disk back to the user thread which issued the call?
- The user thread calls the system routine "read" (e.g. to read a file), enters kernel mode, and starts disk I/O.
- The system call returns immediately, since this is non-blocking I/O. The user thread does other things...
- The disk interrupts; at this point some kernel-space DMA buffers are filled with the data the user needs.
- How do you get the data back into the user's address space?
The solutions I could imagine are:
- Windows-style APCs (where a "completion routine" is queued to the thread that issued the I/O. Before the thread runs again, the APC routine runs first; at that point we have the user's address space).
- Temporarily map the user's address space at a suitable moment after the interrupt?
Do you know how this is handled in Linux?
Bye
Zoomby
I/O: Get buffer from kernel to usermode
Re: I/O: Get buffer from kernel to usermode
Hi,
You can find some references on that in the books "Understanding the Linux Kernel", 3rd ed., and
"Linux Device Drivers", 3rd ed.
They cover this specifically...
Re: I/O: Get buffer from kernel to usermode
Thanks kop99,
I think I understood it:
If "read" returns EAGAIN, user mode has to wait on "select" and then voluntarily call "read" again, so that the kernel can finally access the user-mode buffers from within "read".
Is there another way in Linux?
And how have you (all) solved it?
- NickJohnson
- Member
- Posts: 1249
- Joined: Tue Mar 24, 2009 8:11 pm
- Location: Sunnyvale, California
Re: I/O: Get buffer from kernel to usermode
Well, the way I'm at least *planning* to do it on my system is based on my event-driven / hybrid kernel design. The user process sends a non-blocking message to the driver, which begins the request. Once the driver finishes, it copies the data to a piece of shared memory and sends a message back to the user process, which handles it, sets a global "done" flag, and then resumes the read call. That's probably similar to how the Linux interface works on the surface, except that the first message is a system call and the second is handled completely in the kernel.
It's easy to get the data back to userspace either way: direct addressing from the kernel or shared memory in driver processes; the tricky part is doing asynchronous read calls and getting the user process to respond.