Owen wrote:
The case to optimize for is the uncontested case. If you're blocking, you've already failed at scaling. Besides, if you can lock and unlock the futex in 1/5th of the time (and this is realistic given privilege-level switch times on x86), you significantly reduce the window in which blocking can occur (assuming the application holds the lock for the shortest time possible; a hopefully reasonable assumption).

This seems reasonable. However, if I were to implement this, I'd keep the creation/deletion calls and simply put the handle after the counter. I would also not support using this between processes: there is IPC for such things, so there is no need to support synchronization between processes in shared memory.
Letting the kernel handle mutexes / semaphores with IPIs
Re: Letting the kernel handle mutexes / semaphores with IPIs
berkus wrote:
rdos wrote:
I would also not support using this between processes. There is IPC for such things. No need to support synchronization between processes in shared memory.
Exactly the opposite, I would say.

The problems with such support clearly prevail. There is nothing that can be done with shared-memory synchronization that cannot be done with IPC (possibly in conjunction with shared memory). The primary problem with global handles is that they cannot automatically be purged when a process terminates, and so might never be deleted.
- Owen
- Member
- Posts: 1700
- Joined: Fri Jun 13, 2008 3:21 pm
- Location: Cambridge, United Kingdom
- Contact:
Re: Letting the kernel handle mutexes / semaphores with IPIs
The Futex structure I showed was a possible optimization. The Linux implementation uses just a uint32_t.
One possible train of thought: If you have shared memory, you don't need to do as much memory copying during IPC.
Re: Letting the kernel handle mutexes / semaphores with IPIs
Owen wrote:
The Futex structure I showed was a possible optimization. The Linux implementation uses just a uint32_t.

OK, then they must have the search problem when blocking. Using a handle seems like a better solution. If an OS must support futexes in shared memory, it could implement global handles and provide a new API to initialize a shared futex.
Owen wrote:
One possible train of thought: If you have shared memory, you don't need to do as much memory copying during IPC.

Shared memory doesn't scale. If you need to add more machines, shared memory won't do, and adding more cores doesn't scale either, because the memory system becomes a bottleneck. A generic IPC that works across machines on a network scales much better than any shared-memory approach.
And I do not copy during local IPC. I allocate page-aligned buffers so that only the page-table entries need to be transferred between sender and receiver.
- Owen
Re: Letting the kernel handle mutexes / semaphores with IPIs
Agreed, but
- Let's not throw away the benefits of shared memory on a single machine, and
- Shared memory makes a great base on which to build high-performance local IPC (In which the SHM is an implementation detail)
- Shared memory is a great way to implement file access (i.e. all access is by memory-mapping the file, implemented by mapping the RAM backing the cache into the process). Files are essentially shared memory anyway, and in most cases you'll get better performance with coarse-grained locking (as occurs with shared memory) than with sending fine-grained file operations over the network
Re: Letting the kernel handle mutexes / semaphores with IPIs
Owen wrote:
Shared memory is a great way to implement file access (i.e. all access is by memory-mapping the file, implemented by mapping the RAM backing the cache into the process). Files are essentially shared memory anyway, and in most cases you'll get better performance with coarse-grained locking (as occurs with shared memory) than with sending fine-grained file operations over the network.

Certainly. File buffers are best implemented using shared (kernel) memory, and their pages can easily be mapped into a process address space as well for faster access. I have had memory-mapped files implemented for quite some time.