Brendan wrote:To me, the biggest problem with IPC is changing from one process' "working set" to another process' "working set" - e.g. fetching cache lines for the second process and flushing (and, if modified, writing back) cache lines used by the previous process to make room. Paging does make the cost of this "working set switch" a little higher, but "single address space managed code" doesn't avoid it. Given that the cost of a "working set switch" can't be avoided, my approach is to minimise the number of "working set switches". Basically, when messages are sent they're just put in a FIFO queue so that any "working set switches" can be postponed - e.g. you can send 100 messages and do one "working set switch" rather than doing 100 of these "working set switches".
Now, putting a reference into a FIFO queue is fast, and it doesn't matter much what this reference is. It could be the address of the message data, or it could be the address of a page or page table that contains the message data. Managed code would make little difference.
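As a rough sketch of that queueing idea (illustrative only, not Brendan's actual code; locking for concurrent senders is omitted):

```c
#include <stddef.h>

/* A message reference: the payload pointer may be inline data, a page,
 * or a page table -- the queue doesn't care what it references. */
struct message {
    struct message *next;
    size_t          length;
    void           *payload;
};

struct msg_queue {
    struct message *head;
    struct message *tail;
};

/* Runs in the sender's context: O(1), no working-set switch. */
static void msg_send(struct msg_queue *q, struct message *m)
{
    m->next = NULL;
    if (q->tail)
        q->tail->next = m;
    else
        q->head = m;
    q->tail = m;
}

/* Runs once in the receiver's context: one working-set switch is
 * amortised over everything queued since the last drain. */
static struct message *msg_drain(struct msg_queue *q)
{
    struct message *all = q->head;
    q->head = q->tail = NULL;
    return all;
}
```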
With multiple address spaces, though, you never get rid of copying messages from one address space to another, unless you send entire pages.
I either copy the data (small messages), move entire pages (medium messages) or move a page table (large messages, up to 2 MiB).
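A sketch of how that size-based dispatch could look; the thresholds here are invented policy (only the 2 MiB span covered by one x86 page table is a hardware fact):

```c
#include <stddef.h>

enum xfer_method { XFER_COPY, XFER_MOVE_PAGES, XFER_MOVE_PAGE_TABLE };

#define PAGE_SIZE        4096u
#define PAGE_TABLE_SPAN  (2u * 1024u * 1024u)   /* 512 x 4 KiB entries */

static enum xfer_method pick_transfer(size_t len)
{
    if (len < PAGE_SIZE)
        return XFER_COPY;              /* copying beats remapping here */
    if (len < PAGE_TABLE_SPAN / 2)     /* illustrative cut-off         */
        return XFER_MOVE_PAGES;        /* unmap from sender, map later */
    return XFER_MOVE_PAGE_TABLE;       /* one PD entry moves 2 MiB     */
}
```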
For my scheme, actually moving the message data (from sender's address space to FIFO, and eventually from FIFO to receiver's address space) isn't really a problem. The main bottleneck is memory management (allocating pages in the message buffer when creating new messages; and freeing pages in the message buffer when they're no longer needed).
I'm not saying my method is the fastest way (using shared memory would be faster). I like it for other reasons - "messages disappear when sent and appear when received" is a much cleaner concept for application developers to deal with (especially for distributed systems), and it's incredibly difficult for an application developer to screw it up.
Cheers,
Brendan
OSwhatever wrote:As I see it, the fundamental change is the protection granularity: you go from protection at the process level to protection at the object level. There is very little HW support for object protection, so it must be handled by SW.
This remark in the original post drew my attention. Does hardware support object protection? How does/would that work? Just being curious...
Thanks, I didn't know about this CPU, which seems to have been ahead of its time. This is the type of architecture I would have in mind if object protection were implemented in HW: basically descriptors.
iAPX 432 systems were expensive and very slow.
I guess this is a bad thing when you try to sell them...
However, now that process technology has greatly improved, this type of processor could easily fit on a SoC.
Object protection would be the natural way to solve protection; I would have proposed it if I were a CPU designer. I don't know where the idea of a page table comes from, and I would never have thought of it myself, as it seems far-fetched. The page table has some advantages, but many of them can be achieved in other ways.
OSwhatever wrote:Object protection would be the natural way to solve protection; I would have proposed it if I were a CPU designer. I don't know where the idea of a page table comes from, and I would never have thought of it myself, as it seems far-fetched. The page table has some advantages, but many of them can be achieved in other ways.
The page table falls out of virtual memory (hardware likes tables: they're easy to access, though you do see different designs, e.g. PowerPC's hash table, which is essentially an extension of the TLB). Virtual memory falls out of asking the question "How do I make this machine behave, from a program's point of view, like multiple isolated machines?" That question falls out of Unix fork/exec, which initially worked by swapping the whole contents of RAM out to disk.
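As a toy illustration of why tables are easy for hardware, here is a classic two-level, 32-bit x86-style walk written in C (not any particular kernel's code; the tables are assumed identity-mapped so physical addresses can be dereferenced directly):

```c
#include <stdint.h>

#define PTE_PRESENT 0x001u

/* 32-bit virtual address: 10-bit directory index, 10-bit table index,
 * 12-bit page offset. Translation is just two indexed loads. */
static uint32_t translate(const uint32_t *page_dir, uint32_t vaddr)
{
    uint32_t pde = page_dir[vaddr >> 22];
    if (!(pde & PTE_PRESENT))
        return 0;                                  /* would raise #PF */

    const uint32_t *pt = (const uint32_t *)(uintptr_t)(pde & ~0xFFFu);
    uint32_t pte = pt[(vaddr >> 12) & 0x3FFu];
    if (!(pte & PTE_PRESENT))
        return 0;                                  /* would raise #PF */

    return (pte & ~0xFFFu) | (vaddr & 0xFFFu);     /* frame | offset  */
}
```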
I guess on the x86, one could use segmentation for object protection. Of course it has its limitations (not to mention bad performance), but it is possible through the LDT to grant each process access to a limited set of objects.
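For illustration, here is a sketch of encoding one such descriptor (the bit layout is the standard one from the Intel manuals; make_object_descriptor is an invented helper, and with byte granularity the 20-bit limit caps an object at 1 MiB):

```c
#include <stdint.h>

/* Build a 32-bit, expand-up data-segment descriptor whose base/limit
 * cover exactly one object, for installation in a process's LDT. */
static uint64_t make_object_descriptor(uint32_t base, uint32_t limit,
                                       int writable)
{
    uint8_t access = 0x90u                    /* present, S=1 (data)     */
                   | (3u << 5)                /* DPL=3: user accessible  */
                   | (writable ? 0x2u : 0u);  /* read/write vs read-only */

    uint64_t d = 0;
    d |= (uint64_t)(limit & 0xFFFFu);              /* limit bits 15..0   */
    d |= (uint64_t)(base  & 0xFFFFFFu)    << 16;   /* base bits 23..0    */
    d |= (uint64_t)access                 << 40;
    d |= (uint64_t)((limit >> 16) & 0xFu) << 48;   /* limit bits 19..16  */
    d |= (uint64_t)0x4u                   << 52;   /* G=0 (bytes), D/B=1 */
    d |= (uint64_t)(base >> 24)           << 56;   /* base bits 31..24   */
    return d;
}
```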
> I guess on the x86, one could use segmentation for object protection.
I also like this idea very much, but it's way too rdos.
And object protection is something more complex than just bounds checking and/or read/write protection of the whole object (some class members may be read-only while others are not, and some may be public while others are private).
After all, managed execution with JIT seems the best way, although some CPU-side support is required to achieve speed.
First, we must understand what exactly is meant by a managed language.
A managed language is equivalent to an unmanaged language when its runtime library is removed.
The runtime saves the programmer from managing low-level aspects of memory directly. But since an operating system needs to manage memory at extremely low levels, a managed language won't be applicable there.
As for the benefits of a managed language: it provides a lot of protection (assuming that the VM is perfect). I cannot see why that protection cannot be implemented in the MMU. Taking all the facts into consideration, I think the MMU will continue to exist forever in one form or another, and that a managed language is a managed language only because of its runtime. There is no reason why native code cannot run as managed code; you only need a special runtime and the MMU.
A managed language is equivalent to an unmanaged language when its runtime library is removed.
I disagree. A managed language is undefined when its required runtime is removed.
A managed language is a managed language only because it is being managed; the runtime is irrelevant.
How it is run, with or without a runtime, is not important; it can even be interpreted.
(though a runtime is almost a necessity for a complex language anyway)
And by the way, some protection can be done by syntax checks and/or bounds checking at compile time.
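For example, with a constant index, gcc and clang can already reject an out-of-bounds access at compile time (via -Warray-bounds; the exact flags and optimisation level needed vary by compiler):

```c
int buf[4];

void oops(void)
{
    buf[5] = 1;   /* warning: array index 5 is past the end of 'buf' */
}
```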
Well, what an MMU does is test things like: is the page present, is the page readable/writable/executable (as requested), and does the page belong to the requesting process? In a managed environment, the same may be done by software.
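A minimal sketch of those same tests done in software, using x86-style PTE bits (here PTE_USER stands in for "belongs to the requesting process", and the execute check is elided):

```c
#include <stdbool.h>
#include <stdint.h>

#define PTE_PRESENT 0x1u
#define PTE_WRITE   0x2u
#define PTE_USER    0x4u

enum access { ACC_READ, ACC_WRITE, ACC_EXEC };

static bool access_ok(uint32_t pte, enum access how, bool user_mode)
{
    if (!(pte & PTE_PRESENT))
        return false;                       /* page not present        */
    if (how == ACC_WRITE && !(pte & PTE_WRITE))
        return false;                       /* page not writable       */
    if (user_mode && !(pte & PTE_USER))
        return false;                       /* not this process's page */
    return true;
}
```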
But software has a hard time doing that as asynchronously and as fast as dedicated silicon can, which brings us back to all the other arguments that have been posted so far.
"Certainly avoid yourself. He is a newbie and might not realize it. You'll hate his code deeply a few years down the road." - Sortie
[ My OS ] [ VDisk/SFS ]
Combuster wrote:But software has a hard time doing that as asynchronously and as fast as dedicated silicon can.
And that is why the MMU will always continue to exist as dedicated hardware.
MIPS has only a TLB and uses a software approach for the actual page table. Page table walkers are quite large compared to the rest of a CPU's logic (excluding the caches), and MIPS cores usually have a smaller footprint than their ARM equivalents.
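Roughly, a software refill has this shape (written in C for readability; the real MIPS handler is a handful of assembly instructions using CP0 registers, and both tlb_write_random and the single-level linear table here are illustrative stand-ins):

```c
#include <stdint.h>

extern uint32_t *current_page_table;   /* per-process linear table      */
extern void tlb_write_random(uint32_t vpn, uint32_t pte); /* ~ TLBWR    */

void tlb_refill(uint32_t bad_vaddr)
{
    uint32_t vpn = bad_vaddr >> 12;            /* 4 KiB pages           */
    uint32_t pte = current_page_table[vpn];    /* the whole "walk"      */
    /* A missing mapping would punt to the generic fault handler here.  */
    tlb_write_random(vpn, pte);                /* install in the TLB    */
}
```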
I'm more concerned about how page tables scale with larger memories. In the future we will have terabytes of internal RAM, and the question is whether the page table will become a bottleneck, as it would require something like 5 page table levels. They already tried with the inverted page table, but it has other drawbacks.
I think per-object protection could be an interesting approach in the future, even if I'm skeptical that the MMU will disappear completely. An MMU is still needed for virtualization; however, if we could increase the page size to megabytes, the page table would be much smaller.
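Back-of-the-envelope, assuming the usual 512-entry (9 index bits) levels of x86-64 paging:

```
levels = (va_bits - page_offset_bits) / 9

4 KiB pages (12 offset bits):  48-bit VA -> (48 - 12) / 9 = 4 levels
                               57-bit VA -> (57 - 12) / 9 = 5 levels
2 MiB pages (21 offset bits):  48-bit VA -> (48 - 21) / 9 = 3 levels
```

So megabyte-scale pages buy back a whole level even before the tables themselves shrink.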
OSwhatever wrote:I'm more concerned about how page tables scale with larger memories. In the future we will have terabytes of internal RAM, and the question is whether the page table will become a bottleneck, as it would require something like 5 page table levels. They already tried with the inverted page table, but it has other drawbacks.
Perhaps the amount of memory visible to a core (or group of cores, to support multi-threaded processes) could be set to a fair limit, with the number of cores greatly increased. I.e., each piece of memory is exclusive to a specific core / core group, with some memory-mapped I/O provided for inter-CPU communication.
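A toy sketch of what that memory-mapped channel might look like; everything here (MAILBOX_BASE, the register layout, send_word) is invented for illustration:

```c
#include <stdint.h>

#define MAILBOX_BASE 0xFFFF0000u     /* hypothetical MMIO window          */

struct mailbox {
    volatile uint32_t data;      /* word passed to the other core group   */
    volatile uint32_t doorbell;  /* write 1: raise an IRQ on the target   */
    volatile uint32_t status;    /* bit 0: previous word still unread     */
};

static void send_word(uint32_t target_group, uint32_t word)
{
    struct mailbox *mb = (struct mailbox *)
        (uintptr_t)(MAILBOX_BASE + target_group * sizeof(struct mailbox));

    while (mb->status & 1u)      /* spin until the receiver drains it     */
        ;
    mb->data     = word;
    mb->doorbell = 1u;           /* notify the receiving core group       */
}
```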