Are you sure I'm wrong? It depends: if I'm running a tight one-line CPU benchmark, how can this checking happen, or even multiple loops with all the code in one memory page? It only shows up on larger benchmarks and when accessing new data structures (pages). Hence it really is a memory test, not a CPU one. CPU intensive benchmarks are meaningless unless they use OS calls (which the papers show are MUCH cheaper with Singularity).
Wrong. CPU intensive code running under software isolation (with no API calls) suffers additional overhead (caused by things like array bounds checking, etc). The paragraph from the Singularity paper that you quoted mentions this ("the runtime overhead for safe code is under 5%").
That's if you only read the one paper. The others have some of these benchmarks, e.g. the Bartok compiler one; compilation is a decent benchmark. As for the software overhead, a smart compiler can reduce it significantly, since it can work out the memory a loop covers. Once the compiler is proven and checks this at compile time, why do they even need to verify it at run time?

For CPU intensive workloads software isolation has more overhead, and for "IPC ping-pong" workloads hardware isolation has more overhead. I'm just saying it'd make sense to do an "IPC ping-pong" benchmark *and* a "CPU intensive" benchmark, so that people don't get the wrong idea and think that software isolation always gives better performance.
Of course you'd think people reading research papers would be smart enough to read between the lines; but obviously this isn't the case...
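To make the bounds-checking point concrete, here is a rough sketch in C of the kind of transformation being argued about (this is just an illustration of the general technique, not what Bartok or Sing# actually emits): the first function carries the per-iteration check that causes the "under 5%" overhead, and the second shows the hoisted form a compiler can use once it can prove the loop's index range at compile time.

Code:
#include <stdio.h>
#include <stdlib.h>

/* Naive form: the check a safe runtime would otherwise do on every access. */
long sum_checked(const int *a, size_t len, size_t n)
{
    long sum = 0;
    for (size_t i = 0; i < n; i++) {
        if (i >= len)              /* per-iteration bounds check */
            abort();
        sum += a[i];
    }
    return sum;
}

/* What a smart compiler can emit instead: if it can prove 0 <= i < n for
 * the whole loop, a single check of n against len before the loop is
 * enough, and the loop body runs with no per-access overhead. */
long sum_hoisted(const int *a, size_t len, size_t n)
{
    if (n > len)                   /* one hoisted check */
        abort();
    long sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += a[i];
    return sum;
}

int main(void)
{
    int data[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    printf("%ld %ld\n", sum_checked(data, 8, 8), sum_hoisted(data, 8, 8));
    return 0;
}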
I didn't believe some of the claims either, so I downloaded it and wanted to have a closer look. The design documents are better than the papers but are more speculative. I'm sure some results are not available, and a number of design documents are missing (the last one is number 87).

Benk wrote:
Memory benchmarks are probably useful. There are also charts comparing the cost of API calls, window creation, etc. against Windows, Linux, etc. There are many other benchmarks in the other 6 papers and 30 design documents released with the distribution, showing memory usage etc.
I think I've seen 2 papers and a few videos (for example), and none of the design documents.
If you download it there are:
2 technical reports
9 papers (only from 2005-2007)
4 getting-started docs which cover some of the ideas
38 design docs
In the later video they mentioned they had 3 days to get the benchmarks for the paper deadlines and no time for optimization etc.
This would only affect a really small embedded OS. How many apps use less than 4K, or even 500K? Take a 401K app: with 4K pages it will use 101 pages (404K), with 16K pages it will use 26 pages (416K). Now you're losing 12K = 3%, BUT your page table is 1/4 of the size, giving you a lot of memory back. So you probably end up losing 1% (so out of 1 GiB of user apps you lose 10M). You also gain performance, as you will have fewer TLB misses and less page table in the CPU caches.
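To put rough numbers on that, here's a throwaway C snippet (my own back-of-envelope check, not anything from the Singularity distribution) that rounds an app up to whole pages for both page sizes. The 12K figure above is the extra waste of 16K pages over 4K pages (416K - 404K), which is about 3% of the 401K app, while the page table shrinks to roughly a quarter of the entries (26 vs 101).

Code:
#include <stdio.h>

/* Round an application size up to whole pages and report the waste.
 * Sizes are in KiB; the 4 KiB and 16 KiB cases match the example above. */
static void show(unsigned app_kib, unsigned page_kib)
{
    unsigned pages = (app_kib + page_kib - 1) / page_kib;  /* ceiling */
    unsigned used  = pages * page_kib;
    unsigned waste = used - app_kib;
    printf("%u KiB app, %2u KiB pages: %3u pages, %u KiB mapped, "
           "%2u KiB wasted (%.1f%%)\n",
           app_kib, page_kib, pages, used, waste, 100.0 * waste / app_kib);
}

int main(void)
{
    show(401, 4);    /* 101 pages, 404 KiB,  3 KiB wasted */
    show(401, 16);   /*  26 pages, 416 KiB, 15 KiB wasted */
    return 0;
}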
Benk wrote:
With regard to paging I think:
1) 4K pages are too much overhead for most applications these days. Singularity in the test uses 4K pages.
The problem with large pages is there's wastage - if you're using 4 KiB pages and only need 1 KiB of RAM then you have to waste the other 3 KiB of RAM; and if you're using 1 GiB pages the OS will run out of RAM before it can display a login screen (and then it'll need to swap entire 1 GiB chunks to disk).
Considering memory is so big and cheap, I don't think it's an issue.
In a managed environment like Singularity, since you don't use pages for security, you could run a single garbage collector plus a big-page memory manager and a small memory manager. The small memory manager can just take big pages from the big one and hand them out as smaller allocations, but underneath everything works on big 1M pages.
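A minimal sketch of that split, just to show the shape of it (the names and the bump-allocation strategy are my own assumptions; only the 1M big-page granularity comes from the idea above, and this is nothing like Singularity's actual allocator):

Code:
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

#define BIG_PAGE_SIZE (1u << 20)   /* everything underneath works on 1M pages */

/* Big-page manager: hands out whole 1M chunks. Here it just wraps malloc;
 * in a real kernel it would carve them out of physical memory. */
static void *big_page_alloc(void)
{
    return malloc(BIG_PAGE_SIZE);
}

/* Small memory manager: takes big pages from the big one and hands them
 * out as smaller allocations by bumping a cursor inside the current page.
 * (A real version would hand exhausted big pages back for the garbage
 * collector to reclaim instead of just forgetting them.) */
struct small_heap {
    uint8_t *page;      /* current big page */
    size_t   used;      /* bytes already handed out from it */
};

static void *small_alloc(struct small_heap *h, size_t size)
{
    size = (size + 15) & ~(size_t)15;            /* 16-byte alignment */
    if (size > BIG_PAGE_SIZE)
        return NULL;                             /* too big for this sketch */
    if (h->page == NULL || h->used + size > BIG_PAGE_SIZE) {
        h->page = big_page_alloc();              /* grab a fresh big page */
        h->used = 0;
        if (h->page == NULL)
            return NULL;
    }
    void *p = h->page + h->used;
    h->used += size;
    return p;
}

int main(void)
{
    struct small_heap heap = {0};
    char *a = small_alloc(&heap, 100);   /* both allocations come out of  */
    char *b = small_alloc(&heap, 4000);  /* the same underlying 1M page   */
    return (a && b) ? 0 : 1;
}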
My bad, I should have been clearer: I was referring to virtual memory being paged to disk; paging it back takes forever. DDR3 is only expensive because it's new; DDR2 was very expensive compared to DDR/SDRAM when it was released.

Benk wrote:
2) Virtual memory is terrible in practice. It's an order of magnitude quicker to restart most applications than to resume them after they've had memory stolen away, and besides, memory is cheap; by the time these OSs hit the street we will all be looking at 8 GiB+ systems.
It's not an order of magnitude quicker to restart most applications - are you sure you know how virtual memory works?
By the time we're looking at 8 GiB+ systems we'll also be looking at 8 GiB+ applications (or at least 2 GiB applications running on 3 GiB OSs with 2 GiB of file cache and 1 GiB of video data). At the moment DDR2 is cheap, but DDR3 isn't, and newer computers need DDR3.
What happens now? If the machine is really out of memory (and not just pretending to be because memory is full of cached disk blocks) and pages heavily for longer than a short period, it often becomes unusable and it's time for the reset switch. In Singularity there is no dynamic loading of content, so these things can't happen; the only thing that can happen is that a new app fails to load, which also happens when you run out of swap.

Benk wrote:
Bad blocks or disk drivers can have a really nasty impact on a machine, and furthermore you need tight coupling between the disk driver and the memory manager.
If the OS runs out of RAM, then what do you suggest should happen:
* Do a kernel panic or "blue screen of death" and shutdown the OS
* Terminate processes to free up RAM
* Let processes fail because they can't get the RAM they need, and let each process terminate itself (see the sketch after this list)
* Everyone doubles the amount of RAM installed every time they run out of RAM (to avoid random denial of service problems caused by the first 3 options), even if they only run out of RAM once in 5 years
* Allow data to be paged to/from disk
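For the third option the caller-side shape is trivial: the allocation just fails and the process decides whether to degrade or exit cleanly, rather than being killed by the kernel. A minimal sketch (plain C, nothing Singularity-specific):

Code:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Option 3 above: the allocation simply fails and the process decides
     * what to do (shrink a cache, refuse to open another document, or
     * exit cleanly). */
    size_t want = 64u * 1024 * 1024;
    void *buf = malloc(want);
    if (buf == NULL) {
        fprintf(stderr, "out of memory: could not get %zu bytes, "
                        "running without the optional cache\n", want);
        return 1;                /* or fall back to a smaller working set */
    }
    /* ... use buf ... */
    free(buf);
    return 0;
}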
With servers, people plan capacity so there is no paging anyway. I think paging is something that was important in the early 80s and 90s, not now. On most Windows machines I use I turn it off and don't have much problem (in fact it's much better for my usage pattern).
Agreed for nearly all OS models; this is why Singularity intrigues me, as it does not necessarily need it. The indirection means an intermediate memory manager can remap memory, hence no fragmentation. See the other post, and the sketch at the end of this one.

Benk wrote:
3) There are many ways of dealing with memory fragmentation.
Sure, and they all suck (but paging sucks less).
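On the indirection point above: the classic trick (not claiming this is how Singularity's collector actually works) is to hand out handles instead of raw pointers, so the memory manager can slide live blocks together and fix up a single table, which removes fragmentation without the callers ever noticing. A toy sketch:

Code:
#include <stdint.h>
#include <string.h>

#define MAX_HANDLES 64
#define HEAP_SIZE   4096

static uint8_t  heap[HEAP_SIZE];
static uint8_t *table[MAX_HANDLES];   /* handle -> current address */
static size_t   sizes[MAX_HANDLES];
static size_t   next_handle;
static size_t   heap_top;

/* Allocate a block and return a handle, not a pointer. */
static int h_alloc(size_t size)
{
    if (next_handle == MAX_HANDLES || heap_top + size > HEAP_SIZE)
        return -1;
    table[next_handle] = &heap[heap_top];
    sizes[next_handle] = size;
    heap_top += size;
    return (int)next_handle++;
}

/* Callers dereference the handle every time they touch the memory. */
static uint8_t *h_deref(int h) { return table[h]; }

/* Compaction: slide live blocks together and fix up the table.
 * Callers are unaffected because they only ever kept handles. */
static void compact(void)
{
    size_t top = 0;
    for (size_t h = 0; h < next_handle; h++) {
        if (table[h] == NULL)
            continue;                 /* freed slot */
        memmove(&heap[top], table[h], sizes[h]);
        table[h] = &heap[top];
        top += sizes[h];
    }
    heap_top = top;
}

int main(void)
{
    int a = h_alloc(100);
    int b = h_alloc(200);
    table[a] = NULL;                  /* "free" the first block */
    compact();                        /* b's data moves, its handle still works */
    h_deref(b)[0] = 42;
    return 0;
}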