Re: Physical segmentation advantages
Posted: Fri May 09, 2014 12:49 am
And again I wonder how a technical discussion can turn into a personal fight. What is wrong with you guys?
The Place to Start for Operating System Developers
https://f.osdev.org/
Brendan wrote: Wrong. You're ignoring all of the things that make segmentation bad (memory mapped files, swap space, physical address space de-fragmentation, etc) and focusing on a few negligible things in a deluded attempt at pretending segmentation doesn't suck badly.

rdos wrote: Using paging to support those is ancient (70s?).
1. Memory mapped files - no reason why this cannot be implemented without paging

So when I memory map a 4TB file... you are proposing to try and allocate 4TB of physical memory?
rdos wrote: 2. Swap space - you simply do not support such ancient constructs.

OK, no swap space. What about paging out unmodified file mappings (such as executable code)? Note that many programs contain considerable quantities of initialization code which can safely be discarded after startup.
rdos wrote: 3. Physical address defragmentation - not much of a problem when computers have many GB of memory.

Still very much a problem. When my system has 7 of 8GB of RAM used, how much contiguous RAM do you think the operating system will be able to find? Hint: Probably chunks of no more than 1MB.
Brendan wrote: Wrong. In fact it's the exact opposite - OSs use segmentation for backward compatibility, and inevitably break compatibility to remove segmentation in later versions (once they realise the pain of keeping it is worse than the pain of breaking compatibility).

rdos wrote: Not so, popular OS designs (UNIX compatibility) require paging, as the whole concept is constructed on top of paging.

You may find hints of the exact opposite in the fact that the signal UNIX sends a process for most memory faults is SIGSEGV - segmentation violation. It's just that the early machines had different notions of segments (and no notion of paging). Indeed, the vfork(2) system call was introduced to make the fork/exec paradigm efficient on such machines (because otherwise you had to copy the entire process's memory only to discard it immediately after the exec).
Owen wrote:
rdos wrote: 3. Physical address defragmentation - not much of a problem when computers have many GB of memory.
Still very much a problem. When my system has 7 of 8GB of RAM used, how much contiguous RAM do you think the operating system will be able to find? Hint: Probably chunks of no more than 1MB.

There are going to be problems with fragmentation unless a segmented OS compacts its memory; with compaction you get large contiguous chunks back even when available memory is low. Compaction is the key here. If you don't like compaction, there are other allocation strategies that minimize fragmentation. Paged OSes have fragmentation problems as well - how do they deal with them?
OSwhatever wrote: There are going to be problems with fragmentation unless a segmented OS has compaction of the memory [...] In paged OSes, there are problems with fragmentation there as well, how are they dealing with it?

See this post.
OSwhatever wrote: Most of the stuff you do with paging can be achieved with segmentation as well.

Segmentation advocates like to say that. Then you ask them how to do basic things (e.g. a large memory mapped file with pieces in the middle still on disk) and they can't think of a sane way. It's like they think that if they say it often enough it might magically become true one day.
OSwhatever wrote: User mode programs are likely to request blocks of memory in power-of-two sizes anyway, in order to simplify allocation and so that you don't need to visit the kernel for every little memory allocation in user space.

You can't let applications create their own GDT/LDT entries (otherwise they'll create a CPL=3 data segment to write wherever they like in kernel space), so you end up with a kernel API call for every single allocation regardless of how small it is. Normally (for paging) the application asks for larger chunks and divides them up, with no security problems and far fewer kernel API calls.
OSwhatever wrote: I just think that translation is just unnecessary, it's an extra layer of indirection that we don't really need. It's like fetching water from the other side of the river. Also, with paging you have to plan your virtual address space. With segments this is not necessary, as a segment has a base address somewhere in physical memory and you don't deal with addresses at all, only offsets within your segment. So in a sense segmentation is also some kind of translation, but I think it gives you more freedom to choose the layout, structures and sizes of your segments.

I just think the translation done by segmentation is unnecessary; it's an extra layer of indirection that we don't really need. Also, with segmentation you have to plan out your segments. With paging this is not necessary, as pages can sit anywhere in physical memory and you don't need to deal with segments at all. So in a sense paging is also some kind of translation, but it has repeatedly proven to give you more freedom to choose the layout, structures and sizes of your virtual address space areas.
Brendan wrote: You can't let applications create their own GDT/LDT entries (otherwise they'll create a CPL=3 data segment to write wherever they like in kernel space) so you end up with a kernel API call for every single allocation regardless of how small it is. Normally (for paging) the application asks for larger chunks and divides it up with no security problems and much fewer kernel API calls.

Wait - if you have per-process virtual address spaces and non-user pages are properly marked as supervisor-only, user segments can be allowed to point anywhere in the address space, and page translation will still catch the user's disallowed page accesses. AFAIU we already have that working in x86 Windows and Linux: user code and data segments span the entire 4G without a problem.
Brendan wrote:
Hi,

Basically, both paging and segmentation have overhead, but the overhead of segmentation is worse.

Segmentation advocates like to say that. Then you ask them how to do basic things (e.g. a large memory mapped file with pieces in the middle still on disk) and they can't think of a sane way. It's like they think that if they say it often enough it might magically become true one day.

You can't let applications create their own GDT/LDT entries (otherwise they'll create a CPL=3 data segment to write wherever they like in kernel space) so you end up with a kernel API call for every single allocation regardless of how small it is. Normally (for paging) the application asks for larger chunks and divides it up with no security problems and much fewer kernel API calls.

Cheers,
Brendan

Brendan, you are very x86-oriented and often bring up the limitations of that particular segmentation. I'm talking about the general case: a CPU that does not exist but implements segmentation. There are a lot of things you can do differently with segmentation than in the x86 case. For example:
- A very large segment descriptor cache (like a TLB)
- Allowing user processes to create their own segment descriptors based on segments given by the kernel
- Atomic, cache-aware updates of a segment's base address
- Expanding the number of allowed segments compared to the x86 case
alexfru wrote:
Brendan wrote: You can't let applications create their own GDT/LDT entries (otherwise they'll create a CPL=3 data segment to write wherever they like in kernel space) so you end up with a kernel API call for every single allocation regardless of how small it is.
Wait, if you have per-process virtual address spaces and non-user pages are properly marked as supervisor-only, user segments can be allowed to point anywhere in the address space and page translation will still catch the user's disallowed page accesses. AFAIU, we have that already working in x86 Windows and Linux. You can have user code and data segments span the entire 4G without a problem.

Of course paging solves the problem. We were talking about segmentation without paging, though.
OSwhatever wrote:
Brendan, you are very x86-oriented and often bring up the limitations of that segmentation. I'm talking about the general case: a CPU that does not exist but implements segmentation. There are a lot of things you can do differently with segmentation than in the x86 case. For example:
- A very large segment descriptor cache (like a TLB)
- Allowing user processes to create their own segment descriptors based on segments given by the kernel
- Atomic, cache-aware updates of a segment's base address
- Expanding the number of allowed segments compared to the x86 case

The 3 main disadvantages of paging are TLB misses, TLB invalidation and "multi-CPU TLB shootdown". These are the problems that the typical "anti-paging" fools are trying to avoid. As soon as you add "a very large segment descriptor cache" you get (the equivalent of) these 3 problems - "segment descriptor cache misses", "segment descriptor cache invalidation" and "multi-CPU segment descriptor cache shootdown".
OSwhatever wrote: There are going to be problems with fragmentation unless a segmented OS has compaction of the memory, then you will have large chunks of memory again even if the available memory is low. Compaction is the key here. [...]

Compaction is slow. Compaction is not the solution; compaction is the problem.
Brendan wrote: You can't let applications create their own GDT/LDT entries (otherwise they'll create a CPL=3 data segment to write wherever they like in kernel space) so you end up with a kernel API call for every single allocation regardless of how small it is. Normally (for paging) the application asks for larger chunks and divides it up with no security problems and much fewer kernel API calls.

Not true. Segmentation can subdivide larger segments into chunks just like paging can. This is of course necessary with x86 segmentation, as Intel didn't think clearly when they moved to 32-bit (they should have extended the segment registers to 32 bits as well).
Brendan wrote: Also with segmentation you have to plan out your segments.

Only with x86 segments, because Intel didn't extend segment registers to 32 bits. Another implementation doesn't need to have these limitations.