Physical segmentation advantages

Discussions on more advanced topics such as monolithic vs micro-kernels, transactional memory models, and paging vs segmentation should go here. Use this forum to expand and improve the wiki!
randoll
Posts: 7
Joined: Thu Feb 13, 2014 5:58 am

Physical segmentation advantages

Post by randoll »

Hello, I know that there are many disadvantages (comparing addresses is harder, memory wraps around) and I was trying to find some advantages, but I haven't found any, other than being able to access more memory than the width of the registers allows.

Do overlapping segments give me any advantage?
Does using segmentation give me any other advantages?
Combuster
Member
Posts: 9301
Joined: Wed Oct 18, 2006 3:45 am
Libera.chat IRC: [com]buster
Location: On the balcony, where I can actually keep 1½m distance
Contact:

Re: Physical segmentation advantages

Post by Combuster »

Segmentation allows for implementations of no-execute protection, thread-local storage, and small address spaces, if done correctly.
"Certainly avoid yourself. He is a newbie and might not realize it. You'll hate his code deeply a few years down the road." - Sortie
[ My OS ] [ VDisk/SFS ]
randoll
Posts: 7
Joined: Thu Feb 13, 2014 5:58 am

Re: Physical segmentation advantages

Post by randoll »

Thank you.
What about overlapping physical segmentation without any access control or other flags, as on the Intel 8086?
qw
Member
Posts: 792
Joined: Mon Jan 26, 2009 2:48 am

Re: Physical segmentation advantages

Post by qw »

Real mode segmentation provides a kind of position-independent code, with a granularity of 16 bytes. You may give your code a fixed base address (offset) and load it anywhere in physical memory (segment:offset).

Also, when you have three code segments A-B-C, then both A and C may access B with near jumps only, saving space and time. I believe this was Intel's original intent.
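qw's point can be sketched numerically (a toy Python model, not code from the thread): a real-mode physical address is just segment × 16 + offset, and on the 8086 the 20-bit sum wraps at 1 MiB — the wrap-around randoll mentioned.

```python
# Toy model of real-mode address translation:
# physical = segment * 16 + offset, so the same code can be loaded at any
# paragraph (16-byte) boundary just by changing the segment value.
def phys_8086(segment, offset):
    # The 8086 has a 20-bit address bus, so the sum wraps at 1 MiB.
    return ((segment << 4) + offset) & 0xFFFFF

# The same fixed offset reaches the code wherever it is loaded:
assert phys_8086(0x1000, 0x0100) == 0x10100
assert phys_8086(0x2000, 0x0100) == 0x20100
# The famous wrap-around: FFFF:0010 aliases physical address 0.
assert phys_8086(0xFFFF, 0x0010) == 0x00000
```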
JohnBurger
Posts: 7
Joined: Tue Mar 18, 2014 12:20 am
Location: Canberra, Australia

Re: Physical segmentation advantages

Post by JohnBurger »

Randoll,

One thing that I love about Segmentation is that it allows for position-independent coding.
By that I mean that the code stored on disk doesn't have to be "fixed up" at load time to take into account where in memory it is being loaded. Since everything is relative to the base of the Code Segment (or Data Segment for data fix-ups), and the CPU adds that base to every memory reference automatically (in hardware, so there's no overhead), you can load the same code at address 0x01000000 today and 0x02000000 tomorrow. The Segment's base changes, but the (relative) references for function pointers and data loads stay the same.

In a multi-address space environment, that is ameliorated somewhat by paging, since the same executable can be loaded multiple times in different address spaces, and always at the same virtual address (0x00400000 for MS-Windows, for example). But look at DLLs, which are designed to be loaded by different executables, and therefore probably at different addresses. That means that the loader has to fix up parts of the code (the DLL header has pointers to where fix ups are required) - and once the code is fixed up, it is more difficult to simply discard the pages and reload them later from the original file image: they'd have to be fixed up again.

In other words, the loader fix up routine is doing in software what the CPU already has ready and waiting to go in hardware...
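The load-time fix-up work JohnBurger describes can be sketched as a toy relocation pass (the image format, offsets, and preferred base here are hypothetical, not the real PE relocation table):

```python
# Toy relocation pass (hypothetical image format, not real PE/ELF):
# the image stores absolute pointers assuming a preferred base, plus a
# list of offsets where such pointers live. Loading at a different base
# means patching every listed location -- in software, on every load.
PREFERRED_BASE = 0x0040_0000

def relocate(image, fixup_offsets, actual_base):
    delta = actual_base - PREFERRED_BASE
    patched = list(image)
    for off in fixup_offsets:
        patched[off] += delta   # each absolute pointer must be adjusted
    return patched

# An "image" holding two absolute pointers at word offsets 0 and 2.
image = [0x0040_1000, 0, 0x0040_2000, 0]
loaded = relocate(image, [0, 2], 0x1000_0000)
assert loaded[0] == 0x1000_1000
assert loaded[2] == 0x1000_2000
```

And once patched, those pages no longer match the file image, which is JohnBurger's point about discarding and reloading them.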
alexfru
Member
Posts: 1111
Joined: Tue Mar 04, 2014 5:27 am

Re: Physical segmentation advantages

Post by alexfru »

Memory defragmentation can be done, and done more simply than with a flat address space. The stack segment can easily be extended as long as there's free memory. Some buffer overruns can be detected automatically in hardware.
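The hardware overrun detection alexfru mentions comes from segment limit checks; a toy Python model (my sketch, not code from the post):

```python
# Toy model of a hardware segment limit check: any access beyond the
# segment's limit raises a fault instead of silently corrupting whatever
# happens to sit next to the buffer in memory.
class SegmentFault(Exception):
    pass

def checked_access(offset, limit):
    # The CPU compares every offset against the segment limit "for free".
    if offset > limit:
        raise SegmentFault(f"offset {offset:#x} exceeds limit {limit:#x}")
    return offset

assert checked_access(0xFF, 0xFF) == 0xFF     # last valid byte: allowed
try:
    checked_access(0x100, 0xFF)               # one-byte overrun
    overrun_caught = False
except SegmentFault:
    overrun_caught = True
assert overrun_caught
```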
qw
Member
Posts: 792
Joined: Mon Jan 26, 2009 2:48 am

Re: Physical segmentation advantages

Post by qw »

Protected mode segmentation had NX built in.

EDIT: Sorry, Combuster already said that.
Last edited by qw on Wed Apr 30, 2014 2:31 am, edited 1 time in total.
Nable
Member
Posts: 453
Joined: Tue Nov 08, 2011 11:35 am

Re: Physical segmentation advantages

Post by Nable »

JohnBurger wrote:In other words, the loader fix up routine is doing in software what the CPU already has ready and waiting to go in hardware...
You won't believe it. Seriously, the CPU isn't "waiting" for anything, and good architectures already have this feature. I mean RIP-relative (or PC-relative, i.e. relative to the program counter) addressing. With such an addressing mode one can easily generate position-independent code, i.e. code that can work at any virtual address without fix-ups.
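Nable's point can be sketched as a toy model (mine, not from the post): a PC-relative branch encodes only a displacement, so the same bytes resolve correctly at any load address, while an absolute branch would need a fix-up.

```python
# Toy model of PC-relative addressing: the instruction stores only a
# displacement from its own address, so no relocation is ever needed.
def pc_relative_target(load_base, branch_pc_offset, displacement):
    # target = address of the branch instruction + its encoded displacement
    return (load_base + branch_pc_offset) + displacement

# The same encoded displacement (+0x40) works wherever the code is loaded:
assert pc_relative_target(0x1000, 0x10, 0x40) == 0x1050
assert pc_relative_target(0x9000, 0x10, 0x40) == 0x9050
```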
JohnBurger
Posts: 7
Joined: Tue Mar 18, 2014 12:20 am
Location: Canberra, Australia

Re: Physical segmentation advantages

Post by JohnBurger »

Nable wrote:I mean RIP-relative (or PC-relative, i.e. relative to program counter) addressing. With such addressing mode one can easily generate position-independent code, i.e. code that can work at any virtual address without fix-ups.
Agreed. The x86 architecture uses PC-relative addressing for all its near CALLs and JMPs (both conditional and unconditional), so there are fewer fixups than would otherwise be required. It is also possible, with convoluted coding, to write position independent data accesses. But that's my point: the x86 architecture is position independent with Segmentation - the decision to effectively disable this with the flat address model subverts this, and requires convoluted software to replace what the hardware does natively.

For example, with Object Oriented high level languages, polymorphism is a foundational concept. To implement this, you need pointers to functions.

With the flat model, the easy implementation, raw pointers, requires fixups for each pointer. A more convoluted implementation can be written with "thunks", mini-routines to vector to the correct function via indirect pointers. And this is regardless of architecture, and the presence or absence of PC-relative addressing modes.

With the Segmented model, pointers to functions are static: the pointer is relative to the base of the Segment, so no fixups are required. And if LDTs are used, the compiler can pre-assign all the segments too - the LDT is private to each process, so the compiler can have free rein with assigning all the Code, Data, Heap and Stack segments it wants.
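JohnBurger's segment-relative function pointers can be sketched like this (a toy model with made-up offsets, not code from the post): the vtable entries are fixed at link time, and only the segment base differs between loads.

```python
# Toy model: in a segmented model a function pointer is an offset relative
# to the code segment base, so the stored pointer value never changes no
# matter where the segment is placed in memory -- no fix-ups required.
def resolve(segment_base, fn_offset):
    return segment_base + fn_offset

VTABLE = {"draw": 0x0200, "move": 0x0340}   # offsets fixed at link time

# Load the process at two different bases; the stored pointers are identical:
assert resolve(0x0100_0000, VTABLE["draw"]) == 0x0100_0200
assert resolve(0x0200_0000, VTABLE["draw"]) == 0x0200_0200
```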
Owen
Member
Posts: 1700
Joined: Fri Jun 13, 2008 3:21 pm
Location: Cambridge, United Kingdom
Contact:

Re: Physical segmentation advantages

Post by Owen »

JohnBurger wrote:
Nable wrote:I mean RIP-relative (or PC-relative, i.e. relative to program counter) addressing. With such addressing mode one can easily generate position-independent code, i.e. code that can work at any virtual address without fix-ups.
Agreed. The x86 architecture uses PC-relative addressing for all its near CALLs and JMPs (both conditional and unconditional), so there are fewer fixups than would otherwise be required. It is also possible, with convoluted coding, to write position independent data accesses. But that's my point: the x86 architecture is position independent with Segmentation - the decision to effectively disable this with the flat address model subverts this, and requires convoluted software to replace what the hardware does natively.

For example, with Object Oriented high level languages, polymorphism is a foundational concept. To implement this, you need pointers to functions.

With the flat model, the easy implementation, raw pointers, requires fixups for each pointer. A more convoluted implementation can be written with "thunks", mini-routines to vector to the correct function via indirect pointers. And this is regardless of architecture, and the presence or absence of PC-relative addressing modes.

With the Segmented model, pointers to functions are static: the pointer is relative to the base of the Segment, so no fixups are required. And if LDTs are used, the compiler can pre-assign all the segments too - the LDT is private to each process, so the compiler can have free rein with assigning all the Code, Data, Heap and Stack segments it wants.
It doesn't matter if the hardware does it natively if the hardware is slower.

And the hardware is slower. Segment switches are slow. Many of the address calculations run slower.

The lack of IP relative addressing in x86 is a problem, but segmentation isn't the fix.
linguofreak
Member
Posts: 510
Joined: Wed Mar 09, 2011 3:55 am

Re: Physical segmentation advantages

Post by linguofreak »

Owen wrote:
It doesn't matter if the hardware does it natively if the hardware is slower.

And the hardware is slower. Segment switches are slow. Many of the address calculations run slower.

The lack of IP relative addressing in x86 is a problem, but segmentation isn't the fix.
I think that segmentation could be implemented to be faster, if you implemented segments more as userspace-visible ASIDs than as offsets into a single paged address space.
qw
Member
Posts: 792
Joined: Mon Jan 26, 2009 2:48 am

Re: Physical segmentation advantages

Post by qw »

Maybe it is me, but I fail to see why segmentation should be slow and paging fast. I think it is simply a result of Intel's design decisions. Because everybody was using a flat memory model already, Intel decided to concentrate on paging. It could have been different. But for some reason, many seem to think segmentation is difficult.
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Physical segmentation advantages

Post by Rusky »

Paging offered separate address spaces, which was important with only 32-bit address spaces. Segmentation was entirely unused by the time 64-bit arrived, so they didn't bother. Intel's implementation of segmentation was also a little sad, given that it originated as a way to get around the limitations of 16-bit addressing. It also had a few inherent inefficiencies: programs had to go through the kernel to modify descriptor tables, there were very few segment registers, and reloading them was slow.

The Mill has a single address space with a protection model similar to segmentation. Their security talk explains it in depth (among other things), but basically they get PIC a different way and use what they call a protection lookaside buffer to assign permissions to arbitrary ranges of memory (rather than page-sized chunks). Protection domains (they call them turfs, sort of the equivalent of an address space in their SaS system) can grant access to those regions to each other directly, through their equivalent of a far call.

That way there's no limit on the number of segments, processes can manipulate their own without going through the kernel, and there are no segment registers to reload. It actually looks pretty nice. Do note that it also has paging underneath (without the protection bits) just like x86, as it's still useful for things like virtual memory, fragmentation, etc.
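The range-based permission check Rusky describes can be sketched as a toy model (my sketch based on the public talks, not the Mill's actual design — the "turf" names and ranges are made up):

```python
# Toy "protection lookaside buffer": permissions attach to arbitrary byte
# ranges per protection domain ("turf"), not to page-sized chunks.
plb = [
    # (turf, start, end_exclusive, perms)
    ("turf_a", 0x1000, 0x1234, "rw"),   # arbitrary, non-page-aligned range
    ("turf_a", 0x2000, 0x2008, "r"),
]

def check(turf, addr, want):
    # An access is allowed if any entry grants this turf the wanted
    # permission on a range containing the address.
    return any(t == turf and s <= addr < e and want in p
               for (t, s, e, p) in plb)

assert check("turf_a", 0x1233, "w")        # inside the rw range
assert not check("turf_a", 0x1234, "w")    # one byte past the end: denied
assert not check("turf_b", 0x1100, "r")    # other turf has no grant here
```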
onlyonemac
Member
Posts: 1146
Joined: Sat Mar 01, 2014 2:59 pm

Re: Physical segmentation advantages

Post by onlyonemac »

I use segmentation because it allows me to load an executable into any part of memory and have it execute without any awareness of where it is located. Thus there are no issues with relocation or anything like that.

Furthermore, my memory model works on the principle of segments: every application has a code/data segment, an extra segment and a stack segment. Using segmentation keeps my stack out of the way. The kernel also has additional segments for its own use.

Overall I find using segments logical with the way I'm doing it, because segment:offset basically translates into datablock:value_index.
When you start writing an OS you do the minimum possible to get the x86 processor in a usable state, then you try to get as far away from it as possible.

Syntax checkup:
Wrong: OS's, IRQ's, zero'ing
Right: OSes, IRQs, zeroing
OSwhatever
Member
Posts: 595
Joined: Mon Jul 05, 2010 4:15 pm

Re: Physical segmentation advantages

Post by OSwhatever »

There has not been any serious attempt at segmentation in hardware since the 80s. Segmentation on the x86 is slow because there was no emphasis on that method once paging became the de facto preferred method. The segmentation model can be optimized just as page table translation is with a TLB. Segment descriptors are cached in an x86 processor, but very sparsely, as there has been no demand for it. In practice you could cache a whole lot of segment descriptors.

Segmentation is about to see a renaissance.

New 64-bit processors greatly expand the virtual address space, and page tables are up to 5 levels deep on a 64-bit machine, so a TLB miss is enormously expensive. Hardware page table walkers are also complex and consume a lot of power.
New object-oriented languages put protection at the object level rather than the process level, which makes programs easier to debug and shortens time to market. Segmentation is usually a better fit for these languages.
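The depth claim is easy to check with a back-of-the-envelope sketch (assuming 4 KiB pages and 512-entry tables, i.e. 9 address bits per level, as on x86-64):

```python
# How many page-table levels does it take to translate a virtual address
# of a given width? Each 4 KiB table holds 512 eight-byte entries, so each
# level consumes 9 bits; on a TLB miss, each level costs one memory read.
PAGE_SHIFT = 12          # 4 KiB pages cover the low 12 bits
BITS_PER_LEVEL = 9       # 512 entries per table

def levels_needed(va_bits):
    # Ceiling division of the remaining address bits over bits per level.
    return -(-(va_bits - PAGE_SHIFT) // BITS_PER_LEVEL)

print(levels_needed(48))  # 4 levels: classic x86-64 48-bit addressing
print(levels_needed(57))  # 5 levels: x86-64 with 57-bit (LA57) addressing
```

So with 57-bit addressing, every TLB miss can cost five dependent memory reads before the actual access even starts.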