16bitPM wrote:True, but it's not very well documented if you ask me.
Look here:
https://github.com/open-watcom/open-wat ... umentation
16bitPM wrote:I'm guessing that also has to do with OS design. For me, the logical distinction comes natural (and it did so too when Multics was designed).
No, it's more or less HLL design. It's much more natural to treat memory as a logical sequence of bytes than as a sequence of segments in HLLs (C, C++, Ada, and so on). It's possible to use segments, but it doesn't feel very natural.
16bitPM wrote:First of all, there is also overhead in maintaining a paged system even with PCID. Secondly, reloading the LDT is just loading the appropriate LDT selector, putting the data in LDTR and do some checks. This is independent of the number of selectors. The timings for the LLDT instruction on old CPU's (I only have those readily available atm, but they should be within the same order of magnitude for newer CPU's) are :
The full cost isn't just the LLDT instruction itself; that's only part of it. You also have to invalidate the cached segment descriptors and then reload them on subsequent memory accesses. Of course, as I said in my earlier post, how much that matters depends on how granular your segments are. With PCID, you change tags, and that's it (more or less).
16bitPM wrote:That's not so bad. Of course, the LDT has to be filled, but that's probably mostly at the start of the process.
But segmentation is pointless if all LDT work is done at creation time. You need some way to create segments dynamically if you want more granularity.
16bitPM wrote:You are confusing both. For segments <1MiB, the granularity is 1 byte, and for big (1MiB-4GiB) segments, it's 4096 bytes : the same as paging.
As far as I know, the minimum page size is still 4096 bytes.
... and then you waste 128MiB on the tables needed to do this. Of course, that could be (more or less) Intel's fault.
16bitPM wrote:That was only a problem because of the 64KiB limit, not of the segmentation concept per se.
Segmentation doesn't feel natural in HLLs, no matter what.
16bitPM wrote:Also, if you ask me, paging has become a mess. Just looking at all the features that have been added in the past 20 years...
That's not paging's fault; that's Intel's fault. Intel has made a mess out of x86.
But still, I'd rather work with Intel's messy paging than segmentation. Segmentation's mess permeates everywhere.
16bitPM wrote:They COULD have added a descriptor cache, but they didn't.
They actually did. If they hadn't, every memory access would require extra memory accesses just to fetch the descriptor.
16bitPM wrote:They also could have added a TSS cache, but... they didn't.
The TSS? That disaster? Let's not touch that this time.
16bitPM wrote:A lot of things that come automatically with the concept of segmentation,
16bitPM wrote:position independent code
PC-relative addressing is the solution with paging, which CPU makers should have thought of long before they did, IMO (x86 only got it with x86-64's RIP-relative mode).
16bitPM wrote:limit checking
That is the one thing segmentation has over paging; I won't argue with that. But segmentation is still much harder to work with, as I've said multiple times above.
16bitPM wrote:protection against stack overflow
Guard pages are the solution. They're pretty simple, and they work great.
16bitPM wrote:oh yeah, and it's possible to address more than 4GiB on a 32-bit system within 1 process space
What about something like Microsoft's AWE? But then again, 64-bit paging makes this a million times simpler.
In summary, segmentation is a pain to work with and doesn't feel natural in HLLs. The industry selected what was better for its needs, and that was paging.