what's the real SLOW parts in popular OS/OS theories?

rdos
Member
Posts: 3310
Joined: Wed Oct 01, 2008 1:55 pm

Re: what's the real SLOW parts in popular OS/OS theories?

Post by rdos »

gerryg400 wrote: Efficiency has more to do with good design than good implementation.
A good design is the first priority, but portability concerns then limit the possible designs, which leads to inefficient ones.

Some examples:
* Portable designs cannot use segmentation
* Portable designs are bloated with endian-issues
* Portable designs are always "every possible platform must support this feature".

In the long run, portable designs hamper the creativity of chip designers, because new features will be left unused when a portable OS cannot make use of them. Therefore we will have to live with all the problems of unprotected address spaces that use paging only for physical address mapping, which is far from an optimal solution.
rdos
Member
Posts: 3310
Joined: Wed Oct 01, 2008 1:55 pm

Re: what's the real SLOW parts in popular OS/OS theories?

Post by rdos »

Tosi wrote:Code that needs to be called periodically (i.e., from a timer interrupt) needs to be mentioned. If your task scheduler is slow, that could be a huge bottleneck.
I cannot imagine how the task scheduler could be slow on a modern processor. Back in the days of the 386, I calculated that the time slice I aimed to use (1 ms) would consume about 10% of the available CPU power. Since then the impact has shrunk so much that it probably needs to be expressed in parts per million or something. Yet I notice that Windows XP still runs sluggishly, as if its time slice could be measured in seconds or minutes.

Also, in our much older system based on the V25 processor (basically an 8086 running at 20 MHz), a 1 ms time slice also works and does not impact performance much. The latter is primarily due to context switches being faster in real mode (fewer and smaller registers, no selectors).
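
To put rough numbers on that claim, here is a minimal back-of-the-envelope sketch (the per-switch cycle counts and the clock rates are illustrative assumptions, not measurements) that computes what fraction of the CPU a 1 ms scheduler tick costs:

Code: Select all

#include <stdio.h>

/* Illustrative figures only: the per-switch cost and clock rates are
 * assumptions for the sake of the arithmetic, not measured values. */
struct machine {
    const char *name;
    double clock_hz;          /* CPU clock                          */
    double switch_cycles;     /* cycles per context switch (guess)  */
};

int main(void)
{
    struct machine machines[] = {
        { "386 @ 25 MHz (guess)",       25e6, 2500.0 },
        { "modern x86 @ 3 GHz (guess)",  3e9, 2000.0 },
    };
    double timeslice_s = 1e-3;   /* 1 ms time slice */

    for (int i = 0; i < 2; i++) {
        double switch_s = machines[i].switch_cycles / machines[i].clock_hz;
        double overhead = switch_s / timeslice_s;    /* fraction of slice lost */
        printf("%-26s: %.4f%% of CPU spent switching\n",
               machines[i].name, overhead * 100.0);
    }
    return 0;
}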
JamesM
Member
Posts: 2935
Joined: Tue Jul 10, 2007 5:27 am
Location: York, United Kingdom
Contact:

Re: what's the real SLOW parts in popular OS/OS theories?

Post by JamesM »

* Portable designs cannot use segmentation
* Portable designs are bloated with endian-issues
* Portable designs are always "every possible platform must support this feature".
I don't know why I even bother replying to these, but here goes:

Segmentation is slow. It's deprecated, and Intel no longer designs its chips to be fast with it. It optimises for paging with a flat memory model. The small exceptions are TLS and the use of swapgs to get into the kernel.

Little-endian is the de facto standard; unless you're writing an RTOS to run in routers, you'll be using little endian.
* Portable designs are always "every possible platform must support this feature".
No. This just shows that you obviously don't know how to produce good portable designs. Good designs utilise all possible features on all platforms, but provide them through a unified interface and degrade gracefully in the presence of a platform that doesn't support a feature.
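
To make "unified interface that degrades gracefully" concrete, here is a minimal sketch (the mprot_backend type and the function names are made up for illustration, not taken from any real kernel): callers always go through the same call, and a platform without the hardware feature falls back to a weaker implementation instead of breaking.

Code: Select all

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical unified interface for "mark this region non-executable". */
typedef struct {
    const char *name;
    bool (*set_nx)(void *addr, unsigned long len);  /* false = not enforced */
} mprot_backend;

/* Backend for hardware with an NX/XD bit in the page tables (stubbed). */
static bool nx_pagetable(void *addr, unsigned long len)
{
    (void)addr; (void)len;
    /* real code would flip the NX bit in the PTEs here */
    return true;
}

/* Fallback for hardware without NX: we cannot enforce it, so we degrade
 * gracefully by doing nothing and reporting the weaker guarantee. */
static bool nx_noop(void *addr, unsigned long len)
{
    (void)addr; (void)len;
    return false;
}

static mprot_backend pick_backend(bool cpu_has_nx)
{
    mprot_backend b;
    b.name   = cpu_has_nx ? "nx-bit" : "no-op fallback";
    b.set_nx = cpu_has_nx ? nx_pagetable : nx_noop;
    return b;
}

int main(void)
{
    /* In a real kernel this would come from CPUID or a platform header. */
    mprot_backend b = pick_backend(/* cpu_has_nx = */ false);
    bool enforced = b.set_nx((void *)0x1000, 4096);
    printf("backend %s: NX %s\n", b.name, enforced ? "enforced" : "not enforced");
    return 0;
}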
rdos
Member
Posts: 3310
Joined: Wed Oct 01, 2008 1:55 pm

Re: what's the real SLOW parts in popular OS/OS theories?

Post by rdos »

JamesM wrote:I don't know why I even bother replying to these, but here goes:

Segmentation is slow. It's deprecated, and Intel doesn't design its chips to be fast with it any more. It optimises for paging, with a flat memory model. The small exception is TLS and use of swapgs to get into the kernel.
That was not the issue. The issue was that a technology that doesn't fit into mainstream ideas of a "portable OS" becomes deprecated even by the chip vendors themselves. Intel certainly believed in this feature when they defined IA-32, but since then "portable OSes" have made the feature obsolete.
JamesM wrote:
* Portable designs are always "every possible platform must support this feature".
No. This just shows that you obviously don't know how to produce good portable designs. Good designs utilise all possible features on all platforms, but provide them through a unified interface and degrade gracefully in the presence of a platform that doesn't support a feature.
Not so. Most of today's portable designs:

* Do not work on processors without paging
* Do not work on processors with less than 32-bits addressing
* Do not work in segmented memory models
* Do not use advanced memory-protection models (beyond paging) even if they are present
* Do not work on processors with odd memory configurations (like signal processors)

Add to this the "bloat factor". I've looked at GCC and libc, and these are horribly bloated designs because they want to support horribly bloated library standards that almost nobody uses in their full definition. In essence, the success of "portable designs" is the success of bloated software.
xfelix
Member
Posts: 25
Joined: Fri Feb 18, 2011 5:40 pm

Re: what's the real SLOW parts in popular OS/OS theories?

Post by xfelix »

rdos wrote: But you are wrong. No sane OS should need to resort to swapping processes to disk with 4GB of memory. Any reason for this must be grounded in bloated software, which was one point on my list. :mrgreen:
I was thinking of the scenario where a user is listening to music, playing Half-Life 2, and instant messaging, with anti-virus software running in the background. Basically, so many programs are running that they cannot all fit in main memory, so the last resort is to start pushing pages out to the hard disk. The whole purpose of virtual memory is to make it seem as if we have infinite RAM: when a page isn't resident in RAM, a page fault occurs and we swap the page from the hard disk into RAM. Many operating systems support this, but Linux has it disabled by default, I believe.
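
A toy user-space simulation of that page-fault path (the page-table layout, the frame count and the round-robin eviction are all simplifying assumptions, not how any real kernel does it):

Code: Select all

#include <stdio.h>

#define NUM_PAGES   8    /* virtual pages               */
#define NUM_FRAMES  3    /* physical frames (the "RAM") */

/* Toy page table: which frame a page lives in, or -1 if it is on "disk". */
static int page_to_frame[NUM_PAGES];
static int frame_to_page[NUM_FRAMES];
static int next_victim = 0;              /* trivial round-robin replacement */

static void handle_page_fault(int page)
{
    int frame = next_victim;
    next_victim = (next_victim + 1) % NUM_FRAMES;

    int evicted = frame_to_page[frame];
    if (evicted >= 0) {
        page_to_frame[evicted] = -1;     /* pretend we wrote it back to disk */
        printf("  evict page %d from frame %d\n", evicted, frame);
    }
    page_to_frame[page]  = frame;        /* pretend we read it in from disk */
    frame_to_page[frame] = page;
    printf("  page %d swapped into frame %d\n", page, frame);
}

static void access_page(int page)
{
    printf("access page %d\n", page);
    if (page_to_frame[page] < 0)         /* not resident: page fault */
        handle_page_fault(page);
}

int main(void)
{
    for (int i = 0; i < NUM_PAGES; i++)  page_to_frame[i]  = -1;
    for (int i = 0; i < NUM_FRAMES; i++) frame_to_page[i] = -1;

    int workload[] = { 0, 1, 2, 0, 3, 4, 0 };  /* more pages than frames */
    for (int i = 0; i < 7; i++)
        access_page(workload[i]);
    return 0;
}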
The More I See, The More I See There Is To See!
Solar
Member
Posts: 7615
Joined: Thu Nov 16, 2006 12:01 pm
Location: Germany
Contact:

Re: what's the real SLOW parts in popular OS/OS theories?

Post by Solar »

rdos wrote:Intel certainly believed in this new feature when they defined the IA32, but since then "portable OSes" have made the feature obsolete.
Not "portable OSes" did this. The IA32 success rode on DOS and Windows 3.11, neither of which even an idiot could call "portable" designs.

Portable languages did this. By concept they simply don't support segmented memory well enough to make it useful.
rdos wrote:Good designs utilise all possible features on all platforms, but provide them through a unified interface and degrade gracefully in the presence of a platform that doesn't support a feature.
That is your definition. And a poor one, at that: Such an OS would not run efficiently on any platform because it would have to "degrade" quite a bit on every single one of them.

A good portable design utilizes those features that allow it to run on all intended targets with as little degradation (and thus as little loss of efficiency) as possible. It also minimizes the development effort necessary to support all intended targets by encapsulating those parts that are target-specific.
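
A minimal sketch of that kind of encapsulation (the hal_timer_init name and the compile-time split are hypothetical, chosen only to show the shape): the portable code calls one function, and only the target-specific part knows how the tick is actually programmed.

Code: Select all

/* Hypothetical sketch: one portable call, target-specific parts selected at
 * compile time.  Kept as a single file so it compiles and runs as-is. */
#include <stdio.h>

/* ---- portable interface (what the rest of the kernel sees) -------------- */
static void hal_timer_init(unsigned int hz);

/* ---- target-specific implementations, chosen at build time -------------- */
#if defined(__x86_64__) || defined(__i386__)
static void hal_timer_init(unsigned int hz)
{
    /* a real port would program the PIT/APIC here */
    printf("x86 timer programmed for %u Hz\n", hz);
}
#else
static void hal_timer_init(unsigned int hz)
{
    /* a real port would program this platform's timer here */
    printf("generic timer programmed for %u Hz\n", hz);
}
#endif

/* ---- portable code: identical on every target ---------------------------- */
int main(void)
{
    hal_timer_init(1000);   /* 1 ms scheduler tick, whatever the hardware */
    return 0;
}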

Without meaning to offend, you look at everything through your Assembler / RDOS glasses. One cannot help but notice that you still feel you have an axe to grind about Intel having stopped considering segmentation-based operating systems over a decade ago. This skews your POV significantly, IMHO.
rdos wrote:Most of today's portable designs:

* Do not work on processors without paging
* Do not work on processors with less than 32-bits addressing
* Do not work in segmented memory models
* Do not use advanced memory-protection models (beyond paging) even if they are present
* Do not work on processors with odd memory configurations (like signal processors)
Looking at that list, I'd like to ask which "portable designs" you looked at when making that evaluation. Aside from the Unixes (*BSD, Linux et al.) I'd be hard-pressed to name any "portable designs"...

The "big irons" today all have paging, >=32 bit addressing, couldn't care less about segmentation and have pretty standard memory configurations. Thus, operating systems adapt to that environment.

Those systems not meeting these criteria usually fall in the "embedded" / "dedicated" arena, where there's a wholly different ballgame going on, where "portability" isn't that much of a subject as the work to be done is pretty specific anyway.
rdos wrote:Add to this the "bloat factor". I've looked at GCC and LIBC, and these are horribly bloated designs because they want to support horribly bloated library standards that almost nobbody uses in their full definition.
That would be C99, C++0x, Objective-C and POSIX, then. Your definition of "almost nobody" is a bit funny.
rdos wrote:In essence, the success of "portable designs" is the success of bloated software.
The success of portable designs is that human work is much more expensive than another CPU core added to the task. Highly efficient software is still being written, but it has become a niche in the embedded / dedicated arena (see above).

I'd be surprised if there is any company today that employs Java programmers because it's a great language or because it's so fast and efficient or because it's "write once, run anywhere". They do employ them because they are readily available on the workforce market, and because they can RAD circles around a C or ASM coder. Those companies couldn't care less about segmentation or advanced memory protection schemes if they can undercut the competition by 5% because their code monkeys are churning out code faster than the competition's code monkeys.

We - meaning highly skilled specialists that can do more than pasting together design patterns at piece rate - are a dying species. We can try and fight, keeping a niche of efficient beauty alive. We can try and fight, trying to at least go down in flaming glory. We can give up and join the masses of the code monkeys.

What we cannot justly do is blame either the CPU manufacturers or the "mainstream" OS developers for going where the profit is.
Every good solution is obvious once you've found it.
rdos
Member
Posts: 3310
Joined: Wed Oct 01, 2008 1:55 pm

Re: what's the real SLOW parts in popular OS/OS theories?

Post by rdos »

Solar wrote:Portable languages did this. By concept they simply don't support segmented memory well enough to make it useful.
I agree.
Solar wrote:Without meaning to offend, but you look on everything through your Assembler / RDOS glasses. One cannot help but notice that you still feel you have an axe to grind about Intel no longer considering segmentation-based operating systems, over a decade ago. This skews your POV significantly, IMHO.
Possibly.
Solar wrote:We - meaning highly skilled specialists that can do more than pasting together design patterns at piece rate - are a dying species. We can try and fight, keeping a niche of efficient beauty alive. We can try and fight, trying to at least go down in flaming glory. We can give up and join the masses of the code monkeys.
I've already decided not to join the code monkeys. I'd rather do something completely different than become a code monkey. :mrgreen:
JamesM
Member
Posts: 2935
Joined: Tue Jul 10, 2007 5:27 am
Location: York, United Kingdom
Contact:

Re: what's the real SLOW parts in popular OS/OS theories?

Post by JamesM »

That is your definition. And a poor one, at that: Such an OS would not run efficiently on any platform because it would have to "degrade" quite a bit on every single one of them.
Actually, that's quoting me, not rdos, and I'd stand by my definition (even if it is somewhat idealistic; but hey, if you're not idealistic in your "should haves" and "could haves", you never get anything good).
FlashBurn
Member
Posts: 313
Joined: Fri Oct 20, 2006 10:14 am

Re: what's the real SLOW parts in popular OS/OS theories?

Post by FlashBurn »

I don't want to start a flame war, but why do almost all people hate segmentation? I mean, with segmentation one can do many things, and we wouldn't have some of the problems we now have (e.g. the stack overwriting code).

If there were a compiler that supported segmentation on IA-32, I would use it. I also see it that way: portable designs are a brake on innovation.
JamesM
Member
Posts: 2935
Joined: Tue Jul 10, 2007 5:27 am
Location: York, United Kingdom
Contact:

Re: what's the real SLOW parts in popular OS/OS theories?

Post by JamesM »

If there were a compiler that supported segmentation on IA-32, I would use it.
As a professional compiler developer, please don't make me do this.
Combuster
Member
Posts: 9301
Joined: Wed Oct 18, 2006 3:45 am
Libera.chat IRC: [com]buster
Location: On the balcony, where I can actually keep 1½m distance
Contact:

Re: what's the real SLOW parts in popular OS/OS theories?

Post by Combuster »

You don't need a segmentation-aware compiler to use segmentation. GCC, for instance, just expects DS, ES and SS to point to the same region of memory. That region need not be 4 GB and it need not start at linear address 0. Code may even overlap in offsets with the data section. You can implement no-execute data on a 386. You can isolate several processes within a single address space. None of that requires modifications to existing compilers.
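
For illustration, giving a segment a non-zero base and a limit smaller than 4 GB is just a matter of how the 8-byte GDT descriptor is packed. A minimal sketch (the make_descriptor helper is made up, and loading the result into the GDT and reloading the segment registers is a separate, OS-specific step):

Code: Select all

#include <stdint.h>
#include <stdio.h>

/* Pack an x86 segment descriptor: base, limit (in 4 KiB pages when G=1),
 * access byte (type/DPL/present) and flags (granularity, 32-bit default).
 * Hypothetical helper; installing it in the GDT is left to the OS. */
static uint64_t make_descriptor(uint32_t base, uint32_t limit,
                                uint8_t access, uint8_t flags)
{
    uint64_t d = 0;
    d |= (uint64_t)(limit & 0xFFFF);                 /* limit 15:0   */
    d |= (uint64_t)(base  & 0xFFFF)        << 16;    /* base 15:0    */
    d |= (uint64_t)((base >> 16) & 0xFF)   << 32;    /* base 23:16   */
    d |= (uint64_t)access                  << 40;    /* access byte  */
    d |= (uint64_t)((limit >> 16) & 0x0F)  << 48;    /* limit 19:16  */
    d |= (uint64_t)(flags & 0x0F)          << 52;    /* G, D/B, ...  */
    d |= (uint64_t)((base >> 24) & 0xFF)   << 56;    /* base 31:24   */
    return d;
}

int main(void)
{
    /* Data segment: base 16 MiB, limit 256 pages (1 MiB), ring 0, read/write,
     * page granularity, 32-bit.  Because instruction fetches go only through
     * CS, anything the CS descriptor does not cover is effectively
     * non-executable, even on a 386. */
    uint64_t data_seg = make_descriptor(0x01000000, 0x000000FF, 0x92, 0xC);
    printf("descriptor: %016llx\n", (unsigned long long)data_seg);
    return 0;
}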

The moment the compiler cannot make those assumptions, it becomes a problem, as every memory access might suddenly involve a segment register reload, which is a dreadfully slow approach. It does not, however, make compiler implementations that much more difficult - only the instruction scheduler needs to become aware that it takes two registers to store an address. If the compiler can do 64-bit adds on 32-bit registers (the ADD/ADC and SUB/SBB pairs), it will have little more trouble handling the segment and offset pairs of addresses.

The point JamesM did not make is that you wouldn't want to use such a configuration.

What I think is necessary for segmentation to become a viable implementation choice is two pointer types, with far pointers used only by exception, so that the rest of the code generation can still assume that DS and SS are constant.
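
A hedged sketch of those two pointer types in Open Watcom-style C (the __far keyword is an Open Watcom extension; this assumes a target where a second data selector actually exists, and it will not build with compilers that lack far-pointer support):

Code: Select all

/* Sketch of the "two pointer types" idea.  Illustrative only: a real target
 * must supply the selector behind shared_buffer; GCC has no equivalent on IA-32. */
#include <stdio.h>

static int counters[4];               /* near data: reached through DS, no reloads */

#ifdef __WATCOMC__
/* A far pointer carries its own selector, so dereferencing it costs a segment
 * register load - which is exactly why it is used only by exception. */
char __far *shared_buffer = 0;        /* e.g. a buffer handed out by the kernel */

char read_shared_byte(unsigned long off)
{
    return shared_buffer[off];        /* far access: selector reload happens here */
}
#endif

int main(void)
{
    counters[0]++;                    /* ordinary near access, DS assumed constant */
    printf("%d\n", counters[0]);
    return 0;
}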
"Certainly avoid yourself. He is a newbie and might not realize it. You'll hate his code deeply a few years down the road." - Sortie
[ My OS ] [ VDisk/SFS ]
Casm
Member
Posts: 221
Joined: Sun Oct 17, 2010 2:21 pm
Location: United Kingdom

Re: what's the real SLOW parts in popular OS/OS theories?

Post by Casm »

FlashBurn wrote:If there were a compiler that supported segmentation on IA-32, I would use it.
Try Watcom.
rdos
Member
Posts: 3310
Joined: Wed Oct 01, 2008 1:55 pm

Re: what's the real SLOW parts in popular OS/OS theories?

Post by rdos »

Combuster wrote:What I think is necessary for segmentation to become a viable implementation choice is two pointer types, with far pointers used only by exception, so that the rest of the code generation can still assume that DS and SS are constant.
Yes. I think I've found this memory model in the Open Watcom project. It is the compact 32-bit memory model with the option that pegs DGROUP to DS. In this model, accesses to static and local data never use segment loads, while all pointer dereferences use segment register loads. The best thing is that code written for a flat memory model usually works unmodified in this memory model. An alternative is to use the small 32-bit memory model, but then all far data needs to be explicitly declared as far.
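
A small sketch of what that looks like in practice; the wcc386 flags in the comment are from memory of the Open Watcom documentation, so treat them as an assumption to verify rather than a recipe:

Code: Select all

/* Built, if I recall the Open Watcom flags correctly, with something like:
 *     wcc386 -mc -zdp demo.c      (compact 32-bit model, DS pegged to DGROUP)
 * The source below is plain flat-model C; under the compact model the compiler
 * quietly makes `p` a far (selector:offset) pointer, while `samples` and
 * `total` stay in DGROUP and are reached through DS without reloads. */
#include <stdio.h>

static int samples[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };
static int total;

static int sum(const int *p, int n)     /* `p` is silently far under -mc */
{
    int s = 0;
    while (n-- > 0)
        s += *p++;
    return s;
}

int main(void)
{
    total = sum(samples, 8);
    printf("%d\n", total);
    return 0;
}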
gerryg400
Member
Posts: 1801
Joined: Thu Mar 25, 2010 11:26 pm
Location: Melbourne, Australia

Re: what's the real SLOW parts in popular OS/OS theories?

Post by gerryg400 »

What's the benefit of a segmented memory model in an application, in particular an application written in C?
If a trainstation is where trains stop, what is a workstation ?
rdos
Member
Posts: 3310
Joined: Wed Oct 01, 2008 1:55 pm

Re: what's the real SLOW parts in popular OS/OS theories?

Post by rdos »

gerryg400 wrote:What's the benefit of a segmented memory model in an application, in particular an application written in C?
It primarily has to do with stability. If written properly, the limits of all objects will be enforced. The stack limit will be enforced. Deleted objects cannot be referenced. Unallocated memory cannot be referenced. This run-time checking has a cost, but if common pointer errors are to be found in C/C++, it is a good idea to validate the application with these run-time checks. If speed is a factor (or a protected, segmented memory model is not viable in the target environment), the checks can be removed in the production release. This is how I found most of the bugs in our terminal software back when it ran only in real mode on a V25 processor. These checks cannot be made equally well with paging alone. Paging will usually catch unallocated pointers, but it won't catch overwrites, use of deleted objects, stack overflows, invalid returns with a corrupt stack, and the like.
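
As a rough user-space analogy for what segment limits provide in hardware, here is a sketch of a checked "fat pointer" that carries its own bounds and liveness flag (purely illustrative; real segmentation performs the equivalent checks for free in the address translation itself):

Code: Select all

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* A "fat pointer": base, limit and a liveness flag, playing the role a
 * segment descriptor plays in hardware.  Purely illustrative. */
typedef struct {
    unsigned char *base;
    size_t         limit;    /* number of valid bytes */
    int            alive;    /* cleared on free, like marking a descriptor not-present */
} checked_ptr;

static checked_ptr checked_alloc(size_t n)
{
    checked_ptr p = { calloc(1, n), n, 1 };
    assert(p.base != NULL);
    return p;
}

static void checked_free(checked_ptr *p)
{
    free(p->base);
    p->alive = 0;            /* further use trips below, like a revoked selector */
}

static unsigned char checked_read(checked_ptr p, size_t off)
{
    assert(p.alive);         /* use-after-free caught */
    assert(off < p.limit);   /* out-of-bounds caught, like a #GP on a limit violation */
    return p.base[off];
}

int main(void)
{
    checked_ptr buf = checked_alloc(16);
    printf("%d\n", checked_read(buf, 3));    /* fine */
    checked_free(&buf);
    /* checked_read(buf, 3);  would now assert, like referencing a deleted object */
    return 0;
}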

In an OS kernel, segmentation can be used to enforce strong isolation between modules, with protection comparable to running drivers in separate processes but with much less overhead.