what's the real SLOW parts in popular OS/OS theories?
Posted: Sun Mar 20, 2011 9:00 am
by lemonyii
Hi, everyone!
I feel that many OSes (in theory or as samples) come with "advanced" architectures that are claimed to give high performance or easy maintenance, but in fact they are usually very slow. Some popular systems are relatively faster, but that's still not enough.
So I'm wondering: what are the SLOWEST parts of an OS?
For example (if I'm right about this), an OS that depends on message passing will be severely limited by the message mechanism.
And frequent switching between processes is also a problem.
thanks!
Re: what's the real SLOW parts in popular OS/OS theories?
Posted: Sun Mar 20, 2011 9:17 am
by NickJohnson
Although the OS (i.e. the kernel) itself can take up a significant fraction of processor time, I'm guessing that the software running under the OS has a much greater effect on system performance than the kernel, especially in a microkernel system, where drivers also run under the kernel as ordinary processes. Beyond user processes, I think most time is spent in drivers/performing I/O, which really can't be optimized much without hardware improvements. Of course, you really have to do some profiling to know for sure.
And why do you say that current OSes are slow? What's your point of reference?
Re: what's the real SLOW parts in popular OS/OS theories?
Posted: Sun Mar 20, 2011 9:20 am
by Karlosoft
My frame rendering algorithm is pretty slow, so I have to use a low resolution like 800x600 to get a fast refresh. I'm implementing some SSE2 functions so that copying the screen from the buffer to video RAM takes less time.
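Roughly the idea is something like this (just a sketch with made-up names, assuming both buffers are 16-byte aligned and the size is a multiple of 64 bytes); the non-temporal stores keep the copy from polluting the cache:
Code:
#include <emmintrin.h>  /* SSE2 intrinsics */
#include <stdint.h>
#include <stddef.h>

/* Copy a back buffer into video RAM with non-temporal SSE2 stores.
 * Names and layout are illustrative only. */
static void blit_sse2(void *vram, const void *backbuf, size_t bytes)
{
    __m128i       *dst = (__m128i *)vram;
    const __m128i *src = (const __m128i *)backbuf;

    for (size_t i = 0; i < bytes / 16; i += 4) {
        __m128i a = _mm_load_si128(&src[i + 0]);
        __m128i b = _mm_load_si128(&src[i + 1]);
        __m128i c = _mm_load_si128(&src[i + 2]);
        __m128i d = _mm_load_si128(&src[i + 3]);
        /* _mm_stream_si128 bypasses the cache, which helps when the
         * destination (video RAM) is never read back by the CPU. */
        _mm_stream_si128(&dst[i + 0], a);
        _mm_stream_si128(&dst[i + 1], b);
        _mm_stream_si128(&dst[i + 2], c);
        _mm_stream_si128(&dst[i + 3], d);
    }
    _mm_sfence();  /* make the streaming stores globally visible */
}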
Re: what's the real SLOW parts in popular OS/OS theories?
Posted: Sun Mar 20, 2011 1:36 pm
by bewing
On Linux, the page swapping algorithm either is buggy or has a lousy theory behind it, because its performance is awful. The X Window subsystem also has a lot of overhead compared to, for example, the Windows GDI interface (which has been optimized to some degree).
Just in general, the Linux design concept of a library built on top of another library, which depends on another library, ad nauseam, is somewhat inefficient -- and probably slows down the system by a factor of 2 to 4.
NickJohnson wrote:
And why do you say that current OSes are slow? What's your point of reference?
In the 1980s, CPUs ran at 8 MIPS. Now they run above 8 GIPS. Does your current OS run 1000 times faster than those OSes did then? No it does not. Why not? Coding inefficiency. QED.
Re: what's the real SLOW parts in popular OS/OS theories?
Posted: Sun Mar 20, 2011 1:59 pm
by NickJohnson
bewing wrote:On Linux, the page swapping algorithm either is buggy or has a lousy theory behind it, because its performance is awful.
I have swapping disabled (i.e. not compiled in) in the Linux kernel on my laptop, and it doesn't make a huge performance difference, although it is noticeable.
bewing wrote:NickJohnson wrote:
And why do you say that current OSes are slow? What's your point of reference?
In the 1980s, CPUs ran at 8 MIPS. Now they run above 8 GIPS. Does your current OS run 1000 times faster than those OSes did then? No it does not. Why not? Coding inefficiency. QED.
Not QED: all you have proven is that OSes today are less efficient than thirty years ago, not why they are less efficient. If a modern OS had the same feature set as that 1980s OS, it would run at least a few hundred times faster today. Nobody needs a system 1000 times faster (or even 100 times faster) than in the 1980s, so OS designers added features - which added overhead - instead.
Re: what's the real SLOW parts in popular OS/OS theories?
Posted: Sun Mar 20, 2011 2:55 pm
by rdos
Here are a couple of things that make modern OSes inefficient:
* They are coded in C
* They are bloated with features (which makes the code too complex, and thus inefficient)
* Drivers have been placed in separate processes
* Message-passing is used for APIs
* There is a single entry-point into the kernel, which requires endless decoding before anything useful happens (see the sketch after this list)
* Some APIs (for instance the file API) have mixed many different interfaces into the same interface, again making for inefficiency.
* Applications are often run by interpreters (for instance Java, PHP)
* Interfaces are created with XML instead of being binary coded as before.
* Portability between processors makes for inefficient code, as special features cannot be used.
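To make the single entry-point item concrete, here is a sketch (purely hypothetical names and call numbers, not from any real kernel) of a dispatcher that has to decode a function number before any real work happens:
Code:
#include <stdint.h>

/* Hypothetical system call numbers -- purely illustrative. */
enum { SYS_READ = 0, SYS_WRITE = 1, SYS_OPEN = 2, SYS_CLOSE = 3 };

long sys_read(int fd, void *buf, long len);
long sys_write(int fd, const void *buf, long len);
long sys_open(const char *path, int flags);
long sys_close(int fd);

/* Single entry point: every call pays for the decode step below
 * before any useful work starts. */
long syscall_dispatch(long nr, long a0, long a1, long a2)
{
    switch (nr) {
    case SYS_READ:  return sys_read((int)a0, (void *)a1, a2);
    case SYS_WRITE: return sys_write((int)a0, (const void *)a1, a2);
    case SYS_OPEN:  return sys_open((const char *)a0, (int)a1);
    case SYS_CLOSE: return sys_close((int)a0);
    default:        return -1;  /* unknown call */
    }
}
The alternative is a call gate or separate vector per service, so user code lands directly in, say, sys_read() with no decode step at all.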
Re: what's the real SLOW parts in popular OS/OS theories?
Posted: Sun Mar 20, 2011 3:51 pm
by Tosi
Code that needs to be called periodically (e.g., from a timer interrupt) needs mentioning. If your task scheduler is slow, that can be a huge bottleneck.
Optimizing I/O operations to switch tasks or wait for an interrupt that signals completion can decrease latency, since the kernel can do other things while the device gets its act together. Not being preemptible can lead to unresponsiveness, but preemption adds its own problems that also affect efficiency.
Memory allocation also demands a reasonably fast algorithm. I would recommend optimizing memory allocation before memory freeing, but both can be pretty slow. Paging brings a whole new set of problems to deal with. The only thing I can think of is to reduce page walks and TLB flushes.
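As a concrete example, here is a first-fit free-list allocator sketched out (hypothetical names; kfree() and the size header it would need are omitted to keep this short). The walk over the free list is O(n) in the number of free blocks, which is exactly where allocation gets slow on a fragmented heap:
Code:
#include <stddef.h>
#include <stdint.h>

/* Hypothetical first-fit heap allocator -- a sketch, not code from
 * any real kernel. */
struct free_block {
    size_t             size;   /* bytes available in this block */
    struct free_block *next;   /* next block on the free list   */
};

static struct free_block *free_list;

/* Hand the allocator its heap region once at boot. */
void kheap_init(void *base, size_t size)
{
    free_list = (struct free_block *)base;
    free_list->size = size;
    free_list->next = NULL;
}

/* First fit: walk the list and carve the request out of the first
 * block that is big enough. */
void *kmalloc(size_t bytes)
{
    if (bytes < sizeof(struct free_block))
        bytes = sizeof(struct free_block);
    /* round up to a multiple of the header size (a power of two) */
    bytes = (bytes + sizeof(struct free_block) - 1)
            & ~(sizeof(struct free_block) - 1);

    for (struct free_block **p = &free_list; *p; p = &(*p)->next) {
        struct free_block *b = *p;
        if (b->size < bytes)
            continue;
        if (b->size - bytes >= sizeof(struct free_block)) {
            /* split: the tail of this block stays on the free list */
            struct free_block *rest =
                (struct free_block *)((uint8_t *)b + bytes);
            rest->size = b->size - bytes;
            rest->next = b->next;
            *p = rest;
        } else {
            *p = b->next;       /* use the whole block */
        }
        return b;
    }
    return NULL;                /* out of heap */
}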
An area with nearly infinite room for optimization is graphics. There are many algorithms for drawing graphics primitives such as lines, circles, and filled rectangles that are faster than the common-sense versions. One obvious optimization is to only redraw the parts of the screen that have changed since the last frame or update. That way, if the user moves the mouse, you only need to update the rectangle within which the cursor moved. This can add more overhead as well, though.
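A sketch of that dirty-rectangle idea (made-up names, assuming a 32-bit-per-pixel linear framebuffer):
Code:
#include <stdint.h>
#include <string.h>

struct rect { int x0, y0, x1, y1; };      /* x1/y1 are exclusive */

static struct rect dirty;
static int dirty_valid = 0;

/* Grow the pending dirty region to include a newly changed area. */
void mark_dirty(int x0, int y0, int x1, int y1)
{
    if (!dirty_valid) {
        dirty = (struct rect){ x0, y0, x1, y1 };
        dirty_valid = 1;
        return;
    }
    if (x0 < dirty.x0) dirty.x0 = x0;
    if (y0 < dirty.y0) dirty.y0 = y0;
    if (x1 > dirty.x1) dirty.x1 = x1;
    if (y1 > dirty.y1) dirty.y1 = y1;
}

/* Copy only the dirty rows/columns from the back buffer to video RAM
 * instead of the whole screen. */
void flush_dirty(uint32_t *vram, const uint32_t *backbuf, int pitch_px)
{
    if (!dirty_valid)
        return;
    for (int y = dirty.y0; y < dirty.y1; y++)
        memcpy(&vram[y * pitch_px + dirty.x0],
               &backbuf[y * pitch_px + dirty.x0],
               (size_t)(dirty.x1 - dirty.x0) * sizeof(uint32_t));
    dirty_valid = 0;
}
When the cursor moves you mark the union of the old and new cursor rectangles dirty, so the flush copies a few kilobytes instead of the whole frame.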
The way I optimize is to get it working first. Then if it is too slow, I optimize it. If the optimization broke it, get it working again and repeat. I almost never write anything in assembly unless it's absolutely necessary or if I don't want the compiler messing with my code.
Re: what's the real SLOW parts in popular OS/OS theories?
Posted: Sun Mar 20, 2011 4:03 pm
by Jezze
Now that we're speaking of performance, there is something I've always thought about but never really understood.
Imagine we have a huge project, say the size of the Linux kernel. Programming something that big in a higher-level language like C should in theory be better, because the compiler should be able to optimize the whole program into something so optimized it would be incredibly fast, tiny, and also quite incomprehensible if you tried to understand it after the optimization has been applied. In the real world, though, this rarely seems to be the case. If you are a talented assembly programmer you can almost always do the same thing slightly faster, even though you don't have the ability to keep the entire project in your mind at the same time. I mean, a compiler should in theory be able to grab stuff from all parts of the program that might seem to have no relation to each other and combine them in ways we can't even begin to fathom, because it would seem illogical to us.
This just doesn't happen. Is it because C, as a higher-level language, really isn't good enough for the task? Are imperative languages perhaps not suited for whole-program optimization at all? Would a functional language better suit this purpose? At what size/complexity does a compiler actually become better than an assembly programmer?
I just don't know.
Re: what's the real SLOW parts in popular OS/OS theories?
Posted: Sun Mar 20, 2011 4:22 pm
by diodesign
rdos wrote:Here are a couple of things that make modern OSes inefficient:
* They are coded in C
* Drivers have been placed in separate processes
* Message-passing is used for APIs
Quick, someone get QNX on the phone. They're doing it all wrong.
Re: what's the real SLOW parts in popular OS/OS theories?
Posted: Sun Mar 20, 2011 4:30 pm
by gerryg400
diodesign wrote:rdos wrote:Here are a couple of things that make modern OSes inefficient:
* They are coded in C
* Drivers have been placed in separate processes
* Message-passing is used for APIs
Quick, someone get QNX on the phone. They're doing it all wrong.
I'm with you, diodesign. Add to that list the following, which also apply to QNX Neutrino and still don't appear to make it inefficient:
* They are bloated with features (which makes the code too complex, and thus inefficient)
* There is a single entry-point into the kernel, which requires endless decoding before anything useful happens
* Some APIs (for instance the file API) have mixed many different interfaces into the same interface, again making for inefficiency.
* Portability between processors makes for inefficient code, as special features cannot be used.
Efficiency has more to do with good design than good implementation.
Re: what's the real SLOW parts in popular OS/OS theories?
Posted: Sun Mar 20, 2011 4:34 pm
by JamesM
gerryg400 wrote:* Portability between processors makes for inefficient code, as special features cannot be used.
If you can't make the distinction between portable abstraction and nonportable concretion then it's your design that's at fault.
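In concrete terms the split usually looks something like this (hypothetical function name): callers only ever see the portable declaration, and each port supplies its own concretion, which is free to use the architecture's special instructions:
Code:
/* vm.h -- the portable abstraction; illustrative names only */
#include <stdint.h>

void tlb_flush_page(uintptr_t vaddr);   /* every port provides this */

/* arch/x86/tlb.c -- the nonportable concretion: free to use the
 * architecture's special instruction (invlpg) for speed. */
void tlb_flush_page(uintptr_t vaddr)
{
    __asm__ volatile ("invlpg (%0)" :: "r"(vaddr) : "memory");
}

/* arch/generic/tlb.c -- a port without a per-page flush instruction
 * might reload the whole page-table base instead; only this file
 * changes, never the callers. */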
Re: what's the real SLOW parts in popular OS/OS theories?
Posted: Sun Mar 20, 2011 5:37 pm
by xfelix
I don't think it's code that is slowing the OS down (although at times it very well can be), but hardware. Accessing the hard disk takes 4.17 ms on average for a 7200 RPM spindle (see http://en.wikipedia.org/wiki/Access_time). If you begin to thrash while swapping processes between main memory and the hard disk, you have to put up with this overhead, whereas the typical time quantum for a round-robin scheduling algorithm (for putting processes onto the CPU) can be about 4 ms. The reason the hard disk is so slow is that its speed depends on the physical motion of the platters just to read the data.
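For reference, that 4.17 ms figure is just the average rotational latency, i.e. the time for half a revolution at 7200 RPM:
Code:
/* one revolution at 7200 RPM = 60 s / 7200 = 8.33 ms;
 * on average the head waits half a revolution for the sector. */
double avg_rotational_latency_ms(double rpm)
{
    return (60.0 / rpm) * 1000.0 / 2.0;   /* rpm = 7200 -> ~4.17 ms */
}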
I think the only overhead you need to deal with when handing a process to the CPU (given that no swapping is necessary) is the context switch: saving and restoring the registers, stack frame, eip, etc., which is so fast it takes only a small part of the time quantum. There is a timer interrupt that is programmed for a given time quantum, and either the process uses up the whole quantum (CPU-bound programs) or finishes early and willingly gives up the CPU (typically I/O-bound programs).
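For concreteness, here is roughly how such a timer tick gets programmed on a PC using the legacy PIT (a sketch, not code from any particular OS; modern kernels usually use the local APIC timer or HPET instead, but the principle is the same):
Code:
#include <stdint.h>

/* Port I/O helper -- most kernels already have one of these. */
static inline void outb(uint16_t port, uint8_t val)
{
    __asm__ volatile ("outb %0, %1" :: "a"(val), "Nd"(port));
}

#define PIT_FREQ_HZ 1193182u    /* base clock of the legacy 8253/8254 PIT */

/* Program PIT channel 0 to fire the timer IRQ 'hz' times per second,
 * e.g. pit_set_quantum(250) gives roughly 4 ms ticks. */
void pit_set_quantum(uint32_t hz)
{
    uint16_t divisor = (uint16_t)(PIT_FREQ_HZ / hz);

    outb(0x43, 0x36);                       /* channel 0, lo/hi byte, mode 3 */
    outb(0x40, (uint8_t)(divisor & 0xFF));  /* low byte of divisor  */
    outb(0x40, (uint8_t)(divisor >> 8));    /* high byte of divisor */
}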
...and I think C is a beautiful language that gets the job done.
Re: what's the real SLOW parts in popular OS/OS theories?
Posted: Sun Mar 20, 2011 7:59 pm
by Casm
lemonyii wrote:Hi, everyone!
I feel that many OSes (in theory or as samples) come with "advanced" architectures that are claimed to give high performance or easy maintenance, but in fact they are usually very slow.
The reason commercial operating systems are relatively slow is the vendors' need to stuff ever more features into them, just to keep their marketing departments happy.
I have both Windows XP and Windows 7 on my computer. I can't say that I find it overwhelmingly obvious that Windows 7 is a big improvement on XP, but whereas XP could live quite happily in 256MB of RAM, Windows 7 needs 1GB just to get off the ground. Windows 8 stands a good chance of being the last one that will fit in 4GB.
Re: what's the real SLOW parts in popular OS/OS theories?
Posted: Sun Mar 20, 2011 9:16 pm
by lemonyii
Thank you for replying, I have learnt a lot this time! I don't know what else to say, because everything I was thinking of has already been covered here. Just thanks!
Re: what's the real SLOW parts in popular OS/OS theories?
Posted: Mon Mar 21, 2011 1:32 am
by rdos
xfelix wrote:I don't think it's code that is slowing the OS down (although at times it very well can be), but hardware. Accessing the hard disk takes 4.17 ms on average for a 7200 RPM spindle (see http://en.wikipedia.org/wiki/Access_time). If you begin to thrash while swapping processes between main memory and the hard disk, you have to put up with this overhead, whereas the typical time quantum for a round-robin scheduling algorithm (for putting processes onto the CPU) can be about 4 ms. The reason the hard disk is so slow is that its speed depends on the physical motion of the platters just to read the data.
But you are wrong. No sane OS should need to resort to swapping processes to disk with 4GB of memory. Any reason for this must be grounded in bloated software, which was one point on my list.
I think even 256MB, which is the practical minimum on many PC/104 boards, is enough memory to not even have to bother about physical memory usage. I did have to write a filesystem cache reclaim utility recently as all the files in the UI sequences used up 256MB, but that is as far as I will go. I will never write code that swaps applications. I'll rewrite the applications instead if I run out of physical memory because of them.