FlashBurn wrote: If there were a compiler which supported segmentation on IA32, I would use it.
Casm wrote: Try Watcom.
I couldn't help but notice the number of really fundamental bugs fixed in the latest OpenWatcom. I wouldn't trust that compiler with my life - it's undermaintained and not thoroughly tested. But yeah, it exists.
what's the real SLOW parts in popular OS/OS theories?
- Combuster
Re: what's the real SLOW parts in popular OS/OS theories?
Combuster wrote: I couldn't help but notice the number of really fundamental bugs fixed in the latest OpenWatcom. I wouldn't trust that compiler with my life - it's undermaintained and not thoroughly tested. But yeah, it exists.
Yes, but it is not a valid argument. If people want a good segmented compiler, they need to fix these issues. It is an open-source project, so these things will not get fixed by themselves.
The best thing is that OpenWatcom once supported OS/2's segmented memory models, which means this code should be fairly bug-free - at least compared to using any other compiler, or writing your own.
But I agree that there are some issues, even in the flat memory model. I have some problems with memory corruption, but I'm not sure if it is related to the compiler, the heap manager, or RDOS not saving registers it should save.
There is also an issue with the long double type not being properly implemented.
Re: what's the real SLOW parts in popular OS/OS theories?
rdos wrote: If people want a good segmented compiler, they need to fix these issues. It is an open-source project, so these things will not get fixed by themselves.
Hmm... I think I found a new project to work on.
Programming is not about using a language to solve a problem, it's about using logic to find a solution !
Re: what's the real SLOW parts in popular OS/OS theories?
rdos wrote: It primarily has to do with stability. If written properly, the limits of all objects will be enforced. The stack limit will be enforced. Deleted objects cannot be referenced. Unallocated memory cannot be referenced. This run-time checking has a cost, but if common pointer errors are to be found in C/C++, it is a good idea to validate the application with these run-time checks. If speed is a factor (or a protected, segmented memory model is not viable in the target environment), the checks can be removed in the production release. This is how I found most of the bugs in our terminal software back when it was only running in real mode on a V25 processor. These checks cannot be made equally well with only paging. Paging will usually catch unallocated pointers, but it won't catch overwrites, use of deleted objects, stack overflows, invalid returns with a corrupt stack, and similar.
I'm curious as to how this would work exactly. I understand how you would enforce the stack and code with segments, but not issues with object size or un/deallocated memory. There obviously aren't enough segments or segment registers to do this, especially not efficiently, by using a segment for each object.
rdos wrote: In an OS kernel, segmentation can be used to enforce strong isolation between modules, at a similar level of efficiency as running drivers in separate processes, but with much less overhead.
Do segments just offer tighter control over usable regions, or is there something I'm missing that lets them really solve these problems completely? I'm also not sure why you say paging can't catch several of these issues. Can't paging catch stack overflows and invalid returns just as well as segmentation?
However, even if segmentation really does what you claim better than paging, I'm not sure either is the best idea. Research operating systems without any runtime checking enforce safety through language features and have significantly less overhead for the same stability. Obviously this isn't always possible, as it forces everything to be written in the same language, or at least for the same VM, and it also introduces startup overhead, but if you're going to go a non-mainstream route, why not the more efficient (and much more portable) route?
Re: what's the real SLOW parts in popular OS/OS theories?
IPC is a good example of where segmentation would really be faster.
Assume no paging, only segmentation (though it would also work for paging + segmentation). Instead of copying memory and/or pages, you would only pass a segment (which would be exactly as large as the message, so the receiver can only see the message).
The real problem is that segmentation is not as fast or as good as it could be. We would either need more segment registers or faster segment loading, but this will never happen.
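To make the idea concrete, here is a rough sketch (mine, not FlashBurn's actual scheme; the field layout is the usual 386 descriptor format) of filling in a byte-granular data descriptor that covers exactly one message. With byte granularity (G=0) the 20-bit limit caps the message at 1 MiB; anything larger would need page granularity.
Code:
#include <stdint.h>

/* 8-byte 386 segment descriptor, split into its packed fields. */
struct seg_desc {
    uint16_t limit_lo;       /* limit bits 15:0  */
    uint16_t base_lo;        /* base  bits 15:0  */
    uint8_t  base_mid;       /* base  bits 23:16 */
    uint8_t  access;         /* present, DPL, data, writable */
    uint8_t  limit_hi_flags; /* limit bits 19:16, G=0, D/B=1 */
    uint8_t  base_hi;        /* base  bits 31:24 */
};

/* Build a byte-granular (G=0), expand-up, writable data descriptor that
 * covers exactly [base, base+size-1], so the receiver can address the
 * message and nothing else.  Returns -1 if size is 0 or above 1 MiB. */
static int make_message_desc(struct seg_desc *d, uint32_t base,
                             uint32_t size, unsigned dpl)
{
    if (size == 0 || size > (1u << 20))
        return -1;
    uint32_t limit = size - 1;               /* limit field is inclusive */
    d->limit_lo       = limit & 0xFFFF;
    d->base_lo        = base & 0xFFFF;
    d->base_mid       = (base >> 16) & 0xFF;
    d->access         = 0x92 | ((dpl & 3) << 5);       /* P=1, data, R/W */
    d->limit_hi_flags = ((limit >> 16) & 0x0F) | 0x40; /* G=0, 32-bit */
    d->base_hi        = (base >> 24) & 0xFF;
    return 0;
}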
Re: what's the real SLOW parts in popular OS/OS theories?
FlashBurn wrote: IPC is a good example of where segmentation would really be faster. Assume no paging, only segmentation (though it would also work for paging + segmentation). Instead of copying memory and/or pages, you would only pass a segment (which would be exactly as large as the message, so the receiver can only see the message). The real problem is that segmentation is not as fast or as good as it could be. We would either need more segment registers or faster segment loading, but this will never happen.
With the exception of the fact that with paging you must deal with page-sized and page-aligned chunks, I fail to see how this is any different in the segmented world from the standard paged world.
Re: what's the real SLOW parts in popular OS/OS theories?
Rusky wrote: I'm curious as to how this would work exactly. I understand how you would enforce the stack and code with segments, but not issues with object size or un/deallocated memory. There obviously aren't enough segments or segment registers to do this, especially not efficiently, by using a segment for each object.
It was simple. I added my own malloc/new and free/delete, and let them allocate a selector in the LDT with the exact limit. There are 8192 selectors available in the LDT, so this was no issue at the time. It was however broken later, as we added blacklists for cards that used a malloc per object, and which could grow to several tens of thousands of entries. This could have been solved by letting that code allocate a larger object and subdivide it internally.
Rusky wrote: Do segments just offer tighter control over usable regions, or is there something I'm missing that lets them really solve these problems completely? I'm also not sure why you say paging can't catch several of these issues. Can't paging catch stack overflows and invalid returns just as well as segmentation?
It can solve them to some extent, but not as efficiently. I have a new method now that allocates all memory in page chunks, skips a page between allocations, and never reuses the page frames until all linear address space is exhausted. This could be combined with writing a signature over the whole page and checking that it is intact above the allocated limit when the page is freed. This does not catch the heap problems we have, though, because when this algorithm is used, the code seems to work.
As for invalid returns: with paging, the return is performed, and then EIP points to trash (like being 0). With segmentation, the return faults on the ret instruction. That makes a big difference.
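As a loose illustration only (this is not rdos's actual code; POSIX mmap/mprotect stand in here for whatever RDOS uses internally), the page-chunk scheme could look roughly like this: a guard page behind every allocation, a signature filled in above the requested size, and the linear range never handed out again.
Code:
#include <stddef.h>
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>

#define PAGE 4096u
#define FILL 0xA5   /* signature byte written above the requested size */

/* Each allocation gets whole pages plus one inaccessible guard page, so
 * an overrun past the pages faults immediately. */
void *dbg_alloc(size_t size)
{
    size_t pages = (size + PAGE - 1) / PAGE;
    uint8_t *p = mmap(NULL, (pages + 1) * PAGE, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return NULL;
    mprotect(p + pages * PAGE, PAGE, PROT_NONE);   /* guard page */
    memset(p + size, FILL, pages * PAGE - size);   /* signature above limit */
    return p;
}

/* On free: verify the signature above the allocated limit is intact
 * (catching small overwrites that paging alone would miss), then revoke
 * access and never reuse the linear range, so stale pointers fault. */
int dbg_free(void *ptr, size_t size)
{
    size_t pages = (size + PAGE - 1) / PAGE;
    uint8_t *p = ptr;
    int intact = 1;
    for (size_t i = size; i < pages * PAGE; i++)
        if (p[i] != FILL) { intact = 0; break; }
    mprotect(p, pages * PAGE, PROT_NONE);
    return intact;
}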
Re: what's the real SLOW parts in popular OS/OS theories?
JamesM wrote: With the exception of the fact that with paging you must deal with page-sized and page-aligned chunks, I fail to see how this is any different in the segmented world from the standard paged world.
Since a segment can have a limit with byte granularity, you give the receiver access only to the memory of the message. If you use paging and map the pages into the address space of the receiver, it can see all the memory in those pages (assuming the message is not page aligned and there is more data than just the message in those pages), but with segmentation you can limit the visible memory to the size of the message.
Re: what's the real SLOW parts in popular OS/OS theories?
FlashBurn wrote: Since a segment can have a limit with byte granularity, you give the receiver access only to the memory of the message. If you use paging and map the pages into the address space of the receiver, it can see all the memory in those pages, but with segmentation you can limit the visible memory to the size of the message.
Not only that, but when memory is allocated from a pool (which is typical for heap implementations), all of the allocated memory is visible all the time with paging. Paging provides no access control whatsoever for a normal heap implementation. With segmentation, only the memory areas loaded into segment registers are visible at any given time.
Re: what's the real SLOW parts in popular OS/OS theories?
rdos wrote: Not only that, but when memory is allocated from a pool (which is typical for heap implementations), all of the allocated memory is visible all the time with paging. Paging provides no access control whatsoever for a normal heap implementation. With segmentation, only the memory areas loaded into segment registers are visible at any given time.
Variations on a theme - the only difference is the granularity level, am I correct?
- gravaera
Re: what's the real SLOW parts in popular OS/OS theories?
* Portable designs cannot use segmentation
JamesM wrote: Segmentation is slow. It's deprecated, and Intel doesn't design its chips to be fast with it any more. It optimises for paging, with a flat memory model. The small exception is TLS and use of swapgs to get into the kernel.
Segmentation exists in x86-32 and is implemented quite nicely: there's an internal cache in the CPU that is reloaded on segment reload, so looking up segment descriptors and calculating offsets isn't slow at all. There's not much more optimization they can do; on-die caching is as close to "the fastest possible lookup" as you can get.
* Portable designs are bloated with endian-issues
JamesM wrote: Little-endian is the de facto standard; unless you're writing an RTOS to run in routers, you'll be using little endian.
I'm not very sure about that: at least one thing makes it probably a good idea to use a big-endian architecture for hardcore networking: the fact that the IP family of protocols is big-endian encoded. At least from a pragmatic point of view, a big-endian processor has the chance to shave a lot of cycles off each network transmission.
* Portable designs are always "every possible platform must support this feature".
JamesM wrote: No. This just shows that you obviously don't know how to produce good portable designs. Good designs utilise all possible features on all platforms, but provide them through a unified interface and degrade gracefully in the presence of a platform that doesn't support a feature.
And this is mostly why I replied to this topic: QFT at JamesM's post. If you can't design portably, you probably can't design well overall: when you refine a design to be portable, you usually see a lot of flaws in the original, non-portable design.
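For what the endianness point amounts to in code (illustrative only; the field offset is hard-coded for the TCP destination port): a little-endian host must byte-swap every multi-byte header field it reads off the wire, while a big-endian host can use the wire value directly.
Code:
#include <stdint.h>
#include <string.h>

/* IP/TCP carry multi-byte fields in network (big-endian) byte order. */
static inline uint16_t swap16(uint16_t v)
{
    return (uint16_t)((v << 8) | (v >> 8));
}

/* Little-endian host: must swap every field it inspects. */
uint16_t dst_port_le(const uint8_t *tcp_hdr)
{
    uint16_t wire;
    memcpy(&wire, tcp_hdr + 2, 2);   /* destination port at offset 2 */
    return swap16(wire);
}

/* Big-endian host: the wire value is already in host order. */
uint16_t dst_port_be(const uint8_t *tcp_hdr)
{
    uint16_t wire;
    memcpy(&wire, tcp_hdr + 2, 2);
    return wire;
}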
--All the best,
gravaera
17:56 < sortie> Paging is called paging because you need to draw it on pages in your notebook to succeed at it.
- gravaera
Re: what's the real SLOW parts in popular OS/OS theories?
Hi:
Combuster wrote: I couldn't help but notice the number of really fundamental bugs fixed in the latest OpenWatcom. I wouldn't trust that compiler with my life - it's undermaintained and not thoroughly tested. But yeah, it exists.
rdos wrote: Yes, but it is not a valid argument. If people want a good segmented compiler, they need to fix these issues. It is an open-source project, so these things will not get fixed by themselves.
Once upon a time there were large computers with custom builds, and they were usually designed to do one set of fixed, known functions. Their custom hardware builds made it both feasible and probably easy to write a kernel specifically for that hardware. The fixed functional requirements of such a computer also made it easy, and at that time feasible, to write non-portable software: the software was custom-made for that specific purpose, and it would probably only ever need to run on that hardware, so writing an application in an HLL with a bit of "asm volatile()" (or even writing applications in assembly wholesale) made sense, and in such a case using segmentation, for example by placing asm statements in the software, would certainly make sense.
Then came general-purpose computers, and the need for a single computer to run several different pieces of software with different purposes. Portable languages became even more important, and their portability became even more profoundly needed. At this point, kernels became even more burdened with the responsibility of abstracting hardware specifics from applications. And so applications used highly hardware-specific features less and less, and relied more on kernel APIs.
Next came the age of the portable kernel, which would run on multiple platforms. At this point, hardware became increasingly similar in design, and even began to be designed with the available operating systems' functionality in mind, rather than custom kernels being written with the features of their target hardware in mind. Now hardware manufacturers keep software in mind, and kernel writers also optimize for hardware: a very nice circle of consideration and care for each other. So most modern platforms are at least similar enough to lend themselves to portable design.
And so out the window goes segmentation, etc., etc. - all the things that hardware manufacturers realized nobody uses, and that kernel developers realized no other architectures implement. My point?
Portable design is not bad simply because it does not take advantage of features that are being phased out by hardware and kernel developers' joint, unstated agreement that they are not useful to support. Good portable design will always take full advantage of all available useful features on a hardware platform and not sacrifice anything to the goal of achieving portability: in fact the aim of portable design is to ensure that every useful feature is fully exploited on all platforms that do support it, and either to provide basic support on platforms that don't, or to find a way to handle the special case of no support without over-compromising and without excessive code paths for the special case.
Portable design is good, yo
--Nice topic
gravaera
17:56 < sortie> Paging is called paging because you need to draw it on pages in your notebook to succeed at it.
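As a sketch of the "unified interface, graceful degradation" idea only (the names and layering here are invented, not taken from any real kernel): the portable core always calls the same operations, and each port backs them with whatever the hardware offers, down to doing nothing at all.
Code:
/* One interface for per-object protection; every port fills it in. */
struct obj_protect_ops {
    int  (*guard)(void *obj, unsigned long size);  /* fence off one object */
    void (*unguard)(void *obj);
};

/* A segmented x86 port might back this with byte-granular LDT selectors,
 * and a paging-only port with guard pages; declarations only, as examples. */
extern const struct obj_protect_ops x86_segment_ops;
extern const struct obj_protect_ops paging_guard_ops;

/* A platform with neither feature degrades gracefully to no-ops. */
static int noop_guard(void *obj, unsigned long size)
{
    (void)obj; (void)size;
    return 0;
}
static void noop_unguard(void *obj) { (void)obj; }

const struct obj_protect_ops noop_ops = { noop_guard, noop_unguard };

/* Portable kernel code never branches on the platform; it just calls
 * through whichever ops table the port installed at boot. */
const struct obj_protect_ops *obj_protect = &noop_ops;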
Re: what's the real SLOW parts in popular OS/OS theories?
Now that the segment war has died down, why don't we continue discussing OS performance?
I'm wondering which features should, and can easily, be implemented by a portable OS aimed at desktop or better platforms - that is, disregarding embedded platforms or things like cellphones?
Enjoy my life!------A fish with a tattooed retina
- NickJohnson
Re: what's the real SLOW parts in popular OS/OS theories?
berkus wrote: Desktop is a dying breed.
Do you prefer to program on a smartphone? As long as there is software, there will be those who write software, and hence there will be computers with full-sized keyboards, a.k.a. desktops.
Re: what's the real SLOW parts in popular OS/OS theories?
NickJohnson wrote: Do you prefer to program on a smartphone? As long as there is software, there will be those who write software, and hence there will be computers with full-sized keyboards, a.k.a. desktops.
I do my private dev on a laptop, but if I could do some of it on an iPad, I would. I spend 2 hours a day on a train, and reading specs or doing code review there would be okay. If the iPad ever gets a terminal, Eclipse, and the ability to run my cross-compiler, I'll buy one.
If a trainstation is where trains stop, what is a workstation ?