
Re: Theoretical: Why the original 386 design was bad

Posted: Thu Jan 27, 2011 3:54 am
by rdos
tom9876543 wrote:That eliminates the following:
GDT/LDT, Segments (in protected mode), Limits, CPL0-3, TSS, IO Permission Map, Call Gates, Instructions like ARPL, eflags I/O privilege level and NT, address / operand size overrides
No need to eliminate any of them. They are superior design features.
tom9876543 wrote: OS/2 - IBM would have been forced to write a pure 32 bit version of OS/2, and I believe it would have been able to run 16 bit OS/2 applications by using v8086 mode.
No way.
tom9876543 wrote: Yes, a lot of these things you never see, such as segment overrides.
My OS uses them a lot. Even flat C compilers use them for TLS storage.
tom9876543 wrote: But, a benefit of simplifying the 386 is it allows you to reallocate the opcodes to more useful purposes. For example segment overrides + address/op overrides are 7 single byte opcodes that have been wasted.

Also the protection checks are a complete waste of time, how many transistors are wasted checking the CS limit?
They are not. The protection features of the x86 are the key to writing stable, bug-free OSes in assembly, if used properly.
tom9876543 wrote: Another reason is there is no equivalent on other architectures such as ARM or PowerPC, so any portable operating system simply won't use them.
Portable OSes are ****. They will never be able to take advantage of a particular CPU. They will perform poorly on all CPUs instead of decently on one.
tom9876543 wrote: I guess your microkernel is NOT portable. Does ARM or PowerPC support IOPL???? If the IOPL / IO Bitmap was never implemented you would not be here today complaining about something that never existed.
The IOPL was a major feature for virtualizing the IO-ports used by old DOS programs, which tended to use hardware directly. It is not very useful today, but it was essential at the time of the 386.
tom9876543 wrote: A 386 has a 32 bit physical address space. BIOSes today have to use the Unreal Mode "feature" to access all the address space while in real mode. I would say there needs to be a replacement for Unreal Mode since it's not possible under this proposed design.
There is no need for an Unreal Mode with a 386+ processor. BIOS can simply switch to protected mode, create a flat environment and achieve the same thing, and then switch back to real mode.
tom9876543 wrote: Intel kept 286 compatibility even though any true 32 bit operating system does not really need the 286 protection model.
The resulting model with an ability to mix 8, 16 and 32 bit operands (as well as 16 and 32 bit segments) is superior and a major design feat.
tom9876543 wrote: How many transistors are wasted to implement call gates, task gates etc?????
Call gates are a superior way to switch from user to kernel space. First, they define trusted entry-points, and secondly, they go directly to the destination without any intervening code. Clearly superior to anything currently used (SYSENTER/SYSEXIT). When used with registers only, it is possible to define common entry-points for both 16 and 32 bit applications with almost no overhead.

Re: Theoretical: Why the original 386 design was bad

Posted: Thu Jan 27, 2011 3:57 am
by tom9876543
rdos wrote: That is an easy one. The 286 processor reinterpreted the use of segment registers from real mode. In real mode, addresses are calculated by shifting the segment four bits left and adding the offset. In 286 protected mode, the segment register (called a selector) loads a 24-bit base (and 16-bit limit) from the GDT/LDT, and adds this base to the offset to form the physical address. Therefore, 286 protected mode applications can address 16MB physical memory, and thus cannot execute in real mode, which can only address 1MB. The addressing scheme is completely incompatible. Some 16 bit protected mode applications also define "huge segments" by allocating several adjacent selectors, where all but the last have a 64k limit. Many 16-bit protected mode applications also depend on being able to allocate and manipulate selectors.
OK with a bit of imagination and v8086 tweaking, you can implement the requirements provided:

1) 16 MB address space. That is easy. Have an extra setting in v8086 mode, let's call it V8086_MEM_EXTEND_FLAG. This could be a single bit in CR0 (I would make v8086 mode a single bit in CR0 as well). When that bit is set, the left shift of a segment register changes from 4 bits (standard 8086) to 8 bits..... to give you a 16MB address space.
2) I have never heard of these huge segments. However, they could be emulated indirectly on a 386 in v8086 mode. How? Choose segment values that map to unpaged memory. Say you have 10 selectors 0x100 - 0x109. In extended v8086 mode (see 1), they map to physical addresses 0x10000 - 0x10900. Simply make sure those addresses cause a page fault. When that happens, the page fault handler can replace the DS or ES value with a "fixed up value".
3) 16 bit applications manipulate selectors? Are you saying a 286 app can go and change its own segment limit without getting permission from the OS? Are you 100% sure on that? Surely the operating system controls what memory an application can access? If you have some documentation to prove me wrong please let me know. And I would hope that allocating selectors is controlled by the operating system, not the app.

Re: Theoretical: Why the original 386 design was bad

Posted: Thu Jan 27, 2011 4:15 am
by rdos
tom9876543 wrote:OK with a bit of imagination and v8086 tweaking, you can implement the requirements provided:

1) 16 MB address space. That is easy. Have an extra setting in v8086 mode, let's call it V8086_MEM_EXTEND_FLAG. This could be a single bit in CR0 (I would make v8086 mode a single bit in CR0 as well). When that bit is set, the left shift of a segment register changes from 4 bits (standard 8086) to 8 bits..... to give you a 16MB address space.
That would not work. If you make it 12 bits there is some hope it could work, but then the original 16-bit application would use a 28-bit address space. You'd need to tweak the loader, but at least by shifting 12 bits there will be no overlap between segments in linear address space, which is more or less a requirement for being able to run a 16-bit protected mode application without base/limit fields for selectors.
tom9876543 wrote: 2) I have never heard of these huge segments. However, they could be emulated indirectly on a 386 in v8086 mode. How?
Shifting 12 bits as above would actually work.
tom9876543 wrote: 3) 16 bit applications manipulate selectors? Are you saying a 286 app can go and change its own segment limit without getting permission from the OS? Are you 100% sure on that? Surely the operating system controls what memory an application can access? If you have some documentation to prove me wrong please let me know. And I would hope that allocating selectors is controlled by the operating system, not the app.
Maybe this is mostly an issue with DOS extenders. They can use the DPMI interface to manipulate protected mode features like selectors. I'm not sure if DOS extenders existed already when the 386 was designed (there are 16-bit versions at least), but providing the VCPI and/or DPMI interface with an extension to real mode is impossible.

Re: Theoretical: Why the original 386 design was bad

Posted: Thu Jan 27, 2011 4:36 am
by tom9876543
I think we both agree that 16 bit 286 applications could run under v8086 mode on a 386..... it just needs a few extra tweaks.

In regards to a DOS Extender, I thought they were first introduced with the 386 to access up to 4GB of RAM.
There is no easy solution under the proposed "pure" 386, because 32 bit EAX would not even be available in 8086 mode (no address/operand size override).

Someone earlier suggested the 386 should have started in protected 32 bit mode. This would have forced BIOS manufacturers to rewrite their code as 32 bit. If that happened, BIOSes would probably have had 32 bit entry points and DOS Extenders would never have been required.
Even better, the 32-bit BIOS could look at the signature on the disk. If it is 55AA, drop back to 8086 mode and start the OS. If it is something different (say 66BB), the OS is 32 bit and the BIOS doesn't go back to real mode at all.

This is another reason why maintaining 286 compatibility was bad. Intel could have forced BIOS writers to move to a 32 bit world but instead backwards compatibility was more important.

Of course this is all theoretical.

Re: Theoretical: Why the original 386 design was bad

Posted: Thu Jan 27, 2011 4:39 am
by Solar
rdos wrote:
tom9876543 wrote: Also the protection checks are a complete waste of time, how many transistors are wasted checking the CS limit?
They are not. The protection features of the x86 are the key to writing stable, bug-free OSes in assembly.
Which is absolutely not the market segment Intel and AMD are aiming for.

I, too, have a strong dislike for the "industrializing" of software engineering. But it is a fact of life that time-to-market has become much more important to the business than efficiency, or even quality.

So, since Intel and AMD want to make a profit, they optimize performance per square millimeter of silicon.

Writing great code close to the metal is an artform, I grant you that. (I wouldn't be here if I didn't believe in that.) But art is not business.

Re: Theoretical: Why the original 386 design was bad

Posted: Thu Jan 27, 2011 5:09 am
by rdos
tom9876543 wrote:This is another reason why maintaining 286 compatibility was bad. Intel could have forced BIOS writers to move to a 32 bit world but instead backwards compatibility was more important.
Yes, or real mode code. Perhaps there would be no need for setting up V86 mode in order to switch video modes with VBE if BIOSes were forced to be 32 bit? Other than that, BIOS is more or less irrelevant for an OS.

Re: Theoretical: Why the original 386 design was bad

Posted: Thu Jan 27, 2011 5:10 am
by rdos
Solar wrote:Writing great code close to the metal is an artform, I grant you that. (I wouldn't be here if I didn't believe in that.) But art is not business.
Yes, but I write OSes for pleasure, not for business. Even if it is used for commercial purposes right now, which was not the original intent.

Re: Theoretical: Why the original 386 design was bad

Posted: Thu Jan 27, 2011 5:11 am
by Combuster
rdos wrote:Other than that, BIOS is more or less irrelevant for an OS.
Very, very wrong, as seen in ACPI.

Re: Theoretical: Why the original 386 design was bad

Posted: Thu Jan 27, 2011 7:22 am
by Solar
rdos wrote:
Solar wrote:Writing great code close to the metal is an artform, I grant you that. (I wouldn't be here if I didn't believe in that.) But art is not business.
Yes, but I write OSes for pleasure, not for business.
In that case, I would have chosen the 680x0 as platform had I been in your place. Now there was a beautiful architecture, and no mistake...

Re: Theoretical: Why the original 386 design was bad

Posted: Thu Jan 27, 2011 7:27 am
by rdos
Combuster wrote:
rdos wrote:Other than that, BIOS is more or less irrelevant for an OS.
Very, very wrong, as seen in ACPI.
ACPI is only a table, not code, and as such is not dependent on the operating mode of the CPU. It is the code that is more or less irrelevant, except for mode-switching the video card.

Re: Theoretical: Why the original 386 design was bad

Posted: Thu Jan 27, 2011 7:29 am
by rdos
Solar wrote:
rdos wrote:
Solar wrote:Writing great code close to the metal is an artform, I grant you that. (I wouldn't be here if I didn't believe in that.) But art is not business.
Yes, but I write OSes for pleasure, not for business.
In that case, I would have chosen the 680x0 as platform had I been in your place. Now there was a beautiful architecture, and no mistake...
If I had worked with the 680x0 platform at that time, I might have done that, but I've only worked with Z80, PDP/VAX, x86 and Texas signal processors, so I have no knowledge about 680x0.

Re: Theoretical: Why the original 386 design was bad

Posted: Thu Jan 27, 2011 10:20 am
by Combuster
rdos wrote:ACPI is only a table, not code
This, my friend, would be a good time to go make arguments based on actual facts rather than trying to win a debate by making things up and resorting to other fallacies.

Re: Theoretical: Why the original 386 design was bad

Posted: Thu Jan 27, 2011 12:35 pm
by OSwhatever
rdos wrote:
Combuster wrote:
rdos wrote:Other than that, BIOS is more or less irrelevant for an OS.
Very, very wrong, as seen in ACPI.
ACPI is only a table, not code, and as such is not dependent on the operating mode of the CPU. It is the code that is more or less irrelevant, except for mode-switching the video card.
Depends on how much you want to support. Many power-saving features are implemented in ACPI, and there is a language called AML that is required if you want to support some of them.

ACPI is quite difficult to get your head around, and it has all the complexity you would expect from a company like Intel. Luckily, Intel provides source code and a ready-made framework if you want to deal with ACPI. Implementing full ACPI support from scratch would be quite a large job.

Now, I'm not sure if ACPI is being replaced by EFI as I'm writing this. It was quite some time ago I touched ACPI, and I'm happy that I don't deal with it anymore.

Re: Theoretical: Why the original 386 design was bad

Posted: Thu Jan 27, 2011 12:57 pm
by quok
OSwhatever wrote:ACPI is quite difficult to get your head around, and it has all the complexity you would expect from a company like Intel. Luckily, Intel provides source code and a ready-made framework if you want to deal with ACPI. Implementing full ACPI support from scratch would be quite a large job.

Now, I'm not sure if ACPI is being replaced by EFI as I'm writing this. It was quite some time ago I touched ACPI, and I'm happy that I don't deal with it anymore.
ACPI is certainly not being replaced by EFI. EFI is only meant to replace BIOS, and ACPI still works hand-in-hand with EFI. In fact, ACPI 4.0a was released on April 5th, 2010, and ACPI 5.0 is currently under development.

As you said though, Intel provides a reference implementation called ACPICA. More information about it is available at http://www.acpica.org/ and it even supports ACPI 4.0a. (Which surprised me a bit, as for a long time it was stuck at ACPI 1.1 support only.)

Re: Theoretical: Why the original 386 design was bad

Posted: Fri Jan 28, 2011 7:00 am
by rdos
quok wrote:As you said though, Intel provides a reference implementation called ACPICA. More information about it is available at http://www.acpica.org/ and it even supports ACPI 4.0a. (Which surprised me a bit, as for a long time it was stuck at ACPI 1.1 support only.)
Interesting. It is claimed to be "OS independent", yet the source code is C, and it clearly will not run easily in anything other than a 32-bit flat memory model. IOW, it is not OS independent, since it won't work on OSes written in assembly, won't work on 16-bit OSes, and possibly won't work in a segmented environment.

Possibly, with some effort, ACPICA might work as a 32-bit RDOS device-driver using Open Watcom's 32-bit segmented memory model with a single code and data segment. This makes for an interesting use of this new device-driver model.

EDIT: I've just confirmed that ACPICA compiles cleanly with Open Watcom, using a 32-bit segmented small memory model.

However, as I noted before, ACPI tables could exist without a BIOS, regardless of the operating model and bitness of the BIOS. They are just tables, with some AML code.