Intel 80186 detection

Antti
Member
Posts: 923
Joined: Thu Jul 05, 2012 5:12 am
Location: Finland

Intel 80186 detection

Post by Antti »

Intel wrote:The setting of the stored values of bits 12 through 15 (which includes the IOPL field and the NT flag) in the EFLAGS register by the PUSHF instruction, by interrupts, and by exceptions is different with the 32-bit IA-32 processors than with the 8086 and Intel 286 processors. The differences are as follows:

• 8086 processor—bits 12 through 15 are always set.
• Intel 286 processor—bits 12 through 15 are always cleared in real-address mode.
• 32-bit processors in real-address mode—bit 15 (reserved) is always cleared, and bits 12 through 14 have the last value loaded into them.
How do I detect that the CPU is an 80186? I have not found the answer even though I have been searching. If I do a proper 8086 test and it fails, does that mean the CPU is at least an 80186 (or even an 80286)? I want to make a simple test before I start using the "pusha/popa" instructions (supported since the 80186).
qw
Member
Posts: 792
Joined: Mon Jan 26, 2009 2:48 am

Re: Intel 80186 detection

Post by qw »

Robert R. Collins wrote:When a word write is performed at offset FFFFh in a segment, the 8086 will write one byte at offset FFFFh, and the other at offset 0, while an 80186 family processor will write one byte at offset FFFFh, and the other at offset 10000h (one byte beyond the end of the segment).
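For reference, here is a minimal sketch of that test in NASM syntax. This is my own untested reading of it: the marker values and the label are made up, and ES must point at a scratch segment where offset 0 and (on a 186 or later) the byte just past the segment's 64 KiB end can both safely be overwritten.

Code:

        mov byte [es:0x0000], 0x00      ; clear the byte an 8086 would wrap onto
        mov word [es:0xFFFF], 0x0102    ; word write straddling the end of the segment
        cmp byte [es:0x0000], 0x01      ; did the high byte wrap around to offset 0?
        je cpu_is_8086                  ; yes: 8086/8088 behaviour (hypothetical label)
        ; no: the high byte went one past the segment end, so this is a 186 or later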
Antti
Member
Posts: 923
Joined: Thu Jul 05, 2012 5:12 am
Location: Finland

Re: Intel 80186 detection

Post by Antti »

The articles by Robert R. Collins were good. Maybe I will simply trust my current code.

Code:

        ...
        pushf                   ; Push FLAGS
        pop bx                  ; bx = FLAGS
        and bh, 0x0F            ; Clear FLAGS bits [15-12]
        push bx
        popf                    ; FLAGS register with bits [15-12] cleared
        pushf                   ; Push FLAGS
        pop bx                  ; bx = FLAGS
        and bh, 0xF0            ; Check FLAGS bits [15-12] only
        cmp bh, 0xF0            ; If FLAGS bits [15-12] are all set, the CPU...
        je .ferr                ; ...is not supported

        ...
        pusha                   ; The CPU is at least 80186
        ...
metallevel
Posts: 18
Joined: Thu May 17, 2012 12:43 pm
Location: in front of a computer

Re: Intel 80186 detection

Post by metallevel »

I believe the original 8086/8088 had a unique bug where pushing the stack pointer (PUSH SP) would push the already decremented value, instead of pushing the old value of the stack pointer and then decrementing it.

So:

Code:

mov ax,sp               ; remember SP before the push
push sp                 ; 8086/8088: pushes the already decremented SP
pop bx                  ; bx = the value that was pushed; SP is back where it started
If ax and bx are different afterwards, you have an 8086/8088. If they are the same, you have a 186 or newer. Either way sp is the same afterwards, so the stack isn't corrupted.

What I am a bit curious about is how to easily tell a 186 from a 286. On pre-Pentiums (which don't have CPUID) I normally install an undefined opcode handler in the IVT and test instructions to see whether they are executed or cause an exception. But the only instructions the 286 has that the 186 doesn't seem to either deal with 16-bit protected mode (which would be annoying to set up just for a test) or with the FPU (which may not be present at all). This could be a problem when targeting very old hardware, since from what I've read the 186 had integrated peripherals, and even though it was used in a couple of PC clones, it isn't fully PC compatible.
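A rough sketch of that IVT probe pattern, in NASM real-mode syntax, could look like the block below. Everything here is an assumption of mine, not tested code: the label and flag names, the choice of SMSW as the probe (a 286+ instruction that can be executed in real mode without any protected-mode setup), and the requirement that the code's ORG matches its runtime CS base so that label offsets are valid vector and return addresses. It also assumes the 8086 has already been ruled out (opcode 0F is POP CS there) and that a 186 raises int 6 for the unknown opcode.

Code:

        cli
        xor ax, ax
        mov ds, ax                      ; DS -> IVT at 0000:0000
        mov byte [cs:probe_flag], 0
        mov ax, [6*4]                   ; save the old int 6 (invalid opcode) vector
        mov dx, [6*4+2]
        mov word [6*4], ud_handler      ; point int 6 at our probe handler
        mov [6*4+2], cs

        smsw bx                         ; 286+: executes; 186: raises int 6

after_probe:
        mov [6*4], ax                   ; restore the old vector
        mov [6*4+2], dx
        sti
        mov bl, [cs:probe_flag]         ; bl = 1 -> pre-286, bl = 0 -> 286 or later
        ...

ud_handler:
        mov byte [cs:probe_flag], 1
        push bp
        mov bp, sp
        mov word [bp+2], after_probe    ; patch the saved IP so IRET resumes after the probe
        pop bp
        iret

probe_flag db 0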
qw
Member
Posts: 792
Joined: Mon Jan 26, 2009 2:48 am

Re: Intel 80186 detection

Post by qw »

metallevel - The 80186 had the same bug. This trick is used to distinguish an 8086 or 80186 from an 80286 or above.
mikegonta
Member
Posts: 229
Joined: Thu May 19, 2011 5:13 am

Re: Intel 80186 detection

Post by mikegonta »

Antti wrote:How do I detect that the CPU is an 80186?

Code:

  mov cx, 0x121           ; ch = 1, cl = 0x21 (a shift count of 33)
  shl ch, cl              ; with a 5-bit mask the count becomes 1; unmasked, ch is shifted to zero
  je not_at_least_a_186   ; ZF set: the result was zero, so the count was not masked
If the CPU is at least an 80186, the shift count is limited to 5 bits.
Last edited by mikegonta on Fri Nov 01, 2013 3:17 pm, edited 1 time in total.
Mike Gonta
look and see - many look but few see

https://mikegonta.com
metallevel
Posts: 18
Joined: Thu May 17, 2012 12:43 pm
Location: in front of a computer

Re: Intel 80186 detection

Post by metallevel »

Hobbes wrote: metallevel - The 80186 had the same bug. This trick is used to distinguish an 8086 or 80186 from an 80286 or above.
Well, looking at the manual:
Intel Software Developer Manual wrote: For IA-32 processors from the Intel 286 on, the PUSH ESP instruction pushes the value of the ESP register as it existed before the instruction was executed. (This is also true for Intel 64 architecture, real-address and virtual-8086 modes of IA-32 architecture.) For the Intel® 8086 processor, the PUSH SP instruction pushes the new value of the SP register (that is the value after it has been decremented by 2).
I'm guessing that means you're right.
mikegonta wrote: If the CPU is at least an 80186, the shift count is limited to 5 bits.
Again looking at the manual:
Intel Software Developer Manual wrote: The 8086 does not mask the shift count. However, all other IA-32 processors (starting with the Intel 286 processor) do mask the shift count to 5 bits, resulting in a maximum count of 31. This masking is done in all operating modes (including the virtual-8086 mode) to reduce the maximum execution time of the instructions.
Also a bit ambiguous, but I'd assume this too would only distinguish an 8086 or 80186 from an 80286 or above.

I recall hearing a rumor that, upon hitting an invalid opcode, the 8086/8088 would skip two bytes and continue executing. If that's true, then you could try pushing an immediate byte (a 186+ instruction) and checking whether the stack pointer changed. Of course this is completely undocumented, and would probably go very badly if you have an 8086 clone instead of an Intel-made chip. Then again, probably all of these methods are undocumented to some extent, or at least were when the 8086 was first produced.

Looks like Hobbes' method is the best.
qw
Member
Posts: 792
Joined: Mon Jan 26, 2009 2:48 am

Re: Intel 80186 detection

Post by qw »

metallevel wrote:Looks like Hobbes' method is the best.
Collins' method. Credit where credit is due.

Good research, metallevel.
Antti
Member
Posts: 923
Joined: Thu Jul 05, 2012 5:12 am
Location: Finland

Re: Intel 80186 detection

Post by Antti »

Thank you for your replies. The problem is definitely solved. There are millions of users trying to run my OS on their 8086/80186 PCs. Sad to say, they will receive an error message when trying to do that.
tom9876543
Member
Posts: 170
Joined: Wed Jul 18, 2007 5:51 am

Re: Intel 80186 detection

Post by tom9876543 »

Dare I ask, how many 80186 CPUs do you actually have in your possession?
Antti
Member
Posts: 923
Joined: Thu Jul 05, 2012 5:12 am
Location: Finland

Re: Intel 80186 detection

Post by Antti »

tom9876543 wrote:Dare I ask, how many 80186 CPUs do you actually have in your possession?
I was waiting for a question like "why are you doing this?" and this is close enough. Here comes the answer:

I do not have any 8086/8088/80186 CPUs. The chances of the code branching to "CPU is not supported" are practically zero. However, I want to do it right. If I see boot code that starts using instructions newer than the 8086/8088 without first making a proper test to see whether those are supported, I do not trust the overall code quality to be high enough. It is like making a bad first impression.

Doing a test like this is more like saying "you can trust that I am pedantic about details". A programmer who does this is more likely to write all their other code following the same principle. Those are the people I would trust. It is not possible to check everything, so trust is important when doing teamwork. In short: nowadays the ancient CPU test is a social issue rather than a technical one.
qw
Member
Posts: 792
Joined: Mon Jan 26, 2009 2:48 am

Re: Intel 80186 detection

Post by qw »

Antti +1
tom9876543
Member
Posts: 170
Joined: Wed Jul 18, 2007 5:51 am

Re: Intel 80186 detection

Post by tom9876543 »

Antti wrote:
tom9876543 wrote:Dare I ask, how many 80186 CPUs do you actually have in your possession?
I was waiting for a question like "why are you doing this?" and this is close enough. Here comes the answer:

I do not have any 8086/8088/80186 CPUs. The chances of the code branching to "CPU is not supported" are practically zero. However, I want to do it right. If I see boot code that starts using instructions newer than the 8086/8088 without first making a proper test to see whether those are supported, I do not trust the overall code quality to be high enough. It is like making a bad first impression.

Doing a test like this is more like saying "you can trust that I am pedantic about details". A programmer who does this is more likely to write all their other code following the same principle. Those are the people I would trust. It is not possible to check everything, so trust is important when doing teamwork. In short: nowadays the ancient CPU test is a social issue rather than a technical one.
OK. What is the minimum CPU your Operating System will run on?
Rudster816
Member
Posts: 141
Joined: Thu Jun 17, 2010 2:36 am

Re: Intel 80186 detection

Post by Rudster816 »

Why don't you just set up a real mode exception handler for the illegal opcode exception that jumps to your code that handles an unsupported CPU? Then you can set the lowest level ISA you want to support via your assembler (this can be done via the CPU directive in NASM). That way you are 100% sure that you either gracefully error out when you hit an instruction that isn't supported on some dinosaur CPU, or your assembler throws an error when you try to use an instruction that isn't supported on your target CPU.
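For example, a tiny sketch of how the NASM side of that could look (the layout, labels, and comments are mine; the CPU directive itself is standard NASM syntax and can be changed partway through a file):

Code:

        CPU 8086                ; from here on NASM rejects anything newer than the 8086

        ; ... run the detection code, which uses only 8086 instructions ...

        CPU 186                 ; detection passed: 186+ instructions are allowed again
        pusha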
Antti
Member
Posts: 923
Joined: Thu Jul 05, 2012 5:12 am
Location: Finland

Re: Intel 80186 detection

Post by Antti »

tom9876543 wrote:What is the minimum CPU your Operating System will run on?
It is not ready yet, but it will require an 80286. Of course, the system that runs on the 80286 is much more limited than the 80386 and x86-64 versions. I will write a simple kernel for the 80286 that supports COM executables and a subset of the MS-DOS API. I have no plans to support existing MS-DOS applications, but applications written for my system (using only a subset of "int 21h" services) will run, for example, on MS-DOS, 32-bit editions of Windows, or DOSBox. The system runs in Protected Mode and direct hardware access is not allowed. It is not a real-mode OS! BIOS services are not used.

My 32-bit and 64-bit kernels will natively support applications written for this 16-bit system. These kernels also introduce their own applications (32-bit or 64-bit) that are not compatible with any other system. Of course, these can use all the modern features the kernel provides.
Rudster816 wrote:Why don't you just set up a real mode exception handler for the illegal opcode exception that jumps to your code that handles an unsupported CPU?
Why? If I can detect the CPU without doing that, I do not want to set up a real mode exception handler. While in real mode, I only use BIOS services and do not touch anything else (configure hardware or disable interrupts, for example). While the BIOS is in control, I respect it by not touching the exception/interrupt vectors. When the kernel takes control, the BIOS is not used anymore and I can configure the hardware and do whatever I want.
Rudster816 wrote:Then you can set the lowest level ISA you want to support via your assembler (this can be done via the CPU directive in NASM).
I already do that. It is a good feature.