But how is it possible to determine which flags to use in ICW1? And how do PCI and ISA devices work together, since they have different ways of delivering their interrupts to the 8259? I mean that I have only one LTIM bit for both PCI and ISA devices. I guess that all ISA devices are connected to the PCI bus via a bridge, but this is just an assumption...
And I have a question about buffered mode. Is this mode required by some external conditions? If so, when do I have to use it?
Also, I am curious: was the 'rotating priority' mode of the 8259 ever used?
I hope you are not tired of my questions yet.
8259A help needed.
Hi,
Mikae wrote: But how is it possible to determine which flags to use in ICW1?
Due to the way it was wired in original 80x86 motherboards, ICW1 must be set to 0x11. Possibly the easiest way to determine this for the first time is to look at the datasheets for a modern chipset. For example, the Intel 865 chipset datasheet has the following description of the ICW1 bits (a minimal initialization sketch follows the list):
- Bits 7:5, ICW/OCW Select - WO. These bits are MCS-80 specific, and not needed. Should be programmed to 000b.
- Bit 4, ICW/OCW Select - WO. This bit must be set to 1 to select ICW1 and enable the ICW2, ICW3 and ICW4 sequence.
- Bit 3, Edge/Level Bank Select (LTIM) - WO. Disabled. Replaced by the edge/level triggered control registers (ELCR).
- Bit 2, ADI - WO. Ignored for the Intel ICH5. Should be programmed to 0.
- Bit 1, Single or Cascade (SNGL) - WO. Must be programmed to 0 to indicate two controllers operating in cascade mode.
- Bit 0, ICW4 Write Required (IC4) - WO. This bit must be programmed to 1 to indicate that ICW4 needs to be programmed.
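To make the ICW sequence concrete, here is a minimal sketch of the usual master/slave initialization on a PC. It assumes the standard port assignments (master at 0x20/0x21, slave at 0xA0/0xA1); the vector offsets 0x20/0x28 and the outb() helper are my own choices for the example, not something from the post:

#include <stdint.h>

#define PIC1_CMD  0x20   /* master PIC command port */
#define PIC1_DATA 0x21   /* master PIC data port    */
#define PIC2_CMD  0xA0   /* slave PIC command port  */
#define PIC2_DATA 0xA1   /* slave PIC data port     */

static inline void outb(uint16_t port, uint8_t val)
{
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}

/* Remap both PICs; the offsets become the interrupt vectors for IRQ0 and IRQ8. */
static void pic_init(uint8_t master_offset, uint8_t slave_offset)
{
    outb(PIC1_CMD,  0x11);          /* ICW1: edge, cascade, ICW4 needed    */
    outb(PIC2_CMD,  0x11);
    outb(PIC1_DATA, master_offset); /* ICW2: vector offset for IRQ0..7     */
    outb(PIC2_DATA, slave_offset);  /* ICW2: vector offset for IRQ8..15    */
    outb(PIC1_DATA, 0x04);          /* ICW3: slave attached to IRQ2        */
    outb(PIC2_DATA, 0x02);          /* ICW3: slave cascade identity = 2    */
    outb(PIC1_DATA, 0x01);          /* ICW4: 8086 mode (0x11 would add SFNM) */
    outb(PIC2_DATA, 0x01);
    outb(PIC1_DATA, 0xFB);          /* mask everything except the cascade line */
    outb(PIC2_DATA, 0xFF);
}

Typical use would be pic_init(0x20, 0x28), so the IRQ vectors no longer overlap the CPU exception vectors.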
The 8259A chip is a general purpose chip that wasn't specifically designed for 80x86. Because of this it supports things that are entirely useless for 80x86 (but may have been desirable for other architectures that were around at the time).
The other thing most people have forgotten is that the original IBM computers (the things modern 80x86 evolved from, and are still mostly compatible with) were a box of crap. None of it was really designed to suit; there was no consideration for future versions, and all of the electronics were chosen with cost as a major factor.
For example, the reason why the PIT timer uses a 1.193 MHz input clock can be traced back to the frequency of the oscillators used in NTSC televisions (which made the parts used in the oscillator cheaper due to mass production).
BTW, the MCS-80 and MCS-85 were Intel's earlier microcomputer families, built around the 8080 and 8085 CPUs. These CPUs aren't compatible with the later 8086 - they used 16-bit physical addresses which limited them to 64 KB, an 8-bit data bus, had 7 general registers (A, B, C, D, E, H and L), plus the PC (program counter), SP (stack pointer) and an 8-bit flags register. The 8085 was made redundant by the Z80 (an 8080-compatible but much better CPU made by Zilog) while Intel moved on to the 8086.
Mikae wrote: And how do PCI and ISA devices work together, since they have different ways of delivering their interrupts to the 8259? I mean that I have only one LTIM bit for both PCI and ISA devices. I guess that all ISA devices are connected to the PCI bus via a bridge, but this is just an assumption...
Some IRQs are used for ISA and are programmed in the ELCR (I/O ports 0x4D0 and 0x4D1) as edge triggered, while some are used for PCI and programmed in the ELCR as level triggered. The LTIM bit in the PIC chip itself is either ignored completely (not implemented) or set to 0 for compatibility.
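As an illustration only (the I/O port numbers are from the paragraph above; everything else is an assumption), an OS could read both ELCR registers once and then answer "is this IRQ level triggered?" from a cached 16-bit mask:

#include <stdint.h>

#define ELCR1 0x4D0   /* trigger mode for IRQ 0..7, one bit per IRQ  */
#define ELCR2 0x4D1   /* trigger mode for IRQ 8..15, one bit per IRQ */

static inline uint8_t inb(uint16_t port)
{
    uint8_t v;
    __asm__ volatile ("inb %1, %0" : "=a"(v) : "Nd"(port));
    return v;
}

/* Read both ELCR bytes into one mask: bit N set => IRQ N is level triggered. */
static uint16_t elcr_read(void)
{
    return (uint16_t)inb(ELCR1) | ((uint16_t)inb(ELCR2) << 8);
}

static int irq_is_level_triggered(uint16_t elcr, unsigned irq)
{
    return (elcr >> irq) & 1;   /* 1 = level (PCI style), 0 = edge (ISA style) */
}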
Mikae wrote: And I have a question about buffered mode. Is this mode required by some external conditions? If so, when do I have to use it?
If you can still find an 8259A and decide to use it in some sort of embedded device, then it's possible that (depending on how you design the hardware) buffered mode might be useful. For real computers "buffered mode" doesn't exist anymore (e.g. it's described as "not used, should always be programmed to 0" in the 865 chipset datasheets).
Mikae wrote: Also, I am curious: was the 'rotating priority' mode of the 8259 ever used?
It should still be supported, but I can't think of a reason why anyone would use it for 80x86. It looks like it's intended for situations where all IRQs are treated equally and load balancing is desirable - for example, consider a PIC chip used with 8 serial ports connected to 8 dumb terminals.
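For completeness, and purely as a hypothetical sketch (the 0xA0 value is the "rotate on non-specific EOI" OCW2 command from the 8259A datasheet; the helper and port name are my own): enabling rotation is just a different end-of-interrupt command, which makes the IRQ that was just serviced become the lowest priority, so the eight inputs end up being serviced round-robin.

#include <stdint.h>

#define PIC1_CMD 0x20

static inline void outb(uint16_t port, uint8_t val)
{
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}

/* OCW2 = 0xA0: "rotate on non-specific EOI" - ends the interrupt and
 * moves the just-serviced IRQ to the lowest priority position. */
static void pic_eoi_and_rotate(void)
{
    outb(PIC1_CMD, 0xA0);
}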
Mikae wrote: I hope you are not tired of my questions yet.
Hehe - no. I woke up at 6:00 yesterday afternoon, spent most of this morning pulling a roof apart to install some air-conditioning cables and it's 3:00 in the afternoon now - I'm tired for other reasons (your question was a much needed break from the roof).
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Hi,
For the "PIC chips" backwards compatability is the main concern. All later versions behave the same as the original 8259A chip, if programmed to suit it's original wiring in AT machines (IIRC XT machines didn't have a slave PIC, and were therefore wired differently and configured by software differently).
The only chipset dependant concern is if you're programming the PIC chip wrong. In this case an original 8259A would use your programming to configure itself in a way that makes no sense in 80x86 machines (for e.g. using MCS-80 mode would competely stuff up bus traffic and probably lockup the machine or cause a hardware reset). In newer chipsets wrong programming may behave differently (for e.g. MSC-80 mode may not be implemented at all, and trying to enable it may not make any difference).
The only exception to this is the addition of the ELCR, which can be considered a seperate piece of electronics outside of the "PIC chip".
The ELCR can be left in the state that was programmed by the BIOS if the OS doesn't mess with the PCI IRQ router (the PCI IRQ router determines which interrupt lines from the PCI bus are mapped to which PIC chip inputs). If the PCI IRQ router's configuration is changed, then the ELCR also needs to be changed. In general, there's no real reason for an OS to do this...
The next architectural change is the I/O APIC, but it completely replaces the PIC chip rather than adding extensions to it.
Cheers,
Brendan
Mikae wrote: As I understood from your posts, some things in programming the 8259 are chipset-dependent. Is there a standard which determines that LTIM, for example, has to be zeroed anyway, or does the OS have to determine which chipset the target machine uses?
In general there isn't really a definitive standard anymore. Originally IBM published specifications that formed the "standard architecture" for XT and AT machines. Since then a large number of companies have added a large number of variations/extensions that have become de facto standard extensions, usually limited only by a desire for backwards compatibility and market forces.
For the "PIC chips" backwards compatability is the main concern. All later versions behave the same as the original 8259A chip, if programmed to suit it's original wiring in AT machines (IIRC XT machines didn't have a slave PIC, and were therefore wired differently and configured by software differently).
The only chipset dependant concern is if you're programming the PIC chip wrong. In this case an original 8259A would use your programming to configure itself in a way that makes no sense in 80x86 machines (for e.g. using MCS-80 mode would competely stuff up bus traffic and probably lockup the machine or cause a hardware reset). In newer chipsets wrong programming may behave differently (for e.g. MSC-80 mode may not be implemented at all, and trying to enable it may not make any difference).
The only exception to this is the addition of the ELCR, which can be considered a separate piece of electronics outside of the "PIC chip".
The ELCR can be left in the state that was programmed by the BIOS if the OS doesn't mess with the PCI IRQ router (the PCI IRQ router determines which interrupt lines from the PCI bus are mapped to which PIC chip inputs). If the PCI IRQ router's configuration is changed, then the ELCR also needs to be changed. In general, there's no real reason for an OS to do this...
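If an OS did reprogram the PCI IRQ router, the matching ELCR update is a simple read-modify-write. The following is a sketch under the same assumptions as the earlier one (port numbers from the earlier paragraph, helper names my own), keeping in mind that IRQs 0, 1, 2, 8 and 13 are reserved and must stay edge triggered:

#include <stdint.h>

#define ELCR1 0x4D0
#define ELCR2 0x4D1

static inline void outb(uint16_t port, uint8_t val)
{
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}
static inline uint8_t inb(uint16_t port)
{
    uint8_t v;
    __asm__ volatile ("inb %1, %0" : "=a"(v) : "Nd"(port));
    return v;
}

/* Mark an IRQ as level triggered (level = 1) or edge triggered (level = 0)
 * in the ELCR. Returns -1 for IRQs that must remain edge triggered. */
static int elcr_set_trigger(unsigned irq, int level)
{
    static const uint16_t reserved = (1 << 0) | (1 << 1) | (1 << 2)
                                   | (1 << 8) | (1 << 13);
    uint16_t port = (irq < 8) ? ELCR1 : ELCR2;
    uint8_t  bit  = 1 << (irq & 7);
    uint8_t  val;

    if (irq > 15 || (reserved & (1 << irq)))
        return -1;

    val = inb(port);
    val = level ? (val | bit) : (val & (uint8_t)~bit);
    outb(port, val);
    return 0;
}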
The next architectural change is the I/O APIC, but it completely replaces the PIC chip rather than adding extensions to it.
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Thank you for the interesting history tour!
So, I will program my controller as I would program an 8259 for x86, leaving all ambiguous fields at their defaults.
Oh, I totally forgot about the last (I hope) question, about SFNM. I want to use SFNM, since I think that this is the most honest mode. The datasheet says:
b. When exiting the Interrupt Service routine the software has to check whether the interrupt serviced was the only one from that slave. This is done by sending a non-specific End of Interrupt (EOI) command to the slave and then reading its In-Service register and checking for zero. If it is empty, a non-specific EOI can be sent to the master too. If not, no EOI should be sent.
But why is it possible in this mode to have 2 ISR bits set in the slave? Or is this check just a hardware requirement?
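The check the datasheet describes translates fairly directly into code. The following is only a sketch under the usual PC port assignments (master command port 0x20, slave command port 0xA0; OCW3 = 0x0B selects the In-Service Register for the next read), with the helper names being my own:

#include <stdint.h>

#define PIC1_CMD 0x20   /* master command/status port */
#define PIC2_CMD 0xA0   /* slave command/status port  */
#define EOI      0x20   /* OCW2: non-specific End Of Interrupt */
#define READ_ISR 0x0B   /* OCW3: next read returns the In-Service Register */

static inline void outb(uint16_t port, uint8_t val)
{
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}
static inline uint8_t inb(uint16_t port)
{
    uint8_t v;
    __asm__ volatile ("inb %1, %0" : "=a"(v) : "Nd"(port));
    return v;
}

/* End-of-interrupt for a slave IRQ (8..15) in Special Fully Nested Mode:
 * EOI the slave first, then only EOI the master if the slave has no other
 * interrupts still in service. */
static void sfnm_eoi_slave_irq(void)
{
    outb(PIC2_CMD, EOI);        /* non-specific EOI to the slave   */
    outb(PIC2_CMD, READ_ISR);   /* ask for the slave's ISR         */
    if (inb(PIC2_CMD) == 0)     /* nothing else in service?        */
        outb(PIC1_CMD, EOI);    /* then EOI the master as well     */
}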