Kemp wrote: As regards interfacing to devices, as long as the devices are actually separate (e.g. not two hard drives on the same channel) and they are not being talked to via a central controlling chip of some sort (not too sure which ones count here, though; as BI mentioned, the keyboard and mouse don't care what you do to the other, but I thought they used a common data register or somesuch?) then concurrent/interleaved accesses should be fine.
I just remembered an issue with some dual IDE interface implementations on the motherboard. It has to do with shared signals: if you try simultaneous transfers on both channels on an affected system, you're risking data corruption. The channels are supposed to be separate, but on some boards they aren't. I checked two (identical) Pentium-era motherboards I have and they both have the problem; I haven't checked any of my other systems though.
Driver and what to expect them to expect?
Re:Driver and what to expect them to expect?
- Pype.Clicker
- Member
- Posts: 5964
- Joined: Wed Oct 18, 2006 2:31 am
- Location: In a galaxy, far, far away
- Contact:
Re:Driver and what to expect them to expect?
From what I know of Gb Ethernet cards, you rarely write a driver for those that is purely IRQ-based. You use IRQs to detect the presence of a new packet, but chances are that more new packets will need to be signalled by the time you handle the IRQ (you might have been busy with interrupts disabled for more than 10 µs) ...
As a result, those drivers (at least in Linux) have a quota of time to spend on IRQ response and will get data out of the card as long as there's data pending and time to spend.
Oh, and btw, if you put 20 GigE cards in a single machine (is there a PCI bus that supports that?) you probably want the machine to have multiple CPUs, and one or more of those CPUs will be actively polling and processing packets all the time, ignoring any interrupts, because interrupt handling would just be a waste of time ...
(my guess is that you'd want to take a look at IXP network processors if you seriously think about putting 20 GigE cards together)
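The quota idea Pype describes (Linux calls this NAPI) can be sketched roughly like this. The `Nic` class, function names, and queue model are made up for illustration; a real driver would be masking the card's RX interrupt and reading descriptors, not popping from a Python deque:

```python
from collections import deque

class Nic:
    """Toy NIC model: hardware queues packets; the driver drains them."""
    def __init__(self):
        self.rx_queue = deque()
        self.irq_enabled = True

    def receive(self, packet):
        self.rx_queue.append(packet)
        return self.irq_enabled  # an IRQ fires only while the RX IRQ is unmasked

def irq_handler(nic, poll_list):
    # Acknowledge and mask the RX interrupt, then defer the real work:
    nic.irq_enabled = False
    poll_list.append(nic)

def poll(nic, budget):
    """Process at most `budget` packets; re-enable the IRQ only once drained."""
    done = []
    while nic.rx_queue and len(done) < budget:
        done.append(nic.rx_queue.popleft())
    if not nic.rx_queue:
        nic.irq_enabled = True   # queue drained: go back to IRQ-driven mode
        return done, True
    return done, False           # budget exhausted: stay in polling mode
```

The point of the budget is exactly the one made above: under load, the card stays in polling mode and packets that arrived while the IRQ was masked get handled without any further interrupts.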
Re:Driver and what to expect them to expect?
Well... I haven't done a driver yet, but what I think is, if it requires the IRQ to detect new packets even on a "non-IRQ-based driver", you'd just be clogging the PCI bus for no reason, I would think. It just sounds silly to me, but I don't know. If you see 20 of those Ethernet cards on a single PCI bus, maybe it would be better to open up the case, take out the tazer and give the CPU a zap to put it out of its misery too. ;D ...But then maybe the processor isn't the problem; maybe it's the limited DMA channels and PCI bus bandwidth.
edit: I apologize for bad grammar and/or spelling; it's due to the side effects of not getting enough rest, also known as the "mystran" side effects.
Re:Driver and what to expect them to expect?
beyond infinity wrote:
Basically, when I write a technical question, I want a technical answer. I'll ask warm & fuzzy questions if I want warm & fuzzy answers.
If I write an obviously stupid or malformed technical question, either because I'm confused or by mistake, the best answer is one that answers the question that should have been asked. Your reply did that fine.
Oh, and I do know what happens at the physical level, sure, no need to go there. I was just wondering how stupid the devices one has to interface with are going to be.
Ryu wrote:
That's just a draft though; have to see how it all works out.
Hope you didn't think I was offended. Your original reply was great.
@mystran: Maybe that feeling, as if I consider your writing a full load of nonsense, stems from me writing bluntly without caring for warm & fuzzy feelings. No offense intended, honestly.
Ryu wrote: At the same time, they are searched when an interrupt is generated to find which driver is responsible for it (which is why I was concerned about search times in the other thread). I guess that is the same as your "Device-manager can then tell associated drivers to check if it's their device, and once the suspect is found, device-manager notifies kernel that IRQ be unmasked".
Yeah, well, except while I'm not going to support sharing of I/O resources, I might have to support sharing of IRQs, which means I need some extra logic to figure out which device caused IRQn when three devices are using it (so we have three drivers for it). But there's basically one way to do it: ask every driver (using that interrupt) until one of them says "yeah, this was mine". At least I'm not aware of better solutions.
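The "ask every driver" scheme above can be sketched like this. The `Driver` class and its `handled_irq` method are hypothetical names; the real check would be reading the device's interrupt-status register. One design note: with level-triggered PCI IRQs, several devices may assert the same line at once, so it is safer to ask every registered driver rather than stopping at the first claimant:

```python
class Driver:
    """Hypothetical driver interface: each driver can ask its device
    whether it raised the interrupt (typically via a status register)."""
    def __init__(self, name):
        self.name = name
        self.pending = False   # stands in for the device's "IRQ pending" bit

    def handled_irq(self):
        if self.pending:
            self.pending = False   # service and acknowledge the device
            return True
        return False

def dispatch_shared_irq(drivers):
    """Ask every driver registered on this IRQ line whether its device
    was responsible.  Returns the names of the drivers that claimed it;
    an empty list means a spurious interrupt."""
    return [d.name for d in drivers if d.handled_irq()]
```

Drivers here are checked in registration order; a refinement some kernels use is moving the most frequent claimant to the front of the list.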
- Pype.Clicker
Re:Driver and what to expect them to expect?
Well, the NICs might very well be busmastering devices, pushing and pulling packets directly into main memory. Yet, with full-duplex systems, that would require the PCI bus to carry at least 40 Gbps. The good thing is that most memory accesses will then be bursts (what DRAM does best), but I still expect that means a system bus clocked at 1 GHz ...
The IXP2850 requires dedicated DRAM controllers and (iirc) 4 independent DRAM channels to handle less than 10 Gbps ...
So if you're going to open the box, make sure you know where your towel -- err, I mean spacesuit -- is.
Re:Driver and what to expect them to expect?
Hmm, I'm only aware of 33 MHz and 66 MHz PCI buses, which is at most 133 MB/s and 533 MB/s. As for busmastering, I never got into that and always thought it was just a way for devices to communicate with the chipset DMA controller to transfer memory. But one thing I know for sure: only one device can be master of the bus at a time.
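The numbers above are worth checking against the 20-card scenario. Peak PCI bandwidth is just clock rate times bus width (133 MB/s is 32 bits at 33.33 MHz; 533 MB/s is 64 bits at 66.66 MHz), and the aggregate demand of 20 full-duplex gigabit NICs dwarfs either figure; a quick back-of-the-envelope sketch:

```python
def peak_mb_per_s(clock_hz, width_bits):
    """Peak bus transfer rate in MB/s: one width_bits-wide transfer per clock."""
    return clock_hz * (width_bits // 8) / 1e6

pci_32_33 = peak_mb_per_s(33.33e6, 32)   # classic PCI: ~133 MB/s
pci_64_66 = peak_mb_per_s(66.66e6, 64)   # 64-bit / 66 MHz PCI: ~533 MB/s

# 20 full-duplex gigabit NICs, ignoring all protocol overhead:
gige_demand = 20 * 2 * 1e9 / 8 / 1e6     # in MB/s
```

Even the fastest conventional PCI flavour falls short of that demand by roughly a factor of ten, which is Pype's point about needing a much faster system bus.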
- Pype.Clicker
Re:Driver and what to expect them to expect?
No: busmastering means that (for a given period of time) your device owns the bus. It can use it to "push" or "pull" data directly to the memory chips (okay, and the chipset bridge has to translate those requests into requests to main memory on the system bus; as we all know, DRAM is not attached to the PCI bus).
That contrasts with the "regular" DMA controller, which issues bus cycles on both the ISA bus and the system bus to have data moved between memory and your SoundBlaster Pro card, for instance.
Technically speaking, I think there's nothing that would even prevent your device from pushing its data to another device (provided that they've learnt about each other, etc.)
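For the "regular" DMA controller case, the driver's job is mostly computing the values the 8237 expects: the buffer's physical address is split into a 16-bit offset plus a separate page register (address bits 16-23), an 8-bit transfer may not cross a 64 KiB boundary, and the controller can only reach the first 16 MiB. A sketch of that bookkeeping (the actual values would then be written out to the controller's I/O ports, which this illustration deliberately stops short of):

```python
def isa_dma_setup(phys_addr, length):
    """Compute the page register, 16-bit offset, and count value for a
    legacy 8237 ISA DMA transfer, enforcing the controller's limits."""
    if phys_addr + length > 16 * 1024 * 1024:
        raise ValueError("ISA DMA can only address the first 16 MiB")
    page   = (phys_addr >> 16) & 0xFF    # address bits 16-23
    offset = phys_addr & 0xFFFF          # address bits 0-15
    if offset + length > 0x10000:
        raise ValueError("transfer would cross a 64 KiB boundary")
    count = length - 1                   # the 8237 counts length-1 down to 0
    return page, offset, count
```

A buffer that straddles a 64 KiB boundary has to be rejected (or bounced through a safe buffer), which is one reason OSes keep dedicated low-memory pools for these devices.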
Re:Driver and what to expect them to expect?
Btw, is it okay for the system to enable bus mastering on several devices on the same PCI bus at the same time?
I mean, is it the operating system, or some hardware mechanism, that selects who owns the bus at any given moment?
- Pype.Clicker
Re:Driver and what to expect them to expect?
That's supposed to be the job of the PCI bus arbiter, which is in your chipset afaik.
Btw, busmastering devices raise another (non-trivial) problem for user-mode drivers: since your device can potentially write (and read) anywhere in memory, it can trash anything, and there's no generic way to know where the device is about to write, since that could be defined by any of the device's registers (i.e. at the vendor's choice).
-
- Member
- Posts: 1600
- Joined: Wed Oct 18, 2006 11:59 am
- Location: Vienna/Austria
- Contact:
Re:Driver and what to expect them to expect?
If I'm not mistaken, it is up to the CPUs of either the devices (PCI cards, e.g. NICs or sound cards) or the DMA to say: "STOP! I am writing to the bus, gerroff wi' ya grubby paws". That's microcode stuff, IIRC. And as long as the bus is being written to by one device, the others have to stand by, or else we'll get what in LAN talk is known as a packet collision.
@Pype: For that, it suffices to stuff the wrong physical address into a NIC ring buffer. PCI DMA cares zilch about MMUs & paging and accesses memory directly. You do so and then wonder why you suddenly find Ethernet packet data in place of your kernel page tables. Ouch, that hurts. That's why I have had to introduce something like "alloc_dma_region" in my vmm subsystem.
Stay safe
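The "alloc_dma_region" idea mentioned above can be illustrated with a toy allocator. This sketch is hypothetical (only the name comes from the post; the real vmm subsystem is not described here): the VMM hands drivers physically contiguous, page-aligned regions below a DMA address limit, so the only physical addresses a driver ever feeds to a NIC ring buffer are ones the allocator vouched for:

```python
class PhysAllocator:
    """Toy model of an alloc_dma_region-style allocator: hand out
    physically contiguous, page-aligned regions below a DMA limit."""
    PAGE = 0x1000

    def __init__(self, dma_limit):
        self.next_free = 0x100000        # pretend free RAM starts at 1 MiB
        self.dma_limit = dma_limit

    def alloc_dma_region(self, size):
        # Round the request up to whole pages so regions stay aligned:
        size = (size + self.PAGE - 1) & ~(self.PAGE - 1)
        phys = self.next_free
        if phys + size > self.dma_limit:
            raise MemoryError("no physically contiguous DMA memory left")
        self.next_free += size
        return phys   # the driver maps this and programs it into the card
```

A real allocator would track freed regions and per-device limits; the point here is just that drivers never invent physical addresses themselves.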
... the osdever formerly known as beyond infinity ...
BlueillusionOS iso image
Re:Driver and what to expect them to expect?
beyond infinity wrote: If I'm not mistaken, it is up to the CPUs of either the devices (PCI cards, e.g. NICs or sound cards) or the DMA to say: "STOP! I am writing to the bus, gerroff wi' ya grubby paws". That's microcode stuff, IIRC. And as long as the bus is being written to by one device, the others have to stand by, or else we'll get what in LAN talk is known as a packet collision.
Basically, all of the devices that want to be bus master at a given moment in time signal their intent, and then wait for the bus arbiter to tell them whether or not they win the lottery. So if you have 2 NICs and a TV capture card that are all trying to take over the bus at once, the bus arbiter will pick one, and the others have to (by order of the spec) sit and wait for their chance. They keep trying until they're told it's their turn.
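The lottery described above (each device asserts its REQ# line and waits for its GNT#) can be sketched as one arbitration round. The PCI spec leaves the arbitration algorithm to the chipset; round-robin, shown here, is just one simple fair policy:

```python
def arbitrate(requests, last_granted, n_agents):
    """One round of a round-robin bus arbiter: every agent in `requests`
    has its REQ# asserted; grant goes to the first requester after the
    previously granted agent, so no device starves."""
    for i in range(1, n_agents + 1):
        agent = (last_granted + i) % n_agents
        if agent in requests:
            return agent
    return None   # no one is requesting the bus
```

Exactly one grant is issued per round, which is the hardware guarantee behind "only one device can be master of the bus at a time".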
Re:Driver and what to expect them to expect?
bkilgore wrote: Basically, all of the devices that want to be bus master at a given moment in time signal their intent, and then wait for the bus arbiter to tell them whether or not they win the lottery.
Hmmh, ok.
So it's basically the same logic as when 2 CPUs want to read/write memory: do arbitration to select who gets to lock the bus, do whatever you want, and finally drop the lock.
Thanks to both you and Pype, that cleared it.
---
In fact I'm starting to understand the whole thing, and a lot of little facts I've known for a long time finally start fitting together.
You people are lovely.
Re:Driver and what to expect them to expect?
Pype.Clicker wrote: Btw, busmastering devices raise another (non-trivial) problem for user-mode drivers: since your device can potentially write (and read) anywhere in memory, it can trash anything, and there's no generic way to know where the device is about to write, since that could be defined by any of the device's registers (e.g. at the vendor's choice).
Sure, that's why you have to trust your drivers. But keeping drivers in userspace, and separated from each other (in some sane way, like putting the TCP/IP stack and network drivers in the same process, maybe), has the advantage that it reduces the amount of code that can trash your system.
If they live in the kernel, they can trash anything at any time. If they live in userspace, the driver will hopefully die from a page fault before it issues a command to trash the system.
If you have 10 lines of code that you need to make paranoid, it's a LOT easier than if you have 1000 lines of code that you have to make paranoid. You might have bad luck and trash the thing anyway, but it's a lot less likely.
Another good reason for userspace drivers is memory management. In the kernel, anything dealing with memory is pretty nasty, and leaking memory means a panic sooner or later. In a userspace driver, if you notice that you are out of memory, it's possible to restart with all memory available for reuse.
Considering that device driver code is typically a "necessary evil", I think it's worth the trouble to make it a little safer, even if not quite safe.
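The restart-instead-of-panic argument above amounts to a small supervisor loop. This is a hypothetical sketch (the factory/handle interface is invented for illustration): a crashed driver instance is discarded, which is where all its leaked memory gets reclaimed, and a fresh instance takes over instead of the whole system going down:

```python
def run_driver(driver_factory, requests, max_restarts=3):
    """Supervisor sketch for a userspace driver: on a 'crash' (here
    modelled as a raised exception), discard the instance and restart
    with a clean slate instead of panicking the whole system."""
    driver = driver_factory()
    restarts = 0
    results = []
    for req in requests:
        try:
            results.append(driver.handle(req))
        except Exception:
            restarts += 1
            if restarts > max_restarts:
                raise            # a driver that keeps dying is really broken
            driver = driver_factory()   # fresh state; leaked memory reclaimed
            results.append(None)        # one request lost, system survives
    return results, restarts
```

The bound on restarts matters: a driver that dies on every request should eventually be reported as failed rather than looped forever.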
Re:Driver and what to expect them to expect?
Pype.Clicker wrote: That's supposed to be the job of the PCI bus arbiter, which is in your chipset afaik. Btw, busmastering devices raise another (non-trivial) problem for user-mode drivers: since your device can potentially write (and read) anywhere in memory, it can trash anything, and there's no generic way to know where the device is about to write, since that could be defined by any of the device's registers (e.g. at the vendor's choice).
I can't seem to find it again, but I read somewhere last week that this is a problem PCI-X is solving. They have an IOMMU that translates between the virtual addresses the cards/drivers see and the real memory. This way an operating system can enforce the reads/writes.
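The translation idea described above can be sketched with a toy IOMMU: the OS maps device-visible (I/O-virtual) pages to physical pages, and any DMA to an unmapped address faults instead of silently trashing memory. The class and method names here are invented for illustration:

```python
class Iommu:
    """Toy IOMMU: the OS maps device-visible pages to physical pages;
    DMA to an unmapped address faults instead of corrupting memory."""
    PAGE = 0x1000

    def __init__(self):
        self.table = {}   # I/O-virtual page number -> physical page number

    def map(self, io_virt, phys):
        self.table[io_virt // self.PAGE] = phys // self.PAGE

    def translate(self, io_virt):
        """Called (conceptually in hardware) on every DMA access."""
        vpn = io_virt // self.PAGE
        if vpn not in self.table:
            raise PermissionError(f"IOMMU fault at {io_virt:#x}")
        return self.table[vpn] * self.PAGE + io_virt % self.PAGE
```

With this in place, the user-mode-driver problem Pype raised mostly goes away: even a buggy driver can only point the device at pages the OS chose to map for it.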