No, the last two pages are about how *your* different solution is better ('twas your point of view, of course) and how everyone disagrees with you on that point. And you still claim you know better than everybody else.
First stage of grief...
... denial.
"Certainly avoid yourself. He is a newbie and might not realize it. You'll hate his code deeply a few years down the road." - Sortie
[ My OS ] [ VDisk/SFS ]
Griwes wrote:No, the last two pages are about how *your* different solution is better ('twas your point of view, of course) and how everyone disagrees with you on that point. And you still claim you know better than everybody else.
Are you seriously planning to outperform rdos at defending his only correct solution?
I tried, at least
Reaver Project :: Repository :: Ohloh project page
<klange> This is a horror story about what happens when you need a hammer and all you have is the skulls of the damned.
<drake1> as long as the lock is read and modified by atomic operations
Well, it wasn't my intent to provoke a war in this thread... anyway, I just want to say that IT WORKS... Thanks to all your suggestions (Brendan's especially) I managed to write a fairly decent scheduler. I can finally write random "a", "b" and "c" from different threads. Each thread goes to sleep for some ms, and it works so well I can't believe it... Here's my design (I hope this isn't the beginning of another world war):
The scheduler is a multilevel-feedback-queue design, using 16 queues. Internally each queue is just plain RR.
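If it helps to see it concretely, the pick-next logic is roughly this (a minimal sketch in C with made-up names, not my actual code):

Code:
#define NUM_QUEUES 16   /* queue 15 is the highest priority */

typedef struct thread {
    struct thread *next;
    int current_priority;               /* 0..15 */
} thread_t;

typedef struct { thread_t *head, *tail; } queue_t;

static queue_t ready_queue[NUM_QUEUES];

/* Scan from the highest queue down; each queue itself is plain RR,
   so we just take the head. */
static thread_t *pick_next_thread(void)
{
    for (int q = NUM_QUEUES - 1; q >= 0; q--) {
        thread_t *t = ready_queue[q].head;
        if (t) {
            ready_queue[q].head = t->next;
            if (!ready_queue[q].head)
                ready_queue[q].tail = NULL;
            t->next = NULL;
            return t;
        }
    }
    return NULL;    /* nothing runnable: run the idle task */
}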
There is only one sleep queue per CPU. It's implemented as a priority queue, using the WakeupTicks (from the HPET) as the sorting key. So each time I only have to compare the first thread in the queue with the HPET's main counter.
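A minimal sketch of that sleep queue (made-up names; a real implementation would also need per-CPU locking):

Code:
typedef struct sleeper {
    struct sleeper *next;
    unsigned long long wakeup_ticks;    /* absolute HPET ticks */
} sleeper_t;

static sleeper_t *sleep_queue;          /* one of these per CPU */

extern void make_runnable(sleeper_t *t);    /* hypothetical helper */

/* Keep the queue sorted on insert, so only the head ever needs
   to be compared with the HPET main counter. */
static void sleep_enqueue(sleeper_t *s)
{
    sleeper_t **p = &sleep_queue;
    while (*p && (*p)->wakeup_ticks <= s->wakeup_ticks)
        p = &(*p)->next;
    s->next = *p;
    *p = s;
}

/* Called from the timer path with the current HPET main counter. */
static void wake_expired(unsigned long long hpet_now)
{
    while (sleep_queue && sleep_queue->wakeup_ticks <= hpet_now) {
        sleeper_t *s = sleep_queue;
        sleep_queue = s->next;
        make_runnable(s);               /* back onto a ready queue */
    }
}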
Each thread inherits a "Base Priority" from its process. The Base Priority is assigned according to the process kind. The kind is declared by the process itself ("I'm a daemon", "I'm a GUI program", "I'm just crunching numbers").
The CurrentPriority of a thread scales down each time the thread consumes its whole time slice (5 ms). However, it can never get lower than the BasePriority. When the thread starts, the CurrentPriority is 15.
When a thread blocks waiting for I/O, its CurrentPriority goes up by 1 (actually, when it waits on a semaphore). I hope this enhances responsiveness.
WIP: when a thread is found starved on a low-priority queue for a long time (2-3 seconds??), its priority is boosted to 15 and it's given a double time slice; afterwards it goes back to its original priority (suggested by the Win XP scheduler).
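Put together, the priority rules look roughly like this (a sketch; the constants and names are invented, not my actual code):

Code:
#define MAX_PRIORITY  15
#define STARVE_TICKS  3000              /* ~3 s at a 1 ms tick, say */

typedef struct {
    int base_priority;                  /* inherited from the process */
    int current_priority;               /* starts at 15 */
    int time_slice_ms;                  /* normally 5 ms */
    unsigned waiting_ticks;             /* time spent unexecuted */
} sched_info_t;

/* Thread used its whole slice: drop one level, never below base. */
void on_slice_expired(sched_info_t *t)
{
    if (t->current_priority > t->base_priority)
        t->current_priority--;
}

/* Thread blocked on a semaphore (I/O wait): reward it one level. */
void on_io_block(sched_info_t *t)
{
    if (t->current_priority < MAX_PRIORITY)
        t->current_priority++;
}

/* Starvation boost: top priority plus a double slice, after which
   the thread falls back to its original priority. */
void on_starvation_check(sched_info_t *t)
{
    if (t->waiting_ticks >= STARVE_TICKS) {
        t->current_priority = MAX_PRIORITY;
        t->time_slice_ms = 10;
        t->waiting_ticks = 0;
    }
}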
In the future, when the GUI is ready, windows with focus will have a higher base priority.
For CPU balancing I will probably use a high-priority thread that is activated once every # seconds and takes care of the balancing (AFAIK this is the method used by Linux).
I also read that Linux CPU schedulers GRAB threads from other CPUs' queues (when they're idle) instead of pushing to other CPUs (when they're busy), or using a global queue. This also implies that the idle CPU checks the other CPUs. What about this method?
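Just to make sure I understood the grabbing idea, here's what I imagine it looks like (invented names, locking reduced to comments):

Code:
#define NUM_CPUS    8
#define NUM_QUEUES  16

typedef struct thread { struct thread *next; } thread_t;
typedef struct { thread_t *head, *tail; } queue_t;

extern queue_t per_cpu_ready[NUM_CPUS][NUM_QUEUES];

static thread_t *dequeue(queue_t *q)
{
    thread_t *t = q->head;
    if (t) {
        q->head = t->next;
        if (!q->head)
            q->tail = NULL;
        t->next = NULL;
    }
    return t;
}

/* An idle CPU pulls work from its siblings instead of busy CPUs
   pushing work away, or everyone sharing one global queue. */
thread_t *try_steal(int self)
{
    for (int cpu = 0; cpu < NUM_CPUS; cpu++) {
        if (cpu == self)
            continue;
        /* lock(runqueue_lock[cpu]); -- required on real SMP */
        for (int q = NUM_QUEUES - 1; q >= 0; q--) {
            thread_t *t = dequeue(&per_cpu_ready[cpu][q]);
            if (t)
                return t;               /* migrate it to our queues */
        }
        /* unlock(runqueue_lock[cpu]); */
    }
    return NULL;
}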
I'm just so happy I can't tell...
Please, correct my English...
rdos wrote:No, all USER tasks have the same default priority.
The following example (from Robert Love's book "Linux Kernel Development") demonstrates the broken logic of such a scheduling approach.
Suppose we have a network driver that is not in user space and hence has higher priority. The top half of the driver (the IRQ service routine) leaves most of the work to the bottom half, so as not to disable interrupts for a long time. The bottom half is scheduled (with top priority, as mentioned). It runs, and in that time a new network packet arrives, so the top half schedules the bottom-half task again. Maybe a DDoS attack? A slow PC on a network with heavy load? It doesn't matter. With a bad scheduler the system will stop responding, because all it will do is execute postponed interrupts, without allowing user-space applications to react to them, or even to react to ANY user input, including system shutdown or disabling the overloaded network subsystem.
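To make the failure mode concrete, here is an illustrative sketch (not real kernel code): under sustained traffic the pending flag is always set again before the low-priority branch is ever reached.

Code:
extern volatile int bottom_half_pending;    /* set by the NIC IRQ */
extern void process_queued_packets(void);   /* may take a long time */
extern void run_user_space_once(void);

void strict_priority_loop(void)
{
    for (;;) {
        if (bottom_half_pending) {          /* always wins under load */
            bottom_half_pending = 0;
            process_queued_packets();       /* IRQ re-arms the flag */
        } else {
            run_user_space_once();          /* starved: never reached */
        }
    }
}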
That's just a bad network stack. My network stack does not use the top-half/bottom-half approach, but rather has a dedicated thread for handling network packets. The only thing the network IRQ does is send a signal to the network thread to wake it up. The network handling thread does have above-normal priority, but it doesn't do user-level callbacks or anything like that. It typically sends the packets to the TCP/IP socket or queues them on the remote-IPC protocol.
And I know I have no problem with continuous network traffic, as I've tested running my IPC test apps over IP, letting many threads on different computers send as many messages as they can. The system never becomes overloaded or unresponsive, and doesn't even come close to becoming overloaded.
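The pattern is simply this (a generic sketch with placeholder primitives, not the actual RDOS source):

Code:
typedef struct semaphore semaphore_t;
extern semaphore_t *net_sem;
extern void semaphore_signal(semaphore_t *sem);
extern void semaphore_wait(semaphore_t *sem);

/* The IRQ handler does the bare minimum: signal and return. */
void nic_irq_handler(void)
{
    /* acknowledge the device here (hardware-specific) */
    semaphore_signal(net_sem);
}

/* The above-normal-priority network thread does the real work. */
void network_thread(void)
{
    for (;;) {
        semaphore_wait(net_sem);
        /* drain the RX ring, then send packets to the TCP/IP
           socket or queue them on the remote-IPC protocol */
    }
}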
Lock requested to prevent this stupidity from spreading.
"Certainly avoid yourself. He is a newbie and might not realize it. You'll hate his code deeply a few years down the road." - Sortie
[ My OS ] [ VDisk/SFS ]
I just uploaded the proof that the CPU will not become overloaded even in the presence of continuous network traffic from a local network to "what does your OS look like".
That's NOT a question of the network stack or any other particular subsystem. I just showed you a practical example of a scheduling flaw, where tasks with higher priority prevent the execution of lower-priority tasks, and that prevents the normal functioning of the whole system. But it seems that you believe in the infallibility of your opinion, so I'm going to sleep, following Combuster's recommendation.
I'm not denying that such things could happen with fixed-priority scheduling. It is the responsibility of the writers of system device drivers to make sure that the scheduling algorithms do not make the system unresponsive, as in the example of the Linux network stack. It is even possible that using a gigabit switch instead of the 100 Mbit switch I used for the performance test could make the system malfunction, although I suspect it would start losing packets rather than bring down the whole system.
Besides, what is the alternative, and why would those alternatives guarantee that the system will not malfunction under heavy network load?
After all, if you let the network server thread in my design have the same priority as ordinary user tasks, this might result in packet overruns in the network card, which has really nasty effects when the NIC needs to be reset and packets need to be resent.
It's like saying "I provide a string library but I do not check for NULL; it's the programmer's responsibility to ensure no exceptional case".
While I agree with part of this idea, it usually doesn't work out in reality, and the responsibility shifts toward the "provider" instead of the "user".
And only in rare cases would a driver developer know or care much about scheduling (or other in-depth kernel stuff), even if they are kind enough to write (port) drivers for your OS.
bluemoon wrote:And only in rare cases would a driver developer know or care much about scheduling (or other in-depth kernel stuff), even if they are kind enough to write (port) drivers for your OS.
In the case of writing network drivers for RDOS, people have no choice about how to implement them. They must make the IRQ signal the network stack (an OS-level call), and the network server thread is created by the network stack, not directly by a particular NIC implementation. Therefore, I can even change priorities and add new scheduling algorithms for any present or future NIC drivers without breaking them. But I won't do that unless the current design seems problematic. It would even be possible to use the top/bottom-half approach of Linux/Windows without breaking NIC drivers.
OTOH, the physical disc drivers do create their server threads themselves, and decide their priorities.
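Roughly, the contract looks like this (all names invented for illustration; this is not the actual RDOS source):

Code:
/* The only thing a NIC driver's IRQ may do is call one stack entry: */
extern void net_stack_packet_ready(int nic_id);     /* OS-level call */

void my_nic_irq(int nic_id)
{
    /* acknowledge the device, then hand off */
    net_stack_packet_ready(nic_id);
}

/* The stack, not the driver, creates the server thread, so its
   priority or scheduling policy can change without touching any
   NIC driver. */
#define PRIO_ABOVE_NORMAL 12                        /* invented value */
extern void create_thread(void (*entry)(void), int priority);
extern void network_server_thread(void);

void net_stack_init(void)
{
    create_thread(network_server_thread, PRIO_ABOVE_NORMAL);
}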
rdos wrote:In the case of writing network drivers for RDOS
rdos wrote:Besides, what is the alternative, and why would those alternatives guarantee that the system will not malfunction under heavy network load?
The alternative is described in that book too. The solution is to provide some guaranteed amount of CPU time (for example, 10%) to other tasks as well.
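As a rough illustration of the idea (invented helpers; this is not code from the book): hand every tenth slice to the ordinary queues even if higher-priority work is pending.

Code:
typedef struct thread thread_t;
extern thread_t *pick_highest(void);    /* normal strict-priority pick */
extern thread_t *pick_ordinary(void);   /* pick from user-level queues */

thread_t *pick_with_guarantee(void)
{
    static unsigned slice;
    if (++slice % 10 == 0) {            /* reserve ~10% of slices */
        thread_t *t = pick_ordinary();
        if (t)
            return t;
    }
    return pick_highest();
}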
OK. That seems reasonable, but what are "other tasks"? All user-level tasks, or some particular user-level task?
One method that could guarantee that all threads are executed regularly is to temporarily increase the priority of unexecuted tasks every second or so. That doesn't need to be done in real time in the scheduler; rather, it can be achieved with a high-priority kernel thread that regularly scans for starved tasks and boosts them.
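A sketch of that booster thread (illustrative names only, not RDOS code):

Code:
typedef struct thr {
    struct thr *next;
    unsigned long last_ran_tick;
} thr_t;

extern thr_t *all_threads;              /* list of every thread */
extern unsigned long current_tick(void);
extern void sleep_ms(int ms);
extern void boost_temporarily(thr_t *t);/* raise, restore after a slice */

#define STARVE_TICKS 1000               /* one second at a 1 ms tick */

/* Runs at high priority, but only wakes once a second, so the scan
   cost stays out of the scheduler's hot path. */
void booster_thread(void)
{
    for (;;) {
        sleep_ms(1000);
        for (thr_t *t = all_threads; t; t = t->next)
            if (current_tick() - t->last_ran_tick > STARVE_TICKS)
                boost_temporarily(t);
    }
}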