Hi,
rdos wrote:Brendan wrote:If they want to run something like this in the background, can they do it without your OS taking them to P0 permanently?
It won't and shouldn't run on RDOS anyway, so what's the big deal? I don't want a Windoze or Linux clone that runs lots of things I've not told it to run. I want the machine to run exactly what I tell it to run, nothing else.
Oh, sorry - I forgot that your OS is only ever going to be useful for running ATM code that you write, due to too many excuses for intentionally poor design.
rdos wrote:Brendan wrote:rdos wrote:I don't predict load, I measure it.
Wow - I wish I could predict the load of unknown processes at unknown times on unknown hardware.
As I wrote, I measure load and adjust the P-state based on current load, so there is no prediction involved.
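A measure-and-adjust governor of the kind described could be sketched as follows. This is only an illustration: the function name, sampling loop, and thresholds are my assumptions, not details from the post.

```python
# Sketch of a reactive P-state governor: sample recent CPU load and
# step the P-state up or down. All names and thresholds are illustrative.

def next_pstate(current_pstate, load, max_pstate=5,
                raise_threshold=0.85, lower_threshold=0.40):
    """Return the P-state to use for the next interval.

    P0 is the fastest state; higher numbers are slower/lower-power.
    High measured load moves toward P0; low load moves away from it.
    """
    if load > raise_threshold and current_pstate > 0:
        return current_pstate - 1      # need more performance
    if load < lower_threshold and current_pstate < max_pstate:
        return current_pstate + 1      # save power
    return current_pstate

# Example: sustained 30% load at P0 drifts down toward a slower state.
state = 0
for measured_load in [0.30, 0.30, 0.30]:
    state = next_pstate(state, measured_load)
# state is now 3
```

Note that nothing here predicts anything: each decision looks only at the load that was just measured.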
Just to clarify, you're telling me that you can accurately measure the load of processes that haven't been written yet, even when the OS is running on CPUs that haven't been invented? You're telling me that you've already done all these measurements and discovered that none of the processes that will ever run on RDOS involve one or more medium priority tasks that run for a reasonably lengthy amount of time? If this is the case, I'd strongly recommend patenting whatever method you use to take measurements from the future.
rdos wrote:Brendan wrote:rdos wrote:No, because you could do the same job at the same performance with lower power, less temperature and noise just by selecting optimal P-states. If you want longer battery-life or less noise you could just tweak the parameters and select lower P-states that don't deliver the same performance. It's not a major redesign, just some parameter changes. However, the default would be to keep performance at the lowest possible power.
You can't do the same job at the same performance with lower power. You can only make a compromise between performance and power. For high priority tasks you want high performance, for low priority tasks you want lower power. Surely you can see that for medium priority tasks you want something in between?
Wrong. Because the relationship between frequency / performance and power consumption is not linear, I can do just that.
Example (from AMD Athlon):
P0 runs at 3000MHz, and consumes 125W
P5 runs at 2000MHz, and consumes 60W
If I have 30% load at P0, that corresponds to 3000/2000 * 30% = 45% load at P5. Now let's say that when the system is idle, it consumes negligible power. In the P0 state, you consume 125W during 30% of the time = 37.5W. In P5, you consume 60W during 45% of the time = 27W. Since the clocks run at a higher frequency when the processor is idle, and the idle time is longer at P0, P0 consumes more power during idle time as well, so the 10.5W difference is a minimum value.
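The duty-cycle arithmetic above can be checked with a short script. The Athlon figures are the ones quoted; idle power is assumed negligible, as the text states.

```python
# Verify the duty-cycle power comparison from the quoted Athlon example.
# Assumption from the text: idle power is negligible in both states.

p0_freq, p0_power = 3000, 125   # MHz, W
p5_freq, p5_power = 2000, 60    # MHz, W

load_p0 = 0.30
# The same work at a lower clock occupies proportionally more time.
load_p5 = load_p0 * p0_freq / p5_freq      # 0.45

avg_power_p0 = p0_power * load_p0          # 37.5 W
avg_power_p5 = p5_power * load_p5          # 27.0 W
saving = avg_power_p0 - avg_power_p5       # 10.5 W
```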
For the way you measure load, if you have 30% load at P0 then you've got one or more tasks that are constantly blocking/unblocking (e.g. IO bound tasks). Let's assume it's some sort of network service - it blocks until a packet arrives, processes the packet, then sends a reply packet and blocks again. Let's say it's handling 1000 packets per second and (at P0) takes 300 us to handle each packet. Call that "300 us of latency". You switch to P5, and it's still handling 1000 packets per second, but now it takes 450 us to handle each packet. Call that "50% higher latency". Is the performance the same?
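The latency point can be made concrete with the same numbers; the packet rate and per-packet service time are the figures from the paragraph above.

```python
# Throughput vs latency at two P-states for an IO-bound network service.
packets_per_sec = 1000          # unchanged by the P-state
service_time_p0_us = 300
# At 2000 MHz instead of 3000 MHz the same work takes 1.5x as long.
service_time_p5_us = service_time_p0_us * 3000 / 2000       # 450 us

# Throughput looks identical (one reply per arriving packet), but the
# per-packet latency rises by 50% - which is what the load measurement
# alone does not capture.
latency_increase = service_time_p5_us / service_time_p0_us - 1   # 0.5
```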
rdos wrote:Brendan wrote:Users don't need to select priorities (although it'd be nice if they could if/when they want to). Software should tell the scheduler what it wants. A thread that's responsible for updating the user interface should be relatively high priority, a thread that does spell checking while the user types could be medium priority, a thread that regenerates search indexes could be low priority. Whoever wrote the code can use reasonable defaults.
Realistically, everyone wants their code to perform well, so you will probably see an inflation in priorities.
Realistically, most people don't want less important work screwing up their interactive threads. For a simple example, you might have a word processor with a medium priority thread doing spell checking, a low priority thread that saves an automatic backup every 5 seconds, and a higher priority thread for the user interface. When the user presses a key you want to switch to the higher priority user interface thread immediately - you don't want to wait for both the automatic backup thread and the spell checker to finish their time slice before the user interface thread gets any CPU time.
For a test, you should be able to have 2000 low priority tasks all doing heavy calculations (or just wasting CPU time in a loop if you like - sooner or later everyone writes a "dummy load" process), and the user shouldn't be able to notice because the GUI and all the applications should be just as fast and responsive as they are when there's nothing else going on. For a bad system (e.g. round robin scheduler with no task priorities) 2000 tasks will cripple the entire OS.
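A throwaway "dummy load" worker of the sort described might look like this. Python here is only a sketch, and the `os.nice` call is a POSIX detail I've added to mark the worker as low priority, not something from the post.

```python
import os
import time

def burn(seconds):
    """Spin the CPU for `seconds` at the lowest priority we can request.

    Run a couple of thousand of these (e.g. via multiprocessing) against
    a priority-aware scheduler: the GUI should remain responsive.
    """
    try:
        os.nice(19)                 # POSIX: politely ask for lowest priority
    except (AttributeError, OSError):
        pass                        # unavailable (e.g. Windows) or denied
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        pass                        # pure busy-wait: the "dummy load"
```

With a round-robin scheduler and no priorities, the same workers would starve everything else.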
To prevent "priority inflation", I give each process a maximum limit. When one process starts another process, it sets the new process' maximum limit to anything equal to or lower than its own limit. The OS might have a "virtual screen" process that is limited to "max. priority = 0x30", which starts a GUI process that is limited to "max. priority = 0x40", which starts a text editor which is limited to "max. priority = 0x48". The text editor can spawn a low priority thread (e.g. with "priority = 0xC0") or a relatively high priority thread (e.g. "priority = 0x50"); but if the text editor tries to spawn a very high priority thread (e.g. "priority = 0x20") then the kernel limits the new thread's priority and the thread ends up being "priority = max. priority for process = 0x48". Of course something like this won't work well if the OS only has 3 priorities (high, medium or low) - you'd run out of priorities and end up with all your applications running as "low priority".
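The capping rule is easy to state in code. The numeric values are the ones from the example above; note that in this scheme a lower number means a higher priority, so the clamp is a numeric `max`. The function name is mine, not from the post.

```python
def clamp_priority(requested, process_max):
    # Lower number = higher priority. A thread may never be numerically
    # lower (i.e. more privileged) than its process' maximum limit.
    return max(requested, process_max)

# Text editor from the example, limited to "max. priority = 0x48":
editor_max = 0x48
low    = clamp_priority(0xC0, editor_max)   # allowed: 0xC0
high   = clamp_priority(0x50, editor_max)   # allowed: 0x50
capped = clamp_priority(0x20, editor_max)   # too privileged: clamped to 0x48
```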
Note: It doesn't matter much what the threads are. Each process always has some sort of "user" (even if the "user" is another process or another computer on the network, and not a human), and as soon as you start looking at multi-threaded processes (necessary for processes that take advantage of multi-CPU) you start wanting to use different priorities for threads that do different work in the same process.
Cheers,
Brendan