We have a communication error here, so I'll try to describe the case more clearly.
Imagine there are three applications, named A, B and C, plus one kernel K, running on a single-core CPU.
For plain time-sharing scheduling as you proposed, suppose the PIT runs at 1 kHz (a 1 ms time slice).
We further assume the following scenario as one possible case:
A: A shell
B: Alarm clock
C: Prime number crusher
So, A gets scheduled and runs for 1 ms. However, A is waiting for user input, and without any mechanism to give up (yield) its time slice, the machine does nothing until the next scheduling tick.
Then at T=1 ms, K gets scheduled. It loops through all the drivers, but finishes quickly since there aren't many events; again, with no way to yield its time slice, the machine does nothing until the next tick.
At T=2 ms, B gets scheduled. It checks the current time and finds it isn't alarm time yet, so once more the machine idles out the rest of the slice.
At T=3 ms, K gets scheduled again, with the same result: the drivers finish quickly and the remainder of the slice is wasted.
At T=4 ms, C gets scheduled and uses its entire time slice for calculation.
As you can see, the machine spends less than 1/6 of its time doing useful work, and the more processes there are, the less efficient the machine becomes.
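A quick back-of-the-envelope simulation (Python just for brevity) makes the arithmetic concrete. It assumes the cycle repeats as A, K, B, K, C, K (a kernel slice after each app, which matches the "less than 1/6" figure), and the ~10 µs cost for an idle task to notice it has nothing to do is a made-up illustrative number:

```python
SLICE_US = 1000   # 1 ms time slice from a 1 kHz PIT
CHECK_US = 10     # assumed cost for an idle task to see it has no work

cycle = ["A", "K", "B", "K", "C", "K"]
busy = {"C"}      # only the prime number crusher actually computes

# Without yield: every task burns its whole slice, working or not.
total = SLICE_US * len(cycle)
useful = SLICE_US * sum(1 for t in cycle if t in busy)
print(f"no yield:   {useful}/{total} us useful = {useful / total:.1%}")

# With yield: an idle task runs for ~CHECK_US, then gives the CPU
# back, so C gets scheduled again almost immediately.
total_y = sum(SLICE_US if t in busy else CHECK_US for t in cycle)
useful_y = useful
print(f"with yield: {useful_y}/{total_y} us useful = {useful_y / total_y:.1%}")
```

Without yield, utilisation is 1000/6000 µs, i.e. the 1/6 above; with yield it jumps to roughly 95%, because C gets nearly the whole cycle.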
Anyway, when it comes to scheduling, Brendan describes it far better than I can.
