Conditional task fragmentation based processes
Posted: Sun Nov 18, 2012 2:52 am
I've been speculating on this concept for a while. The core principle is that tasks don't actually run within confined regions of memory, but are instead broken up into fragments of code that the kernel constantly manages and patches. Ideally this would remove the need for dynamically loaded libraries and other complexities, though it would introduce some complications of its own.
Consider it from the perspective of a single running task: the kernel has no reason to split the process up into fragments of code at all. From the perspective of two running tasks, fragmentation might very well be unnecessary too, unless the kernel infers at some point that both tasks share a common set of instructions. This is where my idea comes into play.
Assume application A and application B both implement functionality that matches the semantics of the following C function:
(which happens to be a comparison function for qsort)
Code:
/* Comparison callback for qsort(): returns *a - *b. */
int compare_ints(const void *a, const void *b) {
    return (*(const int*)a - *(const int*)b);
}
This can be represented as the following in x86_64 assembly (this assumes the x86_64 calling convention of the SysV ABI, where the two pointer arguments arrive in %rdi and %rsi):
Code:
mov (%rdi),%rax    # load *a (first argument)
sub (%rsi),%rax    # subtract *b (second argument); result in %rax
retq
Let's bang out a byte count now:
- mov - register destination, one memory operand, REX.W prefix = three bytes. Proof: 0x48, 0x8B, 0x07
- sub - register destination, one memory operand, REX.W prefix = three bytes. Proof: 0x48, 0x2B, 0x06
- retq - single byte. Proof: 0xC3
That's seven bytes for the whole fragment.
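Just to sanity-check the sharing idea against those exact bytes, here's a minimal userspace sketch (plain C, not kernel code; the names and the FNV-1a hash are just things I picked for illustration) of the kind of check the kernel would have to do before collapsing A's and B's copies into one. A hash gives a cheap first-pass lookup into a fragment table; the memcmp is the authoritative check.
Code:
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* The seven bytes counted above, as the kernel would see them in each
 * task's text. */
static const uint8_t frag_a[] = { 0x48, 0x8B, 0x07, 0x48, 0x2B, 0x06, 0xC3 };
static const uint8_t frag_b[] = { 0x48, 0x8B, 0x07, 0x48, 0x2B, 0x06, 0xC3 };

/* Cheap first pass: hash the fragment so most non-matches are rejected
 * without a full byte comparison.  FNV-1a chosen purely for brevity. */
static uint64_t frag_hash(const uint8_t *p, size_t len)
{
    uint64_t h = 0xcbf29ce484222325ULL;
    for (size_t i = 0; i < len; i++) {
        h ^= p[i];
        h *= 0x100000001b3ULL;
    }
    return h;
}

int main(void)
{
    if (frag_hash(frag_a, sizeof frag_a) == frag_hash(frag_b, sizeof frag_b)
        && memcmp(frag_a, frag_b, sizeof frag_a) == 0)
        printf("byte-identical: one shared copy could be mapped into both tasks\n");
    return 0;
}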
Now I understand there are tons of issues with respect to this generalization. For the sake of argument, let's assume the kernel had to perform semantic checks to ensure the operations really were equivalent. There are still other things that could break this, mainly position-dependent code, which for the sake of argument the kernel would also have to check. I'm sure there are tons of others, but those are merely politics.
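To make the position-dependence point concrete (the addresses below are completely made up): even if A and B contain the exact same call at the source level, a PC-relative call encodes its rel32 displacement from the call site, so the bytes stop matching the moment the two fragments sit at different addresses, and the kernel would need relocation information to see through that.
Code:
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Made-up addresses: the same callee, reached from a call site inside
     * task A's fragment and from one inside task B's fragment. */
    uint64_t target = 0x400800;
    uint64_t site_a = 0x400100;
    uint64_t site_b = 0x400300;

    /* An E8 call encodes rel32 relative to the end of the 5-byte
     * instruction, so identical source-level calls yield different bytes
     * once placed at different positions. */
    int32_t rel_a = (int32_t)(target - (site_a + 5));
    int32_t rel_b = (int32_t)(target - (site_b + 5));

    printf("rel32 from A's fragment: 0x%08x\n", (uint32_t)rel_a);
    printf("rel32 from B's fragment: 0x%08x\n", (uint32_t)rel_b);
    return 0;
}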
But this process would essentially remove the requirement for the following things:
(and quite possibly a few other things)
- Dynamic linking
- System calls
- Binary compatibility
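For what it's worth, the per-fragment bookkeeping this scheme would need doesn't look huge. Here's a rough sketch of the state the kernel might track for each shared fragment (every name and field below is hypothetical, just to show the shape of it):
Code:
#include <stddef.h>
#include <stdint.h>

/* Hypothetical per-fragment record; every name and field here is invented
 * for illustration, not an existing interface. */
struct code_fragment {
    uint64_t content_hash;            /* hash of the instruction bytes        */
    size_t   length;                  /* fragment size in bytes (7 above)     */
    void    *shared_copy;             /* the single kernel-managed copy       */
    unsigned refcount;                /* number of tasks currently mapping it */
    unsigned position_dependent : 1;  /* needs per-task patching if set       */
};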