Some while back someone here asked about
exokernels, and while I meant to get back to that
thread and answer, I kept putting it off. Anyway,
here is my understanding of the concept.
The primary goal of the exokernel is to maximize
efficiency by eliminating abstractions; to this
end, it provides *no* application-level services,
acting solely to multiplex hardware resources
between independent processes. All actual
hardware access would be through libraries, as
would services like UI, IPC, networking, etc.
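To make that split concrete, here is a rough C
sketch. The exo_* call is invented for
illustration (loosely in the spirit of MIT's
Aegis/ExOS, not its real interface), and the
"disk" is a stubbed array so the sketch actually
compiles and runs; the point is only the division
of labor, with the kernel protecting raw blocks
and the file abstraction living in library code
linked into the application.

    #include <stdio.h>
    #include <string.h>

    #define NBLKS 64
    #define BLKSZ 512
    static char disk[NBLKS][BLKSZ]; /* stand-in for the raw disk */

    /* "Kernel" side: only bounds/ownership checks and raw block
     * moves. In a real exokernel this would be a protected
     * system call. */
    static int exo_read_block(unsigned blkno, void *buf)
    {
        if (blkno >= NBLKS)
            return -1;
        memcpy(buf, disk[blkno], BLKSZ);
        return 0;
    }

    /* "Library OS" side: the file abstraction is ordinary user
     * code linked into the application; the kernel never sees
     * a "file", only block accesses. */
    struct file { unsigned first_blk, nblks; };

    static int lib_read(const struct file *f, unsigned blk_off,
                        void *buf)
    {
        if (blk_off >= f->nblks)
            return -1;
        return exo_read_block(f->first_blk + blk_off, buf);
    }

    int main(void)
    {
        strcpy(disk[3], "hello from block 3");
        struct file f = { 3, 2 };
        char buf[BLKSZ];
        if (lib_read(&f, 0, buf) == 0)
            printf("%s\n", buf);
        return 0;
    }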
If it helps any, you might want to consider it as
something like a standalone version of VMware:
while it can allow multiple, independent systems
to share the same resources, it does not itself
do anything else. You can run anything you want
under it, transparently, with each system thinking
it has the whole machine to itself; no guest can
ask VMware for an OS service directly.
The goal is to provide only the minimum support
for the programs running, and nothing more, and
the services can be custom-tailored to the
specific application. For example, an HTTP server
may have only those libraries needed to implement
networking, file reading, and server-side
scripting, and the file library used can be one
specifically optimized for queuing pages and their
concomitant images, scripts,
etc. for serving to the network client.
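As a toy illustration of that sort of tailoring
(all names here are hypothetical, not any real
server's API): a file library that, when a page is
requested, also queues the resources the page is
known to pull in, so the library can schedule one
batch of reads instead of waiting for the client
to ask for each resource separately.

    #include <stdio.h>

    #define MAXDEPS 4
    #define MAXQ    16

    struct page {
        const char *path;
        const char *deps[MAXDEPS]; /* resources the page pulls in */
    };

    static const char *queue[MAXQ];
    static int qlen;

    static void enqueue(const char *path)
    {
        if (qlen < MAXQ)
            queue[qlen++] = path;
    }

    /* Queue the page AND everything it references in one go. */
    static void request_page(const struct page *p)
    {
        enqueue(p->path);
        for (int i = 0; i < MAXDEPS && p->deps[i]; i++)
            enqueue(p->deps[i]);
    }

    int main(void)
    {
        struct page index =
            { "/index.html", { "/logo.png", "/style.css" } };
        request_page(&index);
        for (int i = 0; i < qlen; i++)
            printf("queued: %s\n", queue[i]);
        return 0;
    }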
IMHO, the reasoning behind this has some serious
flaws. First, few applications can really benefit
from such hyperoptimization; the speedup would be
negligible in most cases, and can easily be lost
due to the added overhead of having multiple,
independent libraries for common services, each
of which may be swapped in and out as other
processes run. Also, if more than one process is
using the same code, then there would be multiple
copies of identical code running in memory. While
it would be possible to multiplex libraries in
the same way as hardware, it would mean adding to
the system just the sort of basic support code
that the exokernel was meant to avoid - and would
require two or more kernel context switches to
use the libraries.
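Rough arithmetic for that last point; the cycle
counts below are assumptions picked for
illustration, not measurements from any real
system.

    #include <stdio.h>

    int main(void)
    {
        /* Illustrative assumptions, not measurements. */
        double direct_call = 50;   /* call into a linked-in library */
        double ctx_switch  = 2000; /* one kernel context switch     */

        /* Reaching a kernel-multiplexed library and returning
         * costs two switches on top of the call itself. */
        double shared_call = direct_call + 2 * ctx_switch;

        printf("private library call: %6.0f cycles\n", direct_call);
        printf("multiplexed call:     %6.0f cycles (%.0fx slower)\n",
               shared_call, shared_call / direct_call);
        return 0;
    }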
More importantly, applications, especially
interactive ones, do not exist in a vacuum. IPC
overhead can quickly swamp a system in which
user-level IPC libraries have to synchronize back
and forth repeatedly.
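A back-of-envelope sketch of why that bites
(again, the figures are assumed for illustration
only): if every small synchronization between two
user-level protection domains needs one kernel
crossing to deliver the message and another for
the reply, the crossings dominate the useful work.

    #include <stdio.h>

    int main(void)
    {
        /* Assumed figures, for illustration only. */
        int    msgs     = 1000; /* small sync messages per event */
        double crossing = 1500; /* cycles per kernel crossing    */
        double work     = 200;  /* useful cycles per message     */

        double total  = msgs * (2 * crossing + work); /* there+back */
        double useful = msgs * work;

        printf("useful work: %.1f%% of total cycles\n",
               100.0 * useful / total);
        return 0;
    }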
Finally, the overhead of writing and maintaining
multiple libraries to do more or less the same
task runs contrary to the original purpose of an
OS: to provide common services once, in one
place, so that every application need not
reimplement them.
My analysis is that while it would be of value in
dedicated servers, where a well-optimized library
is used and little or no code is shared, it is
inefficient as a general-purpose design. I have
been told that newer designs of this type answer
these issues, but I haven't seen those designs
myself, so I cannot say. The best thing to do is
to look at the pages on exokernels at MIT (sorry,
I don't have a URL handy right now).
Exokernels answer (sorry for long delay)
RE: Exokernels answer (sorry for long delay)
>On 2002-02-18 12:11:15, Schol-R-LEA wrote:
>Some while back someone here asked about
>exokernels, and while I meant to get back to that
>thread and answer, I kept putting it off. Anyway,
>here is my understanding of the concept.
>
>The primary goal of the exokernel is to maximize
>efficiency by eliminating abstractions; to this
>end, it provides *no* application-level services,
>acting solely to multiplex hardware resources
>between independent processes. All actual
>hardware access would be through libraries, as
>would services like UI, IPC, networking, etc.
So application development would be similar to
development for DOS?
>
>If it helps any, you might want to consider it as
>something like a standalone version of VMware:
>while it can allow multiple, independent systems
>to share the same resources, it does not itself
>do anything else. You can run anything you want
>under it, transparently, with each system thinking
>it has the whole machine to itself; no guest can
>ask VMware for an OS service directly.
>
>The goal is to provide only the minimum support
>for the programs running, and nothing more, and
>the services can be custom-tailored to the
>specific application. For example, an HTTP server
>may have only those libraries needed to implement
>networking, file reading, and server-side
>scripting, and the file library used can be one
>specifically optimized for queuing pages and their
>concomitant images, scripts,
>etc. for serving to the network client.
For a normal application it would be a step back, right?
One advantage of Windows and Linux over DOS is the existence
of a real API.
>
>IMHO, the reasoning behind this has some serious
>flaws. First, few applications can really benefit
>from such hyperoptimization; the speedup would be
>negligible in most cases, and can easily be lost
>due to the added overhead of having multiple,
>independent libraries for common services, each
>of which may be swapped in and out as other
>processes run. Also, if more than one process is
>using the same code, then there would be multiple
>copies of identical code running in memory. While
>it would be possible to multiplex libraries in
>the same way as hardware, it would mean adding to
>the system just the sort of basic support code
>that the exokernel was meant to avoid - and would
>require two or more kernel context switches to
>use the libraries.
Two for a single library call? Sounds like this would
invalidate the speed increase of an exokernel ...
>
>More importantly, applications, especially
>interactive ones, do not exist in a vacuum. IPC
>overhead can quickly swamp a system in which
>user-level IPC libraries have to synchronize back
>and forth repeatedly.
Well, one way to do IPC quickly seems to be COM. Look
at DirectX!
>
>Finally, the overhead of writing and maintaining
>multiple libraries to do more or less the same
>task runs contrary to the original purpose of an
>OS: to provide common services once, in one
>place, so that every application need not
>reimplement them.
>
>My analysis is that while it would be of value in
>dedicated servers, where a well-optimized library
>is used and little or no code is shared, it is
>inefficient as a general-purpose design. I have
>been told that newer designs of this type answer
>these issues, but I haven't seen those designs
>myself, so I cannot say. The best thing to do is
>to look at the pages on exokernels at MIT (sorry,
>I don't have a URL handy right now).
Then I'll check those new designs out!
For now, I'll stick with a microkernel design ...
Thanks!
The Legend