I know this thread is old, but I feel compelled to join in. I hope the people involved haven't abandoned it entirely. Frankly, why should we care about the age here - the argument is as valid as ever, is it not?
Anyways. I am currently investigating exokernels, partly out of personal and professional curiosity and partly because I am drafting the design of an OS.
What mybura says is very true. Indeed, on Windows and Linux, applications from bigger vendors that do care about performance, but hit the "abstraction wall", go about circumventing these abstractions without breaking them. For instance, database servers such as MySQL and Postgres store their tables in a single-file-per-table manner in order to perform more efficiently. This is already akin to what a libOS does with, say, a disk drive - it offers to manage a region of storage for its clients and gives them the abstraction they prefer. In the case of a database engine, it abstracts the file as a table behind, for instance, an SQL query API. Conveniently for its own implementation, it can then traverse the file in random-access or linear-access fashion, without having to "peel the onion" much before it gets the data it needs.
However, there is still a catch which, I believe, makes the difference and sets this apart from a true libOS on an exokernel. Invariably and inevitably, an application-level database engine, even though it operates on a file as linearly and randomly accessible storage, STILL has to go through another runtime level of abstraction: the filesystem below. Modern filesystems deal with issues of their own, such as file fragmentation, and with the level of indirection required to map a user-given filename to the final location - say, on a disk platter - that the kernel's disk driver can work with. File fragmentation alone is a serious performance showstopper, since it forces the controller to wait through extra seeks and platter rotations before the next piece of your fragmented file comes under the read head.

Also, those system administrators or developers who optimistically map their table files into memory sometimes run into an interesting and well-known issue: because of paging, when RAM is consumed or congested enough - say, by another server process - their table file is swapped out (entirely or partially) to disk and held there, having to be paged back in (again entirely or partially) on demand. That adds to the total work compared to simply reading the table off the disk and not bothering with memory mapping at all. Of course, database vendors know how to overcome such issues, but it comes at the cost of writing and maintaining more code, and thus more debugging.
In light of all this, I would add that exokernels are a novel approach and one worthy of further interest and realization. The system-in-a-file described above can be seen as a mere "emulation" of exokernel principles, but emulation is a level of abstraction too, of course. An exokernel would give you all those things you try to emulate at a much lower cost, and have you write, and thus debug, less code. Add to this the fact that, as the exokernel papers observe, most of the OS ends up running in user space, and you get a security bonus essentially for free.
Another way of looking at and into an exokernel - one that touches on your filesystem-related problems - is to think of it as a resource multiplexer and nothing else. Now, DOS was a non-multitasking system: it didn't have to multiplex hardware, and the programs that ran on it could do whatever they wanted in terms of accessing resources. Part of why that was possible lay in the very non-concurrent, non-reentrant nature of DOS itself. If only one thread of code runs, and it saves hardware state at startup and restores it at exit, then it may indeed monopolize and manage the hardware with no multiplexing required. It is because we design systems that give applications concurrency that the need for such multiplexing - safely sharing limited common resources - arises. A multiplexer is just that: a layer that lets multiple tasks share a resource in a safe (i.e. secure, stable, predictable) fashion. That is all an exokernel should do, ideally, and if it can do that, it is successful. So, if your system multiplexes permanent storage in a safe manner, you have yourself an exokernel. Stack a generic, all-purpose libOS - your filesystem API, one that fits 99% of applications - on top of it, and you have fulfilled your original requirement while sticking to the exokernel way of doing things.
Sorry, I know I deviated somewhat from your original ponderings, but I hope you'll forgive me, and maybe my ramblings light a bulb in your head after all. I sure benefit from reading pretty much anything on exokernels, here and elsewhere, partly because they're a novel thing that doesn't get much exposure (yet?).