If you haven't already done so, I would recommend reading some of the research papers on capabilities (and in the case of PeterX, those on exokernels) before proceeding. I suspect that there are some significant misunderstandings on everyone's part (including my own, since I am certainly no expert on either topic).
I have always been of the opinion that the debate over ACLs vs. Capabilities is itself missing the real point, which is that they are complementary rather than contradictory. Capabilities don't really cover the kinds of security ACLs are most often used for (e.g., user access - logins, action confirmations, and the like), while ACLs are sometimes misused for things which would be better handled with capabilities (e.g., anything which doesn't actually require user intervention - which most systems barely have any facilities for at all). ACLs are primarily best for user security; capabilities are for inter-process security.
At least, that is how it seems to me.
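To make the distinction concrete, here is a minimal sketch in C. All of the names (acl_check, cap_check, the structs) are hypothetical rather than taken from any particular system; the only point is where the authority lives. An ACL check looks the caller's identity up in a list attached to the object, while a capability check only asks whether the token being presented names the object and carries the needed rights.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

/* --- ACL side: the *object* carries a list of (principal, rights). --- */
typedef struct {
    uint32_t principal;   /* user or group id */
    uint32_t rights;      /* bitmask: READ, WRITE, ... */
} acl_entry_t;

typedef struct {
    acl_entry_t entries[16];
    size_t      count;
} acl_t;

/* Access is decided by looking the caller's identity up in the object's list. */
bool acl_check(const acl_t *acl, uint32_t caller, uint32_t wanted)
{
    for (size_t i = 0; i < acl->count; i++)
        if (acl->entries[i].principal == caller &&
            (acl->entries[i].rights & wanted) == wanted)
            return true;
    return false;
}

/* --- Capability side: the *caller* holds an unforgeable token. --- */
typedef struct {
    uint64_t object_id;   /* which object this token names */
    uint32_t rights;      /* what the holder may do with it */
} capability_t;

/* Access is decided purely by the token presented; no identity lookup. */
bool cap_check(const capability_t *cap, uint64_t object, uint32_t wanted)
{
    return cap->object_id == object && (cap->rights & wanted) == wanted;
}
```

In a real capability system the token would of course be kept unforgeable by the kernel rather than sitting in user memory as a plain struct - which is where the c-lists discussed below come in.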
The other aspect of Capabilities which this discussion so far has missed is that they are (IMAO) better at isolating concerns. The process granting the capability doesn't need to keep a list of the tokens it has granted; only the process holding the token has that information, and the token can be passed to a different process without any intervention from the grantor. This does open up a new way for the Confused Deputy problem to arise (if process A grants B a token, and B passes it to a process it shouldn't, then A becomes a Confused Deputy), but the system is supposed to have ways of checking for that - though I would have to re-read those papers I mentioned to recall how. The point is, the information about who can access what is localized with the one who needs the service, not the one providing it.
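For anyone who wants to play with the delegation idea on a stock Unix system, file descriptors already behave much like capabilities: an open descriptor can be handed to another process over a Unix-domain socket with SCM_RIGHTS, and the process that originally opened the file keeps no record of who now holds a duplicate. A minimal sending-side sketch (error handling omitted):

```c
#include <sys/socket.h>
#include <sys/uio.h>
#include <string.h>

/* Send an open file descriptor (in effect, a capability to that file)
 * to whoever is on the other end of the Unix-domain socket 'sock'.
 * The sender keeps no list of recipients; the kernel simply installs
 * a duplicate descriptor in the receiving process. */
void send_fd(int sock, int fd)
{
    char dummy = 'x';
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };

    union {
        char           buf[CMSG_SPACE(sizeof(int))];
        struct cmsghdr align;   /* ensures proper alignment */
    } ctrl;
    memset(&ctrl, 0, sizeof(ctrl));

    struct msghdr msg = {
        .msg_iov        = &iov,
        .msg_iovlen     = 1,
        .msg_control    = ctrl.buf,
        .msg_controllen = sizeof(ctrl.buf),
    };

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type  = SCM_RIGHTS;
    cmsg->cmsg_len   = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

    sendmsg(sock, &msg, 0);
}
```

The receiver does the mirror-image recvmsg and ends up with its own descriptor number, which it is free to pass along again - exactly the delegation, and the delegation risk, described above.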
To put it another way, the difference between an ACL system and a capability system is who keeps a list of what. The c-lists usually used for this are associated with the process holding them, and even that process can only access them through the kernel. This is why quajects are often compared to c-lists (and in Synthesis, implemented using c-list-like kernel structures): in both cases, they are lists of operations to be performed, either as one-time keys to be applied or as functions to be invoked, or both.
Since only the kernel can create the tokens on the grantor's behalf, or move them from one holder to another, the points of failure are limited. This can be done with ACLs too, as bzt states, but in practice it never seems to be done - the programs are left to manage the lists themselves.
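For anyone who hasn't read the papers, here is a rough sketch of what a kernel-held c-list might look like; the names and fixed sizes are made up, and a real system would also need revocation and rights-narrowing on transfer. The point is just that user space never touches the entries themselves, only small integer indices into them (much like file descriptors), and only kernel code can mint or move an entry.

```c
#include <stdbool.h>
#include <stdint.h>

#define CLIST_SIZE 64

/* One entry in a process's capability list.  The entry itself lives in
 * kernel memory; user space only ever sees the index. */
typedef struct {
    uint64_t object_id;  /* kernel object this capability names */
    uint32_t rights;     /* rights bitmask */
    bool     in_use;
} cap_entry_t;

typedef struct {
    cap_entry_t caps[CLIST_SIZE];
} clist_t;

/* Kernel-internal: mint a new capability into a process's c-list and
 * return the index the process will use to refer to it. */
int clist_grant(clist_t *cl, uint64_t object_id, uint32_t rights)
{
    for (int i = 0; i < CLIST_SIZE; i++) {
        if (!cl->caps[i].in_use) {
            cl->caps[i] = (cap_entry_t){ object_id, rights, true };
            return i;
        }
    }
    return -1;  /* c-list full */
}

/* Kernel-internal: copy a capability from one process's c-list to
 * another's.  Only this code path can do it; neither process ever sees
 * the entry's contents, so the token stays unforgeable. */
int clist_transfer(clist_t *from, int index, clist_t *to)
{
    if (index < 0 || index >= CLIST_SIZE || !from->caps[index].in_use)
        return -1;
    return clist_grant(to, from->caps[index].object_id,
                           from->caps[index].rights);
}
```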
As for exokernels, I would hesitate to call anything which isn't a hypervisor an exokernel; indeed, the real relationship is sort of reversed, as most hypervisors are in effect a superset of what an exokernel can do (though the ability to share memory between different virtual machines is not common for hypervisors, as the main use of those is for multiplexing hardware between entire operating systems, and for compartmentalization of them to allow greater sandboxing).
The real difference between them is in how they are used, however. The whole point of an exokernel is that multiplexing the hardware is the only thing it does. The ability to share library memory is secondary to this, a memory-saving hack which is not really all that valuable on modern hardware with its copious physical memory. It is still worth doing, but it is not the defining quality of an exokernel, IMAO.
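To illustrate what "multiplexing only" means, here is a purely hypothetical sketch of the sort of interface an exokernel exposes - none of these calls are taken from a real exokernel. Note what is missing: no files, no address spaces, no sockets; just raw resources handed out securely, with the library OS linked into each application expected to build the abstractions on top.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical exokernel interface: every call hands out (or takes back)
 * a slice of raw hardware.  The library OS linked into each application
 * builds its own abstractions over these. */

/* Physical memory: allocate and free individual frames by physical page
 * number; the libOS builds its own page tables and heap on top. */
int64_t exo_alloc_page(void);             /* returns physical page number */
int     exo_free_page(int64_t ppn);

/* CPU: ask for a time slice; the libOS does its own thread scheduling
 * inside that slice. */
int     exo_yield_until(uint64_t deadline_ticks);

/* Disk: allocate raw block extents; the libOS implements whatever file
 * system it likes inside them. */
int64_t exo_alloc_extent(size_t nblocks);  /* returns starting block */
int     exo_read_blocks(int64_t start, size_t n, void *buf);
int     exo_write_blocks(int64_t start, size_t n, const void *buf);

/* Network: install a packet filter describing which incoming frames
 * belong to this application; the libOS brings its own TCP/IP stack. */
int     exo_install_filter(const uint8_t *filter_prog, size_t len);
```

All the kernel tracks is who owns which page, block, or filter; everything that looks like an operating-system abstraction lives in the application itself.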
I would also add that exokernel systems as they are usually described are best suited to servers and other systems with more or less fixed configurations - almost an updated form of batch processing, as it were, in that the programs mostly run in isolation with little or no data shared between them (at least, not in system memory - they can easily share secondary storage, of course).
I would even go so far as to say that the main reason we haven't seen an exokernel-esque hypervisor built specifically for running containerized programs, rather than rump Linux kernels loaded via Docker or Vagrant, is that a container is more than just the program being run: it also carries all of the extra (and often extraneous) things which most modern programs need in order to run, as well as the configuration and development tools that go with them, set up in a known-good state when the container is created. An exokernel designer would need to re-invent all of those wheels, and while the end result would probably be worth it, you would then need to convince people to adopt it... which, as I have often stated, is a far harder problem than any of the technical aspects.
Which is also why exokernels present a bit of a challenge for desktop use, especially if you mean to have a GUI. While you could use an exokernel for a GUI desktop system... why would you? The typical desktop manager and environment has to be able to stick its fingers into almost everything a GUI-oriented program does, whether that is compositing, the widget library, or even basic facilities such as the clipboard. While you could do all of this on top of the exokernel, you would in essence be running a second OS on top of the first - in other words, it would be more of a conventional hypervisor.
While there may be ways to make this work - I have some ideas in this direction, regarding applying virtualization to some of the Synthesis kernel's ideas - for an otherwise conventional system, I am not certain it would be worth it. Something to experiment with, to be sure, and I do intend to, but I won't be surprised if it turns out to be a dead end.
And now, I await word from a certain other member, who will no doubt go on a tear about the Evils of Virtualization and how pointless and wasteful it is.