Why is this discussion silly?
Silly because, as Gigasoft points out, it ignores the easily checked facts. There are a host of theoretical explanations as to why it cannot possibly work, ignoring the simple fact that it did. There are demands for citations to prove it works, but a glaring lack of citations to support the opposite opinion.
It is, as I have said before, the equivalent of scientists proving that bumble bees can't fly. And yet they do.
Eppur si muove ("and yet it moves").
Many operating systems contain a mixture of 16-, 32-, and 64-bit code. That doesn't make them 16-bit, 32-bit, or - necessarily - 64-bit. I guess if we are to define the "bitness" of an operating system then we have to determine the minimum processor requirements. Can the kernel run on an x-bit processor? In the case of OS X we had:
1. A kernel that would run on 32-bit or 64-bit processors.
2. A kernel that would only run on 64-bit processors.
To my mind the only sensible way to categorize those kernels is to do as Apple did and call kernel 1 a 32-bit kernel and kernel 2 a 64-bit kernel. Kernel 1 ran 32-bit device drivers; kernel 2 required 64-bit device drivers (which in many cases didn't exist). Kernel 1 had the ability, when running on a 64-bit processor, to run 64-bit user programs, but the kernel itself was still, to all intents and purposes, 32-bit. You could take a hard disk with that kernel installed from a 64-bit machine, where it would run 64-bit user programs, to a 32-bit machine, where it wouldn't. But that didn't change the kernel: it only required a 32-bit machine to run, so it was a 32-bit kernel.
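To make the distinction concrete, here is a minimal sketch (not anyone's canonical test) of a 64-bit user program asking about the machine it is running on via OS X's sysctl interface. sysctlbyname() and the hw.cpu64bit_capable key are real interfaces on those systems; exactly what uname() reports under a 32-bit versus 64-bit kernel boot is my assumption about the output.

    /* Sketch: a user process reporting kernel arch string, CPU capability,
     * and its own word size on OS X. */
    #include <stdio.h>
    #include <sys/sysctl.h>
    #include <sys/utsname.h>

    int main(void) {
        struct utsname u;
        uname(&u);
        /* Assumption: on the OS X releases discussed here this string was
         * commonly used to tell a 32-bit kernel boot (i386) from a
         * 64-bit one (x86_64). */
        printf("kernel machine string: %s\n", u.machine);

        int cpu64 = 0;
        size_t len = sizeof(cpu64);
        /* 1 if the processor itself can execute 64-bit code. */
        sysctlbyname("hw.cpu64bit_capable", &cpu64, &len, NULL, 0);
        printf("CPU 64-bit capable:    %d\n", cpu64);

        /* The pointer width of this process shows the userland bitness,
         * independent of how the kernel was booted. */
        printf("this process is %zu-bit\n", sizeof(void *) * 8);
        return 0;
    }

Built as a 64-bit binary (with something like cc -arch x86_64), a program of this sort would run happily under kernel 1 on 64-bit hardware, which is precisely the point: 64-bit userland on top of a kernel that itself only needed a 32-bit machine.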
It is, if you like, just a matter of semantics, and just a bit of irrelevant history. The only real reason for such an implementation was the existence of a huge legacy of 32-bit device drivers coupled with the desire to run more efficient 64-bit user programs. I can think of no reason to design an OS like that from scratch; it was merely a stepping stone to a full-blown 64-bit implementation.