What I mean is, user applications shouldn't be able to call
anything in kernel space directly, which is true regardless of the language you are working in.
OK, more stuff of the 'you should already know this' variety, just to make sure we agree on things. Don't take this the wrong way, I just want to make things as clear as possible.
Protected mode operating systems have different levels of privilege, which control the ability to use certain instructions, access certain memory areas, etc. Applications usually have very limited privileges, while the kernel itself has to have more or less unlimited privileges.
In x86, there are four privilege levels (0 through 3, usually called 'ring 0' through 'ring 3'), but most OSes don't use rings 1 and 2 at all, simply using a binary supervisor/user division analogous to the root/user split in Unix.
In order for an application in ring 3 to do anything that requires ring 0 (supervisor) privilege, it has to make one or more
System Calls - specific actions that change the privilege level in a controlled way and prevent code outside of the OS proper from running at all while the process is in the privileged state.
(Depending on the design, the 'OS proper' may or may not include loadable modules such as drivers. Classic
Monolithic kernels have no loadable parts, requiring everything to be linked into a single executable image at compile time. Classic
Microkernels move things like drivers out of the kernel entirely and define them as separate user processes which the applications communicate with via IPC - in some cases, the message-passing primitives are the
only system calls the OS provides, and the OS itself is also a separate process, unlike most
Higher Half Kernel designs. Most OSes since the early 1990s have been
Hybrid Kernel designs instead, meaning that the drivers and other modules are privileged, but the OS is able to load modules into supervisor space dynamically.)
There are a number of ways that an OS can provide system calls; older x86 systems usually used soft interrupts similar to those in MS-DOS (such as the Linux INT 0x80 interrupt), and some use 'call gates', but newer x86 OSes almost always implement a handler for the
SYSENTER instruction (or the SYSCALL instruction in Long mode) instead.
In most cases, the application programmers do not use the system calls directly, but use libraries which wrap them up and provide a cleaner API - indeed, a lot of the
stdio.h library functions were originally just light wrappers around Unix system calls. I think part of the confusion you are having is that you are thinking in terms of the API rather than the actual system interface.
This has absolutely nothing to do with how the OS itself is written; it is just the basic design used by most operating systems running on CPUs with separate supervisor modes since that sort of thing was introduced back in the 1960s.
As for using one object in several places, well, that's what pointers and reference variables are for - you would do the same thing in C, anyway, right? However, as I said, the object in question wouldn't be part of the OS; it would be part of the API running in userland, so each process would still have its own separate copy regardless. There may be a corresponding abstraction in the OS, but the applications would
not be accessing those directly, ever, at least not in most OS designs (something like Synthesis is somewhat of an exception, though even there the access is mediated through system calls).