Google Fuchsia/Zircon design decisions

vhaudiquet
Member
Posts: 43
Joined: Sun Aug 20, 2017 10:59 am

Google Fuchsia/Zircon design decisions

Post by vhaudiquet »

I looked into Zircon (https://fuchsia.dev/fuchsia-src/concepts/kernel), the kernel Google made for their Fuchsia OS, and into their libc documentation.

I really like the idea of dropping '..' support (see https://fuchsia.dev/fuchsia-src/concept ... ems/dotdot) and other POSIX features (e.g. symlinks) to gain stability and security (process isolation, in this case). I do think that modern operating systems, for an everyday user, will only ever be used through a GUI and in single-user mode (with a capability-based permission system, kind of like on Android); thus most of the POSIX / standard C specifications are useless (I could go into detail, but that is not the subject here).
I think it is interesting to see them dropping fork(), signals, and exec() as well; it really suggests that those are legacy interfaces and that we can do better today.

However, the kernel has a concept of 'kernel objects' and 'handles' accessible through system calls (https://fuchsia.dev/fuchsia-src/reference/syscalls), kind of like Windows, I guess... I don't really understand the design decision here; I don't like the idea of userspace programs using 'kernel handles'. Maybe there are use cases for that, but I don't see them. To me, they are unnecessary (i.e. they could be replaced with better interfaces): userland programs should not be aware of what kind of kernel object they are dealing with, and the interface should be more opaque.

Something else that is interesting is that they have system calls for virtualization and virtual CPU management (`guest_create` and the `vcpu_*` syscalls). I don't really understand the use case of that for a regular user; I guess they just want to be able to run a guest Android system to run Android apps? (Like on ChromeOS, where the Android runtime is virtualized now, I think...)

Anyway, I found their documentation really interesting to read, and most of their choices seem well explained.

What do you guys think about Fuchsia, not as users but as OS developers?
Has anyone looked into it, or maybe even contributed? (I believe they accept contributions.)

(Please note that I have only read their documentation, not any of their code, so I may be wrong on some points.)
nullplan
Member
Posts: 1769
Joined: Wed Aug 30, 2017 8:24 am

Re: Google Fuchsia/Zircon design decisions

Post by nullplan »

vhaudiquet wrote:I really like the idea of dropping '..' support [...] and other POSIX features (e.g. symlinks) to gain stability and security (process isolation, in this case).
I see a lot of broken porcelain and not much to show for it. Those features exist for a reason. And for any OS restriction you can be damn sure people will invent workarounds. In this case, they have no ".." support in the FS server, so why don't we just keep a handle to the root around and keep inheriting it? Then we can implement path lookup in user space.
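A rough sketch of how little that user-space workaround takes -- purely lexical ".." resolution clamped at the inherited root (normalize_path is a made-up helper; no symlinks, no FS access):

Code:

    /* Purely lexical "." / ".." collapsing relative to a sandbox root.
     * Hypothetical helper for illustration: no symlinks, no FS access;
     * ".." at the root is simply clamped so you cannot climb above it. */
    #include <stdio.h>
    #include <string.h>

    static void normalize_path(const char *in, char *out, size_t outsz)
    {
        char buf[256];
        const char *parts[64];
        size_t depth = 0, len = 0;

        strncpy(buf, in, sizeof buf - 1);
        buf[sizeof buf - 1] = '\0';

        for (char *tok = strtok(buf, "/"); tok; tok = strtok(NULL, "/")) {
            if (strcmp(tok, ".") == 0)
                continue;
            if (strcmp(tok, "..") == 0) {
                if (depth > 0)
                    depth--;                     /* clamp at the sandbox root */
            } else if (depth < 64) {
                parts[depth++] = tok;
            }
        }

        out[0] = '\0';
        for (size_t i = 0; i < depth && len + strlen(parts[i]) + 2 < outsz; i++)
            len += snprintf(out + len, outsz - len, "/%s", parts[i]);
        if (depth == 0)
            snprintf(out, outsz, "/");
    }

    int main(void)
    {
        char out[256];
        normalize_path("../../etc/passwd", out, sizeof out);
        printf("%s\n", out);   /* prints "/etc/passwd": the leading ".." are clamped */
        return 0;
    }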
vhaudiquet wrote:I do think that modern operating systems, for an everyday user, will only ever be used through a GUI and in single-user mode (with a capability-based permission system, kind of like on Android)
So a person who has only ever used a car tells me that nobody uses motorbikes anymore. *sigh* Just because something does not crop up in your world doesn't mean it isn't still relevant for other people. At the latest, when you are working at a large company and need to access shared resources, the concept of a user will be very helpful. Which is also why multi-user support in the terminal OS (and FS) is a good idea. Now, yes, for most end-user terminals, users are too coarse a concept. The solution is not to throw the baby out with the bathwater, but rather to restrict access scope further, beyond the user, which is what AppArmor and SELinux are doing. Personally I find SELinux impossible to work with, so AppArmor is the way to go for any normal person.
vhaudiquet wrote:thus most of the POSIX / standard C specifications are useless
I do not see how a claim about the concept of a user somehow connects to this baffling statement. C doesn't even know anything about users, and POSIX really doesn't care about them as much as you claim here.
vhaudiquet wrote:I think it is interesting to see them dropping fork(), signals, and exec() as well; it really suggests that those are legacy interfaces and that we can do better today.
You have an interesting notion of "better". Dropping exec() I don't really have a problem with; for the most part there is no difference between exec() and spawn()+wait(). I do foresee issues when it comes to PID 1, which is special, but then PID 1 should not be running a complex program (which is a lesson the systemd folks really should have learned by now). Dropping fork() in its original semantics is exactly what I am going to do (I wrote about it before). But signals?
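For the common "run a child and collect its exit status" case, the two spellings really are interchangeable. A minimal POSIX sketch (error handling kept minimal on purpose; run_forked and run_spawned are just illustrative names):

Code:

    /* The classic fork()+exec()+wait() sequence next to its posix_spawn()
     * equivalent: for "run a child and collect its exit status" they are
     * interchangeable. */
    #include <spawn.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    extern char **environ;

    static int run_forked(char *const argv[])
    {
        pid_t pid = fork();
        if (pid < 0)
            return -1;
        if (pid == 0) {
            execv(argv[0], argv);
            _exit(127);                       /* only reached if exec failed */
        }
        int status;
        waitpid(pid, &status, 0);
        return status;
    }

    static int run_spawned(char *const argv[])
    {
        pid_t pid;
        int status;
        if (posix_spawn(&pid, argv[0], NULL, NULL, argv, environ) != 0)
            return -1;
        waitpid(pid, &status, 0);
        return status;
    }

    int main(void)
    {
        char *argv[] = { "/bin/echo", "hello", NULL };
        printf("forked:  %d\n", run_forked(argv));
        printf("spawned: %d\n", run_spawned(argv));
        return 0;
    }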

I found no rationale given for dropping signals. It is certainly true that they complicate things. Programs handling signals must be written in a signal-safe way, because the signal could arrive at any time. However, we have long since learned to work around that, and there is nothing quite like a signal for telling other processes about an event. Other communication channels just get unbearably complex if you require an n:m relationship (multiple senders and multiple receivers). With signals, as long as you know the PID, you can send a signal, even if you are not the parent process or anything.

Plus, once you have signals, you can use them to signal exceptional conditions in a standardized way, and don't have to create another way to do that. There is no way to create a watchdog timer that is quite so expedient as calling alarm(), for instance. Sometimes EINTR isn't a bug but a feature.
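Since alarm() came up, here is the watchdog pattern in miniature -- a small POSIX sketch of "EINTR as a feature":

Code:

    /* A watchdog in three calls: arm alarm(), do the blocking read, disarm.
     * Because the handler is installed without SA_RESTART, a late read()
     * is interrupted and returns -1 with errno == EINTR. */
    #include <errno.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static void on_alarm(int sig) { (void)sig; /* just interrupt the syscall */ }

    int main(void)
    {
        char buf[128];
        struct sigaction sa;

        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_alarm;             /* deliberately no SA_RESTART */
        sigaction(SIGALRM, &sa, NULL);

        alarm(5);                             /* arm the watchdog */
        ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
        alarm(0);                             /* disarm it again */

        if (n < 0 && errno == EINTR)
            fprintf(stderr, "watchdog fired: no input within 5 seconds\n");
        else
            printf("read %zd bytes\n", n);
        return 0;
    }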
vhaudiquet wrote:I don't like the idea of userspace programs using 'kernel handles'
What's a file, then? Userspace programs necessarily need to interact with kernel objects, unless they are pure calculations, but even those need to provide their results somehow. And that means they need to interact with the VFS as presented by the kernel. Whether you call it an FD or a "handle" doesn't really matter.

That said, I just read a bit into the documentation and found that they allow creating threads in other processes. So, the worst NT feature to ever grace the planet was added into Zircon as well. What was that you said about process isolation or something? They disallow signals because of isolation problems and then allow this nonsense. That's like buying a Smart for the fuel economy and then deliberately puncturing the fuel line.
vhaudiquet wrote:What do you guys think about Fuchsia, not as users but as OS developers?
Typical Google. "We are so much better than MS", and then they proceed to copy their worst aspects. It would appear that corporate OS development always goes that direction, whether it takes place in Washington or California.

As for Fuchsia itself, I see quite a lot of "change for the sake of change", which is self-indulgent at best, and destructive at worst. A couple good ideas on display, but the maxim "those who fail to learn from Unix are doomed to reinvent it, poorly" does seem to hold.
Carpe diem!
vhaudiquet
Member
Posts: 43
Joined: Sun Aug 20, 2017 10:59 am

Re: Google Fuchsia/Zircon design decisions

Post by vhaudiquet »

In this case, they have no ".." support in the FS server, so why don't we just keep a handle to the root around and keep inheriting it? Then we can implement path lookup in user space.
The idea is isolation: if you start a process with cwd /home/process/, it will never be able to go up to /home. That being said, it is true that you could just pass the root node to every process, but that defeats the purpose.
So a person who has only ever used a car tells me that nobody uses motorbikes anymore...
I'm talking about the end-user use case Google targets for Fuchsia, which I think is basically Android phones (what they originally wanted was to replace Android) and now Google Nest. If you're a person, not a company, you don't share your phone, and you don't have multiple users on it. Some phones still have limited hardware resources, and multi-user support seems like a useless feature to add; that's all I'm saying.
when you are working at a large company and need to access shared resources, the concept of a user will be very helpful
I agree with you: for a company it is useful, but Fuchsia is not targeted at companies. Furthermore, I would argue that even then the kernel doesn't need to know about users; only the server (an equivalent of sshd? a file transfer server? whatever server shares the resources) has to handle users, and everything about user handling can be implemented in that program.
which is what AppArmor and SELinux are doing
Exactly: these are extending a kernel that was made to support a "user-based" permission model into a capability-based one. When creating a new kernel, why not just build it capability-based from the start and drop 'users'?
I do not see how a claim about the concept of a user somehow connects to this baffling statement. C doesn't even know anything about users, and POSIX really doesn't care about them as much as you claim here.
I'm sorry, I wasn't really clear. What I meant is that the system calls POSIX asks for and the libc interfaces, not the C standard itself, are tied together and contain a lot of useless / legacy / insecure functions, some of which are for users/groups.
What's a file, then? Userspace programs necessarily need to interact with kernel objects
I'm sorry, I wasn't clear about that either. The thing that bothered me is that a user-space program can create VMOs ('Virtual Memory Objects') and allocate VMARs ('Virtual Memory Address Regions'); I find it a bit weird to expose such low-level kernel objects instead of just having an mmap() system call, for example. I don't really understand the use case here.
they allow creating threads in other processes.
They have a concept of 'jobs', which are groups of processes. I think you can only do that within the same job? I'm not sure, though.
Ethin
Member
Posts: 625
Joined: Sun Jun 23, 2019 5:36 pm
Location: North Dakota, United States

Re: Google Fuchsia/Zircon design decisions

Post by Ethin »

I find it baffling that you're defending the "end-user"/"this OS is made for end-users" idea. If there's one thing history tells us, it's that people will use programs and devices (and, really, anything) in environments and ways they were never intended for. What the product or method or system or whatever was originally intended/designed for is absolutely irrelevant.
As for process isolation with the elimination of ".." and such, the addition of remote thread creation is pretty much a remote code execution vulnerability remade as a feature. Unless some extremely hard restrictions are placed on those kinds of syscalls, process isolation cannot exist. If a process is allowed to execute code in another process on the system, it cannot be isolated. Same for threads. Allowing processes or threads to run code in other processes or threads they don't own is ridiculous, and why it was even invented in Windows is beyond me. It's a horrible idea: if I create process A, and process B is doing something and I don't own it (it isn't mine, after all), I can still influence the execution of process B and even change its state. I can tamper with the internal data structures and memory of process B. Hence it being an RCE vulnerability disguised as a "feature".
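For reference, this is the textbook Win32 injection sequence that this kind of cross-process syscall enables (error handling omitted; shown only to make the isolation argument concrete):

Code:

    /* The textbook Win32 injection sequence: allocate memory in the target,
     * copy a payload in, start a thread at that address. Error handling
     * omitted; shown only to illustrate why cross-process thread creation
     * and process isolation don't mix. */
    #include <windows.h>

    void inject(DWORD pid, const void *payload, SIZE_T len)
    {
        HANDLE proc = OpenProcess(PROCESS_ALL_ACCESS, FALSE, pid);

        void *remote = VirtualAllocEx(proc, NULL, len,
                                      MEM_COMMIT | MEM_RESERVE,
                                      PAGE_EXECUTE_READWRITE);
        WriteProcessMemory(proc, remote, payload, len, NULL);

        /* the target now runs our code, with its own privileges and data */
        CreateRemoteThread(proc, NULL, 0,
                           (LPTHREAD_START_ROUTINE)remote, NULL, 0, NULL);
        CloseHandle(proc);
    }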
Korona
Member
Posts: 1000
Joined: Thu May 17, 2007 1:27 pm

Re: Google Fuchsia/Zircon design decisions

Post by Korona »

vhaudiquet wrote:I'm sorry, I wasn't clear about that either. The thing that bothered me is that a user-space program can create VMOs ('Virtual Memory Objects') and allocate VMARs ('Virtual Memory Address Regions'); I find it a bit weird to expose such low-level kernel objects instead of just having an mmap() system call, for example. I don't really understand the use case here.
Zircon is a microkernel; it does not expose high-level concepts such as mmap() by design. Zircon has no concept of files, etc.; files are implemented in userspace.

What Zircon calls "kernel handles" are really just numerical IDs with no additional meaning (like FDs in UNIX or HANDLE in Windows). This is just capability-based design, which has been used in almost every OS since 1870 (like in UNIX or Windows).
managarm: Microkernel-based OS capable of running a Wayland desktop (Discord: https://discord.gg/7WB6Ur3). My OS-dev projects: [mlibc: Portable C library for managarm, qword, Linux, Sigma, ...] [LAI: AML interpreter] [xbstrap: Build system for OS distributions].
vhaudiquet
Member
Posts: 43
Joined: Sun Aug 20, 2017 10:59 am

Re: Google Fuchsia/Zircon design decisions

Post by vhaudiquet »

Korona wrote:Zircon is a microkernel; it does not expose high-level concepts such as mmap() by design.
But even in a microkernel, the kernel is responsible for memory management and processes, no?
So memory-mapping syscalls should still be available?
I don't think there are microkernels with the memory manager as a userspace server, are there?

And I don't see the difference in capability between exposing mmap() and exposing 'create virtual memory object' / 'destroy virtual memory object'...
It just seems weird to me that processes can then read and write those memory objects, and can pass them to other processes...

EDIT: I only meant mmap() as a 'memory allocation' function (with fd = -1); I did not think about mapping files into memory here, if that's what you're referring to.
Korona
Member
Posts: 1000
Joined: Thu May 17, 2007 1:27 pm

Re: Google Fuchsia/Zircon design decisions

Post by Korona »

Zircon does not implement MM in user space (but there are microkernels that do that, see for example L4).

The virtual memory object interface exists so that user space can implement abstractions such as files, which can then be mapped with a higher-level mmap() call (itself also implemented in user space).
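Roughly how a user-space mmap() can sit on top of those primitives: anonymous mappings create a fresh VMO on the spot, file-backed mappings would ask the filesystem server for the file's VMO over a channel. A hedged sketch -- get_file_vmo() stands in for that request and is not a real API, and the zx_* calls follow the documented signatures as I understand them:

Code:

    /* Sketch of a libc-level mmap() built on VMOs and VMARs. Anonymous
     * mappings create a VMO on the spot; file-backed mappings would obtain
     * the file's VMO from the filesystem server. get_file_vmo() below is a
     * hypothetical stand-in for that channel round-trip. */
    #include <sys/mman.h>
    #include <zircon/process.h>
    #include <zircon/syscalls.h>

    static zx_handle_t get_file_vmo(int fd)      /* placeholder, not a real API */
    {
        (void)fd;
        return ZX_HANDLE_INVALID;
    }

    void *my_mmap(void *hint, size_t len, int prot, int flags, int fd, off_t off)
    {
        zx_handle_t vmo;
        zx_vaddr_t addr;
        zx_vm_option_t opts = 0;
        (void)hint;

        if (prot & PROT_READ)  opts |= ZX_VM_PERM_READ;
        if (prot & PROT_WRITE) opts |= ZX_VM_PERM_WRITE;

        if (flags & MAP_ANONYMOUS) {
            if (zx_vmo_create(len, 0, &vmo) != ZX_OK)
                return MAP_FAILED;
        } else {
            vmo = get_file_vmo(fd);              /* the FS server owns the file */
        }

        zx_status_t status = zx_vmar_map(zx_vmar_root_self(), opts, 0,
                                         vmo, (uint64_t)off, len, &addr);
        zx_handle_close(vmo);                    /* the mapping keeps it alive */
        return status == ZX_OK ? (void *)addr : MAP_FAILED;
    }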
managarm: Microkernel-based OS capable of running a Wayland desktop (Discord: https://discord.gg/7WB6Ur3). My OS-dev projects: [mlibc: Portable C library for managarm, qword, Linux, Sigma, ...] [LAI: AML interpreter] [xbstrap: Build system for OS distributions].
Sik
Member
Posts: 251
Joined: Wed Aug 17, 2016 4:55 am

Re: Google Fuchsia/Zircon design decisions

Post by Sik »

Looking at it more in depth, if I understand correctly, it isn't that going to a parent directory is not allowed; rather, a specific directory is treated as the sandbox's root, and any processing of ".." in a path is done entirely by the program (or one of its libraries) rather than by the filesystem proper. If the sandbox's root matches the filesystem's root, then it's basically the same as what you'd get in other OSes.

Probably what's throwing people off is that the OS itself doesn't provide any concept of a "current directory"; again, it's up to the process to keep track of it (this makes more sense in a microkernel). That's something you'd normally hide behind the standard library rather than expose directly to the programmer (e.g. by putting relative-path processing inside fopen). You could even make the current directory an environment variable that the library conveniently uses.
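A sketch of what that library-level bookkeeping could look like -- a process-global cwd seeded from an environment variable and spliced onto relative paths before they reach the filesystem. All names here are invented, and plain fopen() stands in for "hand the absolute path to the FS server":

Code:

    /* How a C library could fake a current directory the kernel knows nothing
     * about: keep it in a process-global string (seeded from an environment
     * variable by convention) and splice it onto relative paths before they
     * ever reach the filesystem. */
    #include <stdio.h>
    #include <stdlib.h>

    static char cwd[256] = "/";

    void lib_init_cwd(void)
    {
        const char *env = getenv("PWD");          /* purely a convention */
        if (env && env[0] == '/')
            snprintf(cwd, sizeof cwd, "%s", env);
    }

    int lib_chdir(const char *path)               /* never leaves the process */
    {
        snprintf(cwd, sizeof cwd, "%s", path);    /* real code would normalize */
        return 0;
    }

    FILE *lib_fopen(const char *path, const char *mode)
    {
        char full[512];
        if (path[0] == '/')
            return fopen(path, mode);
        snprintf(full, sizeof full, "%s/%s", cwd, path);
        return fopen(full, mode);                 /* absolute path to the FS */
    }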

They certainly didn't help themselves by writing a dedicated page that makes a much bigger deal of it than it really is, though.


That said, not having symlinks at all (nor hard links, I guess?) may not be a great idea… The whole point of them is to intentionally provide a mirror of a file or directory that lives elsewhere, as if it also existed there. Maybe a better idea is to restrict what can create them (e.g. the file manager and shell should be able to, so the user can make their own links, but other programs shouldn't have that permission). Note that ".." not being an actual directory entry means it should still be impossible to break out of the sandbox (the link's parent is the directory it was linked from, not the real parent directory).
nexos
Member
Posts: 1078
Joined: Tue Feb 18, 2020 3:29 pm
Libera.chat IRC: nexos

Re: Google Fuchsia/Zircon design decisions

Post by nexos »

So Fuchsia, from what I gather, takes a similar approach to ".." as the one suggested in a recent forum thread:

viewtopic.php?f=15&t=55942
Last edited by nexos on Mon Dec 20, 2021 4:23 pm, edited 1 time in total.
"How did you do this?"
"It's very simple — you read the protocol and write the code." - Bill Joy
Projects: NexNix | libnex | nnpkg
eekee
Member
Posts: 872
Joined: Mon May 22, 2017 5:56 am
Location: Kerbin
Discord: eekee

Re: Google Fuchsia/Zircon design decisions

Post by eekee »

Ethin wrote:I find it baffling that you're defending the "end-user"/"this OS is made for end-users" idea. If there's one thing history tells us, it's that people will use programs and devices (and, really, anything) in environments and ways they were never intended for. What the product or method or system or whatever was originally intended/designed for is absolutely irrelevant.
I agree. To give some real-world examples of phones as servers: WhatsApp uses your phone as the primary recipient and authenticator of messages, and also operates a server for your desktop to connect to so it can get and send messages too. Google's Messages for Web does the same thing with SMS. Some carriers offered SMS on the web, but because the phone itself can serve the messages sent to it, Google's SMS app can offer it to everyone regardless of carrier. Years ago, before all this, I ran FTP and SSH servers on my phone, not to mention netcat whenever I felt like it. All of these, especially the first two, might be obvious uses now, but if phones had only ever run dedicated OSes, never Unix-like kernels, how would any of this have ever arisen? Would any phone OS development company have ever spent money on implementing TCP/IP listen if the use case wasn't already there? Cue someone telling me listening isn't a special case. :)

Yesterday, my friend and I were wondering how to use Android as a game server, because it's less hassle than Linux or Windows. ;) Mono on Fuchsia when?

EDIT: I forgot there are multi-user phones now. I'm not quite sure why they exist; maybe for people who lend their phone to others, especially family members. They only support one user at a time on the UI (as far as I know), but consider the network-server arguments above.
Kaph — a modular OS intended to be easy and fun to administer and code for.
"May wisdom, fun, and the greater good shine forth in all your work." — Leo Brodie
linguofreak
Member
Posts: 510
Joined: Wed Mar 09, 2011 3:55 am

Re: Google Fuchsia/Zircon design decisions

Post by linguofreak »

vhaudiquet wrote:I'm talking about the end-user use case Google targets for Fuchsia, which I think is basically Android phones (what they originally wanted was to replace Android) and now Google Nest. If you're a person, not a company, you don't share your phone, and you don't have multiple users on it. Some phones still have limited hardware resources, and multi-user support seems like a useless feature to add; that's all I'm saying.
If a machine communicates with the network, you need, at the very least, two users: admin and non-admin. That way a remote vulnerability in one program doesn't automatically become a remote takeover of the device. And multi-user support doesn't cost all that much, resource-wise: the average number of simultaneous users per machine has generally gone *down* over the years as memory and CPU cycles have become more and more plentiful.
Qbyte
Member
Posts: 51
Joined: Tue Jan 02, 2018 12:53 am
Location: Australia

Re: Google Fuchsia/Zircon design decisions

Post by Qbyte »

Ethin wrote:I find it baffling that you're defending the "end-user"/"this OS is made for end-users" idea. If there's one thing history tells us, it's that people will use programs and devices (and, really, anything) in environments and ways they were never intended for. What the product or method or system or whatever was originally intended/designed for is absolutely irrelevant.
Programmers and developers are "end users" as well, and if an OS is made with them in mind, you'll generally get much better-quality software written for it. Linux, Windows and iOS are all god-awful to work with because they all have terrible, convoluted interfaces. The WinAPI and X Window System nonsense is enough to deter most people from even doing that kind of work.
nullplan
Member
Posts: 1769
Joined: Wed Aug 30, 2017 8:24 am

Re: Google Fuchsia/Zircon design decisions

Post by nullplan »

Qbyte wrote:Linux, Windows and iOS are all god-awful to work with because they all have terrible, convoluted interfaces. The WinAPI and X Window System nonsense is enough to deter most people from even doing that kind of work.
Nice constructive criticism there. So what do you think is a worthwhile interface? What do you even mean by "interface"?
Carpe diem!
eekee
Member
Posts: 872
Joined: Mon May 22, 2017 5:56 am
Location: Kerbin
Discord: eekee
Contact:

Re: Google Fuchsia/Zircon design decisions

Post by eekee »

Qbyte wrote:Programmers and developers are "end users" as well, and if an OS is made with them in mind, you'll generally get much better-quality software written for it. Linux, Windows and iOS are all god-awful to work with because they all have terrible, convoluted interfaces. The WinAPI and X Window System nonsense is enough to deter most people from even doing that kind of work.
Haha! Arguably, you can't get more "made by programmers for programmers" than Linux or even X, but I agree they're awful to work with. X was legendary for putting off programmers back when I was new to Linux. (Incidentally, a graphical hello world in Win32 requires far more code than even X does.) After many years of puzzling over why, exactly, some systems are so awful, I'm still not sure. The world loves horrible timewasters, I guess. Sometimes awful things are brought in for the sake of extreme performance or whatever, and then those features get overused. Programmers are notorious for having no sense of perspective. ;)
Kaph — a modular OS intended to be easy and fun to administer and code for.
"May wisdom, fun, and the greater good shine forth in all your work." — Leo Brodie
Qbyte
Member
Posts: 51
Joined: Tue Jan 02, 2018 12:53 am
Location: Australia

Re: Google Fuchsia/Zircon design decisions

Post by Qbyte »

nullplan wrote:Nice constructive criticism there. So what do you think is a worthwhile interface? What do you even mean by "interface"?
Do you really need an introduction to the architectural flaws of Windows and Linux, or are you just arguing in bad faith? The graphics stack, network stack, I/O and event handling, driver model, file systems, and security and privacy of both of them are atrocious. I could go on and into depth about these, but that would be beating a dead horse.