Re: Separate Stack Segment in Protected mode?
The Place to Start for Operating System Developers
https://f.osdev.org/

Posted: Fri Aug 12, 2022 8:05 am

I can think of better arguments in favour of long mode than a 17-year-old research paper.
rdos wrote:
> Perhaps, but also a good argument for why you don't want to use long mode.

I think the performance benefit alone is enough to negate any arguable downsides.
Demindiro wrote:
> rdos wrote:
> > Perhaps, but also a good argument for why you don't want to use long mode.
>
> I think the performance benefit alone is enough to negate any arguable downsides.

Not related to long mode. It was primarily a compiler issue.
iansjack wrote:
> I can think of better arguments in favour of long mode than a 17-year-old research paper.

Some things never change. Running an OS kernel without effective protection mechanisms in place is insane. I don't count paging as an effective protection mechanism, since it has poor granularity and no limit checking. A decent micro-kernel design *might* be acceptable, but neither Windows nor Linux uses that design. The problem becomes even worse when people decide to map all physical memory into the address space and pack code & data together in their executables.
rdos wrote:
> Not related to long mode. It was primarily a compiler issue.

The long mode version is 25% faster than the protected mode version.

Demindiro wrote:
> The long mode version is 25% faster than the protected mode version.

That's not important. The important thing is that flat kernels never become bug-free, and poorly written drivers can easily bring down the entire OS since the environment is completely unprotected.
rdos wrote:
> That's not important. The important thing is that flat kernels never become bug-free, and poorly written drivers can easily bring down the entire OS since the environment is completely unprotected.

Andy Tanenbaum would like a word with you. I am highly suspicious of anything that claims bug-freedom, and of anything that claims it can somehow insulate itself from the effects of poor drivers. Badly written drivers can make the hardware overwrite your kernel, whatever protection measures you deploy. Or melt the hardware, or something.

Actually, I would like to know your threat model. What is it you seek protection from?

nullplan wrote:
> Actually, I would like to know your threat model. What is it you seek protection from?

Threat model? It's really easy. The less direct access a piece of code has, the less likely it is to corrupt some vital data in the kernel. So, by mapping all physical memory into the linear address space, and by using a flat memory model, you basically give every line of kernel & driver code the ability to corrupt vital kernel data, physical memory that is protected by being mapped in server processes, and memory that is private to PCI devices. It can't get any worse than that.
rdos wrote:
> RDOS drivers have a private code & data segment with exact limits. As long as a driver accesses memory using cs and ds, it can never corrupt the kernel.

It can, by instructing hardware (e.g. xHCI) to write to specific physical addresses, and hardware cares about neither segmentation nor paging.

There are IOMMUs, of course, but AFAIK not all platforms/hardware have or support one, and an IOMMU needs some sort of driver too.

Demindiro wrote:
> It can, by instructing hardware (e.g. xHCI) to write to specific physical addresses, and hardware cares about neither segmentation nor paging.

Actually, no. All my USB drivers use a specialized kernel API that makes linear-to-physical (and reverse) translations easy, and also localizes them to only a few physical pages. Thus, the xHCI device will never get bad physical addresses as long as drivers use only the appropriate API. When it comes to mapping transfer buffers, the driver gets a 48-bit pointer, gets its base, and then uses a kernel API to get the physical address. The driver thus never works with physical addresses as parameters; rather, it extracts them from known-to-be-valid pointers using a known-to-be-reliable kernel API.
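A driver-facing translation API of the kind rdos describes might look roughly like this. All names are hypothetical, and the "page tables" are faked with a small lookup table so the sketch runs in user space; the point is only that the driver derives physical addresses from pointers it already owns, rather than fabricating them:

```c
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 4096u

/* Fake per-mapping record: one linear page maps to one physical page.
 * In a real kernel this would come from the page tables. */
struct mapping { uintptr_t linear_page; uint64_t phys_page; };

static const struct mapping map_table[] = {
    { 0x400000u, 0x12345000u },   /* transfer-buffer page (made up) */
};

/* kernel_linear_to_phys: translate a linear address inside a mapped
 * transfer buffer into a physical address, or 0 if it is unmapped. */
static uint64_t kernel_linear_to_phys(uintptr_t linear)
{
    uintptr_t page = linear & ~(uintptr_t)(PAGE_SIZE - 1);
    for (size_t i = 0; i < sizeof map_table / sizeof map_table[0]; i++)
        if (map_table[i].linear_page == page)
            return map_table[i].phys_page + (linear & (PAGE_SIZE - 1));
    return 0;  /* bad pointer: never hand a guess to the xHCI controller */
}
```

A driver would call something like this on a pointer into its mapped transfer buffer and pass only the returned value to the hardware, so a segmentation fault on the pointer happens before a bad physical address can ever reach a TRB.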
iansjack wrote:
> So what's wrong with using the protection inherent in paging?

It only works between applications, server processes, and the kernel, provided all parameters are properly validated. It doesn't protect the kernel from buggy drivers, since in a flat memory model every piece of code can access everything in the kernel. Paging cannot solve that issue; only segmentation can.
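The limit checking being argued about here lives in the segment descriptor itself: every access through a selector is bounds-checked against a byte-granular limit in hardware. A rough sketch of how such a data descriptor is packed (illustrative values, not taken from any real OS):

```c
#include <stdint.h>

/* Sketch: encoding an x86 data-segment descriptor with an exact byte
 * limit, as a segmented kernel might do for a driver's private data
 * segment. Bit layout follows the standard 8-byte GDT descriptor. */
static uint64_t make_data_descriptor(uint32_t base, uint32_t limit,
                                     uint8_t access, uint8_t flags)
{
    uint64_t d = 0;
    d |= (uint64_t)(limit & 0xFFFFu);            /* limit 15:0  */
    d |= (uint64_t)(base  & 0xFFFFFFu) << 16;    /* base  23:0  */
    d |= (uint64_t)access << 40;                 /* type/DPL/present */
    d |= (uint64_t)((limit >> 16) & 0xFu) << 48; /* limit 19:16 */
    d |= (uint64_t)(flags & 0xFu) << 52;         /* G/D/L/AVL   */
    d |= (uint64_t)((base >> 24) & 0xFFu) << 56; /* base 31:24  */
    return d;
}
```

For example, `make_data_descriptor(0x00100000, 0x3FFF, 0x92, 0x4)` builds a byte-granular, ring-0, read/write 16 KiB data segment; any access at or past offset 0x4000 through that selector raises #GP, which is the granularity and limit checking that a flat 4 GiB descriptor gives up.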
iansjack wrote:
> If you can arrange for different processes, or drivers, to use different segments, why can't you arrange for them to use different page tables?

Because you would need to flush the TLB every time you call into a function in another driver. This is much more expensive than loading a selector.
iansjack wrote:
> In reality, you can't protect against rogue code running in supervisor mode. It can create any segments or page mappings that it wants to. I think the protection you think you get from segmentation is totally illusory.

I know it is not illusory. I have applications that run in a flat memory model, and so can compare them with the kernel, which runs in segmented mode. Practically all pointer bugs in the kernel create faults at the location of the problem, while in an application a fault is frequently the effect of some other code that corrupted something. Paging sometimes catches these issues, but only when I use the debug library, which allocates 4k for every new, initializes it to known values, and checks for overwrites when it is freed. I cannot use this in production releases, since it consumes too much memory and the validation is slow.
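The debug library rdos mentions could be approximated like this. This is a hedged, malloc-backed user-space sketch with hypothetical names, not the actual RDOS allocator: each allocation gets a full 4k block filled with a known pattern, and the slack is re-checked on free to detect overwrites:

```c
#include <stdlib.h>
#include <string.h>

#define DBG_PAGE 4096u
#define DBG_FILL 0xA5            /* known fill pattern */

struct dbg_hdr { size_t size; }; /* requested size, stored up front */

/* dbg_alloc: hand out one whole pattern-filled block per allocation. */
static void *dbg_alloc(size_t size)
{
    if (size > DBG_PAGE - sizeof(struct dbg_hdr))
        return NULL;
    unsigned char *page = malloc(DBG_PAGE);
    if (!page)
        return NULL;
    memset(page, DBG_FILL, DBG_PAGE);
    ((struct dbg_hdr *)page)->size = size;
    return page + sizeof(struct dbg_hdr); /* user data after header */
}

/* dbg_free: returns 0 if the slack after the object is intact,
 * -1 if something wrote past the end of the allocation. */
static int dbg_free(void *p)
{
    unsigned char *page = (unsigned char *)p - sizeof(struct dbg_hdr);
    size_t size = ((struct dbg_hdr *)page)->size;
    int bad = 0;
    for (size_t i = sizeof(struct dbg_hdr) + size; i < DBG_PAGE; i++)
        if (page[i] != DBG_FILL) { bad = -1; break; }
    free(page);
    return bad;
}
```

The memory cost rdos complains about is visible here: a 32-byte object occupies 4096 bytes, and every free walks the remaining ~4060 bytes of slack, which is why this only runs in debug builds.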
rdos wrote:
> The same scenario in long mode can lead to corruption of physical memory, vital kernel data, application data in another process, and even PCI BAR data.

That's simply not true; or rather, the claim that segmentation would prevent it is incorrect. Supervisor-mode pages have exactly the same degree of protection as supervisor-mode segments: a wild userland pointer to a supervisor data page is still going to be blocked by the protection mechanisms, because the page is marked supervisor-access-only. A wild pointer in the kernel? True, that can access any virtual address currently mapped for the process, but the majority of addresses won't be mapped at all, so the access raises a page fault that is caught by the memory manager, which can presumably determine that the page shouldn't be accessible and raise a protection fault. If it does hit an address that is live, then yes, a kernel bug can have the effect you describe, but the same is just as true with segmentation. A corrupted supervisor-mode pointer is a supervisor-mode pointer, period.