Re: Why do people say C is better than Assembly?
Posted: Tue Jan 17, 2017 4:10 pm
Far be it from me to defend C, but I really don't think you are right in this, for a number of reasons.
First, the main complaint you make has no applicability to long mode, nor to CPU architectures which have never had memory segmentation to begin with. Memory segmentation is a peculiarity of the 8086, born of the cost of memory-addressing hardware at the time it was designed - memory protection wasn't a significant consideration, as the 8086 was only meant to be a short-term design primarily intended for embedded controllers anyway - and one which even Intel has admitted was a mistake. The only other segmented-memory CPU architecture of any note, the IBM System/360, similarly abandoned segmentation as a bad design choice in the mid-1970s.
Second, segmentation is actually significantly slower than page-based memory protection for 32-bit protected mode in most current x86 models. Now, admittedly, this is because Intel has de-emphasized support for segmentation since the 80386, and not an inherent aspect of the segmented model, but it is still the case.
Third, the question of segmentation has nothing to do with the C language at all. While the currently dominant implementations (GCC and Visual C++) only work with a flat memory model, both the Borland and Watcom C compilers handled segmentation quite satisfactorily, and AFAIK their successors still do. The Amsterdam Compiler Kit C compiler used in early versions of Minix had no problems supporting a segmented OS written in C.
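For what it's worth, this is roughly what segmented C code looked like under those 16-bit compilers; the `far` keyword and the `MK_FP()` macro are compiler extensions (from `<dos.h>` on Borland, `<i86.h>` on Watcom), so this sketch will not build with a modern flat-model compiler and is illustration only:

```c
/* 16-bit Borland/Watcom-style C, for illustration only - "far" and
 * MK_FP() are compiler extensions, not standard C. */
#include <dos.h>

void demo(void)
{
    /* A far pointer carries an explicit segment:offset pair, and the
     * compiler emits the segment-register loads as needed. */
    char far *video = (char far *)MK_FP(0xB800, 0x0000);
    video[0] = 'A';   /* write directly into text-mode video memory */
}
```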
Fourth, the issue of monolithic vs. microkernel design is orthogonal to the issues of languages and memory models. Both microkernel systems (such as the above-mentioned Minix) and monolithic kernel designs (e.g., Xenix, Coherent, UCSD Pascal) have been written for x86 real mode, using assembly, C, or even other languages (e.g., Pascal, Parasol, Modula-2), so arguing about kernel design methodology is pretty meaningless here.
(As an aside, I would like to remind you that the original impetus for developing the microkernel model in the late 1970s was to provide better process isolation on systems with no hardware memory protection - such as the 8086.)
Finally, segmentation does not actually provide any improvement in memory protection over paging in 32-bit protected mode. In fact, IIUC more recent 32-bit x86 processors (by AMD, at least, and possibly those by Intel) don't implement segmentation in hardware at all - it is emulated in microcode using the page-protection mechanisms (which may be related to point two above). If anyone knows more details about this, please speak up, as I am not entirely certain I am right on this point. Whether that is the case or not, what does matter is that segmentation is both less flexible than paging and provides no protections which paging doesn't - fewer, in fact, since the read-only restrictions on user-space memory in a given segment are all-or-nothing, whereas page protection can be set on a per-process and per-page basis. While a given segment can be as small as a page or as large as the full address space, whereas all pages must be the same relatively small size, using segments in such a fine-grained manner would in effect just be emulating paging with segmentation - there would be no significant advantage.
While it is true that C compilers targeting a flat memory model rarely take proper care in handling page settings - and, for that matter, few operating systems give compilers fine-grained control over page protection to begin with - that is a flaw in the OS and/or the compiler, not in the language itself, I would say.