Using compiler features vs "the hard way"
Posted: Tue Mar 13, 2012 5:12 am
by Civillian
As a beginning OS developer, I face a dilemma:
should I exploit GCC's features, or should I do things by hand?
Examples:
- using builtins such as __SIZE_TYPE__, __INT32_TYPE__, __INT32_MAX__ etc. instead of old-fashioned typedefs and hexadecimal constants
- using builtins such as __builtin_va_list and family, instead of #define'ing the macros oneself
- to use or not to use GCC'isms such as __attribute__((packed))
I know everyone has their own needs and will do things their own way (including myself), but I'd like to read the pros and cons of "targeting a compiler", to help me make an early decision. Thanks.
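To make the first item concrete, here is a minimal sketch of the two styles; the hand-written names and the hex constant are only illustrative, while the macros on the right-hand side are ones GCC predefines:
Code: Select all
/* "The hard way": hand-written typedefs and hexadecimal constants. */
typedef unsigned int       my_uint32;
typedef unsigned long long my_uint64;
#define MY_UINT32_MAX 0xFFFFFFFFu

/* Leaning on the compiler: let GCC's predefined macros supply the types. */
typedef __SIZE_TYPE__  size_t;
typedef __INT32_TYPE__ int32_t;
#define INT32_MAX __INT32_MAX__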
Re: Using compiler features vs "the hard way"
Posted: Tue Mar 13, 2012 5:22 am
by Solar
Your OS should not use builtins, your C library should. Headers like <limits.h> and <stdarg.h> provide proper, standard-compliant wrapping of such features, so they shouldn't leak into mainline OS code.
With GCC, even a freestanding cross-compiler build provides these headers, so you can use them even without a C library in place. Much preferable.
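As a rough sketch of what that looks like in practice (the helper function below is invented for illustration), freestanding code simply includes the compiler-provided headers instead of touching the builtins directly:
Code: Select all
#include <stdarg.h>   /* wraps __builtin_va_list and friends */
#include <stdint.h>   /* wraps __INT32_TYPE__, __INT32_MAX__, ... */
#include <stddef.h>   /* size_t, NULL, offsetof */

/* Hypothetical freestanding helper: no libc is needed for any of this. */
int sum_ints(int count, ...)
{
    va_list ap;
    int total = 0;

    va_start(ap, count);
    for (int i = 0; i < count; i++)
        total += va_arg(ap, int);
    va_end(ap);

    return total;
}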
Personally, I would shun even __attribute__((packed)) in code, and rather use -fpack-struct on the command line. Keeps the code itself compiler-independent, and easier to read.
Re: Using compiler features vs "the hard way"
Posted: Tue Mar 13, 2012 6:48 am
by Civillian
Solar wrote:Your OS should not use builtins, your C library should.
I originally missed your point, but now I understand what you mean.
From this thread:
http://forum.osdev.org/viewtopic.php?f=13&t=24958
Solar wrote:Look into ${PREFIX}/lib/gcc/${TARGET}/${VERSION}/include. Since the freestanding environment does not include any executable code, but only definitions and macros depending on the compiler / target machine anyway, GCC provides these "for free".
So if one wanted to use them, would this be the right way? Or should one copy/link those headers instead?
Code: Select all
$(CC) -nostdlib -nostdinc -ffreestanding -fno-builtin -I ./include -I ${PREFIX}/lib/gcc/${TARGET}/${VERSION}/include
Re: Using compiler features vs "the hard way"
Posted: Tue Mar 13, 2012 7:29 am
by Solar
There is nothing to link, as those headers are (by definition) typedefs / macros only. They might even be in the standard include path, but since -nostdinc forbids GCC from searching it, you would have to state the directory explicitly. (You want the GCC headers, but you don't want /usr/include...)
Re: Using compiler features vs "the hard way"
Posted: Tue Mar 13, 2012 8:24 am
by bluemoon
Civillian wrote:using builtins such as __SIZE_TYPE__, __INT32_TYPE__, __INT32_MAX__ etc. instead of old-fashioned typedefs and hexadecimal constants
I suggest using the compiler's headers, given the problems stated in the previous thread.
Civillian wrote:using builtins such as __builtin_va_list and family, instead of #define'ing the macros oneself
I would avoid va_list in kernel space; it seems convenient at first, but you cannot verify the number of parameters or type-check them that way.
Re: Using compiler features vs "the hard way"
Posted: Tue Mar 13, 2012 8:30 am
by Solar
bluemoon wrote:I would avoid va_list in kernel space; it seems convenient at first, but you cannot verify the number of parameters or type-check them that way.
Not necessarily bad per se. Just don't do it with any kind of userspace input...
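One way to get some of that compile-time checking back, sketched below, is GCC's format attribute on the kernel's own printf-style function; the function name here is hypothetical:
Code: Select all
#include <stdarg.h>

/* Hypothetical kernel logger: the format attribute makes GCC type-check
 * the variadic arguments against the format string at every call site. */
void kprintf(const char *fmt, ...) __attribute__((format(printf, 1, 2)));

void kprintf(const char *fmt, ...)
{
    va_list ap;

    va_start(ap, fmt);
    /* ... format into a buffer and write it to the console ... */
    va_end(ap);
}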
Re: Using compiler features vs "the hard way"
Posted: Tue Mar 13, 2012 10:33 am
by Civillian
Solar wrote:They might even be in the standard include path, but since -nostdinc forbids GCC from searching it, you would have to state the directory explicitly. (You want the GCC headers, but you don't want /usr/include...)
bluemoon wrote:I suggest to use compiler's header.
... and indeed it works. Now a minor problem is that I have this in the Makefile:
Code: Select all
FREESTAND := /linux/usr/lib/gcc/x86_64-linux-gnu/4.6/include
Re: Using compiler features vs "the hard way"
Posted: Tue Mar 13, 2012 10:53 am
by Solar
The alternative would be to get rid of -nostdinc while making sure it doesn't drag in /usr/include. Some trial & error should do the trick. Don't forget to add your findings to the tutorial.
Re: Using compiler features vs "the hard way"
Posted: Tue Mar 13, 2012 12:20 pm
by Civillian
The manpage cut down on the trial and error. I've gotten to this and it seems to be working:
Code: Select all
CFLAGS := -std=c99 -Werror $(WARNINGS) -ffreestanding -nostdlib -fno-stack-protector -m32 -O2 -isysroot ./include -I ./include
From what I read, the implicit -isysroot would be /usr/include, so it makes sense to change it. Next, using -ffreestanding without -nostdinc allows GCC to search its own freestanding headers directory.
Thanks for the inspiration Solar, this turned out cleaner than I thought.
Re: Using compiler features vs "the hard way"
Posted: Wed Mar 14, 2012 3:42 am
by JamesM
Hi,
Solar wrote:The alternative would be to get rid of -nostdinc while making sure it doesn't drag in /usr/include. Some trial & error should do the trick. Don't forget to add your findings to the tutorial.
The better alternative is to use "-nostdlibinc" instead of -nostdinc, which keeps the compiler headers on the path but removes the C library headers.
Re: Using compiler features vs "the hard way"
Posted: Wed Mar 14, 2012 3:44 am
by JamesM
Civillian wrote:using builtins such as __builtin_va_list and family, instead of #define'ing the macros oneself
bluemoon wrote:I would avoid va_list in kernel space; it seems convenient at first, but you cannot verify the number of parameters or type-check them that way.
Printf is really damn useful.
Re: Using compiler features vs "the hard way"
Posted: Wed Mar 14, 2012 3:46 am
by JamesM
Solar wrote:Personally, I would shun even __attribute__((packed)) in code, and rather use -fpack-struct on the command line. Keeps the code itself compiler-independent, and easier to read.
But that breaks platform PCS compliance. It will only run correctly on x86, as it will inadvertently pack all your structs, not just the ones you need packed for device interaction, so it would cause unaligned accesses on platforms like ARM.
That's why I would personally mark every struct that needs packing as packed, manually.
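A minimal sketch of that policy (the second struct and its field names are invented; only the hardware-dictated layout gets the attribute):
Code: Select all
#include <stdint.h>

/* Layout dictated by the CPU (GDT pointer operand): must not be padded. */
struct gdt_ptr {
    uint16_t limit;
    uint32_t base;
} __attribute__((packed));

/* Ordinary kernel data: leave it naturally aligned per the platform ABI. */
struct task {
    uint64_t id;
    uint32_t state;
};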
Re: Using compiler features vs "the hard way"
Posted: Wed Mar 14, 2012 4:06 am
by Solar
JamesM wrote:Personally, I would shun even __attribute__((packed)) in code, and rather use -fpack-struct on the command line.
It will only run correctly on x86, as it will inadvertently pack all your structs, not just the ones you need packed for device interaction, so it would cause unaligned accesses on platforms like ARM.
I admit I never worked on the ARM, so I am ignorant of its alignment requirements. But wouldn't it suffice to design any such structs with padding in mind (i.e., descending order of datatype sizes)?
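For what that suggestion would look like (struct and field names invented for illustration): with members in descending size order there is no internal padding, so packing the struct does not move any field off its natural alignment.
Code: Select all
#include <stdint.h>

/* Members in descending size order: no internal padding is needed,
 * so packing this struct leaves every field naturally aligned. */
struct request {
    uint64_t lba;     /* offset 0,  8-byte aligned */
    uint32_t length;  /* offset 8,  4-byte aligned */
    uint16_t flags;   /* offset 12, 2-byte aligned */
    uint8_t  status;  /* offset 14 */
};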
Re: Using compiler features vs "the hard way"
Posted: Wed Mar 14, 2012 9:14 am
by JamesM
Solar wrote:JamesM wrote:Personally, I would shun even __attribute__((packed)) in code, and rather use -fpack-struct on the command line.
It will only run correctly on x86, as it will inadvertently pack all your structs, not just the ones you need packed for device interaction, so it would cause unaligned accesses on platforms like ARM.
I admit I never worked on the ARM, so I am ignorant of its alignment requirements. But wouldn't it suffice to design any such structs with padding in mind (i.e., descending order of datatype sizes)?
You're still second-guessing a procedure call standard, which (a) exists for a reason, and (b) may be changed, or adhered to differently, in a later revision of the compiler.
Re: Using compiler features vs "the hard way"
Posted: Wed Mar 14, 2012 12:04 pm
by Combuster
JamesM wrote:That's why I would personally mark every struct that needs packing as packed, manually.
The problem is that there's no portable way to do that (not even with defines, thanks to #pragma pack).
Then again, packed structs are only valid for architecture-specific code, where portability is not an issue. If you need them for the second-best candidate, files, you stink for not dealing with endianness.
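A sketch of why a wrapper define only goes so far (the macro name and the struct are invented for illustration): GCC and Clang spell packing as an attribute, while MSVC wants a #pragma pack pair around the definition, which an object-like macro cannot emit.
Code: Select all
#include <stdint.h>

/* Hypothetical portability shim: fine for the attribute-based compilers,
 * but the MSVC spelling is a pragma pair that a macro like this cannot
 * express, so it is not actually portable. */
#if defined(__GNUC__) || defined(__clang__)
#  define PACKED __attribute__((packed))
#else
#  define PACKED /* would need #pragma pack(push,1) ... #pragma pack(pop) */
#endif

/* On-disk format example: the 16-bit field sits at offset 3 by design. */
struct fat_bpb {
    uint8_t  jump[3];
    uint16_t bytes_per_sector;
    uint8_t  sectors_per_cluster;
} PACKED;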