Well, of course bitfields are architecture-dependent ... just take a look at PowerPC processors, which are big-endian, while Intel processors are little-endian.
One will store the lowest byte of 0x12345678 first and the other will store the highest byte first. You shouldn't expect the same compiler, targeting both architectures, to produce a bit-for-bit match of a bitfield holding 0x12345678 ... the very same way you shouldn't expect union { char c[4]; int i; } to be portable across those architectures ...
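A quick way to see it (a minimal sketch, assuming a 32-bit int):

Code: Select all
#include <stdio.h>

int main(void)
{
    union { char c[4]; int i; } u;
    u.i = 0x12345678;
    /* little-endian (Intel): prints 0x78; big-endian (PowerPC): prints 0x12 */
    printf("first byte in memory: 0x%02x\n", (unsigned char)u.c[0]);
    return 0;
}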
However, you mainly need "trustable" bitfields for architecture-dependent things like the IDT, page table entries, etc., so it does no harm if those pieces of code are not portable.
Those architecture-dependent files are certainly full of asm(...) definitions as well, so who cares if they're not portable to another compiler family? If you really need another compiler to support your kernel (hum, ever tried to compile the Linux kernel with MSVC++?), you'll have to port the architecture/platform-dependent code anyway, including the bitfield definitions and asm wrappers like setPDBR(), FlushTranslationLookasideBuffer(), enable() and disable() ...
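For the curious, such wrappers could look roughly like this on i386, assuming GCC-style inline asm; the names are the ones above, and the bodies are just a sketch, not a definitive implementation:

Code: Select all
/* arch/i386/cpu.h -- sketch; names taken from the post above */
static inline void setPDBR(unsigned long pd_phys)
{
    __asm__ volatile ("movl %0, %%cr3" : : "r"(pd_phys) : "memory");
}

static inline void FlushTranslationLookasideBuffer(void)
{
    /* reloading CR3 flushes all non-global TLB entries on i386 */
    unsigned long cr3;
    __asm__ volatile ("movl %%cr3, %0" : "=r"(cr3));
    __asm__ volatile ("movl %0, %%cr3" : : "r"(cr3) : "memory");
}

static inline void enable(void)  { __asm__ volatile ("sti"); }
static inline void disable(void) { __asm__ volatile ("cli"); }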
Just bind those "dependent" parts to a well-known place and try hard to avoid architecture/environment-dependent stuff elsewhere.
The advantage of bitfields in that domain is that the rest of the code uses a *name* to access a part of the member. Whether arch/i386/stuff.h declares
Code: Select all
struct version {
    int major:10;
    int minor:10;
    int revision:10;
    int :2;
};
or arch/ppc/stuff.h declares
Code: Select all
struct version {
    int :2;
    int revision:10;
    int minor:10;
    int major:10;
};
to match the same bit pattern is transparent to code that performs
v.major >= required.major ;)
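For instance, a comparison helper written purely against the field names (version_at_least() is a hypothetical name) compiles unchanged against either header:

Code: Select all
/* hypothetical helper: touches fields only by name, so either
   declaration order above works without changes */
static int version_at_least(struct version v, struct version required)
{
    if (v.major != required.major) return v.major > required.major;
    if (v.minor != required.minor) return v.minor > required.minor;
    return v.revision >= required.revision;
}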