Use of "-m32 -march=i686" to replace the standard toolchain

Questions about which tools to use, bugs, the best way to implement a function, etc. should go here. Don't forget to see if your question is answered in the wiki first! When in doubt, post here.
kerravon
Member
Posts: 278
Joined: Fri Nov 17, 2006 5:26 am

Re: Use of "-m32 -march=i686" to replace the standard toolch

Post by kerravon »

Ethin wrote:
kerravon wrote:The latest version of my GCC 3.2.3 was released 2021-06-09. I consider it to be the best C compiler available for Windows. It was created for a reason. It is the only thing that actually meets my requirements. It's also the best C compiler available for Freedos, and the best compiler for MVS.
Right. And the original C compiler developed by K&R is the best compiler and everybody should use it.
Personally I prefer C90 to K&R, but it's up to you what language you prefer.
kerravon wrote:
Linux GCC has always been complicated. Every distribution will compile it differently and will use different flags when building your code. The fact that you have to go all the way back to 3.2.3 if not older, and thereby suffer the fact that your OS will never be able to take advantage of any optimizations or other compiler advancements after 3.2.3 proves that.
No, it doesn't prove that at all. There is probably no barrier to GCC 11.1.0 being as neat and clean as my GCC 3.2.3. Someone just needs to put in the development effort to make it C90-compliant. Every single version of GCC *should* and probably *could* have been C90-compliant from day one.
Considering that the first version of GCC was released in 1987, that would've been, uh, kinda impossible, since C90 hadn't even been released yet.
C90 is identical to C89 other than some section renumbering. A freely available draft of C89 was circulating around 1987. Even if it hadn't been, C89 still supports almost all K&R code, and GCC continued to use K&R style right up until GCC 3.2.3.
kerravon wrote:And as for "optimizations", feel free to play "spot the speed difference". Best of luck with that. I'd rather have something that actually works. If you find a way of shaving 1% off the runtime, that's great, even though I can't detect it, but make the package C90-compliant first. Also make it target 80386 first. So that it works on essentially every 32-bit x86 computer since 1986. That's a great start.
The fact that you have to modify your C compiler -- or use a custom version of GCC 3.2.3 that's modified to be C90 compliant -- says a lot about how antiquated your toolchain is. It's irrelevant when "your" version of GCC was released; GCC 3.2.3 was released in 2003. A simple Google search would reveal this.
Yes, that is the point at which I forked it to provide a superior product. The latest version of the superior product was released less than a month ago. No need to do a Google search on that; it's written in this message.
Another nasty problem you're going to suffer is compiling your code on more modern compilers. I'm pretty sure that GCC 11 couldn't compile your code.
The code is C90-compliant as far as I know, and all compilers should compile it. I have previously compiled it with a couple of different compilers (Watcom and Borland). Someone else compiled it with IBM C too. It is up to GCC 11 to accept C90-compliant code. If it doesn't, it is GCC 11 that needs to be fixed.
And I haven't even gotten into the other disadvantages of your compiler setup, such as static analysis and the number of warnings available. And yes, there is a significant difference between how GCC 3.2.3 optimizes code and how GCC 11.1.0 does. Hell, your GCC version might even perform incorrect optimizations that were fixed in later versions of GCC.
And GCC 11 may have introduced incorrect optimizations. That's the problem with software in general - you replace one set of bugs that may not be affecting you with another set that may. Most of my work has been done fixing bugs in GCC that affected the S/370 target. The GCC developers didn't just fail to fix those bugs; they decided to delete the entire S/370 target.

Depending on your goal, you may find my fork of GCC 3.2.3 a far simpler and more useful compiler. If you want to "upgrade" to GCC 11 at a later date, you don't actually lose anything anyway. Your personal OS and application C code will be identical.
kerravon
Member
Posts: 278
Joined: Fri Nov 17, 2006 5:26 am

Re: Use of "-m32 -march=i686" to replace the standard toolch

Post by kerravon »

Octocontrabass wrote:
kerravon wrote:And as for "optimizations", feel free to play "spot the speed difference". Best of luck with that.
LTO can make a pretty big difference.
I don't know what that is, but the most CPU-intensive task I do is rebuilding GCC itself, with optimization on. When I'm doing it via emulation, the process takes maybe an hour. That's on an S/370 machine. If I'm doing it natively it takes something like a minute. I used to rebuild GCC a lot when I was doing development of it, but that has now stopped. Rebuilding my OS and C library is something I do more often, and that is a trivial amount of time.

If we're talking about a new OS project (I believe that's exactly what the OP is doing), they will be doing trivial amounts of compiling too, for years. When using an OS, the bottleneck will be (or should be) in the actual application being run, not the OS. The application may be written in Pascal rather than C, and using a non-GNU compiler. Regardless, providing the latest and greatest compiler of any language for an OS is a separate task from writing the OS itself. The OP can indeed get started straight away with -m32 -march=i686, as far as I know.
kerravon wrote:Also make it target 80386 first. So that it works on essentially every 32-bit x86 computer since 1986.
Even the latest GCC can still emit binaries that will run on a 386. The trick is finding an OS that doesn't use any x86 features that were added later and doesn't trip over any of the nasty errata in those old CPUs.
We're talking about the OP's new OS, aren't we? Your link seems to discuss system-level debugging, not code emitted by GCC (any version). As far as I know, that has been infallible 80386 code forever. If the OP starts using some low-level features in some assembler code, he will need to consider these things. In the meantime, his existing compiler should be generating perfectly suitable 80386 code that can be used immediately for his purposes. He only needs to separate out the compile/assemble/link steps and use his own C library, or whatever he is planning on supporting. Or someone else's C library, like PDPCLIB.
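For what it's worth, the separated steps look roughly like this (a sketch only; the file names and the C library are placeholders, and the exact link step depends entirely on how the OS gets loaded). Swap in -march=i386 instead of i686 if the compiler still accepts it and the 386 really matters:

gcc -m32 -march=i686 -S os.c               # compile only: produces os.s
as --32 os.s -o os.o                       # assemble
ld -m elf_i386 os.o yourclib.a -o kernel   # link against your own C library (or PDPCLIB)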
kerravon
Member
Posts: 278
Joined: Fri Nov 17, 2006 5:26 am

Re: Use of "-m32 -march=i686" to replace the standard toolch

Post by kerravon »

himAllThatBoyME wrote:I'm just starting out with OS development with C++, and have not found anything in the OsDev book about why it is necessary to use the toolchain rather than simply add "-m32 -march=i686" to a normal g++ compile command. Does this work? If not, is there a reason why I shouldn't do it?
As far as I know, it is totally unnecessary to change your toolchain if you have no interest in toolchains and are only interested in your own OS. Unless you wish to change the stack calling convention for some reason, which you probably don't.

If you show me the output of this (using "-S" to produce assembler output):

extern int x;

int bar(int c); /* bar is defined elsewhere; declared so this also compiles cleanly as C++ */

int foo(int c)
{
    c += 5;
    bar(c);
    return (x);
}

I will be able to confirm that the assembler output looks correct.
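Something like this should produce the listing (a sketch; foo.c is just whatever you named the file, and plain gcc with the same flags does the same job for C code):

g++ -m32 -march=i686 -S foo.c    # writes the generated assembler to foo.s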

So long as you have decent assembler, you should be able to produce an executable that can be loaded.

Can you tell me how you are planning to have your executable receive control? Some people use grub, some people use an EFI boot file. I use a 16-bit real mode loader which works on almost all 80386+ machines since 1986, but no longer works on very modern machines that have dropped BIOS support, even as CSM, thereby invalidating numerous OSes developed since 1986.
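If grub is the plan, the usual approach (not something I use myself, so treat this as a rough sketch rather than gospel) is to embed a Multiboot header near the start of the image so grub can find it:

/* hypothetical minimal Multiboot 1 header for a grub-loaded kernel */
#define MB_MAGIC 0x1BADB002
#define MB_FLAGS 0
static const unsigned int multiboot_header[3]
    __attribute__((section(".multiboot"), used)) =
    { MB_MAGIC, MB_FLAGS, (unsigned int)-(MB_MAGIC + MB_FLAGS) };
/* the linker script has to place .multiboot within the first 8 KiB of the image */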

Anyway, depending on how you get control, you may need some startup code written in assembler, but you may not. See if you can get this to work as the beginning of your OS:

void foo(void)
{
    /* write an 'X' into text-mode video memory */
    *((char *)0xb8000UL) = 'X';
    /* then hang so control never returns */
    for (;;) ;
}

Once again, most computers since 1986 will have a video card at 0xb8000 so you can see an 'X' on your screen, but some very modern ones don't (apparently).
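To turn that into something loadable, compile it freestanding and link it yourself, roughly like this (a sketch only - the entry symbol, load address and output format all depend on how you receive control):

gcc -m32 -march=i686 -ffreestanding -fno-pic -c foo.c
ld -m elf_i386 -e foo -Ttext 0x100000 foo.o -o kernel.elf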
Ethin
Member
Posts: 625
Joined: Sun Jun 23, 2019 5:36 pm
Location: North Dakota, United States

Re: Use of "-m32 -march=i686" to replace the standard toolch

Post by Ethin »

Sorry, but I call BS about how your GCC version is supposedly superior. Particularly since you can't even take advantage of LTO or utilize more modern ISA features that have been introduced since 3.2.3. Can you even compile the latest Linux kernel with that fossil? Saying that your "superior product" is better than a modern GCC is incredibly arrogant and presumptuous of you.
Furthermore, your code in 1990 C can be perfectly fine but still introduce undefined behavior or any number of other things. That's what warnings and static analysis are for. GCC removed the S/370 target because nobody uses it. Emulation, perhaps, but there are very few, if any, S/370 computers in use in the real world. That's especially relevant considering that S/370 can't take advantage of all the technological advancements that ISAs have made since its inception. Even Linux dropped support for the majority of the old, deprecated processor architectures because maintaining them when nobody used them was a waste of time and that time could be spent on other things.
So, try again: how is your fork of GCC better than the up to date GCC which is actually receiving bug fixes, optimization improvements, security fixes, more modern language standards, quite good static analysis, and various other enhancements by experts in the field?
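To be concrete about the diagnostics point, a build line on a recent GCC might look like this (a sketch; -fanalyzer only exists from GCC 10 onwards, and the exact warning set is a matter of taste):

gcc -std=c90 -Wall -Wextra -Wpedantic -fanalyzer -O2 -c foo.c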
I might be being harsh, perhaps, but telling someone to use GCC 3.2.3 -- or any ridiculously obsolete version of GCC -- is absurd. The *only* reason you should need to go back to such old software is if you want to target an architecture that isn't supported, and if it's not, you should perhaps investigate why and reconsider your choice.
So, can you actually prove that your version of GCC is actually superior to GCC 11, especially given that your version doesn't even have LTO support? And can we stop perpetuating this nonsense about how a cross compiler is suddenly unnecessary for OS development? There's no reason other than pure laziness to not just do it now since you'll need it soon enough anyway. So I still can't spot the logic behind "avoid a cross-compiler", unless you enjoy reverse-engineering your toolchain because it did something unexpected *every time* you update it.
Like I said, I'm being a bit harsh, but this idea of "use ancient GCC versions" or "use a host compiler for your OS" makes absolutely no sense. You can't even get away with that in Rust, and OS development is a lot simpler with Rust than it is in C/C++. But even then, you *still* need to build compiler_builtins and core at minimum for your target system, and that's before you add processes and suddenly you have to go build the entire Rust compiler so that it can properly target your OS without introducing any host-specific garbage.
kerravon
Member
Posts: 278
Joined: Fri Nov 17, 2006 5:26 am

Re: Use of "-m32 -march=i686" to replace the standard toolch

Post by kerravon »

Ethin wrote:Sorry, but I call BS about how your GCC version is supposedly superior. Particularly since you can't even take advantage of LTO or utilize more modern ISA features that have been introduced since 3.2.3. Can you even compile the latest Linux kernel with that fossil? Saying that your "superior product" is better than a modern GCC is incredibly arrogant and presumptuous of you.
My understanding is that the OP is trying to compile his own OS, not Linux. If Linux is C90-compliant it should compile on my GCC. I have no idea whether Linux is or not. I do know that my own OS is, so it works perfectly fine with my GCC.
Furthermore, your code in 1990 C can be perfectly fine but still introduce undefined behavior or any number of other things.
I have no idea what you are talking about. My OS works fine. I don't know what further proof is required beyond that. That is the absolute litmus test. Not some theoretical disadvantage. Regardless, if you would like to make available a superior C compiler that is C90-compliant, so it doesn't need the entire Unix baggage, and can be built using pdmake (i.e. not requiring "configure", which is a shell script that doesn't exist on Windows), that would be great. Please point me to the source and Win32 binaries.
That's what warnings and static analysis are for. GCC removed the S/370 target because nobody uses it. Emulation, perhaps, but there are very few, if any, S/370 computers in use in the real world.
Modern z/Arch computers run S/370 programs perfectly fine. Once again, GCCMVS 3.2.3 provides the best C compiler available for z/OS running on z/Arch.
That's especially relevant considering that S/370 can't take advantage of all the technological advancements that ISAs have made since its inception. Even Linux dropped support for the majority of the old, deprecated processor architectures because maintaining them when nobody used them was a waste of time and that time could be spent on other things.
I don't consider old computers to be "deprecated".
So, try again: how is your fork of GCC better than the up to date GCC which is actually receiving bug fixes, optimization improvements, security fixes, more modern language standards, quite good static analysis, and various other enhancements by experts in the field?
Because it works, instead of being a constant mess.
I might be being harsh, perhaps, but telling someone to use GCC 3.2.3 -- or any ridiculously obsolete version of GCC -- is absurd.
It's not old or obsolete, it's less than a month old and in active use.
The *only* reason you should need to go back to such old software is if you want to target an architecture that isn't supported, and if it's not, you should perhaps investigate why and reconsider your choice.
Or maybe you should reconsider your choice. Or maybe the OP should reconsider his choice. I've been considering my choice for 3 decades, and now have something I want, other than the fact that it isn't public domain, so will eventually try to replace it with something that is. It's just not the highest priority at the moment.
So, can you actually prove that your version of GCC is actually superior to GCC 11, especially given that your version doesn't even have LTO support?
Yes, mine works on both Freedos with HX and on PDOS/386. Out of the box. And is FAR more likely to work on the OP's OS than ANYTHING else.
And can we stop perpetuating this nonsense about how a cross compiler is suddenly unnecessary for OS development?
It is not me who is perpetuating the nonsense. If you want to ask something, you can ask me why I didn't call BS on this nonsense long ago.
There's no reason other than pure laziness
No. A compiler is just a tool. You shouldn't need to get involved with it. It's not laziness. You should be working on your own OS, not the tools.
to not just do it now since you'll need it soon enough anyway.
No, you do not need it unless you are targeting a different processor, or changing the stack calling convention. The OP in this case doesn't appear to be doing either, so should have been given honest advice instead of being sent on a difficult road.
So I still can't spot the logic behind "avoid a cross-compiler", unless you enjoy reverse-engineering your toolchain because it did something unexpected *every time* you update it.
It is not needed. It is a waste of time. Two good reasons from my perspective.
Ethin
Member
Posts: 625
Joined: Sun Jun 23, 2019 5:36 pm
Location: North Dakota, United States

Re: Use of "-m32 -march=i686" to replace the standard toolch

Post by Ethin »

kerravon wrote:
Ethin wrote:Sorry, but I call BS about how your GCC version is supposedly superior. Particularly since you can't even take advantage of LTO or utilize more modern ISA features that have been introduced since 3.2.3. Can you even compile the latest Linux kernel with that fossil? Saying that your "superior product" is better than a modern GCC is incredibly arrogant and presumptuous of you.
My understanding is that the OP is trying to compile his own OS, not Linux. If Linux is C90-compliant it should compile on my GCC. I have no idea whether Linux is or not. I do know that my own OS is, so it works perfectly fine with my GCC.
I feel like you're deliberately missing the point or trying to alter the narrative here with statements like this. You're not actually answering the question.
kerravon wrote:
Furthermore, your code in 1990 C can be perfectly fine but still introduce undefined behavior or any number of other things.
I have no idea what you are talking about. My OS works fine. I don't know what further proof is required beyond that. That is the absolute litmus test. Not some theoretical disadvantage. Regardless, if you would like to make available a superior C compiler that is C90-compliant, so it doesn't need the entire Unix baggage, and can be built using pdmake (i.e. not requiring "configure", which is a shell script that doesn't exist on Windows), that would be great. Please point me to the source and Win32 binaries.
You do know that you have WSL now, right? There is very little incentive for building an OS on Windows. The configure script isn't available for Windows because, well, GCC wasn't made for Windows to begin with. You want a compiler that works on Windows? I point you to LLVM/Clang.
kerravon wrote:
That's what warnings and static analysis are for. GCC removed the S/370 target because nobody uses it. Emulation, perhaps, but there are very few, if any, S/370 computers in use in the real world.
Modern z/Arch computers run S/370 programs perfectly fine. Once again, GCCMVS 3.2.3 provides the best C compiler available for z/OS running on z/Arch.
And what do you define as "modern"? Last time I checked, S/370 and z/Arch had been discontinued over a decade ago. Therefore, not modern.
kerravon wrote:
That's especially relevant considering that S/370 can't take advantage of all the technological advancements that ISAs have made since its inception. Even Linux dropped support for the majority of the old, deprecated processor architectures because maintaining them when nobody used them was a waste of time and that time could be spent on other things.
I don't consider old computers to be "deprecated".
I think that you'll find many people disagreeing with you on that point. Myself included. After all, you don't see AMD or Intel selling you 8086s anymore, do you?
kerravon wrote:
So, try again: how is your fork of GCC better than the up to date GCC which is actually receiving bug fixes, optimization improvements, security fixes, more modern language standards, quite good static analysis, and various other enhancements by experts in the field?
Because it works, instead of being a constant mess.
Again, I disagree. I don't find the cross-compilation setup to be a "constant mess" at all.
kerravon wrote:
I might be being harsh, perhaps, but telling someone to use GCC 3.2.3 -- or any ridiculously obsolete version of GCC -- is absurd.
It's not old or obsolete, it's less than a month old and in active use.
By whom, other than you? It's old and deprecated no matter how you spin it. If it doesn't come on any software distribution anymore that's used by a large number of users, it's deprecated. If it isn't supported by its original authors, it's deprecated and discontinued. Just because you maintain a fork of it does not alter that fact in any way.
kerravon wrote:
The *only* reason you should need to go back to such old software is if you want to target an architecture that isn't supported, and if it's not, you should perhaps investigate why and reconsider your choice.
Or maybe you should reconsider your choice. Or maybe the OP should reconsider his choice. I've been considering my choice for 3 decades, and now have something I want, other than the fact that it isn't public domain, so will eventually try to replace it with something that is. It's just not the highest priority at the moment.
That's all fine and dandy, but stop throwing around misinformation, like a cross compiler being unnecessary.
kerravon wrote:
So, can you actually prove that your version of GCC is actually superior to GCC 11, especially given that your version doesn't even have LTO support?
Yes, mine works on both Freedos with HX and on PDOS/386. Out of the box. And is FAR more likely to work on the OP's OS than ANYTHING else.
Uh huh. And GCC 11 is just as likely to work on the OP's OS as well, given that if they're on Windows they have WSL and if they're on Linux it just automatically works. You still haven't actually proved anything, other than making bold claims without any evidence.
kerravon wrote:
And can we stop perpetuating this nonsense about how a cross compiler is suddenly unnecessary for OS development?
It is not me who is perpetuating the nonsense. If you want to ask something, you can ask me why I didn't call BS on this nonsense long ago.
Yes, it is you and people who believe what you do who perpetuate the nonsense. The wiki explains quite clearly why a cross compiler is necessary. People like Korona have also explained why this is so. Throwing around supposed claims without proof does not change that.
kerravon wrote:
There's no reason other than pure laziness
No. A compiler is just a tool. You shouldn't need to get involved with it. It's not laziness. You should be working on your own OS, not the tools.
Who said anything about messing around with the compiler's code? You hardly have to change anything in GCC to build a cross-compiler toolchain, and all of that can be trivially automated. It's still pure laziness.
kerravon wrote:
to not just do it now since you'll need it soon enough anyway.
No, you do not need it unless you are targeting a different processor, or changing the stack calling convention. The OP in this case doesn't appear to be doing either, so should have been given honest advice instead of being sent on a difficult road.
This is BS. If this is supposedly the case, then: why do SeaBIOS and Coreboot build a cross-compiler toolchain? After all, by that logic building firmware without a cross compiler should work perfectly fine. Why does my Rust toolchain build compiler-rt, core and alloc for my OS every time it's built? After all, by that logic I should be able to use the ones that come with the compiler, right? Clearly, you're wrong, because if you were right, people far more knowledgeable than you would've done what you're claiming is possible long, long ago. You wouldn't need a cross-compiler for SeaBIOS, or nasty hacks for OVMF to build. It would just "work".
Furthermore, you go on to state that we're sending people down a difficult road. How, exactly, are we doing that? Building a cross-compiler toolchain for OS development is quite simple. The fact that it doesn't work on Windows is completely irrelevant, and always has been. And now that we have WSL it's even more irrelevant.
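For the record, the "quite simple" part really is just a handful of configure-and-make steps, something like this (version numbers and the install prefix are placeholders; the wiki has the full walkthrough):

export PREFIX="$HOME/opt/cross"
export TARGET=i686-elf
export PATH="$PREFIX/bin:$PATH"

mkdir build-binutils && cd build-binutils
../binutils-2.xx/configure --target=$TARGET --prefix="$PREFIX" --with-sysroot --disable-nls --disable-werror
make && make install

cd .. && mkdir build-gcc && cd build-gcc
../gcc-11.x/configure --target=$TARGET --prefix="$PREFIX" --disable-nls --enable-languages=c,c++ --without-headers
make all-gcc all-target-libgcc
make install-gcc install-target-libgcc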
kerravon wrote:
So I still can't spot the logic behind "avoid a cross-compiler", unless you enjoy reverse-engineering your toolchain because it did something unexpected *every time* you update it.
It is not needed. It is a waste of time. Two good reasons from my perspective.
Alright then. Build SeaBIOS and Coreboot with a host compiler if it's not needed. We'll see how far you get on that. It's computer firmware, but the same cross-compiler infrastructure is necessary. So if it's not needed you should be able to build it with your host GCC, regardless of whether it's firmware or an operating system.
kerravon
Member
Posts: 278
Joined: Fri Nov 17, 2006 5:26 am

Re: Use of "-m32 -march=i686" to replace the standard toolch

Post by kerravon »

Ethin wrote:
kerravon wrote:
Ethin wrote:Sorry, but I call BS about how your GCC version is supposedly superior. Particularly since you can't even take advantage of LTO or utilize more modern ISA features that have been introduced since 3.2.3. Can you even compile the latest Linux kernel with that fossil? Saying that your "superior product" is better than a modern GCC is incredibly arrogant and presumptuous of you.
My understanding is that the OP is trying to compile his own OS, not Linux. If Linux is C90-compliant it should compile on my GCC. I have no idea whether Linux is or not. I do know that my own OS is, so it works perfectly fine with my GCC.
I feel like you're deliberately missing the point or trying to alter the narrative here with statements like this. You're not actually answering the question.
I believe I have answered every single one of your questions.
kerravon wrote:
Furthermore, your code in 1990 C can be perfectly fine but still introduce undefined behavior or any number of other things.
I have no idea what you are talking about. My OS works fine. I don't know what further proof is required beyond that. That is the absolute litmus test. Not some theoretical disadvantage. Regardless, if you would like to make available a superior C compiler that is C90-compliant, so it doesn't need the entire Unix baggage, and can be built using pdmake (i.e. not requiring "configure", which is a shell script that doesn't exist on Windows), that would be great. Please point me to the source and Win32 binaries.
You do know that you have WSL now, right? There is very little incentive for building an OS on Windows. The configure script isn't available for Windows because, well, GCC wasn't made for Windows to begin with. You want a compiler that works on Windows? I point you to LLVM/Clang.
Or I point you, and myself, to gccwin.

To your point about WSL - I don't know what that is and I don't care what it is. Certainly nothing by that name exists on PDOS/386, which is the environment I am most interested in. If someone adds that in the future, cool. Maybe I'll take a look at it. Maybe.
kerravon wrote:
That's what warnings and static analysis are for. GCC removed the S/370 target because nobody uses it. Emulation, perhaps, but there are very few, if any, S/370 computers in use in the real world.
Modern z/Arch computers run S/370 programs perfectly fine. Once again, GCCMVS 3.2.3 provides the best C compiler available for z/OS running on z/Arch.
And what do you define as "modern"? Last time I checked, S/370 and z/Arch had been discontinued over a decade ago. Therefore, not modern.
z/Arch has not been discontinued. z/Linux and z/OS still run on it today, and there is no easy way for the z/OS users to get off it, so they are stuck with IBM's monopoly. The patents on the first version of z/Arch have just expired, though, which may help.
kerravon wrote:
That's especially relevant considering that S/370 can't take advantage of all the technological advancements that ISAs have made since its inception. Even Linux dropped support for the majority of the old, deprecated processor architectures because maintaining them when nobody used them was a waste of time and that time could be spent on other things.
I don't consider old computers to be "deprecated".
I think that you'll find many people disagreeing with you on that point. Myself included. After all, you don't see AMD or Intel selling you 8086s anymore, do you?
No. But I was working with someone last night who was using an old computer that was working perfectly fine. But then she wasn't a bot trying to get fresh sales for AMD/Intel by insisting she stop working on her perfectly viable computer.
kerravon wrote:
So, try again: how is your fork of GCC better than the up to date GCC which is actually receiving bug fixes, optimization improvements, security fixes, more modern language standards, quite good static analysis, and various other enhancements by experts in the field?
Because it works, instead of being a constant mess.
Again, I disagree. I don't find the cross-compilation setup to be a "constant mess" at all.
GCC has always been a mess. Finally someone has patiently sat down with a fork to make it a non-mess. That fork is now the most viable strain of GCC.
kerravon wrote:
I might be being harsh, perhaps, but telling someone to use GCC 3.2.3 -- or any ridiculously obsolete version of GCC -- is absurd.
It's not old or obsolete, it's less than a month old and in active use.
By whom, other than you?
Argumentum ad populum is a logical fallacy. Look it up. This is a question of technology, not of who has the best marketing campaign to fool the largest number of gullible people into upgrading their computers unnecessarily.
It's old and deprecated no matter how you spin it.
Nope. It's less than a month old, and produces executables, including itself, that work almost anywhere. Nothing else comes close.
If it doesn't come on any software distribution anymore that's used by a large number of users, it's deprecated.
If you are using "deprecated" to mean "not popular", then that is true, so I will concede that.
If it isn't supported by its original authors, it's deprecated and discontinued. Just because you maintain a fork of it does not alter that fact in any way.
Stallman still supports GCC, does he? OK, cool. Let's hope he never dies, or the product becomes deprecated and discontinued.
kerravon wrote:
The *only* reason you should need to go back to such old software is if you want to target an architecture that isn't supported, and if it's not, you should perhaps investigate why and reconsider your choice.
Or maybe you should reconsider your choice. Or maybe the OP should reconsider his choice. I've been considering my choice for 3 decades, and now have something I want, other than the fact that it isn't public domain, so will eventually try to replace it with something that is. It's just not the highest priority at the moment.
That's all fine and dandy, but stop throwing around misinformation, like a cross compiler being unnecessary.
kerravon wrote:
So, can you actually prove that your version of GCC is actually superior to GCC 11, especially given that your version doesn't even have LTO support?
Yes, mine works on both Freedos with HX and on PDOS/386. Out of the box. And is FAR more likely to work on the OP's OS than ANYTHING else.
Uh huh. And GCC 11 is just as likely to work on the OP's OS as well, given that if they're on Windows they have WSL and if they're on Linux it just automatically works. You still haven't actually proved anything, other than making bold claims without any evidence.
Sorry for the confusion. The OP hasn't written his OS yet. As he develops it, my GCC will start working LONG before GCC 11 does. It's a far simpler product. And what I proved was that my GCC works on both Freedos/HX and PDOS/386. That much is beyond question.
kerravon wrote:
And can we stop perpetuating this nonsense about how a cross compiler is suddenly unnecessary for OS development?
It is not me who is perpetuating the nonsense. If you want to ask something, you can ask me why I didn't call BS on this nonsense long ago.
Yes, it is you and people who believe what you do who perpetuate the nonsense. The wiki explains quite clearly why a cross compiler is necessary. People like Korona have also explained why this is so. Throwing around supposed claims without proof does not change that.
kerravon wrote:
There's no reason other than pure laziness
No. A compiler is just a tool. You shouldn't need to get involved with it. It's not laziness. You should be working on your own OS, not the tools.
Who said anything about messing around with the compiler's code? You hardly have to change anything in GCC to build a cross-compiler toolchain, and all of that can be trivially automated. It's still pure laziness.
kerravon wrote:
to not just do it now since you'll need it soon enough anyway.
No, you do not need it unless you are targeting a different processor, or changing the stack calling convention. The OP in this case doesn't appear to be doing either, so should have been given honest advice instead of being sent on a difficult road.
This is BS. If this is supposedly the case, then: why do SeaBIOS and Coreboot build a cross-compiler toolchain?
No idea. Maybe they were misinformed too. Or maybe their situation is different from both mine and probably the OP's. I can tell you that if you are using gccwin, and you suddenly have a desire to write your own OS, you can go right ahead with zero changes. Unless you want to change the stack calling convention. Or target a different processor.
After all, by that logic building firmware without a cross compiler should work perfectly fine. Why does my Rust toolchain build compiler-rt, core and alloc for my OS every time it's built? After all, by that logic I should be able to use the ones that come with the compiler, right? Clearly, you're wrong, because if you were right, people far more knowledgeable than you would've done what you're claiming is possible long, long ago. You wouldn't need a cross-compiler for SeaBIOS, or nasty hacks for OVMF to build. It would just "work".
gccwin "just works" for anyone who wants to build their own OS. If you insist on using an inferior product, or insist on being misinformed by these people who are supposedly more knowledgeable than me (I wonder what happens when these people boot up PDOS/386 and say "but isn't this impossible?"), or if you have some obscure situation, then please, go ahead and go down your necessarily difficult road.

When I see the OP's actual assembler output, I will be able to confirm that his existing compiler produces perfectly fine assembler suitable for building an OS, just as gccwin does. It's not that surprising.
Furthermore, you go on to state that we're sending people down a difficult road. How, exactly, are we doing that? Building a cross-compiler toolchain for OS development is quite simple. The fact that it doesn't work on Windows is completely irrelevant, and always has been. And now that we have WSL it's even more irrelevant.
If you think that processes that don't work on Windows, or Freedos, or PDOS/386, or the OP's new OS are "completely irrelevant", that's up to you and the OP to decide. I don't care. He's welcome to go and build 50 million cross-compilers for all I care. He needs 0 though. That's the only technical point I am interested in.
kerravon wrote:
So I still can't spot the logic behind "avoid a cross-compiler", unless you enjoy reverse-engineering your toolchain because it did something unexpected *every time* you update it.
It is not needed. It is a waste of time. Two good reasons from my perspective.
Alright then. Build SeaBIOS and Coreboot with a host compiler if it's not needed. We'll see how far you get on that. It's computer firmware, but the same cross-compiler infrastructure is necessary. So if it's not needed you should be able to build it with your host GCC, regardless of whether it's firmware or an operating system.
How about you build PDOS/386 instead? You're the one claiming that that is impossible. And give me a bit more time with the OP, and you'll see what he can do with his existing compiler. Then you can forward this chain to the SeaBIOS people and say "hey guys, this might interest you".
Locked