nullplan wrote:Large memory model means "make no assumptions about symbol values". This means that the compiler has to generate code that can deal with any address for all external symbols. So, if you want to call an external function, direct relative call is right out, because that only works within 2GB of the instruction.
All of the kernel and Alloy functions live in the 0x1xxxxx range, and applications need to be able to call functions in that range. There is no system call interface; they have to be called directly. I believe that requires us to use the large memory model.
nullplan wrote:
Not nice, since it needs one more register.
I believe the register can still be used between function calls, though, right?
Out of curiosity, I would like to see the degree to which this impacts performance.
nullplan wrote:
And not really tested, since all the world uses the small and kernel models (who needs more than 2GB static allocation?). While I don't know of any bugs, I have learned to be wary of uncommon options in complex external components, like the compiler and the binutils.
I agree with you on this.
For a while, I was actually starting to suspect that this was a corner-case bug in GCC.
nullplan wrote:
Kernel memory model means "all link/load time symbols are between -2GB and 0", so now the compiler can generate a relative direct call to external functions again, since it knows that the instruction and its target will be within 2GB of each other.
I see your point now.
I think this is something that we'll keep in mind when we begin profiling and optimizing the software.
I don't think I'll begin to look at that until there is a working end-point application (like a web server or something).
We're a little off topic at this point, since the newlib port works now.
If you'd like to talk more about it, then you can reach me on Keybase or open a GitHub issue.
Thanks!