[Solved] Insanely large C++ binary. Bad linking to blame ?

Question about which tools to use, bugs, the best way to implement a function, etc should go here. Don't forget to see if your question is answered in the wiki first! When in doubt post here.
Neolander
Member
Posts: 228
Joined: Tue Mar 23, 2010 3:01 pm
Location: Uppsala, Sweden

[Solved] Insanely large C++ binary. Bad linking to blame ?

Post by Neolander »

Hello!

I've spent a month's work on my 32-bit "bootstrap" kernel, and now I think it's time to start work on the 64-bit "real" one. Hence I grabbed the C++ Barebones tutorial to get information on C++ handling and some basic working code that I can tweak later.
However, something horribly wrong seems to be going on, because my final binary image is... guess what... 2 MB large, for a simple "hello world"!
As Microsoft said, the "wow" starts now! :mrgreen:
Could you please help me find out what's wrong? I think there's an issue with the linker script, but I don't see what's wrong with it...

Here's the stuff that matters in my build script:


#Compile
CXX=x86_64-elf-g++
LD=x86_64-elf-ld
CXXFLAGS="-Wall -Wextra -Werror -nostdlib -nostartfiles -nodefaultlibs -fno-builtin -fno-exceptions -fno-rtti -fno-stack-protector"
INCLUDES="-I../../arch/x86_64/debug/ -I../../arch/x86_64/include/ -I../../include/"

echo \* Making main kernel...
cd bin/kernel
$CXX -c ../../init/kernel.cpp $CXXFLAGS $INCLUDES
$LD -T ../../support/kernel_linker.lds -o kernel.bin *.o
cd ../..
Here's kernel.cpp


char* const vmem = (char *) 0xb8000;

extern "C" int kmain() {
  //Okay, everything is ready
  vmem[0] = 'R';
  
  return 0;
}
Here's kernel_linker.lds (just a slightly modified version of the C++ Barebones one).


ENTRY(kmain)

SECTIONS
{
  . = 0x200000;

  .text :
  {
    *(.text*)
    *(.gnu.linkonce.t*)
  }

  .rodata ALIGN(4096) :
  {
    *(.rodata*)
    *(.gnu.linkonce.r*)
  }
  
  .data ALIGN(4096) :
  {
    start_ctors = .;
    *(.ctor*)
    end_ctors = .;

    start_dtors = .;
    *(.dtor*)
    end_dtors = .;
  
    *(.data*)
    *(.gnu.linkonce.d*)
  }

  .bss ALIGN(4096) :
  {
    *(.COMMON*)
    *(.bss*)
    *(.gnu.linkonce.b*)
  }
  
   /DISCARD/ :
   {
    *(.comment)
    *(.eh_frame) /* You should discard this unless you're implementing runtime support for C++ exceptions. */
   }
}
kernel.o weighs 1.3 KB. That's maybe a bit large, but it remains reasonable.
kernel.bin, on the other hand, weighs 2.0 MB, as I stated before. This is what makes me think that the linking step is to blame.

I'm still not very familiar with LD scripts and linking theory, despite having learned a lot about them while writing my previous C kernel, but AFAIK, and according to the LD script doc that I use ( http://sourceware.org/binutils/docs-2.1 ... le-Example ),


  . = 0x200000;
is used to tell the ELF loader that the image should be loaded at 2 MB, and should have no influence on the kernel image size. However, nothing else looks suspicious in this LD script...
Last edited by Neolander on Mon May 03, 2010 1:19 pm, edited 2 times in total.
Combuster
Member
Posts: 9301
Joined: Wed Oct 18, 2006 3:45 am
Libera.chat IRC: [com]buster
Location: On the balcony, where I can actually keep 1½m distance

Re: Insanely large C++ binary. Bad linking to blame ?

Post by Combuster »

I don't see anything about the output format in either the compilation flags or the linker script; you seem to assume a flat binary, but with the current information it should be a 64-bit ELF...

What does objdump have to say about your kernel image?
"Certainly avoid yourself. He is a newbie and might not realize it. You'll hate his code deeply a few years down the road." - Sortie
[ My OS ] [ VDisk/SFS ]

Re: Insanely large C++ binary. Bad linking to blame ?

Post by Neolander »

Combuster wrote:I don't see anything about the output format in either the compilation flags or the linker script; you seem to assume a flat binary, but with the current information it should be a 64-bit ELF...
This is not necessary, since the LD version I use for linking the main kernel is specifically built for ELF64 x86_64 targets.
What does objdump have to say about your kernel image?
This:


gralouf@nutella-pardus Code $ x86_64-elf-objdump -xst bin/kernel/kernel.bin

bin/kernel/kernel.bin:     file format elf64-x86-64
bin/kernel/kernel.bin
architecture: i386:x86-64, flags 0x00000112:
EXEC_P, HAS_SYMS, D_PAGED
start address 0x0000000000200000

Program Header:
    LOAD off    0x0000000000200000 vaddr 0x0000000000200000 paddr 0x0000000000200000 align 2**21
         filesz 0x0000000000001008 memsz 0x0000000000001008 flags r-x

Sections:
Idx Name          Size      VMA               LMA               File off  Algn
  0 .text         00000013  0000000000200000  0000000000200000  00200000  2**2
                  CONTENTS, ALLOC, LOAD, READONLY, CODE
  1 .rodata       00000008  0000000000201000  0000000000201000  00201000  2**3
                  CONTENTS, ALLOC, LOAD, READONLY, DATA
SYMBOL TABLE:
0000000000200000 l    d  .text  0000000000000000 .text
0000000000201000 l    d  .rodata        0000000000000000 .rodata
0000000000000000 l    df *ABS*  0000000000000000 kernel.cpp
0000000000201000 l     O .rodata        0000000000000008 _ZL4vmem
0000000000202000 g       .rodata        0000000000000000 start_ctors
0000000000202000 g       .rodata        0000000000000000 start_dtors
0000000000202000 g       .rodata        0000000000000000 end_ctors
0000000000202000 g       .rodata        0000000000000000 end_dtors
0000000000200000 g     F .text  0000000000000013 kmain


Contents of section .text:
 200000 554889e5 b800800b 00c60052 b8000000  UH.........R....
 200010 00c9c3                               ...
Contents of section .rodata:
 201000 00800b00 00000000                    ........
gralouf@nutella-pardus Code $
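A quick sanity check on those numbers (a sketch in Python; the offsets are read straight from the objdump output above): the last section, .rodata, sits at file offset 0x201000 and is 8 bytes long, so the file can't be smaller than 0x201008 bytes, i.e. just over 2 MB, and almost all of it is the padding before .text.

```python
# Sizes implied by the objdump output above.
rodata_off = 0x201000   # "File off" of .rodata, the last section
rodata_size = 0x8

min_file_size = rodata_off + rodata_size
print(min_file_size)                    # 2101256 bytes
print(round(min_file_size / 2**20, 2))  # ~2.0 MiB, matching the 2 MB observed

text_off = 0x200000                     # "File off" of .text
print(text_off)                         # 2097152 bytes of padding before any code
```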
Tobiking
Posts: 6
Joined: Mon Oct 26, 2009 3:43 pm

Re: Insanely large C++ binary. Bad linking to blame ?

Post by Tobiking »

You could try using "--build-id=none" and/or "--nmagic" as additional ld flags. They work for the ld included in Ubuntu 9.10 and higher, which also creates big binaries in some cases.
JamesM
Member
Posts: 2935
Joined: Tue Jul 10, 2007 5:27 am
Location: York, United Kingdom

Re: Insanely large C++ binary. Bad linking to blame ?

Post by JamesM »


Sections:
Idx Name          Size      VMA               LMA               File off  Algn
  0 .text         00000013  0000000000200000  0000000000200000  00200000  2**2
                  CONTENTS, ALLOC, LOAD, READONLY, CODE
  1 .rodata       00000008  0000000000201000  0000000000201000  00201000  2**3
                  CONTENTS, ALLOC, LOAD, READONLY, DATA
The problem here is that the file offset (0x200000) is the same as the LMA. That is, there is exactly 2 MB of padding before the code starts in the ELF.

That's what the problem is. As for the solution - I'm sorry but I don't really know. I cannot see anything wrong with your linker script or command line, and don't know why the linker is setting the file offset equal to the load memory address.

Now that I've pointed out the problem maybe someone else can see the solution?

James

Re: Insanely large C++ binary. Bad linking to blame ?

Post by Neolander »

Tried something on my side: if I change the


  . = 0x200000;
thing to


  . = 0x100000;
Binary size goes down to 1 MB, which confirms your hypothesis: clearly, LD has smoked some bad weed and insists on using the same offset in the ELF file as the one used for loading purposes. But why?
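A hypothetical sketch of that relationship: if the linker pins the first segment's file offset to its load address, the binary size tracks the base address almost one-to-one (0x1008 is the combined .text + .rodata size from the objdump output earlier in the thread).

```python
contents = 0x1008  # combined size of .text and .rodata

# With the file offset forced equal to the load address, the file is
# roughly "base address + contents" bytes long.
for base in (0x200000, 0x100000):
    size = base + contents
    print(hex(base), "->", round(size / 2**20, 2), "MiB")
# 0x200000 -> 2.0 MiB, 0x100000 -> 1.0 MiB
```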
Last edited by Neolander on Mon May 03, 2010 4:19 am, edited 1 time in total.
Selenic
Member
Posts: 123
Joined: Sat Jan 23, 2010 2:56 pm

Re: Insanely large C++ binary. Bad linking to blame ?

Post by Selenic »

I tried something like this a while ago. Inspecting it with objdump said that it had 2M-aligned the entire file; because you're starting at a 2M boundary, that means that the header itself must be in the page before, hence the huge size. If you set it to load at, say, 2M+4k (ie, 0x201000), your problem should go away, as it can fit the headers into the first 4k. That or use -n or --nmagic, which disables alignment completely.

Re: Insanely large C++ binary. Bad linking to blame ?

Post by Neolander »

Selenic wrote:I tried something like this a while ago. Inspecting it with objdump said that it had 2M-aligned the entire file; because you're starting at a 2M boundary, that means that the header itself must be in the page before, hence the huge size. If you set it to load at, say, 2M+4k (ie, 0x201000), your problem should go away, as it can fit the headers into the first 4k. That or use -n or --nmagic, which disables alignment completely.
I didn't understand all of that, but the technical issue is fixed now: 0x201000 works. Thanks!

Now could you please explain to me why LD can't use the 2M-4K address for the header?
xenos
Member
Posts: 1121
Joined: Thu Aug 11, 2005 11:00 pm
Libera.chat IRC: xenos1984
Location: Tartu, Estonia

Re: Insanely large C++ binary. Bad linking to blame ?

Post by xenos »

I had quite a similar problem, and I finally solved it by adding -z max-page-size=0x1000 to my linker flags, which reduces the page size the linker assumes. There are some more possible solutions in the Wiki:

http://wiki.osdev.org/Creating_a_64-bit ... g.21.21.21
Programmers' Hardware Database // GitHub user: xenos1984; OS project: NOS

Re: Insanely large C++ binary. Bad linking to blame ?

Post by Neolander »

XenOS wrote:I had quite a similar problem, and I finally solved it by adding -z max-page-size=0x1000 to my linker flags, which reduces the page size the linker assumes. There are some more possible solutions in the Wiki:

http://wiki.osdev.org/Creating_a_64-bit ... g.21.21.21
Sounds less like a hack, and hence is adopted :mrgreen: Still, if someone would take the time to teach me what's going on... :|

Re: Insanely large C++ binary. Bad linking to blame ?

Post by Selenic »

Neolander wrote:Now could you please explain to me why LD can't use the 2M-4K address for the header ?
Quoting the ELF standard (in the part about program headers), "p_vaddr should equal p_offset, modulo p_align"
What this means is that (unless you use -n, which ignores that part) the address it is loaded to (p_vaddr) and the offset within the ELF file (p_offset) should be equal modulo the page size (which is 2M here).

So if the virtual address is 2M, the file offset must be a multiple of 2M; as you need the header before that, the lowest possible value is 2M.
On the other hand, if the virtual address is 2M+4k, the file offset must be a multiple of 2M, plus 4k; the header fits into that 4k, allowing the code to start at 0*2M + 4k = 4k.
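Selenic's rule can be sketched in a few lines of Python (the 0x1000 header budget is a hypothetical round figure for the ELF header plus program headers, not a value from this thread):

```python
def min_segment_offset(vaddr, align, headers_end=0x1000):
    """Smallest file offset satisfying p_offset % align == p_vaddr % align
    while leaving room for the ELF/program headers before the segment."""
    off = vaddr % align
    while off < headers_end:
        off += align
    return off

# Load address 2M, 2M alignment: offset must be a multiple of 2M -> 2M of padding.
print(hex(min_segment_offset(0x200000, 0x200000)))  # 0x200000

# Load address 2M+4k: offset 0x1000 already satisfies the congruence -> no padding.
print(hex(min_segment_offset(0x201000, 0x200000)))  # 0x1000

# With a 4k alignment (e.g. -z max-page-size=0x1000), 2M also works cheaply.
print(hex(min_segment_offset(0x200000, 0x1000)))    # 0x1000
```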

Re: Insanely large C++ binary. Bad linking to blame ?

Post by Neolander »

Selenic wrote:
Neolander wrote:Now could you please explain to me why LD can't use the 2M-4K address for the header ?
Quoting the ELF standard (in the part about program headers), "p_vaddr should equal p_offset, modulo p_align"
What this means is that (unless you use -n, which ignores that part) the address it is loaded to (p_vaddr) and the offset within the ELF file (p_offset) should be equal modulo the page size (which is 2M here).

So if the virtual address is 2M, the file offset must be a multiple of 2M; as you need the header before that, the lowest possible value is 2M.
On the other hand, if the virtual address is 2M+4k, the file offset must be a multiple of 2M, plus 4k; the header fits into that 4k, allowing the code to start at 0*2M + 4k = 4k.
Why is the page size equal to 2M? Isn't it usually 4 KB? Or does it change on x86_64?

Re: Insanely large C++ binary. Bad linking to blame ?

Post by Selenic »

Neolander wrote:Why is the page size equal to 2M? Isn't it usually 4 KB? Or does it change on x86_64?
You can use both 4k and 2M pages in long mode (some newer processors even support 1G pages); evidently 2M is assumed to be the default here. Really, which is better is a very complicated trade-off: larger pages can map far more memory per TLB entry, improving performance, but they take much longer to page in and out. Further (at least on newer AMD processors) there are separate TLBs for different page sizes.
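The coverage half of that trade-off is easy to put numbers on (a sketch; the 64-entry TLB is a hypothetical round number, not any specific CPU):

```python
entries = 64  # hypothetical TLB entry count

# Memory reachable without a TLB miss, per page size.
for name, page in (("4K", 4 * 2**10), ("2M", 2 * 2**20), ("1G", 2**30)):
    coverage_mib = entries * page / 2**20
    print(f"{name} pages: {coverage_mib} MiB covered by the TLB")
# 4K: 0.25 MiB, 2M: 128.0 MiB, 1G: 65536.0 MiB
```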

Re: Insanely large C++ binary. Bad linking to blame ?

Post by Neolander »

Okay, now it's an issue that I know:
-> Large pages = more wasted memory and slower paging in/out, but smaller page tables
-> Small pages = bigger page tables, and hence worse TLB performance

(I don't see the benefit of 1G pages, by the way; it just sounds like a ridiculous waste of memory, except maybe on virtualization-oriented servers.)

What's going to help me decide, though, is that I want my OS to run fine on lower-end hardware, with 512 MB of memory as a minimum requirement.

With 2 MB pages, that means 256 pages at most. With a minimum of one page for code, one for rodata, and one for writable data per program, I can run 85 programs simultaneously without swapping. Since I target a microkernel model with many small processes, this sounds a little small. Hence I'd rather target 4 KB pages for the moment, and wait for low-end 64-bit hardware to improve before switching to 2 MB pages and moving the kernel to 2 MB alignment ;)
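The arithmetic behind those figures checks out (a quick sketch using the numbers from this post):

```python
ram = 512 * 2**20        # 512 MB minimum hardware target
page = 2 * 2**20         # 2 MB pages

total_pages = ram // page
print(total_pages)       # 256 pages in total

pages_per_program = 3    # one each for code, rodata and writable data
print(total_pages // pages_per_program)  # 85 programs without swapping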

Thanks for the explanation!

Re: Insanely large C++ binary. Bad linking to blame ?

Post by Selenic »

@Neolander: You *can* mix different page sizes together, you know. Also, in some cases you have large data structures that are never paged in from disk (i.e., they're generated in memory), in which case you get the advantages of larger pages without the main disadvantage.
Neolander wrote:I don't see the benefit of 1G pages, by the way, it just sounds like a ridiculous waste of memory, except maybe on virtualization-oriented servers.
I think that's partly the point: to allow VMs to efficiently allocate memory by using TLB entries which nothing else is using. Also note that, with 32-bit guests, mapping a 4M page when the largest you can use in 64-bit mode is 2M is something of a pain, I'd imagine.