ld in Windows XP
- salil_bhagurkar
- Member
- Posts: 261
- Joined: Mon Feb 19, 2007 10:40 am
- Location: India
ld in Windows XP
I link about 30 object files of my OS together to form an image, but when the total number of files linked exceeds about 9-10, make gives me an error:
process_begin: CreateProcess(D:\DJGPP\bin\ld.exe, ld -T src/link.ld -o image entry.o mm.o sched.o..........(about 30 files)....smp.o cpu.o) failed.
make (e=87): The parameter is incorrect.
make ***[image] Error 87
Yep...
For one, some part of the DJGPP environment has a restriction on command-line length. I don't know whether it's the DOS box or the tools themselves or whatever; I've never used the DJGPP toolchain myself. The usual recommendation is to use Cygwin.
Your specific problem - linker invocations getting terribly long - is best solved by setting up a linker archive. The tool to do this is 'ar'. Check out its manpage.
Every good solution is obvious once you've found it.
- salil_bhagurkar
- Member
- Posts: 261
- Joined: Mon Feb 19, 2007 10:40 am
- Location: India
Thanks. I will probably get Cygwin and try it out.
But I actually have two installations of Windows XP (the other one just has all services disabled and no software installed, for speed). There are no such problems in the other XP; it links fine with the same ld. So is there any setting in Windows for NTVDM that controls the command-line length?
- Brynet-Inc
- Member
- Posts: 2426
- Joined: Tue Oct 17, 2006 9:29 pm
- Libera.chat IRC: brynet
- Location: Canada
- Contact:
salil_bhagurkar wrote:Thanks. I will probably get Cygwin and try it out. But I actually have two installations of Windows XP (the other one just has all services disabled and no software installed, for speed). There are no such problems in the other XP; it links fine with the same ld. So is there any setting in Windows for NTVDM that controls the command-line length?
Yes, DJGPP apparently has a limit on command-line length.. could be a DOS thing..
You might try specifying the names of your objects in your linker script..
Code: Select all
INPUT(file.o, file.o, …) or INPUT(file.o file.o …)
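For example, a rough sketch of a link.ld that carries the object list itself - the output format, entry symbol, and load address below are placeholders, so keep whatever your current script uses and just add the INPUT line (the object names are the ones from your error message):
Code: Select all
/* link.ld - hypothetical sketch: the object list lives in the script */
OUTPUT_FORMAT("binary")                   /* placeholder - keep your existing format */
ENTRY(start)                              /* placeholder - keep your existing entry  */
INPUT(entry.o mm.o sched.o smp.o cpu.o)   /* list all ~30 objects here               */
SECTIONS
{
    . = 0x100000;                         /* placeholder load address                */
    .text : { *(.text) }
    .data : { *(.data) }
    .bss  : { *(.bss) }
}
With the objects named in the script, the make rule shrinks to just ld -T src/link.ld -o image. The ar route would look something like this: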
Code: Select all
ar vcrs libkern.a object1.o object2.o ... 30 files hehe..
Code: Select all
ld -T src/link.ld -o image libkern.a
(You could make multiple archives though..)
Adding the object names to the linker script seems to be the most effective solution in your case. You could also just use MinGW and then set up a cross-compiler.. (or Cygwin, if you must..)
Here comes my trademark phrase though: Why not just use BSD or Linux?
Have fun
- salil_bhagurkar
- Member
- Posts: 261
- Joined: Mon Feb 19, 2007 10:40 am
- Location: India
- Brynet-Inc
- Member
- Posts: 2426
- Joined: Tue Oct 17, 2006 9:29 pm
- Libera.chat IRC: brynet
- Location: Canada
- Contact:
salil_bhagurkar wrote:Well, I have 128 MB of RAM and all versions of Linux seem to creep on my PC... I can get XP to run fast... Linux is getting more bloated day by day... I just find XP more familiar and user friendly (please excuse me, Linux lovers).
KDE/GNOME are the resource hogs; that has nothing to do with BSD/Linux's speed..
You could run something like Blackbox with as little as 32 MB of RAM (less is possible, though not recommended..)
EDIT: Removed 2015: Imageshack replaced all links with spam.
Last edited by Brynet-Inc on Fri Aug 28, 2015 8:58 pm, edited 1 time in total.
- salil_bhagurkar
- Member
- Posts: 261
- Joined: Mon Feb 19, 2007 10:40 am
- Location: India
- Brynet-Inc
- Member
- Posts: 2426
- Joined: Tue Oct 17, 2006 9:29 pm
- Libera.chat IRC: brynet
- Location: Canada
- Contact:
- salil_bhagurkar
- Member
- Posts: 261
- Joined: Mon Feb 19, 2007 10:40 am
- Location: India
Hi,
With regards to the original problem, I use DJGPP for all my OS devving - against the advice of the wiki.
The problem you have is not so much the number of input files as the DOS command-line length. This will give the same problem with ar.
I have now started:
a) using a makefile for most of my builds
b) when using a batch file or the command line, using wildcards - *.o rather than naming each object individually (see the sketch below).
The second approach has the added advantage that you do not need to amend your linker command every time you add another object.
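A rough sketch of what (b) looks like in practice - the script path and output name are taken from the original error message, and it assumes DJGPP-built tools expand the *.o wildcard themselves rather than relying on the shell:
Code: Select all
REM link.bat - hypothetical example: let ld pick up every object via a wildcard
ld -T src/link.ld -o image *.o
Because the expansion happens inside the DJGPP program rather than on the command line itself, the full list of object names should never run into the DOS command-line limit.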
Cheers,
Adam
Ar is just an archiver, not too much unlike zip or tar.
So you can do this:
Code: Select all
del objects.a
REM build the archive one object at a time, so each command line stays short
ar rcs objects.a object1.o
ar rs objects.a object2.o
ar rs objects.a object3.o
REM link against the single archive instead of listing every object
ld -o binary.exe objects.a
No long command lines no matter how many files you add!
As for Linux running fast: basically anything except KDE/GNOME will run just fine on 128 MB. At least anything that was in the freshmeat.net listings a few years ago (it's been a while since I've gone through the list). The only other possible troublemaker is Enlightenment, but I guess 128 MB should be enough even for that (though it can burn a lot of CPU on eye candy).
The real problem with goto is not with the control transfer, but with environments. Properly tail-recursive closures get both right.
-
- Member
- Posts: 2566
- Joined: Sun Jan 14, 2007 9:15 pm
- Libera.chat IRC: miselin
- Location: Sydney, Australia (I come from a land down under!)
- Contact:
What I do for my kernel/library is this:
All library files are named *_lib.c, so they get passed to gcc via a wildcard. The same goes for the object files.
All kernel files are named *_main.c.
This basically means that I can seamlessly add new files to the library or my kernel without having to modify batch files. The only place where this becomes inconvenient is when I use folders.
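A rough sketch of the kind of batch file that naming scheme allows - the compiler flags, linker script, and output name here are invented for illustration, and it assumes either the shell or the DJGPP-built tools expand the wildcards:
Code: Select all
REM build.bat - hypothetical: compile by naming convention, then link by wildcard
gcc -ffreestanding -c *_lib.c *_main.c
ld -T link.ld -o kernel.bin *.o
New *_lib.c or *_main.c files are then picked up automatically; only files in subfolders need extra handling, as noted above.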