Questions about which tools to use, bugs, the best way to implement a function, etc. should go here. Don't forget to check whether your question is already answered in the wiki first! When in doubt, post here.
Pete wrote:
Do you think this will be enough subdivision to stop me reaching the terminal number of object files? I will probably further divide the drivers and apps folders into separate apps and types of device.
I think you should reason the other way around: not "I have this division; will this technique work?" but rather "I have this technique to avoid that problem; how do I make it fit my problem?" If you don't have a problem now, don't fix it.
PS: Attached is my makefile for anyone who wants it. Linux users will have to change the install section, which I haven't changed to use cp yet. It's got the extension .c so that I can attach it.
I would try to keep all OS-specific commands (such as cp, but also gcc and make themselves) in a configurable header. Include that header in the real makefile and direct the user to that file for adjustments; it saves a lot of time and trouble. Try using $(CP) (or $(COPY)) instead of copy or cp, so you will know what it should do, and so a user only has to modify the definition of that particular variable.
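As a rough sketch of that idea (the file and variable names here are illustrative, not from the attached makefile), all the platform-specific commands go into one included file:

```make
# config.mk -- the only file a user should need to edit.
# On Linux:           On DOS/Windows:
#   CP = cp             CP = copy
#   RM = rm -f          RM = del
#   CC = gcc            CC = gcc

include config.mk

install: kernel.bin
	$(CP) kernel.bin $(INSTALLDIR)    # INSTALLDIR also set in config.mk
```

Every rule then uses $(CP), $(RM), and so on, and porting means touching only config.mk.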
I tend to have one include config.$(OSTYPE) statement in every portable makefile I write, and to keep the system-specific variables (or options like BINFORMAT=elf or BINFORMAT=coff) in those config.linux / config.dos files.
Under Linux (and any other Unix derivative), OSTYPE is already defined, while under DOS it just costs one 'set OSTYPE=DOS' before you start working...
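A minimal sketch of that trick (file contents and flag names are my own illustration):

```make
# Expands to "include config.linux" under Linux, "include config.DOS"
# under DOS after 'set OSTYPE=DOS'.
include config.$(OSTYPE)

# config.linux might contain:  BINFORMAT = elf
# config.DOS   might contain:  BINFORMAT = coff

# The main makefile can then branch on the option:
ifeq ($(BINFORMAT),elf)
LDFLAGS += -melf_i386
endif

kernel: $(OBJS)
	$(LD) $(LDFLAGS) -o $@ $(OBJS)
```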
I've read a paper a little under a month ago that frowned upon invoking `make' recursively. You might want to consider its points: http://aegis.sourceforge.net/auug97.pdf
The problem is in the DPMI host built into NT, 2000 and XP, which causes a crash when you try to do a recursive make. So it's not running make that's the problem, but just the DJGPP make. Cygwin's make is unaffected.
I think their attitude is that it's not worth fixing. It only affects a very small proportion of DOS apps (only DJGPP that I know of), and replacements exist for most of those apps (begins with a 'C' and ends in 'ygwin').
In this case, I agree with them. It's nearly 2004, and people are still running a 32-bit compiler on a 16-bit emulation inside a 32-bit operating system?!
Mm, I see. The crashes are more severe than what I had in mind. Originally I had meant that the idea of recursive make is flawed. Each subdirectory will only be aware of a portion of the entire project tree's dependency graph, which is the cause of a number of issues the PDF discusses. By instead utilizing a single top-level Makefile which includes (much like C's #include) makefiles in subdirectories, you can circumvent shortcomings of using recursive make.
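For concreteness, the single-top-level-Makefile scheme the paper advocates looks roughly like this (directory and file names are made up); one make process reads every fragment, so it sees the whole dependency graph:

```make
# Top-level Makefile: includes a fragment from each subdirectory
# instead of recursing into it.
MODULES := kernel drivers libc
include $(patsubst %,%/module.mk,$(MODULES))

# Each module.mk only appends its sources, e.g. kernel/module.mk:
#   SRC += kernel/main.c kernel/irq.c

OBJ := $(SRC:.c=.o)

image.bin: $(OBJ)
	$(LD) -o $@ $(OBJ)
```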
nullify wrote:
Mm, I see. The crashes are more severe than what I had in mind. Originally I had meant that the idea of recursive make is flawed. Each subdirectory will only be aware of a portion of the entire project tree's dependency graph, which is the cause of a number of issues the PDF discusses. By instead utilizing a single top-level Makefile which includes (much like C's #include) makefiles in subdirectories, you can circumvent shortcomings of using recursive make.
As if that wouldn't create a huge mess, except that the mess would be centralized instead of distributed... Recursive make is much cleaner: you keep manageable packages, and with a little bit of template makefiling you can write extremely clean makefiles.
The problems of recursive make have [almost] nothing to do with the clarity of the makefile. They have to do with the fact that the build can end up being inefficient or wrong, because each instance of make doesn't know about the dependencies "outside" of the subtree it's being invoked in.
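A hypothetical example of what can go wrong (directory names are invented): one subtree depends on a file built by another subtree.

```make
# app/Makefile -- depends on a library built in a sibling directory.
app: main.o ../libfoo/libfoo.a
	$(CC) -o $@ main.o ../libfoo/libfoo.a

# There is no rule here for ../libfoo/libfoo.a. A make invoked inside
# app/ will happily link against a stale libfoo.a; correctness then
# depends on the top-level makefile always visiting libfoo/ before
# app/, which make itself cannot verify.
```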
BTW, your example doesn't exactly demonstrate how each subdirectory is being traversed, which is the point of interest here.
nullify wrote:
The problems of recursive make have [almost] nothing to do with the clarity of the makefile. It has to do with the fact that the makefile can end up being inefficient or wrong because each instance of make doesn't know about all of the dependencies "outside" of the subtree that it's being invoked in.
OK, I can't disagree with that. Still, for a linear directory division (one made purely for clarity of file layout, such as splitting the core of the kernel across multiple directories) it is just as simple. For a non-linear directory division you obviously have to write a non-linear makefile.
Still, even arch-dependent directories can be handled (add $(ARCH)/boot, for example).
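Something like this, say (a sketch in the recursive style being defended; names are illustrative):

```make
# Mix fixed and architecture-dependent subdirectories.
ARCH ?= i386
SUBDIRS := kernel drivers $(ARCH)/boot

all:
	@for d in $(SUBDIRS); do $(MAKE) -C $$d || exit 1; done
```

Setting ARCH=powerpc on the command line would then pick up powerpc/boot instead.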
BTW, your example doesn't exactly demonstrate how each subdirectory is being traversed, which is the point of interest here.