tup vs make
- eryjus
- Member
- Posts: 286
- Joined: Fri Oct 21, 2011 9:47 pm
- Libera.chat IRC: eryjus
- Location: Tustin, CA USA
tup vs make
(I'm splitting this away from this topic so that it can stay focused)
OK, I took an honest look at the tup documentation today. I have several concerns about whether tup is really the best tool for me for a Hobby OS project (I'm sure it will work extremely well for something far less complicated). I can boil my concerns down to 2 key differences between the way tup and make approach builds. Disclaimer: I did not actually download and try it.
Before I get into these differences, I feel a more concrete explanation of how I use make regularly is in order. I am currently targeting 3 architectures: i686, x86_64, and rpi2b. I am currently working on a stage 3 loader to put the processor into its native/target state before passing control to the kernel. Obviously, with distinct targets like this, there is both architecture-specific code and common code; I do not work on everything at the same time. The same will be the case with the kernel. Also, I am planning on a microkernel, so there will be lots of modules to maintain as well.
So, given these goals, I will be making several different styles of targets (and 'target' is an apropos description for what I want to do) as I go through an evening's development. Some of these are:
* `make <arch>` -- refresh all modules for that architecture
* `make <module>` -- refresh that module for all architectures
* `make <arch>-<module>` -- refresh the module for the architecture
* `make <arch>-iso` -- refresh an iso image (or disk image) for the architecture
* `make run-<arch>` -- refresh the image and launch it with qemu with the proper parameters
* `make all` -- refresh everything up to but not including the iso images
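To make that concrete, here is a rough sketch of how those aliases look as phony rules (all names, paths, and flags are illustrative, not lifted from my actual makefiles):

```make
# Illustrative sketch only -- not my real makefiles
.PHONY: all i686 x86_64 rpi2b i686-loader i686-iso run-i686

all: i686 x86_64 rpi2b              # everything up to, but not including, the iso images

i686: i686-loader                   # refresh all modules for the arch
                                    # (x86_64 and rpi2b follow the same pattern)

i686-loader: bin/i686/loader.elf    # short alias onto the real target file

i686-iso: i686
	grub-mkrescue -o iso/i686.iso sysroot/i686

run-i686: i686-iso
	qemu-system-i386 -cdrom iso/i686.iso
```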
Bottom-up builds versus top-down builds
* tup wants to refresh everything based on what has changed. This is a fantastic goal in theory, but the reality of how I code and run my builds is that I will work in one architecture (or module) and build many times before I think I can switch gears and move into another architecture. I think tup risks cluttering up the specific build results I am looking for with irrelevant noise.
* Now, given the way I organize my build structure, I could provide a full path to the target file I want to build, such as 'bin/rpi2b/loader.elf'. That is more typing than I want to do when I can easily type 'rpi2b-loader' as an alias. I know there are more ways to handle this, but I think I address those below.
Overall, this represents a significant challenge to my very comfortable workflow.
No .PHONY targets
* When making something as complicated as an .iso image, I will have to invoke tup multiple times to build all the targets I want to include, and then make the image itself. If I add a new module, I have to go through and add that new module to the proper scripts. This kind of maintenance is something I am very much against.
* tup takes the position that these are orthogonal to the concern of building a project. Anything that would be expressed in a .PHONY target in a makefile is really supposed to be a script to begin with. OK....
* Where do I place the make-run-i686.sh script? Should these be added to my own ~/bin directory? Not my favorite option, since I would be making project-specific tools globally available to my user -- undesirable results will occur.
* How about adding a project utils folder to my path? (I already have one of those; I can just dump everything in there.) Again, not a great answer, as these become globally available.
* Now, to be fair, I could consider adding ./utils to my path and keep the scope of the possible issues narrow...
* I could also clutter up the root of my project with these, which carries the lowest risk of cross-contamination, but then I have to type "./" before any script I want to execute.
* I could also create my own ~/bin/make.sh that would reorganize the command into something that would work for real (such as changing `make run-rpi2b` into `utils/make-run-rpi2b.sh`; see the sketch after this list), but then why do I really need make at all?
* I could also...
OK, so there are lots of technical ways to solve the scripting issue. The point is that none of them really appeal to me.
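For what it's worth, the wrapper idea above would only take a few lines (entirely hypothetical, just to show the shape of it):

```sh
#!/bin/sh
# Hypothetical ~/bin/make.sh -- rewrites phony-style targets into scripts
case "$1" in
    run-*|*-iso) exec "utils/make-$1.sh" ;;  # e.g. run-rpi2b -> utils/make-run-rpi2b.sh
    *)           exec make "$@" ;;           # everything else passes through to make
esac
```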
Now, this is not to say that I am opposed to tup. Quite the contrary. I just do not think that it is the right tool for the job; you wouldn't use a picture hammer to frame a house.
I'm sure I have misunderstood some things -- I'm open to the discussion.
Adam
The name is fitting: Century Hobby OS -- At this rate, it's gonna take me that long!
Read about my mistakes and missteps with this iteration: Journal
"Sometimes things just don't make sense until you figure them out." -- Phil Stahlheber
- Solar
Re: tup vs make
I had similar reservations when I read about tup in that other thread.
In my CMake setup, if I edit a source file, several things become "outdated" at once -- the library and any binaries depending on it, the Doxygen documentation, and in some places the PDF documentation as well.
But I usually only want one specific binary to be rebuilt (the test driver), because the others can wait until the current feature / issue is resolved. And I certainly don't want the rather verbose output of Doxygen to drown out compiler warnings, or have every single build committed to the test dashboard...
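Of course, I can still ask for just the one target I care about (target name hypothetical):

```sh
# From the build directory: build only the one binary I need right now
make test_driver
# or, backend-agnostic:
cmake --build . --target test_driver
```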
I really don't like the computer "guessing" anything. I have seen too many problems arise from "guesswork" going wrong. It's at the heart of more bugs and UX failures than I care to count.
But most importantly, I rank the benefits to be had from CMake much higher than tup's performance benefits, which apparently only become "tangible" somewhere between 1000 and 10000 source files. (Quick head count... PDCLib has 177 source files, and my two "day job" projects, which I rate as "really quite significantly massive" in the "there are several man-years gone into these buggers" sense, have 356 and 507 source files, respectively.)
Now, for another angle.
I am using CMake.
CMake allows me to work on Linux / Vim / GCC / make, my colleague to work on Windows / MSVC / .sln, an automated nightly build to operate with Windows / MSVC / NMake, and another automated build to utilize MXE to build NSIS setup.exe's for Windows distribution, in addition to the DEB and RPM packages built for the various Unixoid platforms we're supporting. (Using clean VirtualBox setups under Vagrant control for each build.) All from one single configuration.
Eclipse CDT, KDevelop, CodeBlocks, CodeLite, Sublime, and Kate project files can also be generated (although I haven't tested those myself yet), with either MinGW / NMake / Makefiles or Ninja used for actual building.
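The only thing that changes between those setups is the generator chosen at configure time; a sketch:

```sh
# One CMakeLists.txt, different backends (standard CMake generator names)
cmake -G "Unix Makefiles" ..          # Linux / GCC / make
cmake -G "Visual Studio 15 2017" ..   # MSVC .sln for my colleague
cmake -G "NMake Makefiles" ..         # the automated nightly build
cmake -G "Ninja" ..                   # Ninja for fast local builds
```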
Plus Doxygen, plus LaTeX / PDF documentation, plus test handling / submitting of test results to a web dashboard, plus non-disruptive source reformatting / normalization.
And since I made the configuration into a generic one (JAWS), I can have another C/C++ project with the same features up and running in a couple of minutes.
And right now I'm tinkering with the combination of CMake and Android Studio, figuring out how to make JAWS capable of targeting mobile devices as well.
While this is (of course) shameless advertising, it should also point out one other thing. Looking at it from a CMake perspective, whether you're using make, or Ninja, or NMake, or WMake, doesn't really matter.
Tup is nice technology, and quite some thought has gone into it, no doubt. But I think it's barking up the wrong tree, as the world doesn't really need a different "make", especially not one that gets "better" only for obscenely large projects (which are exceedingly unlikely to bet their money on a newcomer system, and really should have been separated into individual libraries long ago). We've moved beyond that, and rendered the actual build system a backend commodity. The benefits to be had from meta-systems, most importantly cross-platform capabilities, are compelling, IMHO.
Every good solution is obvious once you've found it.
- Octocontrabass
- Member
- Posts: 5494
- Joined: Mon Mar 25, 2013 7:01 pm
Re: tup vs make
I use tup because it won't let me accidentally link stale object files.
The other features are a nice bonus.
- Solar
Re: tup vs make
Octocontrabass wrote: ...it won't let me accidentally link stale object files.
Perhaps tup is easier to set up correctly (I cannot judge as I haven't tried it), but you get the same benefit from any correctly configured build system.
Every good solution is obvious once you've found it.
- eryjus
- Member
- Posts: 286
- Joined: Fri Oct 21, 2011 9:47 pm
- Libera.chat IRC: eryjus
- Location: Tustin, CA USA
Re: tup vs make
Solar wrote: which apparently only become "tangible" somewhere between 1000 and 10000 source files
I noticed that also. But I also noticed that they did not run make with the '-j' option; in my experience, even '-j 2' on a single-core machine can have a dramatic positive effect on build performance.
Solar wrote: ...if set up correctly, which holds true for any build system....it won't let me accidentally link stale object files.
Solar, you beat me to it... Octocontrabass, can we get some more specifics? How is it that tup is better for you than make, when both have that same basic function given a properly set up dependency tree?
Adam
The name is fitting: Century Hobby OS -- At this rate, it's gonna take me that long!
Read about my mistakes and missteps with this iteration: Journal
"Sometimes things just don't make sense until you figure them out." -- Phil Stahlheber
- Octocontrabass
- Member
- Posts: 5494
- Joined: Mon Mar 25, 2013 7:01 pm
Re: tup vs make
eryjus wrote: How is it that tup is better for you than make when both have that same basic function given a properly set up dependency tree?
Given a properly set up dependency tree, tup and make are equivalent to me.
The difference is that tup knows when I haven't set up the dependency tree properly, and complains until I fix it. I don't have to worry about bugs in my build scripts when tup automatically catches the most common ones.
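For example (a made-up Tupfile sketch), if a command reads or writes a file the rule doesn't declare, tup stops with an error instead of silently building something stale:

```
# Hypothetical Tupfile
: gen.sh |> ./gen.sh |> config.h
: foreach *.c | config.h |> gcc -c %f -o %o |> %B.o
# If ./gen.sh also wrote an undeclared extra.h, or a .c file #included
# some other generated header not listed as an input, tup would flag it.
```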
- eryjus
- Member
- Posts: 286
- Joined: Fri Oct 21, 2011 9:47 pm
- Libera.chat IRC: eryjus
- Location: Tustin, CA USA
Re: tup vs make
Octocontrabass wrote: The difference is that tup knows when I haven't set up the dependency tree properly, and complains until I fix it.
OK, that's a nice feature -- I didn't see anything about that in my reading yesterday. Just curious: is that based on files without rules (as either a source or a target), such as a recipe including a file tup doesn't know how to build (versus make, where the requested file just has to exist)? Or does tup make that determination some other way?
Interestingly enough, just as soon as I posted yesterday, I found a bug in my makefile... Maybe I'm being "guided" to give it a try despite my reservations.
Adam
The name is fitting: Century Hobby OS -- At this rate, it's gonna take me that long!
Read about my mistakes and missteps with this iteration: Journal
"Sometimes things just don't make sense until you figure them out." -- Phil Stahlheber
- dozniak
Re: tup vs make
Octocontrabass wrote: The difference is that tup knows when I haven't set up the dependency tree properly,
Here is where cmake helps, by setting up your dependency tree properly.
Learn to read.
- dozniak
Re: tup vs make
eryjus wrote: Just curious if that is based on files without rules (as either a source or target) such as including a file in a recipe it doesn't know how to build (versus make where the requested file just has to exist)? Or any other way tup makes that determination?
You still have to specify rule inputs and outputs manually (something that cmake does for you on pretty much any scale), and then it uses call instrumentation and guesswork to figure out the rest.
Learn to read.
- iansjack
Re: tup vs make
eryjus wrote: Maybe I'm being "guided" to give it a try despite my reservations.
It would certainly seem to be sensible to try it before criticising (although, judging from this thread, few seem to agree).
- eryjus
- Member
- Posts: 286
- Joined: Fri Oct 21, 2011 9:47 pm
- Libera.chat IRC: eryjus
- Location: Tustin, CA USA
Re: tup vs make
iansjack wrote: It would certainly seem to be sensible to try it before criticising
I'm not criticizing. I'm seeking to understand.
EDIT: To further clarify, I am hoping some of those more experienced on the topic will share their experiences. You are one of the people in particular I had hoped would weigh in.
Adam
The name is fitting: Century Hobby OS -- At this rate, it's gonna take me that long!
Read about my mistakes and missteps with this iteration: Journal
"Sometimes things just don't make sense until you figure them out." -- Phil Stahlheber
- dozniak
Re: tup vs make
eryjus wrote: Maybe I'm being "guided" to give it a try despite my reservations.
iansjack wrote: It would certainly seem to be sensible to try it before criticising (although, judging from this thread, few seem to agree).
I tried it; it's fine, but a bit overengineered for what I can do with cmake+ninja.
Learn to read.
- eryjus
- Member
- Posts: 286
- Joined: Fri Oct 21, 2011 9:47 pm
- Libera.chat IRC: eryjus
- Location: Tustin, CA USA
Re: tup vs make
dozniak wrote: I tried it, it's fine, but a bit overengineered for what i can do with cmake+ninja
I thought I would come back and report my results here. I am specifically looking at tup and make; I know that several people use cmake, ninja, capri, and other build tools, but I am only interested in the narrow comparison between tup and make. I consider myself an expert in neither tool.
First, there is a key difference between where Tupfiles are required to be placed and the conventional locations of Makefiles: a Tupfile goes in the directory where the output of a particular recipe will be placed, whereas Makefiles usually live closer to the source files. This leads to two key benefits:
1. When a dependency needs to be built, it leads to a quicker analysis of how to build that dependency. Yeah, I know they are stored in the DAG, but I can see performance benefits to not having to look through all of the rules to make these determinations.
2. In my environment, several sources are compiled into many targets, leading to a one-to-many source-to-target relationship. When Makefiles are aligned with the source, this relationship is more difficult to get right, relying on lots of different make rules to build the source into the right target. On the other hand, the Tupfiles are aligned to the target location, and since my targets are located in architecture-specific directories, each Tupfile is also aligned with the cross compiler required to produce its targets. This is a HUGE simplification.
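As a sketch (paths and flags are illustrative, not my actual tree), a Tupfile sitting in the rpi2b output directory can simply bake in the right cross toolchain:

```
# bin/rpi2b/Tupfile -- illustrative only
CROSS  = arm-none-eabi
CFLAGS = -mcpu=cortex-a7 -ffreestanding -O2

: foreach ../../src/loader/*.c |> $(CROSS)-gcc $(CFLAGS) -c %f -o %o |> %B.o
: *.o |> $(CROSS)-ld -o %o %f |> loader.elf
```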
tup also monitors the file system to see what things are read during a compile operation. Genius!! This eliminates the need to "pre-pre-process" a source file to capture the dependencies. This is another HUGE simplification.
So, what about my special targets like 'make run-i686'? Well, I started to make scripts for each of these and had an epiphany: use the two tools to their strengths. So, I created a much shorter Makefile that has all my phony targets and defines the recipe to execute each. I also created a 'make all' rule that simply runs 'tup', and made that the default rule. Therefore 'make' and 'tup' do exactly the same thing.
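The whole thing boils down to something like this (trimmed, and with illustrative qemu flags):

```make
# The hybrid Makefile: tup does the building, make keeps my phony aliases
.PHONY: all run-i686 run-rpi2b

all:
	tup

run-i686: all
	qemu-system-i386 -m 512 -cdrom iso/i686.iso

run-rpi2b: all
	qemu-system-arm -M raspi2 -kernel bin/rpi2b/loader.elf
```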
The overall simplification was rather dramatic. I started with 9 Makefiles and makefrag.mk files and took care so that I did not have a recursive make situation. These makefiles totalled 1949 lines with about 315 lines (give or take) of header comments in total.
With these changes, I now have 1 Makefile and 13 Tupfiles. The Makefile now has 126 lines (with 68 lines of header comments). The Tupfiles total 417 lines with about 200 lines (again, give or take) as header comments across all the Tupfiles.
I have given up none of my build system goals with these changes, and the result is a simpler build system to maintain.
Solar wrote: Perhaps tup is easier to set up correctly
I think it is.
Adam
The name is fitting: Century Hobby OS -- At this rate, it's gonna take me that long!
Read about my mistakes and missteps with this iteration: Journal
"Sometimes things just don't make sense until you figure them out." -- Phil Stahlheber
- Solar
Re: tup vs make
Thank you for the effort. Two points I'd like to address, though.
eryjus wrote: Tupfiles are required to be placed in the directory where the output of a particular recipe will be placed...
If I understand this correctly to mean that the tupfiles need to be placed in the directory where the binaries will end up, I'd consider this a massive flaw. Source files and "product" files should be kept strictly separate, to keep the source tree clean (e.g. for packaging, and version control).
Now, CMake does place the build files in the binary / target directory, but that's a different matter, as the build files are generated, i.e. "product" files.
(Or did I misunderstand, and those "tupfiles" are auto-generated as well?)
eryjus wrote: tup also monitors the file system to see what things are read during a compile operation. Genius!! This eliminates the need to "pre-pre-process" a source file to capture the dependencies. This is another HUGE simplification.
Only that the procedure I showcased for Makefiles does not need to "pre-pre-process" a source file either.
If there is no target file (first run), you need to compile the source no matter what.
If there is a target file (subsequent runs), your first run has generated the dependency information.
It's a side effect, not a "pre-pre-processing".
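One common way to get that side effect with GCC and GNU make (a minimal sketch, not necessarily verbatim what I showcased):

```make
SRCS := foo.c bar.c            # illustrative
OBJS := $(SRCS:.c=.o)

# -MMD writes a .d dependency file as a side effect of compiling;
# -MP adds phony targets so deleted headers don't break the build
%.o: %.c
	$(CC) $(CFLAGS) -MMD -MP -c $< -o $@

# The .d files don't exist on the first run -- which is fine, since
# everything has to be compiled then anyway. '-include' ignores them.
-include $(OBJS:.o=.d)
```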
Every good solution is obvious once you've found it.
- eryjus
- Member
- Posts: 286
- Joined: Fri Oct 21, 2011 9:47 pm
- Libera.chat IRC: eryjus
- Location: Tustin, CA USA
Re: tup vs make
Solar wrote: If I understand this correctly to mean that the tupfiles need to be placed in the directory where the binaries will end up, I'd consider this a massive flaw. Source files and "product" files should be kept strictly separate, to keep the source tree clean (e.g. for packaging, and version control).
Yeah, I cannot argue with that at all. I did take the approach of maintaining a separation between the bin folder and the sysroot folder (which I was already doing with make anyway) and copying all the non-Tupfiles from bin to sysroot. This keeps my bin folder as an intermediate destination and therefore lessens that pain.
Now, I can barely spell 'CMake', but it sounds like it could have a similar situation where the build files would "clutter" up the test image depending on how it was set up. And, Tupfiles are not generated output -- at least not by tup.
Solar wrote: If there is a target file (subsequent runs), your first run has generated the dependency information.
I'm sure there are many other ways to do this than what I had set up, but I also had to make sure that the .o file was dependent on the .d file, or I would get odd failures that could only be fixed with `make clean`, regardless of whether the target was already there. Running `make clean` -- or, more to the point, not needing to run it -- was not something I fully appreciated until recently. It can be argued that the need (not the desire) to run `make clean` is the result of a broken build system. My make build system was complicated enough that it took a lot of effort to get right and keep right. tup has removed a lot of that complexity for me (as I'm sure CMake does for you).
Adam
The name is fitting: Century Hobby OS -- At this rate, it's gonna take me that long!
Read about my mistakes and missteps with this iteration: Journal
"Sometimes things just don't make sense until you figure them out." -- Phil Stahlheber