Re: Workflow questions
Posted: Mon Nov 28, 2011 6:09 pm
First of all, my benchmark was poorly representative. Revising it to behave more like what Brendan described, it comes down to this:
Baseline:
Code: Select all
$ time g++ -o test *.cc
real 0m1.343s
user 0m1.136s
sys 0m0.176s
Brendan-style:
Code: Select all
$ time cat *.cc | g++ -o test -x c++ -
real 0m0.759s
user 0m0.676s
sys 0m0.072s
Makefile:
Code: Select all
$ make clean
$ time make -j6
real 0m0.735s
user 0m1.720s
sys 0m0.156s
$ time make -j6
real 0m0.014s
user 0m0.000s
sys 0m0.008s
$ touch file.cc printer.cc parser.cc
$ time make -j6
real 0m0.546s
user 0m0.704s
sys 0m0.120s
Depending on the run, Brendan-style and Makefile-from-scratch are generally about the same, though Makefile-from-scratch is often a little faster (I'd need a bigger project to find anything statistically significant). For anything else, though - rebuilding half the project to simulate a touched header file, say - the Makefile is significantly faster: 30% or more.
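For reference, a minimal sketch of the kind of Makefile that gets you those numbers (not the exact one used here; file names are placeholders). gcc's -MMD/-MP flags write out the header dependencies as a side effect of compiling, so touching a header rebuilds exactly the objects that include it:
Code: Select all
# Sketch only - not the exact Makefile from the benchmark above.
CXX      := g++
CXXFLAGS := -O2 -Wall -MMD -MP        # -MMD/-MP emit .d dependency files while compiling
SRCS     := $(wildcard *.cc)
OBJS     := $(SRCS:.cc=.o)

test: $(OBJS)
	$(CXX) -o $@ $(OBJS)

%.o: %.cc
	$(CXX) $(CXXFLAGS) -c $< -o $@

-include $(OBJS:.o=.d)                # pull in the generated header dependencies

clean:
	rm -f test $(OBJS) $(OBJS:.o=.d)

.PHONY: clean
That's the whole thing; -jN does the rest.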
Brendan wrote:Now do 50 of those at the same time ("When your build utility is running 150 threads and up to 50 external processes, "extra parallelism" isn't going to gain you anything."); and touch a header file that is used by everything.
As you can see, parallelism gains you a lot, especially when you've touched a major header file. Pick an appropriate number of threads for your dev machine, give make a -jN, and you win. Three characters is a lot less than a system for managing build threads yourself in C, and you get a better system in the end.
Brendan wrote:If all things are built in similar ways (e.g. with GCC using the same arguments), then a well-written Makefile wouldn't take much maintaining. If your project consists of lots of things that are created in lots of different ways then it becomes a mess.
If your project consists of lots of things that are created in lots of different ways, how is a custom build tool going to be any less of a mess? In a Makefile, on the other hand, you just add file- or directory-specific rules (see the sketch below). I don't see it getting any simpler than that.
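For instance (hypothetical file and directory names), GNU make's pattern- and target-specific variables handle the "built differently" cases in a line or two each:
Code: Select all
# Hypothetical names - just illustrating per-file and per-directory rules.
CXXFLAGS := -O2 -Wall

boot/%.o: CXXFLAGS += -ffreestanding -fno-exceptions   # everything under boot/ is built freestanding
parser.o: CXXFLAGS += -O3                               # one hot file gets extra optimisation

%.o: %.cc
	$(CXX) $(CXXFLAGS) -c $< -o $@

font.o: font.psf                                        # and one thing that isn't C++ at all
	objcopy -I binary -O elf64-x86-64 -B i386:x86-64 $< $@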
Brendan wrote:
Rusky wrote:Is it somehow more of a waste to write a simple Makefile than it is to maintain a custom make replacement written in C? Is it somehow more of a waste to use existing, well-tested tools that store, with virtually no effort on your part, much more of your project's history in a better format?
A lot of people make the mistake of thinking my build utility is just a make replacement. The reality is that "make replacement" is only a small amount of the build utility's code. If you combined make, docbook and doxygen you'd be a lot closer (but still be lacking support for a lot of things).
Other than replacing make (with harmful requirements imposed on project/build structure), implementing in C what would be more easily done with a few make targets and a real version control system, and duplicating various documentation and parsing utilities (and I still don't see what your problem with those is), what does it do? It reorders error messages, which is nice if you ever get a million of them at once; it guesses include directories, which is nice if you ever have more than one or two; and it implements an ad-hoc version of make's dependency system just for scripts, which is nice if you don't have a more general one that can be automated. Oh, and it's all implemented in multithreaded C. Awesome.
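To be concrete about "a few make targets" (the names here are made up): regenerating something from a script only when its inputs change, and building the docs, each fit in a couple of lines, with the same dependency handling as the rest of the build:
Code: Select all
# Hypothetical targets - make's dependency system already covers scripts and docs.
syscalls.h: gen_syscalls.sh syscalls.list
	./gen_syscalls.sh syscalls.list > $@      # re-run only when the script or its input changes

docs: Doxyfile $(wildcard *.cc *.h)
	doxygen Doxyfile                          # regenerate the API docs

.PHONY: docs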
Brendan wrote:Obfuscating code by hiding common algorithms in libraries is something I don't do.
You call libraries of common code obfuscation; I call their absence harmful duplication of code. The Linux kernel, for example, keeps generic code like allocators, data structures, atomics, etc. in a single location. Kernel modules don't re-implement any of that, and if they did, maintenance would be impossible.
Brendan wrote:
JackScott wrote:I believe (Solar may correct me here) but context basically means which component of the operating system. Say if I fix a bug in my keyboard handler, and then Ctrl-C process termination no longer works, I want to be able to take a guess that it's in the keyboard handler and not process control code. Revision control means I can see what code has changed in both components since it was last working, and thus figure out where to focus the bug-fixing efforts.
Ok - that makes sense. I test often, and I don't have Alzheimer's.
Alzheimer's is not a prerequisite for forgetting exactly why some line is the way it is. Good testing and commenting are great, but version control can tell you exactly which lines changed, from what to what, and for what reason, even when you don't have a test case or a perfect memory of whatever you changed last month. Version control also makes whatever information you could gather anyway far more accessible than searching through archives (full-text search through the whole project's history? Done.)
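Concretely, with git (the file and symbol names are invented):
Code: Select all
$ git log -p parser.cc                               # every change to one file, with diffs and log messages
$ git blame parser.cc                                # which commit last touched each line, and why
$ git log -S'parse_expression'                       # commits that added or removed that string
$ git grep 'parse_expression' $(git rev-list --all)  # full-text search across the entire history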
Brendan wrote:
Solar wrote:- easy revert to last checked-in version;
Occasionally (if I'm not sure what I'm doing) I might make a copy of a file before doing something, so that I can switch back to the original if I screw things up. Usually this is for refactoring the code though - so I can have the old/original in one window and use it as a reference while modifying/refactoring.
Version control is a better way to make those copies, for several reasons: they're presented in context with the other files they actually worked with and a log message, they don't clutter up your working directory (or get duplicated into backups unnecessarily), they can stay around forever because there's no mental overhead, and they can live in separate branches for when you're experimenting with different implementations.
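The copy-the-file-first workflow maps directly onto a couple of commands (file names invented):
Code: Select all
$ git checkout -- parser.cc                   # throw away uncommitted changes to one file
$ git show HEAD~3:parser.cc > old_parser.cc   # pull an old version out as a reference while refactoring
$ git stash                                   # or shelve the whole working tree and come back to it later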
Brendan wrote:A decision chart (for "single developer" projects only):
- Do you want to create a branch?
  - No: Ok then.
  - Yes: Will you want to merge later?
    - No: Then just create a copy of the project's directory.
    - Yes (or maybe): Then you need to spend more time deciding what the future of your project should be instead.
That last point is utter crap. Especially with the ease of branching and merging you get from distributed version control, there's absolutely no reason not to make a new branch for each and every feature. It has nothing to do with indecision about the future of the project, and everything to do with avoiding lots of ad-hoc backup archives, copied files, and copied directories. With tools like gitk you can even browse your project's history visually, and branches make that picture much clearer (see the sketch below).
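For the "yes, and maybe merge later" case, the entire ceremony is (branch name made up):
Code: Select all
$ git checkout -b new-allocator    # start the experiment on its own branch
$ # ...hack, commit, hack, commit...
$ git checkout master              # the main line stays untouched the whole time
$ git merge new-allocator          # keep the experiment...
$ git branch -d new-allocator      # ...and drop the branch label, or use -D to throw it all away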
Brendan wrote:Can I "drag and drop" the repo into my FTP server's directory if I felt like letting someone download the entire project?
[...]
With git, you can just drag the entire directory; but the person who might want to download it walks away without downloading anything because it's too much hassle.
Most developers *ahem* already use version control, so they don't need an extra tool. And version control doesn't preclude a "make dist" target that builds an archive of the project (see below); the problem is that those archives are not a good enough substitute for real version control.
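The whole "let someone download the entire project" feature is one target (project name hypothetical):
Code: Select all
dist:
	git archive --format=tar --prefix=myproject/ HEAD | gzip > myproject.tar.gz

.PHONY: dist
Drop the tarball wherever you like - an FTP directory included.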
Brendan wrote:My internet connection is too slow for that - press F12, wait for 5 minutes while it uploads.
[...]
When I press F12 everything completes in less than half a second and I'm trying to reduce it further. Yesterday I pressed "F12" about 100 times. Those ten seconds (which doesn't even count creating the diff in the first place) would've added up to about 16 minutes throughout the day. Screw that.
Version control does not mean F12 has to upload anything. Distributed version control like Git commits locally, so you could just do a "git commit -am 'auto-generated build id'" on each build and have exactly what you do now with archives, only better - and nothing ever touches your internet connection.
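A sketch of what the F12 binding could run instead (the build-id format is made up); the commit is purely local, so the slow connection never enters into it:
Code: Select all
#!/bin/sh
# Sketch of a local "build + snapshot" hook; nothing here touches the network.
make -j6 || exit 1
git commit -am "auto-generated build id $(date +%Y%m%d-%H%M%S)" || true   # no-op if nothing changed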