
Brendan's Build Utility (was Workflow questions)

Posted: Sun Nov 27, 2011 12:40 pm
by mreiland
Hey Brendan, I'm actually really interested in that utility, is there a chance I could get a look at the old version to give me an idea of how I would go about automating things? Obviously my needs are much simpler than yours, but it would be nice to see the specifics of how someone else solved that problem.

Re: Workflow questions

Posted: Sun Nov 27, 2011 6:16 pm
by Brendan
Hi,
mreiland wrote:Hey Brendan, I'm actually really interested in that utility, is there a chance I could get a look at the old version to give me an idea of how I would go about automating things? Obviously my needs are much simpler than yours, but it would be nice to see the specifics of how someone else solved that problem.
The basic idea is relatively simple, but first let me explain something about my project's source code.

Once upon a time computers didn't have much memory (e.g. 64 KiB of RAM, or less). Back then, you simply couldn't fit the compiler and all the data it needs in memory at the same time. To get around that people designed toolchains so that large projects could be broken up into smaller pieces and compiled separately, then all the separate pieces could be combined. For example, you might have 20 different C source files, compile them into object files separately, then link all the object files together.

Ironically, this ancient practice isn't very smart on modern machines (where there's plenty of RAM). Not only are you starting up the compiler 20 times instead of once, you're doing a lot more file IO (storing the object files, then loading them all again). In theory you get to avoid recompiling individual source files that didn't change (unless you change a header file or something and all object files need to be recompiled), but in practice (even if only one source file changed) this benefit is negligible. It also means that the compiler can't optimise properly because it can only see a small part of the whole program at a time.

So, I don't do that. For each binary I have a file (called "0index.asm" or "0index.c") that includes all the other source files; and the compiler and linker (or the assembler) does the entire binary in one go. Of course this also means it's easy for a utility to look at the first source file ("0index.asm" or "0index.c") and find all of the dependencies... ;)
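
To make this concrete, a trimmed-down "0index.c" might look something like this (the file names and the two "header" comments at the top are made up for illustration - the real header format is whatever you decide it should be, and I describe what it's for below):

Code:

/* OUTPUT:  bin/kernel.bin                                          */
/* COMMAND: gcc -ffreestanding -nostdlib -o bin/kernel.bin 0index.c */

#include "main.c"
#include "memory/physical.c"
#include "memory/virtual.c"
#include "drivers/video.c"
#include "drivers/keyboard.c"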


The first thing the build utility does is recursively scan the project's directory looking for "important" files, like "0index.asm" or "0index.c". For each important file it finds it creates a "thread data structure" and spawns a thread. Each thread opens its initial file, parses the file's header, and searches for anything it "#includes". To replace something like "make" you have to know what command/s to use to compile or assemble the source code, and what the resulting output file/s will be called. That's why each of the "0index.asm" and "0index.c" files needs a small header.
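
To give you an idea, a stripped-down "thread data structure" might look something like this (the field names are made up for illustration, not copied from my actual code):

Code:

#include <pthread.h>

#define MAX_DEPS     64
#define MAX_OUTPUTS  8

struct build_thread {
    pthread_t thread;                    /* the worker thread itself */
    char *initial_file;                  /* e.g. "/foo/bar/0index.c" */
    char *command;                       /* build command taken from the file's header */
    char *outputs[MAX_OUTPUTS];          /* output file/s named in the header */
    int output_count;
    char *sources[MAX_DEPS];             /* every file found via "#include"/"%include" */
    int source_count;
    struct build_thread *deps[MAX_DEPS]; /* other threads whose outputs we consume */
    int dep_count;
    volatile int will_update;            /* set if this thread's outputs will be rebuilt */
    volatile int done;                   /* set when the thread has finished */
};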

Once each thread knows what its dependencies and output files are, it does a few sanity checks - determine if other threads want to create the same output binaries, determine if there are any circular dependencies, etc. While doing this a thread determines which other threads it might need to wait for.
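
The circular dependency check itself can just be a recursive walk over each thread's dependency list. A minimal sketch (using the structure from the previous snippet; a real version needs a "visited" flag so it can't recurse forever if there's a cycle somewhere else in the graph):

Code:

/* Returns non-zero if "target" can be reached from "t" by following dependency links. */
static int reaches(struct build_thread *t, struct build_thread *target)
{
    int i;

    for (i = 0; i < t->dep_count; i++) {
        if (t->deps[i] == target || reaches(t->deps[i], target)) {
            return 1;
        }
    }
    return 0;
}

/* A thread is part of a circular dependency if it can reach itself. */
static int has_circular_dependency(struct build_thread *t)
{
    return reaches(t, t);
}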

Then each thread looks at file modification times. The idea here is to determine which output files definitely need to be updated and which output files might not need to be updated. This is simple - if the latest source file modification time is later than the earliest output file modification time, then the thread definitely does need to rebuild its output file/s. In this case the thread sets a flag in its "thread data structure" so other threads can see that.
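
In C that check is little more than a couple of stat() calls per file. A rough sketch (again using the made-up structure from above; the caller sets "will_update" when this returns non-zero):

Code:

#include <sys/stat.h>
#include <time.h>

/* Returns non-zero if this thread's output file/s definitely need to be rebuilt. */
static int definitely_needs_update(struct build_thread *t)
{
    struct stat st;
    time_t newest_source = 0;
    time_t oldest_output = 0;
    int i;

    for (i = 0; i < t->source_count; i++) {
        if (stat(t->sources[i], &st) == 0 && st.st_mtime > newest_source) {
            newest_source = st.st_mtime;
        }
    }
    for (i = 0; i < t->output_count; i++) {
        if (stat(t->outputs[i], &st) != 0) {
            return 1;                               /* missing output: must rebuild */
        }
        if (oldest_output == 0 || st.st_mtime < oldest_output) {
            oldest_output = st.st_mtime;
        }
    }
    return newest_source > oldest_output;
}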

Next, if a thread might not need to update its output file/s, it checks any other threads that it depends on to determine if any of them will be updated (this doesn't need to happen when a thread already knows its output file/s definitely need to be updated). After this a thread knows whether it has to do something or not; and if it doesn't have to update its output file/s it simply stops.

After this we're left with threads that do have to update their output files. The first thing a thread needs to do is wait for any other threads that it depends on to complete. After that, a thread can start the compiler or assembler to generate the output files.

Of course there's a bunch of little details I've skipped. For example, you need to redirect any child process' "stdout" and "stderr" to the build utility, and buffer any output from each thread so it can be displayed in a sane order.
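
For the redirection part, on a POSIX-style host it's the usual pipe/fork/dup2 dance. A minimal sketch with no error handling (each thread would write into its own buffer or temporary file rather than a shared "log"):

Code:

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

/* Run "command" via the shell, with its stdout and stderr captured through a pipe
   so the build utility can buffer the output and display it in a sane order later. */
static int run_buffered(const char *command, FILE *log)
{
    int fds[2];
    pid_t pid;
    char buffer[4096];
    ssize_t len;
    int status;

    pipe(fds);
    pid = fork();
    if (pid == 0) {
        dup2(fds[1], STDOUT_FILENO);                /* child: stdout -> pipe */
        dup2(fds[1], STDERR_FILENO);                /* child: stderr -> same pipe */
        close(fds[0]);
        close(fds[1]);
        execl("/bin/sh", "sh", "-c", command, (char *)NULL);
        _exit(127);                                 /* only reached if exec failed */
    }
    close(fds[1]);
    while ((len = read(fds[0], buffer, sizeof(buffer))) > 0) {
        fwrite(buffer, 1, (size_t)len, log);        /* buffered per thread */
    }
    close(fds[0]);
    waitpid(pid, &status, 0);
    return status;
}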

Once you've got this working it's easy to extend. The first thing you'd want to look at is include files and headers. When you're scanning the project's directory you can remember any sub-directories called "include" (or "inc"); and use this information (combined with the list of dependencies) to automatically generate the include file path. For example, if "/foo/bar/0index.asm" contains the line "%include "thing.inc""; then you can search for "/foo/bar/thing.inc", then "/foo/inc/thing.inc", then "/inc/thing.inc" and add the correct directory to the include path that you pass to the assembler.
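
The search itself is just "probe candidate directories until the file exists". A rough sketch (hard-coding "inc" as the include directory name and assuming absolute paths - the real thing would use whatever directories it remembered while scanning):

Code:

#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Find the directory to add to the include path, given the directory of the file doing
   the including (e.g. "/foo/bar") and the name from the directive (e.g. "thing.inc"). */
static int find_include_dir(const char *start_dir, const char *name,
                            char *result, size_t result_size)
{
    char dir[1024];
    char candidate[2048];
    char *slash;

    /* 1) same directory as the including file, e.g. "/foo/bar/thing.inc" */
    snprintf(candidate, sizeof(candidate), "%s/%s", start_dir, name);
    if (access(candidate, R_OK) == 0) {
        snprintf(result, result_size, "%s", start_dir);
        return 1;
    }

    /* 2) the "inc" directory of each parent, e.g. "/foo/inc/thing.inc", then "/inc/thing.inc" */
    strncpy(dir, start_dir, sizeof(dir) - 1);
    dir[sizeof(dir) - 1] = '\0';
    while ((slash = strrchr(dir, '/')) != NULL) {
        *slash = '\0';                              /* step up one directory level */
        snprintf(candidate, sizeof(candidate), "%s/inc/%s", dir, name);
        if (access(candidate, R_OK) == 0) {
            snprintf(result, result_size, "%s/inc", dir);
            return 1;
        }
        if (slash == dir) {
            break;                                  /* we just tried the root */
        }
    }
    return 0;
}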

The next step is to add support for "scripts" to the build utility. The problem with scripts is you can't easily determine which files they depend on, so I just add a "dependency list" to the script's header. You've already got the ability to execute commands (to run compilers and assemblers) and that's all a script is (just run a list of commands one at a time). You've also already got the automatic dependency resolution stuff. For example, if a script generates the file "/foo/bar/hello.asm" and this file is included by "/foo/bar/0index.asm", then the thread responsible for "/foo/bar/0index.asm" knows that it has to wait for the script's thread to complete before it assembles "/foo/bar/0index.asm".

Then you start looking at backups. It's easy to get the build utility to "tar and gzip" the entire project's directory, but you can be smarter than that. You already know which files will be generated by compilers, assemblers and scripts; and there's no point including any of those files in the backup, so you can tell "tar" to exclude them. Because none of the output files will be included in the backup, you can start the "tar and gzip" process as soon as the other threads have parsed headers; and you don't have to wait until the other threads have finished updating their output binaries. Of course the problem with generating backups like this is that you'll fill up your hard drive with millions of backups (on a good day I'll press F12 about 50 times, and 50 backups per day starts to add up over a few years). However, it's easy enough to start the "tar and gzip" process, then (while it's running) do some maintenance of your "backup" directory and delete unnecessary backups. I give the backups a file name based on the time and date, and then keep:
  • all backups from the current day and the previous day, plus
  • one backup per day for the most recent 2 weeks, plus
  • one backup per week for the most recent 2 months, plus
  • one backup per month
Any other backups get discarded.
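
A sketch of that retention check (assuming each backup's timestamp can be recovered from its file name as a "time_t"; iterate the backups from oldest to newest and delete any backup where this returns zero, tracking the timestamp of the previous backup that was kept):

Code:

#include <time.h>

#define DAY (60 * 60 * 24)

/* Decide whether a backup with timestamp "backup" should be kept, given the timestamp
   of the previous (older) backup that was kept and the current time "now". */
static int keep_backup(time_t backup, time_t previous_kept, time_t now)
{
    double age = difftime(now, backup);
    double gap = difftime(backup, previous_kept);

    if (age <= 2 * DAY) {
        return 1;                       /* everything from today and yesterday */
    }
    if (age <= 14 * DAY) {
        return gap >= 1 * DAY;          /* at most one per day for the last 2 weeks */
    }
    if (age <= 61 * DAY) {
        return gap >= 7 * DAY;          /* at most one per week for the last 2 months */
    }
    return gap >= 30 * DAY;             /* at most one per month after that */
}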

The next thing to think about is documentation. If you're like me you'll end up with a bunch of utilities in C (to create floppy images, create CD images, convert font data into a different format, compress files in a way your boot code likes, etc); and eventually you'll want to provide documentation on how to install/use the OS for end users. Then there's specifications - I end up creating lots of specifications for different file formats, different APIs, etc; and they're not too different from documentation. I really don't like existing tools (like LaTeX, docbook, etc), so I add that to my build utility too.

Basically, when the build utility scans the project directory looking for "special" files, it also notices anything called "*.txt" in a "doc" or "spec" directory; and for each text file it finds it starts a thread to process it. These threads convert the plain text source files into fancy HTML web pages (for example, here's the [url=http://bcos.hopto.org/www2/doc/util/build.html]documentation for the previous version of the build utility[/url], which was processed by the previous version of the build utility).

Now, one of the things that annoys me with most normal utilities is that you can move/rename or delete files and you'll end up with old output files. For example, let's say you've got a file "/doc/foo.txt" which gets converted into "/www/doc/foo.html", and you rename the source file to "/doc/bar.txt". Now you get a new "/www/doc/bar.html" file, but you've also got an obsolete "/www/doc/foo.html" that's left lying around. The next feature for the build utility is a "cleaner" thread - while everything else is happening, find all these obsolete/orphaned files and remove them!
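
The cleaner thread is conceptually simple: build the set of output files the other threads claim they generate, then walk the output directories (e.g. with readdir() or nftw()) and remove anything that isn't in that set. A very rough sketch, reusing the made-up structure from earlier:

Code:

#include <stdio.h>
#include <string.h>

/* Returns non-zero if "path" is an output file that some thread claims to generate. */
static int is_expected_output(const char *path,
                              struct build_thread *threads, int thread_count)
{
    int i, j;

    for (i = 0; i < thread_count; i++) {
        for (j = 0; j < threads[i].output_count; j++) {
            if (strcmp(path, threads[i].outputs[j]) == 0) {
                return 1;
            }
        }
    }
    return 0;
}

/* Called for each regular file found while walking the output directories. */
static void maybe_remove_orphan(const char *path,
                                struct build_thread *threads, int thread_count)
{
    if (!is_expected_output(path, threads, thread_count)) {
        printf("removing orphaned file: %s\n", path);
        remove(path);
    }
}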

Now let's talk about source code. If your build utility is already generating fancy HTML documentation and specifications, why not make it parse assembly and C (and scripts) and generate fancy web pages for them too? There's plenty of utilities that do this (e.g. doxygen), but they're a pain in the neck (maintenance) and I don't like the way the resulting web pages look. Also, an "all in one" utility can be much more powerful than any of these stand-alone utilities. For example, I can have a comment like "// For more information see [s:foo, bar]." in some source code, and the utility will automatically find the section labelled "bar" in the specification "foo.txt" and generate an HTML link that looks like "// For more information see Section 1.2.3 The Thingy, in Some Specification" (which is easy to do because the utility already parses specifications - one thread just asks the other thread for the information), and if I change anything in the specification the web page for the source code will be updated to reflect the changes. To make this more fun I also added titles and subtitles to the headers (e.g. the header in an "0index.asm" might say that the title is "80x86 Floppy Boot Loader").
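
Pulling the reference out of a comment is the easy part - something like the sketch below (with made-up names; the real work is asking the thread that parsed "foo.txt" what the section labelled "bar" ended up being called and where its HTML anchor is):

Code:

#include <stdio.h>
#include <string.h>

/* Given a comment line like "// For more information see [s:foo, bar].",
   extract the specification name ("foo") and the section label ("bar"). */
static int parse_spec_reference(const char *line, char *spec, size_t spec_size,
                                char *label, size_t label_size)
{
    const char *start = strstr(line, "[s:");
    const char *comma;
    const char *end;

    if (start == NULL) {
        return 0;
    }
    start += 3;
    comma = strchr(start, ',');
    end = strchr(start, ']');
    if (comma == NULL || end == NULL || comma > end) {
        return 0;
    }
    snprintf(spec, spec_size, "%.*s", (int)(comma - start), start);
    while (*++comma == ' ') {
        ;                                           /* skip spaces after the comma */
    }
    snprintf(label, label_size, "%.*s", (int)(end - comma), comma);
    return 1;
}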

Finally, there's navigating through the project. A bunch of disorganised web pages is just a mess. To fix that I have "association files" which organise the web pages into a hierarchical structure. Any file called "index.txt" is converted into a fancy HTML page with HTML links to its parent and all its children (but it's very similar to the way documentation and specifications are parsed and converted to fancy HTML). On top of that, no web site is complete without a site map - I have a thread that auto-generates a project map from the titles of all the other pages (following the "hierarchical tree" navigation system).

That's about it for the previous version of my build utility. For the new version a lot more features are planned - performance improvements (the thing already has its own file cache), source code reformatting, a project-wide glossary, better HTML pages (CSS + HTML4 rather than plain HTML3), better support for C source code, etc. :)


Cheers,

Brendan

Re: Workflow questions

Posted: Sun Nov 27, 2011 7:22 pm
by Rusky
I'm skeptical that the performance difference of recompiling unchanged source files is negligible.

On a modern system, multiple instances of the compiler can be parallelized to take advantage of extra RAM (while their code and read-only data will be shared), communication between compiler/assembler and linker will be through the disk cache in the extra RAM, and LTO will restore any knowledge lost by separate compilation.

On any system, keeping support libraries separate and sharing their object code between different libraries/executables would reduce a lot of duplicated effort - changing the library only means relinking (and possibly doing LTO), not recompiling every binary that uses the library.

Of course, benchmarks are more useful than speculation or conventional wisdom.

I'm also not sure your backup system is necessary with version control. You could easily have a Makefile commit and tag the current state each time you do a build, and let the software that's designed to keep track of history do its job much more efficiently than simply compressing a bunch of (mostly the same) files over and over and then dropping potentially useful data.

Re: Workflow questions

Posted: Sun Nov 27, 2011 11:37 pm
by Brendan
Hi,
Rusky wrote:I'm skeptical that the performance difference of recompiling unchanged source files is negligible.

On a modern system, multiple instances of the compiler can be parallelized to take advantage of extra RAM (while their code and read-only data will be shared), communication between compiler/assembler and linker will be through the disk cache in the extra RAM, and LTO will restore any knowledge lost by separate compilation.
When your build utility is running 150 threads and up to 50 external processes, "extra parallelism" isn't going to gain you anything.
Rusky wrote:On any system, keeping support libraries separate and sharing their object code between different libraries/executables would reduce a lot of duplicated effort- changing the library only means relinking (and possibly doing LTO), not recompiling every binary that uses the library.
I don't use libraries for anything, and can't think of any case where different binaries use the same code (where using libraries might be worthwhile). Of course this depends on what the OS is like - for my OS (where "services" running as different processes are used instead of shared libraries) it's less useful.
Rusky wrote:Of course, benchmarks are more useful than speculation or conventional wisdom.
Agreed. It's a bit hard to benchmark something like that though, as it depends on a lot of different factors (number of files, chance of all files needing to be recompiled, etc).

I did do a few tests. The first was "time gcc -c test.c" with a completely empty "test.c", to estimate that it costs around 20 ms of overhead just to start GCC. Another test was "gcc -pipe", to determine that using pipes to avoid file IO (even for a single source/object file and even when the rest of the computer is idle) improves compile times by about 5%.
Rusky wrote:I'm also not sure your backup system is necessary with version control. You could easily have a Makefile commit and tag the current state each time you do a build, and let the software that's designed to keep track of history do its job much more efficiently than simply compressing a bunch of (mostly the same) files over and over and then dropping potentially useful data.
I don't want to waste my time writing/maintaining makefiles or version control systems (and don't see the point for what is essentially a "single developer" project). The backup is there as a backup (e.g. in case my main file system dies), and a nice clean and simple "tar.gz" serves that purpose well (although I should probably copy it from my "backup" hard drive to a USB stick more often).


Cheers,

Brendan

Re: Workflow questions

Posted: Mon Nov 28, 2011 2:56 am
by Solar
A note on Brendan's build system: Gentoo Linux offers an optional build flag for KDE (C++) to lump together all sources into a single file and compile that, instead of compiling the individual translation units. The performance increase is massive, IMHO - as is the memory use.

Re: Workflow questions

Posted: Mon Nov 28, 2011 3:29 am
by rdos
I have to disagree with Brendan's claims that version control is not necessary and that it is sufficient to just zip the source tree. IMO, that is NOT sufficient. When you discover that a change made several weeks ago has side-effects that only show up after several revisions of the source tree, you will discover that simply zipping the source tree is not adequate. In the worst case, with version control, it is possible to go back to any previous release, not only the last one zipped. I've done this several times with RDOS as I've not been able to resolve errors in any other way. The big deal with version control is that ALL revisions are saved in a safe place, not only the one or few that the developer might remember to save. The diff functionality of typical version control tools is also handy, although it is possible to do this between zipped files as well.

Re: Workflow questions

Posted: Mon Nov 28, 2011 3:46 am
by Solar
Gosh, I missed that one!
Brendan wrote:I don't want to waste my time writing/maintaining makefiles or version control systems (and don't see the point for what is essentially a "single developer" project).
I seldom disagree with Brendan, but I very much do in this case.

1) A well-written Makefile template is adaptable to a wide range of projects, and shouldn't take much "maintaining". It is just very unfortunate that so few such templates exist.

2) Version Control is never a waste of time (and doesn't take much time to begin with), not even in a single-developer project. Without going into detail, immediately obvious benefits are:
  • changes being annotated (unless you go for really long zip/tarball names);
  • "blame" (the ability to check which parts of a source file have been changed when, by whom, and in which context);
  • easy revert to last checked-in version;
  • no-brainer synchronization of multiple development machines.
There are many more advantages; those are just the most obvious ones. Once you've set up your repo, using a VCS takes even less time than tarballing your source - and definitely uses less hard drive space.

Getting a VCS repository hosted somewhere on the internet additionally gives you automatic off-site backups and the ability to check your sources even when away from your development machine.

VCS, IMHO, is one of the things that should just be automatic for a developer, like indenting your code. You just do it.

Re: Workflow questions

Posted: Mon Nov 28, 2011 3:48 am
by gerryg400
True. I cannot imagine doing anything without version control.

Re: Workflow questions

Posted: Mon Nov 28, 2011 4:59 am
by Combuster
So, let's ask Brendan about indentation.
Given his language of choice, it'll probably be restricted to vertical indentation only :twisted:.

Re: Workflow questions

Posted: Mon Nov 28, 2011 7:33 am
by Rusky
Brendan wrote: When your build utility is running 150 threads and up to 50 external processes, "extra parallelism" isn't going to gain you anything.
This would be an interesting thing to benchmark - 50 GCC processes (and fewer, to serialize them if it starts hurting performance) for separate compilation vs. 50 source files compiled by the same GCC process. Should be simple with make's -j option, which I must add is much simpler than writing a custom utility in C.
Brendan wrote: I don't use libraries for anything, and can't think of any case where different binaries use the same code (where using libraries might be worthwhile). Of course this depends on what the OS is like - for my OS (where "services" running as different processes are used instead of shared libraries) it's less useful.
Do none of your different binaries use the C standard library? Or any common algorithms? Or is that implemented in a separate process? ;)
Brendan wrote: I did do a few tests. First was doing "time gcc -c test.c" where the file "test.c" is completely empty to estimate that it costs around 20 ms of overhead just to start GCC. Another test was "gcc -pipe" to determine that using pipes to avoid file IO (even for a single source/object file and even when the rest of the computer is idle) improves compile times by about 5%.
Here's a test. A small project using a Makefile with automatic dependency generation (find it at http://github.com/rpjohnst/dejavu if you want to test for yourself), on a dev machine with 4 cores, times shown are after several runs, so everything is already in the disk cache, etc. as if I'd been editing/building for a while:

Code:

$ time g++ -o test *.cc
real	0m1.343s
user	0m1.136s
sys	0m0.176s
$ make clean
$ time make -j6
real	0m0.735s
user	0m1.720s
sys	0m0.156s
$ time make -j6
real	0m0.014s
user	0m0.000s
sys	0m0.008s
$ touch file.cc printer.cc parser.cc
$ time make -j6
real	0m0.546s
user	0m0.704s
sys	0m0.120s
It is noticeably faster to build with an appropriately-parallelized make no matter how many files have changed, and when the only thing that happens is determining which files have changed, it's virtually instantaneous.

The Makefile I wrote is much, much simpler than any C-based utility could hope to be. It contains nothing related to parallelism - I just pass -j6. It only builds what is necessary, which gives a noticeable performance improvement.

It's also easily viewable thanks to version control. :P

Of course, maybe the performance crosses over after you add enough files, or maybe it's different for assembly vs. C vs. C++, or any number of things. But at least in this case, separate compilation is a massive improvement.
Brendan wrote: I don't want to waste my time writing/maintaining makefiles or version control systems (and don't see the point for what is essentially a "single developer" project). The backup is there as a backup (e.g. in case my main file system dies), and a nice clean and simple "tar.gz" serves that purpose well (although I should probably copy it from my "backup" hard drive to a USB stick more often).
Is it somehow more of a waste to write a simple Makefile than it is to maintain a custom make replacement written in C? Is it somehow more of a waste to use existing, well-tested tools that store, with virtually no effort on your part, much more of your project's history in a better format?

Re: Workflow questions

Posted: Mon Nov 28, 2011 1:14 pm
by Brendan
Hi,
Solar wrote:1) A well-written Makefile template is adaptable to a wide range of projects, and shouldn't take much "maintaining". It is just very unfortunate that so few such templates exist.
If all things are built in similar ways (e.g. with GCC using the same arguments), then a well-written Makefile wouldn't take much maintaining. If your project consists of lots of things that are created in lots of different ways then it becomes a mess.

Solar wrote:2) Version Control is never a waste of time (and doesn't take much time to begin with), not even in a single-developer project. Without going into detail, immediately obvious benefits are:
  • changes being annotated (unless you go for really long zip/tarball names);
Which is useful because...?

I do have a "project log", but that's mostly intended for spectators - so the general public can see my (lack of) progress on the web site.
Solar wrote:
  • "blame" (the ability to check which parts of a source file have been changed when, by whom, and in which context);
It's a single developer project, so "by whom" is obvious (I don't have schizophrenia, and neither do I). For "when", I've never needed to know. I don't understand the "in which context" part (everything is done in the context of me working on the OS).
Solar wrote:
  • easy revert to last checked-in version;
Occasionally (if I'm not sure what I'm doing) I might make a copy of a file before doing something, so that I can switch back to the original if I screw things up. Usually this is for refactoring the code though - so I can have the old/original in one window and use it as a reference while modifying/refactoring.
Solar wrote:
  • no-brainer synchronization of multiple development machines.
Not sure why I'd want multiple development machines - they're all connected to the same KVM anyway.
Solar wrote:There are many more advantages, those are just the most obvious ones. After you did set up your repo, using VCS takes even less time than tarballing your source - and definitely uses less hard drive space.
Can I "drag and drop" the repo into my FTP server's directory if I felt like letting someone download the entire project?
Solar wrote:Getting a VCS repository hosted somewhere on the internet additionally gives you automatic off-site backups and the ability to check your sources even when away from your development machine.
My internet connection is too slow for that - press F12, wait for 5 minutes while it uploads.
berkus wrote:
Solar wrote:VCS, IMHO, is one of the things that should just be automatic for a developer, like indenting your code.
So, let's ask Brendan about indentation.
Indented with tabs. Columns for assembly, and my own style for C that most people wouldn't like. ;)


Cheers,

Brendan

Re: Workflow questions

Posted: Mon Nov 28, 2011 1:41 pm
by Brendan
Hi,
Rusky wrote:
Brendan wrote: I don't use libraries for anything, and can't think of any case where different binaries use the same code (where using libraries might be worthwhile). Of course this depends on what the OS is like - for my OS (where "services" running as different processes are used instead of shared libraries) it's less useful.
Do none of your different binaries use the C standard library? Or any common algorithms? Or is that implemented in a separate process? ;)
Utilities use the C standard library, but the C standard library is not part of my project (it's something external that I don't have to worry about).

Obfuscating code by hiding common algorithms in libraries is something I don't do.

Rusky wrote:It is noticeably faster to build with an appropriately-parallelized make no matter how many files have changed, and when the only thing that happens is determining which files have changed, it's virtually instantaneous.
Now do 50 of those at the same time ("When your build utility is running 150 threads and up to 50 external processes, "extra parallelism" isn't going to gain you anything."); and touch a header file that is used by everything.
Rusky wrote:
Brendan wrote: I don't want to waste my time writing/maintaining makefiles or version control systems (and don't see the point for what is essentially a "single developer" project). The backup is there as a backup (e.g. in case my main file system dies), and a nice clean and simple "tar.gz" serves that purpose well (although I should probably copy it from my "backup" hard drive to a USB stick more often).
Is it somehow more of a waste to write a simple Makefile than it is to maintain a custom make replacement written in C? Is it somehow more of a waste to use existing, well-tested tools that store, with virtually no effort on your part, much more of your project's history in a better format?
A lot of people make the mistake of thinking my build utility is just a make replacement. The reality is that the "make replacement" part is only a small amount of the build utility's code. If you combined make, docbook and doxygen you'd be a lot closer (but still be lacking support for a lot of things).


Cheers,

Brendan

Re: Workflow questions

Posted: Mon Nov 28, 2011 3:20 pm
by JackScott
Brendan wrote:It's a single developer project, so "by whom" is obvious (I don't have schizophrenia, and neither do I). For "when", I've never needed to know. I don't understand the "in which context" part (everything is done in the context of me working on the OS).
I believe (Solar may correct me here) that context basically means which component of the operating system. Say if I fix a bug in my keyboard handler, and then Ctrl-C process termination no longer works, I want to be able to take a guess that it's in the keyboard handler and not the process control code. Revision control means I can see what code has changed in both components since it was last working, and thus figure out where to focus the bug-fixing efforts.
Brendan wrote:Can I "drag and drop" the repo into my FTP server's directory if I felt like letting someone download the entire project?
[...]
My internet connection is too slow for that - press F12, wait for 5 minutes while it uploads.
With git, you can just drag up the entire directory. Then the downloader gets full revision history as well. What's more, you can sync the copy on the web/ftp server with your local copy so that you only have to upload the differences between revisions.

Even with Australian Internet (Australian dial-up Internet even) uploading the diff between two revisions should take no more than ten seconds. Even if those ten seconds are important to you, this upload can be done parallel to other tasks (like coding) so the time doesn't really matter.

Solar also didn't mention (unless I'm blind, which is a possibility) the fact that revision control makes branching code much easier. If I wanted to work on a component in separation from all the others, and then merge the changes in all at once later on, this is a fairly easy operation in any revision control system (and some, like git and mercurial, make it positively trivial).

Re: Workflow questions

Posted: Mon Nov 28, 2011 4:39 pm
by Brendan
Hi,
JackScott wrote:
Brendan wrote:It's a single developer project, so "by whom" is obvious (I don't have schizophrenia, and neither do I). For "when", I've never needed to know. I don't understand the "in which context" part (everything is done in the context of me working on the OS).
I believe (Solar may correct me here) that context basically means which component of the operating system. Say if I fix a bug in my keyboard handler, and then Ctrl-C process termination no longer works, I want to be able to take a guess that it's in the keyboard handler and not the process control code. Revision control means I can see what code has changed in both components since it was last working, and thus figure out where to focus the bug-fixing efforts.
Ok - that makes sense.

I test often, and I don't have Alzheimer’s.
JackScott wrote:
Brendan wrote:Can I "drag and drop" the repo into my FTP server's directory if I felt like letting someone download the entire project?
[...]
My internet connection is too slow for that - press F12, wait for 5 minutes while it uploads.
With git, you can just drag up the entire directory. Then the downloader gets full revision history as well. Whats more, you can sync the copy on the web/ftp server with your local copy so that you only have to upload the differences in revisions.
When I'm downloading stuff from the Internet there are five cases - single uncompressed files (most common - PDFs, etc), single gzipped files (rare), "*.zip" archives, "tar/gzipped" archives, and stuff I don't download because I couldn't be bothered installing/using an appropriate tool to unpack it. For "*.zip" archives and "tar/gzipped" archives I can just click the archive in KDE like a normal directory (much the same as recent Windows and "*.zip" archives) to open it up and get to the individual files.

With git, you can just drag the entire directory; but the person who might want to download it walks away without downloading anything because it's too much hassle.

Note: I do realise that for downloads I probably should use "*.zip" instead, to reduce hassle for a wider audience.
JackScott wrote:Even with Australian Internet (Australian dial-up Internet even) uploading the diff between two revisions should take no more than ten seconds. Even if those ten seconds are important to you, this upload can be done parallel to other tasks (like coding) so the time doesn't really matter.
When I press F12 everything completes in less than half a second and I'm trying to reduce it further. Yesterday I pressed "F12" about 100 times. Those ten seconds (which doesn't even count creating the diff in the first place) would've added up to about 16 minutes throughout the day. Screw that. ;)
JackScott wrote:Solar also didn't mention (unless I'm blind, which is a possibility) the fact that revision control lets you do branching of code much easier. If I wanted to work on a component in seperation from all the others, and then merge the changes in all at once later on, this is a fairly easy operation in any revision control (and some, like git and mercurial, make it positively trivial).
A decision chart (for "single developer" projects only):
  • Do you want to create a branch?
    • No:
      • Ok then.
    • Yes:
      • Will you want to merge later?
        • No:
          • Then just create a copy of the project's directory.
        • Yes (or maybe):
          • Then you need to spend more time deciding what the future of your project should be instead.
:)

Cheers,

Brendan

Re: Workflow questions

Posted: Mon Nov 28, 2011 5:14 pm
by Jezze
@Brendan: I guess you wrote that over-simplified use case in a not-so-serious way, to highlight that you're not using git.

Because that is a true flame-starter. =)