Re: Workflow questions
Posted: Wed Nov 30, 2011 11:02 pm
by Rusky
Brendan wrote:If it wasn't a problem, I wouldn't have noticed a problem.... To be honest it's easier for me to work around the problem in the build utility than to do extensive research.
To be sure, but wouldn't it be even easier (than a custom utility over a random-name-generating Makefile) to use the ?version trick or configure your web server not to let browsers cache CSS? In any case, how often does your CSS change? There's a reason servers usually let stylesheets get cached for a long time.
Brendan wrote:For dependencies, for web page generation alone (not including building binaries, etc): ...
Makes sense. I can see why you wanted to write your own tool for generating documentation, but I also have a nagging feeling that you've over-complicated things. Make could calculate all of those dependencies for you, with the help of a simple utility to do the equivalent of "gcc -M" for doc files. That still has the problem of re-parsing headers though, so this is probably the best reason I've seen so far.
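For what it's worth, the "gcc -M" equivalent doesn't have to be much work. Here's a minimal sketch in C, assuming a hypothetical doc format whose include directive looks like .include "file" (adjust to whatever the real syntax is); it emits a make-style rule that make can then include, the same way automatic header dependencies work for C:
Code:
/* Minimal sketch of a "gcc -M"-style dependency scanner for doc sources.
 * Assumes a hypothetical include directive of the form:  .include "header.doc"
 * Emits a make-style rule: "page.html: page.doc header.doc ..."
 */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s source.doc\n", argv[0]);
        return 1;
    }

    FILE *f = fopen(argv[1], "r");
    if (!f) {
        perror(argv[1]);
        return 1;
    }

    /* Print the target (source name with an .html suffix) and the source itself. */
    char target[256];
    snprintf(target, sizeof(target), "%.*s.html",
             (int)strcspn(argv[1], "."), argv[1]);
    printf("%s: %s", target, argv[1]);

    char line[1024];
    while (fgets(line, sizeof(line), f)) {
        char dep[256];
        /* Hypothetical include syntax: .include "file" */
        if (sscanf(line, " .include \"%255[^\"]\"", dep) == 1)
            printf(" %s", dep);
    }
    printf("\n");

    fclose(f);
    return 0;
}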
Brendan wrote:- allocators - not sure which "allocators". For allocating ranges of the physical address space (not RAM), physical memory pages and virtual memory pages use the kernel API/s. For allocating IO port ranges and IRQs it's all contained within a "device manager" module (nothing else ever allocates these things). For anything else, it either doesn't exist, is confined to a specific module or you create your own implementation to suit your specific case.
The Linux kernel has several special purpose memory allocators more akin to malloc than anything that manages pages, IRQs, or ports. They implement optimizations like fixed-size slab allocation, and are reusable with different arenas, sizes, memory regions, etc. The whole point - efficiency - would be missed if they were behind any level of indirection.
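To make that concrete, here's a minimal sketch of the kind of fixed-size object cache being described - not the Linux code, just the shape of it (one arena, one object size, a free list threaded through the unused slots; names and sizes are made up):
Code:
/* Minimal fixed-size object cache sketching the slab idea: carve one arena
 * into equal-sized slots and thread a free list through the unused slots.
 * Alloc and free are a couple of pointer moves. (The arena is never freed
 * in this sketch.)
 */
#include <stddef.h>
#include <stdlib.h>

struct obj_cache {
    size_t obj_size;   /* size of each slot, rounded for pointer alignment */
    void  *free_list;  /* first free slot; each free slot stores the next pointer */
};

/* Create a cache backed by one malloc'd arena of 'count' objects. */
static int cache_init(struct obj_cache *c, size_t obj_size, size_t count)
{
    /* Round the object size up so every slot can hold a pointer and stays aligned. */
    if (obj_size < sizeof(void *))
        obj_size = sizeof(void *);
    obj_size = (obj_size + sizeof(void *) - 1) & ~(sizeof(void *) - 1);

    char *arena = malloc(obj_size * count);
    if (!arena)
        return -1;

    c->obj_size = obj_size;
    c->free_list = NULL;

    /* Thread every slot onto the free list. */
    for (size_t i = 0; i < count; i++) {
        void **slot = (void **)(arena + i * obj_size);
        *slot = c->free_list;
        c->free_list = slot;
    }
    return 0;
}

static void *cache_alloc(struct obj_cache *c)
{
    void **slot = c->free_list;
    if (!slot)
        return NULL;             /* arena exhausted; a real cache would grow here */
    c->free_list = *slot;
    return slot;
}

static void cache_free(struct obj_cache *c, void *obj)
{
    void **slot = obj;
    *slot = c->free_list;
    c->free_list = slot;
}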
Brendan wrote:- data structures - no idea which ones. The only data structures that are used by more than one binary are those that are covered by IPC protocols (and the specifications and include/header files that define them) or the kernel's APIs (and the specifications and include/header files that define them).
The Linux kernel has common code for handling arbitrary-sized bitmaps, including optimized bit/region set/test/clear, scanning, printing for debugging purposes, cross-architecture portability, etc. It has common code for working with binary trees (searching, lookup, etc.) while doing maintenance like keeping them balanced. It also has common code for linked lists, radix trees, flex arrays, etc., as well as algorithms like binary search, sorting arrays and lists, etc.
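Concretely, the bitmap helpers boil down to something like the following (a generic sketch, not the kernel's actual lib/bitmap.c):
Code:
/* Generic bitmap helpers of the kind described above. The bitmap is an
 * array of unsigned longs; callers size it to hold the bits they need.
 */
#include <limits.h>

#define BITS_PER_WORD (sizeof(unsigned long) * CHAR_BIT)

static void bitmap_set(unsigned long *map, unsigned int bit)
{
    map[bit / BITS_PER_WORD] |= 1UL << (bit % BITS_PER_WORD);
}

static void bitmap_clear(unsigned long *map, unsigned int bit)
{
    map[bit / BITS_PER_WORD] &= ~(1UL << (bit % BITS_PER_WORD));
}

static int bitmap_test(const unsigned long *map, unsigned int bit)
{
    return (map[bit / BITS_PER_WORD] >> (bit % BITS_PER_WORD)) & 1UL;
}

/* Scan for the first clear bit, or return 'size' if the bitmap is full. */
static unsigned int bitmap_find_first_zero(const unsigned long *map,
                                           unsigned int size)
{
    for (unsigned int bit = 0; bit < size; bit++)
        if (!bitmap_test(map, bit))
            return bit;
    return size;
}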
Re: Workflow questions
Posted: Thu Dec 01, 2011 12:29 am
by Brendan
Hi,
Rusky wrote:Brendan wrote:If it wasn't a problem, I wouldn't have noticed a problem.... To be honest it's easier for me to work around the problem in the build utility than to do extensive research.
To be sure, but wouldn't it be even easier (than a custom utility over a random-name-generating Makefile) to use the ?version trick or configure your web server not to let browsers cache CSS?
A web browser asks my web server for the file "foo.css?version=1234" and my web server strips off the "?version=1234" part and returns the right file. Then the user downloads a copy of that web page and puts it on their hard disk. In this case, can you guarantee that there are no browsers that will ask the file system for the file "foo.css?version=1234" (rather than "foo.css")? Is the difference between "foo.css?version=1234" and "foo1234.css" worth the hassle of testing every browser, given that you still need to change the version number in all the web pages anyway?
Rusky wrote:In any case, how often does your CSS change? There's a reason servers usually let stylesheets get cached for a long time.
I don't know how often it will change; but it doesn't matter really. If the CSS file changes once in 10 years and people get new web pages with old CSS for half an hour, then I've lost the right to have pride in my ability as a programmer.
Rusky wrote:Brendan wrote:- allocators - not sure which "allocators". For allocating ranges of the physical address space (not RAM), physical memory pages and virtual memory pages use the kernel API/s. For allocating IO port ranges and IRQs it's all contained within a "device manager" module (nothing else ever allocates these things). For anything else, it either doesn't exist, is confined to a specific module or you create your own implementation to suit your specific case.
The Linux kernel has several special purpose memory allocators more akin to malloc than anything that manages pages, IRQs, or ports. They implement optimizations like fixed-size slab allocation, and are reusable with different arenas, sizes, memory regions, etc. The whole point - efficiency - would be missed if they were behind any level of indirection.
Ok, I don't do that. For things like thread and process information blocks I use something I call "sparse arrays" - reserve a large area of virtual memory for an array of structures, then let the page fault handler allocate RAM when it's accessed, and recycle previously allocated/freed entries when you can. Some things are designed to work with 4 KiB pages (e.g. message queues are just linked lists of pages). Of course it's a micro-kernel, so it never sees/cares about all the little structures used by drivers, file systems, networking, etc (which is probably where the Linux kernel uses the majority of its special purpose memory allocators).
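For anyone curious what that "sparse array" trick looks like, here's a rough userspace analogue in C - a sketch only, using mmap's lazy backing (Linux-flavoured, MAP_NORESERVE) to stand in for what the kernel's page fault handler does; the structure name and sizes are made up:
Code:
/* Userspace sketch of the "sparse array" idea: reserve a big range of
 * virtual address space up front and let page faults populate it with RAM
 * only when an entry is actually touched. In a kernel this is done by the
 * page fault handler; here anonymous mmap with MAP_NORESERVE approximates it.
 */
#include <stdio.h>
#include <sys/mman.h>

struct thread_info {
    int  id;
    char name[56];
    /* ... */
};

#define MAX_THREADS (1024 * 1024)   /* reserve room for a million entries */

int main(void)
{
    struct thread_info *table = mmap(NULL,
                                     sizeof(struct thread_info) * MAX_THREADS,
                                     PROT_READ | PROT_WRITE,
                                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE,
                                     -1, 0);
    if (table == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Only the pages backing entries 0 and 500000 get RAM; the rest of the
     * reservation stays as untouched virtual address space. */
    table[0].id      = 0;
    table[500000].id = 500000;

    printf("entry 500000 id = %d\n", table[500000].id);

    munmap(table, sizeof(struct thread_info) * MAX_THREADS);
    return 0;
}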
Rusky wrote:Brendan wrote:- data structures - no idea which ones. The only data structures that are used by more than one binary are those that are covered by IPC protocols (and the specifications and include/header files that define them) or the kernel's APIs (and the specifications and include/header files that define them).
The Linux kernel has common code for handling arbitrary-sized bitmaps, including optimized bit/region set/test/clear, scanning, printing for debugging purposes, cross-architecture portability, etc. It has common code for working with binary trees (searching, lookup, etc.) while doing maintenance like keeping them balanced. It also has common code for linked lists, radix trees, flex arrays, etc., as well as algorithms like binary search, sorting arrays and lists, etc.
The physical memory manager is the only thing in the kernel that uses bitmaps (and it only uses one). I don't use binary trees, radix trees, flex arrays, binary search or array sorting in the kernel. If you can't handle linked lists without macros or library functions then you should be shot (but I've got better things to do than using "grep list_entry *" to figure out what code actually does, so please don't use macros either).
Cheers,
Brendan
Re: Workflow questions
Posted: Thu Dec 01, 2011 2:51 am
by Solar
Brendan wrote:Then the user downloads a copy of that web page and puts it on their hard disk. In this case, can you guarantee that there are no browsers that will ask the file system for the file "foo.css?version=1234" (rather than "foo.css")? Is the difference between "foo.css?version=1234" and "foo1234.css" worth the hassle of testing every browser, given that you still need to change the version number in all the web pages anyway?
You don't have to.
Seriously, are you shipping files named "foo1234.css"? I thought we were talking about your development workflow, where the only browser that counts is the one you are using.
Any distributed package should have such ugly workaround kludges removed. If the user's browser caches the CSS of the old version of your documentation, their problem.
Brendan wrote:I don't know how often it will change; but it doesn't matter really. If the CSS file changes once in 10 years and people get new web pages with old CSS for half an hour, then I've lost the right to have pride in my ability as a programmer.
As I said, upstream problem. By the way, the whole issue would be solved (on the customer side) if the root URL to the documentation were versioned, i.e. brendanproject.com/docs/v1.1/foo.css. Voila, kludge-free documentation ownage.
Re: Workflow questions
Posted: Thu Dec 01, 2011 4:00 am
by Brendan
Hi,
Solar wrote:Brendan wrote:Then the user downloads a copy of that web page and puts it on their hard disk. In this case, can you guarantee that there are no browsers that will ask the file system for the file "foo.css?version=1234" (rather than "foo.css")? Is the difference between "foo.css?version=1234" and "foo1234.css" worth the hassle of testing every browser, given that you still need to change the version number in all the web pages anyway?
You don't have to.
Seriously, are you shipping files named "foo1234.css"? I thought we were talking about your development workflow, where the only browser that counts is the one you are using.
Any distributed package should have such ugly workaround kludges removed. If the user's browser caches the CSS of the old version of your documentation, their problem.
You're right, and perhaps I've been a little too lazy. For the end-user documentation, there's no real reason I couldn't generate completely different HTML pages for the online version and the offline version.
Note: For the web site, there is no "development" and "published" - it's all both. If I get bored and write a rude joke in some source code, then it's online for anyone to see the next time I press F12. It's a bit like reality TV (someone repeatedly refreshing a page I happen to be working on would see a new version every few minutes and could watch as the code is written). I like it like that. The idea that someone might be watching and might see all my silly mistakes and typos helps to keep me attentive (even though it's unlikely anyone actually is watching).
Solar wrote:Brendan wrote:I don't know how often it will change; but it doesn't matter really. If the CSS file changes once in 10 years and people get new web pages with old CSS for half an hour, then I've lost the right to have pride in my ability as a programmer.
As I said, upstream problem. By the way, the whole issue would be solved (on the customer side) if the root URL to the documentation were versioned, i.e. brendanproject.com/docs/v1.1/foo.css. Voila, kludge-free documentation ownage.
Hmm - every time I press F12, increment the version number, create a new directory to generate all the HTML in, replace all the old pages with HTML redirect pages (so that people's bookmarks don't break), then watch as my "/www" directory gets filled with millions of those HTML redirect pages...
Cheers,
Brendan
Re: Workflow questions
Posted: Thu Dec 01, 2011 4:29 am
by Solar
Brendan wrote:Solar wrote:As I said, upstream problem. By the way, the whole issue would be solved (on the customer side) if the root URL to the documentation were versioned, i.e. brendanproject.com/docs/v1.1/foo.css. Voila, kludge-free documentation ownage.
Hmm - every time I press F12, increment the version number, create a new directory to generate all the HTML in, replace all the old pages with HTML redirect pages (so that people's bookmarks don't break), then watch as my "/www" directory gets filled with millions of those HTML redirect pages...
Brendan wrote:Note: For the web site, there is no "development" and "published" - it's all both. If I get bored and write a rude joke in some source code, then it's online for anyone to see the next time I press F12.
You see where this modus operandi might not be the smartest of choices?
There's a reason for having a work directory, and "committing" changes only at selected points in development.
I know, I know, you're a one-man show. But do you see how several of the problems that made you desire non-standard solutions originate in your non-standard workflow?
Re: Workflow questions
Posted: Thu Dec 01, 2011 5:21 am
by Brendan
Hi,
Solar wrote:Brendan wrote:Note: For the web site, there is no "development" and "published" - it's all both. If I get bored and write a rude joke in some source code, then it's online for anyone to see the next time I press F12.
You see where this modus operandi might not be the smartest of choices?
There's a reason for having a work directory, and "committing" changes only at selected points in development.
I know, I know, you're a one-man show. But do you see how several of the problems that made you desire non-standard solutions originate in your non-standard workflow?
You misunderstand - not having a reason to care about the professionalism and quality of my code until I commit it (by which time I probably would've forgotten all the stupid things I did) would be much worse.
A tightrope walker who works without a net is far less likely to fall.
Cheers,
Brendan
Re: Brendan's Build Utility (was Workflow questions)
Posted: Thu Dec 01, 2011 7:26 am
by Solar
Let's agree to disagree. (On both statements, actually.)
Re: Brendan's Build Utility (was Workflow questions)
Posted: Thu Dec 01, 2011 8:58 am
by cxzuk
I'd just like to say something quickly about version control and 'workflows'.
It's important to remember we have a design stage and an implementation stage.
Version control is only useful to keep a record of -design- changes.
There is no point storing changes from the implementation, as the changes will only represent 'blunders' from design to implementation.
Some people use some form of wiki or document as the design and C/C++ as the implementation; some use C/C++ as the design. Perhaps Brendan is the former?
Re: Workflow questions
Posted: Thu Dec 01, 2011 9:34 am
by Rusky
Brendan wrote:A web browser asks my web server for the file "foo.css?version=1234" and my web server strips off the "?version=1234" part and returns the right file. Then the user downloads a copy of that web page and puts it on their hard disk. In this case, can you guarantee that there are no browsers that will ask the file system for the file "foo.css?version=1234" (rather than "foo.css")? Is the difference between "foo.css?version=1234" and "foo1234.css" worth the hassle of testing every browser, given that you still need to change the version number in all the web pages anyway?
If somebody is using offline documentation of your source code, are they really going to be using some old web browser that can't handle query strings? I'd say that the difference is worth the hassle of testing relatively new browsers, since that only takes a few minutes and gains you a build system that never has to rename stylesheets, only insert random numbers into templates.
Brendan wrote:Of course it's a micro-kernel, so it never sees/cares about all the little structures used by drivers, file systems, networking, etc (which is probably where the Linux kernel uses the majority of its special purpose memory allocators).
Thus, libraries that can be linked into your drivers and other servers would make sense. Having shared, generic code for data structures reduces duplication and makes it easier to pick a good algorithm because there's no barrier to using something you've already implemented.
Brendan wrote:The physical memory manager is the only thing in the kernel that uses bitmaps (and it only uses one). I don't use binary trees, radix trees, flex arrays, binary search or array sorting in the kernel. If you can't handle linked lists without macros or library functions then you should be shot (but I've got better things to do than using "grep list_entry *" to figure out what code actually does, so please don't use macros either).
Putting linked list code in library functions is better, because C's macro system sucks and the compiler knows how to inline functions anyway. Putting linked list code in a common place is better, because it eliminates any possibility of subtle bugs in code that isn't the point of what you're doing.
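As an illustration of that (not any particular kernel's list.h), a header of small static inline helpers gives the compiler everything it needs to inline, stays greppable, and gets type-checked:
Code:
/* Intrusive doubly-linked list operations as static inline functions in a
 * shared header. The compiler inlines these just as well as a macro would.
 */
#include <stddef.h>

struct list_node {
    struct list_node *prev, *next;
};

static inline void list_init(struct list_node *head)
{
    head->prev = head;
    head->next = head;
}

static inline void list_insert_after(struct list_node *pos,
                                     struct list_node *node)
{
    node->prev = pos;
    node->next = pos->next;
    pos->next->prev = node;
    pos->next = node;
}

static inline void list_remove(struct list_node *node)
{
    node->prev->next = node->next;
    node->next->prev = node->prev;
    node->prev = node->next = node;
}
The one piece that genuinely benefits from a macro is recovering the enclosing structure from an embedded node (the container_of/list_entry trick); everything else can be a plain function.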
Re: Brendan's Build Utility (was Workflow questions)
Posted: Thu Dec 01, 2011 10:09 am
by rdos
cxzuk wrote:It's important to remember we have a design stage and an implementation stage.
There is? Not for one-man projects.
cxzuk wrote:Version control is only useful to keep a record of -design- changes.
Why? Do you mean that you document your design? My design is only documented in my brain. It is a little hard to do version control on a brain.
cxzuk wrote:There is no point storing changes from the implementation, as the changes will only represent 'blunders' from design to implementation.
It is these "blunders" that version control is so good at resolving. Say you broke something 150 versions ago but didn't do a proper test until now; now you find out something is broken but have no idea what (your system just reboots without any meaningful logs). How do you resolve this with either Brendan's method (single backup) or with your method (only design changes are saved)?
Re: Brendan's Build Utility (was Workflow questions)
Posted: Thu Dec 01, 2011 10:23 am
by Solar
rdos wrote:Why? Do you mean that you document your design? My design is only documented in my brain.
OK guys, where's the camera?
Re: Brendan's Build Utility (was Workflow questions)
Posted: Thu Dec 01, 2011 4:44 pm
by cxzuk
rdos wrote:cxzuk wrote:It's important to remember we have a design stage and an implementation stage.
There is? Not for one-man projects.
You say that you have the design in your head. There must still be a stage where it was created (or thought up).
rdos wrote:cxzuk wrote:Version control is only useful to keep a record of -design- changes.
Why? Do you mean that you document your design? My design is only documented in my brain. It is a little hard to do version control on a brain.
cxzuk wrote:There is no point storing changes from the implementation, as the changes will only represent 'blunders' from design to implementation.
It is these "blunders" that version control is so good at resolving. Say you broke something 150 versions ago but didn't do a proper test until now; now you find out something is broken but have no idea what (your system just reboots without any meaningful logs). How do you resolve this with either Brendan's method (single backup) or with your method (only design changes are saved)?
I'm going to guess you use C for your kernel. Your C files represent your design, along with information stored mentally to fill the gaps. The compiler is the implementer; it is very rare for the compiler to blunder when turning your C files (design) into the implementation (binary).
If you design your project on paper and then implement it manually (asm is isomorphic(ish), a human-friendly form of the binary), you could blunder.
So what would a wiki be for? Some people use a language other than C/C++ (etc.) because they cannot record all the information they want clearly in those languages. There is no reason why you could not turn the design stored in your wiki, in your selected vocabulary, straight into machine code. Turning it into an intermediate language and handing it to a compiler is a perceived convenience step.
Re: Brendan's Build Utility (was Workflow questions)
Posted: Thu Dec 01, 2011 4:53 pm
by Kevin
I must have missed something... Is this thread an attempt at a new record in the category "most WTFs/minute"?
Re: Brendan's Build Utility (was Workflow questions)
Posted: Thu Dec 01, 2011 8:21 pm
by Rusky
It seems to have become that, yes.
Re: Brendan's Build Utility (was Workflow questions)
Posted: Fri Dec 02, 2011 7:02 am
by rdos
cxzuk wrote:You say that you have the design in your head. There must still be a stage where it was created (or thought up).
Yes. I thought it up in my head, not on paper. I have the general picture about how it works in my head, and I know exactly where to look for different things.
cxzuk wrote:I'm going to guess you use C for your kernel. Your C files represent your design, along with information stored mentally to fill the gaps. The compiler is the implementer; it is very rare for the compiler to blunder when turning your C files (design) into the implementation (binary).
No. I use assembly almost exclusively. The only part that is in C is the ACPICA device-driver that I've ported.
cxzuk wrote:If you design your project on paper and then implement it manually (asm is isomorphic(ish), a human-friendly form of the binary), you could blunder.
No, I think up the design in my head, and then I write the code based on the general functional description I have in my head.
cxzuk wrote:So what would a wiki be for? Some people use a language other than C/C++ (etc.) because they cannot record all the information they want clearly in those languages. There is no reason why you could not turn the design stored in your wiki, in your selected vocabulary, straight into machine code. Turning it into an intermediate language and handing it to a compiler is a perceived convenience step.
I would only write things on paper, or on a wiki, in order to document them for others. As long as it is a one-man project, there is no need for written documentation.