Single source file
Re: Single source file
Ok, here's a list of cons:
1) Takes longer to compile
2) Obviously not preferable for closed source software.
3) Also not good for open source software, as only one person can edit at a time.
4) Requires a code folding IDE
5) Not nearly as portable
And pros:
1) Code files have same name as executable.
Am I missing something?
PS, I consider that pro to be a con.
Programming is 80% Math, 20% Grammar, and 10% Creativity <--- Do not make fun of my joke!
If you're new, check this out.
Re: Single source file
m12 wrote: Takes longer to compile
If making modifications and recompiling, yes. Otherwise, no.
m12 wrote: Obviously not preferable for closed source software.
What difference does it make whether it is closed source or not? Sharing the work is a problem anyway. That is a drawback unless the IDE or revision control system is designed for this.
m12 wrote: Also not good for open source software, as only one person can edit at a time.
Same as above.
m12 wrote: Requires a code folding IDE
It does not require one. However, it is really helpful to have one.
m12 wrote: Not nearly as portable
The idea is much more portable. It just cannot be implemented easily with current conventions.
m12 wrote: Code files have same name as executable.
The name has nothing to do with this.
Re: Single source file
So then, there's no point to this?
Re: Single source file
I am sure that there are even more advantages than I have mentioned so far. Do you think there is no point at all? I would be quite surprised if you all really thought so. However, I do not believe that you do.
iansjack wrote: I am very much against the idea of source files that require a particular IDE
I agree. All the things that are needed to help an IDE build a reasonable view of the program should be standardized, like the programming language itself. Then it would not be necessary to use a particular IDE. Of course, we can edit standards-conforming file formats with many tools. In this case, the file should also be editable with a plain text editor.
Re: Single source file
The function, method, class, etc. meta-information header could be something like what is already in use with documentation generators such as Doxygen. This time the idea would just be taken a step further.
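For example, something roughly like this (only a minimal sketch; the function, parameter and tag names are made up for illustration and are not a proposal for an exact format):
Code:
/**
 * @brief   Read one block from the virtual disk.
 * @param   disk    Handle returned by a (hypothetical) disk_open().
 * @param   lba     Logical block address to read.
 * @param   buffer  Destination buffer, at least BLOCK_SIZE bytes.
 * @return  0 on success, a negative error code on failure.
 */
int disk_read_block(struct disk *disk, unsigned long lba, void *buffer);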
Re: Single source file
IMO, a single source file may be one form of distribution, but for development purposes you should split the code into multiple files.
SQLite, for example: I found it convenient to just drag one .cpp and one .h into my project and, blah, it works; but that is from a user's perspective. I'm sure the developers of SQLite use multiple files.
Re: Single source file
There's nothing to stop you from developing your project in multiple files and combining them all into one file for distribution, instead of compiling it. But you must take care to write headers for every single bit of the program, or writing the combiner will be a hard time if it has to generate every forward declaration that's needed.
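For instance, the combiner's output might look roughly like this (a hypothetical sketch with made-up file and function names); the generated forward declarations at the top are what make the concatenation order irrelevant:
Code:
/* combined.c - hypothetical output of such a combiner tool */
#include <stdio.h>

/* generated forward declarations */
static void print_usage(void);
static int parse_args(int argc, char **argv);

/* originally in usage.c */
static void print_usage(void)
{
    puts("usage: demo [options]");
}

/* originally in args.c */
static int parse_args(int argc, char **argv)
{
    (void)argv;
    if (argc < 2) {
        print_usage();
        return -1;
    }
    return 0;
}

/* originally in main.c */
int main(int argc, char **argv)
{
    return parse_args(argc, argv) == 0 ? 0 : 1;
}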
But there are times when you'll have to write more than one source file, like when you combine different (programming) languages in one program. Or take an example we (might) all know: an operating system. Would you write your userspace applications in the same file where you wrote your kernel and bootsector? I'm not sure how easy that code would be to maintain (or even build)...
Re: Single source file
Thank you, bluemoon, for mentioning SQLite. I just checked its source code and it really is, as you thought, a combination of several source code files. The source code comment mentions the same advantage regarding optimizations.
sqlite3.c wrote: This file is an amalgamation of many separate C source files from SQLite version 3.7.17. By combining all the individual C code files into this single large file, the entire code can be compiled as a single translation unit. This allows many compilers to do optimizations that would not be possible if the files were compiled separately. Performance improvements of 5% or more are commonly seen when SQLite is compiled as a single translation unit.
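As a side note, a quick way to get the same single-translation-unit effect without merging anything by hand is a small wrapper file that simply #includes the individual .c files. This is only a sketch with made-up file names; SQLite's real amalgamation is generated by a script as an actual concatenation:
Code:
/* whole_program.c - hypothetical "unity build" wrapper.
 * Compiling only this file lets the compiler see the whole program
 * as one translation unit, e.g.:  cc -O2 -o demo whole_program.c */
#include "args.c"
#include "usage.c"
#include "main.c"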
Re: Single source file
Hi,
Antti wrote: What do you think?
I think it's a little similar to what I'm doing. The main problem is that existing tools aren't designed for it; and you really do need different tools designed to do it well (including IDEs/editors, compilers and languages).
Let's look at some of the disadvantages people have mentioned (and some I made up myself)...
"It's harder to find something". True, but only for crappy "plain text" development environments and crappy languages that allow things to be spread everywhere. Antti already suggested an IDE with a "tree-like view of the program" to solve the first problem. The second problem is actually caused by using multiple files in languages like C, where you have to use headers to make it work (and end up with some things in the code and some things elsewhere in headers). A different programming language can avoid this problem (e.g. it's not a problem for languages like Java).
"It enables several developers to work on the same code base at the same time". True, but only for crappy "plain text" development environments. With tools designed for it, nothing prevents you from having something like (e.g.) multiple "front end" clients that all talk to a common "back end" service. This approach may actually be far superior; as it allows things for things like "remote peer programming" (e.g. 2 programmers in different countries who are chatting via. something like TeamSpeak and working on the same code together); while also allowing lone developers to "lock" pieces of source code while they work on it alone.
"It complicates version control". True, but only for crappy "plain text" version control systems. With tools designed for it (e.g. multiple "front end" clients that all talk to a common "back end" service) there's no reason the "back end" service can't store previous versions.
"It makes it hard/impossible to build different parts of the code base with different compiler and/or linker options". True (at least in some cases), but is this really a good thing? I'd suggest it's better to figure out which problems developers are trying to avoid by using different compilers and/or different linker options and fix those problems instead.
"It can make build times worse". To be honest; I'm not sure that this is true and I suspect it depends on which language, which optimisations and which linker. For example; as soon as you attempt link time optimisation this argument falls apart. In any case, for end-users (e.g. everything needs to be compiled when the executable is installed) a single source file is faster.
"It makes it hard/impossible to reuse code". True, sort of, but it's more complicated than that. The problem isn't code reuse (almost everything supports "copy and paste") but maintaining that reused code afterwards. For example, if a piece of code is used by 100 different/separate projects and you find a bug in it, then you'd want to be able to fix the bug at some central point rather than having to track down the 100 different/separate projects where it's been reused. However; the opposite is also true - if someone creates a new bug in a piece of reused code (possibly including deliberately inserting malicious code), then you don't want that bug to be automatically propagated into 100 different/separate projects where the code has been reused. What you really want is a way for the developers responsible for each of those 100 different/separate projects to be automatically notified when an update to the reused code is available, and to allow those developers to check the changes and decide whether to accept the updated reused code or to continue using the existing/older version. In both cases (single file source and multi-file source) you need something more than what existing tools provide.
"Not having/needing build tools (like "autoconf") makes it hard to avoid platform-specific hacks". True, but it's far better to avoid the need for platform-specific hacks. This can be done by having standards designed by someone slightly more competent than a committee of schizophrenic monkeys that have been taking large quantities of LSD. Sadly, this means avoiding a lot of existing tools and languages (those monkeys have been very very busy).
"Not having separate files destroys the concept of "file scope" in some languages and can make code maintenance harder". An example of this is static functions and static global data in C. The simple solution is to use a language that doesn't rely on "file scope"; or to modify the language to suit (e.g. "subsection of file scope"); and the end result of this is that normal compilers (e.g. GCC) won't be adequate.
Mostly what I'm saying is that existing tools are the only real problem with the single-source file idea.
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re: Single source file
AbstractYouShudNow wrote: Would you write your userspace applications in the same file where you wrote your kernel and bootsector?
No. It is clear that these two things should be their own units. They are like different modules. Actually, I would not write my kernel and bootsector in the same file either. As a bad example (again): I would not like to have a word processor and an Internet browser written in the same file.
Re: Single source file
Brendan wrote: In any case, for end-users (e.g. everything needs to be compiled when the executable is installed) a single source file is faster.
Yes. It is quite funny, because one might think it would be easier to make use of parallel computing when compiling many independent translation units. However, it looks like the overhead of all the multiple object-code mess makes it slower than simply compiling one big translation unit. I am not saying that parallel computing cannot be used there as well.
One totally different thing came to my mind just a minute ago: license issues. What if we are using an open-source license (GPL/BSD/etc.) and the published source code format differs quite significantly from the original developers' own "work copy"? I am not a license expert. Does any license say anything about this? It is probably acceptable to combine the "work-copy sources" into one single file and publish that as the source code of some particular program. What if we take this a step further?
For example, say I have a collection of C code files, I combine them together, and I compile them to one assembly file. Then I publish my program under the GPL license and offer my assembly file as the "source". It is clear that this is not exactly what the end user wants from GPL programs. How far can we go with this?
Re: Single source file
Antti wrote: What if we are using an open-source license (GPL/BSD/etc.) and the published source code format differs quite significantly from the original developers' own "work copy"? I am not a license expert.
The GPL allows you to modify the original source; the resulting single source file would be your derivative work.
However, you must also state clearly what part is your derivative work (e.g. the efforts/changes needed so that the code builds; it's not likely that you just do cat a+b+c > combined.c and it builds), and you must not claim anything on the original work.
Furthermore, you need to provide a notice and/or a method and/or links for anyone to get the original work.
Well, I'm not a lawyer, so I may be wrong.
Re: Single source file
Brendan wrote: Mostly what I'm saying is that existing tools are the only real problem with the single-source file idea.
I guess this is true to a certain extent: what I really care about is how I can access and work with the source code, not how it's stored on disk. So in a different world, working with single-file projects may actually not be too bad.
But, Antti, in practice, with the tools that are commonly used, I don't think your approach makes any sense. Applying it to C only makes it worse. You lose important features (ever heard of modularity, encapsulation and things like that?) and you gain nothing. Well, almost nothing, because yes, it might be more obvious to a random user that this one source file is one program. But you're optimising for the wrong audience: they aren't interested in source files anyway. It's the developers who need to work with the source files, and their needs will most definitely be hurt.
Besides, I agree that cc -o foo foo.c is pretty simple. But cc -o foo *.c isn't much more complicated, so this isn't about single-file at all. Build systems are used for different reasons (it starts with configurable modules/dependencies, includes compiling things no more often than necessary, and doesn't end with cross-compiling). Most programs are considerably more complex than "Hello world". Treating everything like "Hello world" on your OS will lead to having only programs that have marginally more functionality than "Hello world".
Re: Single source file
Kevin wrote: It's the developers who need to work with the source files, and their needs will most definitely be hurt.
It is quite common to work with source files without even looking at their actual content, for example if I want to use a program but there is no binary executable available and I am not interested in developing that particular program. It would be more attractive to avoid binary distribution in general if building were easier and the "look and feel" of the source code (at the "file level") were pleasing for "random users" as well. After all, I would say the source code is not only for developers.
Kevin wrote: Build systems are used for different reasons
The biggest individual reason is just the handling of multiple source files. A flame warning...
Kevin wrote: Treating everything like "Hello world" on your OS
Hello Worlds are elegant. Things usually go directly downhill when going further. I try to see if there are any solutions to avoid that.
Re: Single source file
Antti wrote: It would be more attractive to avoid binary distribution in general if building were easier and the "look and feel" of the source code (at the "file level") were pleasing for "random users" as well. After all, I would say the source code is not only for developers.
Yes, I know this situation. In that case I usually ignore the src/ directory and just do ./configure; make; make install in the top-level directory. I don't care how the source code is organised internally.
Of course, I don't do this a lot. Whenever I can, I use binary packages, because I simply don't feel like wasting my time waiting for the compiler. As soon as your project is a bit more complex than Hello World, compile time matters. It really does. If you don't believe me, try to quickly install a Linux From Scratch with a desktop and everything the average distro gives you. Let's talk about the progress you made next week. (For comparison, with a binary distro that's half an hour. If you're lucky, you'll have built the kernel for your LFS at that point.)
Antti wrote: The biggest individual reason is just the handling of multiple source files. A flame warning...
I doubt that. People usually start with shell scripts/batch files for handling multiple source files. And then, at some point, they notice that Makefiles can do a bit more.
Antti wrote: Hello Worlds are elegant. Things usually go directly downhill when going further. I try to see if there are any solutions to avoid that.
Oh, yes, I absolutely agree. Hello Worlds are beautiful, easy to read, quick to compile, hard to get the design wrong, super portable - and completely useless.