iansjack wrote:The idea that dynamic libraries lead to a larger memory footprint is plainly ridiculous.
If you have a library that is used by only one process in the system, then the shared library
does eat up more memory, due to PIC overhead, PLT, GOT, and the fact that the linker can't throw the unused object files away (like it can with static linking).
If the library is used by multiple processes running different programs, then dynamic libraries can be a net memory benefit,
provided the amount of shared memory exceeds the overhead. Which, for small libraries, isn't a given.
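To make the "throwing unused object files away" point concrete, here's a minimal sketch (file and function names are made up for the example). A static library is just an archive of object files, and the linker only pulls in the members that resolve a symbol the program actually references:

```c
/* Build (roughly): cc -c rarely.c often.c
 *                  ar rcs libdemo.a rarely.o often.o
 *                  cc -o demo main.c libdemo.a */

/* rarely.c -- ends up in libdemo.a, but nothing below references it */
int rarely_used(void) { return 42; }

/* often.c -- the one function main() actually calls */
int often_used(void) { return 1; }

/* main.c */
int often_used(void);
int main(void) { return often_used(); }
```

The resulting static binary contains often.o but not rarely.o. Turn libdemo.a into a libdemo.so instead, and the whole library, rarely_used() included, gets mapped into every process that links against it.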
iansjack wrote:Just imagine if every program that used the standard C library were statically linked.
You mean "imagine
http://sta.li? Anyway, while the standard C library is an oft-used one, next to no-one uses all of it. In fact, there are parts of it almost nobody uses. But with dynamic linking, you get to load functions like nftw() and hsearch() into memory regardless.
Speaking of the C library, musl often uses weak-linking tricks to keep the footprint of statically linked programs down (e.g. you don't get the function that runs all the atexit() handlers unless you actually use atexit()). Well, with dynamic linking, those tricks don't work. You always get the atexit() machinery with it.
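The trick looks roughly like this; a minimal two-file sketch with illustrative names (musl's actual symbols differ):

```c
/* exit.c -- always linked into the program */
_Noreturn void _Exit(int status);   /* raw process-exit syscall wrapper */

static void dummy(void) {}

/* Weak alias: __run_atexit_handlers resolves to the empty dummy() unless
 * some other object file in the link supplies a strong definition. */
extern void __run_atexit_handlers(void)
    __attribute__((weak, alias("dummy")));

_Noreturn void exit(int status)
{
    __run_atexit_handlers();   /* no-op unless atexit.o got linked in */
    _Exit(status);
}

/* atexit.c -- a separate translation unit, only pulled out of libc.a if
 * the program calls atexit(), because atexit() lives in the same object
 * file as this strong definition: */
void __run_atexit_handlers(void)
{
    /* walk the list of registered handlers and call each one ... */
}
```

With static linking, atexit.o only lands in the binary if the program references atexit(). In libc.so, every object file is already merged together, so the strong definition, and all the machinery behind it, is always there.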
bzt wrote:You still don't understand how dynamic libraries work and why they are used.
I'm pretty sure I know how dynamic libraries work in more detail, and on more architectures, than most people on this forum. As to why they are used: because everyone else is doing it. Because it's the hip new thing, at least compared to static linking.
No, the idea sounds alluring: you get to save memory on code by loading reused library functions only once. It sounds so nice. Unfortunately, it isn't quite so simple, for all the reasons already laid out. Yes, you only load the shared functions once, but you also get to load all the other functions in the library once, and you get to do it with larger, position-independent code and with the structures (PLT, GOT) needed to make the whole thing work.
bzt wrote:Btw, advantages and disadvantages are not a matter of subjective opinion. It's a matter of measurements, and comparison of objective test results.
As soon as multiple dimensions get involved, you get to arbitrarily decide which of them is more important, and to precisely what degree. Don't tell me you've never heard of a tradeoff before. If I use the lookup-table version of the CRC32 algorithm, I spend 1 kB of additional memory but speed the algorithm up eightfold. Dynamic libraries are another space-time tradeoff. Only in this case, the tradeoff is not so clear: there are both negative and positive aspects to dynamic linking in both the space and the time domain, and whether the positive or the negative side wins out is a matter of the precise test case.
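For the curious, that particular tradeoff looks like this in code, a standard reflected CRC-32 (polynomial 0xEDB88320; the function names are mine):

```c
#include <stddef.h>
#include <stdint.h>

/* Bitwise variant: tiny, but does eight shift/XOR steps per input byte. */
uint32_t crc32_bitwise(const uint8_t *p, size_t n)
{
    uint32_t crc = 0xFFFFFFFFu;
    while (n--) {
        crc ^= *p++;
        for (int i = 0; i < 8; i++)
            crc = (crc >> 1) ^ (0xEDB88320u & -(crc & 1));
    }
    return ~crc;
}

/* Table-driven variant: spends 1 kB (256 entries * 4 bytes) on a lookup
 * table so the inner loop handles a whole byte per iteration. */
static uint32_t crc_table[256];

void crc32_init(void)
{
    for (uint32_t b = 0; b < 256; b++) {
        uint32_t crc = b;
        for (int i = 0; i < 8; i++)
            crc = (crc >> 1) ^ (0xEDB88320u & -(crc & 1));
        crc_table[b] = crc;
    }
}

uint32_t crc32_table(const uint8_t *p, size_t n)
{
    uint32_t crc = 0xFFFFFFFFu;
    while (n--)
        crc = (crc >> 8) ^ crc_table[(crc ^ *p++) & 0xFF];
    return ~crc;
}
```

Both produce identical results; the table version just trades 1 kB of memory for processing a byte per step instead of a bit.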
Measurements are complicated because they depend on so many environmental factors. Are the directories that are searched in the page cache? Are the libraries and executables? How many paths are there in LD_LIBRARY_PATH? How fast is the hard drive in question? And how fast is the memory? What system call mechanism is in use? How fast is mmap()? How fast is the page fault handler? How much of the library does your test program use?
All of these factors go a long way toward determining which version is faster.
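As an illustration of how slippery this is, here's a minimal harness (POSIX; my own trivial example) that measures just one of those factors, mmap() plus the page faults from touching the mapping. Run it a few times and watch the numbers move:

```c
#include <stdio.h>
#include <sys/mman.h>
#include <time.h>

int main(void)
{
    const size_t len = 64u << 20;   /* 64 MiB */
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    unsigned char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return 1;
    for (size_t i = 0; i < len; i += 4096)
        p[i] = 1;                   /* fault in one page at a time */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
                (t1.tv_nsec - t0.tv_nsec) / 1e6;
    printf("mmap + %zu page faults: %.3f ms\n", len / 4096, ms);
    return 0;
}
```

Now imagine controlling for a dozen such variables at once before declaring one linking strategy "faster".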
But since a verdict on which is better is so contingent on the environment, no blanket statement like "static libs are better" or "shared libs are better" can be correct. It can only be "shared libs are better in this case, on that machine, under these conditions". But most people don't want that, because then they'd have to re-evaluate the decision every single time they make it. Which is hard, so nobody does it. So blanket rules it is, and since dynamic linking is the hot new thing, that's what everyone goes with.
bzt wrote:If what you're saying were true, nobody would have implemented DLLs or SOs.
bzt wrote:But here's the news, they have (check out Win, Linux, MacOSX, BSDs, literally ANY mainstream OS, and BeOS, Haiku, Plan9, Ultrix, VMS, Solaris, ReactOS, just to name a few non-mainstream, and all successful hobby OSes if you don't believe me).
Argumentum ad populum (also known as the "million flies" argument, which you can google at your discretion). By that logic, we should all use Windows (just look at how many people use that). Also, ELF shared libraries are light on the kernel: it merely has to support PT_INTERP, and the rest happens in user space.
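To put that in concrete terms, the kernel-side work amounts to little more than this (a hedged sketch for 64-bit ELF; a real loader validates everything before trusting it):

```c
#include <elf.h>      /* Elf64_Ehdr, Elf64_Phdr, PT_INTERP */
#include <stddef.h>

/* Given an ELF image already read into memory, return the interpreter
 * path it requests (e.g. "/lib/ld-linux-x86-64.so.2"), or NULL for a
 * statically linked binary. */
const char *find_interp(const unsigned char *image)
{
    const Elf64_Ehdr *eh = (const Elf64_Ehdr *)image;
    const Elf64_Phdr *ph = (const Elf64_Phdr *)(image + eh->e_phoff);

    for (int i = 0; i < eh->e_phnum; i++)
        if (ph[i].p_type == PT_INTERP)
            return (const char *)(image + ph[i].p_offset);

    return NULL;
}
```

The kernel maps that interpreter and jumps to its entry point; relocations, symbol lookup, and mmap()ing the actual libraries all happen in user space from there.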
Interesting that you should list Plan 9 there, because the people behind it have said in the past that they only included it because people expect it these days. Here's a more thorough write-up of the position:
http://harmful.cat-v.org/software/dynamic-linking/
Interesting too that you should list Windows there. Ever since Windows Vista, there has been a directory in the Windows directory called "winsxs". Look at it. Take it all in. Gigabytes upon gigabytes of tiny little DLLs, most of them used by only a single program in the system. Open up the list of installed programs and have a gander at all the versions of "Microsoft Visual C++ Redistributable" you have installed.
While many of these judgments depend on the environment, a shared lib used by only a single program is always a loss, in both the space and the time departments. And of late, Windows goes out of its way to ensure that most DLLs are used by only a single program.
bzt wrote:Now after this little intermezzo, I hope we can go back to the original topic.
Yes, we should. This was my last reply on the matter.