
Linux installation project

Posted: Sun Dec 03, 2006 7:47 pm
by B.E
I have an idea of a project, the idea is this

The problem with Linux is that there is no formal installation program. This means that each distro's maintainers have to maintain their own package database.

The project will allow applications to be installed using a graphical interface, or via the console if X is not running. The project will have no external dependencies (and can thus run on any existing UNIX system), nor will the application have to be compiled at the destination.

The project will consist of two sub-programs. One will be installed on the developer's machine and will let the developer configure the list of dependencies (because the application being installed may need external dependencies) and the list of files to install. The other sub-program will be "shipped" with the product.
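As a sketch of what the developer-side sub-program might emit, here is a hypothetical manifest format and the kind of check the console-mode installer could run against it. All file and field names here are invented for illustration; nothing in the proposal specifies them:

```shell
#!/bin/sh
# Hypothetical manifest the developer-side tool might generate.
# Format and field names are invented for illustration only.
cat > manifest.txt <<'EOF'
name=myapp
version=1.0
depends=libpng zlib
file=bin/myapp:/usr/local/bin/myapp
EOF

# The target-side sub-program could read the dependency list like this:
deps=$(sed -n 's/^depends=//p' manifest.txt)
echo "required libraries: $deps"
```

A real tool would then have to check each entry against whatever the target system considers "installed", which is exactly where the distro-specific trouble discussed below begins.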

Any thoughts on such a project?

Posted: Mon Dec 04, 2006 3:53 am
by Solar
Problem 1: "The project will not have any external dependencies."

Only possible if your only output is done with printf() and your only input is done with scanf(). Everything else will drag in dependencies, and then you are in the midst of the endless joys that are autoconf.

Problem 2: Virtually every distribution has its own - sometimes partial, sometimes already quite good - solution to the problem: YaST, APT, RPM, Portage, Ports... Your "unified" installation program would simply add one more "solution" to the fray, and don't expect people to flock to a system that feels alien to everyone and familiar to no-one.

Problem 3: Due to the various already-existing "solutions", dependency tracking is done vastly differently from machine to machine. On a SuSE system, you'll have to query YaST. On a Debian, you'll have to interface with APT. On a Gentoo, you'll have to speak to Portage...

Problem 4: There are two "brands" of Linux out there, binary-based and source-based. The only (!) way to ship a Linux package in a way that is compatible with every other Linux out there is in source form, which is an instant show-stopper for many a binary-based distro user. They don't want to wait for their software to compile. You can use a binary shipping format, but that means you cripple the configurability available to source-based distros, alienate the more security-aware types who want stuff compiled from source while they look at it, and break compatibility with some of the trickier binary distros (libc with/without NPTL - can anyone remember that disaster?)...

All in all: Nice idea, but it won't work. Not for Linux, at least.

Posted: Mon Dec 04, 2006 5:58 am
by spix
Solar wrote:Problem 1: "The project will not have any external dependencies."

Only possible if your only output is done with printf() and your only input is done with scanf(). Everything else will drag in dependencies, and then you are in the midst of the endless joys that are autoconf.
You could statically link everything. That's what commercial vendors do when they want their application to run on every version of Linux.
Solar wrote:Problem 2: Virtually every distribution has its own - sometimes partial, sometimes already quite good - solution to the problem: YaST, APT, RPM, Portage, Ports... Your "unified" installation program would simply add one more "solution" to the fray, and don't expect people to flock to a system that feels alien to everyone and familiar to no-one.
I agree with this. There have been a number of projects that try to unify things - klik comes to mind, then there is autopackage, and I'm sure there are others.
Solar wrote:Problem 3: Due to the various already-existing "solutions", dependency tracking is done vastly differently from machine to machine. On a SuSE system, you'll have to query YaST. On a Debian, you'll have to interface with APT. On a Gentoo, you'll have to speak to Portage...
No, that would only be necessary if you want to integrate with the existing package manager. I believe the OP is talking about an InstallShield-type application, which is not related to package management. They are different concepts.

A good example, I think, of how these two concepts might exist together is Fink on top of Mac OS X.
Solar wrote:Problem 4: There are two "brands" of Linux out there, binary-based and source-based. The only (!) way to ship a Linux package in a way that is compatible with every other Linux out there is in source form,
Or statically link it. If you download the Opera static build, it will run on any recent distro with X installed. It is when you want to run on multiple platforms - i.e. FreeBSD, OpenBSD, Linux on PowerPC and so on - that you must distribute in source form.
Solar wrote:which is an instant show-stopper for many a binary-based distro user. They don't want to wait for their software to compile. You can use a binary shipping format, but that means you cripple the configurability available to source-based distros,
Source-based distros (with the exception of LFS) have no real extra configurability compared to a binary meta-distribution such as Debian. You are still stuck with your package manager (for example Portage). And if a package is distributed in source or binary form but isn't in the package manager, the install steps required will be the same either way.

Source distros are overrated. If you want configurability and have time to waste, build an LFS box; if you don't, get Slackware.
Solar wrote:will alienate the more security-aware types who want stuff compiled from source while they look at it, and you will break compatibility with some of the trickier binary distros (libc with/without NPTL - can anyone remember that disaster?)...
People who want to read the source code of a program before they install it are the exception to the rule. If you don't have the time, you either get a binary package signed by someone you trust, or a source recipe (a la emerge) signed by someone you trust. The method of install doesn't matter - only whether you trust the vendor.

Having said all that, UNIX has traditionally used package managers; however, there are a number of applications that use installers - Unreal Tournament is the first one I can think of.

An installer makes sense if your application is outside the operating system. For example, Microsoft gives you Windows, Norton gives you antivirus. In the Linux bowl of spaghetti, distribution vendors try to be the jack of all trades (master of none) and control every piece of software on your system.

An installer like InstallShield would probably be most useful for people who want to distribute commercial (not open-source) software.

Sorry to ramble, I hope my thoughts made sense

Andrew

Posted: Mon Dec 04, 2006 6:46 am
by Solar
spix wrote:You could statically link everything.
Yep. Increase storage and memory requirements by a sizeable factor, and remove the ability to fix a problem in some library by fixing the library.

If your "solution" is to statically link everything, you don't need an installer - you just unpack-and-run-from-subdirectory.
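Solar's "unpack-and-run-from-subdirectory" amounts to nothing more than a tarball whose contents never leave their own directory - roughly like this (app and file names invented; a stub script stands in for the statically linked binary):

```shell
# Vendor side: put the whole app, statically linked, under one directory.
mkdir -p myapp-1.0/bin
printf '#!/bin/sh\necho running\n' > myapp-1.0/bin/myapp
chmod +x myapp-1.0/bin/myapp
tar czf myapp-1.0.tar.gz myapp-1.0

# User side: unpack anywhere and run in place; "uninstall" is rm -r.
tar xzf myapp-1.0.tar.gz
out=$(./myapp-1.0/bin/myapp)
echo "$out"
```

No installer logic is needed at all, which is Solar's point: static linking makes a dedicated installer redundant rather than necessary.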
spix wrote:I believe the OP is talking about an installshield type application, which is not related to package management. They are different concepts.
The package manager gives me a central interface to installing and uninstalling applications. If you don't talk to it, I cannot uninstall your app through the accustomed interface. Your app will waste space on my hard drive and in my main memory, thus eating performance. It will not integrate with applications already installed, or installed later on.
Problem 4: There are two "brands" of Linux out there, binary-based and source-based. The only (!) way to ship a Linux package in a way that is compatible with all other Linux out there is in source form,
spix wrote:Or statically link it.
Compatibility in code and spirit. The reason why people are using source-based distributions is because they do not want loads of unnecessary stuff included in their binaries, or having those binaries compiled for a smallest common denominator they do not want to support. My Mozilla is compiled with crypt, java, SVG, SSL, composer and xprint support, but without mailer, IRC, Gnome, LDAP, calendar and Postgres support. The compiler was set to generate code optimized for the Pentium-M family. I want it that way. That is why I chose the distro I use. Do you really think I will consider a statically linked Mozilla everything-and-your-cat behemoth because it is "easier" to install (which I doubt)?
spix wrote:It is when you want to run on multiple platforms, i.e. FreeBSD, OpenBSD, Linux on PowerPC and so on, that you must distribute in source form.
Just as one example, are you informed about the argument SuSE vs Schilling? Linux has turned into "multiple platforms" long ago...
spix wrote:Source based distros (with the exception of LFS) have no real extra configurability from a binary meta-distribution such as Debian.
Have you ever used a source distro? The above comment suggests not.
spix wrote:You are still stuck with your package manager (for example portage). And if a package is distributed in source form or binary form, it's not in the package manager so the install steps required will be the same binary or source distribution.
What the **** are you talking about?
spix wrote:Source distros are overrated. If you want configurability and have time to waste build an LFS box, if you don't get Slackware.
Ahhhh... so you not only challenge my ability (and that of a sizeable part of the Linux community) to choose the distro that suits best, you also turn B.E's suggestion of a generic installer into one of an installer that works for every distro except those you consider "overrated"?
spix wrote:An installer makes sense if your application is outside the operating system.
One of the characteristics of Linux is that there is no fixed boundary between "the OS" and "its applications". I cannot think of many non-game apps that would qualify as "outside the operating system" in my book. So B.E's generic installer is now an installer-for-the-distros-I-mean-and-the-apps-I-mean...

Do you really think that will ever hold a candle against installing those apps you mean using the system package manager? Especially since those monolithic software suites you are referring to are usually well-supported (because they are important) anyway?
spix wrote:An installer like installshield would probably be mostly useful for people who want to distribute commercial (not opensource) software.
Name three commercial software titles that were a success on Linux.
spix wrote:Sorry to ramble, I hope my thoughts made sense

I see where you are coming from, but if you want to serve a community with a swiss army knife, you'd better not tell them that everyone using the scissors is a jerk...

Posted: Mon Dec 04, 2006 7:47 am
by spix
I seem to have offended you. Sorry.
Solar wrote:Yep. Increase storage and memory requirements by a sizeable factor, and remove the ability to fix a problem in some library by fixing the library.

If your "solution" is to statically link everything, you don't need an installer - you just unpack-and-run-from-subdirectory.
What's wrong with Opera? It's not that big, and works well.
Solar wrote:The package manager gives me a central interface to installing and uninstalling applications. If you don't talk to it, I cannot uninstall your app through the accustomed interface. Your app will waste space on my hard drive and in my main memory, thus eating performance. It will not integrate with applications already installed, or installed later on.
That's right. So what? If this person wanted to write a package manager then that is what he would do. He's talking about an installer. I assume it would probably come with an uninstaller too.
Solar wrote:Compatibility in code and spirit. The reason why people are using source-based distributions is because they do not want loads of unnecessary stuff included in their binaries, or having those binaries compiled for a smallest common denominator they do not want to support. My Mozilla is compiled with crypt, java, SVG, SSL, composer and xprint support, but without mailer, IRC, Gnome, LDAP, calendar and Postgres support. The compiler was set to generate code optimized for the Pentium-M family. I want it that way. That is why I chose the distro I use. Do you really think I will consider a statically linked Mozilla everything-and-your-cat behemoth because it is "easier" to install (which I doubt)?
Yes. Sure. I am not saying the entire operating system would be installed via installers. Nor am I saying that you should install everything and your cat. Many binary distros provide different packages: vim with GTK, vim with GNOME, vim without X, and so on.
Solar wrote:Just as one example, are you informed about the argument SuSE vs Schilling? Linux has turned into "multiple platforms" long ago...
No, I am not. But are you telling me the underlying architecture of Linux distributions is not the same? Sure, they put files in different places and use different versions of software. The system call interface is the same, and programs are loaded the same way. The interface between the programs and the hardware is essentially the same.
Solar wrote:Have you ever used a source distro? Above comment suggests, no.
Yes, I used Gentoo for a couple of years. I liked it. Except that it took about a week to upgrade KDE on my hardware.
Solar wrote:What the **** are you talking about?
What I was trying to say was: Portage is just another package manager. It just approaches packages from another perspective. It uses recipes to build a package, installs that package, and records the information about that installed package in its database. If you want to use a version of a library that is not available in Portage, you face the same dependency problems a binary distribution faces when a library is not available in its repository, because the underlying principle is the same.
Solar wrote:Ahhhh... so you not only challenge my ability (and that of a sizeable part of the Linux community) to choose the distro that suits best, you also turn B.E's suggestion of a generic installer into one of an installer that works for every distro except those you consider "overrated"?
I'm sorry - when I said "you" I didn't mean you, Solar; I was using the word generally. I should have used "one" or "a person". I do think that the advantages of using a source-based distribution in terms of speed and efficiency are overrated. If you like compiling packages from source, then that's up to you. Whatever floats your boat.

I was not saying that this universal installer wouldn't work on a source based distribution. I'm not sure where you got that.
Solar wrote:One of the characteristics of Linux is that there is no fixed boundary between "the OS" and "its applications". I cannot think of many non-game apps that would qualify as "outside the operating system" in my book.
This is a perceived characteristic. A distribution supplies applications which are part of the operating system. You can then install applications that are not part of the operating system. If a person is going to refer to a distribution as an operating system, then I am saying the applications not supplied by the distribution are "outside" the operating system.
Solar wrote:Do you really think that will ever hold a candle against installing those apps you mean using the system package manager? Especially since those monolithic software suites you are referring to are usually well-supported (because they are important) anyway?
Yes I do. Sure, it would be nice to have a package for your distribution of choice, but if I am an upstart software company producing a commercial application for Linux, I don't want to have to make 2000 different packages, one for every entry on DistroWatch. A nice installer that would work across the board would be a perfect solution.
Solar wrote:Name three commercial software titles that were a success on Linux.
Oracle, Java, CrossOver Office, VMware, Win4Lin, Opera... need I go on?
Solar wrote:I see where you are coming from, but if you want to serve a community with a swiss army knife, you'd better not tell them that everyone using the scissors is a jerk...
I didn't call anyone a jerk.

Posted: Mon Dec 04, 2006 10:30 am
by Brendan
Hi,

What the world needs is a simple package manager for installing portable/free applications on Windows with a couple mouse clicks.... ;)

The good thing about source-based distributions is the flexibility, but the long compile times suck.

What I want is for someone with a large group of powerful servers to make their own "Gentoo based distribution", which is identical to Gentoo but supports "compile on demand" (with the most commonly requested combinations cached as pre-compiled binaries). That way users would get the same flexibility, much shorter compile and download times and they'd still have the option of compiling things themselves.


Cheers,

Brendan

Posted: Mon Dec 04, 2006 2:53 pm
by spix
Brendan wrote:The good thing about source-based distributions is the flexibility, but the long compile times suck.
I think the way BSD ports & pkgsrc work is the best way: supply binary packages, but keep the option to compile from source. I think Gentoo does this now with GRP packages?

Posted: Mon Dec 04, 2006 6:49 pm
by B.E
Solar wrote:Name three commercial software titles that were a success on Linux.
I can name two reasons why. First, the Linux community is so fragmented (with hundreds of distributions) that it'd be impossible to get the software running on all of them; second, because most of the Linux distributions are source-based and commercial companies don't like giving out their source code.

As for the idea, I think I need to leave it in the oven for a bit longer (it needs more time to develop). Anyway, I have a feeling this thread will turn into another Windows vs. Linux debate (or in this case Linux vs. Linux), which I didn't intend.

Posted: Tue Dec 05, 2006 1:51 am
by Solar
spix wrote:
Just as one example, are you informed about the argument SuSE vs Schilling? Linux has turned into "multiple platforms" long ago...
No I am not. But are you telling me the underlying architecture of Linux distributions are not the same?
Exactly.

SuSE added a couple of patches to the kernel that broke Schilling's apps, and he got quite fed up with "bug reports" that had nothing to do with his code, but with SuSE's patches.

Another example was RedHat, when NPTL was introduced. Older RedHat environments were compiled without NPTL support, but newer packages for the old environment actually required NPTL support.

Next example, init scripts. Debian does them Linux-style, using symlinks in /etc/rc.d. That directory doesn't even exist on a Gentoo box, because Gentoo does them BSD-style.

Configuration files... depending on distro, certain files are not to be edited manually, but are autogenerated using other input. Means, the file is there, you can edit it, but your edits are lost when "the system" does its thing. /etc/modules.conf vs. /etc/modules.d/, /etc/* vs. YaST...
spix wrote:The system call interface is the same...
...as long as we aren't in some paradigm shift as Linux undergoes every six months or so, with parts of the distros doing it the old way, parts doing it the new way and the rest being broken-in-transition...
spix wrote:The interface between the programs and the hardware is essentially the same.
Ahem... madwifi vs. wpa_supplicant, earlier this year...
spix wrote:I'm sorry, when I said "you" I didn't mean you solar...
I quite understood that. I didn't mean "me" either, but everyone who decided to use a source distro.
spix wrote:I do think that the advantages of using a source based distribution *in terms of speed and efficiency* is overrated.
Emphasis is mine.

You know what? Phrased like that, I totally agree. But earlier you said "Source distros are overrated", period. I chose Gentoo over all other distros because I was finally able to have only the stuff I really use on my hard drive, without unresolved compatibility issues among packages, and because only with Gentoo I have been able to go through major updates (of kernel, libc, gcc, kde) without having to reinstall from scratch. Gentoo was the first Linux distro that worked for me, after trying many a binary distro in many versions over about a dozen attempts scattered over about three years.

I don't think source distros are "overrated". I think their main purpose is probably miscommunicated. Efficiency and speed are a secondary benefit for me.
spix wrote:Whatever floats your boat.
We're getting closer here. The thing with "the installer" is, it doesn't float anyone's boat. There are precious few "commercial software suites" that would require a fancy unzipper to "float their boat" (they either have one already or do quite well with "untar-and-run"), and everything beyond those big suites would require system integration, which we seem to agree would be very hard if not impossible to do.
spix wrote:I was not saying that this universal installer wouldn't work on a source based distribution. I'm not sure where you got that.
It would work there, but it would be an alien to the system, and you'd end up with the source distro providing a "recipe" / ebuild for your software package anyway so that there's a way to properly integrate it. You could make it impossible for them to do so, so that they are forced to use your universal installer, but would that make them happy customers?
spix wrote:A distribution supplies applications which are part of the operating system. You then can install applications that are not part of the operating system.
I could send you a listing of /usr/portage/* on my system, and you go ahead telling me what is "part of the operating system" and what isn't...
spix wrote:If a person is going to refer to a distribution as an operating system, then I am saying the applications not supplied by the distribution are "outside" the operating system.
Gentoo / SuSE / RedHat are not operating systems, they are Linux distros. Source based distros don't "supply" anything except the recipes (ebuilds) to build something. Unless you define "OS" as "kernel", you can't really draw the line. Is the third-party kernel module I use for a WLAN driver part of the OS? Is X.org part of it? KDE? If you say "no", then Linux is a very, very poor "operating system"...
spix wrote:Sure, it would be nice to have a package for your distribution of choice, but if I am an upstart software company producing a commercial application for Linux, I don't want to have to make 2000 different packages for every entry on distrowatch. A nice installer that would work across the board would be a perfect solution.
You already got that. Link everything statically, and ship a tarball.

Honestly. I don't see where, between "static tarball" and "native package", you see room for added value.
Name three commercial software titles that were a success on Linux.
spix wrote:Oracle, Java, Crossover Office, Vmware, Win4Lin, Opera.. need I go on?
I doubt both the "commercial" and the "success" on some of the above, but OK.

Now cross-index those with the requirements "does not need system integration" and "would benefit from a generic installer as opposed to untar-and-run-from-subdirectory"...
spix wrote:I didn't call anyone a jerk.
You denied several demands and requirements I voiced as "not relevant". I'm not an evangelical Linux user - it simply "floats my boat", so to speak :wink: - but I am trying to warn you that an "eat or die" attitude has never failed to annoy most of the Linux community...

Posted: Tue Dec 05, 2006 12:31 pm
by Candy
Just a small comment:

The only time you should consider static linking is either when targeting a system that doesn't support dynamic linking (your own OS, at first, is a good example) or when ALL the code that will run on the target is in one executable. The first doesn't apply to your host development system (or any other common system, for that matter), and the second doesn't apply to anything but embedded systems - and even then only if they are small enough.

Posted: Tue Dec 05, 2006 11:03 pm
by B.E
I think I may have solved my pre-compilation problem (although I still need a compiler at the destination). The reason why I wanted this was that most people don't like giving out their source code.

When the installation program is made, the application is compiled into a static library format.
At the destination, the installation program creates an executable file out of the library.

There are a couple of problems I can see with this:

First - some compilers like to add a _ to all of their symbols, but this can be fixed by making a utility to convert them to the needed format.

Second - machine architectures may differ (e.g. x86 and x86-64), but this can be solved by the distributor supplying an installer for each architecture.

Third (kind of a self-fixing program) - each system may not have all the functions that the program requires. The solution would be to identify which functions are not available (or work differently on the destination OS; note this only applies to standard functions) and provide an object file with the corrected implementation to be linked.

Please note that I could have an option to either pre-compile it, or simply compile it at the destination.

Posted: Wed Dec 06, 2006 12:49 am
by Solar
B.E wrote:First - because, some compilers like to add a _ to all of their system calls, but this can be fixed by making a utility to convert it to the needed format.
-fleading-underscore / -fno-leading-underscore.
B.E wrote:Second - machine architectures may differ (e.g. x86 and x86-64), but this can be solved by the distributor supplying an installer for each architecture.
Linux also runs on PPC, ARM and numerous other platforms. Then there is the question of binary compatibility (do you remember the pains we went through with GCC 3.0 / 3.1 / 3.2?)...
B.E wrote:Third (kind of a self-fixing program) - each system may not have all the functions that the program requires. The solution would be to identify which functions are not available (or work differently on the destination OS)...
Here you are somewhere between autoconf and the package manager...
B.E wrote:...and provide an object file with the corrected implementation to be linked.
I'm not sure it would be that easy.

But we are one step too far already, IMHO. I haven't yet grasped exactly what the itch is you want to scratch. What is it, exactly, that your "generic installer" will provide a solution for?