Hi,
Rusky wrote:So, Brendan seems to be advocating the Waterfall Model, which was originally conceived not as a legitimate methodology but as an example of a bad methodology. Software development and design requires iteration or you get locked into bad decisions- nobody can know all the problems they will encounter as part of a project, let alone solve them all perfectly without any experience using their design. Even Brendan has changed his design over time because of the experience he's gained working on previous versions and using other software.
I'm mostly advocating the separation of design and implementation; and the inclusion of research, consultation, feedback and a standardisation process into the design phase. This need not be waterfall - it can be iterative where each iteration has a design phase and an implementation phase. Note that what I'm suggesting is not new, or radical, or uncommon - it's the process that virtually every standardisation body uses (and virtually every standardisation body uses it iteratively).
For my project; consultation and feedback haven't been part of my design phase, and I have changed my design over time. Perhaps if consultation and feedback had been part of my design phase, the number of design changes (and the effort spent on deprecated implementations) could've been greatly reduced. Of course consultation and feedback aren't very effective when you only have 1 stakeholder (yourself); but Linux doesn't have that problem.
Rusky wrote:Of course you should think through what you're going to do before you start coding away, but there's a middle ground. You have to be willing to change your design over time, even if for no other reason than to keep up with changing requirements from the real world. This is one reason Linux has "no official 'standard graphics interface'"- not because the people who made the older designs were idiots, but because they had different requirements, and because nobody has time to stop the whole world while they recreate the entire software stack and upgrade everyone's hardware to use the new interface every time it changes.
Consider Wayland - have they "stopped the world" to begin the Wayland project? Will they be expecting anyone to upgrade to "Wayland compatible" hardware?
More importantly; after Wayland is implemented, will they need to change the design and break everything every month, or does the design process they're using adequately minimise the risk of future breakage? My guess is that it doesn't, and that Wayland is destined to become a major cause of breakage (just like X is/was) until someone decides to replace Wayland with something else (which will also fail to minimise the risk of future breakage, and cause just as much pain for end-users and developers as its predecessors).
Rusky wrote:Another reason is that there are many sets of requirements in use even at a single point in time. There is no silver bullet, there is no single interface that will satisfy all use cases optimally. A design that works on a supercomputer would be insane to use on a cell phone, and a design for a cell phone would be insane to use on a workstation. You cannot build a system that will satisfy everyone all at once now, let alone into the future.
Wrong. If there are many sets of requirements in use even at a single point in time, then you can find out what all of those requirements are and attempt to design something suitable for all of them, rather than failing to try. If some of the requirements are mutually exclusive (e.g. the requirements for cell phones are insane for a workstation) then you split it into 2 standards (one for cell phones and one for workstations).
Now; see if you can predict where you will be in 5 minutes' time. My guess is that you will be able to predict the future with better than 95% accuracy, because 5 minutes isn't very far into the future. As the "distance into the future" increases, your predictions will get less accurate.
For something like graphics APIs, the requirements haven't really changed much over the last 20 years. Mostly; people failed to identify the requirements properly 20 years ago, and that caused major changes in the implementations without any change in requirements. How accurately can you predict changes in "graphics API requirements" that will occur in the next 5 minutes? How about the next 2 years? For the next 20 years I'd predict that the only change in "graphics API requirements" will be an increase in the popularity of "head tracking" (e.g. where the display is fixed to your face and moving your head causes the graphics to pan); but it's trivial to build some insurance into the graphics API to cover that possibility; and possible to design a graphics API today that is unlikely to need any significant changes in the next 20+ years.
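As a rough illustration of the kind of insurance I mean (a hypothetical sketch in C, not any real API - every name below is invented): if applications only describe a scene plus a preferred camera, and the system supplies the actual viewer orientation each frame, then head tracking is just a different source of orientation on the system side and the application-facing interface never needs to change.

Code:
/* Hypothetical sketch only - one way a graphics API could build in
 * "head tracking insurance".  The application describes a scene and a
 * default camera; the system (video driver/compositor) decides the
 * actual viewer orientation each frame.  All names are invented. */
#include <stdio.h>

typedef struct {
    float position[3];    /* viewer position in world space */
    float yaw, pitch;     /* viewer orientation in radians */
    float fov;            /* horizontal field of view in radians */
} camera_t;

/* Application side: ask for a default, fixed camera and nothing more. */
static camera_t app_default_camera(void)
{
    camera_t cam = { {0.0f, 1.7f, 0.0f}, 0.0f, 0.0f, 1.57f };
    return cam;
}

/* System side: merge the application's default camera with whatever
 * currently controls the view (nothing today, a head tracker tomorrow).
 * Applications never call this, so adding head tracking later doesn't
 * break them. */
static camera_t system_effective_camera(camera_t requested,
                                        float tracked_yaw,
                                        float tracked_pitch)
{
    requested.yaw   += tracked_yaw;
    requested.pitch += tracked_pitch;
    return requested;
}

int main(void)
{
    camera_t cam = system_effective_camera(app_default_camera(),
                                           0.35f, -0.10f);
    printf("effective camera: yaw=%.2f pitch=%.2f fov=%.2f\n",
           cam.yaw, cam.pitch, cam.fov);
    return 0;
}

The point is that "who controls the final view" gets decided once, during design, instead of being bolted onto every application later.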
Rusky wrote:One example is the insanity on zero-install in this thread. If an application has to be usable directly from a USB drive, and cannot store any data elsewhere, then all the following use cases are impossible:
- You can't store an application separately from its dependencies, so you duplicate them all everywhere- maybe not a problem on your typical desktop with TBs of storage, but it still wastes RAM and thus destroys the cache
Let's split dependencies into 2 categories: things that are a standard part of the OS and are therefore never missing; and things that should be considered part of the application rather than a dependency of the application. I see no problem here; only the opportunity to optimise the application better, and to reduce wasted RAM and improve cache efficiency far more than shared libraries (which in practice are each only shared by a few applications) ever will.
Rusky wrote:- You can't store your apps in read-only locations- useful for security, for running off CDs (as mentioned here!), for running off the network
Yes you can. The only thing that would need to be modified is end-user configuration/settings, which needn't (and shouldn't) be stored with the application itself. Each user would have a "/home/username/settings/myApp" directory containing the settings for "myApp"; and if they use the application on a different computer they just get the default settings until/unless they customise the application's settings on that computer. Of course you'd also be able to put your "/home/username/settings/" on a USB flash stick or a network share or whatever; possibly including the ability to synchronise multiple copies of your "/home/username/settings/" in different places; and possibly including the ability to have the app and your "/home/username/settings/" on the same USB flash stick.
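To show how little the application itself has to care (a minimal sketch in C, assuming the hypothetical "/home/username/settings/myApp" layout above and a made-up one-line settings format): the application tries to load its per-user settings and silently falls back to built-in defaults if they don't exist yet - which is exactly what would happen the first time you run it on a different computer.

Code:
/* Minimal sketch of "settings live under the user's home, not with the
 * application".  The path layout and settings format are hypothetical. */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    int window_width;
    int window_height;
} settings_t;

static const settings_t default_settings = { 800, 600 };

static settings_t load_settings(const char *app_name)
{
    settings_t s = default_settings;
    const char *user = getenv("USER");    /* stand-in for "username" */
    char path[256];

    if (user == NULL)
        return s;                          /* no user? just use defaults */

    snprintf(path, sizeof(path),
             "/home/%s/settings/%s/config", user, app_name);

    FILE *f = fopen(path, "r");
    if (f == NULL)
        return s;                          /* no settings yet: defaults */

    /* Hypothetical format: "width height" on one line. */
    if (fscanf(f, "%d %d", &s.window_width, &s.window_height) != 2)
        s = default_settings;
    fclose(f);
    return s;
}

int main(void)
{
    settings_t s = load_settings("myApp");
    printf("window: %dx%d\n", s.window_width, s.window_height);
    return 0;
}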
Rusky wrote:- You can't separate config and data from the app itself (yes, this is a feature)- useful when you want to store the app and config on different media (fast/small vs slow/large; central app db on a network with user profiles on their devices; user profile on a USB drive with app on machine; app on USB drive with profile on machine; use case-specific set of config profiles rather than one per user; etc.)
- You can't keep your config around when you un/re-install/update the app, or easily move it to another install- one of many reasons for a separate /home partition
Why can't you separate config and data from the app itself? As described above, the user's configuration and data live in their "/home/username/settings/" directory (which can be on different media, kept across un/re-installs/updates, and moved between computers), and are never stored with the application in the first place.
Cheers,
Brendan