SPICE: lots of theoretical wankery that may someday be an OS

Discussions on more advanced topics such as monolithic vs micro-kernels, transactional memory models, and paging vs segmentation should go here. Use this forum to expand and improve the wiki!
Owen
Member
Posts: 1700
Joined: Fri Jun 13, 2008 3:21 pm
Location: Cambridge, United Kingdom

Re: SPICE: lots of theoretical wankery that may someday be a

Post by Owen »

iansjack wrote:
Octocontrabass wrote:
onlyonemac wrote:I still maintain that the Classic Macintosh operating system is still the best designed operating system ever
Every program runs in kernel mode. There are kilobytes of global variables, at fixed memory locations. Are you sure it's the best design ever? :roll:
To be fair, Classic Mac OS was so hardware independent that it would run on 68000 family processors and the PowerPC family. The designers of Unix/Linux/BSD must be green with envy at such versatility.
I'm assuming heavy sarcasm here, because one must note, of course, that Mac OS was ported to PowerPC by running most of it in an emulator.

(This still made things slightly faster, because the PowerPCs that Apple used were about 3x faster than the fastest 68040)
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: SPICE: lots of theoretical wankery that may someday be a

Post by Rusky »

Yes, separating design and implementation is good - as I said, "you should think through what you're going to do before you start coding away." But iteration should not (always) mean creating entirely new implementations - it's okay to have things like incompatible versions of libraries as clients transition to new ones. Sometimes that's unavoidable no matter how much design you do up-front.

Wayland is actually a great example of how not to stop the world. Because we've reached a point where we can't evolve X11 without breaking some fundamental parts of it, its developers created a new project. However, because we can't just drop everything while we rewrite all our software - including tiny niche programs, corporate-internal programs, etc. - to use Wayland, they also created XWayland, so all the programs that haven't migrated, or never will, still work.

Neither X11 nor Wayland has ever been a major source of breakage from monthly breaking design changes. X11 goes to great lengths to be backwards compatible, which is the entire reason Wayland exists. While there have been some dead-end design choices throughout its evolution, X11 has for the most part been very well thought out - look how much requirement change it's outlasted!

You can claim that every new interface is the result of poor initial planning, but you have all the experience and new design ideas available when you make that claim. Hindsight is 20/20, but hindsight is made from experience. Of course initial planning is good, but it cannot be perfect - as you say, your ability to predict the future goes down the farther out you look.

For example, graphics API requirements have, in fact, changed drastically since X11 was created. At that time, there were only very simple, functional graphics, rendered on (in X11 terms) a server using software rendering, with tight memory constraints and little to no animation. Now we have ultra-high-resolution (in both pixels and color depth) graphics rendering in real time at 60fps on dedicated hardware with gigabytes of memory, and we take advantage of that for video, games, usability enhancements, flashy effects (note: not necessarily the same as usability enhancements), etc.

X11's original design is great for the first set of requirements, and because its creators understood how requirements change, they designed it to be extensible enough that it still handles the second set pretty well. However, some pieces of X11 that are good for the first set are bad for the second (no compositing built in, server/client, software rendering primitives, etc.) or needed to be moved into the kernel because the hardware interfaces standardized (and so keeping them around in X11 is just baggage). It needs to go to let graphics continue to progress. Head tracking requires low latency and new input methods, which Wayland is optimized for, but your claim that it's the only change in the next 20 years is more a failure of imagination than anything, which is why Wayland is also extensible.

Basically my point here is that you can't just ignore changing requirements and imperfect humans - you have to design for them. Learn from the past, even if it's icky. Make things extensible so you don't have to keep starting from scratch. Be okay with complexity, because you're dealing with a complex problem domain, especially if you want your software to work for people besides just you.
Brendan wrote:Let's split dependencies into 2 categories: things that are a standard part of the OS and are therefore never missing; and things that should be considered part of the application rather than a dependency of the application. I see no problem here; only the opportunity to optimise the application better, and reduce wasted RAM and improve cache efficiency far more than "slightly shared by very few apps shared libraries" ever will.
There's definitely room for libraries in the space between OS-standard and part-of-the-app.
Brendan wrote:
Rusky wrote:
  • You can't store your apps in read-only locations - useful for security, for running off CDs (as mentioned here!), for running off the network
Yes you can. The only thing that would need to be modified is end-user configuration/settings, which needn't be stored with the application itself and shouldn't be stored with the application itself.
I was responding to this:
MessiahAndrw wrote:On my operating system I'm planning for programs to simply be directories. Configuration files must stay in that directory - even if they're user-specific, they can create sub-directories.
Same applies to the rest of my points in that list.
Brendan wrote:Of course there's a massive difference between "advanced features" (that are useful/needed in certain situations) and "complications that benefit nobody that the designer failed to avoid" (that provide no benefits compared to superior/simpler alternatives). A lot of the complexities that end-users are expected to deal with for "desktop GNU/Linux distributions" are the latter. Note: I've explicitly excluded Android here, as the "user space" that Android users see is significantly different.
Are you saying the existence of distributions is one of those unnecessary complications? Or just that many distributions expose those complications?
AndrewAPrice
Member
Posts: 2299
Joined: Mon Jun 05, 2006 11:00 pm
Location: USA (and Australia)

Re: SPICE: lots of theoretical wankery that may someday be a

Post by AndrewAPrice »

Brendan wrote:In my opinion, for user interfaces the best solution is to use multiple modes where applicable, to "hide" the advanced stuff so that beginners don't get confused and don't screw things up, but so that advanced users can find/enable the advanced options if/when they want to.
The examples I think of are video games. In real-time strategy games where you build a base and raise an army, for example, there can be several dozen different types of buildings and units, and several hundred combinations of upgrades. It would be overwhelming to jump straight in with everything unlocked, so it's common in these games that when you play the single-player campaign, you start with only a couple of things unlocked. By the end of the campaign you've been introduced to all of the different buildings and units, and that's when you can join an online skirmish and play against others without feeling totally lost.

I'm not sure how you'd introduce that into general-purpose software such as word processors or spreadsheet programs. Many advanced features of these programs aren't used by the typical user (watermarks, mail merge, data synchronization, revision tracking, embedded binary attachments, scripting) - while other users rely on those same features every day.

It may be overwhelming for a new user to jump in with all of these advanced features suddenly exposed, yet a power user would want exactly that. There are also different types of power users, with different needs, using different features.

Some programs have an 'advanced mode' that can be switched on and off. Some have multiple 'views' - like calculators with simplified, programmer, and scientific modes. Most programs let you show and hide different windows and toolbar panels. Some even go so far as to let you customize the individual items in menus and toolbars.
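Mechanically it can be as simple as tagging each UI element with a minimum mode and filtering at display time - a toy sketch, with all the names made up rather than taken from any real toolkit:

Code:

#include <stdio.h>

/* Hypothetical UI modes, from least to most advanced. */
typedef enum { MODE_BASIC, MODE_STANDARD, MODE_EXPERT } ui_mode;

typedef struct {
    const char *label;    /* text shown in the menu */
    ui_mode     min_mode; /* lowest mode in which the item appears */
} menu_item;

/* Print only the items visible in the current mode. */
static void show_menu(const menu_item *items, int n, ui_mode current)
{
    for (int i = 0; i < n; i++)
        if (items[i].min_mode <= current)
            printf("  %s\n", items[i].label);
}

int main(void)
{
    const menu_item tools[] = {
        { "Spell check",  MODE_BASIC    },
        { "Word count",   MODE_STANDARD },
        { "Mail merge",   MODE_EXPERT   },
        { "Macro editor", MODE_EXPERT   },
    };
    int n = sizeof tools / sizeof tools[0];

    puts("Tools (basic mode):");
    show_menu(tools, n, MODE_BASIC);
    puts("Tools (expert mode):");
    show_menu(tools, n, MODE_EXPERT);
    return 0;
}

The same tag-and-filter idea extends to toolbars, dialogs, and whole views.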

I'm actually one that customizes my programs very little. Unless there's specifically something in my way or something missing, I tend to just use them as-is.
My OS is Perception.
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: SPICE: lots of theoretical wankery that may someday be a

Post by Brendan »

Hi,
Rusky wrote:X11's original design is great for the first set of requirements, and because its creators understood how requirements change, they designed it to be extensible enough that it still handles the second set pretty well. However, some pieces of X11 that are good for the first set are bad for the second (no compositing built in, server/client, software rendering primitives, etc.) or needed to be moved into the kernel because the hardware interfaces standardized (and so keeping them around in X11 is just baggage). It needs to go to let graphics continue to progress. Head tracking requires low latency and new input methods, which Wayland is optimized for, but your claim that it's the only change in the next 20 years is more a failure of imagination than anything, which is why Wayland is also extensible.
Let's enumerate requirements for a graphical API in historical order:
  • Ability to draw text, pictures and basic 2D shapes in graphics mode (requirement existed since graphics modes existed)
  • Ability to support multiple independent windows/planes/canvases/whatever_you_call_them (requirement from about 1973)
  • Ability to handle multiple output devices/"multi-monitor" (requirement from around mid 1980s)
  • Ability to handle hardware accelerated 2D (requirement since around mid 1980s)
  • Ability to handle hardware accelerated 3D (requirement since early 1990s)
  • Ability to handle hardware accelerated video decoding (requirement since early 2000s)
  • Ability to handle programmable shaders ("anti-requirement" since early 2000s)
According to the Wikipedia article, the X Window System dates back to 1984. From the list above, there have been only three changes in requirements since then.
Rusky wrote:
Brendan wrote:Let's split dependencies into 2 categories: things that are a standard part of the OS and are therefore never missing; and things that should be considered part of the application rather than a dependency of the application. I see no problem here; only the opportunity to optimise the application better, and reduce wasted RAM and improve cache efficiency far more than "slightly shared by very few apps shared libraries" ever will.
There's definitely room for libraries in the space between OS-standard and part-of-the-app.
The room in the space between OS-standard and part-of-the-app is a trap. It's the beginning of a slippery slope that leads to a massive disaster.
Rusky wrote:
Brendan wrote:Of course there's a massive difference between "advanced features" (that are useful/needed in certain situations) and "complications that benefit nobody that the designer failed to avoid" (that provide no benefits compared to superior/simpler alternatives). A lot of the complexities that end-users are expected to deal with for "desktop GNU/Linux distributions" are the latter. Note: I've explicitly excluded Android here, as the "user space" that Android users see is significantly different.
Are you saying the existence of distributions is one of those unnecessary complications? Or just that many distributions expose those complications?
I'd be tempted to say that the existence of multiple distributions (e.g. tailored for a specific use - maybe one for embedded, one for mobile, one for desktop and one for servers) is fine; but the sheer number of them and the differences between them are a symptom of a larger problem: a severe failure to establish adequate standards as part of a thorough design process.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: SPICE: lots of theoretical wankery that may someday be a

Post by Rusky »

Drawing text/pictures/shapes in X11 originally required an interface that could be efficiently transmitted to the X server, using little memory, with no hardware acceleration for 2D or 3D. Today, applications need to use hardware acceleration to render everything to their own buffer with as little latency as possible, with the window manager doing compositing (again through hardware) rather than requesting applications redraw invalidated areas.
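To make the contrast concrete, here's a toy sketch of the two models (all names are invented; this is not the real X11 or Wayland API):

Code:

#include <stdio.h>

/* Request-style model: the client describes a primitive and the display
 * server rasterizes it. Every drawing operation is a round trip. */
struct draw_request { int win, x, y, w, h; };

static void server_draw_rect(struct draw_request r)
{
    printf("server rasterizes a %dx%d rect in window %d\n", r.w, r.h, r.win);
}

/* Buffer-style model: the client renders into a buffer it owns (often
 * with the GPU), and the compositor is only told "this surface changed". */
struct surface { int w, h; unsigned char *pixels; };

static void compositor_commit(int id, struct surface *s)
{
    printf("compositor composites surface %d (%dx%d, client-rendered)\n",
           id, s->w, s->h);
}

int main(void)
{
    /* Old model: the server sees every individual operation. */
    server_draw_rect((struct draw_request){ .win = 1, .w = 100, .h = 40 });

    /* New model: the client fills its buffer however it likes; the
     * compositor never sees a single drawing command. */
    static unsigned char pixels[100 * 40 * 4];
    struct surface s = { .w = 100, .h = 40, .pixels = pixels };
    compositor_commit(1, &s);
    return 0;
}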

Nothing uses the old X11 interfaces anymore because they don't fit these requirements, and yet they're still here so Xorg can be fully compliant with the X11 spec. Instead, applications use a combination of X extensions and kernel interfaces that completely bypass X. What interface should X11 have used in 1984 so that those new interfaces wouldn't have been necessary? Compositing? Not an option for the requirements of the time. Client-side rendering? Not an option at the time. And yet those are essential for the requirements of modern interfaces, for both usability and aesthetic reasons.

You cannot take the optimal interface from 2014 and have it work in 1984, nor can you take the optimal interface from 1984 and make it fulfill the requirements from 2014, because the requirements are too different.
Brendan wrote:The room in the space between OS-standard and part-of-the-app is a trap. It's the beginning of a slippery slope that leads to a massive disaster.
What disaster? The horrible death of having to install dependencies? Where do you put scripting languages? Database engines? XML parsers? Compilers (standalone vs part of an IDE)? Different desktop environments? Bake them into the OS so nobody can ever use any but the default? Bake them into the apps that use them so you have separate copies of them all polluting the disk cache, RAM, and the CPU cache? I'd rather have a real package manager, thanks.
Brendan wrote:I'd be tempted to say that the existence of multiple distributions (e.g. tailored for a specific use - maybe one for embedded, one for mobile, one for desktop and one for servers) is fine; but the sheer number of them and the differences between them are a symptom of a larger problem: a severe failure to establish adequate standards as part of a thorough design process.
So once one group has figured out a standard for a desktop distro, nobody else should be allowed to create one? What happened to open source and competition? Why not just let the distros themselves handle ease of use? Some are doing a pretty good job of it. I could understand your complaint if a company were offering computers with your choice of 1000 distros pre-installed, but this is ridiculous. You could extend your argument to say that all OSes are contributing to the problem and we need to take down this site and standardize on Windows.
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: SPICE: lots of theoretical wankery that may someday be a

Post by Brendan »

Hi,
Rusky wrote:Drawing text/pictures/shapes in X11 originally required an interface that could be efficiently transmitted to the X server, using little memory, with no hardware acceleration for 2D or 3D. Today, applications need to use hardware acceleration to render everything to their own buffer with as little latency as possible, with the window manager doing compositing (again through hardware) rather than requesting applications redraw invalidated areas.
That's an implementation detail and not a requirement.

Note that I personally think it's a bad implementation detail (e.g. an application should only send a description of what to render, and should not have access to any "pixel buffer" of any kind for any reason); especially for something where network transparency is a goal.
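As a rough sketch of what I mean (the command set below is invented purely for illustration), the application would only ever build a device-independent description like this; the same stream can then be rasterized locally or shipped across a network:

Code:

#include <stdio.h>

/* Invented command set: the application emits a description of the frame
 * and never touches pixels, which is what makes it network-transparent. */
enum render_op { OP_CLEAR, OP_RECT, OP_TEXT, OP_END };

struct render_cmd {
    enum render_op op;
    float x, y, w, h;   /* coordinates in device-independent units */
    const char *text;
};

/* The "renderer" end: could be local GPU code or a remote display server. */
static void render(const struct render_cmd *list)
{
    for (; list->op != OP_END; list++) {
        switch (list->op) {
        case OP_CLEAR: puts("clear background");                          break;
        case OP_RECT:  printf("rect at (%.0f,%.0f)\n", list->x, list->y); break;
        case OP_TEXT:  printf("text: %s\n", list->text);                  break;
        default:       break;
        }
    }
}

int main(void)
{
    /* The application only builds a description of the frame. */
    const struct render_cmd frame[] = {
        { OP_CLEAR },
        { OP_RECT, 10, 10, 80, 20 },
        { OP_TEXT, 14, 14, 0, 0, "Hello" },
        { OP_END },
    };
    render(frame);
    return 0;
}

The renderer is free to rasterize however it likes (GPU, software, remote), and the application never sees a pixel.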
Rusky wrote:You cannot take the optimal interface from 2014 and have it work in 1984, nor can you take the optimal interface from 1984 and make it fulfill the requirements from 2014, because the requirements are too different.
Agreed; but only because the requirements changed with the introduction of 3D acceleration, programmable shaders and hardware accelerated video decoders. You could have designed an optimal interface in 2001 that is still optimal today, and is likely to remain optimal until the requirements change (maybe in 2022 when "head tracking displays" become common). You could also design an optimal interface today that could remain optimal for 50+ years.
Rusky wrote:
Brendan wrote:The room in the space between OS-standard and part-of-the-app is a trap. It's the beginning of a slippery slope that leads to a massive disaster.
What disaster? The horrible death of having to install dependencies? Where do you put scripting languages? Database engines? XML parsers? Compilers (standalone vs part of IDE)? Bake them into the OS so nobody can ever use any but the default? Bake them into the apps that use them so you have separate copies of them all polluting the disk cache, RAM, and the CPU cache? I'd rather have a real package manger, thanks.
There are two issues here ("baked into OS" and "baked into application"). For the first issue; you seem to have significant difficulties distinguishing between "interfaces" and "implementations". For example, you can have a single standard "database engine interface" (e.g. SQL) with hundreds of entirely different competing "database engine implementations" that all comply with the standard for that interface. You could provide a generic/minimal database engine with the OS, but it wouldn't prevent anyone from switching to any other database engine they like. Applications only depend on the interface, do not depend on the implementation, and needn't know or care which implementation is being used.
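As a sketch of that distinction (all names here are hypothetical, C-style only for illustration), the application sees nothing but the interface, and any compliant engine can be slotted in behind it:

Code:

#include <stdio.h>

/* Hypothetical "standard interface": every engine must provide these
 * operations. Applications code against this struct only. */
struct db_engine {
    const char *name;
    int (*open)(const char *path);
    int (*query)(const char *sql);
};

/* One competing implementation... */
static int tiny_open(const char *path) { printf("tinydb: open %s\n", path); return 0; }
static int tiny_query(const char *sql) { printf("tinydb: %s\n", sql); return 0; }
static const struct db_engine tinydb = { "tinydb", tiny_open, tiny_query };

/* ...and another. Either satisfies the same interface. */
static int big_open(const char *path) { printf("bigdb: open %s\n", path); return 0; }
static int big_query(const char *sql) { printf("bigdb: %s\n", sql); return 0; }
static const struct db_engine bigdb = { "bigdb", big_open, big_query };

/* The application never names a concrete engine. */
static void run_report(const struct db_engine *db)
{
    db->open("accounts.db");
    db->query("SELECT SUM(balance) FROM accounts");
}

int main(void)
{
    run_report(&tinydb);   /* swap in &bigdb and the application is unchanged */
    return 0;
}

Swap &tinydb for &bigdb and the application doesn't change; that's the whole point of standardising the interface rather than the implementation.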

For the second issue ("baked into application"); I think you mean "whole program optimisation that doesn't suck", or "zero breakage caused by library changes that applications weren't able to be aware of", or "no chance of dependency hell", or "no need for a massive over-complicated spaghetti monster package management system", or "far superior in every possible way imaginable". 8)
Rusky wrote:
Brendan wrote:I'd be tempted to say that the existence of multiple distributions (e.g. tailored for a specific use - maybe one for embedded, one for mobile, one for desktop and one for servers) is fine; but the sheer number of them and the differences between them are a symptom of a larger problem: a severe failure to establish adequate standards as part of a thorough design process.
So once one group has figured out a standard for a desktop distro, nobody else should be allowed to create one? What happened to open source and competition? Why not just let the distros themselves handle ease of use? Some are doing a pretty good job of it. I could understand your complaint if a company were offering computers with your choice of 1000 distros pre-installed, but this is ridiculous. You could extend your argument to say that all OSes are contributing to the problem and we need to take down this site and standardize on Windows.
Once a group has figured out a set of standards that specify the interfaces, many different people can create many different implementations that comply with those standards. If the standards aren't adequate, then people can also create a new set of standards. What is important is interoperability and the freedom to choose the implementation; not the freedom to choose incompatible interfaces.

What happened with open source and competition is that, after 20 years of trying, we've discovered that the vast majority of people would rather pay money to Microsoft (or Apple) than use anything designed by a disorganised horde of monkeys running about causing random pointless churn. This is also why it's so hard to find any company offering computers with any Linux distro pre-installed (but extremely easy to find many companies happy to pre-install Windows).

Note that I'm not saying open source is a complete failure. Google/Android has shown that if you can get rid of the disorganised horde of monkeys it's possible for open source to succeed.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
commodorejohn
Posts: 11
Joined: Sat Jul 05, 2014 9:31 pm
Location: Duluth, MN

Re: SPICE: lots of theoretical wankery that may someday be a

Post by commodorejohn »

embryo wrote:Your most important problem is your desire. A system without idiot users will struggle to get any attention and will most probably die. You have mentioned improper marketing and other Apple mistakes - they are the consequence of ignoring idiots. Other systems that ignore them are the Unixes - they keep idiots far from understanding them. But idiots use Linux distributions - is that a joke? One notable distribution is named Android, and it will very soon reach 2 billion users.
I don't understand what you're trying to say here...?

Also, you seem to be under the impression that I think of "ordinary users" and "idiots" as being one and the same. That's not the case at all. Ordinary users should be given some consideration (which Unix/Linux basically doesn't do at all, because its developers commonly do think that ordinary users are idiots, so when they're not ignoring ordinary users entirely, they're attempting to coddle them with a dozen variants of Baby's First Desktop UI.) Things like emphasizing intuitive, consistent user-interface design and failing in a recoverable fashion when possible would fall into that category. But there's nothing a computer can do to make someone stop behaving in a reckless or ignorant fashion other than to let them fail and learn from their failure; trying to prevent them from failing is less likely to educate them and more likely to hamper sensible users. (See, for example, Windows Vista/7/8's UAC.)
embryo wrote:But here is another problem - how do you implement such an ideal? The answer lies somewhere on another spectrum, from Linux's way at one extreme to the Windows/Android way at the other. The first extreme is a very long-lived community process built on the bazaar approach, which obviously leads to the number one philosophy point being compromised. The second extreme is based on a cathedral approach with a tightly controlled development process. But as time goes on, the cathedral has to be rebuilt many times to suit the new needs of its owner, and that likewise compromises the number one philosophy point, because the system must serve the idiots' needs (i.e. the main customers, ordinary users) and the service must be delivered as quickly and as cheaply as possible.
So very, very wrong. Cathedral/bazaar/whatever methodology has nothing to do with the ability to design an intuitive yet powerful operating system. It's entirely possible for a corporation to develop an accessible but flexible OS (Amiga! BeOS!) and it's equally possible for a community of independent developers to work on the "ordinary users are idiots" philosophy and create software suitable only for idiots (dear God, GNOME 3. Or that "Sugar" **** on the OLPC.) It has everything to do with the assumptions you start with - if you really think that ordinary users are rock-stupid, your attempt to create a system for ordinary users is going to result in commensurately rock-stupid software.
embryo wrote:And after looking at the possible implementation approaches, it seems that all your philosophy is useless: either way you should first make something "not so good" and try to increase its quality iteratively, without deep thoughts about the philosophy. A philosophy like "the OS should be simple and good" is absolutely obvious and requires no thought. And the implementation details will always prevent you from building a real cathedral (an ideal).
Feh. Is perfection going to be achieved in one go? Of course not. But the way I see it, having some idea of where you want to go when you start is a good way to reduce your chances of getting aimlessly lost along the way, even if it still takes you many revisions to actually get there.

And are the things I said obvious? Darn right they are. Unfortunately, they seem to be the kind of obvious that people frequently miss or ignore despite their obviousness - so I think they bear repeating.
Rusky wrote:Another example is the OP's claim that any non-browser/network app should never be insecure or the developer is "doing it wrong." The fact is, your software will be insecure.
Will it? Seriously, I'm still trying to wrap my head around this viewpoint after years of hearing it treated as gospel. Please, explain to me how, say, a text editor can present a security flaw that doesn't spring from the developer doing something incredibly bone-headed like making it Internet-facing for no good reason.

(For that matter, could someone relate a network security flaw that didn't spring from something incredibly, obviously stupid like using buffers without bounds-checking on an outward-facing connection? I'm really curious.)
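(To be concrete about the kind of stupidity I mean, the classic missing bounds check looks something like this toy sketch: a length field taken from untrusted input - a network packet, or equally a file being opened - gets trusted when copying.)

Code:

#include <stdio.h>
#include <string.h>

/* The length field comes straight from the input. */
struct record { unsigned char len; unsigned char data[255]; };

/* BUG: trusts r->len, which can be up to 255, while buf is 64 bytes.
 * A crafted input smashes the stack. (Never call this.) */
void parse_unsafe(const struct record *r)
{
    char buf[64];
    memcpy(buf, r->data, r->len);
    (void)buf;
}

/* The one missing check that fixes it. */
void parse_safe(const struct record *r)
{
    char buf[64];
    size_t n = r->len;
    if (n > sizeof(buf))
        n = sizeof(buf);      /* or reject the input outright */
    memcpy(buf, r->data, n);
    (void)buf;
}

int main(void)
{
    struct record r = { .len = 200 };   /* attacker-controlled length */
    parse_safe(&r);                     /* clamped; parse_unsafe would overflow */
    puts("parsed safely");
    return 0;
}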
Rusky wrote:It's much more important to figure out how to deal with vulnerabilities when they're discovered rather than sticking your head in the sand and singing "lalala I can't hear you."
Well I certainly never suggested doing that.
Rusky wrote:Even in a memory-safe language with a sophisticated type system and lots of static analysis - no, even in a mathematically impossible language that catches all security holes at the language level, your design and users will still have problems - phishing, scammers calling on the phone pretending to be tech support, social engineering, side channel attacks, etc.
But at some point that stuff has to stop being the job of the OS and start being the job of the user - because, as you point out, it's impossible for the OS to actually account for every possible eventuality, and it becomes unusable well before that. We have mail providers that block whole regions of IPs and filter out messages based on complex heuristics, and people still fall for incredibly obvious 419 scams. That ain't the computer's fault, and it's insane to act like it is.
Rusky wrote:Given that software must have dependencies, different software must use different dependencies, software must be updated both for changing requirements and for security, and both config and dependencies should be separable from their applications, it's not unreasonable to use a program to manage it all. It needs to make sure apps have the right dependencies installed, that you don't accidentally break your system trying to install conflicting dependencies (optimally by letting them coexist), that you don't get phished trying to install something by gallivanting around the Internet downloading unsigned packages from untrusted sources, and that your software is all up to date.
Dependencies, in my experience, only become incomprehensible and unmanageable when either A. the dependencies themselves are poorly designed, or B. the average program is made up of a tiny core dynamically linking to eleven billion libraries, sub-libraries, and sub-sub-libraries (so, any given GNOME application.) My answer to both of these problems would be more along the lines of "holy geez stop doing that already!" than "let's introduce a framework to formalize and manage these terrible practices!" It's like dealing with squatters by asking them to fill in an entry on the guest register.

As for that signing nonsense, it's an excellent way to sneak in some brand-new garden walls under the pretense that the computer should be subbing for the user's common sense and good judgement.
Rusky wrote:To clarify again, I'm not claiming any particular Linux distribution is perfect, or that we can't come up with a good zero-install system, or that unix is the pinnacle of good design. But throwing out all the wisdom and experience that's gone into those systems because you don't like some petty surface things is insane. There's even less of a chance that your magical recreate-everything design will be any good if you throw out the baby with the bathwater, let alone assert dominance without even attempting to provide solutions to these problems.
Who said anything about throwing out wisdom and experience? I'm perfectly willing to take lessons from these into consideration. What I'm not interested in is trying to kludge a system that's already half made up of kludges into being something more elegant and modern when it would be simpler to just start from scratch.
onlyonemac wrote:Why don't you just use an old Mac? Also, I'm eagerly awaiting the day when someone ports or at least rewrites the Classic Macintosh operating system to run on PCs (or perhaps the day when I finally shelve my current operating system project and write a loose Mac OS clone)...
Heh, I toyed with that idea for a while...ultimately I think it's probably more trouble than it's worth, but if you ever do take a stab at it, you can count on me being a tester ;)

(As for why I don't just use an old Mac? Really, pretty much it's just the lack of support for SSL and WPA/WPA2. If it weren't for that, OS9 on my TiBook would probably suit me just about perfectly.)
onlyonemac wrote:By the way, are you the same commodorejohn that was on the 68kMLA forums?
I am indeed!
iansjack wrote:The proof of the pudding is in the eating. How many people now use these wonderful OSs - Classic Mac OS, AmigaOS, Beos, Haiku, whatever? And yet the granddaddy of them all - Unix - is, strangely, still going strong, thanks to a well thought out, elegant design, powering the Internet, allowing amateurs like Sir Tim to invent the world-wide web. It's a funny old world.
Ah, there's that old Unix vainglory. Not only is it somehow "the granddaddy" of systems actually descended from SmallTalk or TRIPOS or designed from scratch as an object-oriented operating system with only POSIX compatibility as a link to the Unix legacy, it's also Deep Magic that is the only thing that could possibly have enabled someone to come up with the idea of network hypertext!
SpyderTL wrote:Getting anyone on this site to agree with you about anything would be a true miracle. The problem is that everyone here already has their "perfect" OS design in their head, and everything that doesn't happen to line up with it is immediately attacked with passion and fervor.
Oh, of course. But it's made for some lively conversation!
SpyderTL wrote:Anything is possible, and sometimes it just takes someone who believes in something so much that they simply ignore all of the nay-sayers and proceed to change the world. With that in mind, if there is anything I can do to help, just let me know.
Many thanks :)
Brendan wrote:Of course there's a massive difference between "advanced features" (that are useful/needed in certain situations) and "complications that benefit nobody that the designer failed to avoid" (that provide no benefits compared to superior/simpler alternatives). A lot of the complexities that end-users are expected to deal with for "desktop GNU/Linux distributions" are the latter.
Dead-on.
Computers: Amiga 1200, DEC VAXStation 4000/60, DEC MicroPDP-11/73
Synthesizers: Roland JX-10/MT-32/D-10, Oberheim Matrix-6, Yamaha DX7/FB-01, Korg MS-20 Mini, Ensoniq Mirage/SQ-80, Sequential Circuits Prophet-600, Hohner String Performer
iansjack
Member
Posts: 4689
Joined: Sat Mar 31, 2012 3:07 am
Location: Chichester, UK

Re: SPICE: lots of theoretical wankery that may someday be a

Post by iansjack »

You miss my point about "granddaddy" Unix. I make no claim that it is the forefather of those now almost-forgotten operating systems, just that it predates them - and yet, here it still is. I only make the practical point that the number of users of those OSs is in the low thousands, whereas Unix has survived them all and is now the most widespread OS. It must be doing something right; heck, it must be doing a lot of things right. Those other wonderful OSs are now just tombstones visited by a few devoted pilgrims. Meanwhile, in the real world....

Classic Mac OS, AmigaOS, Beos, RiscOS, all these wonderful OSs were niche designs running on very limited hardware. All of them were so poorly designed that they could not adapt to new hardware. You value architecture independence and yet you give no regard to an OS that runs on everything from handhelds to mainframes.

The Internet could run on something other than Unix; the WWW could have been designed on something other than Unix. Interesting as a debating point, but the reality is that the Internet does run on Unix, and developers, scientists, and the man in the street all over the world use Unix, not Classic Mac OS, to get their jobs done. Theorize all you like, but you can't argue with the numbers.
commodorejohn
Posts: 11
Joined: Sat Jul 05, 2014 9:31 pm
Location: Duluth, MN

Re: SPICE: lots of theoretical wankery that may someday be a

Post by commodorejohn »

iansjack wrote:You miss my point about "granddaddy" Unix. I make no claim that it is the forefather of those now almost-forgotten operating systems, just that it predates them - and yet, here it still is. I only make the practical point that the number of users of those OSs is in the low thousands, whereas Unix has survived them all and is now the most widespread OS. It must be doing something right; heck, it must be doing a lot of things right.
If success were a measure of quality, Twilight would be good. Unix's prevalence shows only that a product that is first pushed by a giant monopoly and then given away for free can get people to go "okay, sure, whatever."
iansjack wrote:Classic Mac OS, AmigaOS, Beos, RiscOS, all these wonderful OSs were niche designs running on very limited hardware. All of them were so poorly designed that they could not adapt to new hardware. You value architecture independence and yet you give no regard to an OS that runs on everything from handhelds to mainframes.
Actually, all of the above were quite adaptable to new hardware (the Amiga less so than the others, but still.) Three of the four have even made successful jumps to completely different architectures, in one case through sheer community willpower. So...yeah, good argument there.

And I'd give Unix more credit on the portability front if it didn't achieve that by treating everything like a mainframe.
iansjack wrote:The Internet could run on something other than Unix; the WWW could have been designed on something other than Unix. Interesting as a debating point, but the reality is that the Internet does run on Unix, and developers, scientists, and the man in the street all over the world use Unix, not Classic Mac OS, to get their jobs done.
And...? You want a medal or something?
iansjack wrote:Theorize all you like, but you can't argue with the numbers.
I'm not arguing with the numbers, I'm just saying that they don't prove what you think they do.
Computers: Amiga 1200, DEC VAXStation 4000/60, DEC MicroPDP-11/73
Synthesizers: Roland JX-10/MT-32/D-10, Oberheim Matrix-6, Yamaha DX7/FB-01, Korg MS-20 Mini, Ensoniq Mirage/SQ-80, Sequential Circuits Prophet-600, Hohner String Performer
embryo

Re: SPICE: lots of theoretical wankery that may someday be a

Post by embryo »

SpyderTL wrote:The problem is that everyone here already has their "perfect" OS design in their head, and everything that doesn't happen to line up with it is immediately attacked with passion and fervor.
But it is exactly your position, isn't it?

The best way here is to understand the reasons that drive negative comments. It isn't the "line up with it" stance. There are problems that you think are small and unimportant, but for some reason other people perceive them differently.
embryo

Re: SPICE: lots of theoretical wankery that may someday be a

Post by embryo »

Brendan wrote:In my opinion, for user interfaces the best solution is to use multiple modes where applicable, to "hide" the advanced stuff so that beginners don't get confused and don't screw things up, but so that advanced users can find/enable the advanced options if/when they want to.
Of course that is a nice way to go. But as some comments in this thread suggest, maybe it would be even nicer to stimulate some learning process among OS users. Could we even base an OS on a permanent-learning paradigm?
Brendan wrote:Of course there's a massive difference between "advanced features" (that are useful/needed in certain situations) and "complications that benefit nobody that the designer failed to avoid" (that provide no benefits compared to superior/simpler alternatives). A lot of the complexities that end-users are expected to deal with for "desktop GNU/Linux distributions" are the latter.
With permanent learning, the final body of knowledge should be really exciting, like a very beautiful cathedral. Linux's way leads to a large heap of mostly useless knowledge with some gems within. Citizens of the "heap city" are often unable (or refuse) to see the whole body, because the heap is very large. And the few gems that are exposed to everyone convince most of the citizens of the holiness of the city. What would a new and exciting cathedral have to be to attract the citizens of the "heap city"? We need a very big step forward, but we don't have enough resources :(
Brendan wrote:For example, you can have a single standard "database engine interface" (e.g. SQL) with hundreds of entirely different competing "database engine implementations" that all comply with the standard for that interface. You could provide a generic/minimal database engine with the OS, but it wouldn't prevent anyone from switching to any other database engine they like. Applications only depend on the interface, do not depend on the implementation, and needn't know or care which implementation is being used.
Here we could have a generic service interface, so we can manage without OS support when some new concept is invented. For example, if databases didn't exist yet and the concept had just been invented, a database vendor could implement the generic interface and let other programs discover and use it in a simple way. Later, a more fine-grained interface for the database service could be integrated into the OS.
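A rough sketch of the idea (all names invented): services register themselves under a generic interface name, and programs discover them at run time without the OS knowing anything about the concept:

Code:

#include <stdio.h>
#include <string.h>

/* Toy service registry - not any real OS API. A vendor registers an
 * implementation under a generic interface name; programs discover it
 * at run time. */
struct service {
    const char *interface;   /* e.g. "database" */
    void (*invoke)(const char *request);
};

#define MAX_SERVICES 8
static struct service registry[MAX_SERVICES];
static int nservices;

static void service_register(struct service s)
{
    if (nservices < MAX_SERVICES)
        registry[nservices++] = s;
}

static struct service *service_discover(const char *interface)
{
    for (int i = 0; i < nservices; i++)
        if (strcmp(registry[i].interface, interface) == 0)
            return &registry[i];
    return NULL;             /* concept not invented/installed yet */
}

/* A vendor ships this without any OS changes. */
static void acme_db(const char *request) { printf("acme db handles: %s\n", request); }

int main(void)
{
    service_register((struct service){ "database", acme_db });

    struct service *db = service_discover("database");
    if (db)
        db->invoke("SELECT 1");
    return 0;
}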
embryo

Re: SPICE: lots of theoretical wankery that may someday be a

Post by embryo »

commodorejohn wrote:I don't understand what you're trying to say here...?
Your passion for perfection is a bit misleading. It prevents you from seeing good solutions.
commodorejohn wrote:But there's nothing a computer can do to make someone stop behaving in a reckless or ignorant fashion other than to let them fail and learn from their failure
It can just as well be "fail and refuse to continue using your OS".

A computer (programs, to be precise) can teach by more than just showing users their own miserable failures.
commodorejohn wrote:Cathedral/bazaar/whatever methodology has nothing to do with the ability to design an intuitive yet powerful operating system.
Without a good methodology you will end up with a counterintuitive and very weak OS.
commodorejohn wrote:It's entirely possible for a corporation to develop an accessible but flexible OS (Amiga! BeOS!) and it's equally possible for a community of independent developers to work on the "ordinary users are idiots" philosophy and create software suitable only for idiots
OK, do you have a corporation for this purpose? If not, then please don't refuse to see the real world of independent developers working on the "ordinary users are idiots" philosophy. Or you will fail miserably.
commodorejohn wrote:But the way I see it, having some idea of where you want to go when you start is a good way to reduce your chances of getting aimlessly lost along the way
We still have no idea what "the way you see it" actually is - only some trivial philosophy. It would be better to share your ideas instead of only exploiting others' feedback.

I see you are very angry about the inconsistencies of existing systems, but you fail to offer anything useful to fight them.
AndrewAPrice
Member
Posts: 2299
Joined: Mon Jun 05, 2006 11:00 pm
Location: USA (and Australia)

Re: SPICE: lots of theoretical wankery that may someday be a

Post by AndrewAPrice »

commodorejohn - Talking about alternatives and how things could be better is great. We all love great debates and new ideas. However, if you do criticize something - a standard practice, someone's project, current ways of doing things - you really need to focus on presenting an alternative, rather than just talking about why current methods are bad. If you do not do this, what you are saying can easily be misinterpreted as either an offensive attack or simply whining, and that's when things generally get bad on this forum and your thread gets locked. I hope that doesn't become the case, so please think through what you're about to post. If you focus your discussions around 'here's an idea that I think could make things better' rather than 'here's a list of things that suck' people will respond a lot more positively.
My OS is Perception.
commodorejohn
Posts: 11
Joined: Sat Jul 05, 2014 9:31 pm
Location: Duluth, MN

Re: SPICE: lots of theoretical wankery that may someday be a

Post by commodorejohn »

embryo wrote:Your passion for perfection is a bit misleading. It prevents you from seeing good solutions.
No, what I'm not seeing there is the point you're trying to make.
embryo wrote:Without a good methodology you will end up with a counterintuitive and very weak OS.
But you're classifying these methodologies as "good" or "bad" on the basis of your ideology, not on the basis of whether they give good results. That's just politics. And in any case, even the "right" methodology isn't a sure guarantee of a good product.
embryo wrote:OK, do you have a corporation for this purpose? If not, then please don't refuse to see the real world of independent developers working on the "ordinary users are idiots" philosophy. Or you will fail miserably.
I don't have a corporation, but I don't see how that's relevant. And if distancing myself from developers who think that ordinary users are idiots means failure, well, I'd rather fail than come up with a piece-of-crap system that condescends to its users.
embryo wrote:We still have no idea what "the way you see it" actually is - only some trivial philosophy. It would be better to share your ideas instead of only exploiting others' feedback.

I see you are very angry about the inconsistencies of existing systems, but you fail to offer anything useful to fight them.
As I've said, I'm getting to that. I have other demands on my time, you know.
Computers: Amiga 1200, DEC VAXStation 4000/60, DEC MicroPDP-11/73
Synthesizers: Roland JX-10/MT-32/D-10, Oberheim Matrix-6, Yamaha DX7/FB-01, Korg MS-20 Mini, Ensoniq Mirage/SQ-80, Sequential Circuits Prophet-600, Hohner String Performer
iansjack
Member
Posts: 4689
Joined: Sat Mar 31, 2012 3:07 am
Location: Chichester, UK

Re: SPICE: lots of theoretical wankery that may someday be a

Post by iansjack »

iansjack wrote:Classic Mac OS, AmigaOS, Beos, RiscOS, all these wonderful OSs were niche designs running on very limited hardware. All of them were so poorly designed that they could not adapt to new hardware. You value architecture independence and yet you give no regard to an OS that runs on everything from handhelds to mainframes.
commodorejohn wrote:Actually, all of the above were quite adaptable to new hardware (the Amiga less so than the others, but still.) Three of the four have even made successful jumps to completely different architectures, in one case through sheer community willpower.
Successful jumps to one other processor. As opposed to Linux....

And where are those wonderful OSs now - to all intents and purposes, dead. The users have voted with their feet.