Rusky wrote: This is not a case of mere logic, this is a case of what actually happens.
I didn't say it didn't happen; I asked if anybody could point me to a time it happened that wasn't for very stupid reasons.
Quote: What significant program have you used that hasn't had security patches?
Programmer's Notepad. FCE Ultra.
Quote: Word processors get macro viruses.
Yes, and that would be a prime example of what I'm talking about when I say that it seems like most of these things stem from stupid design decisions - because seriously, how could you not realize the potential of a fully programmable and completely unsecured scripting language that can be transparently embedded into ordinary-looking documents and interface with the operating system at large?
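Just to spell out how little imagination it takes to see the problem, here's a sketch in C of the decision in question. Every name in it is made up - this is the shape of the flaw, not any real product's code:

    /* Hypothetical sketch of the design decision in question: a document
     * viewer that runs embedded macros automatically, with the user's
     * full privileges. load_embedded_macro and the shell hand-off are
     * both invented for illustration. */
    #include <stdio.h>
    #include <stdlib.h>

    /* Pretend this digs the macro stream out of a .doc-style container. */
    static const char *load_embedded_macro(const char *path) {
        (void)path;
        /* An ordinary-looking document can carry any script at all: */
        return "echo imagine-this-mailing-itself-to-your-address-book";
    }

    int main(int argc, char **argv) {
        if (argc < 2) {
            fprintf(stderr, "usage: %s document\n", argv[0]);
            return 1;
        }
        const char *macro = load_embedded_macro(argv[1]);
        if (macro)
            system(macro);  /* auto-run on open: no sandbox, no prompt,
                               no capability limits - the whole OS is in
                               reach the moment the file is opened */
        return 0;
    }

Once you've decided macros auto-run with the user's full privileges, everything after that is just a delivery mechanism.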
Quote: Do you really think developers are just dumb and ignore the "incredibly, obviously stupid" buffer overflows in their code?
Since they keep happening (Heartbleed, anyone?), I'd have to go with "yes, apparently so."
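And for anyone who hasn't actually looked at Heartbleed, the mistake really was that basic: the code trusted a length field supplied by the peer instead of the size of what the peer actually sent. A simplified sketch of the shape of it - not the real OpenSSL code:

    /* Simplified sketch of a Heartbleed-style over-read: the peer claims
     * a payload length, and the buggy code believes it. */
    #include <stdio.h>
    #include <string.h>

    struct heartbeat {
        unsigned short claimed_len;   /* attacker-controlled length field */
        unsigned char  payload[16];   /* the bytes actually received */
    };

    /* Buggy: echoes claimed_len bytes, reading past the real payload
     * into whatever sits next to it in memory (keys, passwords...). */
    static void echo_buggy(const struct heartbeat *hb, unsigned char *out) {
        memcpy(out, hb->payload, hb->claimed_len);
    }

    /* Fixed: never trust the peer's number; clamp to what was sent. */
    static void echo_fixed(const struct heartbeat *hb, unsigned char *out) {
        size_t n = hb->claimed_len;
        if (n > sizeof hb->payload)
            n = sizeof hb->payload;
        memcpy(out, hb->payload, n);
    }

    int main(void) {
        struct heartbeat hb = { 4, "ping" };
        unsigned char out[65536];
        echo_fixed(&hb, out);        /* copies 4 bytes, as it should */
        fwrite(out, 1, 4, stdout);
        (void)echo_buggy;            /* the buggy path, left unexercised */
        return 0;
    }

One missing bounds check, in code the whole internet depended on, for two years. "Dumb and ignoring it" is a fair description.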
Quote: The problem is you're not seeing the lessons as what they are, and instead seeing them as garbage, because you haven't thought the requirements through. It's this kind of pie-in-the-sky rewrite-the-universe-and-it-will-be-perfect thinking that leads to the legitimate problems you're complaining about.
If by "you're not seeing the lessons as what they are" you mean "you're not blindly accepting that the way Unix/Linux does things is always and invariably the best and only way to do it," sure. Otherwise, no, that's not what I'm doing at all.
And I'm curious as to how the idea of starting with a more or less clean slate and building a new system from the ground up is the source of the problems I've complained about, when those problems pretty much all stem from trying to kludge an existing arcane, primitive design into successive approximations of an actually modern operating system.
Brendan wrote: The differences are how we are planning to solve the complexity and "mistake detection" problems - commodorejohn seems to be going for "interpreted" (which could solve the complexity problem and should solve most "mistake detection" problems, but implies severe performance penalties).
Actually, I'm looking at interpreted systems more for the purposes of portability than security - beyond a certain level of catching access violations, I really don't think that it's feasible or wise to expect the language or runtime environment to do the developer's job of debugging for them.
(About the performance penalties, ten years ago when I was learning Java in college and marvelling over what a mess it was, I would've agreed completely. Now, though, I really think we've reached a point where a sensibly-designed VM can be quite performant enough for pretty much anything besides heavy number-crunching or high-end gaming.)
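To be clear about what I mean by "a certain level of catching access violations," it's roughly this much and no more - a bounds check that fails loudly instead of silently trampling memory. A hand-rolled illustration, not any particular VM's internals:

    /* Sketch of the one kind of checking I do want from a runtime:
     * trap the out-of-range access with a diagnostic instead of
     * silently corrupting memory. */
    #include <stdio.h>
    #include <stdlib.h>

    struct checked_array {
        int   *data;
        size_t len;
    };

    static int checked_get(const struct checked_array *a, size_t i) {
        if (i >= a->len) {                    /* the "access violation" */
            fprintf(stderr, "index %zu out of range (len %zu)\n", i, a->len);
            abort();                          /* fail loudly, right here */
        }
        return a->data[i];
    }

    int main(void) {
        int storage[4] = { 1, 2, 3, 4 };
        struct checked_array a = { storage, 4 };
        printf("%d\n", checked_get(&a, 2));   /* fine: prints 3 */
        printf("%d\n", checked_get(&a, 9));   /* trapped, not corrupted */
        return 0;
    }

That check on every access is what an interpreted runtime buys you for free; anything beyond it - finding the logic error that produced the bad index - is still the developer's job.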
Quote: The first lesson to learn is that there's a "well trodden path" that leads to the status quo, and if you attempt to leave that path (e.g. go in a direction that isn't "pointless wheel reinvention") you can expect much higher resistance.
Indeed.
Rusky wrote: Just because a solution has problems doesn't mean the problems it solves won't exist when you rewrite the universe.
Maybe so, but the fact that there will always be problems needing to be solved doesn't mean that you can't arrive at a better solution to certain problems by avoiding the poor design decisions that led to them in the first place.
Rusky wrote: People in this thread, and others with the same attitude, complain about things on a general level without offering real solutions. They say things like "ridiculously complex yet unbelievably primitive system architecture" and "complications that benefit nobody that the designer failed to avoid" without considering that the way they use a tool is not the only valid way. You can tell they've had frustrating experiences with the things they're complaining about, but they immediately jump from "it didn't do what I want" to "it's an overly complicated piece of garbage" rather than "I should probably learn more about this tool" or "I wonder how this could be improved."
Man, I spent upwards of seven years "learning more about [these tools]." All it got me was a fuller comprehension of just what a mess they are. If you want to go "works for me!" or otherwise argue that you don't care, well, fine, that's your affair. But this typical blame-the-user-for-an-overly-convoluted-design thing is the same crap I've already heard a million times, and it didn't change my mind then, either.
Quote: Linux's directory structure really does have some dark corners.
Seriously? It's all dark corners.
Quote: But I don't make huge exaggerations and declare the whole of the software universe unfit for consumption,
I never did that, and you freaking know it. What I did was suggest that Unix and Unixoids, specifically, are too much of a mess to constitute a viable base for a high-quality modern operating system, on account of their being a mainframe OS kludged up with forty years of legacy cruft - but of course dissing Unix is Nerd Heresy and cannot be tolerated, so naturally that gives everybody else license to read inventive new meanings into the things I actually did say and just generally make stuff up, and to further berate me for only complaining and not offering better ideas, when the time I'm actually able to spare to get over here is taken up pretty much completely with explaining how "no, I didn't actually say that," to the point where I simply haven't had the opportunity to sit down and explain what I think good solutions would be.
Brendan wrote: It'd be more accurate to say package management helps hide the symptoms of "dependency hell"; and hiding symptoms of a problem isn't something I'll ever be fond of (in general, hiding symptoms is something you do when you fail to solve a problem).
This.