bzt wrote:eekee wrote:I was thinking "swapping", but I used the word "paging"
Oh, I was replying to ~; interestingly, that's how I understood what you wrote.
Probably because you also used the phrase "swap in Linux".
Ah!
bzt wrote:eekee wrote:because doesn't swap require page tables anyway?
Actually no. The first time-sharing systems "swapped" out entire processes. In DOS, there was a technique where you loaded portions of code from disk as they were needed. Of course it wasn't called swapping back then, it was called overlays, and it was the app, not the OS kernel, that did the swapping, but the technique was essentially the same.
So when it was swapping, it wasn't called swapping? *ducks*
Terminology is a crazy thing, isn't it?
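For anyone who hasn't met overlays before, here's a rough sketch of the same idea expressed on a modern POSIX system rather than in real-mode DOS. Everything specific in it is invented for illustration ("overlay.bin", the entry point at offset 0); a real DOS overlay manager worked through a fixed overlay area and linker-generated stubs, not mmap.

```c
/* Sketch only: map a file of position-independent machine code into memory
 * when it's needed, call into it, then unmap it.  "overlay.bin" and the
 * entry point at offset 0 are hypothetical. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

typedef int (*entry_fn)(int);

int main(void)
{
    int fd = open("overlay.bin", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* "Swap in": the code occupies memory only while it's mapped. */
    void *code = mmap(NULL, st.st_size, PROT_READ | PROT_EXEC, MAP_PRIVATE, fd, 0);
    if (code == MAP_FAILED) { perror("mmap"); return 1; }
    close(fd);

    /* Object-to-function pointer conversion is a POSIX-ism, not strict ISO C. */
    entry_fn entry = (entry_fn)code;
    printf("overlay returned %d\n", entry(42));

    munmap(code, st.st_size);   /* "swap out" */
    return 0;
}
```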
bzt wrote:eekee wrote:Overall, I think it's always possible to find a way to do every task with static buffers
I'm not sure about this. Think about a compiler which has to keep a list of defined functions as it parses the source, for example. If you allocate that list statically, you'll end up with an upper limit on the number of functions it can compile. I have never seen that in practice (although it's possible). I mean, we shouldn't forget that not everything can be swapped out. It would be highly inefficient if the list of functions (or symbols) had to be swapped in and out for every single token. Therefore, while in theory yes, you could use static buffers only, in practice it's not viable.
I did mean it like your latter example, the list of functions being swapped in and out. It would be very slow, but I think it was a suitable way to do some tasks on the limited machines of the 70s and 80s. Perhaps not compiling; if the symbol table is bigger than available memory, the program likely would be too. Perhaps databases.
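Something like this sketch is what I had in mind, assuming a flat file of fixed-size symbol records (the layout and file name are made up): the whole list lives on disk and only one statically-allocated chunk of it is ever in memory. Painfully slow per lookup, but no dynamic allocation anywhere.

```c
/* Sketch: look a name up in a symbol list that may be far larger than RAM,
 * using only a fixed static buffer.  A full scan per lookup, but no malloc.
 * The record layout and "symbols.dat" are made up. */
#include <stdio.h>
#include <string.h>

struct sym {                 /* fixed-length record */
    char name[56];
    unsigned long value;
};

static struct sym buf[64];   /* the only working memory: one static buffer */

long lookup(const char *path, const char *name)
{
    FILE *f = fopen(path, "rb");    /* e.g. "symbols.dat" */
    if (!f) return -1;

    size_t n;
    while ((n = fread(buf, sizeof buf[0], 64, f)) > 0) {   /* swap a chunk in */
        for (size_t i = 0; i < n; i++) {
            if (strcmp(buf[i].name, name) == 0) {
                fclose(f);
                return (long)buf[i].value;
            }
        }
    }                                                      /* chunk discarded: "swapped out" */
    fclose(f);
    return -1;
}
```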
That reminds me, databases were typically indexed, generating smaller files which could be searched faster. That's another way to help with memory management issues. I think Unix .a files have an index within the file, perhaps at the front. They're libraries for static linking, and can be seen as databases of code. In fact, this concept is huge...
@Everybody:
Structure data so you can make the code efficient. When you can't, consider generating indexes. I'm sure I've read articles and statements by well-known professionals on the power of structuring data for efficient processing. I'm sorry I can't remember any of the details now, but I'm sure it's a very powerful tool. Indexes are one example. Another is the fixed-length fields and records of (popular) databases.
(This post is clearly too long, bold alone doesn't make text stand out at all. Sorry.)
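To make the index idea a little more concrete, here's a rough sketch; the file layout and names are invented for the example. With fixed-length records, an index only has to map a key to a record number, and fetching a record becomes a single seek rather than a scan of the whole data file. The .a symbol table does essentially the same job for the linker: a small table at the front saying which member to seek to.

```c
/* Sketch: fixed-length records plus a tiny index.  Because every record is
 * the same size, "where is record i?" is just i * sizeof(record), so an
 * index entry only needs a key and a record number.  Names are invented. */
#include <stdio.h>
#include <string.h>

struct record {              /* fixed-length record in the data file */
    char name[32];
    char phone[16];
    unsigned short age;
};

struct index_entry {         /* one entry of the (smaller) index file */
    char name[32];
    unsigned long recno;
};

/* Fetch a record by number: one seek, one read. */
int fetch(FILE *db, unsigned long recno, struct record *out)
{
    if (fseek(db, (long)(recno * sizeof(struct record)), SEEK_SET) != 0)
        return -1;
    return fread(out, sizeof *out, 1, db) == 1 ? 0 : -1;
}

/* Scan the small index for the name, then seek straight to the record. */
int lookup_indexed(FILE *idx, FILE *db, const char *name, struct record *out)
{
    struct index_entry e;
    rewind(idx);
    while (fread(&e, sizeof e, 1, idx) == 1)
        if (strcmp(e.name, name) == 0)
            return fetch(db, e.recno, out);
    return -1;
}
```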
bzt wrote:eekee wrote:Programming has got easier, requiring less training and thought
Imho this is a common misconception. Just because a language allows expressions at a higher level, it doesn't mean it requires less training and thought. I know my opinion on this goes against the mainstream (but the mainstream is starting to realize I'm right). I've seen way too many shitty apps in java and especially with jQuery. Imho a good programmer can write good code in a lower level language as well as in a higher level language, while a bad (untrained/unthoughtful) programmer can only write bad code in a higher level language. Therefore it is not true that programming got easier, it's just something big companies want you to believe (because they are making money off selling their programming tools). A couple of years ago everybody was hiring Indian programmers because they were uneducated and cheap. Now that trend has stopped, because everybody has realized that it doesn't matter if they use a modern language like java, go or rust; if you want quality results, you need well-educated programmers no matter what. The same happened with HTML. Do you remember Netscape Composer and Dreamweaver? At first everybody thought that high quality WYSIWYG editors would render manual HTML coding obsolete. But they did not.
Maybe I'm a bit out of date. In fact, companies in general realising the value of good programmers is news to me, and it's good. All the same, I question how practical it is, and how long it will last. "Making programming easier" has been a huge drive since the 50s and 60s. It's not because of the companies selling tooling, it's because of the need; good programmers are too rare. Take my generation for example. Lots of kids wanted to be programmers. Competition and innovation saturated our schooling; science fiction saturated our entertainment. Computers were the pinnacle of innovation at that time. In business, entertainment, schooling, and the space program, computers were the cutting edge. If you could program a computer, just about everyone was impressed! Education was well-funded. We had been focussed, as much as the world can focus children, to fit into the dot-com bubble, which started about 3 years after we left high school. It wasn't enough. The dot-com bubble was characterised by the mass hiring of incompetent programmers because there weren't enough good programmers to go around.
The Soviet Union had the same problem. They were better than the West at using the human resources they had; no gender bias, no age bias, but it wasn't enough. They had teams of little old ladies programming computers back when computers still needed to be run in chilled rooms -- the little old ladies wore fingerless gloves to type. I have no doubt they were well-trained, the Soviet Union was very big on education. It wasn't enough. Starting in the late 50s, they stole IBM designs and secretly bought IBM software because they couldn't develop enough software on their own. (IBM was quite supportive, believe it or not! Most of the trades took place in Hong Kong.)
What about us, developing operating systems solo? Well, "easier" doesn't really apply in the same way, but it does still affect us. If we're working to a standard, we need to implement features and support languages which people in the past thought would make programming easier. If we're developing our own designs, it's wise to develop systems which will make future development and design quicker -- which is another definition of easier, one which is not at all in conflict with what you're saying, bzt.
bzt wrote:eekee wrote:For another example, Plan 9's window system is quite wasteful of memory, but very easy to program. In the early 90s it must have been limited to just black and white, but give it a couple hundred megabytes and it's fine with color. Windows PCs of the era were already showing 256 colors, but their window system was harder to program.
I see what you mean. There's truth in it: nowadays we don't have to code so carefully because there are more resources. But I don't think it's a good idea in the long run. The same thinking was used in the 80's (waste resources just because we can), and look where it got us in a short 40 years: it has accumulated into what is very likely a mass extinction event with global warming. To give you a closer example: Win10 and Linux boot considerably slower on a cutting-edge modern computer than DOS did on an average 386 PC. Which is imho insane, because today's computers are at least a hundred thousand times faster, but modern OSes provide far less than a hundred times more features compared to DOS. They should boot at least a thousand times faster, not slower...
I wonder, perhaps this is a result of failing to structure data for efficient processing. In many cases, it's fairly clear that slow program start-up happens because the data for a program is indexed or re-structured while the program is starting up. The big argument for doing it that way is that it eliminates the risk of supplying the wrong index for the data.
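A rough sketch of the obvious middle ground, with hypothetical paths and a stubbed-out rebuild step: keep the generated index on disk, and only re-build it at start-up when the data is actually newer than the index. Most start-ups then hit the fast path, and you still can't end up running against a stale index.

```c
/* Sketch: only re-index at start-up when the data file is newer than the
 * index.  Paths and rebuild_index() are hypothetical stand-ins. */
#include <stdio.h>
#include <sys/stat.h>

/* Hypothetical stand-in: a real version would parse the data and write the index. */
static int rebuild_index(const char *datapath, const char *idxpath)
{
    FILE *f = fopen(idxpath, "wb");
    if (!f) return -1;
    fprintf(f, "# index of %s (stub)\n", datapath);
    fclose(f);
    return 0;
}

int ensure_index(const char *datapath, const char *idxpath)
{
    struct stat d, i;

    if (stat(datapath, &d) != 0)
        return -1;                       /* no data: nothing to index */

    /* Rebuild only if the index is missing or older than the data. */
    if (stat(idxpath, &i) != 0 || i.st_mtime < d.st_mtime) {
        fprintf(stderr, "index stale, rebuilding\n");
        return rebuild_index(datapath, idxpath);
    }
    return 0;                            /* index is current: fast path */
}
```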
But now that you've used booting as an example, I'm specifically wondering why Plan 9 boots more slowly than FreeDOS on my OS-dev box. FreeDOS boots faster even with a 5 second delay for a menu. It's especially surprising because booting was a key part of how Plan 9 is designed to be used. By design, Plan 9 users work from diskless workstations. Logging in is part of the boot process, and logging out is accomplished by switching the terminal off. (This is mind-blowing to many!) (It's possible to use it in a more monolithic way, but that's the design.)
Now I need to investigate this. I vaguely remember Plan 9 booting faster in the distant past. ... Ah-hah! Part of the delay appears to be starting the filesystem. That's a rarely-done task in a normal Plan 9 network; the file server is the hub of the network, so it stays up. (Mine's a monolithic installation.) It's not the whole story... Okay, it helps to plug the ethernet in, of course. The most noticeable remaining delay is polling EHCI before bringing up the filesystem. DOS doesn't do USB, and I don't think that old fast-booting Plan 9 could boot from it. (I'd like to avoid it.) I guess there's more. Maybe it's crept in as 9front has implemented fixes and support for modern hardware. Maybe I could slim down my boot scripts.
Edit: I just noticed I don't load a CD/DVD-ROM driver in FreeDOS because its search is slow. There's no such drive in the machine, anyway.
bzt wrote:Cheers,
bzt
Cheers! Replying to this was stimulating.