Old vs New, are we really improving?


Old vs New, are we really improving?

Post by SoulofDeity »

Sometimes when I look at some of our old ways of doing things in comparison to the new ways, I wonder... wtf were they smoking?

*exaggeration*

But in all seriousness, I'm going to point out a list of things that rattle my brain as to why they've changed, ramble on and complain about those changes a bit, and then ask for your opinion on them. The first thing I'll be talking about is...


Overlays vs Virtual Memory
--------------------------
Ok, we've been using virtual memory for a pretty long time now. So long now that most people don't even know what an overlay is anymore. For those who already know what they are, go ahead and skip to the next paragraph. Overlays are binary files that overlap each other in memory, allowing the user to load applications that exceed the memory constraints of the device they're using. Overlays that overlap each other obviously can't be loaded at the same time, though this can be overcome by making them relocatable.
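
To make that concrete, here's a minimal sketch of the idea (the names and the load address below are made up for illustration, not taken from any real system):

```c
/* Minimal overlay-loading sketch (hypothetical names and load address).
 * Two or more overlays share the same region of RAM; only one can be
 * resident at a time, so the requested one is copied in on demand. */
#include <stddef.h>
#include <string.h>

#define OVERLAY_BASE ((void *)0x00200000)   /* shared load address (example) */

struct overlay {
    const void *image;   /* overlay image kept in ROM or on disk */
    size_t      size;    /* size of the image in bytes */
};

static const struct overlay *resident;      /* overlay currently in RAM */

void overlay_load(const struct overlay *ov)
{
    if (resident == ov)
        return;                             /* already loaded */
    memcpy(OVERLAY_BASE, ov->image, ov->size);
    resident = ov;
}
```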

Why does this bother me, though? Firstly, there's the matter of performance. While you can blatantly ignore the RAM limits of the device you're programming for with virtual memory, using paging to treat ROM as RAM won't make it as fast as RAM. "So what?", you say. "It's not like you notice the difference." Sure, with an average application the result may not seem noticeable, but for increasingly large applications, it is. Overlays, however, run entirely in RAM and are only loaded as the application needs them, reducing the memory requirement and increasing performance. In fact, embedded devices today still use overlays.

Secondly, there's the matter of complication. While overlays have their own drawbacks, virtual memory is far more complicated. You must initialize complicated page tables and directories, and managing these pages is anything but simple. Overlays are much simpler: you just copy them to their virtual start address and they're good to go. A drawback, though, is that overlays require the user to divide their application into separate files. I for one don't see that as a problem. With languages such as C++ and Java, we're already dividing our applications into classes. It's just a matter of looking at them and saying: well, I'm obviously not going to be loading class "eggnog" (overlay 2) until I have class "thermos" (overlay 1), so I'll just keep it in the "fridge" (ROM) for now.
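
Continuing the sketch above (use_thermos()/use_eggnog() are hypothetical placeholders for overlay code), the "eggnog"/"thermos" decision just becomes an explicit load call at the point each module is needed:

```c
/* Hypothetical usage of the sketch above: bring each module into the
 * shared region only when the program actually reaches the code that
 * needs it. */
extern const struct overlay thermos_overlay;   /* "overlay 1" */
extern const struct overlay eggnog_overlay;    /* "overlay 2" */
extern void use_thermos(void);
extern void use_eggnog(void);

void run(void)
{
    overlay_load(&thermos_overlay);
    use_thermos();                     /* code/data from overlay 1 */

    overlay_load(&eggnog_overlay);     /* evicts overlay 1 from the region */
    use_eggnog();                      /* code/data from overlay 2 */
}
```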

Lastly, there's the matter of portability. With virtual memory, applications must be loaded by the operating system and are therefore system dependent, even if they make absolutely no external calls at all. Overlays, however, are much simpler. You just load them into RAM and adjust their relocations. In fact, you can use overlays in your everyday programs in place of DLLs, ensuring that your plugins will work regardless of what operating system the user is on (depending on the content of the plugin).
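
And "adjust their relocations" is less scary than it sounds; here's a sketch with a made-up relocation-table format (not any real object format):

```c
/* Sketch of a relocation pass (made-up table format): the overlay image
 * carries a list of offsets at which absolute addresses must be rebased
 * to wherever the overlay actually landed in RAM. */
#include <stdint.h>
#include <stddef.h>

struct reloc_table {
    size_t   count;
    uint32_t offsets[];        /* offsets of absolute pointers in the image */
};

void overlay_relocate(uint8_t *load_addr, uintptr_t link_base,
                      const struct reloc_table *rt)
{
    for (size_t i = 0; i < rt->count; i++) {
        uintptr_t *slot = (uintptr_t *)(load_addr + rt->offsets[i]);
        *slot = *slot - link_base + (uintptr_t)load_addr;   /* rebase pointer */
    }
}
```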

----

So, I leave with these final questions:

1.) Did your views on virtual memory change at all?

2.) Given the pros and cons of each, do you consider the switch to virtual memory an improvement?

3.) Do you plan on using overlays in the future for any purpose?

Re: Old vs New, are we really improving?

Post by NickJohnson »

Even if you have enough RAM to run all of your applications together, virtual memory still is massively useful, because it allows you to protect them from each other at basically zero overhead. Memory protection is an absolute must-have. I don't think anyone wants to go back to the days when you could accidentally crash your whole system by running an unprivileged program with a simple memory bug.
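
To illustrate the point (simplified x86-style flags for a 32-bit page-table entry; a sketch, not a complete definition): a process only ever gets the mappings the kernel creates for it, so a wild pointer lands on a non-present entry and page-faults instead of corrupting another process.

```c
/* Why paging gives isolation almost for free: every access goes through
 * entries like this, and anything the kernel never mapped simply isn't
 * present in the process's address space. */
#include <stdint.h>

#define PTE_PRESENT  0x001u   /* page is mapped */
#define PTE_WRITABLE 0x002u   /* writes allowed */
#define PTE_USER     0x004u   /* accessible from user mode (ring 3) */

uint32_t make_user_pte(uint32_t phys_frame)
{
    return (phys_frame & 0xFFFFF000u) | PTE_PRESENT | PTE_WRITABLE | PTE_USER;
}
```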

Virtual memory is also not more complicated, and executables using virtual memory are no more dependent on the system to relocate them than ones using overlays. That's really more a matter of the executable format. It also depends on where you think the complexity is worse: virtual memory may be slightly more complex for the OS designer, but overlays are a pain for the user and compiler writer.

Re: Old vs New, are we really improving?

Post by gravaera »

Yo:

The key point is that whatever complication is introduced by virtual memory schemes and address space linearization technologies is introduced in the right place and the burden is placed on the right people: it is introduced in the kernel, and the kernel developers are the ones who have to deal with the complexity. That means that the rest of the world can tunnel-vision on more important things, like actually getting userspace applications developed.

One small group of specialists handles kernel space, and then userspace devs can do their job without having to worry about system-level fragmentation schemes. It's a win for everyone other than kernel developers -- and I would even say that it is a win for kernel developers because it gives them something more to play with and adds more fun to the job.

--Peace out,
gravaera

Re: Old vs New, are we really improving?

Post by Love4Boobies »

I didn't bother to read the whole thing as it seems to mostly be a rant that misses the point of virtual memory. Virtual memory brings COW to the table (e.g., for really fast implementations of fork), helps to efficiently handle fragmentation (overlays would require moving things around), simplifies compilers, can catch certain errors via exceptions, helps with protection (although alternatives do exist---but overlays alone cannot achieve this), etc.
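
For the COW point, the core of it is roughly this on a write fault (a rough sketch with hypothetical kernel helper names, not any real API):

```c
/* Copy-on-write sketch: fork() shares every page read-only instead of
 * copying; the first write from either process faults, and only the
 * faulting page gets copied. */
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 4096

struct page { int refcount; };
struct addr_space;                   /* per-process page tables (opaque here) */

/* assumed primitives provided elsewhere by the (hypothetical) kernel */
extern void         map_page(struct addr_space *as, uintptr_t va,
                             struct page *pg, int writable);
extern struct page *alloc_page(void);
extern void        *page_data(struct page *pg);

void cow_write_fault(struct addr_space *as, uintptr_t va, struct page *pg)
{
    if (pg->refcount == 1) {         /* sole owner: just make it writable again */
        map_page(as, va, pg, 1);
        return;
    }
    struct page *copy = alloc_page();                /* copy only this one page */
    memcpy(page_data(copy), page_data(pg), PAGE_SIZE);
    copy->refcount = 1;
    pg->refcount--;
    map_page(as, va, copy, 1);
}
```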

Also, why would you assume people here don't know about overlays? It's not nice to call people ignorant.

EDIT: FUUU-, I just noticed gravaera mentioned some of the things I did. Sorry for repeating.
"Computers in the future may weigh no more than 1.5 tons.", Popular Mechanics (1949)
[ Project UDI ]

Re: Old vs New, are we really improving?

Post by SoulofDeity »

Love4Boobies wrote:I didn't bother to read the whole thing as it seems to mostly be a rant that misses the point of virtual memory. Virtual memory brings COW to the table (e.g., for really fast implementations of fork), helps to efficiently handle fragmentation (overlays would require moving things around), simplifies compilers, can catch certain errors via exceptions, helps with protection (although alternatives do exist---but overlays alone cannot achieve this), etc.

Also, why would you assume people here don't know about overlays? It's not nice to call people ignorant.

EDIT: FUUU-, I just noticed gravaera mentioned some of the things I did. Sorry for repeating.

I never assumed people here don't know about them; I stated that, generally, most programmers don't know about them. You guys do bring up some valid points, though.

Much of my problem with the way things are done nowadays is just the laziness of it all. In the past, programs were simpler, cleaner, and more elegant. People actually had to think. There have been many times I've thought about going all the way back to Windows 3 and making all my software from scratch. A sorta nostalgic, yet new feeling, you know?

Re: Old vs New, are we really improving?

Post by FallenAvatar »

SoulofDeity wrote:
Love4Boobies wrote:...
I never assumed people here don't know about them; I stated that, generally, most programmers don't know about them. You guys do bring up some valid points, though.

Much of my problem with the way things are done nowadays is just the laziness of it all. In the past, programs were simpler, cleaner, and more elegant. People actually had to think. There have been many times I've thought about going all the way back to Windows 3 and making all my software from scratch. A sorta nostalgic, yet new feeling, you know?
First off, the laziness that you refer to is on your end, assuming a general-purpose OS dev project. Part of OS dev is learning anything and everything that affects your design plans and implementation, so your lack of knowledge here is not a valid point of argument. Your view of the past, where programs were simpler, cleaner, and more elegant, is completely subjective, so I can't comment there.

However, if you are making (or thinking about making) an OS where you are the sole designer and programmer, including the programming of user apps, then relying on a system like overlays, segmentation, or other memory-management schemes that offer no or limited memory protection is not an issue. But most people here assume you are designing a general-purpose OS in which other developers will design and implement their own apps, and in that case memory protection is an absolute requirement. And in this day and age, there is no reason not to design/implement a general-purpose OS except in rare situations (embedded development, etc.).

Outside of those exceptions, overlays are effectively useless.

- Monk

Re: Old vs New, are we really improving?

Post by linguofreak »

SoulofDeity wrote:Ok, we've been using virtual memory for a pretty long time now. So long now that most people don't even know what an overlay is anymore. For those who already know what they are, go ahead and skip to the next paragraph. Overlays are binary files that overlap each other in memory, allowing the user to load applications that exceed the memory constraints of the device they're using. Overlays that overlap each other obviously can't be loaded at the same time, though this can be overcome by making them relocatable.

Why does this bother me, though? Firstly, there's the matter of performance. While you can blatantly ignore the RAM limits of the device you're programming for with virtual memory, using paging to treat ROM as RAM won't make it as fast as RAM. "So what?", you say. "It's not like you notice the difference."
Actually, what I'd say is more like "So what? You hardly ever access ROM after boot time these days".
Sure, with an average application the result may not seem noticeable, but for increasingly large applications, it is. Overlays, however, run entirely in RAM and are only loaded as the application needs them, reducing the memory requirement and increasing performance.
But that's just the thing: Much of the reason we don't use overlays these days is that we're no longer nearly as memory constrained as we used to be. Reducing memory requirements is not a huge priority.

Furthermore, virtual memory and overlays (if you insist that you need them) are not mutually exclusive, and depending on the details of your platform it might be possible, or even sensible, to implement your overlays using virtual memory. For example, on a system with a fairly large physical address space, but a very constrained virtual address space, like some PDP-11 models that had 4MiB physical / 64KiB virtual, a program with an executable size larger than the virtual address space might have several overlay modules that would all be present in physical memory at the same time, but would be switched in and out of the virtual address space as needed.
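
In a setup like that, an "overlay switch" is just a remapping rather than a copy; as a rough sketch (remap_window() is a hypothetical MMU helper, and the addresses are made up):

```c
/* Sketch of "overlays via virtual memory": every overlay module stays
 * resident in physical memory, and a fixed window of the small virtual
 * address space is remapped onto whichever module is needed. */
#include <stddef.h>
#include <stdint.h>

#define OVERLAY_WINDOW_VA   0x8000u    /* window inside a 64 KiB space */
#define OVERLAY_WINDOW_SIZE 0x2000u    /* 8 KiB window */

extern void remap_window(uintptr_t va, uintptr_t pa, size_t len);

static uintptr_t current_module;       /* physical address of the mapped module */

void overlay_switch(uintptr_t module_pa)
{
    if (current_module == module_pa)
        return;
    remap_window(OVERLAY_WINDOW_VA, module_pa, OVERLAY_WINDOW_SIZE);
    current_module = module_pa;        /* no copying, just new mappings */
}
```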

Also, depending on how you define "overlay", modern paged swapping implementations can be seen as a more sophisticated and flexible version of overlays powered by virtual memory.
Secondly, there's the matter of complication. While overlays have their own drawbacks, virtual memory is far more complicated.
Overlays are simpler for the system programmer (maybe). For the application programmer, however, they are much more complicated. From some of the things you've said, I'm thinking (forgive me if I'm wrong) that you probably have a fair bit more experience programming embedded devices and/or single-tasking personal systems (Apple II, DOS-era PCs, etc.) than multitasking systems. For embedded and single-tasking systems, the line between application and system programming is much fuzzier, so you might not see quite as much of the application-level downside of overlays as someone who has done more work with multitasking systems.
Lastly, there's the matter of portability. With virtual memory, applications must be loaded by the operating system and are therefore system dependent, even if they make absolutely no external calls at all.
A full application is *certain* to make external calls on a multitasking system (though an overlay or DLL might not), because the OS won't allow it direct access to the hardware, so if it doesn't make calls to the OS, it won't be able to interact with the user, read or write anything from/to disk, etc. And it's better, portability-wise, to be OS-dependent than hardware-dependent anyway, and you're going to end up being one or the other if your program isn't completely solipsistic (though technically the CPU architecture will always give you some hardware dependency even then).
Overlays, however, are much simpler. You just load them into RAM and adjust their relocations. In fact, you can use overlays in your everyday programs in place of DLLs, ensuring that your plugins will work regardless of what operating system the user is on (depending on the content of the plugin).

----

So, I leave with these final questions:

1.) Did your views on virtual memory change at all?
Nope.
2.) Given the pros and cons of each, do you consider the switch to virtual memory an improvement?
Very much so, yes.
3.) Do you plan on using overlays in the future for any purpose?
Not for any serious purpose. I've already considered trying to implement a 16-bit protected mode or v86 mode operating system that uses overlays to fit everything into a 64k or 1M virtual address space purely for the sake of fun and anachronism. But I'd never use overlays on a modern system with modern amounts of memory.

Re: Old vs New, are we really improving?

Post by linguofreak »

SoulofDeity wrote:Much of my problem with the way things are done nowadays is just the laziness of it all. In the past, programs were simpler, cleaner, and more elegant.
From what I've seen, they certainly were simpler (because the memory constraints that existed in the past limited complexity), but not cleaner or more elegant. In fact, a lot of stuff in that era was full of dirty kludges just to be able to fit in 64k, 640k, or whatever the limit was on the system in question.

Re: Old vs New, are we really improving?

Post by bewing »

I'm on SoulOfDeity's side on this one. Bloat = laziness. And I see it EVERYWHERE these days.
A linear algebra library that consists of 50MB of source is not clean or elegant. A compiler that consists of over 256M of source just sucks, period. A computer in 1995 did almost everything your machine does today, but did it in under 8M of RAM. And size IS directly related to speed and efficiency. It astonishes me that even people on here don't realize that 99% of their CPU power is being wasted on low-efficiency code and algorithms. Don't people want to have machines that run 100 times faster, just by fixing the code? Laziness.

Re: Old vs New, are we really improving?

Post by linguofreak »

bewing wrote:It astonishes me that even people on here don't realize that 99% of their CPU power is being wasted on low-efficiency code and algorithms.
Except on my machine, 95-ish% of my CPU power is wasted because the machine is waiting for user input, or waiting for the disk, or waiting for the network.
Don't people want to have machines that run 100 times faster, just by fixing the code? Laziness.
No, because the code is plenty fast enough as it is. While people gripe about "bloat", our CPUs spend most of their time halted, operating in their lowest clock-speed power state.

Yes, a 486 with 8 megs of RAM would choke on Ubuntu 12.04, let alone a modern Windows version.

No, that's not relevant on a quad-core i7 with 8 gigs.

Re: Old vs New, are we really improving?

Post by Griwes »

The problem is that, let me quote klange from IRC:
Software gets slower faster than hardware gets faster
Not that we are "there" already, but we are approaching, and if the trend doesn't change, we will reach "there" soon.

Re: Old vs New, are we really improving?

Post by linguofreak »

Griwes wrote:The problem is that, let me quote klange from IRC:
Software gets slower faster than hardware gets faster
Not that we are "there" already, but we are approaching, and if the trend doesn't change, we will reach "there" soon.
Given the amount of idle time my computer generally has, I'm not sure that's true. Of course, some day Moore's law will give out and computer performance will level off. At that point software will continue to grow until it saturates computer performance, at which point performance will regain prominence as a criterion for software design and bloat will be squeezed out.

Re: Old vs New, are we really improving?

Post by iansjack »

bewing wrote:A computer in 1995 did almost everything your machine does today, but did it in under 8M of RAM.
On what level do I even start discussing how incorrect that statement is?

I can't be bothered - it is so fundamentally flawed that it is not worth the effort. (He said, whilst browsing the Internet and listening to some music, while a large compile takes place in a VM. Meanwhile, the latest version of Fedora is downloading.)

Re: Old vs New, are we really improving?

Post by Brendan »

Hi,
linguofreak wrote:
bewing wrote:It astonishes me that even people on here don't realize that 99% of their CPU power is being wasted on low-efficiency code and algorithms.
Except on my machine, 95-ish% of my CPU power is wasted because the machine is waiting for user input, or waiting for the disk, or waiting for the network.
You're right - software fails to use idle time effectively. For example, while I'm writing this (between handling key presses) my computer should be doing spell-checking, trying to find ways to improve executable code, optimising physical memory usage, optimising disk usage (de-fragmenting?), compressing data that's not likely to be used soon (and decompressing data that is more likely to be used soon), etc. A CPU is only ever truly idle when software developers have failed to make use of spare CPU time.
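
Conceptually, the idle loop stops being a plain halt and becomes a queue of low-priority maintenance work; a minimal sketch (hypothetical names):

```c
/* Idle-time work sketch: instead of halting, drain a queue of deferred
 * background tasks and only halt when there is truly nothing left to do. */
struct task {
    void (*run)(void);
    struct task *next;
};

static struct task *background_queue;          /* spell-check, defrag, ... */

extern void cpu_halt_until_interrupt(void);    /* hypothetical HLT wrapper */

void idle_loop(void)
{
    for (;;) {
        struct task *t = background_queue;
        if (t) {
            background_queue = t->next;
            t->run();                          /* do deferred background work */
        } else {
            cpu_halt_until_interrupt();        /* nothing queued: now truly idle */
        }
    }
}
```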

You're right - bad/inefficient file formats and bad/inefficient networking protocols make the CPUs spend more time waiting for IO, and this adds to the performance problem. The most common cause is using a "human readable" format for things that humans should never need to read, which increases sizes by an order of magnitude for no sane reason (and also means extra processing time is consumed for parsing). A simple example of this is SVG (or anything that uses XML, or anything that was designed by W3C).
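
As a toy illustration of the size difference (a made-up record, not SVG itself):

```c
/* The same 2-D point as XML-ish text vs. as a packed binary struct. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct point { int16_t x, y; };   /* 4 bytes */

int main(void)
{
    const char  *xml = "<point x=\"1037\" y=\"-242\"/>";   /* 26 bytes of text */
    struct point bin = { 1037, -242 };

    printf("text: %zu bytes, binary: %zu bytes\n", strlen(xml), sizeof bin);
    return 0;
}
```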


Cheers,

Brendan

Re: Old vs New, are we really improving?

Post by OSwhatever »

The question is how virtual memory scales with larger memories. As memory becomes larger, we need up to five levels of page tables, which increases the time spent on a TLB miss as well as the overall complexity. Inverted page tables lose their advantage as memory size goes up, so the regular hierarchical page table becomes more attractive again. As I see it, page tables feel like a flash translation layer: a workaround to hide the drawbacks of a technology.
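
For reference, each extra level is one more dependent memory access on a TLB miss; a simplified sketch of a five-level walk (x86-64-like layout with 4 KiB pages and 9 index bits per level, assuming identity-mapped physical memory so the table pointers can be followed directly):

```c
/* A TLB miss on an N-level hierarchical table costs one dependent memory
 * access per level, so five levels means five loads before the real access. */
#include <stdint.h>

#define LEVELS      5
#define PTE_PRESENT 0x1u

uint64_t translate(const uint64_t *top_table, uint64_t vaddr)
{
    const uint64_t *table = top_table;
    for (int level = LEVELS - 1; level > 0; level--) {
        unsigned idx = (vaddr >> (12 + 9 * level)) & 0x1FF;  /* 9 bits per level */
        uint64_t pte = table[idx];                           /* one memory access */
        if (!(pte & PTE_PRESENT))
            return 0;                                        /* not mapped */
        table = (const uint64_t *)(uintptr_t)(pte & ~0xFFFULL);
    }
    uint64_t pte = table[(vaddr >> 12) & 0x1FF];             /* final (5th) access */
    return (pte & PTE_PRESENT) ? ((pte & ~0xFFFULL) | (vaddr & 0xFFF)) : 0;
}
```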

Virtual memory does give us a lot of useful features; however, I'm not sure it is the way to go in the future. Per-object protection reduces development time and bugs, so why not use it all the way, with OS support?

The obstacle, I think, is language support for bare bones in order to create something like this. C/C++ pretty much use a flat memory model, while Java runs in a virtual environment instead. Maybe we need a middle way.

We need some sort of compiler that operates on descriptors rather than addresses to begin with. I know that Watcom can use a segmented model, but I'm not sure if it is useful in this case.

Virtual memory is so entrenched now that we lack a lot of tools, both SW and HW.