
Assuming a large amount of RAM (200mb+)

Posted: Sun Sep 07, 2003 6:32 am
by Perica
..

Re:Assuming a large amount of RAM (200mb+)

Posted: Sun Sep 07, 2003 9:47 am
by Whatever5k
I don't think it is a good idea at all. There are still quite a few PCs with less than 200MB of RAM.

Re:Assuming a large amount of RAM (200mb+)

Posted: Sun Sep 07, 2003 10:08 am
by Schol-R-LEA
Probably not, and in the end, it probably isn't necessary, either. Even Windows XP, which is several times the size of any OS anyone here is likely to build, will run in 128MB with virtual memory disabled - I know, I've had to do it once on my machine while fixing a VM problem. It ran, and even loaded up a firewall, the anti-virus daemon, and the disk defragmenter (the XP defrag component does a less than desirable job of compacting the disk, for reasons I'm not entirely sure I agree with, but that's a different story). I don't know if it could run in 64MB (the official minimum memory requirements) without VM, but I'm willing to bet that it could if you stripped it down to the core functionality.

Windows is extremely large, for a variety of reasons - it has a tremendous amount of legacy material to support, it supports a vast number of hardware and software configurations, and it provides an enormous number of features. Even Linux, which is huge itself and is the result of millions of man-hours of work, is considerably smaller than Windows XP. Any OS that is written by independent developers must have a smaller memory footprint than Windows, if only because there simply isn't enough manpower and time with which to write that much code. Unless your memory management is so poor that your kernel leaks like a sieve, you probably won't even come close to using 4MB any time soon, never mind 256MB. I expect early versions of Thelema to run within 4MB, and that's taking into account the fact that it will be written partly in a LISP dialect.

In any case, what really eats memory is not the OS itself, but the applications - especially applications with large datasets that have to be kept in memory, such as web browsers, A/V players, image editors, and games. Since these are themselves large, complex pieces of software, it is likely that by the time you'll be writing things like that for your OS, you should have the memory management well in hand.

Why do you ask, anyway? Are you concerned about memory leaks, or having trouble implementing paging, or what? The reason it comes up may point to the solution of the real problem.

BTW, Perica, I know that it's been mentioned before, but you really need to get a grasp of the powers of 2. Memory modules always come in power-of-two sizes; currently that usually means 64MB, 128MB, 256MB, and 512MB (1024MB modules are just starting to appear on the market, while 32MB and smaller have largely vanished). Thus, while you might find total memory sizes of 64MB, 128MB, 192MB (128MB + 64MB), 256MB, 320MB (256MB + 64MB), 384MB (256MB + 128MB), 512MB, 576MB (512MB + 64MB), etc., you are not likely to find 200MB or 250MB.
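
If it helps, here is that arithmetic spelled out as a toy C program (purely illustrative, nothing more):

Code:
#include <stdio.h>

int main(void)
{
    /* Common module sizes of the day, all powers of two (in MB). */
    const int sizes[] = { 64, 128, 256, 512 };
    const int n = sizeof sizes / sizeof sizes[0];

    for (int i = 0; i < n; i++) {
        printf("%d\n", sizes[i]);                 /* one module installed  */
        for (int j = i; j < n; j++)
            printf("%d\n", sizes[i] + sizes[j]);  /* two modules installed */
    }
    return 0;   /* 64, 128, 192, 256, 320, 384, ... appear; 200 and 250 never do */
}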

Re:Assuming a large amount of RAM (200mb+)

Posted: Sun Sep 07, 2003 1:02 pm
by one
Yeah, I agree with their views; not everyone has that much memory anyway. For example, back here computers are not that cheap, and neither is memory. So I think your OS should support at least 64MB machines. 8)

Re:Assuming a large amount of RAM (200mb+)

Posted: Mon Sep 08, 2003 12:26 am
by Perica
..

Re:Assuming a large amount of RAM (200mb+)

Posted: Mon Sep 08, 2003 12:28 am
by mystran
Actually, while a small microkernel might need only 16KB for its binary and static data, it might still use much more dynamic memory.

Linux 1.2 booted in 2MB (yeah, I've done it), the 2.0 series needed 4MB, and while a 2.4 image is not necessarily much larger, it's not supposed to boot in under 8MB.

It's often easy to spend some memory for speed. Say, if you want the kernel heap present in all page directories, you might want to preallocate the page tables for 512MB (or 1GB) of address space, which means you need 512KB (or 1MB) more to boot, but it saves you the trouble of versioning the page directories, since you only need to modify mappings inside the shared page tables.
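
Roughly, a sketch of what I mean, in C (this is not real code from my kernel; alloc_frame(), KHEAP_BASE and the 512MB figure are just assumptions for illustration):

Code:
#include <stdint.h>

#define PAGE_SIZE       4096u
#define ENTRIES_PER_PT  1024u                /* 32-bit x86, 4KB pages */
#define KHEAP_BASE      0xC0000000u          /* assumed kernel heap start */
#define KHEAP_SIZE      (512u << 20)         /* 512MB of kernel heap */
#define KHEAP_PTS       (KHEAP_SIZE / (ENTRIES_PER_PT * PAGE_SIZE))   /* 128 */

extern uint32_t alloc_frame(void);   /* returns the physical address of a zeroed 4KB frame */

/* Physical addresses of the shared kernel-heap page tables: 128 * 4KB = 512KB. */
static uint32_t kheap_pt_phys[KHEAP_PTS];

void kheap_preallocate(void)
{
    for (uint32_t i = 0; i < KHEAP_PTS; i++)
        kheap_pt_phys[i] = alloc_frame();    /* the 512KB spent up front */
}

/* Every page directory points at the same page tables, so a later change to a
 * heap mapping is visible in every address space at once -- no need to walk
 * and patch each page directory separately. */
void map_kernel_heap(uint32_t *page_dir)
{
    uint32_t first_pde = KHEAP_BASE >> 22;   /* one PDE covers 4MB */
    for (uint32_t i = 0; i < KHEAP_PTS; i++)
        page_dir[first_pde + i] = kheap_pt_phys[i] | 0x3;   /* present | writable */
}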

Similarly, if you want to have a static table for processes, and another for all possible physical pages, all of that needs memory too.
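
For example (made-up sizes, just to show where the memory goes):

Code:
#include <stdint.h>

#define MAX_PROCS   1024
#define PHYS_MEM    (512u << 20)            /* assume we size for 512MB */
#define PAGE_SIZE   4096u
#define NUM_FRAMES  (PHYS_MEM / PAGE_SIZE)  /* 131072 frames */

struct process { uint32_t pid, state, page_dir, regs[16]; };  /* 76 bytes each */
struct frame   { uint16_t refcount, flags; };                 /* 4 bytes each  */

static struct process proc_table[MAX_PROCS];    /* ~76KB, reserved at link time */
static struct frame   frame_table[NUM_FRAMES];  /* 512KB just to describe 512MB of RAM */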

The thing is, it's usually much easier to spend a few megabytes here and there than to make one's kernel small.

Also, if you think most people are running with 256MB+ of memory: you're going to spend some memory on managing things anyway, and assuming a certain amount of memory will make some things easier, even if you free that memory later when it turns out not to be needed.

Say, it's slightly easier to allocate 4MB of memory for a physical memory stack, then fill the stack with pages, and then free any left-over pages from the stack, but that means the kernel won't boot in under 4MB (plus code and stack).
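
Something like this (again just a sketch, and the names are invented):

Code:
#include <stdint.h>

#define PAGE_SIZE  4096u

/* One 32-bit entry per 4KB frame: a 4MB stack can describe a full 4GB of RAM,
 * no matter how much is actually installed -- which is exactly why the kernel
 * then can't boot on a machine with less than ~4MB plus code and stack. */
static uint32_t *frame_stack;   /* the 4MB region grabbed at boot */
static uint32_t  stack_top;     /* number of free frames currently on the stack */

void pmm_init(uint32_t *stack_region, uint32_t mem_bytes)
{
    frame_stack = stack_region;
    stack_top   = 0;

    /* Push every usable frame; a real kernel would skip its own image,
     * the stack region itself, and reserved BIOS areas. */
    for (uint32_t addr = 0; addr < mem_bytes; addr += PAGE_SIZE)
        frame_stack[stack_top++] = addr;

    /* The part of the 4MB stack that covers memory this machine doesn't have
     * could now be handed back -- the "free any left-over pages" step. */
}

uint32_t pmm_alloc_frame(void)         { return stack_top ? frame_stack[--stack_top] : 0; }
void     pmm_free_frame(uint32_t addr) { frame_stack[stack_top++] = addr; }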

Still, I can't really imagine why anyone would need 256MB to boot. The thing is, if you're assuming at least 256MB, you might be able to make some trade-offs/optimisations that benefit people with a lot of memory, even if the kernel still booted with 8MB or less.

Re:Assuming a large amount of RAM (200mb+)

Posted: Tue Sep 09, 2003 1:29 pm
by Adek336
You could not boot your OS under Bochs with a requirement of 256 megs (or you'd need a powerful computer).

Re:Assuming a large amount of RAM (200mb+)

Posted: Tue Sep 09, 2003 4:06 pm
by Schol-R-LEA
Adek336 wrote: You could not boot your OS under Bochs with a requirement of 256 megs (or you'd need a powerful computer).
Yes and no. As far as I know, in order to run the Windows version of Bochs with 256MB RAM, you would need at least 384MB total virtual memory, at least 128MB of which would have to be physical RAM; however, with a configuration like that, you'd end up with horrible thrashing. IIRC, in order to get reasonable performance from Bochs, you should have at least 128MB of physical memory above and beyond the total amount being simulated, under Windows XP; I would expect similar, though perhaps somewhat smaller, requirements under other OSes.

Re:Assuming a large amount of RAM (200mb+)

Posted: Wed Sep 10, 2003 1:39 am
by df
I don't think there is anything wrong with gunning for a large memory base.

I've restricted my OS to Pentium or better. The less legacy **** the better, IMO. I also think 128MB is a nice minimum. I don't care about the folks with 16MB 386 boxes; early Pentiums were buggy. I routinely set my VMware box to 16MB or so...

A lot of PCs still come with 128MB, which I think is a good base...

Re:Assuming a large amount of RAM (200mb+)

Posted: Fri Sep 12, 2003 5:53 am
by mystran
df wrote: I've restricted my OS to Pentium or better. The less legacy **** the better, IMO. I also think 128MB is a nice minimum. I don't care about the folks with 16MB 386 boxes; early Pentiums were buggy. I routinely set my VMware box to 16MB or so...
Basically the same thing here. I routinely set my VMware box to 256MB. Sometimes I even run two of them at the same time.

Really, 128MB for a modern OS is a joke. I don't mean that the OS shouldn't run with less, but it's reasonable to expect at least 256MB. I'll be optimizing for 512MB+, since by the time I get something to a usable point, that will be a joke too.

Re:Assuming a large amount of RAM (200mb+)

Posted: Fri Sep 12, 2003 6:48 am
by Pype.Clicker
I read something a while back about "magic numbers" that said: "Why does Excel only remember the last 5 opened files?! Why not 8 or 10? Damn, 5 isn't even a power of 2! ..."

The conclusion of that rant was that the only 'valid' magic numbers are 0, 1, 2 and N (that is, any number the system can deal with).

Why state that your system will require/assume N amount of physical memory? Can't you just read it from configuration (for instance, if the sysop does not wish to dedicate more than 8MB to file buffers) or compute it from the amount of physical memory (like the amount of memory needed to manage physical memory)?
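
For instance (just a sketch; get_config_mb() and boot_mem_bytes are stand-ins for whatever config reader and multiboot/E820 report you actually have):

Code:
#include <stdint.h>

#define PAGE_SIZE 4096u

extern uint32_t boot_mem_bytes;       /* physical memory as reported by the boot loader */
extern uint32_t get_config_mb(const char *key, uint32_t default_mb);

uint32_t frame_bitmap_bytes;          /* one bit per physical frame */
uint32_t file_cache_bytes;            /* sysop-tunable, not assumed */

void mm_autosize(void)
{
    uint32_t frames = boot_mem_bytes / PAGE_SIZE;
    frame_bitmap_bytes = (frames + 7) / 8;    /* ~32KB per GB of RAM, computed rather than guessed */

    /* e.g. "filecache=8" in the config caps file buffers at 8MB */
    file_cache_bytes = get_config_mb("filecache", 8) << 20;
}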

Now don't get me wrong: so far, Clicker will not initialize correctly on machines with less than 32MB ... but this is not by design. It's just 'waiting for some config-reading feature', and I consider it something to be improved ASAP.

Re:Assuming a large amount of RAM (200mb+)

Posted: Fri Sep 12, 2003 1:45 pm
by Schol-R-LEA
Yes, this is known as "The Zero-One-Infinity Principle", a commonly cited rule of thumb. Like much hacker lore, it is rarely explicated; most programmers pick it up by example, rather than from formal training, though at least one common textbook I know of states it outright. See here for a list of similar engineering principles.

Re:Assuming a large amount of RAM (200mb+)

Posted: Fri Sep 12, 2003 2:08 pm
by Pype.Clicker
Schol-R-LEA wrote: Yes, this is known as "The Zero-One-Infinity Principle", a commonly cited rule of thumb.
Whoops, where did my "2" come from?? LISP's cells, I guess (each list cell has two parts: the car and the cdr -- originally "Contents of the Address part of Register" and "Contents of the Decrement part of Register") ... How many children will a node of my tree have? With '2', I can have as much info as I wish in my tree simply by splitting off "a thing" and "the rest of the things" ;)
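
In C terms, the whole trick is just this (a sketch, nothing more):

Code:
#include <stdlib.h>

struct cell {
    void        *car;   /* "a thing"                */
    struct cell *cdr;   /* "the rest of the things" */
};

static struct cell *cons(void *car, struct cell *cdr)
{
    struct cell *c = malloc(sizeof *c);
    if (c) { c->car = car; c->cdr = cdr; }
    return c;
}

/* A node with N children is just cons(child1, cons(child2, ... cons(childN, NULL))),
 * so the only limits left are 0, 1 and "as many as memory allows". */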

Re:Assuming a large amount of RAM (200mb+)

Posted: Fri Sep 12, 2003 2:36 pm
by Schol-R-LEA
Well, yeah, in practice the 'Powers of 2' corollary ('if there must be an arbitrary limit that isn't specifically imposed by the structure of the problem, use either a power of 2 or one less than a power of 2') is followed more often than the rule itself is, especially with languages which don't have easy dynamic memory allocation. The Zero-One-Infinity Principle is an ideal, and like most ideals it must usually be approximated rather than achieved.

BTW, the Jargon File also has an entry on the Zero-one-infinity Principle which might be Illuminating fnord.