Re: Understanding memory allocation for a process manually?
Posted: Fri Feb 19, 2016 4:32 am
ok mate....
Don't act in a bad manner when someone gives a simple answer to your question.

manoj9372 wrote: I didn't ask that question in a bad sense, I just wanted to understand how a programmer calculates the MEMORY REQUIREMENTS for a process?
Basically, the person who programmed the code running in that process decided it needs "x" amount of memory. Brendan explained (in great detail) how memory allocation itself works - like how a car is able to move. But asking on what basis the process knows that it needs "x" amount of memory is like me asking you on what basis you know where to drive your car.
That's what my question is, kindly don't take it in a bad manner...
Ask better questions and be nice.

manoj9372 wrote: I want to know on what basis the process knows that it needs "x" amount of memory?
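To make the earlier answer concrete, here's a minimal C sketch (the record type and count are invented for illustration) of what "the programmer decided" means in practice - the memory requirement is simply computed from the size of the job the code was written to handle:

Code: Select all
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical example: the memory a process asks for is whatever the
 * programmer's own arithmetic says the job needs. */
struct record { int id; double value; };

int main(void)
{
    size_t count = 1000;                            /* input size, chosen by the program */
    struct record *r = malloc(count * sizeof *r);   /* "x" = count * sizeof(struct record) */
    if (r == NULL) {                                /* the allocator/OS may say no */
        fprintf(stderr, "out of memory\n");
        return 1;
    }
    /* ... use r[0] .. r[count - 1] ... */
    free(r);
    return 0;
}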
Agreed, though there's something that I can't quite put my finger on that bothers me a bit about that...

Brendan wrote: Hi,
Yes, but...
From this post:
"When the kernel is running out of free physical RAM, it sends a "free some RAM" message (including how critical the situation is) out to processes that have requested it. VFS responds by freeing "less important" cached file data. A file system might do the same for its meta-data cache. Web browser might respond by dumping cached web page resources. DNS system might do the same for it's domain name cache. A word processor might have enough data to allow the user to undo the last 5000 things they've done, and might respond by reducing that so the user can only undo the last 3000 things they've done."
With a system like this; if you don't allow over-commitment you don't waste resources as much - those resources are still being used for caches and other things.
To me; this is the right way to do things - don't allowing over-commit, except for resources that can be revoked.
Of course most existing OSs (and things like POSIX) don't have any way for the kernel to ask for resources back, so they're stuck with the "waste resource or over-commit" compromise.
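For illustration, a rough sketch of the word-processor example from that quote. The notification mechanism and all names here are invented (the actual message API isn't specified above), so treat it as a sketch of the idea rather than a real interface:

Code: Select all
#include <stdio.h>

/* Hypothetical urgency levels carried by the kernel's "free some RAM"
 * message; a real design would define its own. */
enum pressure { PRESSURE_LOW, PRESSURE_CRITICAL };

static size_t undo_depth = 5000;   /* how many actions the user can undo */

/* Called when the (hypothetical) "free some RAM" message arrives. */
static void on_free_ram(enum pressure level)
{
    size_t new_depth = (level == PRESSURE_CRITICAL) ? 500 : 3000;
    if (new_depth < undo_depth) {
        /* a real program would free the oldest undo records here */
        undo_depth = new_depth;
        printf("undo history trimmed to %zu entries\n", undo_depth);
    }
}

int main(void)
{
    on_free_ram(PRESSURE_LOW);       /* mild memory pressure */
    on_free_ram(PRESSURE_CRITICAL);  /* critical memory pressure */
    return 0;
}

The point being that the memory never sat idle - it held undo history right up until the kernel asked for it back.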
Not sure if it's changed, but previously there were quite a few things that affected what gets printed where: whether it's a physical console vs. an SSH shell, whether it's root or a normal user, etc.

linguofreak wrote: As for system messages coming up while you're working in a terminal, Linux will print all sorts of errors to the system console regardless of what you may be doing on that virtual terminal. My box is currently spewing errors about a failed CD drive that I haven't had the time to open up the machine to disconnect. An OOM situation is the next thing to a kernel panic / bluescreen, both of which will happily intrude while you're doing other things, so I don't see any big problem with the OOM killer doing the same thing.
The OS doesn't, but every app that requests memory does, right? And sooner or later that's going to be every one of them, right? Obviously there are things such as SNMP that ease monitoring servers, but the point is that the server has gotten itself into a mess; stopping to wait for a user in a data center with 10k real servers and 100k+ virtual servers isn't really feasible. Of course, a configurable OOM killer might easily be a solution here - automatic on servers, ask the user on desktops - but if the manual version isn't absolutely needed, then I might prefer consistency.

linguofreak wrote: The OS doesn't need to totally grind to a halt. It will have to stall on outstanding allocations until the user makes a decision or a process ends on its own or otherwise returns memory to the OS, but processes that aren't actively allocating (or are allocating from free chunks in their own heaps rather than going to the OS for memory) can continue running. Now, if a mission-critical process ends up blocking on an allocation, yes, you have a problem, and a user OOM killer might not be appropriate for situations where this is likely to cause more trouble than an OOM situation generally does in the first place.
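As a toy illustration of "stall on outstanding allocations" - this only simulates the behaviour over a fake fixed-size pool with pthreads, it is not a real kernel interface:

Code: Select all
#include <pthread.h>
#include <stdlib.h>

static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  freed = PTHREAD_COND_INITIALIZER;
static size_t pool_free = 1024 * 1024;   /* pretend "free physical RAM" */

void *pool_alloc(size_t n)
{
    pthread_mutex_lock(&lock);
    while (pool_free < n)                 /* OOM: stall the caller instead of failing */
        pthread_cond_wait(&freed, &lock);
    pool_free -= n;
    pthread_mutex_unlock(&lock);
    return malloc(n);                     /* stand-in for handing out real pages */
}

void pool_release(void *p, size_t n)
{
    free(p);
    pthread_mutex_lock(&lock);
    pool_free += n;
    pthread_cond_broadcast(&freed);       /* wake every stalled allocator */
    pthread_mutex_unlock(&lock);
}

Anything that never calls pool_alloc() keeps running; anything that allocates during OOM sleeps until someone releases - which is exactly where a blocked mission-critical process becomes the problem.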
True, but couldn't the SSH daemon then just pre-allocate memory and allow inbound connections? If the connection isn't from one of the "root" users, kick them back out, and maybe enforce stricter time-outs, etc. My point was more along the lines that while you can do this, you may need to do it for every tool, and it becomes impractical.

linguofreak wrote: For a server, your OOM killer could actually make use of a reverse-SSH protocol where the OOM killer makes an *outbound* ssh-like connection, using pre-allocated memory, to a machine running server management software, which could then send alerts to admins' cell phones, take an inbound SSH connection from an administrator's workstation (or phone), and pass input from the administrator to the OOMed server and output from the server back to the administrator.
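Something like the following, presumably - a hypothetical daemon pinning an emergency buffer at startup (the size and function name are invented; mlock() itself is the standard POSIX call):

Code: Select all
#include <string.h>
#include <stdlib.h>
#include <sys/mman.h>

#define EMERGENCY_BYTES (256 * 1024)   /* guess at what the admin path needs */

static char *emergency;

/* Reserve memory up front so an admin/management path still works
 * once the system is out of memory. */
int reserve_emergency_memory(void)
{
    emergency = malloc(EMERGENCY_BYTES);
    if (emergency == NULL)
        return -1;
    memset(emergency, 0, EMERGENCY_BYTES);      /* touch pages so they're really committed */
    return mlock(emergency, EMERGENCY_BYTES);   /* keep them resident from now on */
}

And as said, every tool that must stay usable under OOM would need its own version of this.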
Are you sure? I haven't really thought about every possible scenario, but is there some reason this is the case? There might be some really special case, like calculating "The Answer to the Ultimate Question of Life, the Universe, and Everything", but for everything in the normal world, are there cases that aren't bounded by memory and runtime?

linguofreak wrote: Not every process has bounded memory requirements. Not every process has bounded runtime. Daemons are generally supposed to keep running indefinitely.