Memory size limit of malloc per process in Linux (no swap)
- Member
- Posts: 70
- Joined: Tue Jul 14, 2020 4:01 am
- Libera.chat IRC: clementttttttttt
Search StackOverflow for questions like this.
CuriOS: A single address space GUI based operating system built upon a fairly pure Microkernel/Nanokernel. Download latest bootable x86 Disk Image: https://github.com/h5n1xp/CuriOS/blob/main/disk.img.zip
Discord: https://discord.gg/zn2vV2Su
Re: Memory size limit of malloc per process in Linux (no swap)
32-bit or 64-bit?
- Member
- Posts: 5512
- Joined: Mon Mar 25, 2013 7:01 pm
Re: Memory size limit of malloc per process in Linux (no swap)
No single call to malloc() can allocate more than PTRDIFF_MAX - 1 bytes. This is a limitation of GCC, Clang, and any C library designed to work with them. Multiple calls to malloc() can be used to allocate more than that as long as you don't hit one of the other limits.
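For illustration, a hedged sketch (it assumes a glibc-style malloc() on 64-bit Linux, and the compiler may warn about the oversized constant request):

Code: Select all
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    /* A single request above PTRDIFF_MAX: glibc-style allocators refuse it
     * and set errno to ENOMEM. */
    size_t huge = (size_t)PTRDIFF_MAX + 1;
    if (!malloc(huge))
        printf("malloc(%zu) failed: %s\n", huge, strerror(errno));

    /* The same total spread over smaller calls can succeed, until some other
     * limit (address space, overcommit, RLIMIT_AS, ...) is hit.
     * The blocks are intentionally leaked to keep the demo short. */
    size_t total = 0;
    for (int i = 0; i < 1024; i++) {        /* aim for 1 GiB in 1 MiB chunks */
        if (!malloc(1 << 20))
            break;
        total += 1 << 20;
    }
    printf("allocated %zu bytes across multiple calls\n", total);
    return 0;
}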
Different CPU architectures have different user address space limits. For example, on 32-bit MIPS, a user program can never address more than 2GiB of memory. Some of this address space is already in use by your program and your C library.
Typical malloc() implementations use the kernel's lazy allocation, which may overcommit. Linux allows the overcommit strategy to be configured at runtime. If you attempt to use lazy-allocated memory and no memory is available to back it, the OOM killer will start killing processes. I'm not sure if there are any malloc() implementations that use eager allocation, or if it would be possible to make one using the available Linux system calls.
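To see the lazy-allocation behaviour, a hedged demo (the 8 GiB figure is arbitrary; pick something larger than your free RAM, and with no swap expect the OOM killer to step in while the pages are being touched):

Code: Select all
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t sz = (size_t)8 << 30;   /* 8 GiB; adjust to exceed free RAM */
    char *p = malloc(sz);
    if (!p) {
        perror("malloc");          /* strict overcommit may refuse up front */
        return 1;
    }
    puts("malloc() succeeded, but the pages are not necessarily backed yet");

    /* Writing to the memory forces the kernel to actually provide pages;
     * with no swap and not enough RAM, this is where the OOM killer strikes. */
    memset(p, 0xA5, sz);
    puts("all pages touched");
    free(p);
    return 0;
}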
Re: Memory size limit of malloc per process in Linux (no swap)
Octocontrabass wrote: I'm not sure if there are any malloc() implementations that use eager allocation, or if it would be possible to make one using the available Linux system calls.
No, it is not. mmap() has flags that provide hints as to whether there should be memory behind the maps, but those are nonstandard and only hints, so they don't force anything. The keyword is "memory overcommit". By default, Linux uses heuristic overcommit, where it will allow up to 50% more memory to be mapped than is actually available (IIRC, writing this from memory and being slightly inebriated). The other settings are "full overcommit", where it will allow anything that fits into virtual memory, and "strict overcommit", where it will not allow a single page beyond what fits into physical memory and swap. These settings are only selectable on a system-wide basis through a proc pseudo-file.
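A hedged sketch of both points (I'm assuming MAP_NORESERVE is one of the hint flags meant here, and that the proc pseudo-file is /proc/sys/vm/overcommit_memory, where 0 = heuristic, 1 = always overcommit, 2 = strict):

Code: Select all
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    /* Read the system-wide overcommit policy: 0 heuristic, 1 always, 2 strict. */
    FILE *f = fopen("/proc/sys/vm/overcommit_memory", "r");
    if (f) {
        int mode;
        if (fscanf(f, "%d", &mode) == 1)
            printf("vm.overcommit_memory = %d\n", mode);
        fclose(f);
    }

    /* MAP_NORESERVE merely hints that no swap/commit charge should be reserved;
     * whether it has any effect depends on the overcommit mode. */
    size_t sz = (size_t)1 << 30;   /* 1 GiB anonymous mapping */
    void *p = mmap(NULL, sz, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (p == MAP_FAILED)
        perror("mmap");
    else
        munmap(p, sz);
    return 0;
}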
Carpe diem!
Re: Memory size limit of malloc per process in Linux (no swap)
Octocontrabass + 1.
If you are talking about the user-level malloc() API, then this usually relies on the C library runtime. If I remember correctly, some Unix libraries used the sbrk system call to implement malloc, but I am not completely sure that is still the case and I'm too lazy to look it up.
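For illustration only, a toy sbrk()-based bump allocator in that old style (it never frees and ignores alignment; real libraries like glibc or musl do far more, mostly via mmap()/brk() internally):

Code: Select all
#define _DEFAULT_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

/* Grow the program break by n bytes and hand out the old break as the block. */
static void *toy_malloc(size_t n)
{
    void *p = sbrk((intptr_t)n);
    return p == (void *)-1 ? NULL : p;
}

int main(void)
{
    char *buf = toy_malloc(64);
    if (!buf)
        return 1;
    snprintf(buf, 64, "block at %p", (void *)buf);
    puts(buf);
    return 0;
}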
--Thomas