OS in cache?
Posted: Thu Nov 06, 2008 4:29 pm
by 01000101
Would it be possible to load an OS (even the simplest/smallest) fully into cache (as in L1/L2/L3)? Are there any direct methods of manipulating those hardware-based cache locations? Also, I don't have the Intel manuals in front of me, but how does the processor know what to cache, and how does it go about doing so?
Also, caches are usually enabled by default, yes? Or is that only the L1, with the rest left to the operator/programmer?
Sorry for the large number of questions and the small amount of real input; this is a very unfamiliar area to me, I'm not around any of my books, and I only have a short amount of time online before I must attend to other matters.
I think it would be a very cool experiment.
Re: OS in cache?
Posted: Thu Nov 06, 2008 5:28 pm
by Troy Martin
I think L1 is for memory locations used by the processor, and L2/L3 are for programmer use.
Not sure about that, or about the default enables.
Re: OS in cache?
Posted: Thu Nov 06, 2008 8:50 pm
by Love4Boobies
I'd say no. But it depends what you mean by *simple*.
Re: OS in cache?
Posted: Thu Nov 06, 2008 10:12 pm
by froggey
01000101 wrote:would it be possible to load an OS (even the simplest/smallest) fully into cache (as in L1/L2/L3)?
L2 caches on modern processors are quite big (2MB/4MB on Intel's Core 2 processors), so it's possible to fit quite a bit of the kernel into the L2 cache.
01000101 wrote:Is there any direct methods of manipulating those hardware-based cache locations?
You can use the prefetch instructions to hint that data is going to be used soon, or instructions like clflush, wbinvd & invd to flush/invalidate what's in the cache. There are also instructions like sfence/lfence/mfence to control the ordering of memory accesses. However, as far as I'm aware there are no instructions that say "always keep the code/data at address x in cache" or similar.
01000101 wrote:Also, caches are usually enabled by default yes? or is that only the L1 and the rest are left to the operator/programmer?
No: when the processor boots, the caches are disabled by the CD (Cache Disable) and NW (Not Write-through) bits in CR0, but the BIOS will usually enable them at startup, so you don't really need to worry about them. Check the section on processor initialisation in Volume 3.
You'll probably want to check out the chapter on optimizing cache usage in the optimization reference.
Edit: There's also this thread that popped up recently: http://forum.osdev.org/viewtopic.php?f=1&t=18329
Re: OS in cache?
Posted: Fri Nov 07, 2008 3:56 am
by tsp
At least on ARM processors, I would expect that loading an entire (very small) OS into cache would mean that the cache needs to be fully associative (instead of X-way set associative), so that any cache line can store the contents of any memory location!
froggey wrote:I'm aware there are no instructions that say "Always keep the code/data at address x in cache" or similar.
ARM processors have a feature called cache lockdown. With cache lockdown you can lock code and data into the I$ or D$. Normally, cache lockdown is used to provide predictable code behaviour in embedded systems, for example to hold IRQ routines in cache.
Re: OS in cache?
Posted: Fri Nov 07, 2008 6:31 am
by bewing
On Intel machines you use the MTRR set of MSRs to control caching behavior, based on physical memory address. In general, yes -- if your kernel is small and used often, then it will stay cache-resident. You don't need to do anything to "control" this -- it's best just to let the caches do their thing automatically; they know what to do.
Just make an effort to avoid TLB invalidations/flushes.
Re: OS in cache?
Posted: Fri Nov 07, 2008 7:26 am
by Brendan
Hi,
The simple answer is "yes, it's possible". The coreboot project uses this to initialize RAM - they call it "CAR" (or Cache As RAM).
The problem (for OS developers) is that the amount of stuff you can lock into the cache is small compared to the amount of stuff that wouldn't be locked into the cache; and with caches disabled, the large amount of stuff that's not in the cache will be accessed *slowly*. Also, CPUs are designed to cache the most recently used stuff, prefetch stuff, and hide the cost of cache misses (e.g. using out-of-order execution to execute other instructions until the data needed is available); so the performance advantage you get will probably be a lot smaller than you think.
Basically you wouldn't get much performance improvement from locking things into cache and it'd cause a large performance problem for everything else.
Cheers,
Brendan