
Interrupt mechanism corrupting the data used in the stack

Posted: Thu Jul 11, 2024 4:35 pm
by Oxmose
Hi, it's me again!

I have been fighting a stack corruption issue for two days, and I think I have found the source of my problem. However, I don't understand how to solve it.

From what I understand, on interrupt, SS, RSP, RFLAGS, CS and RIP are pushed on the stack.
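To make that layout concrete, here is how I picture the pushed frame as a C struct (just a sketch; in long mode the CPU pushes SS:RSP even without a privilege change, and the field names are mine):

```c
#include <stdint.h>

/* Sketch of the frame the CPU pushes on interrupt entry in long mode
 * (no error code). SS is pushed first and RIP last, so the new RSP
 * ends up pointing at the rip field. */
struct interrupt_frame {
    uint64_t rip;    /* lowest address, at the new RSP */
    uint64_t cs;
    uint64_t rflags;
    uint64_t rsp;    /* the interrupted stack pointer */
    uint64_t ss;     /* highest address, pushed first */
} __attribute__((packed));

/* Five qwords, i.e. 40 bytes written below the interrupted RSP. */
```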
However, I get an interrupt in the following code:

Code:

uint32_t         highPart;
uint32_t         lowPart;

/* Get time */
__asm__ __volatile__ ("rdtsc" : "=a"(lowPart), "=d"(highPart));
which compiles to the following assembly (AT&T operand order):

Code:

push   %rbp
mov    %rsp, %rbp
mov    %rdi, -0x28(%rbp)
mov    -0x28(%rbp), %rax
mov    %rax, -0x8(%rbp)
rdtsc
mov    %eax, -0xc(%rbp)
mov    %edx, -0x10(%rbp)
mov    -0x8(%rbp), %rax
mov    (%rax), %eax
And the last instruction, mov (%rax), %eax, generates a fault. Why? Because %rax (the value stored at rbp - 8) contains... 0x10, my CS selector value!

So from what I understand, the interrupt clobbers the data just below the RBP address (RBP = RSP in my case), and when I return from the interrupt, the values the C code had stored below RBP have been overwritten by the frame the CPU pushed. I checked the contents of my virtual CPU structure (where I save the registers on interrupt) and the contents of the stack at the offending line, and indeed that is exactly what happened.

I only have one privilege level (ring 0), so I use the same stack for interrupts and regular thread execution. (Maybe that is exactly what I should not do, but how can I switch stacks if I stay at the same privilege level?)
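For the record, from my reading of the manuals there is a way to force a stack switch even without a privilege change: the IST mechanism. The 64-bit TSS has seven IST slots, and when an IDT gate's 3-bit IST field is nonzero, the CPU unconditionally loads RSP from the corresponding slot. A rough sketch, with names of my own:

```c
#include <stdint.h>

/* 64-bit TSS layout (104 bytes). The seven IST slots hold the tops of
 * known-good stacks; an IDT gate with IST field n makes the CPU load
 * RSP from ist[n-1], even on a same-privilege interrupt. */
struct tss64 {
    uint32_t reserved0;
    uint64_t rsp[3];          /* RSP0-2, used on privilege change */
    uint64_t reserved1;
    uint64_t ist[7];          /* IST1-7 */
    uint64_t reserved2;
    uint16_t reserved3;
    uint16_t iopb_offset;
} __attribute__((packed));

static uint8_t irq_stack[16384] __attribute__((aligned(16)));
static struct tss64 tss;

void setup_ist(void)
{
    /* The stack grows down, so IST1 points at the top of the buffer. */
    tss.ist[0] = (uint64_t)(uintptr_t)(irq_stack + sizeof(irq_stack));
    /* Then set the IST field to 1 in each IDT gate that should use
     * this stack, and load the TSS with ltr as usual. */
}
```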

Also, note that this happened in other places in the kernel where I didn't have any inline assembly, so I doubt it comes from the use of inline assembly here.

Re: Interrupt mechanism corrupting the data used in the stack

Posted: Thu Jul 11, 2024 4:43 pm
by Oxmose
Well, I might answer myself this time.

I just learnt about the red zone: adding -mno-red-zone when compiling solved the issue. Now on to the next bug!