Re: Kernel Side-Channel "Ultra-Nuke"
Posted: Wed Mar 21, 2018 10:56 am
Brendan wrote:
For "row select" timing in the RAM chips themselves, it'd be ruined by the CPU fetching the first instruction from user-space. Doing WBINVD (invalidating all the caches) between kernel API calls would make it impossible to infer anything from preceding kernel API calls. The timing mitigation/s I mentioned should (I hope) cover the problem of using 2 kernel API functions at the same time (where one is used to infer something about the other).

A tRP + tRCD penalty can be triggered from user code itself, by accessing a virtual address range backed by large pages (or allocated "consecutively", if deterministic allocation is possible) at a fixed page stride and checking whether any row produces a row conflict. If timed accurately enough, this may reveal the location, relative to some unknown offset, of a physical page the kernel used during the last service call. It is a long shot from a full exploit, but it could feed into an extended attack plan.
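To make the probe concrete, here is a minimal sketch in C. Everything machine-specific is an assumption: it presumes x86-64 with TSC access, a 2 MiB huge page available via MAP_HUGETLB, and a made-up ROW_STRIDE; the real bank/row address bits depend on the memory controller, so in practice the stride would have to be searched for rather than hard-coded.

```c
/* Sketch of the row-conflict probe described above.
 * Assumptions: x86-64, a 2 MiB huge page (so low physical address bits
 * are known), and a hypothetical ROW_STRIDE mapping two addresses to
 * different rows of the same DRAM bank. */
#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <x86intrin.h>   /* __rdtscp, _mm_clflush, _mm_mfence */

#define ROW_STRIDE (128 * 1024)   /* guess; depends on the DRAM mapping */

static uint64_t time_pair(volatile uint8_t *a, volatile uint8_t *b)
{
    unsigned aux;
    _mm_clflush((void *)a);       /* force both loads to reach DRAM */
    _mm_clflush((void *)b);
    _mm_mfence();
    uint64_t t0 = __rdtscp(&aux);
    (void)*a;                     /* opens row A */
    (void)*b;                     /* row hit, or tRP + tRCD conflict? */
    uint64_t t1 = __rdtscp(&aux);
    return t1 - t0;
}

int main(void)
{
    uint8_t *buf = mmap(NULL, 1 << 21, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    /* A pair hitting the same bank but a different row pays the
     * precharge + activate penalty on the second access; averaging
     * many trials separates the two cases. */
    for (size_t off = ROW_STRIDE; off < (1 << 21); off += ROW_STRIDE) {
        uint64_t total = 0;
        for (int i = 0; i < 1000; i++)
            total += time_pair(buf, buf + off);
        printf("offset %6zu KiB: avg %lu cycles\n",
               off / 1024, (unsigned long)(total / 1000));
    }
    return 0;
}
```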
Brendan wrote:
For device drivers; a micro-kernel using the IOMMU (to restrict what devices can/can't access) goes a long way to prevent device drivers from using devices to figure out what the kernel did/didn't fetch into cache. Assuming that the kernel handles timers itself (e.g. and doesn't have a "HPET driver" in user-space); I'm not sure if a device driver can use a device to get better time measurements (e.g. to bypass the timing mitigation/s I mentioned).

The idea was to detect a filesystem cache hit (not a CPU cache hit), to determine whether an API reuses a storage device location for two pieces of information, relying on the I/O latency of the storage device to detect an actual read. Similarly, in a different scenario, disk seek times may approximately reveal the distances between file extents, which, I speculate, could become part of an offline attack plan.
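A sketch of the filesystem-cache side of this, under the assumption that a threshold around 100 µs separates a page-cache hit from a real device access; that number is a guess and would need per-machine calibration:

```c
/* Sketch of the filesystem-cache probe: time a small read and classify
 * it as a cache hit or a real device access by its latency. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

int main(int argc, char **argv)
{
    char buf[4096];
    if (argc < 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    uint64_t t0 = now_ns();
    ssize_t n = pread(fd, buf, sizeof buf, 0);
    uint64_t dt = now_ns() - t0;

    printf("%zd bytes in %lu ns -> %s\n", n, (unsigned long)dt,
           dt < 100000 ? "likely cache hit" : "likely device read");
    close(fd);
    return 0;
}
```

Evicting the page between trials (e.g. with posix_fadvise(POSIX_FADV_DONTNEED)) would turn this one-shot measurement into a repeatable probe.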
Brendan wrote:
You'd still need to either make a small number of "accurate enough" measurements (which would be ruined by random delays before returning from kernel to user-space) or make a large number of measurements (which the kernel could hopefully detect). If you can't do either of these things then you can't gather the data to send to a remote server to analyse.

The random jitter could be compensated for by the attacker, assuming it has a stationary mean: averaging enough samples recovers the underlying signal. Also, running a busy loop in parallel provides a coarse measure of time. I do agree that the system can abstract and protect its services better, but the programming model may have to be scrapped and core changes made at the scheduling and synchronization level (and shared memory may have to be discarded altogether). I also feel that a JIT-compiling infrastructure could hypothetically enforce such complicated constraints with a smaller performance penalty.
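Both points are easy to demonstrate. The sketch below (plain C with pthreads, compile with -pthread; victim() is a placeholder for whatever kernel call is being timed) uses a busy-loop counter thread as a coarse clock, and averages many runs so that zero-mean jitter shrinks roughly as the square root of the sample count:

```c
/* A counter thread provides a coarse monotonic timestamp even if
 * rdtsc and OS timers are degraded or fuzzed by the kernel. */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

static volatile uint64_t ticks;   /* incremented as fast as one core can */

static void *counter(void *arg)
{
    (void)arg;
    for (;;)
        ticks++;                  /* volatile: every increment is a store */
}

/* Measure one run of op() in "ticks". */
static uint64_t measure(void (*op)(void))
{
    uint64_t t0 = ticks;
    op();
    return ticks - t0;
}

static void victim(void) { /* placeholder for the timed service call */ }

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, counter, NULL);

    /* Averaging many samples compensates for stationary-mean jitter. */
    uint64_t total = 0;
    for (int i = 0; i < 10000; i++)
        total += measure(victim);
    printf("avg %lu ticks\n", (unsigned long)(total / 10000));
    return 0;
}
```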
Brendan wrote:
Note that network access would be disabled by default (a process can't send/receive packets unless admin explicitly enabled network access for that executable).

That depends. On one hand, the executable can be stopped from executing entirely. But if it is launched anyhow, then, since software today uses all sorts of online databases and servers, such a restriction may not be effective in practice (the user, or the security manifest, is likely to attest to the network usage anyway). This all depends on the system: servers will obviously be administered differently, and with a narrower purpose, than, say, home desktops.
Brendan wrote:
I'd say that if a process is intentionally malicious it should crash and burn; and if a process is unintentionally exploitable then it should crash and burn.

Ideally, yes, but it may not happen. Spyware, ransomware, and bots do not always need to escalate their privileges to do damage. Of course, escalating control over the system and becoming viral are the bigger threats, generally speaking, but some users may care less about that after the fact. In contrast, unintentionally exploitable code could assist in forcing its own crash during or after an attack, by declaring the scope of its current operation and requesting policing by the kernel.
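Narrow forms of this already exist: Linux's strict seccomp mode is probably the closest analogue, and OpenBSD's pledge() is a more flexible one. A minimal sketch using the former; after the prctl() call, any syscall outside read/write/_exit/sigreturn gets the process killed:

```c
/* Sketch of "requesting policing by the kernel" via Linux strict
 * seccomp: the process declares that the rest of its work is pure
 * computation plus read/write on already-open descriptors, so a
 * hijacked control flow crashes instead of doing damage. */
#include <linux/seccomp.h>
#include <stdio.h>
#include <sys/prctl.h>
#include <unistd.h>

int main(void)
{
    /* Declare the scope of the remaining operation. */
    if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0) {
        perror("prctl");
        return 1;
    }

    const char msg[] = "still allowed: write()\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);

    /* An exploit payload calling, say, execve() from here on would be
     * terminated with SIGKILL by the kernel. */
    _exit(0);
}
```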
Brendan wrote:
I don't think it makes any difference if a piece of software is created by a human or created by a machine - if the software is malicious or exploitable then it dies regardless of what created it.

Security over-relies (with justification) on immutability. Signing, DEP, secure boot, etc., are effective ways to limit the software zoo for software designed around the Harvard architecture. But if the software comes bundled with some sort of interpreter, as most games and a lot of office software are today, the data inside can exhibit Turing-complete behavior that must be protected as well. For a generalized learning AI, despite the usual safeguards, the state is by necessity mutable and, in a narrow sense, Turing-complete. It can certainly be exploited to re-purpose itself against its original intent.
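A toy illustration of why signing and DEP alone don't close this: the host below could be signed and W^X-clean, yet its behavior is entirely determined by a mutable byte array. The opcode set is invented for the example and deliberately tiny:

```c
/* The binary is immutable; the behavior lives in data.  Whoever
 * controls the byte string controls what the program does, with DEP
 * never violated. */
#include <stdio.h>

enum { OP_HALT, OP_INC, OP_DEC, OP_JNZ };

static int run(const unsigned char *prog, int acc)
{
    /* prog is data: a save file, a script, a model - not code pages */
    for (int pc = 0; ; ) {
        switch (prog[pc]) {
        case OP_HALT: return acc;
        case OP_INC:  acc++; pc++; break;
        case OP_DEC:  acc--; pc++; break;
        case OP_JNZ:  pc = acc ? prog[pc + 1] : pc + 2; break;
        }
    }
}

int main(void)
{
    /* "count acc down to zero" - but any attacker-supplied byte
     * string is an equally valid program */
    unsigned char loop[] = { OP_DEC, OP_JNZ, 0, OP_HALT };
    printf("%d\n", run(loop, 5));
    return 0;
}
```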