Message ID | alpine.DEB.2.20.1707070844100.11769@east.gentwo.org (mailing list archive) |
---|---|
State | New, archived |
On Fri, Jul 7, 2017 at 6:50 AM, Christoph Lameter <cl@linux.com> wrote:
> On Thu, 6 Jul 2017, Kees Cook wrote:
>
>> Right. This is about blocking the escalation of attack capability. For
>> slab object overflow flaws, there are mainly two exploitation methods:
>> adjacent allocated object overwrite and adjacent freed object
>> overwrite (i.e. a freelist pointer overwrite). The first attack
>> depends heavily on which slab cache (and therefore which structures)
>> has been exposed by the bug. It's a very narrow and specific attack
>> method. The freelist attack is entirely general purpose since it
>> provides a reliable way to gain arbitrary write capabilities.
>> Protecting against that attack greatly narrows the options for an
>> attacker, which makes attacks more expensive to create and possibly
>> less reliable (and reliability is crucial to successful attacks).
>
> The simplest thing here is to vary the location of the freelist pointer.
> That way you cannot hit the freepointer in a deterministic way.
>
> The freepointer is put at offset 0 right now. But you could put it
> anywhere in the object.
>
> Index: linux/mm/slub.c
> ===================================================================
> --- linux.orig/mm/slub.c
> +++ linux/mm/slub.c
> @@ -3467,7 +3467,8 @@ static int calculate_sizes(struct kmem_c
>  		 */
>  		s->offset = size;
>  		size += sizeof(void *);
> -	}
> +	} else
> +		s->offset = s->size / sizeof(void *) * <insert random chance logic here>
>
>  #ifdef CONFIG_SLUB_DEBUG
>  	if (flags & SLAB_STORE_USER)

I wouldn't mind having both mitigations, but this alone is still open to
spraying attacks. As long as an attacker's overflow can span an entire
object, they can still hit the freelist pointer (which is especially true
with small objects). With the XOR obfuscation they have to know where the
pointer is stored (usually not available, since they have only been able
to arrange "next object is unallocated" without knowing _where_ it is
allocated) and the random number (stored separately in the cache).

If we also added a >0 offset, that would make things even less
deterministic, though I wonder if it would make the performance impact
higher. The XOR patch right now is very light.

Yet another option would be to move the freelist pointer over by
sizeof(void *) and add a canary to be checked at offset 0, but that
involves additional memory fetches and doesn't protect against a bad
array index attack (rather than a linear overflow).

So, I still think the XOR patch is the right first step. We could further
harden it, but I think it's the place to start.

-Kees
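For context on the XOR patch being discussed, the following is a stand-alone
userspace sketch, not the actual mm/slub.c change: the stored freelist pointer
is XORed with a per-cache random value and with the address of the slot it
lives in, so an overwrite made without knowledge of the secret cannot produce
a usable pointer. The struct and function names (demo_cache,
obfuscate_freeptr, deobfuscate_freeptr) are illustrative only.

#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-in for a kmem_cache carrying a per-cache secret
 * (what the patch calls s->random), fixed at cache creation time. */
struct demo_cache {
	uintptr_t random;
};

/* Store side: obfuscate the next-free pointer before writing it into
 * the free object.  Mixing in the slot address means the same "next"
 * value encodes differently in every object. */
static uintptr_t obfuscate_freeptr(const struct demo_cache *s, void *next,
				   void *slot)
{
	return (uintptr_t)next ^ s->random ^ (uintptr_t)slot;
}

/* Load side: reverse the transform when walking the freelist. */
static void *deobfuscate_freeptr(const struct demo_cache *s, uintptr_t stored,
				 void *slot)
{
	return (void *)(stored ^ s->random ^ (uintptr_t)slot);
}

int main(void)
{
	struct demo_cache s = { .random = (uintptr_t)0x5aa5f00dcafeb0baULL };
	char object[64], next_object[64];

	/* The "freelist pointer" lives somewhere inside the free object. */
	void *slot = object;
	uintptr_t stored = obfuscate_freeptr(&s, next_object, slot);

	printf("real next    = %p\n", (void *)next_object);
	printf("stored value = 0x%lx (meaningless without the secret)\n",
	       (unsigned long)stored);
	printf("recovered    = %p\n", deobfuscate_freeptr(&s, stored, slot));
	return 0;
}

An attacker who can only write past the end of an adjacent object would have
to guess both the per-cache secret and the slot address to plant a value that
survives the reverse transform.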
On Fri, 7 Jul 2017, Kees Cook wrote:

> If we also added a >0 offset, that would make things even less
> deterministic, though I wonder if it would make the performance impact
> higher. The XOR patch right now is very light.

There would be barely any performance impact if you keep the offset within
a cacheline, since most objects start on a cacheline boundary. The
processor has to fetch the cacheline anyway.
On Fri, Jul 7, 2017 at 10:06 AM, Christoph Lameter <cl@linux.com> wrote:
> On Fri, 7 Jul 2017, Kees Cook wrote:
>
>> If we also added a >0 offset, that would make things even less
>> deterministic, though I wonder if it would make the performance impact
>> higher. The XOR patch right now is very light.
>
> There would be barely any performance impact if you keep the offset within
> a cacheline, since most objects start on a cacheline boundary. The
> processor has to fetch the cacheline anyway.

Sure, this seems like a nice additional bit of hardening, even if we're
limited to a cacheline. I'd still want to protect against the spray and
index attacks (which the XOR method covers), but we can do both. We should
keep them as distinct patches, though.

If you'll Ack the XOR patch, I can poke at adding offset randomization?

-Kees
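To make the offset-randomization idea concrete, here is a similarly hedged,
stand-alone sketch (again not kernel code): pick a word-aligned position for
the free pointer somewhere within the first cacheline of the object, so its
location stops being deterministic while the access stays on a line the
allocator touches anyway. CACHELINE_BYTES and pick_freeptr_offset() are
made-up names, and rand() stands in for whatever per-cache randomness a real
patch would use.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define CACHELINE_BYTES 64	/* assumed L1 cacheline size */

/* Choose a word-aligned offset for the free pointer within the first
 * cacheline of the object (or within the object itself, if smaller). */
static size_t pick_freeptr_offset(size_t object_size)
{
	size_t span = object_size < CACHELINE_BYTES ? object_size
						    : CACHELINE_BYTES;
	size_t slots = span / sizeof(void *);	/* candidate positions */

	if (slots <= 1)
		return 0;			/* too small to randomize */
	return (size_t)(rand() % (int)slots) * sizeof(void *);
}

int main(void)
{
	size_t sizes[] = { 8, 32, 64, 192, 4096 };
	size_t i;

	srand((unsigned int)time(NULL));
	for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
		printf("object size %4zu -> free pointer at offset %zu\n",
		       sizes[i], pick_freeptr_offset(sizes[i]));
	return 0;
}

Every candidate offset still leaves room for the pointer itself (offset plus
sizeof(void *) never exceeds the object), and because all candidates sit in
the first line, the fetch cost matches Christoph's observation above.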