Message ID | 202003051624.AAAC9AECC@keescook (mailing list archive)
---|---
State | New, archived
Series | slub: Relocate freelist pointer to middle of object
On Thu, 5 Mar 2020, Kees Cook wrote:

> Instead of having the freelist pointer at the very beginning of an
> allocation (offset 0) or at the very end of an allocation (effectively
> offset -sizeof(void *) from the next allocation), move it away from
> the edges of the allocation and into the middle. This provides some
> protection against small-sized neighboring overflows (or underflows),
> for which the freelist pointer is commonly the target. (Large or well
> controlled overwrites are much more likely to attack live object contents,
> instead of attempting freelist corruption.)

Sounds good. You could even randomize the position to avoid attacks via
the freelist pointer.

Acked-by: Christoph Lameter <cl@linux.com>
From: Christopher Lameter
> Sent: 08 March 2020 19:21
>
> On Thu, 5 Mar 2020, Kees Cook wrote:
>
> > Instead of having the freelist pointer at the very beginning of an
> > allocation (offset 0) or at the very end of an allocation (effectively
> > offset -sizeof(void *) from the next allocation), move it away from
> > the edges of the allocation and into the middle. This provides some
> > protection against small-sized neighboring overflows (or underflows),
> > for which the freelist pointer is commonly the target. (Large or well
> > controlled overwrites are much more likely to attack live object contents,
> > instead of attempting freelist corruption.)
>
> Sounds good. You could even randomize the position to avoid attacks via
> the freelist pointer.

Random overwrites could be detected (fairly cheaply) by putting two
copies of the pointer into the same cacheline in the buffer.
Or better, make the second one 'pointer xor constant'.

	David
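A minimal userspace sketch of David's suggestion, to make the idea concrete. The struct layout, constant value, and function names here are illustrative assumptions, not anything from the patch or from SLUB itself: the point is only that a random overwrite is very unlikely to update both copies consistently, so a cheap XOR check catches it at pop time.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical constant for the second copy. In a real kernel
 * implementation this would presumably be a per-boot random value.
 */
#define FREELIST_XOR_CONST 0xdeadbeefcafef00dULL

/*
 * Two copies of the freelist pointer kept adjacent (i.e. in the same
 * cacheline): the raw pointer and the pointer XORed with a constant.
 */
struct freelist_slot {
	uintptr_t ptr;   /* the freelist pointer itself */
	uintptr_t check; /* ptr ^ FREELIST_XOR_CONST    */
};

static void slot_set(struct freelist_slot *s, void *next)
{
	s->ptr = (uintptr_t)next;
	s->check = s->ptr ^ FREELIST_XOR_CONST;
}

/* Returns nonzero if the two copies are still consistent. */
static int slot_valid(const struct freelist_slot *s)
{
	return (s->ptr ^ s->check) == FREELIST_XOR_CONST;
}
```

The cost Kees worries about below is the extra store on free and the extra load-plus-compare on allocate, in exchange for detecting (not just resisting) corruption.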
On Wed, Mar 11, 2020 at 02:48:05PM +0000, David Laight wrote:
> From: Christopher Lameter
> > Sent: 08 March 2020 19:21
> >
> > On Thu, 5 Mar 2020, Kees Cook wrote:
> >
> > > Instead of having the freelist pointer at the very beginning of an
> > > allocation (offset 0) or at the very end of an allocation (effectively
> > > offset -sizeof(void *) from the next allocation), move it away from
> > > the edges of the allocation and into the middle. This provides some
> > > protection against small-sized neighboring overflows (or underflows),
> > > for which the freelist pointer is commonly the target. (Large or well
> > > controlled overwrites are much more likely to attack live object contents,
> > > instead of attempting freelist corruption.)
> >
> > Sounds good. You could even randomize the position to avoid attacks via
> > the freelist pointer.
>
> Random overwrites could be detected (fairly cheaply) by putting two
> copies of the pointer into the same cacheline in the buffer.
> Or better, make the second one 'pointer xor constant'.

My sense is that this starts to stray closer to "too much overhead" vs
the mitigation benefit against known heap metadata attacks. I'm open to
seeing patches, of course, though! :)
From: Christopher Lameter
> Sent: 08 March 2020 19:21
>
> On Thu, 5 Mar 2020, Kees Cook wrote:
>
> > Instead of having the freelist pointer at the very beginning of an
> > allocation (offset 0) or at the very end of an allocation (effectively
> > offset -sizeof(void *) from the next allocation), move it away from
> > the edges of the allocation and into the middle. This provides some
> > protection against small-sized neighboring overflows (or underflows),
> > for which the freelist pointer is commonly the target. (Large or well
> > controlled overwrites are much more likely to attack live object contents,
> > instead of attempting freelist corruption.)
>
> Sounds good. You could even randomize the position to avoid attacks via
> the freelist pointer.

That's a good point. "offset" is just calculated once, and for many
slabs, the available space is quite large. I wonder what the best
practice might be for how far from the edge to stay. Hmmm. Maybe simply
carve it into thirds, and randomize the offset within the middle third?
On Wed, 11 Mar 2020, Kees Cook wrote:

> > Sounds good. You could even randomize the position to avoid attacks via
> > the freelist pointer.
>
> That's a good point. "offset" is just calculated once, and for many
> slabs, the available space is quite large. I wonder what the best

Correct.

> practice might be for how far from the edge to stay. Hmmm. Maybe simply
> carve it into thirds, and randomize the offset within the middle third?

Take off the first and last word and randomize within the space that is
left?
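A sketch of what Christoph's "take off the first and last word" scheme could look like, as standalone C. The function name and the caller-supplied "rnd" value are assumptions for illustration; the kernel would presumably draw randomness from something like its prandom interface once per cache, at the point where calculate_sizes() runs.

```c
#include <stddef.h>

/*
 * Pick a pointer-aligned freelist offset at random, excluding the first
 * word of the object and keeping the stored pointer clear of the last
 * word. Valid offsets run from sizeof(void *) up to
 * object_size - 2 * sizeof(void *), inclusive.
 */
static size_t random_freelist_offset(size_t object_size, size_t rnd)
{
	size_t lo = sizeof(void *);                   /* skip the first word */
	size_t hi = object_size - 2 * sizeof(void *); /* stay off the last word */
	size_t slots;

	/* Objects too small to avoid both edges: fall back to offset 0. */
	if (object_size < 3 * sizeof(void *))
		return 0;

	slots = (hi - lo) / sizeof(void *) + 1;
	return lo + (rnd % slots) * sizeof(void *);
}
```

Compared with the fixed midpoint the patch below uses, this adds per-cache (not per-allocation) randomness, so the cost is paid once at cache creation.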
diff --git a/mm/slub.c b/mm/slub.c
index 107d9d89cf96..45926cb4514f 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3562,6 +3562,13 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
 		 */
 		s->offset = size;
 		size += sizeof(void *);
+	} else if (size > sizeof(void *)) {
+		/*
+		 * Store freelist pointer near middle of object to keep
+		 * it away from the edges of the object to avoid small
+		 * sized over/underflows from neighboring allocations.
+		 */
+		s->offset = ALIGN(size / 2, sizeof(void *));
 	}
 
 #ifdef CONFIG_SLUB_DEBUG
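For readers without the kernel tree handy, here is the offset computation from the hunk above restated as standalone C, with the kernel's ALIGN() macro reproduced in userspace (it rounds up to the next multiple of a power-of-two alignment). Only the arithmetic is shown; the surrounding kmem_cache plumbing is omitted.

```c
#include <stddef.h>

/* Userspace restatement of the kernel's ALIGN() macro: round x up to
 * the next multiple of a, where a must be a power of two. */
#define ALIGN(x, a) (((x) + (a) - 1) & ~((size_t)(a) - 1))

/*
 * The patch's placement: the middle of the object, rounded up to
 * pointer alignment. Applied when the freelist pointer lives inline
 * (no ctor, no poisoning) and the object can hold more than a pointer.
 */
static size_t middle_offset(size_t size)
{
	return ALIGN(size / 2, sizeof(void *));
}
```

So for an exactly even number of pointer-sized words the offset lands dead center, and for odd sizes it rounds up to the next aligned slot, leaving at least one word of slack on each side for any object larger than two pointers.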
In a recent discussion[1] with Vitaly Nikolenko and Silvio Cesare, it
became clear that moving the freelist pointer away from the edge of
allocations would likely improve the overall defensive posture of the
inline freelist pointer. My benchmarks show no meaningful change to
performance (they seem to show it being faster), so this looks like a
reasonable change to make.

Instead of having the freelist pointer at the very beginning of an
allocation (offset 0) or at the very end of an allocation (effectively
offset -sizeof(void *) from the next allocation), move it away from
the edges of the allocation and into the middle. This provides some
protection against small-sized neighboring overflows (or underflows),
for which the freelist pointer is commonly the target. (Large or well
controlled overwrites are much more likely to attack live object
contents, instead of attempting freelist corruption.)

The vaunted kernel build benchmark, across 5 runs. Before:

	Mean: 250.05
	Std Dev: 1.85

and after, which appears mysteriously faster:

	Mean: 247.13
	Std Dev: 0.76

Attempts at running "sysbench --test=memory" show the change to be well
in the noise (sysbench seems to be pretty unstable here -- it's not
really measuring allocation). Hackbench is more allocation-heavy, and
while the std dev is above the difference, it looks like it may manifest
as an improvement as well:

20 runs of "hackbench -g 20 -l 1000", before:

	Mean: 36.322
	Std Dev: 0.577

and after:

	Mean: 36.056
	Std Dev: 0.598

[1] https://twitter.com/vnik5287/status/1235113523098685440

Cc: Vitaly Nikolenko <vnik@duasynt.com>
Cc: Silvio Cesare <silvio.cesare@gmail.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 mm/slub.c | 7 +++++++
 1 file changed, 7 insertions(+)