From patchwork Fri Mar 6 00:26:18 2020
X-Patchwork-Submitter: Kees Cook <keescook@chromium.org>
X-Patchwork-Id: 11422831
Date: Thu, 5 Mar 2020 16:26:18 -0800
From: Kees Cook <keescook@chromium.org>
To: Andrew Morton
Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Daniel Micay, Vitaly Nikolenko, Silvio Cesare,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH] slub: Relocate freelist pointer to middle of object
Message-ID: <202003051624.AAAC9AECC@keescook>

In a recent discussion[1] with Vitaly Nikolenko and Silvio Cesare, it
became clear that moving the freelist pointer away from the edge of
allocations would likely improve the overall defensive posture of the
inline freelist pointer. My benchmarks show no meaningful change to
performance (they seem to show it being faster), so this looks like a
reasonable change to make.

Instead of having the freelist pointer at the very beginning of an
allocation (offset 0) or at the very end of an allocation (effectively
offset -sizeof(void *) from the next allocation), move it away from
the edges of the allocation and into the middle. This provides some
protection against small-sized neighboring overflows (or underflows),
for which the freelist pointer is commonly the target. (Large or
well-controlled overwrites are much more likely to attack live object
contents, instead of attempting freelist corruption.)

The vaunted kernel build benchmark, across 5 runs. Before:

	Mean: 250.05
	Std Dev: 1.85

and after, which appears mysteriously faster:

	Mean: 247.13
	Std Dev: 0.76

Attempts at running "sysbench --test=memory" show the change to be
well in the noise (sysbench seems to be pretty unstable here -- it's
not really measuring allocation).
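To make the new placement concrete, here is a minimal userspace sketch
of the offset computation (this is not kernel code: ALIGN() is
re-created locally to match the kernel macro, and the object sizes are
made-up examples rather than real kmem_cache geometries):

#include <stdio.h>
#include <stddef.h>

/* Local copy of the kernel's ALIGN(): round x up to a multiple of a. */
#define ALIGN(x, a) (((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
    size_t sizes[] = { 16, 24, 64, 192, 1024 };

    for (size_t i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
        size_t size = sizes[i];

        /* Old scheme: pointer overlays the start of the object. */
        size_t old_off = 0;

        /* New scheme: pointer near the middle, pointer-aligned. */
        size_t new_off = ALIGN(size / 2, sizeof(void *));

        printf("size %4zu: old offset %zu, new offset %zu\n",
               size, old_off, new_off);
    }
    return 0;
}

For a 24-byte object, for example, the pointer moves from offset 0 to
ALIGN(12, 8) == 16, so a small overflow from the preceding allocation
no longer lands on it first.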
Hackbench is more allocation-heavy, and while the std dev is above the
difference, it looks like it may manifest as an improvement as well;
20 runs of "hackbench -g 20 -l 1000", before:

	Mean: 36.322
	Std Dev: 0.577

and after:

	Mean: 36.056
	Std Dev: 0.598

[1] https://twitter.com/vnik5287/status/1235113523098685440

Cc: Vitaly Nikolenko
Cc: Silvio Cesare
Signed-off-by: Kees Cook
Acked-by: Christoph Lameter
---
 mm/slub.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/mm/slub.c b/mm/slub.c
index 107d9d89cf96..45926cb4514f 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3562,6 +3562,13 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
 		 */
 		s->offset = size;
 		size += sizeof(void *);
+	} else if (size > sizeof(void *)) {
+		/*
+		 * Store freelist pointer near middle of object to keep
+		 * it away from the edges of the object to avoid small
+		 * sized over/underflows from neighboring allocations.
+		 */
+		s->offset = ALIGN(size / 2, sizeof(void *));
 	}
 
 #ifdef CONFIG_SLUB_DEBUG
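As an aside on the threat model, this userspace mock (hypothetical
OBJ_SIZE and offsets; it does not reproduce the real SLUB layout)
shows why edge placement is the easier target for a small linear
overflow out of a neighboring object:

#include <stdio.h>
#include <string.h>
#include <stddef.h>

#define OBJ_SIZE 32 /* made-up object size, for illustration only */

int main(void)
{
    /* Two adjacent slab-style objects, back to back in one buffer. */
    unsigned char slab[2 * OBJ_SIZE] = { 0 };
    unsigned char *obj0 = slab;
    unsigned char *obj1 = slab + OBJ_SIZE;

    size_t edge_off = 0;            /* old freelist pointer placement */
    size_t mid_off  = OBJ_SIZE / 2; /* new placement for this size    */

    /*
     * A small linear overflow: the write starts 4 bytes before the
     * end of obj0 and runs for 12 bytes, so 8 bytes spill into obj1.
     */
    memset(obj0 + OBJ_SIZE - 4, 0x41, 12);

    printf("edge-stored pointer clobbered: %s\n",
           obj1[edge_off] ? "yes" : "no");
    printf("middle-stored pointer clobbered: %s\n",
           obj1[mid_off] ? "yes" : "no");
    return 0;
}

This prints "yes" then "no": the 8 spilled bytes cover obj1[0..7],
reaching an edge-stored pointer but falling well short of the middle
of the object.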