From patchwork Thu Mar 19 12:05:22 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: George Spelvin
X-Patchwork-Id: 11447069
Date: Thu, 19 Mar 2020 12:05:22 +0000
From: George Spelvin
To: Kees Cook
Cc: Dan Williams, linux-mm@kvack.org, Andrew Morton, Alexander Duyck,
	Randy Dunlap, lkml@sdf.org
Subject: [PATCH v4] mm/shuffle.c: Fix races in add_to_free_area_random()
Message-ID: <20200319120522.GA1484@SDF.ORG>
References: <20200317135035.GA19442@SDF.ORG>
 <202003171435.41F7F0DF9@keescook>
 <20200317230612.GB19442@SDF.ORG>
 <202003171619.23210A7E0@keescook>
 <20200318014410.GA2281@SDF.ORG>
 <20200318203914.GA16083@SDF.ORG>
Content-Disposition: inline
In-Reply-To: <20200318203914.GA16083@SDF.ORG>

The separate "rand" and "rand_count" variables could get out of sync
with bad results. In the worst case, two threads would see
rand_count=1 and both decrement it, resulting in rand_count=255 and
rand being filled with zeros for the next 255 calls.

Instead, pack them both into a single, atomically updateable,
variable. This makes it a lot easier to reason about race
conditions. They are still there - the code deliberately eschews
locking - but basically harmless on the rare occasions that they
happen.

Second, use READ_ONCE and WRITE_ONCE. Because the random bit buffer
is accessed by multiple threads concurrently without locking,
omitting those puts us deep in the land of nasal demons. The
compiler would be free to spill to the static variable in
arbitrarily perverse ways and create hard-to-find bugs.

(I'm torn between this and just declaring the buffer "volatile".
Linux tends to prefer marking accesses rather than variables, but in
this case, every access to the buffer is volatile. It makes no
difference to the generated code.)

Third, use long rather than u64. This not only keeps the state
atomically updateable, it also speeds up the fast path on 32-bit
machines. Saving at least three instructions on the fast path (one
load, one add-with-carry, and one store) is worth a second call to
get_random_u*() per 64 bits. The fast path of get_random_u* is less
than the 3*64 = 192 instructions saved, and the slow path happens
every 64 bytes so isn't affected by the change.

Fourth, make the function inline. It's small, and there's only one
caller (in mm/page_alloc.c:__free_one_page()), so avoid the function
call overhead.

Fifth, use the msbits of the buffer first (left shift) rather than
the lsbits (right shift).
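For readers who want to see the packing scheme in isolation, here is a
minimal userspace C sketch of the idea described above. The names buf,
pick_bit() and fake_random() are illustrative stand-ins, not the
kernel's; the real code uses get_random_long() and READ_ONCE()/
WRITE_ONCE() on a static variable:

```c
/*
 * Userspace model of the packed bit buffer (illustrative only).
 *
 * buf holds some number of random msbits, then a single 1 "sentinel"
 * bit, then zero padding in the lsbits.  Left-shifting by one both
 * consumes the top bit and advances the sentinel; the shifted value
 * is 0 exactly when only the sentinel remained, which signals that
 * the buffer is empty and must be refilled.  The bit count is thus
 * implicit in the sentinel's position: one word carries both state
 * items, so a store updates them atomically.
 */
static unsigned long buf;		/* bits + implicit count, one word */

static unsigned long fake_random(void)	/* deterministic stand-in */
{
	return (unsigned long)0xA5A5A5A5A5A5A5A5ULL;
}

static int pick_bit(void)
{
	unsigned long r = buf, rshift = r << 1;

	if (rshift == 0) {		/* buffer empty: refill */
		r = fake_random();
		rshift = r << 1 | 1;	/* plant a fresh sentinel */
	}
	buf = rshift;

	return (long)r < 0;		/* hand out the msbit */
}
```

Note that no bits are wasted: the sentinel occupies the slot of the
bit handed out at refill time, so each refill yields a full word of
usable random bits. With the 0xA5 test pattern above, the bits come
out msbit-first: 1,0,1,0,0,1,0,1, repeating.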
Testing the sign bit produces slightly smaller/faster code than
testing the lsbit. I've tried shifting both ways, and copying the
desired bit to a boolean before shifting rather than keeping separate
full-width r and rshift variables, but both produce larger code:

			x86-64 text size
	Msbit		42236
	Lsbit		42242 (+6)
	Lsbit+bool	42258 (+22)
	Msbit+bool	42284 (+52)

(Since this is straight-line code, size is a good proxy for number of
instructions and execution time. Using READ/WRITE_ONCE instead of
volatile makes no difference.)

In a perfect world, on x86-64 the fast path would be:

	shlq	rand(%rip)
	jz	refill
refill_complete:
	jc	add_to_tail

but I don't see how to get gcc to generate that, and this function
isn't worth an arch-specific implementation.

Signed-off-by: George Spelvin
Acked-by: Kees Cook
Acked-by: Dan Williams
Cc: Alexander Duyck
Cc: Randy Dunlap
Cc: Andrew Morton
Cc: linux-mm@kvack.org
Acked-by: Alexander Duyck
---
v2: Rewrote commit message to explain existing races better.
    Made local variables unsigned to avoid (technically undefined)
    signed overflow.
v3: Typos fixed, Acked-by, expanded commit message.
v4: Rebase against -next; function has changed from
    add_to_free_area_random() to shuffle_pick_tail(). Move to inline
    function in shuffle.h.
    Not sure if it's okay to keep Acked-by: after such a significant
    change.

 mm/shuffle.c | 23 -----------------------
 mm/shuffle.h | 26 +++++++++++++++++++++++++-
 2 files changed, 25 insertions(+), 24 deletions(-)

base-commit: 47780d7892b77e922bbe19b5dea99cde06b2f0e5

diff --git a/mm/shuffle.c b/mm/shuffle.c
index 44406d9977c7..ea281d5e1f23 100644
--- a/mm/shuffle.c
+++ b/mm/shuffle.c
@@ -182,26 +182,3 @@ void __meminit __shuffle_free_memory(pg_data_t *pgdat)
 	for (z = pgdat->node_zones; z < pgdat->node_zones + MAX_NR_ZONES; z++)
 		shuffle_zone(z);
 }
-
-bool shuffle_pick_tail(void)
-{
-	static u64 rand;
-	static u8 rand_bits;
-	bool ret;
-
-	/*
-	 * The lack of locking is deliberate. If 2 threads race to
-	 * update the rand state it just adds to the entropy.
-	 */
-	if (rand_bits == 0) {
-		rand_bits = 64;
-		rand = get_random_u64();
-	}
-
-	ret = rand & 1;
-
-	rand_bits--;
-	rand >>= 1;
-
-	return ret;
-}
diff --git a/mm/shuffle.h b/mm/shuffle.h
index 4d79f03b6658..fb79e05cd86d 100644
--- a/mm/shuffle.h
+++ b/mm/shuffle.h
@@ -22,7 +22,31 @@ enum mm_shuffle_ctl {
 DECLARE_STATIC_KEY_FALSE(page_alloc_shuffle_key);
 extern void page_alloc_shuffle(enum mm_shuffle_ctl ctl);
 extern void __shuffle_free_memory(pg_data_t *pgdat);
-extern bool shuffle_pick_tail(void);
+static inline bool shuffle_pick_tail(void)
+{
+	static unsigned long rand;	/* buffered random bits */
+	unsigned long r = READ_ONCE(rand), rshift = r << 1;
+
+	/*
+	 * rand holds 0..BITS_PER_LONG-1 random msbits, followed by a
+	 * 1 bit, then zero-padding in the lsbits.  This allows us to
+	 * maintain the pre-generated bits and the count of bits in a
+	 * single, atomically updatable, variable.
+	 *
+	 * The lack of locking is deliberate.  If two threads race to
+	 * update the rand state it just adds to the entropy.  The
+	 * worst that can happen is a random bit is used twice, or
+	 * get_random_long is called redundantly.
+	 */
+	if (unlikely(rshift == 0)) {
+		r = get_random_long();
+		rshift = r << 1 | 1;
+	}
+	WRITE_ONCE(rand, rshift);
+
+	return (long)r < 0;
+}
+
 static inline void shuffle_free_memory(pg_data_t *pgdat)
 {
 	if (!static_branch_unlikely(&page_alloc_shuffle_key))
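As a sanity check on the failure mode the commit message opens with,
the rand_count underflow can be replayed deterministically in a
userspace sketch. struct old_state, old_pick() and race_result() are
hypothetical names modelling the pre-patch code, with the two racing
threads serialized by hand:

```c
#include <stdint.h>

/*
 * Deterministic replay of the pre-patch race: both "threads" read
 * rand_bits == 1 before either decrements it.  The u8 counter then
 * wraps 0 -> 255, so the next 255 callers shift bits out of an
 * already-empty buffer.  Illustrative model, not kernel code.
 */
struct old_state {
	uint64_t rand;
	uint8_t rand_bits;
};

static void old_pick(struct old_state *s, uint8_t observed_bits)
{
	/* observed_bits is the (possibly stale) value this thread read */
	if (observed_bits == 0) {
		s->rand_bits = 64;
		s->rand = 0;	/* value is irrelevant to the race */
	}
	s->rand_bits--;		/* both racers run the decrement */
	s->rand >>= 1;
}

static unsigned race_result(void)
{
	struct old_state s = { .rand = 1, .rand_bits = 1 };

	old_pick(&s, 1);	/* thread A saw rand_bits == 1 */
	old_pick(&s, 1);	/* thread B also saw 1: stale read */
	return s.rand_bits;	/* 0 - 1, wrapped in a u8 */
}
```

The single-word scheme in the patch removes this failure mode: the
count is implicit in the sentinel bit's position, so a racing update
can at worst hand out a bit twice or call get_random_long()
redundantly.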