From patchwork Wed Mar 18 01:44:10 2020
Date: Wed, 18 Mar 2020 01:44:10 +0000
From: George Spelvin
To: Kees Cook
Cc: Dan Williams, linux-mm@kvack.org, Andrew Morton, lkml@sdf.org
Subject: [PATCH v2] mm/shuffle.c: Fix races in add_to_free_area_random()
Message-ID: <20200318014410.GA2281@SDF.ORG>
References: <20200317135035.GA19442@SDF.ORG> <202003171435.41F7F0DF9@keescook> <20200317230612.GB19442@SDF.ORG>
	<202003171619.23210A7E0@keescook>
In-Reply-To: <202003171619.23210A7E0@keescook>

The old code had separate "rand" and "rand_count" variables, which
could get out of sync with bad results.  In the worst case, two
threads would see rand_count == 1 and both decrement it, resulting
in rand_count == 255 and rand being filled with zeros for the next
255 calls.

Instead, pack them both into a single, atomically updatable,
variable.  This makes it a lot easier to reason about race
conditions.  They are still there - the code deliberately eschews
locking - but basically harmless on the rare occasions that they
happen.

Second, use READ_ONCE and WRITE_ONCE.  Without them, we are deep in
the land of nasal demons.  The compiler would be free to spill
temporaries to the static variables in arbitrary perverse ways and
create hard-to-find bugs.  (Alternatively, we could declare the
static variable "volatile", one of the few places in the Linux
kernel that would be correct, but it would probably annoy Linus.)

Third, use long rather than u64.  This not only keeps the state
atomically updatable, it also speeds up the fast path on 32-bit
machines.  Saving at least three instructions on the fast path (one
load, one add-with-carry, and one store) is worth exchanging one
call to get_random_u64 for two calls to get_random_u32.  The fast
path of get_random_* costs less than the 3*64 = 192 instructions
saved, and the slow path happens only once every BITS_PER_LONG
calls, so it isn't affected by the change.

I've tried a few variants.  Keeping the random bits in the lsbits
with a most-significant end marker, and using an explicit bool flag
rather than testing the sign of r, both increase code size slightly:

                x86_64  i386
  This code       94     95
  Explicit bool  103     99
  Lsbits          99    101
  Both            96    100

Signed-off-by: George Spelvin
Cc: Dan Williams
Cc: Kees Cook
Cc: Andrew Morton
Cc: linux-mm@kvack.org
Acked-by: Kees Cook
Acked-by: Dan Williams
---
 mm/shuffle.c | 26 ++++++++++++++++----------
 1 file changed, 16 insertions(+), 10 deletions(-)

diff --git a/mm/shuffle.c b/mm/shuffle.c
index e0ed247f8d90..4ba3ba84764d 100644
--- a/mm/shuffle.c
+++ b/mm/shuffle.c
@@ -186,22 +186,28 @@ void __meminit __shuffle_free_memory(pg_data_t *pgdat)
 void add_to_free_area_random(struct page *page, struct free_area *area,
 		int migratetype)
 {
-	static u64 rand;
-	static u8 rand_bits;
+	static long rand;	/* 0..BITS_PER_LONG-1 buffered random bits */
+	unsigned long r = READ_ONCE(rand), rshift = r << 1;
 
 	/*
-	 * The lack of locking is deliberate. If 2 threads race to
-	 * update the rand state it just adds to the entropy.
+	 * rand holds some random msbits, with a 1 bit appended, followed
+	 * by zero-padding in the lsbits.  This allows us to maintain
+	 * the pre-generated bits and the count of bits in a single,
+	 * atomically updatable, variable.
+	 *
+	 * The lack of locking is deliberate.  If two threads race to
+	 * update the rand state it just adds to the entropy.  The
+	 * worst that can happen is a random bit is used twice, or
+	 * get_random_long is called redundantly.
 	 */
-	if (rand_bits == 0) {
-		rand_bits = 64;
-		rand = get_random_u64();
+	if (unlikely(rshift == 0)) {
+		r = get_random_long();
+		rshift = r << 1 | 1;
 	}
+	WRITE_ONCE(rand, rshift);
 
-	if (rand & 1)
+	if ((long)r < 0)
 		add_to_free_area(page, area, migratetype);
 	else
 		add_to_free_area_tail(page, area, migratetype);
-	rand_bits--;
-	rand >>= 1;
 }
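
For reference, here is a minimal standalone sketch of the same
sentinel-bit buffer, written as ordinary userspace C.  random_word()
is a stand-in for get_random_long() (a toy xorshift64 generator here,
purely for illustration); only the buffering logic mirrors the patch.

	#include <stdint.h>
	#include <stdio.h>

	/* Toy entropy source; the kernel code uses get_random_long(). */
	static uint64_t random_word(void)
	{
		static uint64_t s = 0x9e3779b97f4a7c15ull;

		s ^= s << 13;
		s ^= s >> 7;
		s ^= s << 17;
		return s;
	}

	/*
	 * One random bit per call.  The static buffer keeps the
	 * unconsumed random bits in the msbits, a sentinel 1 bit just
	 * below them, and zero-padding in the lsbits, so the bits and
	 * their implicit count share a single word.
	 */
	static int random_bit(void)
	{
		static uint64_t buf;
		uint64_t r = buf, shifted = r << 1;

		if (shifted == 0) {		/* only the sentinel was left */
			r = random_word();
			shifted = r << 1 | 1;	/* append a fresh sentinel */
		}
		buf = shifted;

		return r >> 63;			/* consume the current msbit */
	}

	int main(void)
	{
		int i;

		for (i = 0; i < 16; i++)
			printf("%d", random_bit());
		putchar('\n');
		return 0;
	}

Note that all 64 bits of each random_word() result are eventually
consumed: the msbit is used on the refill call itself, and the
remaining 63 sit buffered behind the sentinel.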