From patchwork Wed Jun 14 22:45:26 2017
X-Patchwork-Submitter: "Jason A. Donenfeld"
X-Patchwork-Id: 9787601
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: "Jason A. Donenfeld"
To: Theodore Ts'o, Linux Crypto Mailing List, LKML,
    kernel-hardening@lists.openwall.com, Greg Kroah-Hartman, Eric Biggers,
    Linus Torvalds, David Miller, tglx@breakpoint.cc
Cc: "Jason A. Donenfeld"
Subject: [PATCH] random: silence compiler warnings and fix race
Date: Thu, 15 Jun 2017 00:45:26 +0200
Message-Id: <20170614224526.29076-1-Jason@zx2c4.com>
In-Reply-To: <20170614192838.3jz4sxpcuhxygx4z@breakpoint.cc>
References: <20170614192838.3jz4sxpcuhxygx4z@breakpoint.cc>

Odd versions of gcc for the sh4 architecture will actually warn about
flags being used while uninitialized, so we set them to zero. Non-crazy
gccs will optimize that out again, so it doesn't make a difference.

Next, overaggressive gccs could inline the expression that defines
use_lock, which could then introduce a race resulting in a lock
imbalance. By using READ_ONCE, we prevent that fate. We also make that
assignment const, so that gcc can still optimize a nice amount.

Finally, we fix a potential deadlock between primary_crng.lock and
batched_entropy_reset_lock, where they could be acquired in opposite
order. Moving the call to invalidate_batched_entropy to outside the
lock rectifies this issue.

Signed-off-by: Jason A. Donenfeld
---
Ted -- the first part of this is the fixup patch we discussed earlier.
Then I added on top a fix for a potentially related race.
I'm not totally convinced that moving this block to outside the spinlock
is 100% okay, so please give this a close look before merging.

 drivers/char/random.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index e870f329db88..01a260f67437 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -803,13 +803,13 @@ static int crng_fast_load(const char *cp, size_t len)
 		p[crng_init_cnt % CHACHA20_KEY_SIZE] ^= *cp;
 		cp++; crng_init_cnt++; len--;
 	}
+	spin_unlock_irqrestore(&primary_crng.lock, flags);
 	if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) {
 		invalidate_batched_entropy();
 		crng_init = 1;
 		wake_up_interruptible(&crng_init_wait);
 		pr_notice("random: fast init done\n");
 	}
-	spin_unlock_irqrestore(&primary_crng.lock, flags);
 	return 1;
 }
 
@@ -841,6 +841,7 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r)
 	}
 	memzero_explicit(&buf, sizeof(buf));
 	crng->init_time = jiffies;
+	spin_unlock_irqrestore(&primary_crng.lock, flags);
 	if (crng == &primary_crng && crng_init < 2) {
 		invalidate_batched_entropy();
 		crng_init = 2;
@@ -848,7 +849,6 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r)
 		wake_up_interruptible(&crng_init_wait);
 		pr_notice("random: crng init done\n");
 	}
-	spin_unlock_irqrestore(&primary_crng.lock, flags);
 }
 
 static inline void crng_wait_ready(void)
@@ -2041,8 +2041,8 @@ static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64);
 u64 get_random_u64(void)
 {
 	u64 ret;
-	bool use_lock = crng_init < 2;
-	unsigned long flags;
+	bool use_lock = READ_ONCE(crng_init) < 2;
+	unsigned long flags = 0;
 	struct batched_entropy *batch;
 
 #if BITS_PER_LONG == 64
@@ -2073,8 +2073,8 @@ static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u32);
 u32 get_random_u32(void)
 {
 	u32 ret;
-	bool use_lock = crng_init < 2;
-	unsigned long flags;
+	bool use_lock = READ_ONCE(crng_init) < 2;
+	unsigned long flags = 0;
 	struct batched_entropy *batch;
 
 	if (arch_get_random_int(&ret))