From patchwork Tue Jun 6 00:50:58 2017
X-Patchwork-Submitter: "Jason A. Donenfeld"
X-Patchwork-Id: 9767789
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: "Jason A. Donenfeld"
To: Theodore Ts'o, Linux Crypto Mailing List, LKML,
 kernel-hardening@lists.openwall.com, Greg Kroah-Hartman, David Miller
Cc: "Jason A. Donenfeld"
Subject: [PATCH v3 03/13] random: invalidate batched entropy after crng init
Date: Tue, 6 Jun 2017 02:50:58 +0200
Message-Id: <20170606005108.5646-4-Jason@zx2c4.com>
X-Mailer: git-send-email 2.13.0
In-Reply-To: <20170606005108.5646-1-Jason@zx2c4.com>
References: <20170606005108.5646-1-Jason@zx2c4.com>

It's possible that get_random_{u32,u64} is used before the crng has
initialized, in which case its output might not be cryptographically
secure. For that problem directly, this patch set introduces the *_wait
variety of functions, but even with that there's a subtle issue: what
happens to the batched entropy that was generated before initialization?
Prior to this commit, it would stick around, supplying bad numbers.
After this commit, we force the entropy to be re-extracted after each
phase of the crng initialization.
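To make the failure mode concrete, here is a minimal userspace sketch
(not part of the patch; all names here are illustrative, not taken from
random.c): a batch is only refilled when its position counter wraps, so
words extracted while the generator was still unseeded keep being handed
out after initialization unless the counter is reset.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define BATCH_WORDS 8

struct batch {
	uint64_t words[BATCH_WORDS];
	unsigned int position;
};

static bool generator_seeded;	/* stands in for "crng init done" */

/* Stand-in for extract_crng(): pre-seed output is predictable,
 * modeled here as a simple counter. */
static void refill(struct batch *b)
{
	static uint64_t weak = 1;
	for (int i = 0; i < BATCH_WORDS; i++)
		b->words[i] = generator_seeded ? 0x1234567890abcdefULL + i
					       : weak++;
}

static uint64_t get_batched(struct batch *b)
{
	if (b->position % BATCH_WORDS == 0) {
		refill(b);
		b->position = 0;
	}
	return b->words[b->position++];
}

int main(void)
{
	struct batch b = { .position = 0 };
	uint64_t w;

	w = get_batched(&b);		/* fills the batch from the weak source */
	printf("pre-init: %llx\n", (unsigned long long)w);

	generator_seeded = true;	/* the generator finishes initializing... */
	w = get_batched(&b);		/* ...but this still serves the stale fill */
	printf("stale:    %llx\n", (unsigned long long)w);

	b.position = 0;			/* the fix: invalidate, forcing a refill */
	w = get_batched(&b);
	printf("fresh:    %llx\n", (unsigned long long)w);
	return 0;
}

Resetting the counter rather than zeroing the buffers is what makes the
invalidation lazy: the stale words are simply never read again.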
To avoid a race condition with the position counter, we introduce a
simple rwlock for this invalidation. Since the lock is only needed
during this awkward transition period, we stop taking it once things
are all set up, so that it has no performance impact afterward.

Signed-off-by: Jason A. Donenfeld
---
 drivers/char/random.c | 34 ++++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 035a5d7c06bd..c328e9b11f1f 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -762,6 +762,8 @@ static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait);
 static struct crng_state **crng_node_pool __read_mostly;
 #endif
 
+static void invalidate_batched_entropy(void);
+
 static void crng_initialize(struct crng_state *crng)
 {
 	int i;
@@ -800,6 +802,7 @@ static int crng_fast_load(const char *cp, size_t len)
 	}
 	if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) {
 		crng_init = 1;
+		invalidate_batched_entropy();
 		wake_up_interruptible(&crng_init_wait);
 		pr_notice("random: fast init done\n");
 	}
@@ -837,6 +840,7 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r)
 	crng->init_time = jiffies;
 	if (crng == &primary_crng && crng_init < 2) {
 		crng_init = 2;
+		invalidate_batched_entropy();
 		process_random_ready_list();
 		wake_up_interruptible(&crng_init_wait);
 		pr_notice("random: crng init done\n");
@@ -2037,6 +2041,7 @@ struct batched_entropy {
 	};
 	unsigned int position;
 };
+static rwlock_t batched_entropy_reset_lock = __RW_LOCK_UNLOCKED(batched_entropy_reset_lock);
 
 /*
  * Get a random word for internal kernel use only. The quality of the random
@@ -2050,6 +2055,8 @@ static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64);
 u64 get_random_u64(void)
 {
 	u64 ret;
+	bool use_lock = crng_init < 2;
+	unsigned long flags;
 	struct batched_entropy *batch;
 
 #if BITS_PER_LONG == 64
@@ -2062,11 +2069,15 @@ u64 get_random_u64(void)
 #endif
 
 	batch = &get_cpu_var(batched_entropy_u64);
+	if (use_lock)
+		read_lock_irqsave(&batched_entropy_reset_lock, flags);
 	if (batch->position % ARRAY_SIZE(batch->entropy_u64) == 0) {
 		extract_crng((u8 *)batch->entropy_u64);
 		batch->position = 0;
 	}
 	ret = batch->entropy_u64[batch->position++];
+	if (use_lock)
+		read_unlock_irqrestore(&batched_entropy_reset_lock, flags);
 	put_cpu_var(batched_entropy_u64);
 	return ret;
 }
@@ -2076,22 +2087,45 @@ static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u32);
 u32 get_random_u32(void)
 {
 	u32 ret;
+	bool use_lock = crng_init < 2;
+	unsigned long flags;
 	struct batched_entropy *batch;
 
 	if (arch_get_random_int(&ret))
 		return ret;
 
 	batch = &get_cpu_var(batched_entropy_u32);
+	if (use_lock)
+		read_lock_irqsave(&batched_entropy_reset_lock, flags);
 	if (batch->position % ARRAY_SIZE(batch->entropy_u32) == 0) {
 		extract_crng((u8 *)batch->entropy_u32);
 		batch->position = 0;
 	}
 	ret = batch->entropy_u32[batch->position++];
+	if (use_lock)
+		read_unlock_irqrestore(&batched_entropy_reset_lock, flags);
 	put_cpu_var(batched_entropy_u32);
 	return ret;
 }
 EXPORT_SYMBOL(get_random_u32);
 
+/* It's important to invalidate all potential batched entropy that might
+ * be stored before the crng is initialized, which we can do lazily by
+ * simply resetting the counter to zero so that it's re-extracted on the
+ * next usage. */
+static void invalidate_batched_entropy(void)
+{
+	int cpu;
+	unsigned long flags;
+
+	write_lock_irqsave(&batched_entropy_reset_lock, flags);
+	for_each_possible_cpu (cpu) {
+		per_cpu_ptr(&batched_entropy_u32, cpu)->position = 0;
+		per_cpu_ptr(&batched_entropy_u64, cpu)->position = 0;
+	}
+	write_unlock_irqrestore(&batched_entropy_reset_lock, flags);
+}
+
 /**
  * randomize_page - Generate a random, page aligned address
  * @start:	The smallest acceptable address the caller will take.
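For the curious, the interplay between the read side and the write-side
invalidation can be modeled in userspace. The following is a rough
pthreads analogue of the pattern in the patch, not part of it; names
such as get_word, invalidate, and init_level are hypothetical, and the
per-CPU batches are collapsed into a single one for brevity.

#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define BATCH_WORDS 8

static pthread_rwlock_t reset_lock = PTHREAD_RWLOCK_INITIALIZER;
static int init_level;			/* models crng_init: 0, 1, 2 */

struct batch {
	uint64_t words[BATCH_WORDS];
	unsigned int position;
};
static struct batch the_batch;		/* the kernel keeps one per CPU */

static void refill(struct batch *b)
{
	static uint64_t n;		/* stand-in for extract_crng() */
	for (int i = 0; i < BATCH_WORDS; i++)
		b->words[i] = ++n;
}

static uint64_t get_word(void)
{
	/* Readers pay for the lock only during the transition period. */
	bool use_lock = init_level < 2;
	uint64_t ret;

	if (use_lock)
		pthread_rwlock_rdlock(&reset_lock);
	if (the_batch.position % BATCH_WORDS == 0) {
		refill(&the_batch);
		the_batch.position = 0;
	}
	ret = the_batch.words[the_batch.position++];
	if (use_lock)
		pthread_rwlock_unlock(&reset_lock);
	return ret;
}

static void invalidate(void)
{
	/* The writer excludes every in-flight reader, so the reset
	 * can never interleave with a position++ above. */
	pthread_rwlock_wrlock(&reset_lock);
	the_batch.position = 0;
	pthread_rwlock_unlock(&reset_lock);
}

int main(void)
{
	printf("before init: %llx\n", (unsigned long long)get_word());
	init_level = 2;			/* "crng init done" */
	invalidate();			/* discard anything batched earlier */
	printf("after init:  %llx\n", (unsigned long long)get_word());
	return 0;
}

The read/write split matches the access pattern: invalidation happens
at most once per init phase, while reads are frequent, and once the
generator is fully initialized the lock is never touched again.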