From patchwork Tue Feb 8 15:53:29 2022
From: "Jason A. Donenfeld"
To: linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org
Cc: "Jason A. Donenfeld", Theodore Ts'o, Dominik Brodowski
Subject: [PATCH v1 1/7] random: use RDSEED instead of RDRAND in entropy extraction
Date: Tue, 8 Feb 2022 16:53:29 +0100
Message-Id: <20220208155335.378318-2-Jason@zx2c4.com>
In-Reply-To: <20220208155335.378318-1-Jason@zx2c4.com>
References: <20220208155335.378318-1-Jason@zx2c4.com>

When /dev/random was directly connected with entropy extraction, without
any expansion stage, extract_buf() was called for every 10 bytes of data
read from /dev/random. For that reason, RDRAND was used rather than
RDSEED. At the same time, crng_reseed() was still only called every 5
minutes, so there RDSEED made sense. Those olden days were also a time
when the entropy collector did not use a cryptographic hash function,
which meant most bets were off in terms of real preimage resistance. For
that reason too it didn't matter _that_ much whether RDSEED was mixed in
before or after entropy extraction; both choices were sort of bad. But
now we have a cryptographic hash function at work, and with that we get
real preimage resistance.
We also now only call extract_entropy() every 5 minutes, rather than
every 10 bytes. This allows us to do two important things. First, we can
switch to using RDSEED in extract_entropy(), as Dominik suggested.
Second, we can ensure that RDSEED input always goes into the
cryptographic hash function with other things before being used
directly. This eliminates a category of attacks in which the CPU knows
the current state of the crng and knows that we're going to xor RDSEED
into it, and so it computes a malicious RDSEED. By going through our
hash function, it would require the CPU to compute a preimage on the
fly, which isn't going to happen.

Cc: Theodore Ts'o
Suggested-by: Dominik Brodowski
Signed-off-by: Jason A. Donenfeld
Reviewed-by: Eric Biggers
---
 drivers/char/random.c | 22 +++++++++-------------
 1 file changed, 9 insertions(+), 13 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index fe24d203b99b..42a3a3663767 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -722,13 +722,8 @@ static void crng_reseed(struct crng_state *crng)
                      CHACHA_KEY_SIZE);
        }
        spin_lock_irqsave(&crng->lock, flags);
-       for (i = 0; i < 8; i++) {
-               unsigned long rv;
-               if (!arch_get_random_seed_long(&rv) &&
-                   !arch_get_random_long(&rv))
-                       rv = random_get_entropy();
-               crng->state[i + 4] ^= buf.key[i] ^ rv;
-       }
+       for (i = 0; i < 8; i++)
+               crng->state[i + 4] ^= buf.key[i];
        memzero_explicit(&buf, sizeof(buf));
        WRITE_ONCE(crng->init_time, jiffies);
        spin_unlock_irqrestore(&crng->lock, flags);
@@ -1064,16 +1059,17 @@ static void extract_entropy(void *buf, size_t nbytes)
        unsigned long flags;
        u8 seed[BLAKE2S_HASH_SIZE], next_key[BLAKE2S_HASH_SIZE];
        struct {
-               unsigned long rdrand[32 / sizeof(long)];
+               unsigned long rdseed[32 / sizeof(long)];
                size_t counter;
        } block;
        size_t i;

        trace_extract_entropy(nbytes, input_pool.entropy_count);

-       for (i = 0; i < ARRAY_SIZE(block.rdrand); ++i) {
-               if (!arch_get_random_long(&block.rdrand[i]))
-                       block.rdrand[i] = random_get_entropy();
+       for (i = 0; i < ARRAY_SIZE(block.rdseed); ++i) {
+               if (!arch_get_random_seed_long(&block.rdseed[i]) &&
+                   !arch_get_random_long(&block.rdseed[i]))
+                       block.rdseed[i] = random_get_entropy();
        }

        spin_lock_irqsave(&input_pool.lock, flags);
@@ -1081,7 +1077,7 @@ static void extract_entropy(void *buf, size_t nbytes)
        /* seed = HASHPRF(last_key, entropy_input) */
        blake2s_final(&input_pool.hash, seed);

-       /* next_key = HASHPRF(seed, RDRAND || 0) */
+       /* next_key = HASHPRF(seed, RDSEED || 0) */
        block.counter = 0;
        blake2s(next_key, (u8 *)&block, seed, sizeof(next_key), sizeof(block), sizeof(seed));
        blake2s_init_key(&input_pool.hash, BLAKE2S_HASH_SIZE, next_key, sizeof(next_key));
@@ -1091,7 +1087,7 @@ static void extract_entropy(void *buf, size_t nbytes)

        while (nbytes) {
                i = min_t(size_t, nbytes, BLAKE2S_HASH_SIZE);
-               /* output = HASHPRF(seed, RDRAND || ++counter) */
+               /* output = HASHPRF(seed, RDSEED || ++counter) */
                ++block.counter;
                blake2s(buf, (u8 *)&block, seed, i, sizeof(block), sizeof(seed));
                nbytes -= i;
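To see concretely what the HASHPRF construction above buys, here is a
minimal userspace sketch of the seed = HASHPRF(pool_key, RDSEED) idea.
It is an illustration only: libsodium's keyed BLAKE2b stands in for the
kernel's BLAKE2s, the stand-in names are invented for the sketch, and
the OS CSPRNG substitutes for arch_get_random_seed_long(). The point it
demonstrates is the one argued above: hardware randomness only ever
reaches the output through a keyed hash, so steering the result would
require computing a preimage on the fly.

/* Userspace sketch of mix-then-hash extraction. Assumptions: libsodium
 * is available, and BLAKE2b approximates the kernel's BLAKE2s.
 * Build: cc sketch.c -lsodium */
#include <sodium.h>
#include <stdio.h>

#define SEED_WORDS 4 /* stand-in for 32 / sizeof(long) */

int main(void)
{
        unsigned char pool_key[crypto_generichash_KEYBYTES];
        unsigned char seed[crypto_generichash_BYTES];
        unsigned long rdseed[SEED_WORDS];
        crypto_generichash_state st;

        if (sodium_init() < 0)
                return 1;

        /* Stand-ins for the pool key and for RDSEED output. */
        randombytes_buf(pool_key, sizeof(pool_key));
        randombytes_buf(rdseed, sizeof(rdseed));

        /* seed = HASHPRF(pool_key, rdseed): the hardware input is only
         * ever an argument to a keyed hash, never xored in raw, so a
         * malicious value cannot cancel anything out. */
        crypto_generichash_init(&st, pool_key, sizeof(pool_key), sizeof(seed));
        crypto_generichash_update(&st, (const unsigned char *)rdseed, sizeof(rdseed));
        crypto_generichash_final(&st, seed, sizeof(seed));

        for (size_t i = 0; i < sizeof(seed); ++i)
                printf("%02x", seed[i]);
        printf("\n");
        return 0;
}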
Donenfeld" X-Patchwork-Id: 12738962 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id AEE3BC433FE for ; Tue, 8 Feb 2022 15:54:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233501AbiBHPyD (ORCPT ); Tue, 8 Feb 2022 10:54:03 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35132 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1382085AbiBHPyB (ORCPT ); Tue, 8 Feb 2022 10:54:01 -0500 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id F199DC0613CB; Tue, 8 Feb 2022 07:54:00 -0800 (PST) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 8FDF36166C; Tue, 8 Feb 2022 15:54:00 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9129AC340EF; Tue, 8 Feb 2022 15:53:59 +0000 (UTC) Authentication-Results: smtp.kernel.org; dkim=pass (1024-bit key) header.d=zx2c4.com header.i=@zx2c4.com header.b="oKDfgIZ+" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=zx2c4.com; s=20210105; t=1644335638; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=o+To7SbPAC6WHaVLqmEB9nPHSa7awDbf9XLlklQoFXk=; b=oKDfgIZ+GJWbJ/eZs0Xc1AdVS4TscsyaS017EOeG0O/3kwp2nqM8WPRN/eKGvkdO1aTlOa vPMehcnXlLau25TmVMG33P7/8yMs52rCfhx0nSbiNVZW87oH+e0488wHNhUJZYDHKOEfA5 rVy6U0Xo7dc/KvMHSvPtFPa1IADWi9s= Received: by mail.zx2c4.com (ZX2C4 Mail Server) with ESMTPSA id 0b957a98 (TLSv1.3:AEAD-AES256-GCM-SHA384:256:NO); Tue, 8 Feb 2022 15:53:58 +0000 (UTC) From: "Jason A. Donenfeld" To: linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org Cc: "Jason A. Donenfeld" , Theodore Ts'o , Dominik Brodowski Subject: [PATCH v1 2/7] random: get rid of secondary crngs Date: Tue, 8 Feb 2022 16:53:30 +0100 Message-Id: <20220208155335.378318-3-Jason@zx2c4.com> In-Reply-To: <20220208155335.378318-1-Jason@zx2c4.com> References: <20220208155335.378318-1-Jason@zx2c4.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org As the comment said, this is indeed a "hack". Since it was introduced, it's been a constant state machine nightmare, with lots of subtle early boot issues and a wildly complex set of machinery to keep everything in sync. Rather than continuing to play whack-a-mole with this approach, this commit simply removes it entirely. This commit is preparation for "random: use simpler fast key erasure flow on per-cpu keys" in this series, which introduces a simpler (and faster) mechanism to accomplish the same thing. Cc: Theodore Ts'o Cc: Dominik Brodowski Signed-off-by: Jason A. Donenfeld --- drivers/char/random.c | 225 ++++++++++-------------------------------- 1 file changed, 53 insertions(+), 172 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 42a3a3663767..59d8d38f6a07 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -323,14 +323,11 @@ static struct crng_state primary_crng = { * its value (from 0->1->2). 
  */
 static int crng_init = 0;
-static bool crng_need_final_init = false;
 #define crng_ready() (likely(crng_init > 1))
 static int crng_init_cnt = 0;
-static unsigned long crng_global_init_time = 0;
 #define CRNG_INIT_CNT_THRESH (2 * CHACHA_KEY_SIZE)
-static void _extract_crng(struct crng_state *crng, u8 out[CHACHA_BLOCK_SIZE]);
-static void _crng_backtrack_protect(struct crng_state *crng,
-                                    u8 tmp[CHACHA_BLOCK_SIZE], int used);
+static void extract_crng(u8 out[CHACHA_BLOCK_SIZE]);
+static void crng_backtrack_protect(u8 tmp[CHACHA_BLOCK_SIZE], int used);
 static void process_random_ready_list(void);
 static void _get_random_bytes(void *buf, int nbytes);
@@ -365,7 +362,7 @@ static struct {

 static void extract_entropy(void *buf, size_t nbytes);

-static void crng_reseed(struct crng_state *crng);
+static void crng_reseed(void);

 /*
  * This function adds bytes into the entropy "pool".  It does not
@@ -459,7 +456,7 @@ static void credit_entropy_bits(int nbits)

        trace_credit_entropy_bits(nbits, entropy_count, _RET_IP_);

        if (crng_init < 2 && entropy_count >= POOL_MIN_BITS)
-               crng_reseed(&primary_crng);
+               crng_reseed();
 }

 /*********************************************************************
@@ -472,16 +469,7 @@ static void credit_entropy_bits(int nbits)

 static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait);

-/*
- * Hack to deal with crazy userspace progams when they are all trying
- * to access /dev/urandom in parallel.  The programs are almost
- * certainly doing something terribly wrong, but we'll work around
- * their brain damage.
- */
-static struct crng_state **crng_node_pool __read_mostly;
-
 static void invalidate_batched_entropy(void);
-static void numa_crng_init(void);

 static bool trust_cpu __ro_after_init = IS_ENABLED(CONFIG_RANDOM_TRUST_CPU);
 static int __init parse_trust_cpu(char *arg)
@@ -490,24 +478,6 @@ static int __init parse_trust_cpu(char *arg)
 }
 early_param("random.trust_cpu", parse_trust_cpu);

-static bool crng_init_try_arch(struct crng_state *crng)
-{
-       int i;
-       bool arch_init = true;
-       unsigned long rv;
-
-       for (i = 4; i < 16; i++) {
-               if (!arch_get_random_seed_long(&rv) &&
-                   !arch_get_random_long(&rv)) {
-                       rv = random_get_entropy();
-                       arch_init = false;
-               }
-               crng->state[i] ^= rv;
-       }
-
-       return arch_init;
-}
-
 static bool __init crng_init_try_arch_early(void)
 {
        int i;
@@ -526,100 +496,17 @@ static bool __init crng_init_try_arch_early(void)
        return arch_init;
 }

-static void crng_initialize_secondary(struct crng_state *crng)
-{
-       chacha_init_consts(crng->state);
-       _get_random_bytes(&crng->state[4], sizeof(u32) * 12);
-       crng_init_try_arch(crng);
-       crng->init_time = jiffies - CRNG_RESEED_INTERVAL - 1;
-}
-
-static void __init crng_initialize_primary(void)
+static void __init crng_initialize(void)
 {
        extract_entropy(&primary_crng.state[4], sizeof(u32) * 12);
        if (crng_init_try_arch_early() && trust_cpu && crng_init < 2) {
                invalidate_batched_entropy();
-               numa_crng_init();
                crng_init = 2;
                pr_notice("crng init done (trusting CPU's manufacturer)\n");
        }
        primary_crng.init_time = jiffies - CRNG_RESEED_INTERVAL - 1;
 }

-static void crng_finalize_init(void)
-{
-       if (!system_wq) {
-               /* We can't call numa_crng_init until we have workqueues,
-                * so mark this for processing later. */
-               crng_need_final_init = true;
-               return;
-       }
-
-       invalidate_batched_entropy();
-       numa_crng_init();
-       crng_init = 2;
-       crng_need_final_init = false;
-       process_random_ready_list();
-       wake_up_interruptible(&crng_init_wait);
-       kill_fasync(&fasync, SIGIO, POLL_IN);
-       pr_notice("crng init done\n");
-       if (unseeded_warning.missed) {
-               pr_notice("%d get_random_xx warning(s) missed due to ratelimiting\n",
-                         unseeded_warning.missed);
-               unseeded_warning.missed = 0;
-       }
-       if (urandom_warning.missed) {
-               pr_notice("%d urandom warning(s) missed due to ratelimiting\n",
-                         urandom_warning.missed);
-               urandom_warning.missed = 0;
-       }
-}
-
-static void do_numa_crng_init(struct work_struct *work)
-{
-       int i;
-       struct crng_state *crng;
-       struct crng_state **pool;
-
-       pool = kcalloc(nr_node_ids, sizeof(*pool), GFP_KERNEL | __GFP_NOFAIL);
-       for_each_online_node(i) {
-               crng = kmalloc_node(sizeof(struct crng_state),
-                                   GFP_KERNEL | __GFP_NOFAIL, i);
-               spin_lock_init(&crng->lock);
-               crng_initialize_secondary(crng);
-               pool[i] = crng;
-       }
-       /* pairs with READ_ONCE() in select_crng() */
-       if (cmpxchg_release(&crng_node_pool, NULL, pool) != NULL) {
-               for_each_node(i)
-                       kfree(pool[i]);
-               kfree(pool);
-       }
-}
-
-static DECLARE_WORK(numa_crng_init_work, do_numa_crng_init);
-
-static void numa_crng_init(void)
-{
-       if (IS_ENABLED(CONFIG_NUMA))
-               schedule_work(&numa_crng_init_work);
-}
-
-static struct crng_state *select_crng(void)
-{
-       if (IS_ENABLED(CONFIG_NUMA)) {
-               struct crng_state **pool;
-               int nid = numa_node_id();
-
-               /* pairs with cmpxchg_release() in do_numa_crng_init() */
-               pool = READ_ONCE(crng_node_pool);
-               if (pool && pool[nid])
-                       return pool[nid];
-       }
-
-       return &primary_crng;
-}
-
 /*
  * crng_fast_load() can be called by code in the interrupt service
  * path.  So we can't afford to dilly-dally. Returns the number of
@@ -697,68 +584,71 @@ static int crng_slow_load(const u8 *cp, size_t len)
        return 1;
 }

-static void crng_reseed(struct crng_state *crng)
+static void crng_reseed(void)
 {
        unsigned long flags;
-       int i;
+       int i, entropy_count;
        union {
                u8 block[CHACHA_BLOCK_SIZE];
                u32 key[8];
        } buf;

-       if (crng == &primary_crng) {
-               int entropy_count;
-               do {
-                       entropy_count = READ_ONCE(input_pool.entropy_count);
-                       if (entropy_count < POOL_MIN_BITS)
-                               return;
-               } while (cmpxchg(&input_pool.entropy_count, entropy_count, 0) != entropy_count);
-               extract_entropy(buf.key, sizeof(buf.key));
-               wake_up_interruptible(&random_write_wait);
-               kill_fasync(&fasync, SIGIO, POLL_OUT);
-       } else {
-               _extract_crng(&primary_crng, buf.block);
-               _crng_backtrack_protect(&primary_crng, buf.block,
-                                       CHACHA_KEY_SIZE);
-       }
-       spin_lock_irqsave(&crng->lock, flags);
+       do {
+               entropy_count = READ_ONCE(input_pool.entropy_count);
+               if (entropy_count < POOL_MIN_BITS)
+                       return;
+       } while (cmpxchg(&input_pool.entropy_count, entropy_count, 0) != entropy_count);
+       extract_entropy(buf.key, sizeof(buf.key));
+       wake_up_interruptible(&random_write_wait);
+       kill_fasync(&fasync, SIGIO, POLL_OUT);
+
+       spin_lock_irqsave(&primary_crng.lock, flags);
        for (i = 0; i < 8; i++)
-               crng->state[i + 4] ^= buf.key[i];
+               primary_crng.state[i + 4] ^= buf.key[i];
        memzero_explicit(&buf, sizeof(buf));
-       WRITE_ONCE(crng->init_time, jiffies);
-       spin_unlock_irqrestore(&crng->lock, flags);
-       if (crng == &primary_crng && crng_init < 2)
-               crng_finalize_init();
+       WRITE_ONCE(primary_crng.init_time, jiffies);
+       spin_unlock_irqrestore(&primary_crng.lock, flags);
+       if (crng_init < 2) {
+               invalidate_batched_entropy();
+               crng_init = 2;
+               process_random_ready_list();
+               wake_up_interruptible(&crng_init_wait);
+               kill_fasync(&fasync, SIGIO, POLL_IN);
+               pr_notice("crng init done\n");
+               if (unseeded_warning.missed) {
+                       pr_notice("%d get_random_xx warning(s) missed due to ratelimiting\n",
+                                 unseeded_warning.missed);
+                       unseeded_warning.missed = 0;
+               }
+               if (urandom_warning.missed) {
+                       pr_notice("%d urandom warning(s) missed due to ratelimiting\n",
+                                 urandom_warning.missed);
+                       urandom_warning.missed = 0;
+               }
+       }
 }

-static void _extract_crng(struct crng_state *crng, u8 out[CHACHA_BLOCK_SIZE])
+static void extract_crng(u8 out[CHACHA_BLOCK_SIZE])
 {
        unsigned long flags, init_time;

        if (crng_ready()) {
-               init_time = READ_ONCE(crng->init_time);
-               if (time_after(READ_ONCE(crng_global_init_time), init_time) ||
-                   time_after(jiffies, init_time + CRNG_RESEED_INTERVAL))
-                       crng_reseed(crng);
+               init_time = READ_ONCE(primary_crng.init_time);
+               if (time_after(jiffies, init_time + CRNG_RESEED_INTERVAL))
+                       crng_reseed();
        }
-       spin_lock_irqsave(&crng->lock, flags);
-       chacha20_block(&crng->state[0], out);
-       if (crng->state[12] == 0)
-               crng->state[13]++;
-       spin_unlock_irqrestore(&crng->lock, flags);
-}
-
-static void extract_crng(u8 out[CHACHA_BLOCK_SIZE])
-{
-       _extract_crng(select_crng(), out);
+       spin_lock_irqsave(&primary_crng.lock, flags);
+       chacha20_block(&primary_crng.state[0], out);
+       if (primary_crng.state[12] == 0)
+               primary_crng.state[13]++;
+       spin_unlock_irqrestore(&primary_crng.lock, flags);
 }

 /*
  * Use the leftover bytes from the CRNG block output (if there is
  * enough) to mutate the CRNG key to provide backtracking protection.
  */
-static void _crng_backtrack_protect(struct crng_state *crng,
-                                    u8 tmp[CHACHA_BLOCK_SIZE], int used)
+static void crng_backtrack_protect(u8 tmp[CHACHA_BLOCK_SIZE], int used)
 {
        unsigned long flags;
        u32 *s, *d;
@@ -769,17 +659,12 @@ static void _crng_backtrack_protect(struct crng_state *crng,
                extract_crng(tmp);
                used = 0;
        }
-       spin_lock_irqsave(&crng->lock, flags);
+       spin_lock_irqsave(&primary_crng.lock, flags);
        s = (u32 *)&tmp[used];
-       d = &crng->state[4];
+       d = &primary_crng.state[4];
        for (i = 0; i < 8; i++)
                *d++ ^= *s++;
-       spin_unlock_irqrestore(&crng->lock, flags);
-}
-
-static void crng_backtrack_protect(u8 tmp[CHACHA_BLOCK_SIZE], int used)
-{
-       _crng_backtrack_protect(select_crng(), tmp, used);
+       spin_unlock_irqrestore(&primary_crng.lock, flags);
 }

 static ssize_t extract_crng_user(void __user *buf, size_t nbytes)
@@ -1381,10 +1266,7 @@ static void __init init_std_data(void)
 int __init rand_initialize(void)
 {
        init_std_data();
-       if (crng_need_final_init)
-               crng_finalize_init();
-       crng_initialize_primary();
-       crng_global_init_time = jiffies;
+       crng_initialize();
        if (ratelimit_disable) {
                urandom_warning.interval = 0;
                unseeded_warning.interval = 0;
@@ -1554,8 +1436,7 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
                        return -EPERM;
                if (crng_init < 2)
                        return -ENODATA;
-               crng_reseed(&primary_crng);
-               WRITE_ONCE(crng_global_init_time, jiffies - 1);
+               crng_reseed();
                return 0;
        default:
                return -EINVAL;
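One detail worth dwelling on in the consolidated crng_reseed() above is
how it claims the pool's entropy credit without taking a lock: a
READ_ONCE()/cmpxchg() loop either atomically zeroes the counter or bails
out if the credit is insufficient or someone else got there first. A
userspace analogue using C11 atomics (a sketch; the names and the
POOL_MIN_BITS value are illustrative stand-ins, not the kernel API):

/* C11-atomics analogue of the entropy-credit claim in crng_reseed(). */
#include <stdatomic.h>
#include <stdbool.h>

#define POOL_MIN_BITS 256

static _Atomic int entropy_count;

static bool claim_entropy(void)
{
        int count = atomic_load(&entropy_count);

        do {
                /* Not enough credit yet: leave it in place. */
                if (count < POOL_MIN_BITS)
                        return false;
                /* Try to take all of it at once; on failure, 'count'
                 * is refreshed with the current value and we retry. */
        } while (!atomic_compare_exchange_weak(&entropy_count, &count, 0));

        return true; /* We own the credit; safe to extract a new key. */
}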
From patchwork Tue Feb 8 15:53:31 2022
From: "Jason A. Donenfeld"
To: linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org
Cc: "Jason A. Donenfeld", Theodore Ts'o, Dominik Brodowski
Subject: [PATCH v1 3/7] random: inline leaves of rand_initialize()
Date: Tue, 8 Feb 2022 16:53:31 +0100
Message-Id: <20220208155335.378318-4-Jason@zx2c4.com>
In-Reply-To: <20220208155335.378318-1-Jason@zx2c4.com>
References: <20220208155335.378318-1-Jason@zx2c4.com>

This is a preparatory commit for the following one. We simply inline the
various functions that rand_initialize() calls that have no other
callers. The compiler was doing this anyway before. Doing this will
allow us to reorganize the code afterwards.

Cc: Theodore Ts'o
Cc: Dominik Brodowski
Signed-off-by: Jason A. Donenfeld
Reviewed-by: Eric Biggers
---
 drivers/char/random.c | 90 ++++++++++++++++---------------------------
 1 file changed, 33 insertions(+), 57 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 59d8d38f6a07..db0e0e77613e 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -471,42 +471,6 @@ static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait);

 static void invalidate_batched_entropy(void);

-static bool trust_cpu __ro_after_init = IS_ENABLED(CONFIG_RANDOM_TRUST_CPU);
-static int __init parse_trust_cpu(char *arg)
-{
-       return kstrtobool(arg, &trust_cpu);
-}
-early_param("random.trust_cpu", parse_trust_cpu);
-
-static bool __init crng_init_try_arch_early(void)
-{
-       int i;
-       bool arch_init = true;
-       unsigned long rv;
-
-       for (i = 4; i < 16; i++) {
-               if (!arch_get_random_seed_long_early(&rv) &&
-                   !arch_get_random_long_early(&rv)) {
-                       rv = random_get_entropy();
-                       arch_init = false;
-               }
-               primary_crng.state[i] ^= rv;
-       }
-
-       return arch_init;
-}
-
-static void __init crng_initialize(void)
-{
-       extract_entropy(&primary_crng.state[4], sizeof(u32) * 12);
-       if (crng_init_try_arch_early() && trust_cpu && crng_init < 2) {
-               invalidate_batched_entropy();
-               crng_init = 2;
-               pr_notice("crng init done (trusting CPU's manufacturer)\n");
-       }
-       primary_crng.init_time = jiffies - CRNG_RESEED_INTERVAL - 1;
-}
-
 /*
  * crng_fast_load() can be called by code in the interrupt service
  * path.  So we can't afford to dilly-dally. Returns the number of
@@ -1230,17 +1194,28 @@ int __must_check get_random_bytes_arch(void *buf, int nbytes)
 }
 EXPORT_SYMBOL(get_random_bytes_arch);

+static bool trust_cpu __ro_after_init = IS_ENABLED(CONFIG_RANDOM_TRUST_CPU);
+static int __init parse_trust_cpu(char *arg)
+{
+       return kstrtobool(arg, &trust_cpu);
+}
+early_param("random.trust_cpu", parse_trust_cpu);
+
 /*
- * init_std_data - initialize pool with system data
- *
- * This function clears the pool's entropy count and mixes some system
- * data into the pool to prepare it for use.  The pool is not cleared
- * as that can only decrease the entropy in the pool.
+ * Note that setup_arch() may call add_device_randomness()
+ * long before we get here. This allows seeding of the pools
+ * with some platform dependent data very early in the boot
+ * process. But it limits our options here. We must use
+ * statically allocated structures that already have all
+ * initializations complete at compile time. We should also
+ * take care not to overwrite the precious per platform data
+ * we were given.
  */
-static void __init init_std_data(void)
+int __init rand_initialize(void)
 {
        int i;
        ktime_t now = ktime_get_real();
+       bool arch_init = true;
        unsigned long rv;

        mix_pool_bytes(&now, sizeof(now));
@@ -1251,22 +1226,23 @@ static void __init init_std_data(void)
                mix_pool_bytes(&rv, sizeof(rv));
        }
        mix_pool_bytes(utsname(), sizeof(*(utsname())));
-}

-/*
- * Note that setup_arch() may call add_device_randomness()
- * long before we get here. This allows seeding of the pools
- * with some platform dependent data very early in the boot
- * process. But it limits our options here. We must use
- * statically allocated structures that already have all
- * initializations complete at compile time. We should also
- * take care not to overwrite the precious per platform data
- * we were given.
- */
-int __init rand_initialize(void)
-{
-       init_std_data();
-       crng_initialize();
+       extract_entropy(&primary_crng.state[4], sizeof(u32) * 12);
+       for (i = 4; i < 16; i++) {
+               if (!arch_get_random_seed_long_early(&rv) &&
+                   !arch_get_random_long_early(&rv)) {
+                       rv = random_get_entropy();
+                       arch_init = false;
+               }
+               primary_crng.state[i] ^= rv;
+       }
+       if (arch_init && trust_cpu && crng_init < 2) {
+               invalidate_batched_entropy();
+               crng_init = 2;
+               pr_notice("crng init done (trusting CPU's manufacturer)\n");
+       }
+       primary_crng.init_time = jiffies - CRNG_RESEED_INTERVAL - 1;
+
        if (ratelimit_disable) {
                urandom_warning.interval = 0;
                unseeded_warning.interval = 0;
From patchwork Tue Feb 8 15:53:32 2022
From: "Jason A. Donenfeld"
To: linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org
Cc: "Jason A. Donenfeld", Theodore Ts'o, Dominik Brodowski
Subject: [PATCH v1 4/7] random: ensure early RDSEED goes through mixer on init
Date: Tue, 8 Feb 2022 16:53:32 +0100
Message-Id: <20220208155335.378318-5-Jason@zx2c4.com>
In-Reply-To: <20220208155335.378318-1-Jason@zx2c4.com>
References: <20220208155335.378318-1-Jason@zx2c4.com>

Continuing the reasoning of "random: use RDSEED instead of RDRAND in
entropy extraction" from this series, at init time we also don't want to
be xoring RDSEED directly into the crng. Instead it's safer to put it
into our entropy collector and then re-extract it, so that it goes
through a hash function with preimage resistance.

Cc: Theodore Ts'o
Cc: Dominik Brodowski
Signed-off-by: Jason A. Donenfeld
Reviewed-by: Eric Biggers
---
 drivers/char/random.c | 14 ++++----------
 1 file changed, 4 insertions(+), 10 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index db0e0e77613e..2bd19dce822d 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1218,24 +1218,18 @@ int __init rand_initialize(void)
        bool arch_init = true;
        unsigned long rv;

+       mix_pool_bytes(utsname(), sizeof(*(utsname())));
        mix_pool_bytes(&now, sizeof(now));
        for (i = BLAKE2S_BLOCK_SIZE; i > 0; i -= sizeof(rv)) {
-               if (!arch_get_random_seed_long(&rv) &&
-                   !arch_get_random_long(&rv))
-                       rv = random_get_entropy();
-               mix_pool_bytes(&rv, sizeof(rv));
-       }
-       mix_pool_bytes(utsname(), sizeof(*(utsname())));
-
-       extract_entropy(&primary_crng.state[4], sizeof(u32) * 12);
-       for (i = 4; i < 16; i++) {
                if (!arch_get_random_seed_long_early(&rv) &&
                    !arch_get_random_long_early(&rv)) {
                        rv = random_get_entropy();
                        arch_init = false;
                }
-               primary_crng.state[i] ^= rv;
+               mix_pool_bytes(&rv, sizeof(rv));
        }
+
+       extract_entropy(&primary_crng.state[4], sizeof(u32) * 12);
        if (arch_init && trust_cpu && crng_init < 2) {
                invalidate_batched_entropy();
                crng_init = 2;
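The shape of the change is easiest to see outside the diff: every input,
trusted or not, is absorbed into the hash-based pool, and key material
is only ever derived by extracting from that pool. A hedged userspace
sketch of the same mix-then-extract pattern, with libsodium's BLAKE2b
standing in for the kernel's BLAKE2s and the re-keying dance simplified
(all names here are illustrative, not the kernel's):

/* Mix-then-extract sketch: inputs are absorbed into a hash state;
 * keys are only ever the hash output. Assumes libsodium. */
#include <sodium.h>

static crypto_generichash_state pool;

static void mix_pool_bytes(const void *in, size_t len)
{
        crypto_generichash_update(&pool, in, len);
}

static void extract_entropy(unsigned char key[32])
{
        crypto_generichash_final(&pool, key, 32);
        /* Re-key the pool with the extracted value so mixing can
         * continue (a simplification of the kernel's HASHPRF
         * re-keying). */
        crypto_generichash_init(&pool, key, 32, 32);
}

int main(void)
{
        unsigned char key[32];
        unsigned long hwrand;

        if (sodium_init() < 0)
                return 1;
        crypto_generichash_init(&pool, NULL, 0, 32);
        randombytes_buf(&hwrand, sizeof(hwrand)); /* stand-in for RDSEED */
        mix_pool_bytes(&hwrand, sizeof(hwrand));  /* never xored in raw */
        extract_entropy(key);
        return 0;
}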
Donenfeld" X-Patchwork-Id: 12738965 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 90C12C433EF for ; Tue, 8 Feb 2022 15:54:23 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1382211AbiBHPyV (ORCPT ); Tue, 8 Feb 2022 10:54:21 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35420 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1380361AbiBHPyO (ORCPT ); Tue, 8 Feb 2022 10:54:14 -0500 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A5955C061576; Tue, 8 Feb 2022 07:54:13 -0800 (PST) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 6E781B81BC2; Tue, 8 Feb 2022 15:54:12 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id C6F1AC004E1; Tue, 8 Feb 2022 15:54:10 +0000 (UTC) Authentication-Results: smtp.kernel.org; dkim=pass (1024-bit key) header.d=zx2c4.com header.i=@zx2c4.com header.b="cms9AXVg" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=zx2c4.com; s=20210105; t=1644335649; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=3vijYymYJ1hV/XdaO0shzH+kc8Al+CbQUsKrgDFf33k=; b=cms9AXVgnXTyC/hzNu7Xf08Mip9NrgN5O0j4KPV4hytC8ddX3uTSkj1NQSif5s+ARh0sUj YX/FBwXU2++6jMG5wYCAn08kOQ5b1UIk4x8S6NS+7JE4khjvnUCWGEGLg09lHs5QNlFFt5 lx9A2PjkEblsAtWploGkwYREIoMXK5U= Received: by mail.zx2c4.com (ZX2C4 Mail Server) with ESMTPSA id 0545e3c2 (TLSv1.3:AEAD-AES256-GCM-SHA384:256:NO); Tue, 8 Feb 2022 15:54:09 +0000 (UTC) From: "Jason A. Donenfeld" To: linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org Cc: "Jason A. Donenfeld" , Theodore Ts'o , Dominik Brodowski Subject: [PATCH v1 5/7] random: do not xor RDRAND when writing into /dev/random Date: Tue, 8 Feb 2022 16:53:33 +0100 Message-Id: <20220208155335.378318-6-Jason@zx2c4.com> In-Reply-To: <20220208155335.378318-1-Jason@zx2c4.com> References: <20220208155335.378318-1-Jason@zx2c4.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Continuing the reasoning of "random: ensure early RDSEED goes through mixer on init", we don't want RDRAND interacting with anything without going through the mixer function, as a backdoored CPU could presumably cancel out data during an xor, which it'd have a harder time doing when being forced through a cryptographic hash function. There's actually no need at all to be calling RDRAND in write_pool(), because before we extract from the pool, we always do so with 32 bytes of RDSEED hashed in at that stage. Xoring at this stage is needless and introduces a minor liability. Cc: Theodore Ts'o Cc: Dominik Brodowski Signed-off-by: Jason A. 
Reviewed-by: Eric Biggers
---
 drivers/char/random.c | 14 ++------------
 1 file changed, 2 insertions(+), 12 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 2bd19dce822d..ed7fcef1ba31 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1315,25 +1315,15 @@ static __poll_t random_poll(struct file *file, poll_table *wait)
 static int write_pool(const char __user *buffer, size_t count)
 {
        size_t bytes;
-       u32 t, buf[16];
+       u8 buf[BLAKE2S_BLOCK_SIZE];
        const char __user *p = buffer;

        while (count > 0) {
-               int b, i = 0;
-
                bytes = min(count, sizeof(buf));
-               if (copy_from_user(&buf, p, bytes))
+               if (copy_from_user(buf, p, bytes))
                        return -EFAULT;
-
-               for (b = bytes; b > 0; b -= sizeof(u32), i++) {
-                       if (!arch_get_random_int(&t))
-                               break;
-                       buf[i] ^= t;
-               }
-
                count -= bytes;
                p += bytes;
-
                mix_pool_bytes(buf, bytes);
                cond_resched();
        }
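The "cancel out data during an xor" concern is easy to demonstrate in
isolation: if an adversary can observe or predict the value being xored
into, it can choose its contribution to force any target state, whereas
a hash in the path would require a preimage. A small standalone
illustration (plain C; the values are of course illustrative):

/* Why xoring adversary-chosen input is risky: with the current state
 * known, choosing r = state ^ target forces the state to 'target'. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint64_t state = 0xdeadbeefcafef00dULL; /* known to the attacker */
        uint64_t target = 0;                    /* attacker's goal */
        uint64_t r = state ^ target;            /* malicious "random" input */

        state ^= r;
        printf("state after xor: %016llx\n", (unsigned long long)state);
        return 0;
}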
Donenfeld" , Theodore Ts'o , Dominik Brodowski , Sebastian Andrzej Siewior Subject: [PATCH v1 6/7] random: use simpler fast key erasure flow on per-cpu keys Date: Tue, 8 Feb 2022 16:53:34 +0100 Message-Id: <20220208155335.378318-7-Jason@zx2c4.com> In-Reply-To: <20220208155335.378318-1-Jason@zx2c4.com> References: <20220208155335.378318-1-Jason@zx2c4.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Rather than the clunky NUMA full ChaCha state system we had prior, this commit is closer to the original "fast key erasure RNG" proposal from , by simply treating ChaCha keys on a per-cpu basis. All entropy is extracted to a base crng key of 32 bytes. This base crng has a birthdate and a generation counter. When we go to take bytes from the crng, we first check if the birthdate is too old; if it is, we reseed per usual. Then we start working on a per-cpu crng. This per-cpu crng makes sure that it has the same generation counter as the base crng. If it doesn't, it does fast key erasure with the base crng key and uses the output as its new per-cpu key, and then updates its local generation counter. Then, using this per-cpu state, we do ordinary fast key erasure. Half of this first block is used to overwrite the per-cpu crng key for the next call -- this is the fast key erasure RNG idea -- and the other half, along with the ChaCha state, is returned to the caller. If the caller desires more than this remaining half, it can generate more ChaCha blocks, unlocked, using the now detached ChaCha state that was just returned. Crypto-wise, this is more or less what we were doing before, but this simply makes it more explicit and ensures that we always have backtrack protection by not playing games with a shared block counter. The flow looks like this: ──extract()──► base_crng.key ◄──memcpy()───┐ │ │ └──chacha()──────┬─► new_base_key └─► crngs[n].key ◄──memcpy()───┐ │ │ └──chacha()───┬─► new_key └─► random_bytes │ └────► There are a few hairy details around early init. Just as was done before, prior to having gathered enough entropy, crng_fast_load() and crng_slow_load() dump bytes directly into the base crng, and when we go to take bytes from the crng, in that case, we're doing fast key erasure with the base crng rather than the fast unlocked per-cpu crngs. This is fine as that's only the state of affairs during very early boot; once the crng initializes we never use these paths again. In the process of all this, the APIs into the crng become a bit simpler: we have get_random_bytes(buf, len) and get_random_bytes_user(buf, len), which both do what you'd expect. All of the details of fast key erasure and per-cpu selection happen only in a very short critical section of crng_make_state(), which selects the right per-cpu key, does the fast key erasure, and returns a local state to the caller's stack. So, we no longer have a need for a separate backtrack function, as this happens all at once here. The API then allows us to extend backtrack protection to batched entropy without really having to do much at all. The result is a bit simpler than before and has fewer foot guns. The init time state machine also gets a lot simpler as we don't need to wait for workqueues to come online and do deferred work. And the multi-core performance should be increased significantly, by virtue of having hardly any locking on the fast path. Cc: Theodore Ts'o Cc: Dominik Brodowski Cc: Sebastian Andrzej Siewior Signed-off-by: Jason A. 
---
 drivers/char/random.c | 278 +++++++++++++++++++++++++-----------------
 1 file changed, 169 insertions(+), 109 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index ed7fcef1ba31..ac22dc94b037 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -300,20 +300,6 @@ static struct fasync_struct *fasync;
 static DEFINE_SPINLOCK(random_ready_list_lock);
 static LIST_HEAD(random_ready_list);

-struct crng_state {
-       u32 state[16];
-       unsigned long init_time;
-       spinlock_t lock;
-};
-
-static struct crng_state primary_crng = {
-       .lock = __SPIN_LOCK_UNLOCKED(primary_crng.lock),
-       .state[0] = CHACHA_CONSTANT_EXPA,
-       .state[1] = CHACHA_CONSTANT_ND_3,
-       .state[2] = CHACHA_CONSTANT_2_BY,
-       .state[3] = CHACHA_CONSTANT_TE_K,
-};
-
 /*
  * crng_init = 0 --> Uninitialized
  *            1 --> Initialized
@@ -325,9 +311,6 @@ static struct crng_state primary_crng = {
 static int crng_init = 0;
 #define crng_ready() (likely(crng_init > 1))
 static int crng_init_cnt = 0;
-#define CRNG_INIT_CNT_THRESH (2 * CHACHA_KEY_SIZE)
-static void extract_crng(u8 out[CHACHA_BLOCK_SIZE]);
-static void crng_backtrack_protect(u8 tmp[CHACHA_BLOCK_SIZE], int used);
 static void process_random_ready_list(void);
 static void _get_random_bytes(void *buf, int nbytes);
@@ -465,7 +448,28 @@ static void credit_entropy_bits(int nbits)
  *
  *********************************************************************/

-#define CRNG_RESEED_INTERVAL (300 * HZ)
+enum {
+       CRNG_RESEED_INTERVAL = 300 * HZ,
+       CRNG_INIT_CNT_THRESH = 2 * CHACHA_KEY_SIZE
+};
+
+struct {
+       u8 key[CHACHA_KEY_SIZE] __aligned(__alignof__(long));
+       unsigned long birth;
+       unsigned long generation;
+       spinlock_t lock;
+} base_crng;
+
+struct crng {
+       u8 key[CHACHA_KEY_SIZE];
+       unsigned long generation;
+       local_lock_t lock;
+};
+
+static DEFINE_PER_CPU(struct crng, crngs) = {
+       .generation = ULONG_MAX,
+       .lock = INIT_LOCAL_LOCK(crngs.lock),
+};

 static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait);

@@ -482,22 +486,22 @@ static size_t crng_fast_load(const u8 *cp, size_t len)
        u8 *p;
        size_t ret = 0;

-       if (!spin_trylock_irqsave(&primary_crng.lock, flags))
+       if (!spin_trylock_irqsave(&base_crng.lock, flags))
                return 0;
        if (crng_init != 0) {
-               spin_unlock_irqrestore(&primary_crng.lock, flags);
+               spin_unlock_irqrestore(&base_crng.lock, flags);
                return 0;
        }
-       p = (u8 *)&primary_crng.state[4];
+       p = base_crng.key;
        while (len > 0 && crng_init_cnt < CRNG_INIT_CNT_THRESH) {
-               p[crng_init_cnt % CHACHA_KEY_SIZE] ^= *cp;
+               p[crng_init_cnt % sizeof(base_crng.key)] ^= *cp;
                cp++; crng_init_cnt++; len--; ret++;
        }
        if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) {
                invalidate_batched_entropy();
                crng_init = 1;
        }
-       spin_unlock_irqrestore(&primary_crng.lock, flags);
+       spin_unlock_irqrestore(&base_crng.lock, flags);
        if (crng_init == 1)
                pr_notice("fast init done\n");
        return ret;
@@ -522,14 +526,14 @@ static int crng_slow_load(const u8 *cp, size_t len)
        unsigned long flags;
        static u8 lfsr = 1;
        u8 tmp;
-       unsigned int i, max = CHACHA_KEY_SIZE;
+       unsigned int i, max = sizeof(base_crng.key);
        const u8 *src_buf = cp;
-       u8 *dest_buf = (u8 *)&primary_crng.state[4];
+       u8 *dest_buf = base_crng.key;

-       if (!spin_trylock_irqsave(&primary_crng.lock, flags))
+       if (!spin_trylock_irqsave(&base_crng.lock, flags))
                return 0;
        if (crng_init != 0) {
-               spin_unlock_irqrestore(&primary_crng.lock, flags);
+               spin_unlock_irqrestore(&base_crng.lock, flags);
                return 0;
        }
        if (len > max)
@@ -540,38 +544,40 @@ static int crng_slow_load(const u8 *cp, size_t len)
                lfsr >>= 1;
                if (tmp & 1)
                        lfsr ^= 0xE1;
-               tmp = dest_buf[i % CHACHA_KEY_SIZE];
-               dest_buf[i % CHACHA_KEY_SIZE] ^= src_buf[i % len] ^ lfsr;
+               tmp = dest_buf[i % sizeof(base_crng.key)];
+               dest_buf[i % sizeof(base_crng.key)] ^= src_buf[i % len] ^ lfsr;
                lfsr += (tmp << 3) | (tmp >> 5);
        }
-       spin_unlock_irqrestore(&primary_crng.lock, flags);
+       spin_unlock_irqrestore(&base_crng.lock, flags);
        return 1;
 }

 static void crng_reseed(void)
 {
        unsigned long flags;
-       int i, entropy_count;
-       union {
-               u8 block[CHACHA_BLOCK_SIZE];
-               u32 key[8];
-       } buf;
+       int entropy_count;
+       unsigned long next_gen;
+       u8 key[CHACHA_KEY_SIZE];

        do {
                entropy_count = READ_ONCE(input_pool.entropy_count);
                if (entropy_count < POOL_MIN_BITS)
                        return;
        } while (cmpxchg(&input_pool.entropy_count, entropy_count, 0) != entropy_count);
-       extract_entropy(buf.key, sizeof(buf.key));
+       extract_entropy(key, sizeof(key));
        wake_up_interruptible(&random_write_wait);
        kill_fasync(&fasync, SIGIO, POLL_OUT);

-       spin_lock_irqsave(&primary_crng.lock, flags);
-       for (i = 0; i < 8; i++)
-               primary_crng.state[i + 4] ^= buf.key[i];
-       memzero_explicit(&buf, sizeof(buf));
-       WRITE_ONCE(primary_crng.init_time, jiffies);
-       spin_unlock_irqrestore(&primary_crng.lock, flags);
+       spin_lock_irqsave(&base_crng.lock, flags);
+       memcpy(base_crng.key, key, sizeof(base_crng.key));
+       next_gen = base_crng.generation + 1;
+       if (next_gen == ULONG_MAX)
+               ++next_gen;
+       WRITE_ONCE(base_crng.generation, next_gen);
+       base_crng.birth = jiffies;
+       spin_unlock_irqrestore(&base_crng.lock, flags);
+       memzero_explicit(key, sizeof(key));
+
        if (crng_init < 2) {
                invalidate_batched_entropy();
                crng_init = 2;
@@ -592,50 +598,88 @@ static void crng_reseed(void)
        }
 }

-static void extract_crng(u8 out[CHACHA_BLOCK_SIZE])
+/*
+ * The general form here is based on a "fast key erasure RNG" from:
+ * https://blog.cr.yp.to/20170723-random.html
+ */
+static void crng_fast_key_erasure(u8 key[CHACHA_KEY_SIZE],
+                                 u32 chacha_state[CHACHA_STATE_WORDS],
+                                 u8 random_data[32], size_t random_data_len)
 {
-       unsigned long flags, init_time;
+       u8 first_block[CHACHA_BLOCK_SIZE];

-       if (crng_ready()) {
-               init_time = READ_ONCE(primary_crng.init_time);
-               if (time_after(jiffies, init_time + CRNG_RESEED_INTERVAL))
-                       crng_reseed();
-       }
-       spin_lock_irqsave(&primary_crng.lock, flags);
-       chacha20_block(&primary_crng.state[0], out);
-       if (primary_crng.state[12] == 0)
-               primary_crng.state[13]++;
-       spin_unlock_irqrestore(&primary_crng.lock, flags);
+       chacha_init_consts(chacha_state);
+       memcpy(&chacha_state[4], key, CHACHA_KEY_SIZE);
+       memset(&chacha_state[12], 0, sizeof(u32) * 4);
+       chacha20_block(chacha_state, first_block);
+
+       memcpy(key, first_block, CHACHA_KEY_SIZE);
+       memcpy(random_data, first_block + CHACHA_KEY_SIZE, random_data_len);
+       memzero_explicit(first_block, sizeof(first_block));
 }

 /*
- * Use the leftover bytes from the CRNG block output (if there is
- * enough) to mutate the CRNG key to provide backtracking protection.
+ * This function returns a ChaCha block that you may use for generating
+ * random data. It also returns up to 32 bytes on its own of random data
+ * that may be used.
  */
-static void crng_backtrack_protect(u8 tmp[CHACHA_BLOCK_SIZE], int used)
+static void crng_make_state(u32 chacha_state[CHACHA_STATE_WORDS],
+                           u8 random_data[32], size_t random_data_len)
 {
        unsigned long flags;
-       u32 *s, *d;
-       int i;
+       struct crng *crng;
+
+       /* For the fast path, we check this unlocked first. */
+       if (unlikely(!crng_ready())) {
+               bool not_ready;
+
+               spin_lock_irqsave(&base_crng.lock, flags);
+               /* Now that we're locked, re-check. */
+               not_ready = !crng_ready();
+               if (not_ready)
+                       crng_fast_key_erasure(base_crng.key, chacha_state,
+                                             random_data, random_data_len);
+               spin_unlock_irqrestore(&base_crng.lock, flags);
+               if (!not_ready)
+                       return;
+       }
+
+       if (unlikely(time_after(jiffies, READ_ONCE(base_crng.birth) + CRNG_RESEED_INTERVAL)))
+               crng_reseed();
+
+       local_lock_irqsave(&crngs.lock, flags);
+       crng = raw_cpu_ptr(&crngs);

-       used = round_up(used, sizeof(u32));
-       if (used + CHACHA_KEY_SIZE > CHACHA_BLOCK_SIZE) {
-               extract_crng(tmp);
-               used = 0;
+       if (unlikely(crng->generation != READ_ONCE(base_crng.generation))) {
+               spin_lock(&base_crng.lock);
+               crng_fast_key_erasure(base_crng.key, chacha_state,
+                                     crng->key, sizeof(crng->key));
+               crng->generation = base_crng.generation;
+               spin_unlock(&base_crng.lock);
        }
-       spin_lock_irqsave(&primary_crng.lock, flags);
-       s = (u32 *)&tmp[used];
-       d = &primary_crng.state[4];
-       for (i = 0; i < 8; i++)
-               *d++ ^= *s++;
-       spin_unlock_irqrestore(&primary_crng.lock, flags);
+
+       crng_fast_key_erasure(crng->key, chacha_state, random_data, random_data_len);
+       local_unlock_irqrestore(&crngs.lock, flags);
 }

-static ssize_t extract_crng_user(void __user *buf, size_t nbytes)
+static ssize_t get_random_bytes_user(void __user *buf, size_t nbytes)
 {
-       ssize_t ret = 0, i = CHACHA_BLOCK_SIZE;
-       u8 tmp[CHACHA_BLOCK_SIZE] __aligned(4);
-       int large_request = (nbytes > 256);
+       bool large_request = (nbytes > 256);
+       ssize_t ret = 0, len;
+       u32 chacha_state[CHACHA_STATE_WORDS];
+       u8 output[CHACHA_BLOCK_SIZE];
+
+       if (!nbytes)
+               return 0;
+
+       len = min_t(ssize_t, 32, nbytes);
+       crng_make_state(chacha_state, output, len);
+
+       if (copy_to_user(buf, output, len))
+               return -EFAULT;
+       nbytes -= len;
+       buf += len;
+       ret += len;

        while (nbytes) {
                if (large_request && need_resched()) {
@@ -647,22 +691,23 @@ static ssize_t extract_crng_user(void __user *buf, size_t nbytes)
                        schedule();
                }

-               extract_crng(tmp);
-               i = min_t(int, nbytes, CHACHA_BLOCK_SIZE);
-               if (copy_to_user(buf, tmp, i)) {
+               chacha20_block(chacha_state, output);
+               if (unlikely(chacha_state[12] == 0))
+                       ++chacha_state[13];
+
+               len = min_t(ssize_t, nbytes, CHACHA_BLOCK_SIZE);
+               if (copy_to_user(buf, output, len)) {
                        ret = -EFAULT;
                        break;
                }
-               nbytes -= i;
-               buf += i;
-               ret += i;
+               nbytes -= len;
+               buf += len;
+               ret += len;
        }

-       crng_backtrack_protect(tmp, i);
-
-       /* Wipe data just written to memory */
-       memzero_explicit(tmp, sizeof(tmp));
+       memzero_explicit(chacha_state, sizeof(chacha_state));
+       memzero_explicit(output, sizeof(output));
        return ret;
 }

@@ -982,23 +1027,37 @@ static void _warn_unseeded_randomness(const char *func_name, void *caller, void
  */
 static void _get_random_bytes(void *buf, int nbytes)
 {
-       u8 tmp[CHACHA_BLOCK_SIZE] __aligned(4);
+       u32 chacha_state[CHACHA_STATE_WORDS];
+       u8 tmp[CHACHA_BLOCK_SIZE];
+       ssize_t len;

        trace_get_random_bytes(nbytes, _RET_IP_);

-       while (nbytes >= CHACHA_BLOCK_SIZE) {
-               extract_crng(buf);
-               buf += CHACHA_BLOCK_SIZE;
-               nbytes -= CHACHA_BLOCK_SIZE;
+       if (!nbytes)
+               return;
+
+       len = min_t(ssize_t, 32, nbytes);
+       crng_make_state(chacha_state, buf, len);
+       nbytes -= len;
+       buf += len;
+
+       while (nbytes) {
+               if (nbytes < CHACHA_BLOCK_SIZE) {
+                       chacha20_block(chacha_state, tmp);
+                       memcpy(buf, tmp, nbytes);
+                       memzero_explicit(tmp, sizeof(tmp));
+                       break;
+               }
+
+               len = min_t(ssize_t, nbytes, CHACHA_BLOCK_SIZE);
+               chacha20_block(chacha_state, buf);
+               if (unlikely(chacha_state[12] == 0))
+                       ++chacha_state[13];
+               nbytes -= len;
+               buf += len;
        }

-       if (nbytes > 0) {
-               extract_crng(tmp);
-               memcpy(buf, tmp, nbytes);
-               crng_backtrack_protect(tmp, nbytes);
-       } else
-               crng_backtrack_protect(tmp, CHACHA_BLOCK_SIZE);
-       memzero_explicit(tmp, sizeof(tmp));
+       memzero_explicit(chacha_state, sizeof(chacha_state));
 }

 void get_random_bytes(void *buf, int nbytes)
@@ -1229,13 +1288,12 @@ int __init rand_initialize(void)
                mix_pool_bytes(&rv, sizeof(rv));
        }

-       extract_entropy(&primary_crng.state[4], sizeof(u32) * 12);
+       extract_entropy(base_crng.key, sizeof(base_crng.key));
        if (arch_init && trust_cpu && crng_init < 2) {
                invalidate_batched_entropy();
                crng_init = 2;
                pr_notice("crng init done (trusting CPU's manufacturer)\n");
        }
-       primary_crng.init_time = jiffies - CRNG_RESEED_INTERVAL - 1;

        if (ratelimit_disable) {
                urandom_warning.interval = 0;
@@ -1267,7 +1325,7 @@ static ssize_t urandom_read_nowarn(struct file *file, char __user *buf,
        int ret;

        nbytes = min_t(size_t, nbytes, INT_MAX >> 6);
-       ret = extract_crng_user(buf, nbytes);
+       ret = get_random_bytes_user(buf, nbytes);
        trace_urandom_read(8 * nbytes, 0, input_pool.entropy_count);
        return ret;
 }
@@ -1583,8 +1641,8 @@ static atomic_t batch_generation = ATOMIC_INIT(0);

 struct batched_entropy {
        union {
-               u64 entropy_u64[CHACHA_BLOCK_SIZE / sizeof(u64)];
-               u32 entropy_u32[CHACHA_BLOCK_SIZE / sizeof(u32)];
+               u64 entropy_u64[CHACHA_BLOCK_SIZE * 3 / (2 * sizeof(u64))];
+               u32 entropy_u32[CHACHA_BLOCK_SIZE * 3 / (2 * sizeof(u32))];
        };
        local_lock_t lock;
        unsigned int position;
@@ -1593,11 +1651,9 @@ struct batched_entropy {

 /*
  * Get a random word for internal kernel use only. The quality of the random
- * number is good as /dev/urandom, but there is no backtrack protection, with
- * the goal of being quite fast and not depleting entropy. In order to ensure
- * that the randomness provided by this function is okay, the function
- * wait_for_random_bytes() should be called and return 0 at least once at any
- * point prior.
+ * number is good as /dev/urandom. In order to ensure that the randomness
+ * provided by this function is okay, the function wait_for_random_bytes()
+ * should be called and return 0 at least once at any point prior.
  */
 static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64) = {
        .lock = INIT_LOCAL_LOCK(batched_entropy_u64.lock)
@@ -1619,12 +1675,14 @@ u64 get_random_u64(void)
        next_gen = atomic_read(&batch_generation);
        if (batch->position % ARRAY_SIZE(batch->entropy_u64) == 0 ||
            next_gen != batch->generation) {
-               extract_crng((u8 *)batch->entropy_u64);
+               get_random_bytes(batch->entropy_u64, sizeof(batch->entropy_u64));
                batch->position = 0;
                batch->generation = next_gen;
        }

-       ret = batch->entropy_u64[batch->position++];
+       ret = batch->entropy_u64[batch->position];
+       batch->entropy_u64[batch->position] = 0;
+       ++batch->position;
        local_unlock_irqrestore(&batched_entropy_u64.lock, flags);
        return ret;
 }
@@ -1650,12 +1708,14 @@ u32 get_random_u32(void)
        next_gen = atomic_read(&batch_generation);
        if (batch->position % ARRAY_SIZE(batch->entropy_u32) == 0 ||
            next_gen != batch->generation) {
-               extract_crng((u8 *)batch->entropy_u32);
+               get_random_bytes(batch->entropy_u32, sizeof(batch->entropy_u32));
                batch->position = 0;
                batch->generation = next_gen;
        }

-       ret = batch->entropy_u32[batch->position++];
+       ret = batch->entropy_u32[batch->position];
+       batch->entropy_u32[batch->position] = 0;
+       ++batch->position;
        local_unlock_irqrestore(&batched_entropy_u32.lock, flags);
        return ret;
 }
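The crng_fast_key_erasure() primitive added above is small enough to
restate on its own. A minimal userspace sketch follows, with libsodium's
ChaCha20 stream standing in for the kernel's chacha20_block() (an
assumption made so the example is runnable; names are illustrative): one
block of keystream is generated, the first half immediately overwrites
the key, and only the second half is handed out, so compromising the
current key never reveals past output.

/* Fast key erasure sketch, after the construction referenced above.
 * Assumes libsodium. Build: cc fke.c -lsodium */
#include <sodium.h>
#include <string.h>

#define KEY_LEN crypto_stream_chacha20_KEYBYTES /* 32 */

static void fast_key_erasure(unsigned char key[KEY_LEN],
                             unsigned char random_data[32])
{
        unsigned char block[64];
        static const unsigned char nonce[crypto_stream_chacha20_NONCEBYTES];

        /* One ChaCha block keyed with the current key... */
        crypto_stream_chacha20(block, sizeof(block), nonce, key);
        /* ...half overwrites the key, erasing the old one, ... */
        memcpy(key, block, KEY_LEN);
        /* ...and half is handed back to the caller. */
        memcpy(random_data, block + KEY_LEN, 32);
        sodium_memzero(block, sizeof(block));
}

int main(void)
{
        unsigned char key[KEY_LEN], out[32];

        if (sodium_init() < 0)
                return 1;
        randombytes_buf(key, sizeof(key));
        fast_key_erasure(key, out); /* old key is now unrecoverable */
        return 0;
}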
Donenfeld" X-Patchwork-Id: 12738967 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 17E06C4332F for ; Tue, 8 Feb 2022 15:54:24 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1382276AbiBHPyW (ORCPT ); Tue, 8 Feb 2022 10:54:22 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35522 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1382359AbiBHPyV (ORCPT ); Tue, 8 Feb 2022 10:54:21 -0500 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 281EEC061266; Tue, 8 Feb 2022 07:54:20 -0800 (PST) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id E239BB81BD2; Tue, 8 Feb 2022 15:54:18 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 349A4C340E9; Tue, 8 Feb 2022 15:54:17 +0000 (UTC) Authentication-Results: smtp.kernel.org; dkim=pass (1024-bit key) header.d=zx2c4.com header.i=@zx2c4.com header.b="YoX924Qk" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=zx2c4.com; s=20210105; t=1644335656; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=vBP+JsWMxcggJGLLXQ2ZKVQO9bvMzdwL7er8u9wTpII=; b=YoX924QkkUedetsM5jg1+mO+UDCr37VPTT3iB1ap8nt+YrkKYiCo4Sg01f+x9cmOK4wxyX GNyFjY4WktOSa9gVYb5Zo3I+sjigoM/iTQ/Hs+w1C+AWdzIziay13fDveHZgYyW7YxDZl5 MXVmILVdIWW3VUs9PXk2HHy6H+RpSNQ= Received: by mail.zx2c4.com (ZX2C4 Mail Server) with ESMTPSA id 838a1684 (TLSv1.3:AEAD-AES256-GCM-SHA384:256:NO); Tue, 8 Feb 2022 15:54:16 +0000 (UTC) From: "Jason A. Donenfeld" To: linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org Cc: "Jason A. Donenfeld" , Theodore Ts'o , Dominik Brodowski Subject: [PATCH v1 7/7] random: remove outdated INT_MAX >> 6 check in urandom_read() Date: Tue, 8 Feb 2022 16:53:35 +0100 Message-Id: <20220208155335.378318-8-Jason@zx2c4.com> In-Reply-To: <20220208155335.378318-1-Jason@zx2c4.com> References: <20220208155335.378318-1-Jason@zx2c4.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org In 79a8468747c5 ("random: check for increase of entropy_count because of signed conversion"), a number of checks were added around what values were passed to account(), because account() was doing fancy fixed point fractional arithmetic, and a user had some ability to pass large values directly into it. One of things in that commit was limiting those values to INT_MAX >> 6. However, for several years now, urandom reads no longer touch entropy accounting, and so this check serves no purpose. The current flow is: urandom_read_nowarn()-->get_random_bytes_user()-->chacha20_block() We arrive at urandom_read_nowarn() in the first place either via ordinary fops, which limits reads to MAX_RW_COUNT, or via getrandom() which limits reads to INT_MAX. Cc: Theodore Ts'o Cc: Dominik Brodowski Signed-off-by: Jason A. 
---
 drivers/char/random.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index ac22dc94b037..fa366ab38263 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1324,7 +1324,6 @@ static ssize_t urandom_read_nowarn(struct file *file, char __user *buf,
 {
        int ret;

-       nbytes = min_t(size_t, nbytes, INT_MAX >> 6);
        ret = get_random_bytes_user(buf, nbytes);
        trace_urandom_read(8 * nbytes, 0, input_pool.entropy_count);
        return ret;
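For reference, the bounds that remain after this removal come from the
two entry points themselves. A small standalone illustration of the
constants involved (the MAX_RW_COUNT definition mirrors the kernel's;
the scaffolding around it is illustrative only):

/* The read-size ceilings upstream of urandom_read_nowarn(). */
#include <limits.h>
#include <stdio.h>

#define PAGE_SIZE    4096UL
#define MAX_RW_COUNT (INT_MAX & ~(PAGE_SIZE - 1)) /* vfs_read() ceiling */

int main(void)
{
        /* Ordinary fops: the VFS truncates every read to MAX_RW_COUNT. */
        printf("fops bound:      %lu\n", (unsigned long)MAX_RW_COUNT);
        /* getrandom(2): the syscall caps the count at INT_MAX. */
        printf("getrandom bound: %d\n", INT_MAX);
        /* With the min_t(size_t, nbytes, INT_MAX >> 6) clamp removed,
         * reads are still bounded by one of the two limits above. */
        return 0;
}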