From patchwork Wed Jun  7 23:25:55 2017
X-Patchwork-Submitter: "Jason A. Donenfeld" <Jason@zx2c4.com>
X-Patchwork-Id: 9773253
From: "Jason A. Donenfeld" <Jason@zx2c4.com>
To: Theodore Ts'o, Linux Crypto Mailing List, LKML,
 kernel-hardening@lists.openwall.com, Greg Kroah-Hartman, Eric Biggers,
 Linus Torvalds, David Miller
Cc: "Jason A. Donenfeld" <Jason@zx2c4.com>
Date: Thu, 8 Jun 2017 01:25:55 +0200
Message-Id: <20170607232607.26870-2-Jason@zx2c4.com>
In-Reply-To: <20170607232607.26870-1-Jason@zx2c4.com>
References: <20170607232607.26870-1-Jason@zx2c4.com>
Subject: [kernel-hardening] [PATCH v5 01/13] random: invalidate batched
 entropy after crng init

It's possible that get_random_{u32,u64} is used before the crng has
initialized, in which case its output might not be cryptographically
secure. To address that problem directly, this patch set introduces the
*_wait variety of functions, but even with those, a subtle issue
remains: what happens to batched entropy that was generated before
initialization? Prior to this commit, it would stick around, supplying
bad numbers. After this commit, we force the entropy to be re-extracted
after each phase of the crng initializes.

In order to avoid a race condition with the position counter, we
introduce a simple rwlock for this invalidation. Since the lock is only
needed during the awkward transition period, we stop using it once
things are all set up, so that it doesn't have an impact on
performance.

This should probably be backported to 4.11.

(Also: adding my copyright to the top.
With the patch series from
January, this patch, and the ones that come after, I think there's a
relevant amount of code in here to justify adding my name to the top.)

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 drivers/char/random.c | 37 +++++++++++++++++++++++++++++++++++++
 1 file changed, 37 insertions(+)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index a561f0c2f428..d35da1603e12 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1,6 +1,9 @@
 /*
  * random.c -- A strong random number generator
  *
+ * Copyright (C) 2017 Jason A. Donenfeld <Jason@zx2c4.com>. All
+ * Rights Reserved.
+ *
  * Copyright Matt Mackall <mpm@selenic.com>, 2003, 2004, 2005
  *
  * Copyright Theodore Ts'o, 1994, 1995, 1996, 1997, 1998, 1999.  All
@@ -762,6 +765,8 @@ static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait);
 static struct crng_state **crng_node_pool __read_mostly;
 #endif
 
+static void invalidate_batched_entropy(void);
+
 static void crng_initialize(struct crng_state *crng)
 {
 	int		i;
@@ -799,6 +804,7 @@ static int crng_fast_load(const char *cp, size_t len)
 		cp++; crng_init_cnt++; len--;
 	}
 	if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) {
+		invalidate_batched_entropy();
 		crng_init = 1;
 		wake_up_interruptible(&crng_init_wait);
 		pr_notice("random: fast init done\n");
@@ -836,6 +842,7 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r)
 	memzero_explicit(&buf, sizeof(buf));
 	crng->init_time = jiffies;
 	if (crng == &primary_crng && crng_init < 2) {
+		invalidate_batched_entropy();
 		crng_init = 2;
 		process_random_ready_list();
 		wake_up_interruptible(&crng_init_wait);
@@ -2023,6 +2030,7 @@ struct batched_entropy {
 	};
 	unsigned int position;
 };
+static rwlock_t batched_entropy_reset_lock = __RW_LOCK_UNLOCKED(batched_entropy_reset_lock);
 
 /*
  * Get a random word for internal kernel use only. The quality of the random
@@ -2033,6 +2041,8 @@ static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64);
 u64 get_random_u64(void)
 {
 	u64 ret;
+	const bool use_lock = READ_ONCE(crng_init) < 2;
+	unsigned long flags = 0;
 	struct batched_entropy *batch;
 
 #if BITS_PER_LONG == 64
@@ -2045,11 +2055,15 @@ u64 get_random_u64(void)
 #endif
 
 	batch = &get_cpu_var(batched_entropy_u64);
+	if (use_lock)
+		read_lock_irqsave(&batched_entropy_reset_lock, flags);
 	if (batch->position % ARRAY_SIZE(batch->entropy_u64) == 0) {
 		extract_crng((u8 *)batch->entropy_u64);
 		batch->position = 0;
 	}
 	ret = batch->entropy_u64[batch->position++];
+	if (use_lock)
+		read_unlock_irqrestore(&batched_entropy_reset_lock, flags);
 	put_cpu_var(batched_entropy_u64);
 	return ret;
 }
@@ -2059,22 +2073,45 @@ static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u32);
 u32 get_random_u32(void)
 {
 	u32 ret;
+	const bool use_lock = READ_ONCE(crng_init) < 2;
+	unsigned long flags = 0;
 	struct batched_entropy *batch;
 
 	if (arch_get_random_int(&ret))
 		return ret;
 
 	batch = &get_cpu_var(batched_entropy_u32);
+	if (use_lock)
+		read_lock_irqsave(&batched_entropy_reset_lock, flags);
 	if (batch->position % ARRAY_SIZE(batch->entropy_u32) == 0) {
 		extract_crng((u8 *)batch->entropy_u32);
 		batch->position = 0;
 	}
 	ret = batch->entropy_u32[batch->position++];
+	if (use_lock)
+		read_unlock_irqrestore(&batched_entropy_reset_lock, flags);
 	put_cpu_var(batched_entropy_u32);
 	return ret;
 }
 EXPORT_SYMBOL(get_random_u32);
 
+/* It's important to invalidate all potential batched entropy that might
+ * be stored before the crng is initialized, which we can do lazily by
+ * simply resetting the counter to zero so that it's re-extracted on the
+ * next usage.
+ */
+static void invalidate_batched_entropy(void)
+{
+	int cpu;
+	unsigned long flags;
+
+	write_lock_irqsave(&batched_entropy_reset_lock, flags);
+	for_each_possible_cpu (cpu) {
+		per_cpu_ptr(&batched_entropy_u32, cpu)->position = 0;
+		per_cpu_ptr(&batched_entropy_u64, cpu)->position = 0;
+	}
+	write_unlock_irqrestore(&batched_entropy_reset_lock, flags);
+}
+
 /**
  * randomize_page - Generate a random, page aligned address
  * @start:	The smallest acceptable address the caller will take.
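
A note for anyone following along outside the kernel tree: the locking
scheme above can be sketched in plain userspace C. This is a minimal
illustration only, assuming a pthread rwlock in place of the kernel
rwlock, a fixed array in place of per-CPU storage, and a stub refill
function; none of these names come from the patch itself.

#include <pthread.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define BATCH_WORDS 8
#define NR_BATCHES 4	/* stand-in for the number of possible CPUs */

struct batch {
	uint64_t entropy[BATCH_WORDS];
	unsigned int position;
};

static struct batch batches[NR_BATCHES];	/* stand-in for per-CPU data */
static pthread_rwlock_t reset_lock = PTHREAD_RWLOCK_INITIALIZER;
static int crng_ready;	/* stand-in for crng_init == 2 */

/* Stub: the real code refills the batch from the crng. */
static void refill(uint8_t *buf, size_t len)
{
	memset(buf, 0, len);
}

uint64_t get_random_u64_sketch(int cpu)
{
	struct batch *b = &batches[cpu];
	int use_lock = !crng_ready;	/* readers pay only before init */
	uint64_t ret;

	if (use_lock)
		pthread_rwlock_rdlock(&reset_lock);
	/* position == 0 (mod batch size) means the batch is exhausted
	 * or was invalidated, so refill before handing out a word. */
	if (b->position % BATCH_WORDS == 0) {
		refill((uint8_t *)b->entropy, sizeof(b->entropy));
		b->position = 0;
	}
	ret = b->entropy[b->position++];
	if (use_lock)
		pthread_rwlock_unlock(&reset_lock);
	return ret;
}

/* Write side: invalidate every batch lazily by resetting the position
 * counters, excluding all readers while doing so. */
void invalidate_batches_sketch(void)
{
	int i;

	pthread_rwlock_wrlock(&reset_lock);
	for (i = 0; i < NR_BATCHES; i++)
		batches[i].position = 0;
	pthread_rwlock_unlock(&reset_lock);
}

The point of the design is visible here: once crng_ready is set, the
read lock vanishes from the fast path entirely, so the rwlock costs
nothing after initialization.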