From patchwork Fri Apr 8 18:21:36 2022
X-Patchwork-Submitter: "Jason A. Donenfeld"
X-Patchwork-Id: 12807040
From: "Jason A. Donenfeld"
To: linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, arnd@arndb.de
Cc: "Jason A. Donenfeld", Theodore Ts'o, Dominik Brodowski, Russell King,
    Catalin Marinas, Will Deacon, Geert Uytterhoeven, Thomas Bogendoerfer,
    Paul Walmsley, Palmer Dabbelt, Albert Ou, "David S. Miller",
    Richard Weinberger, Anton Ivanov, Johannes Berg, Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, Dave Hansen, "H. Peter Anvin",
    Chris Zankel, Max Filippov, John Stultz, Stephen Boyd,
    linux-arm-kernel@lists.infradead.org, linux-m68k@lists.linux-m68k.org,
    linux-mips@vger.kernel.org, linux-riscv@lists.infradead.org,
    sparclinux@vger.kernel.org, linux-um@lists.infradead.org,
    x86@kernel.org, linux-xtensa@linux-xtensa.org
Subject: [PATCH RFC v1 01/10] random: use sched_clock() for
 random_get_entropy() if no get_cycles()
Date: Fri, 8 Apr 2022 20:21:36 +0200
Message-Id: <20220408182145.142506-2-Jason@zx2c4.com>
In-Reply-To: <20220408182145.142506-1-Jason@zx2c4.com>
References: <20220408182145.142506-1-Jason@zx2c4.com>
X-Mailing-List: linux-mips@vger.kernel.org

In the event that a given arch does not define get_cycles(), falling back
to the get_cycles() default implementation that returns 0 is really not
the best we can do.
Instead, at least calling sched_clock() would be preferable, because that
always needs to return _something_, even falling back to jiffies
eventually. It's not as though sched_clock() is super high precision or
guaranteed to be entropic, but basically anything that's not zero all the
time is better than returning zero all the time.

Cc: Thomas Gleixner
Cc: Arnd Bergmann
Cc: Theodore Ts'o
Signed-off-by: Jason A. Donenfeld
---
 include/linux/timex.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/include/linux/timex.h b/include/linux/timex.h
index 5745c90c8800..bd78f784762e 100644
--- a/include/linux/timex.h
+++ b/include/linux/timex.h
@@ -61,6 +61,7 @@
 #include
 #include
 #include
+#include

 #include

@@ -74,8 +75,13 @@
  *
  * By default we use get_cycles() for this purpose, but individual
  * architectures may override this in their asm/timex.h header file.
+ * If a given arch does not have get_cycles(), then we fallback to
+ * using sched_clock().
  */
+#ifdef get_cycles
 #define random_get_entropy()	((unsigned long)get_cycles())
+#else
+#define random_get_entropy()	((unsigned long)sched_clock())
 #endif

 /*
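
[Editor's note: for readers who want to see the shape of the #ifdef fallback
outside the kernel tree, below is a minimal userspace sketch of the same
pattern. It is illustrative only: fake_sched_clock() is a hypothetical
stand-in for the kernel's sched_clock(), and an arch without a cycle counter
is emulated simply by leaving get_cycles undefined.]

/* fallback_demo.c - illustrative userspace analogue; not kernel code. */
#include <stdio.h>
#include <time.h>

/*
 * Hypothetical stand-in for the kernel's sched_clock(): nanoseconds of
 * CLOCK_MONOTONIC time, which keeps ticking even without a cycle counter.
 */
static unsigned long long fake_sched_clock(void)
{
	struct timespec ts;
	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (unsigned long long)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

/*
 * Same shape as the patch: if a get_cycles() macro is available, use it;
 * otherwise fall back to the always-ticking clock rather than a constant 0.
 * get_cycles is deliberately left undefined here to emulate an arch that
 * has no cycle counter.
 */
#ifdef get_cycles
#define random_get_entropy()	((unsigned long)get_cycles())
#else
#define random_get_entropy()	((unsigned long)fake_sched_clock())
#endif

int main(void)
{
	/* Two back-to-back reads should differ, unlike the old 0-returning default. */
	printf("%lu\n%lu\n", random_get_entropy(), random_get_entropy());
	return 0;
}

[Compiled with e.g. "cc fallback_demo.c", the two printed values should
differ from run to run and from call to call, which is exactly the property
the old all-zero fallback lacked.]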