Message ID | 20220419111650.1582274-8-Jason@zx2c4.com |
---|---|
State | New, archived |
Series | archs/random: fallback to best raw ktime when no cycle counter |
On Tue, Apr 19, 2022 at 01:16:46PM +0200, Jason A. Donenfeld wrote:
> In the event that random_get_entropy() can't access a cycle counter or
> similar, falling back to returning 0 is really not the best we can do.
> Instead, at least calling random_get_entropy_fallback() would be
> preferable, because that always needs to return _something_, even
> falling back to jiffies eventually. It's not as though
> random_get_entropy_fallback() is super high precision or guaranteed to
> be entropic, but basically anything that's not zero all the time is
> better than returning zero all the time.
>
> If CONFIG_X86_TSC=n, then it's possible that we're running on a 486 with
> no RDTSC, so we only need the fallback code for that case.
>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Arnd Bergmann <arnd@arndb.de>
> Cc: x86@kernel.org
> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
> ---
>  arch/x86/include/asm/timex.h | 10 ++++++++++
>  1 file changed, 10 insertions(+)
>
> diff --git a/arch/x86/include/asm/timex.h b/arch/x86/include/asm/timex.h
> index a4a8b1b16c0c..fac180359693 100644
> --- a/arch/x86/include/asm/timex.h
> +++ b/arch/x86/include/asm/timex.h
> @@ -5,6 +5,16 @@
>  #include <asm/processor.h>
>  #include <asm/tsc.h>
>
> +static inline unsigned long random_get_entropy(void)
> +{
> +#ifndef CONFIG_X86_TSC
> +	if (!boot_cpu_has(X86_FEATURE_TSC))

cpu_feature_enabled() pls.
Hi Borislav,

On Tue, Apr 19, 2022 at 8:16 PM Borislav Petkov <bp@alien8.de> wrote:
> > +static inline unsigned long random_get_entropy(void)
> > +{
> > +#ifndef CONFIG_X86_TSC
> > +	if (!boot_cpu_has(X86_FEATURE_TSC))
>
> cpu_feature_enabled() pls.

This function began as a carbon copy of get_cycles(), which reads:

static inline cycles_t get_cycles(void)
{
#ifndef CONFIG_X86_TSC
	if (!boot_cpu_has(X86_FEATURE_TSC))
		return 0;
#endif
	return rdtsc();
}

As you see, random_get_entropy() is the same function, but with that
zero replaced with the fallback. (Using the fallback in get_cycles()
wouldn't be appropriate.) So, your options are:

a) We keep this patch as-is, using boot_cpu_has(); or
b) I make an unrelated change inside of this same patch to change
   get_cycles() to use cpu_feature_enabled() (in addition to the new
   random_get_entropy()).

I think I prefer doing (a), and leaving (b) for another time when you
or another x86 maintainer can do so. But I'll do whichever you say.
Which would you like?

Jason
On Tue, Apr 19, 2022 at 08:38:41PM +0200, Jason A. Donenfeld wrote:
> I think I prefer doing (a), and leaving (b) for another time when you
> or another x86 maintainer can do so. But I'll do whichever you say.
> Which would you like?

We are switching all feature checks to cpu_feature_enabled() so you
might as well do the new preferred way of checking when adding a new
function and so that we have one less place to touch. Which is
appreciated. :)

Thx!
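[For context, cpu_feature_enabled() is preferred because it can resolve to a
constant at build time for features that are configured out, whereas
boot_cpu_has() is always a runtime test. A rough, simplified model of the
idea, using a hypothetical name; the real macro lives in
arch/x86/include/asm/cpufeature.h and differs in detail:

/*
 * Approximation only: if the feature bit is a compile-time constant and
 * sits in the disabled-features mask, the whole expression folds to 0 and
 * the compiler can drop the dependent code; otherwise fall back to the
 * usual CPU-capability test.
 */
#define feature_enabled_approx(bit)					\
	(__builtin_constant_p(bit) && DISABLED_MASK_BIT_SET(bit) ?	\
	 0 : boot_cpu_has(bit))
]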
Hi Borislav,

On Tue, Apr 19, 2022 at 8:59 PM Borislav Petkov <bp@alien8.de> wrote:
>
> On Tue, Apr 19, 2022 at 08:38:41PM +0200, Jason A. Donenfeld wrote:
> > I think I prefer doing (a), and leaving (b) for another time when you
> > or another x86 maintainer can do so. But I'll do whichever you say.
> > Which would you like?
>
> We are switching all feature checks to cpu_feature_enabled() so you
> might as well do the new preferred way of checking when adding a
> new function and so that we have one less place to touch. Which is
> appreciated. :)

Okay, no problem. I'll make the change there and get_cycles() for v+1
of the patchset.

Jason
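[The agreed-upon v+1 would then presumably read roughly as follows; this is a
sketch based on the discussion above, not the posted code, and it assumes the
only change is swapping cpu_feature_enabled() into both helpers:

static inline cycles_t get_cycles(void)
{
#ifndef CONFIG_X86_TSC
	if (!cpu_feature_enabled(X86_FEATURE_TSC))
		return 0;
#endif
	return rdtsc();
}

static inline unsigned long random_get_entropy(void)
{
#ifndef CONFIG_X86_TSC
	if (!cpu_feature_enabled(X86_FEATURE_TSC))
		return random_get_entropy_fallback();
#endif
	return rdtsc();
}
#define random_get_entropy random_get_entropy
]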
diff --git a/arch/x86/include/asm/timex.h b/arch/x86/include/asm/timex.h
index a4a8b1b16c0c..fac180359693 100644
--- a/arch/x86/include/asm/timex.h
+++ b/arch/x86/include/asm/timex.h
@@ -5,6 +5,16 @@
 #include <asm/processor.h>
 #include <asm/tsc.h>
 
+static inline unsigned long random_get_entropy(void)
+{
+#ifndef CONFIG_X86_TSC
+	if (!boot_cpu_has(X86_FEATURE_TSC))
+		return random_get_entropy_fallback();
+#endif
+	return rdtsc();
+}
+#define random_get_entropy random_get_entropy
+
 /* Assume we use the PIT time source for the clock tick */
 #define CLOCK_TICK_RATE		PIT_TICK_RATE
 
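[Note that the hunk above calls random_get_entropy_fallback() without
declaring it; the declaration is expected to come from an earlier patch in
this series, presumably along these lines, in generic timekeeping headers:

/* Assumed declaration from earlier in the series (e.g. include/linux/timex.h). */
extern unsigned long random_get_entropy_fallback(void);
]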
In the event that random_get_entropy() can't access a cycle counter or
similar, falling back to returning 0 is really not the best we can do.
Instead, at least calling random_get_entropy_fallback() would be
preferable, because that always needs to return _something_, even
falling back to jiffies eventually. It's not as though
random_get_entropy_fallback() is super high precision or guaranteed to
be entropic, but basically anything that's not zero all the time is
better than returning zero all the time.

If CONFIG_X86_TSC=n, then it's possible that we're running on a 486 with
no RDTSC, so we only need the fallback code for that case.

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: x86@kernel.org
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
---
 arch/x86/include/asm/timex.h | 10 ++++++++++
 1 file changed, 10 insertions(+)
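[To make the practical effect concrete, here is an illustrative before/after
for a CONFIG_X86_TSC=n kernel on a CPU without RDTSC; the function names are
hypothetical and only restate what the patch changes:

/*
 * Before: x86 had no random_get_entropy() override, so the generic one
 * reduced to get_cycles(), which returns 0 on a TSC-less CPU, i.e. the
 * same value on every call.
 */
static inline unsigned long entropy_sample_before(void)
{
	return 0;
}

/*
 * After: the new override uses random_get_entropy_fallback(), which
 * changes over time (at worst at jiffies granularity), so event
 * timestamps fed to the RNG are no longer a constant.
 */
static inline unsigned long entropy_sample_after(void)
{
	return random_get_entropy_fallback();
}
]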