diff mbox series

[4.19,72/98] x86/CPU/AMD: Clear RDRAND CPUID bit on AMD family 15h/16h

Message ID 20190827072722.020603090@linuxfoundation.org (mailing list archive)
State Not Applicable, archived

Commit Message

Greg Kroah-Hartman Aug. 27, 2019, 7:50 a.m. UTC
From: Tom Lendacky <thomas.lendacky@amd.com>

commit c49a0a80137c7ca7d6ced4c812c9e07a949f6f24 upstream.

There have been reports of RDRAND issues after resuming from suspend on
some AMD family 15h and family 16h systems. This issue stems from a BIOS
not performing the proper steps during resume to ensure RDRAND continues
to function properly.

RDRAND support is indicated by CPUID Fn00000001_ECX[30]. This bit can be
reset by clearing MSR C001_1004[62]. Any software that checks for RDRAND
support using CPUID, including the kernel, will believe that RDRAND is
not supported.

Update the CPU initialization to clear the RDRAND CPUID bit for any family
15h and 16h processor that supports RDRAND. If it is known that the family
15h or family 16h system does not have an RDRAND resume issue or that the
system will not be placed in suspend, the "rdrand=force" kernel parameter
can be used to stop the clearing of the RDRAND CPUID bit.

Additionally, update the suspend and resume path to save and restore the
MSR C001_1004 value to ensure that the RDRAND CPUID setting remains in
place after resuming from suspend.

Note that clearing the RDRAND CPUID bit does not prevent a processor
that normally supports the RDRAND instruction from executing it. So any
code that determined the support based on family and model won't #UD.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Chen Yu <yu.c.chen@intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: "linux-doc@vger.kernel.org" <linux-doc@vger.kernel.org>
Cc: "linux-pm@vger.kernel.org" <linux-pm@vger.kernel.org>
Cc: Nathan Chancellor <natechancellor@gmail.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: <stable@vger.kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "x86@kernel.org" <x86@kernel.org>
Link: https://lkml.kernel.org/r/7543af91666f491547bd86cebb1e17c66824ab9f.1566229943.git.thomas.lendacky@amd.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 Documentation/admin-guide/kernel-parameters.txt |    7 +
 arch/x86/include/asm/msr-index.h                |    1 
 arch/x86/kernel/cpu/amd.c                       |   66 ++++++++++++++++++
 arch/x86/power/cpu.c                            |   86 ++++++++++++++++++++----
 4 files changed, 147 insertions(+), 13 deletions(-)

Comments

Pavel Machek Aug. 27, 2019, 11:36 a.m. UTC | #1
On Tue 2019-08-27 09:50:51, Greg Kroah-Hartman wrote:
> From: Tom Lendacky <thomas.lendacky@amd.com>
> 
> commit c49a0a80137c7ca7d6ced4c812c9e07a949f6f24 upstream.
> 
> There have been reports of RDRAND issues after resuming from suspend on
> some AMD family 15h and family 16h systems. This issue stems from a BIOS
> not performing the proper steps during resume to ensure RDRAND continues
> to function properly.

Yes. And instead of reinitializing the RDRAND on resume, this patch
breaks support even for people with properly functioning BIOSes... 

Also note that this is nowhere near the minimal fix, and it is over the
100-line limit.

Best regards,
								Pavel


> 
> RDRAND support is indicated by CPUID Fn00000001_ECX[30]. This bit can be
> reset by clearing MSR C001_1004[62]. Any software that checks for RDRAND
> support using CPUID, including the kernel, will believe that RDRAND is
> not supported.
> 
> Update the CPU initialization to clear the RDRAND CPUID bit for any family
> 15h and 16h processor that supports RDRAND. If it is known that the family
> 15h or family 16h system does not have an RDRAND resume issue or that the
> system will not be placed in suspend, the "rdrand=force" kernel parameter
> can be used to stop the clearing of the RDRAND CPUID bit.
> 
> Additionally, update the suspend and resume path to save and restore the
> MSR C001_1004 value to ensure that the RDRAND CPUID setting remains in
> place after resuming from suspend.
> 
> Note, that clearing the RDRAND CPUID bit does not prevent a processor
> that normally supports the RDRAND instruction from executing it. So any
> code that determined the support based on family and model won't #UD.
> 
> Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
> Signed-off-by: Borislav Petkov <bp@suse.de>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Chen Yu <yu.c.chen@intel.com>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Jonathan Corbet <corbet@lwn.net>
> Cc: Josh Poimboeuf <jpoimboe@redhat.com>
> Cc: Juergen Gross <jgross@suse.com>
> Cc: Kees Cook <keescook@chromium.org>
> Cc: "linux-doc@vger.kernel.org" <linux-doc@vger.kernel.org>
> Cc: "linux-pm@vger.kernel.org" <linux-pm@vger.kernel.org>
> Cc: Nathan Chancellor <natechancellor@gmail.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Pavel Machek <pavel@ucw.cz>
> Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
> Cc: <stable@vger.kernel.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: "x86@kernel.org" <x86@kernel.org>
> Link: https://lkml.kernel.org/r/7543af91666f491547bd86cebb1e17c66824ab9f.1566229943.git.thomas.lendacky@amd.com
> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> 
> ---
>  Documentation/admin-guide/kernel-parameters.txt |    7 +
>  arch/x86/include/asm/msr-index.h                |    1 
>  arch/x86/kernel/cpu/amd.c                       |   66 ++++++++++++++++++
>  arch/x86/power/cpu.c                            |   86 ++++++++++++++++++++----
>  4 files changed, 147 insertions(+), 13 deletions(-)
> 
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -3948,6 +3948,13 @@
>  			Run specified binary instead of /init from the ramdisk,
>  			used for early userspace startup. See initrd.
>  
> +	rdrand=		[X86]
> +			force - Override the decision by the kernel to hide the
> +				advertisement of RDRAND support (this affects
> +				certain AMD processors because of buggy BIOS
> +				support, specifically around the suspend/resume
> +				path).
> +
>  	rdt=		[HW,X86,RDT]
>  			Turn on/off individual RDT features. List is:
>  			cmt, mbmtotal, mbmlocal, l3cat, l3cdp, l2cat, l2cdp,
> --- a/arch/x86/include/asm/msr-index.h
> +++ b/arch/x86/include/asm/msr-index.h
> @@ -334,6 +334,7 @@
>  #define MSR_AMD64_PATCH_LEVEL		0x0000008b
>  #define MSR_AMD64_TSC_RATIO		0xc0000104
>  #define MSR_AMD64_NB_CFG		0xc001001f
> +#define MSR_AMD64_CPUID_FN_1		0xc0011004
>  #define MSR_AMD64_PATCH_LOADER		0xc0010020
>  #define MSR_AMD64_OSVW_ID_LENGTH	0xc0010140
>  #define MSR_AMD64_OSVW_STATUS		0xc0010141
> --- a/arch/x86/kernel/cpu/amd.c
> +++ b/arch/x86/kernel/cpu/amd.c
> @@ -799,6 +799,64 @@ static void init_amd_ln(struct cpuinfo_x
>  	msr_set_bit(MSR_AMD64_DE_CFG, 31);
>  }
>  
> +static bool rdrand_force;
> +
> +static int __init rdrand_cmdline(char *str)
> +{
> +	if (!str)
> +		return -EINVAL;
> +
> +	if (!strcmp(str, "force"))
> +		rdrand_force = true;
> +	else
> +		return -EINVAL;
> +
> +	return 0;
> +}
> +early_param("rdrand", rdrand_cmdline);
> +
> +static void clear_rdrand_cpuid_bit(struct cpuinfo_x86 *c)
> +{
> +	/*
> +	 * Saving of the MSR used to hide the RDRAND support during
> +	 * suspend/resume is done by arch/x86/power/cpu.c, which is
> +	 * dependent on CONFIG_PM_SLEEP.
> +	 */
> +	if (!IS_ENABLED(CONFIG_PM_SLEEP))
> +		return;
> +
> +	/*
> +	 * The nordrand option can clear X86_FEATURE_RDRAND, so check for
> +	 * RDRAND support using the CPUID function directly.
> +	 */
> +	if (!(cpuid_ecx(1) & BIT(30)) || rdrand_force)
> +		return;
> +
> +	msr_clear_bit(MSR_AMD64_CPUID_FN_1, 62);
> +
> +	/*
> +	 * Verify that the CPUID change has occurred in case the kernel is
> +	 * running virtualized and the hypervisor doesn't support the MSR.
> +	 */
> +	if (cpuid_ecx(1) & BIT(30)) {
> +		pr_info_once("BIOS may not properly restore RDRAND after suspend, but hypervisor does not support hiding RDRAND via CPUID.\n");
> +		return;
> +	}
> +
> +	clear_cpu_cap(c, X86_FEATURE_RDRAND);
> +	pr_info_once("BIOS may not properly restore RDRAND after suspend, hiding RDRAND via CPUID. Use rdrand=force to reenable.\n");
> +}
> +
> +static void init_amd_jg(struct cpuinfo_x86 *c)
> +{
> +	/*
> +	 * Some BIOS implementations do not restore proper RDRAND support
> +	 * across suspend and resume. Check on whether to hide the RDRAND
> +	 * instruction support via CPUID.
> +	 */
> +	clear_rdrand_cpuid_bit(c);
> +}
> +
>  static void init_amd_bd(struct cpuinfo_x86 *c)
>  {
>  	u64 value;
> @@ -813,6 +871,13 @@ static void init_amd_bd(struct cpuinfo_x
>  			wrmsrl_safe(MSR_F15H_IC_CFG, value);
>  		}
>  	}
> +
> +	/*
> +	 * Some BIOS implementations do not restore proper RDRAND support
> +	 * across suspend and resume. Check on whether to hide the RDRAND
> +	 * instruction support via CPUID.
> +	 */
> +	clear_rdrand_cpuid_bit(c);
>  }
>  
>  static void init_amd_zn(struct cpuinfo_x86 *c)
> @@ -855,6 +920,7 @@ static void init_amd(struct cpuinfo_x86
>  	case 0x10: init_amd_gh(c); break;
>  	case 0x12: init_amd_ln(c); break;
>  	case 0x15: init_amd_bd(c); break;
> +	case 0x16: init_amd_jg(c); break;
>  	case 0x17: init_amd_zn(c); break;
>  	}
>  
> --- a/arch/x86/power/cpu.c
> +++ b/arch/x86/power/cpu.c
> @@ -13,6 +13,7 @@
>  #include <linux/smp.h>
>  #include <linux/perf_event.h>
>  #include <linux/tboot.h>
> +#include <linux/dmi.h>
>  
>  #include <asm/pgtable.h>
>  #include <asm/proto.h>
> @@ -24,7 +25,7 @@
>  #include <asm/debugreg.h>
>  #include <asm/cpu.h>
>  #include <asm/mmu_context.h>
> -#include <linux/dmi.h>
> +#include <asm/cpu_device_id.h>
>  
>  #ifdef CONFIG_X86_32
>  __visible unsigned long saved_context_ebx;
> @@ -398,15 +399,14 @@ static int __init bsp_pm_check_init(void
>  
>  core_initcall(bsp_pm_check_init);
>  
> -static int msr_init_context(const u32 *msr_id, const int total_num)
> +static int msr_build_context(const u32 *msr_id, const int num)
>  {
> -	int i = 0;
> +	struct saved_msrs *saved_msrs = &saved_context.saved_msrs;
>  	struct saved_msr *msr_array;
> +	int total_num;
> +	int i, j;
>  
> -	if (saved_context.saved_msrs.array || saved_context.saved_msrs.num > 0) {
> -		pr_err("x86/pm: MSR quirk already applied, please check your DMI match table.\n");
> -		return -EINVAL;
> -	}
> +	total_num = saved_msrs->num + num;
>  
>  	msr_array = kmalloc_array(total_num, sizeof(struct saved_msr), GFP_KERNEL);
>  	if (!msr_array) {
> @@ -414,19 +414,30 @@ static int msr_init_context(const u32 *m
>  		return -ENOMEM;
>  	}
>  
> -	for (i = 0; i < total_num; i++) {
> -		msr_array[i].info.msr_no	= msr_id[i];
> +	if (saved_msrs->array) {
> +		/*
> +		 * Multiple callbacks can invoke this function, so copy any
> +		 * MSR save requests from previous invocations.
> +		 */
> +		memcpy(msr_array, saved_msrs->array,
> +		       sizeof(struct saved_msr) * saved_msrs->num);
> +
> +		kfree(saved_msrs->array);
> +	}
> +
> +	for (i = saved_msrs->num, j = 0; i < total_num; i++, j++) {
> +		msr_array[i].info.msr_no	= msr_id[j];
>  		msr_array[i].valid		= false;
>  		msr_array[i].info.reg.q		= 0;
>  	}
> -	saved_context.saved_msrs.num	= total_num;
> -	saved_context.saved_msrs.array	= msr_array;
> +	saved_msrs->num   = total_num;
> +	saved_msrs->array = msr_array;
>  
>  	return 0;
>  }
>  
>  /*
> - * The following section is a quirk framework for problematic BIOSen:
> + * The following sections are a quirk framework for problematic BIOSen:
>   * Sometimes MSRs are modified by the BIOSen after suspended to
>   * RAM, this might cause unexpected behavior after wakeup.
>   * Thus we save/restore these specified MSRs across suspend/resume
> @@ -441,7 +452,7 @@ static int msr_initialize_bdw(const stru
>  	u32 bdw_msr_id[] = { MSR_IA32_THERM_CONTROL };
>  
>  	pr_info("x86/pm: %s detected, MSR saving is needed during suspending.\n", d->ident);
> -	return msr_init_context(bdw_msr_id, ARRAY_SIZE(bdw_msr_id));
> +	return msr_build_context(bdw_msr_id, ARRAY_SIZE(bdw_msr_id));
>  }
>  
>  static const struct dmi_system_id msr_save_dmi_table[] = {
> @@ -456,9 +467,58 @@ static const struct dmi_system_id msr_sa
>  	{}
>  };
>  
> +static int msr_save_cpuid_features(const struct x86_cpu_id *c)
> +{
> +	u32 cpuid_msr_id[] = {
> +		MSR_AMD64_CPUID_FN_1,
> +	};
> +
> +	pr_info("x86/pm: family %#hx cpu detected, MSR saving is needed during suspending.\n",
> +		c->family);
> +
> +	return msr_build_context(cpuid_msr_id, ARRAY_SIZE(cpuid_msr_id));
> +}
> +
> +static const struct x86_cpu_id msr_save_cpu_table[] = {
> +	{
> +		.vendor = X86_VENDOR_AMD,
> +		.family = 0x15,
> +		.model = X86_MODEL_ANY,
> +		.feature = X86_FEATURE_ANY,
> +		.driver_data = (kernel_ulong_t)msr_save_cpuid_features,
> +	},
> +	{
> +		.vendor = X86_VENDOR_AMD,
> +		.family = 0x16,
> +		.model = X86_MODEL_ANY,
> +		.feature = X86_FEATURE_ANY,
> +		.driver_data = (kernel_ulong_t)msr_save_cpuid_features,
> +	},
> +	{}
> +};
> +
> +typedef int (*pm_cpu_match_t)(const struct x86_cpu_id *);
> +static int pm_cpu_check(const struct x86_cpu_id *c)
> +{
> +	const struct x86_cpu_id *m;
> +	int ret = 0;
> +
> +	m = x86_match_cpu(msr_save_cpu_table);
> +	if (m) {
> +		pm_cpu_match_t fn;
> +
> +		fn = (pm_cpu_match_t)m->driver_data;
> +		ret = fn(m);
> +	}
> +
> +	return ret;
> +}
> +
>  static int pm_check_save_msr(void)
>  {
>  	dmi_check_system(msr_save_dmi_table);
> +	pm_cpu_check(msr_save_cpu_table);
> +
>  	return 0;
>  }
>  
>
Thomas Gleixner Aug. 27, 2019, 1:30 p.m. UTC | #2
On Tue, 27 Aug 2019, Pavel Machek wrote:

> On Tue 2019-08-27 09:50:51, Greg Kroah-Hartman wrote:
> > From: Tom Lendacky <thomas.lendacky@amd.com>
> > 
> > commit c49a0a80137c7ca7d6ced4c812c9e07a949f6f24 upstream.
> > 
> > There have been reports of RDRAND issues after resuming from suspend on
> > some AMD family 15h and family 16h systems. This issue stems from a BIOS
> > not performing the proper steps during resume to ensure RDRAND continues
> > to function properly.
> 
> Yes. And instead of reinitializing the RDRAND on resume, this patch
> breaks support even for people with properly functioning BIOSes...

There is no way to reinitialize RDRAND from the kernel, otherwise we would
have done exactly that. If you know how to do that, please tell.

Also disabling it for every BIOS is the only way which can be done because
there is no way to know whether the BIOS is fixed or not at cold boot
time. And it has to be known there because applications cache the
availability and continue using it after resume, and because the valid bit is
set they won't notice.

There is a knob to turn it back on for those who are sure that it works,
but the default has to be OFF, simply because we cannot endanger everyone
out there with a broken BIOS just to please you.

Thanks,

	tglx
Pavel Machek Aug. 28, 2019, 10:31 a.m. UTC | #3
On Tue 2019-08-27 15:30:30, Thomas Gleixner wrote:
> On Tue, 27 Aug 2019, Pavel Machek wrote:
> 
> > On Tue 2019-08-27 09:50:51, Greg Kroah-Hartman wrote:
> > > From: Tom Lendacky <thomas.lendacky@amd.com>
> > > 
> > > commit c49a0a80137c7ca7d6ced4c812c9e07a949f6f24 upstream.
> > > 
> > > There have been reports of RDRAND issues after resuming from suspend on
> > > some AMD family 15h and family 16h systems. This issue stems from a BIOS
> > > not performing the proper steps during resume to ensure RDRAND continues
> > > to function properly.
> > 
> > Yes. And instead of reinitializing the RDRAND on resume, this patch
> > breaks support even for people with properly functioning BIOSes...
> 
> There is no way to reinitialize RDRAND from the kernel otherwise we would
> have exactly done that. If you know how to do that please tell.

Would they? AMD is not exactly doing a good job with communication
here. If the BIOS can do it, the kernel can do it, too... or do you have
information saying otherwise?

> Also disabling it for every BIOS is the only way which can be done because
> there is no way to know whether the BIOS is fixed or not at cold boot
> time. And it has to be known there because applications cache the

I'm pretty sure a DMI-based whitelist would help here. It should be
reasonable to fill it with the common machines at least.

Plus, where is the CVE, and does AMD do anything to make BIOS vendors
fix them?

Best regards,
								Pavel
Thomas Gleixner Aug. 28, 2019, 10:47 a.m. UTC | #4
Pavel,

On Wed, 28 Aug 2019, Pavel Machek wrote:
> On Tue 2019-08-27 15:30:30, Thomas Gleixner wrote:
> > There is no way to reinitialize RDRAND from the kernel otherwise we would
> > have exactly done that. If you know how to do that please tell.
> 
> Would they? AMD is not exactly doing good job with communication

Yes they would. Stop making up weird conspiracy theories.

> here. If BIOS can do it, kernel can do it, too...

May I recommend to read up on SMM and BIOS being able to lock down access
to certain facilities?

> or do you have information saying otherwise?

Yes. It was clearly stated by Tom that it can only be done in the BIOS.

> > Also disabling it for every BIOS is the only way which can be done because
> > there is no way to know whether the BIOS is fixed or not at cold boot
> > time. And it has to be known there because applications cache the
> 
> I'm pretty sure DMI-based whitelist would help here. It should be
> reasonably to fill it with the common machines at least.

Send patches to that effect.
 
> Plus, where is the CVE, and does AMD do anything to make BIOS vendors
> fix them?

May I redirect you to: https://www.amd.com/en/corporate/contact

Thanks,

	tglx
Pavel Machek Aug. 28, 2019, 11:49 a.m. UTC | #5
Hi!

> > > There is no way to reinitialize RDRAND from the kernel otherwise we would
> > > have exactly done that. If you know how to do that please tell.
> > 
> > Would they? AMD is not exactly doing good job with communication
> 
> Yes they would. Stop making up weird conspiracy theories.

> > here. If BIOS can do it, kernel can do it, too...
> 
> May I recommend to read up on SMM and BIOS being able to lock down access
> to certain facilities?
> 
> > or do you have information saying otherwise?
> 
> Yes. It was clearly stated by Tom that it can only be done in the
> BIOS.

Do you have a link for that? Because I don't think I seen that one.

> > > Also disabling it for every BIOS is the only way which can be done because
> > > there is no way to know whether the BIOS is fixed or not at cold boot
> > > time. And it has to be known there because applications cache the
> > 
> > I'm pretty sure DMI-based whitelist would help here. It should be
> > reasonably to fill it with the common machines at least.
> 
> Send patches to that effect.

Why should it be my job? AMD screwed this up, they should fix it
properly. And you should insist on a proper fix.

> > Plus, where is the CVE, and does AMD do anything to make BIOS vendors
> > fix them?
> 
> May I redirect you to: https://www.amd.com/en/corporate/contact

That will certainly make communication easier, right.

								Pavel
Borislav Petkov Aug. 28, 2019, noon UTC | #6
On Wed, Aug 28, 2019 at 01:49:47PM +0200, Pavel Machek wrote:
> AMD screwed this up,

Except that it wasn't AMD who screwed up but the BIOS on *some* laptops.
Pavel Machek Aug. 28, 2019, 12:09 p.m. UTC | #7
On Wed 2019-08-28 14:00:24, Borislav Petkov wrote:
> On Wed, Aug 28, 2019 at 01:49:47PM +0200, Pavel Machek wrote:
> > AMD screwed this up,
> 
> Except that it wasn't AMD who screwed up but BIOS on *some* laptops.

Yes, and now AMD has a patch to break it on *all* machines.

Hmm. A "get random data" instruction that fails to return random data,
depending on the BIOS... (and even indicates success, IIRC)... Sounds like
a CPU vendor screwup to me.

And they can also fix it up. If re-initing the random generator is
impossible from the kernel, surely the CPU microcode can either be updated
to do the init automatically, or at least allow the kernel to do the init.

Best regards,
								Pavel
Borislav Petkov Aug. 28, 2019, 12:16 p.m. UTC | #8
On Wed, Aug 28, 2019 at 02:09:36PM +0200, Pavel Machek wrote:
> Yes, and now AMD has patch to break it on *all* machines.

It doesn't break all machines - you need to look at that patch again.
Pavel Machek Aug. 28, 2019, 12:29 p.m. UTC | #9
On Wed 2019-08-28 14:16:28, Borislav Petkov wrote:
> On Wed, Aug 28, 2019 at 02:09:36PM +0200, Pavel Machek wrote:
> > Yes, and now AMD has patch to break it on *all* machines.
> 
> It doesn't break all machines - you need to look at that patch again.

This is not a way to have an intelligent conversation.

The patch clearly breaks more machines than it has to. It is known to
cause a regression. You snipped the rest of the email with better ways
to solve this.

Best regards,
								Pavel
Borislav Petkov Aug. 28, 2019, 12:46 p.m. UTC | #10
On Wed, Aug 28, 2019 at 02:29:13PM +0200, Pavel Machek wrote:
> This is not a way to have an inteligent conversation.

No, this *is* the way to keep the conversation sane, without veering
off into some absurd claims.

So, to cut to the chase: you can simply add "rdrand=force" to your
cmdline parameters and get back to using RDRAND.

And yet if you still feel this fix does not meet your expectations,
you were told already to either produce patches or who to contact. I'm
afraid complaining on this thread won't get you anywhere but that's your
call.
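For anyone wanting to apply the override persistently, the kernel parameter from the patch's documentation can be added via GRUB, for example (file locations and the config-regeneration command vary by distribution):

```shell
# /etc/default/grub -- append rdrand=force to the kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash rdrand=force"

# then regenerate the GRUB config, e.g. one of:
#   sudo update-grub                                # Debian/Ubuntu
#   sudo grub2-mkconfig -o /boot/grub2/grub.cfg     # Fedora/openSUSE
# and verify after reboot:
#   grep -o 'rdrand=force' /proc/cmdline
```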
Pavel Machek Aug. 28, 2019, 1:37 p.m. UTC | #11
On Wed 2019-08-28 14:46:21, Borislav Petkov wrote:
> On Wed, Aug 28, 2019 at 02:29:13PM +0200, Pavel Machek wrote:
> > This is not a way to have an inteligent conversation.
> 
> No, this *is* the way to keep the conversation sane, without veering
> off into some absurd claims.
> 
> So, to cut to the chase: you can simply add "rdrand=force" to your
> cmdline parameters and get back to using RDRAND.
> 
> And yet if you still feel this fix does not meet your expectations,
> you were told already to either produce patches or who to contact. I'm
> afraid complaining on this thread won't get you anywhere but that's your
> call.

No, this does not meet my expectations: it violates stable kernel
rules, and it will cause a regression for some users while a better
solution is known to be available.

Best regards,

								Pavel
Thomas Gleixner Aug. 28, 2019, 2:15 p.m. UTC | #12
On Wed, 28 Aug 2019, Pavel Machek wrote:
> On Wed 2019-08-28 14:46:21, Borislav Petkov wrote:
> > On Wed, Aug 28, 2019 at 02:29:13PM +0200, Pavel Machek wrote:
> > > This is not a way to have an inteligent conversation.
> > 
> > No, this *is* the way to keep the conversation sane, without veering
> > off into some absurd claims.
> > 
> > So, to cut to the chase: you can simply add "rdrand=force" to your
> > cmdline parameters and get back to using RDRAND.
> > 
> > And yet if you still feel this fix does not meet your expectations,
> > you were told already to either produce patches or who to contact. I'm
> > afraid complaining on this thread won't get you anywhere but that's your
> > call.
> 
> No, this does not meet my expectations, it violates stable kernel
> rules, and will cause regression to some users, while better solution
> is known to be available.

Your unqualified ranting does not meet my expectations either, and it
violates every rule of common sense.

For the record:

  Neither AMD nor we have any idea which particular machines have a fixed
  BIOS and which have not. There is no technical indicator either at boot
  time as the wreckage manifests itself only after resume.

  So in the interest of users the only sensible decision is to disable
  RDRAND for this class of CPUs.

  If you have a list of machines which have a fixed BIOS, then provide it
  in form of patches. If not then stop claiming that there is a better
  solution available.

Anyway, I'm done with that and further rants of yours go directly to
/dev/null.

Thanks for wasting everyones time

       tglx
Pavel Machek Aug. 28, 2019, 10:05 p.m. UTC | #13
On Wed 2019-08-28 16:15:06, Thomas Gleixner wrote:
> On Wed, 28 Aug 2019, Pavel Machek wrote:
> > On Wed 2019-08-28 14:46:21, Borislav Petkov wrote:
> > > On Wed, Aug 28, 2019 at 02:29:13PM +0200, Pavel Machek wrote:
> > > > This is not a way to have an inteligent conversation.
> > > 
> > > No, this *is* the way to keep the conversation sane, without veering
> > > off into some absurd claims.
> > > 
> > > So, to cut to the chase: you can simply add "rdrand=force" to your
> > > cmdline parameters and get back to using RDRAND.
> > > 
> > > And yet if you still feel this fix does not meet your expectations,
> > > you were told already to either produce patches or who to contact. I'm
> > > afraid complaining on this thread won't get you anywhere but that's your
> > > call.
> > 
> > No, this does not meet my expectations, it violates stable kernel
> > rules, and will cause regression to some users, while better solution
> > is known to be available.
> 
> Your unqualified ranting does not meet my expectation either and it
> violates any rule of common sense.
> 
> For the record:
> 
>   Neither AMD nor we have any idea which particular machines have a fixed
>   BIOS and which have not. There is no technical indicator either at boot
>   time as the wreckage manifests itself only after resume.
> 
>   So in the interest of users the only sensible decision is to disable
>   RDRAND for this class of CPUs.

No.

Obviously best solution would be microcode update to fix the problem,
or to enable kernel to fix the problem.

>   If you have a list of machines which have a fixed BIOS, then provide it
>   in form of patches. If not then stop claiming that there is a better
>   solution available.

And yes, a whitelist would already be better than the present solution. It
is not my duty to submit fixes to your proposed patch.

> Anyway, I'm done with that and further rants of yours go directly to
> /dev/null.
> 
> Thanks for wasting everyones time

Thanks for your professional attitude.

								Pavel

Patch

--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -3948,6 +3948,13 @@ 
 			Run specified binary instead of /init from the ramdisk,
 			used for early userspace startup. See initrd.
 
+	rdrand=		[X86]
+			force - Override the decision by the kernel to hide the
+				advertisement of RDRAND support (this affects
+				certain AMD processors because of buggy BIOS
+				support, specifically around the suspend/resume
+				path).
+
 	rdt=		[HW,X86,RDT]
 			Turn on/off individual RDT features. List is:
 			cmt, mbmtotal, mbmlocal, l3cat, l3cdp, l2cat, l2cdp,
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -334,6 +334,7 @@ 
 #define MSR_AMD64_PATCH_LEVEL		0x0000008b
 #define MSR_AMD64_TSC_RATIO		0xc0000104
 #define MSR_AMD64_NB_CFG		0xc001001f
+#define MSR_AMD64_CPUID_FN_1		0xc0011004
 #define MSR_AMD64_PATCH_LOADER		0xc0010020
 #define MSR_AMD64_OSVW_ID_LENGTH	0xc0010140
 #define MSR_AMD64_OSVW_STATUS		0xc0010141
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -799,6 +799,64 @@  static void init_amd_ln(struct cpuinfo_x
 	msr_set_bit(MSR_AMD64_DE_CFG, 31);
 }
 
+static bool rdrand_force;
+
+static int __init rdrand_cmdline(char *str)
+{
+	if (!str)
+		return -EINVAL;
+
+	if (!strcmp(str, "force"))
+		rdrand_force = true;
+	else
+		return -EINVAL;
+
+	return 0;
+}
+early_param("rdrand", rdrand_cmdline);
+
+static void clear_rdrand_cpuid_bit(struct cpuinfo_x86 *c)
+{
+	/*
+	 * Saving of the MSR used to hide the RDRAND support during
+	 * suspend/resume is done by arch/x86/power/cpu.c, which is
+	 * dependent on CONFIG_PM_SLEEP.
+	 */
+	if (!IS_ENABLED(CONFIG_PM_SLEEP))
+		return;
+
+	/*
+	 * The nordrand option can clear X86_FEATURE_RDRAND, so check for
+	 * RDRAND support using the CPUID function directly.
+	 */
+	if (!(cpuid_ecx(1) & BIT(30)) || rdrand_force)
+		return;
+
+	msr_clear_bit(MSR_AMD64_CPUID_FN_1, 62);
+
+	/*
+	 * Verify that the CPUID change has occurred in case the kernel is
+	 * running virtualized and the hypervisor doesn't support the MSR.
+	 */
+	if (cpuid_ecx(1) & BIT(30)) {
+		pr_info_once("BIOS may not properly restore RDRAND after suspend, but hypervisor does not support hiding RDRAND via CPUID.\n");
+		return;
+	}
+
+	clear_cpu_cap(c, X86_FEATURE_RDRAND);
+	pr_info_once("BIOS may not properly restore RDRAND after suspend, hiding RDRAND via CPUID. Use rdrand=force to reenable.\n");
+}
+
+static void init_amd_jg(struct cpuinfo_x86 *c)
+{
+	/*
+	 * Some BIOS implementations do not restore proper RDRAND support
+	 * across suspend and resume. Check on whether to hide the RDRAND
+	 * instruction support via CPUID.
+	 */
+	clear_rdrand_cpuid_bit(c);
+}
+
 static void init_amd_bd(struct cpuinfo_x86 *c)
 {
 	u64 value;
@@ -813,6 +871,13 @@  static void init_amd_bd(struct cpuinfo_x
 			wrmsrl_safe(MSR_F15H_IC_CFG, value);
 		}
 	}
+
+	/*
+	 * Some BIOS implementations do not restore proper RDRAND support
+	 * across suspend and resume. Check on whether to hide the RDRAND
+	 * instruction support via CPUID.
+	 */
+	clear_rdrand_cpuid_bit(c);
 }
 
 static void init_amd_zn(struct cpuinfo_x86 *c)
@@ -855,6 +920,7 @@  static void init_amd(struct cpuinfo_x86
 	case 0x10: init_amd_gh(c); break;
 	case 0x12: init_amd_ln(c); break;
 	case 0x15: init_amd_bd(c); break;
+	case 0x16: init_amd_jg(c); break;
 	case 0x17: init_amd_zn(c); break;
 	}
 
--- a/arch/x86/power/cpu.c
+++ b/arch/x86/power/cpu.c
@@ -13,6 +13,7 @@ 
 #include <linux/smp.h>
 #include <linux/perf_event.h>
 #include <linux/tboot.h>
+#include <linux/dmi.h>
 
 #include <asm/pgtable.h>
 #include <asm/proto.h>
@@ -24,7 +25,7 @@ 
 #include <asm/debugreg.h>
 #include <asm/cpu.h>
 #include <asm/mmu_context.h>
-#include <linux/dmi.h>
+#include <asm/cpu_device_id.h>
 
 #ifdef CONFIG_X86_32
 __visible unsigned long saved_context_ebx;
@@ -398,15 +399,14 @@  static int __init bsp_pm_check_init(void
 
 core_initcall(bsp_pm_check_init);
 
-static int msr_init_context(const u32 *msr_id, const int total_num)
+static int msr_build_context(const u32 *msr_id, const int num)
 {
-	int i = 0;
+	struct saved_msrs *saved_msrs = &saved_context.saved_msrs;
 	struct saved_msr *msr_array;
+	int total_num;
+	int i, j;
 
-	if (saved_context.saved_msrs.array || saved_context.saved_msrs.num > 0) {
-		pr_err("x86/pm: MSR quirk already applied, please check your DMI match table.\n");
-		return -EINVAL;
-	}
+	total_num = saved_msrs->num + num;
 
 	msr_array = kmalloc_array(total_num, sizeof(struct saved_msr), GFP_KERNEL);
 	if (!msr_array) {
@@ -414,19 +414,30 @@  static int msr_init_context(const u32 *m
 		return -ENOMEM;
 	}
 
-	for (i = 0; i < total_num; i++) {
-		msr_array[i].info.msr_no	= msr_id[i];
+	if (saved_msrs->array) {
+		/*
+		 * Multiple callbacks can invoke this function, so copy any
+		 * MSR save requests from previous invocations.
+		 */
+		memcpy(msr_array, saved_msrs->array,
+		       sizeof(struct saved_msr) * saved_msrs->num);
+
+		kfree(saved_msrs->array);
+	}
+
+	for (i = saved_msrs->num, j = 0; i < total_num; i++, j++) {
+		msr_array[i].info.msr_no	= msr_id[j];
 		msr_array[i].valid		= false;
 		msr_array[i].info.reg.q		= 0;
 	}
-	saved_context.saved_msrs.num	= total_num;
-	saved_context.saved_msrs.array	= msr_array;
+	saved_msrs->num   = total_num;
+	saved_msrs->array = msr_array;
 
 	return 0;
 }
 
 /*
- * The following section is a quirk framework for problematic BIOSen:
+ * The following sections are a quirk framework for problematic BIOSen:
  * Sometimes MSRs are modified by the BIOSen after suspended to
  * RAM, this might cause unexpected behavior after wakeup.
  * Thus we save/restore these specified MSRs across suspend/resume
@@ -441,7 +452,7 @@  static int msr_initialize_bdw(const stru
 	u32 bdw_msr_id[] = { MSR_IA32_THERM_CONTROL };
 
 	pr_info("x86/pm: %s detected, MSR saving is needed during suspending.\n", d->ident);
-	return msr_init_context(bdw_msr_id, ARRAY_SIZE(bdw_msr_id));
+	return msr_build_context(bdw_msr_id, ARRAY_SIZE(bdw_msr_id));
 }
 
 static const struct dmi_system_id msr_save_dmi_table[] = {
@@ -456,9 +467,58 @@  static const struct dmi_system_id msr_sa
 	{}
 };
 
+static int msr_save_cpuid_features(const struct x86_cpu_id *c)
+{
+	u32 cpuid_msr_id[] = {
+		MSR_AMD64_CPUID_FN_1,
+	};
+
+	pr_info("x86/pm: family %#hx cpu detected, MSR saving is needed during suspending.\n",
+		c->family);
+
+	return msr_build_context(cpuid_msr_id, ARRAY_SIZE(cpuid_msr_id));
+}
+
+static const struct x86_cpu_id msr_save_cpu_table[] = {
+	{
+		.vendor = X86_VENDOR_AMD,
+		.family = 0x15,
+		.model = X86_MODEL_ANY,
+		.feature = X86_FEATURE_ANY,
+		.driver_data = (kernel_ulong_t)msr_save_cpuid_features,
+	},
+	{
+		.vendor = X86_VENDOR_AMD,
+		.family = 0x16,
+		.model = X86_MODEL_ANY,
+		.feature = X86_FEATURE_ANY,
+		.driver_data = (kernel_ulong_t)msr_save_cpuid_features,
+	},
+	{}
+};
+
+typedef int (*pm_cpu_match_t)(const struct x86_cpu_id *);
+static int pm_cpu_check(const struct x86_cpu_id *c)
+{
+	const struct x86_cpu_id *m;
+	int ret = 0;
+
+	m = x86_match_cpu(msr_save_cpu_table);
+	if (m) {
+		pm_cpu_match_t fn;
+
+		fn = (pm_cpu_match_t)m->driver_data;
+		ret = fn(m);
+	}
+
+	return ret;
+}
+
 static int pm_check_save_msr(void)
 {
 	dmi_check_system(msr_save_dmi_table);
+	pm_cpu_check(msr_save_cpu_table);
+
 	return 0;
 }