diff mbox series

[4/5] x86/cpu: Create Hygon Dhyana architecture support file

Message ID 1554409592-28572-5-git-send-email-andrew.cooper3@citrix.com (mailing list archive)
State New, archived
Headers show
Series x86/cpu: Rework of X86_VENDOR_* constants | expand

Commit Message

Andrew Cooper April 4, 2019, 8:26 p.m. UTC
From: Pu Wen <puwen@hygon.cn>

Add x86 architecture support for a new processor: Hygon Dhyana Family
18h. To keep the Hygon initialisation flow clear, carve the relevant
code out of amd.c into a separate file, hygon.c, and remove the code
that is unnecessary for Hygon Dhyana.

To identify Hygon Dhyana CPUs, add a new vendor type X86_VENDOR_HYGON
and the vendor ID string "HygonGenuine" for system recognition, and
plug them into the new x86 vendor lookup mechanism.

Hygon can use the function early_init_amd() unmodified, so make this
common function non-static and call it directly from the Hygon code.

Add a separate hygon_get_topology(), which calculates phys_proc_id from
ApicId[6] (see reference [1]).

Reference:
[1] https://git.kernel.org/tip/e0ceeae708cebf22c990c3d703a4ca187dc837f5

Signed-off-by: Pu Wen <puwen@hygon.cn>
---
 tools/tests/cpu-policy/test-cpu-policy.c |   1 +
 xen/arch/x86/cpu/Makefile                |   1 +
 xen/arch/x86/cpu/amd.c                   |   2 +-
 xen/arch/x86/cpu/common.c                |   1 +
 xen/arch/x86/cpu/cpu.h                   |   4 +-
 xen/arch/x86/cpu/hygon.c                 | 107 +++++++++++++++++++++++++++++++
 xen/include/asm-x86/x86-vendors.h        |   5 ++
 xen/lib/x86/cpuid.c                      |   7 ++
 8 files changed, 126 insertions(+), 2 deletions(-)
 create mode 100644 xen/arch/x86/cpu/hygon.c

Comments

Jan Beulich April 5, 2019, 9:17 a.m. UTC | #1
>>> On 04.04.19 at 22:26, <andrew.cooper3@citrix.com> wrote:
> +static void init_hygon(struct cpuinfo_x86 *c)
> +{
> +	unsigned long long value;
> +
> +	/*
> +	 * Attempt to set lfence to be Dispatch Serialising.  This MSR almost
> +	 * certainly isn't virtualised (and Xen at least will leak the real
> +	 * value in but silently discard writes), as well as being per-core
> +	 * rather than per-thread, so do a full safe read/write/readback cycle
> +	 * in the worst case.
> +	 */
> +	if (rdmsr_safe(MSR_AMD64_DE_CFG, value))
> +		/* Unable to read.  Assume the safer default. */
> +		__clear_bit(X86_FEATURE_LFENCE_DISPATCH,
> +			    c->x86_capability);
> +	else if (value & AMD64_DE_CFG_LFENCE_SERIALISE)
> +		/* Already dispatch serialising. */
> +		__set_bit(X86_FEATURE_LFENCE_DISPATCH,
> +			  c->x86_capability);
> +	else if (wrmsr_safe(MSR_AMD64_DE_CFG,
> +			    value | AMD64_DE_CFG_LFENCE_SERIALISE) ||
> +		 rdmsr_safe(MSR_AMD64_DE_CFG, value) ||
> +		 !(value & AMD64_DE_CFG_LFENCE_SERIALISE))
> +		/* Attempt to set failed.  Assume the safer default. */
> +		__clear_bit(X86_FEATURE_LFENCE_DISPATCH,
> +			    c->x86_capability);
> +	else
> +		/* Successfully enabled! */
> +		__set_bit(X86_FEATURE_LFENCE_DISPATCH,
> +			  c->x86_capability);

Down the road we may want to make this another helper function
shared by the AMD and Hygon code.

> +	/*
> +	 * If the user has explicitly chosen to disable Memory Disambiguation
> +	 * to mitigate Speculative Store Bypass, poke the appropriate MSR.
> +	 */
> +	if (opt_ssbd && !rdmsr_safe(MSR_AMD64_LS_CFG, value)) {
> +		value |= 1ull << 10;
> +		wrmsr_safe(MSR_AMD64_LS_CFG, value);
> +	}

Like the above, this lacks a model check. Is it really expected for
all future Hygon models to support both in exactly the same
fashion? Even if there's just a small chance of this not being the
case, rather than fiddling with MSRs which have an entirely
different meaning in future models, I'd prefer if model checks
were added in cases like these.

I do realize that in the former case there's effectively an "all
models except a few" logic already in the original AMD code,
which I too would question whether it's really desirable. After
all we've learned recently that MSRs indeed can go away
(albeit in that case it was a change to the MSR simply becoming
meaningless, rather than obtaining a new meaning). Based on
this I could accept just the SSBD logic to gain a model check.

Jan
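Jan's suggested shared helper could, roughly, take the following shape.
This is a userspace sketch only: rdmsr_safe()/wrmsr_safe() are stubbed
with fake MSR state so the read/write/readback decision logic can be
exercised, and the stub names and DE_CFG bit value are illustrative
stand-ins, not Xen's real interfaces.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-in for AMD64_DE_CFG_LFENCE_SERIALISE. */
#define DE_CFG_LFENCE_SERIALISE (1ull << 1)

/* Fake MSR state standing in for MSR_AMD64_DE_CFG, so the logic can
 * run in userspace.  Real code would use rdmsr_safe()/wrmsr_safe(). */
static uint64_t fake_de_cfg;
static bool msr_present = true;
static bool msr_writable = true;

static int rdmsr_safe_stub(uint64_t *val)
{
	if (!msr_present)
		return -1;
	*val = fake_de_cfg;
	return 0;
}

static int wrmsr_safe_stub(uint64_t val)
{
	if (!msr_present || !msr_writable)
		return -1;
	fake_de_cfg = val;
	return 0;
}

/*
 * One possible shape for the shared helper: returns true iff LFENCE is
 * (now) dispatch serialising, following the same read/write/readback
 * cycle as init_hygon()/init_amd().
 */
static bool set_lfence_dispatch(void)
{
	uint64_t value;

	if (rdmsr_safe_stub(&value))
		return false;	/* Unable to read: safer default. */
	if (value & DE_CFG_LFENCE_SERIALISE)
		return true;	/* Already dispatch serialising. */
	if (wrmsr_safe_stub(value | DE_CFG_LFENCE_SERIALISE) ||
	    rdmsr_safe_stub(&value) ||
	    !(value & DE_CFG_LFENCE_SERIALISE))
		return false;	/* Write failed or didn't stick. */
	return true;		/* Successfully enabled. */
}
```

The caller would then __set_bit()/__clear_bit() X86_FEATURE_LFENCE_DISPATCH
based on the return value, keeping the MSR poking in one place.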
Pu Wen April 5, 2019, 3:30 p.m. UTC | #2
On 2019/4/5 17:38, Jan Beulich wrote:
> On 04.04.19 at 22:26, <andrew.cooper3@citrix.com> wrote:
>> +	else
>> +		/* Successfully enabled! */
>> +		__set_bit(X86_FEATURE_LFENCE_DISPATCH,
>> +			  c->x86_capability);
> 
> Down the road we may want to make this another helper function
> shared by the AMD and Hygon code.

It sounds good.

>> +	/*
>> +	 * If the user has explicitly chosen to disable Memory Disambiguation
>> +	 * to mitigate Speculative Store Bypass, poke the appropriate MSR.
>> +	 */
>> +	if (opt_ssbd && !rdmsr_safe(MSR_AMD64_LS_CFG, value)) {
>> +		value |= 1ull << 10;
>> +		wrmsr_safe(MSR_AMD64_LS_CFG, value);
>> +	}
> 
> Like the above, this lacks a model check. Is it really expected for
> all future Hygon models to supports both in exactly the same

For the current Hygon family 18h, this MSR has the same meaning across
all models.

> fashion? Even if there's just a small chance of this not being the
> case, rather than fiddling with MSRs which have an entirely
> different meaning in future models, I'd prefer if model checks
> were added in cases like these.

For some future Hygon CPU family (such as, perhaps, family 20h), the
bit definitions of this MSR may take on a different meaning. But I
think that should be dealt with then, since some other features would
need adjusting for those families anyway.

> I do realize that in the former case there's effectively an "all
> models except a few" logic already in the original AMD code,
> which I too would question whether it's really desirable. After
> all we've learned recently that MSRs indeed can go away

It's a good thing that MSRs can go away.

> (albeit in that case it was a change to the MSR simply becoming
> meaningless, rather than obtaining a new meaning). Based on
> this I could accept just the SSBD logic to gain a model check.
Jan Beulich April 5, 2019, 4:10 p.m. UTC | #3
>>> On 05.04.19 at 17:30, <puwen@hygon.cn> wrote:
> On 2019/4/5 17:38, Jan Beulich wrote:
>> On 04.04.19 at 22:26, <andrew.cooper3@citrix.com> wrote:
>>> +	/*
>>> +	 * If the user has explicitly chosen to disable Memory Disambiguation
>>> +	 * to mitigate Speculative Store Bypass, poke the appropriate MSR.
>>> +	 */
>>> +	if (opt_ssbd && !rdmsr_safe(MSR_AMD64_LS_CFG, value)) {
>>> +		value |= 1ull << 10;
>>> +		wrmsr_safe(MSR_AMD64_LS_CFG, value);
>>> +	}
>> 
>> Like the above, this lacks a model check. Is it really expected for
>> all future Hygon models to support both in exactly the same
> 
> For current Hygon family 18h, all models will have the same meaning.

Oh, I'm sorry - I meant "families", not "models" in my earlier reply
(throughout). I'm frequently mixing this up when switching between
"Intel" and "AMD" modes, unfortunately.

Jan

Patch

diff --git a/tools/tests/cpu-policy/test-cpu-policy.c b/tools/tests/cpu-policy/test-cpu-policy.c
index beced5e..88f5121 100644
--- a/tools/tests/cpu-policy/test-cpu-policy.c
+++ b/tools/tests/cpu-policy/test-cpu-policy.c
@@ -35,6 +35,7 @@  static void test_vendor_identification(void)
         { { "AuthenticAMD" }, X86_VENDOR_AMD },
         { { "CentaurHauls" }, X86_VENDOR_CENTAUR },
         { { "  Shanghai  " }, X86_VENDOR_SHANGHAI },
+        { { "HygonGenuine" }, X86_VENDOR_HYGON },
 
         { { ""             }, X86_VENDOR_UNKNOWN },
         { { "            " }, X86_VENDOR_UNKNOWN },
diff --git a/xen/arch/x86/cpu/Makefile b/xen/arch/x86/cpu/Makefile
index 34a01ca..466acc8 100644
--- a/xen/arch/x86/cpu/Makefile
+++ b/xen/arch/x86/cpu/Makefile
@@ -4,6 +4,7 @@  subdir-y += mtrr
 obj-y += amd.o
 obj-y += centaur.o
 obj-y += common.o
+obj-y += hygon.o
 obj-y += intel.o
 obj-y += intel_cacheinfo.o
 obj-y += mwait-idle.o
diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
index e19a5ea..3966560 100644
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -526,7 +526,7 @@  static void amd_get_topology(struct cpuinfo_x86 *c)
                                                           : c->cpu_core_id);
 }
 
-static void early_init_amd(struct cpuinfo_x86 *c)
+void early_init_amd(struct cpuinfo_x86 *c)
 {
 	if (c == &boot_cpu_data)
 		amd_init_levelling();
diff --git a/xen/arch/x86/cpu/common.c b/xen/arch/x86/cpu/common.c
index 7cc45fe..89d3a7b 100644
--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -286,6 +286,7 @@  static void __init early_cpu_detect(void)
 	case X86_VENDOR_AMD:	  this_cpu = &amd_cpu_dev;      break;
 	case X86_VENDOR_CENTAUR:  this_cpu = &centaur_cpu_dev;  break;
 	case X86_VENDOR_SHANGHAI: this_cpu = &shanghai_cpu_dev; break;
+	case X86_VENDOR_HYGON:    this_cpu = &hygon_cpu_dev;    break;
 	default:
 		printk(XENLOG_ERR
 		       "Unrecognised or unsupported CPU vendor '%.12s'\n",
diff --git a/xen/arch/x86/cpu/cpu.h b/xen/arch/x86/cpu/cpu.h
index 54bd0d3..30cd3a8 100644
--- a/xen/arch/x86/cpu/cpu.h
+++ b/xen/arch/x86/cpu/cpu.h
@@ -5,7 +5,7 @@  struct cpu_dev {
 };
 
 extern const struct cpu_dev intel_cpu_dev, amd_cpu_dev, centaur_cpu_dev,
-    shanghai_cpu_dev;
+    shanghai_cpu_dev, hygon_cpu_dev;
 
 extern bool_t opt_arat;
 extern unsigned int opt_cpuid_mask_ecx, opt_cpuid_mask_edx;
@@ -14,3 +14,5 @@  extern unsigned int opt_cpuid_mask_ext_ecx, opt_cpuid_mask_ext_edx;
 
 extern int get_model_name(struct cpuinfo_x86 *c);
 extern void display_cacheinfo(struct cpuinfo_x86 *c);
+
+void early_init_amd(struct cpuinfo_x86 *c);
diff --git a/xen/arch/x86/cpu/hygon.c b/xen/arch/x86/cpu/hygon.c
new file mode 100644
index 0000000..9ab7aa8
--- /dev/null
+++ b/xen/arch/x86/cpu/hygon.c
@@ -0,0 +1,107 @@ 
+#include <xen/init.h>
+#include <asm/processor.h>
+#include <asm/hvm/support.h>
+#include <asm/spec_ctrl.h>
+
+#include "cpu.h"
+
+#define APICID_SOCKET_ID_BIT 6
+
+static void hygon_get_topology(struct cpuinfo_x86 *c)
+{
+	unsigned int ebx;
+
+	if (c->x86_max_cores <= 1)
+		return;
+
+	/* Socket ID is ApicId[6] for Hygon processors. */
+	c->phys_proc_id >>= APICID_SOCKET_ID_BIT;
+
+	ebx = cpuid_ebx(0x8000001e);
+	c->x86_num_siblings = ((ebx >> 8) & 0x3) + 1;
+	c->x86_max_cores /= c->x86_num_siblings;
+	c->cpu_core_id = ebx & 0xff;
+
+	if (opt_cpu_info)
+		printk("CPU %d(%d) -> Processor %d, Core %d\n",
+		       smp_processor_id(), c->x86_max_cores,
+		       c->phys_proc_id, c->cpu_core_id);
+}
+
+static void init_hygon(struct cpuinfo_x86 *c)
+{
+	unsigned long long value;
+
+	/*
+	 * Attempt to set lfence to be Dispatch Serialising.  This MSR almost
+	 * certainly isn't virtualised (and Xen at least will leak the real
+	 * value in but silently discard writes), as well as being per-core
+	 * rather than per-thread, so do a full safe read/write/readback cycle
+	 * in the worst case.
+	 */
+	if (rdmsr_safe(MSR_AMD64_DE_CFG, value))
+		/* Unable to read.  Assume the safer default. */
+		__clear_bit(X86_FEATURE_LFENCE_DISPATCH,
+			    c->x86_capability);
+	else if (value & AMD64_DE_CFG_LFENCE_SERIALISE)
+		/* Already dispatch serialising. */
+		__set_bit(X86_FEATURE_LFENCE_DISPATCH,
+			  c->x86_capability);
+	else if (wrmsr_safe(MSR_AMD64_DE_CFG,
+			    value | AMD64_DE_CFG_LFENCE_SERIALISE) ||
+		 rdmsr_safe(MSR_AMD64_DE_CFG, value) ||
+		 !(value & AMD64_DE_CFG_LFENCE_SERIALISE))
+		/* Attempt to set failed.  Assume the safer default. */
+		__clear_bit(X86_FEATURE_LFENCE_DISPATCH,
+			    c->x86_capability);
+	else
+		/* Successfully enabled! */
+		__set_bit(X86_FEATURE_LFENCE_DISPATCH,
+			  c->x86_capability);
+
+	/*
+	 * If the user has explicitly chosen to disable Memory Disambiguation
+	 * to mitigate Speculative Store Bypass, poke the appropriate MSR.
+	 */
+	if (opt_ssbd && !rdmsr_safe(MSR_AMD64_LS_CFG, value)) {
+		value |= 1ull << 10;
+		wrmsr_safe(MSR_AMD64_LS_CFG, value);
+	}
+
+	/* MFENCE stops RDTSC speculation */
+	if (!cpu_has_lfence_dispatch)
+		__set_bit(X86_FEATURE_MFENCE_RDTSC, c->x86_capability);
+
+	display_cacheinfo(c);
+
+	if (c->extended_cpuid_level >= 0x80000008)
+		c->x86_max_cores = (cpuid_ecx(0x80000008) & 0xff) + 1;
+
+	if (c->extended_cpuid_level >= 0x80000007) {
+		if (cpu_has(c, X86_FEATURE_ITSC)) {
+			__set_bit(X86_FEATURE_CONSTANT_TSC, c->x86_capability);
+			__set_bit(X86_FEATURE_NONSTOP_TSC, c->x86_capability);
+			__set_bit(X86_FEATURE_TSC_RELIABLE, c->x86_capability);
+		}
+	}
+
+	hygon_get_topology(c);
+
+	/* Hygon CPUs do not support SYSENTER outside of legacy mode. */
+	__clear_bit(X86_FEATURE_SEP, c->x86_capability);
+
+	/* Hygon processors have APIC timer running in deep C states. */
+	if (opt_arat)
+		__set_bit(X86_FEATURE_ARAT, c->x86_capability);
+
+	if (cpu_has(c, X86_FEATURE_EFRO)) {
+		rdmsrl(MSR_K7_HWCR, value);
+		value |= (1 << 27); /* Enable read-only APERF/MPERF bit */
+		wrmsrl(MSR_K7_HWCR, value);
+	}
+}
+
+const struct cpu_dev hygon_cpu_dev = {
+	.c_early_init	= early_init_amd,
+	.c_init		= init_hygon,
+};
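The EBX decode in hygon_get_topology() above can be exercised in
isolation. The sketch below is illustrative: the struct and function
names are stand-ins for the cpuinfo_x86 fields Xen actually fills in,
and the APIC ID is passed in rather than read via CPUID.

```c
#include <stdint.h>

#define APICID_SOCKET_ID_BIT 6

/* Illustrative stand-in for the cpuinfo_x86 fields filled in by
 * hygon_get_topology(). */
struct topo {
	unsigned int phys_proc_id;	/* socket (ApicId[6]) */
	unsigned int num_siblings;	/* threads per core */
	unsigned int cpu_core_id;
};

/* Decode the initial APIC ID and CPUID leaf 0x8000001e EBX the way
 * hygon_get_topology() does: socket is ApicId bit 6 upward, threads
 * per core is EBX[9:8] + 1, core id is EBX[7:0]. */
static struct topo decode_topology(uint32_t apicid, uint32_t ebx_8000001e)
{
	struct topo t;

	t.phys_proc_id = apicid >> APICID_SOCKET_ID_BIT;
	t.num_siblings = ((ebx_8000001e >> 8) & 0x3) + 1;
	t.cpu_core_id = ebx_8000001e & 0xff;
	return t;
}
```

For example, an APIC ID of 0x45 with EBX 0x0102 decodes to socket 1,
two threads per core, core id 2.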
diff --git a/xen/include/asm-x86/x86-vendors.h b/xen/include/asm-x86/x86-vendors.h
index 1ecb934..5414753 100644
--- a/xen/include/asm-x86/x86-vendors.h
+++ b/xen/include/asm-x86/x86-vendors.h
@@ -31,4 +31,9 @@ 
 #define X86_VENDOR_SHANGHAI_ECX 0x20206961U
 #define X86_VENDOR_SHANGHAI_EDX 0x68676e61U
 
+#define X86_VENDOR_HYGON (1 << 5)
+#define X86_VENDOR_HYGON_EBX 0x6f677948 /* "HygonGenuine" */
+#define X86_VENDOR_HYGON_ECX 0x656e6975
+#define X86_VENDOR_HYGON_EDX 0x6e65476e
+
 #endif	/* __XEN_X86_VENDORS_H__ */
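The three magic constants above are simply "HygonGenuine" split across
EBX, EDX and ECX in CPUID leaf 0 register order, four little-endian
ASCII bytes per register. A quick standalone check (the memcpy() trick
assumes a little-endian host, which x86 is):

```c
#include <stdint.h>
#include <string.h>

#define X86_VENDOR_HYGON_EBX 0x6f677948 /* "Hygo" */
#define X86_VENDOR_HYGON_EDX 0x6e65476e /* "nGen" */
#define X86_VENDOR_HYGON_ECX 0x656e6975 /* "uine" */

/* Reassemble the CPUID leaf 0 vendor string: the 12 ASCII bytes live
 * in EBX, EDX, ECX (in that order), four bytes per register.  Copying
 * the raw register bytes assumes a little-endian host. */
static void vendor_string(uint32_t ebx, uint32_t edx, uint32_t ecx,
			  char out[13])
{
	memcpy(out + 0, &ebx, 4);
	memcpy(out + 4, &edx, 4);
	memcpy(out + 8, &ecx, 4);
	out[12] = '\0';
}
```

This mirrors the EBX/ECX/EDX comparisons x86_cpuid_lookup_vendor() does
in xen/lib/x86/cpuid.c below.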
diff --git a/xen/lib/x86/cpuid.c b/xen/lib/x86/cpuid.c
index 23619c7..876ade6 100644
--- a/xen/lib/x86/cpuid.c
+++ b/xen/lib/x86/cpuid.c
@@ -29,6 +29,12 @@  unsigned int x86_cpuid_lookup_vendor(uint32_t ebx, uint32_t ecx, uint32_t edx)
              edx == X86_VENDOR_SHANGHAI_EDX )
             return X86_VENDOR_SHANGHAI;
         break;
+
+    case X86_VENDOR_HYGON_EBX:
+        if ( ecx == X86_VENDOR_HYGON_ECX &&
+             edx == X86_VENDOR_HYGON_EDX )
+            return X86_VENDOR_HYGON;
+        break;
     }
 
     return X86_VENDOR_UNKNOWN;
@@ -42,6 +48,7 @@  const char *x86_cpuid_vendor_to_str(unsigned int vendor)
     case X86_VENDOR_AMD:      return "AMD";
     case X86_VENDOR_CENTAUR:  return "Centaur";
     case X86_VENDOR_SHANGHAI: return "Shanghai";
+    case X86_VENDOR_HYGON:    return "Hygon";
     default:                  return "Unknown";
     }
 }