Message ID | 20240606232334.41384-1-alexey.makhalov@broadcom.com (mailing list archive)
---|---
State | Not Applicable |
Series | [v11,1/8] x86/vmware: Introduce VMware hypercall API
Borislav, please review v11 implementation of 1/8 based on your proposal.
I'm waiting for your feedback before sending full v11 patchset.

Thanks,
--Alexey

On 6/6/24 4:23 PM, Alexey Makhalov wrote:
> Introduce vmware_hypercall family of functions. It is a common
> implementation to be used by the VMware guest code and virtual
> device drivers in architecture independent manner.
>
> The API consists of vmware_hypercallX and vmware_hypercall_hb_{out,in}
> set of functions by analogy with KVM hypercall API. Architecture
> specific implementation is hidden inside.
>
> It will simplify future enhancements in VMware hypercalls such
> as SEV-ES and TDX related changes without needs to modify a
> caller in device drivers code.
>
> Current implementation extends an idea from commit bac7b4e84323
> ("x86/vmware: Update platform detection code for VMCALL/VMMCALL
> hypercalls") to have a slow, but safe path vmware_hypercall_slow()
> earlier during the boot when alternatives are not yet applied.
> The code inherits VMWARE_CMD logic from the commit mentioned above.
>
> Move common macros from vmware.c to vmware.h.
>
> Signed-off-by: Alexey Makhalov <alexey.makhalov@broadcom.com>
> ---
>  arch/x86/include/asm/vmware.h | 279 ++++++++++++++++++++++++++++++++--
>  arch/x86/kernel/cpu/vmware.c  |  58 ++++++-
>  2 files changed, 315 insertions(+), 22 deletions(-)
On Wed, Jun 12, 2024 at 03:11:54PM -0700, Alexey Makhalov wrote:
> Borislav, please review v11 implementation of 1/8 based on your proposal.
> I'm waiting for your feedback before sending full v11 patchset.

Sorry about that - -ETOOMUCHEMAIL. :-(

Yeah, that patch looks all good and regular now, and at a quick glance
you know what's what. I think that's definitely better than what you
started with. :-)

Thx.
diff --git a/arch/x86/include/asm/vmware.h b/arch/x86/include/asm/vmware.h
index ac9fc51e2b18..724c8b9b4b8d 100644
--- a/arch/x86/include/asm/vmware.h
+++ b/arch/x86/include/asm/vmware.h
@@ -7,26 +7,277 @@
 #include <linux/stringify.h>

 /*
- * The hypercall definitions differ in the low word of the %edx argument
- * in the following way: the old port base interface uses the port
- * number to distinguish between high- and low bandwidth versions.
+ * VMware hypercall ABI.
+ *
+ * - Low bandwidth (LB) hypercalls (I/O port based, vmcall and vmmcall)
+ *   have up to 6 input and 6 output arguments passed and returned using
+ *   registers: %eax (arg0), %ebx (arg1), %ecx (arg2), %edx (arg3),
+ *   %esi (arg4), %edi (arg5).
+ *   The following input arguments must be initialized by the caller:
+ *   arg0 - VMWARE_HYPERVISOR_MAGIC
+ *   arg2 - Hypercall command
+ *   arg3 bits [15:0] - Port number, LB and direction flags
+ *
+ * - High bandwidth (HB) hypercalls are I/O port based only. They have
+ *   up to 7 input and 7 output arguments passed and returned using
+ *   registers: %eax (arg0), %ebx (arg1), %ecx (arg2), %edx (arg3),
+ *   %esi (arg4), %edi (arg5), %ebp (arg6).
+ *   The following input arguments must be initialized by the caller:
+ *   arg0 - VMWARE_HYPERVISOR_MAGIC
+ *   arg1 - Hypercall command
+ *   arg3 bits [15:0] - Port number, HB and direction flags
+ *
+ * For compatibility purposes, x86_64 systems use only lower 32 bits
+ * for input and output arguments.
+ *
+ * The hypercall definitions differ in the low word of the %edx (arg3)
+ * in the following way: the old I/O port based interface uses the port
+ * number to distinguish between high- and low bandwidth versions, and
+ * uses IN/OUT instructions to define transfer direction.
  *
  * The new vmcall interface instead uses a set of flags to select
  * bandwidth mode and transfer direction. The flags should be loaded
- * into %dx by any user and are automatically replaced by the port
- * number if the VMWARE_HYPERVISOR_PORT method is used.
- *
- * In short, new driver code should strictly use the new definition of
- * %dx content.
+ * into arg3 by any user and are automatically replaced by the port
+ * number if the I/O port method is used.
+ */
+
+#define VMWARE_HYPERVISOR_HB		BIT(0)
+#define VMWARE_HYPERVISOR_OUT		BIT(1)
+
+#define VMWARE_HYPERVISOR_PORT		0x5658
+#define VMWARE_HYPERVISOR_PORT_HB	(VMWARE_HYPERVISOR_PORT | \
+					 VMWARE_HYPERVISOR_HB)
+
+#define VMWARE_HYPERVISOR_MAGIC		0x564d5868U
+
+#define VMWARE_CMD_GETVERSION		10
+#define VMWARE_CMD_GETHZ		45
+#define VMWARE_CMD_GETVCPU_INFO		68
+#define VMWARE_CMD_STEALCLOCK		91
+
+#define CPUID_VMWARE_FEATURES_ECX_VMMCALL	BIT(0)
+#define CPUID_VMWARE_FEATURES_ECX_VMCALL	BIT(1)
+
+extern unsigned long vmware_hypercall_slow(unsigned long cmd,
+                unsigned long in1, unsigned long in3,
+                unsigned long in4, unsigned long in5,
+                u32 *out1, u32 *out2, u32 *out3,
+                u32 *out4, u32 *out5);
+
+/*
+ * The low bandwidth call. The low word of %edx is presumed to have OUT bit
+ * set. The high word of %edx may contain input data from the caller.
 */
+#define VMWARE_HYPERCALL \
+        ALTERNATIVE_2("movw %[port], %%dx\n\t" \
+                      "inl (%%dx), %%eax", \
+                      "vmcall", X86_FEATURE_VMCALL, \
+                      "vmmcall", X86_FEATURE_VMW_VMMCALL)
+
+static inline
+unsigned long vmware_hypercall1(unsigned long cmd, unsigned long in1)
+{
+        unsigned long out0;
+
+        if (unlikely(!alternatives_patched) && !__is_defined(MODULE))
+                return vmware_hypercall_slow(cmd, in1, 0, 0, 0,
+                                NULL, NULL, NULL, NULL, NULL);
+
+        asm_inline volatile (VMWARE_HYPERCALL
+                : "=a" (out0)
+                : [port] "i" (VMWARE_HYPERVISOR_PORT),
+                  "a" (VMWARE_HYPERVISOR_MAGIC),
+                  "b" (in1),
+                  "c" (cmd),
+                  "d" (0)
+                : "cc", "memory");
+        return out0;
+}
+
+static inline
+unsigned long vmware_hypercall3(unsigned long cmd, unsigned long in1,
+                u32 *out1, u32 *out2)
+{
+        unsigned long out0;
+
+        if (unlikely(!alternatives_patched) && !__is_defined(MODULE))
+                return vmware_hypercall_slow(cmd, in1, 0, 0, 0,
+                                out1, out2, NULL, NULL, NULL);
+
+        asm_inline volatile (VMWARE_HYPERCALL
+                : "=a" (out0), "=b" (*out1), "=c" (*out2)
+                : [port] "i" (VMWARE_HYPERVISOR_PORT),
+                  "a" (VMWARE_HYPERVISOR_MAGIC),
+                  "b" (in1),
+                  "c" (cmd),
+                  "d" (0)
+                : "cc", "memory");
+        return out0;
+}
+
+static inline
+unsigned long vmware_hypercall4(unsigned long cmd, unsigned long in1,
+                u32 *out1, u32 *out2, u32 *out3)
+{
+        unsigned long out0;
+
+        if (unlikely(!alternatives_patched) && !__is_defined(MODULE))
+                return vmware_hypercall_slow(cmd, in1, 0, 0, 0,
+                                out1, out2, out3, NULL, NULL);
+
+        asm_inline volatile (VMWARE_HYPERCALL
+                : "=a" (out0), "=b" (*out1), "=c" (*out2), "=d" (*out3)
+                : [port] "i" (VMWARE_HYPERVISOR_PORT),
+                  "a" (VMWARE_HYPERVISOR_MAGIC),
+                  "b" (in1),
+                  "c" (cmd),
+                  "d" (0)
+                : "cc", "memory");
+        return out0;
+}
+
+static inline
+unsigned long vmware_hypercall5(unsigned long cmd, unsigned long in1,
+                unsigned long in3, unsigned long in4,
+                unsigned long in5, u32 *out2)
+{
+        unsigned long out0;
+
+        if (unlikely(!alternatives_patched) && !__is_defined(MODULE))
+                return vmware_hypercall_slow(cmd, in1, in3, in4, in5,
+                                NULL, out2, NULL, NULL, NULL);
+
+        asm_inline volatile (VMWARE_HYPERCALL
+                : "=a" (out0), "=c" (*out2)
+                : [port] "i" (VMWARE_HYPERVISOR_PORT),
+                  "a" (VMWARE_HYPERVISOR_MAGIC),
+                  "b" (in1),
+                  "c" (cmd),
+                  "d" (in3),
+                  "S" (in4),
+                  "D" (in5)
+                : "cc", "memory");
+        return out0;
+}
+
+static inline
+unsigned long vmware_hypercall6(unsigned long cmd, unsigned long in1,
+                unsigned long in3, u32 *out2,
+                u32 *out3, u32 *out4, u32 *out5)
+{
+        unsigned long out0;
+
+        if (unlikely(!alternatives_patched) && !__is_defined(MODULE))
+                return vmware_hypercall_slow(cmd, in1, in3, 0, 0,
+                                NULL, out2, out3, out4, out5);
+
+        asm_inline volatile (VMWARE_HYPERCALL
+                : "=a" (out0), "=c" (*out2), "=d" (*out3), "=S" (*out4),
+                  "=D" (*out5)
+                : [port] "i" (VMWARE_HYPERVISOR_PORT),
+                  "a" (VMWARE_HYPERVISOR_MAGIC),
+                  "b" (in1),
+                  "c" (cmd),
+                  "d" (in3)
+                : "cc", "memory");
+        return out0;
+}
+
+static inline
+unsigned long vmware_hypercall7(unsigned long cmd, unsigned long in1,
+                unsigned long in3, unsigned long in4,
+                unsigned long in5, u32 *out1,
+                u32 *out2, u32 *out3)
+{
+        unsigned long out0;
+
+        if (unlikely(!alternatives_patched) && !__is_defined(MODULE))
+                return vmware_hypercall_slow(cmd, in1, in3, in4, in5,
+                                out1, out2, out3, NULL, NULL);
+
+        asm_inline volatile (VMWARE_HYPERCALL
+                : "=a" (out0), "=b" (*out1), "=c" (*out2), "=d" (*out3)
+                : [port] "i" (VMWARE_HYPERVISOR_PORT),
+                  "a" (VMWARE_HYPERVISOR_MAGIC),
+                  "b" (in1),
+                  "c" (cmd),
+                  "d" (in3),
+                  "S" (in4),
+                  "D" (in5)
+                : "cc", "memory");
+        return out0;
+}
+
+#ifdef CONFIG_X86_64
+#define VMW_BP_CONSTRAINT "r"
+#else
+#define VMW_BP_CONSTRAINT "m"
+#endif
+
+/*
+ * High bandwidth calls are not supported on encrypted memory guests.
+ * The caller should check cc_platform_has(CC_ATTR_MEM_ENCRYPT) and use
+ * low bandwidth hypercall if memory encryption is set.
+ * This assumption simplifies HB hypercall implementation to just I/O port
+ * based approach without alternative patching.
+ */
+static inline
+unsigned long vmware_hypercall_hb_out(unsigned long cmd, unsigned long in2,
+                unsigned long in3, unsigned long in4,
+                unsigned long in5, unsigned long in6,
+                u32 *out1)
+{
+        unsigned long out0;
+
+        asm_inline volatile (
+                UNWIND_HINT_SAVE
+                "push %%" _ASM_BP "\n\t"
+                UNWIND_HINT_UNDEFINED
+                "mov %[in6], %%" _ASM_BP "\n\t"
+                "rep outsb\n\t"
+                "pop %%" _ASM_BP "\n\t"
+                UNWIND_HINT_RESTORE
+                : "=a" (out0), "=b" (*out1)
+                : "a" (VMWARE_HYPERVISOR_MAGIC),
+                  "b" (cmd),
+                  "c" (in2),
+                  "d" (in3 | VMWARE_HYPERVISOR_PORT_HB),
+                  "S" (in4),
+                  "D" (in5),
+                  [in6] VMW_BP_CONSTRAINT (in6)
+                : "cc", "memory");
+        return out0;
+}

-/* Old port-based version */
-#define VMWARE_HYPERVISOR_PORT    0x5658
-#define VMWARE_HYPERVISOR_PORT_HB 0x5659
+static inline
+unsigned long vmware_hypercall_hb_in(unsigned long cmd, unsigned long in2,
+                unsigned long in3, unsigned long in4,
+                unsigned long in5, unsigned long in6,
+                u32 *out1)
+{
+        unsigned long out0;

-/* Current vmcall / vmmcall version */
-#define VMWARE_HYPERVISOR_HB   BIT(0)
-#define VMWARE_HYPERVISOR_OUT  BIT(1)
+        asm_inline volatile (
+                UNWIND_HINT_SAVE
+                "push %%" _ASM_BP "\n\t"
+                UNWIND_HINT_UNDEFINED
+                "mov %[in6], %%" _ASM_BP "\n\t"
+                "rep insb\n\t"
+                "pop %%" _ASM_BP "\n\t"
+                UNWIND_HINT_RESTORE
+                : "=a" (out0), "=b" (*out1)
+                : "a" (VMWARE_HYPERVISOR_MAGIC),
+                  "b" (cmd),
+                  "c" (in2),
+                  "d" (in3 | VMWARE_HYPERVISOR_PORT_HB),
+                  "S" (in4),
+                  "D" (in5),
+                  [in6] VMW_BP_CONSTRAINT (in6)
+                : "cc", "memory");
+        return out0;
+}
+#undef VMW_BP_CONSTRAINT
+#undef VMWARE_HYPERCALL

 /* The low bandwidth call. The low word of edx is presumed clear. */
 #define VMWARE_HYPERCALL \
diff --git a/arch/x86/kernel/cpu/vmware.c b/arch/x86/kernel/cpu/vmware.c
index 11f83d07925e..533ac2d1de88 100644
--- a/arch/x86/kernel/cpu/vmware.c
+++ b/arch/x86/kernel/cpu/vmware.c
@@ -41,17 +41,9 @@

 #define CPUID_VMWARE_INFO_LEAF			0x40000000
 #define CPUID_VMWARE_FEATURES_LEAF		0x40000010
-#define CPUID_VMWARE_FEATURES_ECX_VMMCALL	BIT(0)
-#define CPUID_VMWARE_FEATURES_ECX_VMCALL	BIT(1)

-#define VMWARE_HYPERVISOR_MAGIC			0x564D5868
-
-#define VMWARE_CMD_GETVERSION			10
-#define VMWARE_CMD_GETHZ			45
-#define VMWARE_CMD_GETVCPU_INFO			68
 #define VMWARE_CMD_LEGACY_X2APIC		3
 #define VMWARE_CMD_VCPU_RESERVED		31
-#define VMWARE_CMD_STEALCLOCK			91

 #define STEALCLOCK_NOT_AVAILABLE	(-1)
 #define STEALCLOCK_DISABLED		0
@@ -110,6 +102,56 @@ struct vmware_steal_time {
 static unsigned long vmware_tsc_khz __ro_after_init;
 static u8 vmware_hypercall_mode     __ro_after_init;

+unsigned long vmware_hypercall_slow(unsigned long cmd,
+                unsigned long in1, unsigned long in3,
+                unsigned long in4, unsigned long in5,
+                u32 *out1, u32 *out2, u32 *out3,
+                u32 *out4, u32 *out5)
+{
+        unsigned long out0;
+
+        switch (vmware_hypercall_mode) {
+        case CPUID_VMWARE_FEATURES_ECX_VMCALL:
+                asm_inline volatile ("vmcall"
+                        : "=a" (out0), "=b" (*out1), "=c" (*out2),
+                          "=d" (*out3), "=S" (*out4), "=D" (*out5)
+                        : "a" (VMWARE_HYPERVISOR_MAGIC),
+                          "b" (in1),
+                          "c" (cmd),
+                          "d" (in3),
+                          "S" (in4),
+                          "D" (in5)
+                        : "cc", "memory");
+                break;
+        case CPUID_VMWARE_FEATURES_ECX_VMMCALL:
+                asm_inline volatile ("vmmcall"
+                        : "=a" (out0), "=b" (*out1), "=c" (*out2),
+                          "=d" (*out3), "=S" (*out4), "=D" (*out5)
+                        : "a" (VMWARE_HYPERVISOR_MAGIC),
+                          "b" (in1),
+                          "c" (cmd),
+                          "d" (in3),
+                          "S" (in4),
+                          "D" (in5)
+                        : "cc", "memory");
+                break;
+        default:
+                asm_inline volatile ("movw %[port], %%dx; inl (%%dx), %%eax"
+                        : "=a" (out0), "=b" (*out1), "=c" (*out2),
+                          "=d" (*out3), "=S" (*out4), "=D" (*out5)
+                        : [port] "i" (VMWARE_HYPERVISOR_PORT),
+                          "a" (VMWARE_HYPERVISOR_MAGIC),
+                          "b" (in1),
+                          "c" (cmd),
+                          "d" (in3),
+                          "S" (in4),
+                          "D" (in5)
+                        : "cc", "memory");
+                break;
+        }
+        return out0;
+}
+
 static inline int __vmware_platform(void)
 {
 	uint32_t eax, ebx, ecx, edx;
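To make the calling convention above concrete, here is a minimal, illustrative sketch of how guest platform code might use the new low-bandwidth API once this patch is applied. The function name vmware_platform_present() and the success checks are assumptions made for illustration; only vmware_hypercall3(), VMWARE_CMD_GETVERSION and VMWARE_HYPERVISOR_MAGIC come from the patch, and the expectation that GETVERSION echoes the magic value back in %ebx follows long-standing VMware backdoor usage rather than anything introduced here.

/* Illustrative sketch only -- not part of the patch. */
#include <linux/types.h>
#include <linux/limits.h>
#include <asm/vmware.h>

static bool vmware_platform_present(void)
{
        u32 magic, prod_type;
        unsigned long version;

        /*
         * vmware_hypercall3() loads %eax with VMWARE_HYPERVISOR_MAGIC and
         * %ecx with the command internally; the caller passes the command,
         * one input argument (%ebx) and receives %ebx/%ecx as outputs.
         */
        version = vmware_hypercall3(VMWARE_CMD_GETVERSION, 0,
                                    &magic, &prod_type);

        /* A VMware hypervisor is expected to echo the magic back in %ebx. */
        return (u32)version != U32_MAX && magic == VMWARE_HYPERVISOR_MAGIC;
}

The same pattern extends to the other vmware_hypercallN() helpers; only the number of register inputs and outputs changes, and the vmware_hypercall_slow() fallback is taken automatically while alternatives are not yet patched.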
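The comment above vmware_hypercall_hb_out()/vmware_hypercall_hb_in() leaves the memory-encryption check to the caller. Below is a hedged sketch of that pattern; the helper vmw_send_buf_hb(), its channel argument and the error handling are invented for illustration, and the meaning of the value returned in %ebx is command-specific rather than defined by this patch.

/* Illustrative sketch only -- not part of the patch. */
#include <linux/types.h>
#include <linux/errno.h>
#include <linux/cc_platform.h>
#include <asm/vmware.h>

static int vmw_send_buf_hb(unsigned long cmd, unsigned long channel,
                           const void *buf, size_t len, u32 *status)
{
        /*
         * Per the header comment, HB hypercalls are not supported on
         * encrypted memory guests; the caller is expected to fall back
         * to the low-bandwidth commands in that case.
         */
        if (cc_platform_has(CC_ATTR_MEM_ENCRYPT))
                return -EOPNOTSUPP;

        /*
         * %ecx (in2) carries the byte count and %esi (in4) the source
         * buffer for the "rep outsb" issued by vmware_hypercall_hb_out();
         * %edx (in3) carries the channel bits, with the HB port number
         * ORed in by the helper itself.
         */
        vmware_hypercall_hb_out(cmd, len, channel, (unsigned long)buf,
                                0, 0, status);
        return 0;
}

When memory encryption is active, the data would instead be moved a register-sized chunk at a time through the low-bandwidth calls; that sequence is command-specific and therefore not sketched here.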