
x86/vmx: Drop memory clobbers on VMX instruction wrappers

Message ID: 20250407104544.1823150-1-andrew.cooper3@citrix.com

Commit Message

Andrew Cooper April 7, 2025, 10:45 a.m. UTC
The use, or not, of memory clobbers on the VMX instructions is complicated.

There are two separate aspects to consider:

1. Originally, the VMX instructions used hardcoded bytes, including memory
   encodings.  Therefore, the compiler couldn't see the correct relationship
   between parameters.  The memory clobber for this purpose should have been
   dropped when switching to mnemonics.

   This covers INVEPT and INVVPID, neither of which changes memory, nor in
   fact the current address space in use.

2. Most of these instructions operate on a VMCS region.  This is a (mostly)
   opaque data structure, operated on by physical address.  Again, this hides
   the relationship between the instructions and the VMCS from the compiler.

   The processor might use internal state to cache the VMCS (writing it back
   to memory on VMCLEAR), or it might operate on memory directly.

   Because the VMCS is opaque (so the compiler has nothing interesting to know
   about it), and because the VMREAD/VMWRITE wrappers already omit a memory
   clobber (which would tell the compiler that something changed), none of
   the other VMX instructions should use a memory clobber either.

   This covers VMXON, VMXOFF, VMPTRLD and VMPTCLEAR.

Make the use of memory clobbers consistent for all VMX instructions.
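
To illustrate the first point, compare the two patterns.  This is a minimal
sketch only: the opcode bytes, register choices and variable names are
illustrative, not the exact historical code.

    /* Old pattern: hardcoded opcode bytes.  The operands are pinned to
     * fixed registers, so the compiler cannot see that the instruction
     * reads the descriptor through memory; the blanket "memory" clobber
     * papered over that invisible dependency. */
    asm volatile ( ".byte 0x66, 0x0f, 0x38, 0x80, 0x11" /* invept (%rcx), %rdx */
                   :: "c" (&operand), "d" (type)
                   : "memory" );

    /* New pattern: the mnemonic with an "m" constraint expresses the
     * read of the descriptor directly, so no clobber is needed. */
    asm volatile ( "invept %[operand], %[type]"
                   :: [operand] "m" (operand), [type] "r" (type) );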

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>

Based on top of the posted cleanup.
---
 xen/arch/x86/hvm/vmx/vmcs.c            | 4 ++--
 xen/arch/x86/include/asm/hvm/vmx/vmx.h | 8 ++++----
 2 files changed, 6 insertions(+), 6 deletions(-)


base-commit: befc384d21784affa3daf2abc85b02500e4dc545
prerequisite-patch-id: 727f158a737c65bca1e4210a6bc132302037f55c
prerequisite-patch-id: 3f88b36cbba121e9893b4c8b52df9aea4f58e8ad
prerequisite-patch-id: c0b199b9ceb1ddd659e27ec8571741bd78ae6265

Comments

Jan Beulich April 7, 2025, noon UTC | #1
On 07.04.2025 12:45, Andrew Cooper wrote:
> The use, or not, of memory clobbers on the VMX instructions is complicated.
> 
> There are two separate aspects to consider:
> 
> 1. Originally, the VMX instructions used hardcoded bytes, including memory
>    encodings.  Therefore, the compiler couldn't see the correct relationship
>    between parameters.  The memory clobber for this purpose should have been
>    dropped when switching to mnemonics.
> 
>    This covers INVEPT and INVVPID, neither of which changes memory, nor in
>    fact the current address space in use.

Yet then they need to come after the respective table modifications.

> 2. Most of these instructions operate on a VMCS region.  This is a (mostly)
>    opaque data structure, operated on by physical address.  Again, this hides
>    the relationship between the instructions and the VMCS from the compiler.
> 
>    The processor might use internal state to cache the VMCS (writing it back
>    to memory on VMCLEAR), or it might operate on memory directly.
> 
>    Because the VMCS is opaque (so the compiler has nothing interesting to know
>    about it), and because the VMREAD/VMWRITE wrappers already omit a memory
>    clobber (which would tell the compiler that something changed), none of
>    the other VMX instructions should use a memory clobber either.

For this, there's actually a good example below, with everything needed in
context.

>    This covers VMXON, VMXOFF, VMPTRLD and VMPTCLEAR.

Nit: The last insn is VMCLEAR.

> --- a/xen/arch/x86/hvm/vmx/vmcs.c
> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
> @@ -754,7 +754,7 @@ static int _vmx_cpu_up(bool bsp)
>                 _ASM_EXTABLE(1b, %l[vmxon_fault])
>                 :
>                 : [addr] "m" (this_cpu(vmxon_region))
> -               : "memory"
> +               :
>                 : vmxon_fail, vmxon_fault );
>  
>      this_cpu(vmxon) = 1;
> @@ -811,7 +811,7 @@ void cf_check vmx_cpu_down(void)
>  
>      BUG_ON(!(read_cr4() & X86_CR4_VMXE));
>      this_cpu(vmxon) = 0;
> -    asm volatile ( "vmxoff" ::: "memory" );
> +    asm volatile ( "vmxoff" );

With the clobber dropped, the compiler is free to re-order the prior store
with the asm(), despite the "volatile", isn't it? [1] This may then be
applicable elsewhere as well.

Jan

[1] Quote: "Note that the compiler can move even volatile asm instructions
            relative to other code, including across jump instructions."
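
In code, the hazard is the following (a sketch; the "m" input below is one
possible way to keep the ordering, not something this patch proposes):

    /* With no clobber and no other dependency, the compiler may sink
     * the store past the volatile asm: */
    this_cpu(vmxon) = 0;
    asm volatile ( "vmxoff" );   /* may be reordered before the store */

    /* An explicit "m" input orders the two without a blanket "memory"
     * clobber, by making the flag's memory an input of the asm: */
    this_cpu(vmxon) = 0;
    asm volatile ( "vmxoff" :: "m" (this_cpu(vmxon)) );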
Andrew Cooper April 7, 2025, 2:57 p.m. UTC | #2
On 07/04/2025 1:00 pm, Jan Beulich wrote:
> On 07.04.2025 12:45, Andrew Cooper wrote:
>> The use, or not, of memory clobbers on the VMX instructions is complicated.
>>
>> There are two separate aspects to consider:
>>
>> 1. Originally, the VMX instructions used hardcoded bytes, including memory
>>    encodings.  Therefore, the compiler couldn't see the correct relationship
>>    between parameters.  The memory clobber for this purpose should have been
>>    dropped when switching to mnemonics.
>>
>>    This covers INVEPT and INVVPID, neither of which changes memory, nor in
>>    fact the current address space in use.
> Yet then they need to come after respective table modifications.

They don't AFAICT, but the reasoning is complicated.  I'll expand on it
in v2.
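
For concreteness, the pattern at issue is roughly this (simplified, with
illustrative names):

    /* If INVEPT had to be ordered after the table write, dropping the
     * clobber would let the compiler sink the store past the
     * invalidation: */
    ept_entry->epte = new_epte;              /* EPT table modification */
    __invept(INVEPT_SINGLE_CONTEXT, eptp);   /* invalidate translations */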

>
>> 2. Most of these instructions operate on a VMCS region.  This is a (mostly)
>>    opaque data structure, operated on by physical address.  Again, this hides
>>    the relationship between the instructions and the VMCS from the compiler.
>>
>>    The processor might use internal state to cache the VMCS (writing it back
>>    to memory on VMCLEAR), or it might operate on memory directly.
>>
>>    Because the VMCS is opaque (so the compiler has nothing interesting to know
>>    about it), and because the VMREAD/VMWRITE wrappers already omit a memory
>>    clobber (which would tell the compiler that something changed), none of
>>    the other VMX instructions should use a memory clobber either.
> For this, there's actually a good example below, with everything needed in
> context.
>
>>    This covers VMXON, VMXOFF, VMPTRLD and VMPTCLEAR.
> Nit: The last insn is VMCLEAR.

Oh, so it is, and we've got an incorrectly named wrapper.

>
>> --- a/xen/arch/x86/hvm/vmx/vmcs.c
>> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
>> @@ -754,7 +754,7 @@ static int _vmx_cpu_up(bool bsp)
>>                 _ASM_EXTABLE(1b, %l[vmxon_fault])
>>                 :
>>                 : [addr] "m" (this_cpu(vmxon_region))
>> -               : "memory"
>> +               :
>>                 : vmxon_fail, vmxon_fault );
>>  
>>      this_cpu(vmxon) = 1;
>> @@ -811,7 +811,7 @@ void cf_check vmx_cpu_down(void)
>>  
>>      BUG_ON(!(read_cr4() & X86_CR4_VMXE));
>>      this_cpu(vmxon) = 0;
>> -    asm volatile ( "vmxoff" ::: "memory" );
>> +    asm volatile ( "vmxoff" );
> With the clobber dropped, the compiler is free to re-order the prior store
> with the asm(), despite the "volatile", isn't it? [1] This may then be
> applicable elsewhere as well.

Yeah, these might better stay as they are.

~Andrew

Patch

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index a44475ae15bd..6daaeaa656a8 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -754,7 +754,7 @@ static int _vmx_cpu_up(bool bsp)
                _ASM_EXTABLE(1b, %l[vmxon_fault])
                :
                : [addr] "m" (this_cpu(vmxon_region))
-               : "memory"
+               :
                : vmxon_fail, vmxon_fault );
 
     this_cpu(vmxon) = 1;
@@ -811,7 +811,7 @@ void cf_check vmx_cpu_down(void)
 
     BUG_ON(!(read_cr4() & X86_CR4_VMXE));
     this_cpu(vmxon) = 0;
-    asm volatile ( "vmxoff" ::: "memory" );
+    asm volatile ( "vmxoff" );
 
     local_irq_restore(flags);
 }
diff --git a/xen/arch/x86/include/asm/hvm/vmx/vmx.h b/xen/arch/x86/include/asm/hvm/vmx/vmx.h
index 33d3d43a3851..86349d4a5431 100644
--- a/xen/arch/x86/include/asm/hvm/vmx/vmx.h
+++ b/xen/arch/x86/include/asm/hvm/vmx/vmx.h
@@ -298,7 +298,7 @@ static always_inline void __vmptrld(u64 addr)
                "jbe %l[vmfail]"
                :
                : [addr] "m" (addr)
-               : "memory"
+               :
                : vmfail );
     return;
 
@@ -312,7 +312,7 @@ static always_inline void __vmpclear(u64 addr)
                "jbe %l[vmfail]"
                :
                : [addr] "m" (addr)
-               : "memory"
+               :
                : vmfail );
     return;
 
@@ -408,7 +408,7 @@ static always_inline void __invept(unsigned long type, uint64_t eptp)
                "jbe %l[vmfail]"
                :
                : [operand] "m" (operand), [type] "r" (type)
-               : "memory"
+               :
                : vmfail );
     return;
 
@@ -430,7 +430,7 @@ static always_inline void __invvpid(unsigned long type, u16 vpid, u64 gva)
                "2:" _ASM_EXTABLE(1b, 2b)
                :
                : [operand] "m" (operand), [type] "r" (type)
-               : "memory"
+               :
                : vmfail );
     return;