
KVM: SVM: Use 'unsigned long' for the physical address passed to VMSAVE

Message ID 20210202223416.2702336-1-seanjc@google.com (mailing list archive)
State New, archived
Series KVM: SVM: Use 'unsigned long' for the physical address passed to VMSAVE

Commit Message

Sean Christopherson Feb. 2, 2021, 10:34 p.m. UTC
Take an 'unsigned long' instead of 'hpa_t' in the recently added vmsave()
helper, as loading a 64-bit GPR isn't possible in 32-bit mode.  This is
properly reflected in the SVM ISA, which explicitly states that VMSAVE,
VMLOAD, VMRUN, etc... consume rAX based on the effective address size.

Don't bother with a WARN to detect breakage on 32-bit KVM, the VMCB PA is
stored as an 'unsigned long', i.e. the bad address is long since gone.
Not to mention that a 32-bit kernel is completely hosed if alloc_page()
hands out pages in high memory.

Reported-by: kernel test robot <lkp@intel.com>
Cc: Robert Hu <robert.hu@intel.com>
Cc: Farrah Chen <farrah.chen@intel.com>
Cc: Danmei Wei <danmei.wei@intel.com>
Cc: Tom Lendacky <Thomas.Lendacky@amd.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/svm/svm_ops.h | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)
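
To make the address-size point concrete, below is a minimal compile-only sketch (hypothetical helper name; not the kernel's actual svm_asm1 machinery, which also carries exception-table handling for an unexpected #UD).  VMSAVE, opcode 0F 01 DB, consumes the VMCB physical address from rAX, truncated to eAX when the effective address size is 32 bits, so 'unsigned long' maps cleanly onto the single "a" register on both 32-bit and 64-bit builds.

/*
 * Hedged sketch, not the kernel's svm_asm1 macro: hand-encode VMSAVE
 * (0F 01 DB) and feed it the VMCB physical address via the "a"
 * constraint, i.e. eAX on a 32-bit build, rAX on a 64-bit build.
 */
static inline void vmsave_sketch(unsigned long pa)
{
	asm volatile(".byte 0x0f, 0x01, 0xdb"	/* VMSAVE */
		     : : "a" (pa) : "memory");
}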

Comments

Sean Christopherson Feb. 2, 2021, 10:36 p.m. UTC | #1
On Tue, Feb 02, 2021, Sean Christopherson wrote:
> Take an 'unsigned long' instead of 'hpa_t' in the recently added vmsave()
> helper, as loading a 64-bit GPR isn't possible in 32-bit mode.  This is
> properly reflected in the SVM ISA, which explicitly states that VMSAVE,
> VMLOAD, VMRUN, etc... consume rAX based on the effective address size.
> 
> Don't bother with a WARN to detect breakage on 32-bit KVM, the VMCB PA is
> stored as an 'unsigned long', i.e. the bad address is long since gone.
> Not to mention that a 32-bit kernel is completely hosed if alloc_page()
> hands out pages in high memory.
> 
> Reported-by: kernel test robot <lkp@intel.com>
> Cc: Robert Hu <robert.hu@intel.com>
> Cc: Farrah Chen <farrah.chen@intel.com>
> Cc: Danmei Wei <danmei.wei@intel.com>
> Cc: Tom Lendacky <Thomas.Lendacky@amd.com>
> Signed-off-by: Sean Christopherson <seanjc@google.com>

Forgot the Fixes tag.  Or just squash this.

Fixes: f84a54c04540 ("KVM: SVM: Use asm goto to handle unexpected #UD on SVM instructions")

> ---
>  arch/x86/kvm/svm/svm_ops.h | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/kvm/svm/svm_ops.h b/arch/x86/kvm/svm/svm_ops.h
> index 0c8377aee52c..9f007bc8409a 100644
> --- a/arch/x86/kvm/svm/svm_ops.h
> +++ b/arch/x86/kvm/svm/svm_ops.h
> @@ -51,7 +51,12 @@ static inline void invlpga(unsigned long addr, u32 asid)
>  	svm_asm2(invlpga, "c"(asid), "a"(addr));
>  }
>  
> -static inline void vmsave(hpa_t pa)
> +/*
> + * Despite being a physical address, the portion of rAX that is consumed by
> + * VMSAVE, VMLOAD, etc... is still controlled by the effective address size,
> + * hence 'unsigned long' instead of 'hpa_t'.
> + */
> +static inline void vmsave(unsigned long pa)
>  {
>  	svm_asm1(vmsave, "a" (pa), "memory");
>  }
> -- 
> 2.30.0.365.g02bc693789-goog
>
Paolo Bonzini Feb. 3, 2021, 8:03 a.m. UTC | #2
On 02/02/21 23:34, Sean Christopherson wrote:
> diff --git a/arch/x86/kvm/svm/svm_ops.h b/arch/x86/kvm/svm/svm_ops.h
> index 0c8377aee52c..9f007bc8409a 100644
> --- a/arch/x86/kvm/svm/svm_ops.h
> +++ b/arch/x86/kvm/svm/svm_ops.h
> @@ -51,7 +51,12 @@ static inline void invlpga(unsigned long addr, u32 asid)
>  	svm_asm2(invlpga, "c"(asid), "a"(addr));
>  }
>  
> -static inline void vmsave(hpa_t pa)
> +/*
> + * Despite being a physical address, the portion of rAX that is consumed by
> + * VMSAVE, VMLOAD, etc... is still controlled by the effective address size,
> + * hence 'unsigned long' instead of 'hpa_t'.
> + */
> +static inline void vmsave(unsigned long pa)
>  {
>  	svm_asm1(vmsave, "a" (pa), "memory");
>  }
> -- 
> 2.30.0.365.g02bc693789-goog
> 

Squashed, thanks.

Paolo

Patch

diff --git a/arch/x86/kvm/svm/svm_ops.h b/arch/x86/kvm/svm/svm_ops.h
index 0c8377aee52c..9f007bc8409a 100644
--- a/arch/x86/kvm/svm/svm_ops.h
+++ b/arch/x86/kvm/svm/svm_ops.h
@@ -51,7 +51,12 @@ static inline void invlpga(unsigned long addr, u32 asid)
 	svm_asm2(invlpga, "c"(asid), "a"(addr));
 }
 
-static inline void vmsave(hpa_t pa)
+/*
+ * Despite being a physical address, the portion of rAX that is consumed by
+ * VMSAVE, VMLOAD, etc... is still controlled by the effective address size,
+ * hence 'unsigned long' instead of 'hpa_t'.
+ */
+static inline void vmsave(unsigned long pa)
 {
 	svm_asm1(vmsave, "a" (pa), "memory");
 }
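
For reference, a rough standalone illustration of the pre-patch failure mode (hypothetical code, not the kernel test robot's actual reproducer): with the parameter typed as 'hpa_t', an x86-64 build is fine, but a 32-bit build cannot fit the 64-bit value into the single GPR demanded by the "a" constraint.

#include <stdint.h>

typedef uint64_t hpa_t;	/* the kernel's hpa_t is a 64-bit type even on 32-bit */

/*
 * Pre-patch shape: compiles on x86-64, but with -m32 the 64-bit 'pa'
 * cannot satisfy the single-register "a" constraint and the build
 * breaks (the exact diagnostic varies by compiler), which is the
 * breakage the patch fixes by switching to 'unsigned long'.
 */
static inline void vmsave_broken(hpa_t pa)
{
	asm volatile(".byte 0x0f, 0x01, 0xdb"	/* VMSAVE */
		     : : "a" (pa) : "memory");
}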