| Message ID | 20230227084547.404871-2-robert.hu@linux.intel.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | Linear Address Masking (LAM) KVM Enabling |
On Mon, Feb 27, 2023 at 04:45:43PM +0800, Robert Hoo wrote:
>LAM feature uses CR4 bit[28] (LAM_SUP) to enable/config LAM masking on
>supervisor mode address. To virtualize that, move CR4.LAM_SUP out of
>unconditional CR4_RESERVED_BITS; its reservation now depends on vCPU has
>LAM feature or not.
>
>Not passing through to guest but intercept it, is to avoid read VMCS field
>every time when KVM fetch its value, with expectation that guest won't
>toggle this bit frequently.
>
>There's no other features/vmx_exec_controls connections, therefore no code
>need to be complemented in kvm/vmx_set_cr4().
>
>Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
>---
> arch/x86/include/asm/kvm_host.h | 3 ++-
> arch/x86/kvm/x86.h              | 2 ++
> 2 files changed, 4 insertions(+), 1 deletion(-)
>
>diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
>index f35f1ff4427b..4684896698f4 100644
>--- a/arch/x86/include/asm/kvm_host.h
>+++ b/arch/x86/include/asm/kvm_host.h
>@@ -125,7 +125,8 @@
> 	  | X86_CR4_PGE | X86_CR4_PCE | X86_CR4_OSFXSR | X86_CR4_PCIDE \
> 	  | X86_CR4_OSXSAVE | X86_CR4_SMEP | X86_CR4_FSGSBASE \
> 	  | X86_CR4_OSXMMEXCPT | X86_CR4_LA57 | X86_CR4_VMXE \
>-	  | X86_CR4_SMAP | X86_CR4_PKE | X86_CR4_UMIP))
>+	  | X86_CR4_SMAP | X86_CR4_PKE | X86_CR4_UMIP \
>+	  | X86_CR4_LAM_SUP))
>
> #define CR8_RESERVED_BITS (~(unsigned long)X86_CR8_TPR)
>
>diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
>index 9de72586f406..8ec5cc983062 100644
>--- a/arch/x86/kvm/x86.h
>+++ b/arch/x86/kvm/x86.h
>@@ -475,6 +475,8 @@ bool kvm_msr_allowed(struct kvm_vcpu *vcpu, u32 index, u32 type);
> 		__reserved_bits |= X86_CR4_VMXE;        \
> 	if (!__cpu_has(__c, X86_FEATURE_PCID))          \
> 		__reserved_bits |= X86_CR4_PCIDE;       \
>+	if (!__cpu_has(__c, X86_FEATURE_LAM))           \
>+		__reserved_bits |= X86_CR4_LAM_SUP;     \
> 	__reserved_bits;                                \
> })

Add X86_CR4_LAM_SUP to cr4_fixed1 in nested_vmx_cr_fixed1_bits_update()
to indicate CR4.LAM_SUP is allowed to be 0 or 1 in VMX operation.
With this fixed, Reviewed-by: Chao Gao <chao.gao@intel.com>
On 3/2/2023 3:17 PM, Chao Gao wrote:
> On Mon, Feb 27, 2023 at 04:45:43PM +0800, Robert Hoo wrote:
[...]
> Add X86_CR4_LAM_SUP to cr4_fixed1 in nested_vmx_cr_fixed1_bits_update()
> to indicate CR4.LAM_SUP is allowed to be 0 or 1 in VMX operation.

Thanks for pointing it out. Will fix it in the next version.

> With this fixed,
>
> Reviewed-by: Chao Gao <chao.gao@intel.com>
On Thu, 2023-03-02 at 15:17 +0800, Chao Gao wrote:
> > diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
> > index 9de72586f406..8ec5cc983062 100644
> > --- a/arch/x86/kvm/x86.h
> > +++ b/arch/x86/kvm/x86.h
> > @@ -475,6 +475,8 @@ bool kvm_msr_allowed(struct kvm_vcpu *vcpu, u32 index, u32 type);
> >  		__reserved_bits |= X86_CR4_VMXE;        \
> >  	if (!__cpu_has(__c, X86_FEATURE_PCID))          \
> >  		__reserved_bits |= X86_CR4_PCIDE;       \
> > +	if (!__cpu_has(__c, X86_FEATURE_LAM))           \
> > +		__reserved_bits |= X86_CR4_LAM_SUP;     \
> >  	__reserved_bits;                                \
> >  })
>
> Add X86_CR4_LAM_SUP to cr4_fixed1 in nested_vmx_cr_fixed1_bits_update()
> to indicate CR4.LAM_SUP is allowed to be 0 or 1 in VMX operation.

Is this going to support nested LAM?

> With this fixed,
>
> Reviewed-by: Chao Gao <chao.gao@intel.com>
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index f35f1ff4427b..4684896698f4 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -125,7 +125,8 @@
 	  | X86_CR4_PGE | X86_CR4_PCE | X86_CR4_OSFXSR | X86_CR4_PCIDE \
 	  | X86_CR4_OSXSAVE | X86_CR4_SMEP | X86_CR4_FSGSBASE \
 	  | X86_CR4_OSXMMEXCPT | X86_CR4_LA57 | X86_CR4_VMXE \
-	  | X86_CR4_SMAP | X86_CR4_PKE | X86_CR4_UMIP))
+	  | X86_CR4_SMAP | X86_CR4_PKE | X86_CR4_UMIP \
+	  | X86_CR4_LAM_SUP))

 #define CR8_RESERVED_BITS (~(unsigned long)X86_CR8_TPR)

diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 9de72586f406..8ec5cc983062 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -475,6 +475,8 @@ bool kvm_msr_allowed(struct kvm_vcpu *vcpu, u32 index, u32 type);
 		__reserved_bits |= X86_CR4_VMXE;        \
 	if (!__cpu_has(__c, X86_FEATURE_PCID))          \
 		__reserved_bits |= X86_CR4_PCIDE;       \
+	if (!__cpu_has(__c, X86_FEATURE_LAM))           \
+		__reserved_bits |= X86_CR4_LAM_SUP;     \
 	__reserved_bits;                                \
 })
The LAM feature uses CR4 bit[28] (LAM_SUP) to enable and configure LAM masking of supervisor-mode addresses. To virtualize this, move CR4.LAM_SUP out of the unconditional CR4_RESERVED_BITS; whether the bit is reserved now depends on whether the vCPU has the LAM feature.

The bit is intercepted rather than passed through to the guest, so that KVM does not have to read the VMCS field every time it fetches the value, on the expectation that the guest won't toggle this bit frequently.

There are no connections to other features or vmx_exec_controls, so no changes are needed in kvm/vmx_set_cr4().

Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
---
 arch/x86/include/asm/kvm_host.h | 3 ++-
 arch/x86/kvm/x86.h              | 2 ++
 2 files changed, 4 insertions(+), 1 deletion(-)