Message ID | 20221209044557.1496580-6-robert.hu@linux.intel.com
---|---
State | New, archived
Series | Linear Address Masking (LAM) KVM Enabling
On Fri, Dec 09, 2022 at 12:45:53PM +0800, Robert Hoo wrote:
> When calc the new CR3 value, take LAM bits in.
>
> Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
> Reviewed-by: Jingqi Liu <jingqi.liu@intel.com>
> ---
>  arch/x86/kvm/mmu.h     | 5 +++++
>  arch/x86/kvm/vmx/vmx.c | 3 ++-
>  2 files changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
> index 6bdaacb6faa0..866f2b7cb509 100644
> --- a/arch/x86/kvm/mmu.h
> +++ b/arch/x86/kvm/mmu.h
> @@ -142,6 +142,11 @@ static inline unsigned long kvm_get_active_pcid(struct kvm_vcpu *vcpu)
>  	return kvm_get_pcid(vcpu, kvm_read_cr3(vcpu));
>  }
>
> +static inline u64 kvm_get_active_lam(struct kvm_vcpu *vcpu)
> +{

Unlike the PCIDs, the LAM bits in CR3 are not shared with other features (e.g. PCID vs. non-PCID on bits 0:11), so not checking CR4[28] here should be fine; otherwise, following kvm_get_pcid() would look better.

> +	return kvm_read_cr3(vcpu) & (X86_CR3_LAM_U48 | X86_CR3_LAM_U57);
> +}
> +
>  static inline void kvm_mmu_load_pgd(struct kvm_vcpu *vcpu)
>  {
>  	u64 root_hpa = vcpu->arch.mmu->root.hpa;
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index cfa06c7c062e..9985dbb63e7b 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -3261,7 +3261,8 @@ static void vmx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa,
>  		update_guest_cr3 = false;
>  		vmx_ept_load_pdptrs(vcpu);
>  	} else {
> -		guest_cr3 = root_hpa | kvm_get_active_pcid(vcpu);
> +		guest_cr3 = root_hpa | kvm_get_active_pcid(vcpu) |
> +			    kvm_get_active_lam(vcpu);
>  	}
>
>  	if (update_guest_cr3)
> --
> 2.31.1
On Mon, 2022-12-19 at 14:53 +0800, Yuan Yao wrote:
> On Fri, Dec 09, 2022 at 12:45:53PM +0800, Robert Hoo wrote:
> > When calc the new CR3 value, take LAM bits in.
> > [...]
> > +static inline u64 kvm_get_active_lam(struct kvm_vcpu *vcpu)
> > +{
>
> Unlike the PCIDs, the LAM bits in CR3 are not shared with other
> features (e.g. PCID vs. non-PCID on bits 0:11), so not checking
> CR4[28] here should be fine; otherwise, following kvm_get_pcid()
> would look better.
>
No. CR4.LAM_SUP isn't an enablement switch over CR3.LAM_U{48,57}; they have a parallel relationship: CR4.LAM_SUP controls whether supervisor-mode addresses have LAM, while CR3.LAM_U{48,57} controls LAM enablement for user-mode addresses.

> > +	return kvm_read_cr3(vcpu) & (X86_CR3_LAM_U48 | X86_CR3_LAM_U57);
On Tue, Dec 20, 2022 at 10:07:33PM +0800, Robert Hoo wrote:
> On Mon, 2022-12-19 at 14:53 +0800, Yuan Yao wrote:
> > [...]
> > Unlike the PCIDs, the LAM bits in CR3 are not shared with other
> > features (e.g. PCID vs. non-PCID on bits 0:11), so not checking
> > CR4[28] here should be fine; otherwise, following kvm_get_pcid()
> > would look better.
> >
> No. CR4.LAM_SUP isn't an enablement switch over CR3.LAM_U{48,57};
> they have a parallel relationship: CR4.LAM_SUP controls whether
> supervisor-mode addresses have LAM, while CR3.LAM_U{48,57} controls
> LAM enablement for user-mode addresses.

That's right, I didn't realize this at the time, thanks. :-)

> > > +	return kvm_read_cr3(vcpu) & (X86_CR3_LAM_U48 | X86_CR3_LAM_U57);
> No. CR4.LAM_SUP isn't an enablement switch over CR3.LAM_U{48,57};
> they have a parallel relationship: CR4.LAM_SUP controls whether
> supervisor-mode addresses have LAM, while CR3.LAM_U{48,57} controls
> LAM enablement for user-mode addresses.

Unfortunately, the spec (the one in your cover letter) has a bug in "10.1 ENUMERATION, ENABLING, AND CONFIGURATION":

  CR4.LAM_SUP enables and configures LAM for supervisor pointers:
  • If CR3.LAM_SUP = 0, LAM is not enabled for supervisor pointers.
  • If CR3.LAM_SUP = 1, LAM is enabled for supervisor pointers with a
    width determined by the paging mode:

Based on the context, I think "CR3.LAM_SUP" should be "CR4.LAM_SUP". I believe it could just be a typo, but it is confusing enough.

B.R.
Yu
On Wed, 2022-12-21 at 15:50 +0800, Yu Zhang wrote:
> Unfortunately, the spec (the one in your cover letter) has a bug in
> "10.1 ENUMERATION, ENABLING, AND CONFIGURATION":
> [...]
> Based on the context, I think "CR3.LAM_SUP" should be "CR4.LAM_SUP".
>
> I believe it could just be a typo.

Ah, right, I hold the same belief. We can report it to the ISE author ;-)

> But it is confusing enough.
>
> B.R.
> Yu
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 6bdaacb6faa0..866f2b7cb509 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -142,6 +142,11 @@ static inline unsigned long kvm_get_active_pcid(struct kvm_vcpu *vcpu)
 	return kvm_get_pcid(vcpu, kvm_read_cr3(vcpu));
 }
 
+static inline u64 kvm_get_active_lam(struct kvm_vcpu *vcpu)
+{
+	return kvm_read_cr3(vcpu) & (X86_CR3_LAM_U48 | X86_CR3_LAM_U57);
+}
+
 static inline void kvm_mmu_load_pgd(struct kvm_vcpu *vcpu)
 {
 	u64 root_hpa = vcpu->arch.mmu->root.hpa;
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index cfa06c7c062e..9985dbb63e7b 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -3261,7 +3261,8 @@ static void vmx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa,
 		update_guest_cr3 = false;
 		vmx_ept_load_pdptrs(vcpu);
 	} else {
-		guest_cr3 = root_hpa | kvm_get_active_pcid(vcpu);
+		guest_cr3 = root_hpa | kvm_get_active_pcid(vcpu) |
+			    kvm_get_active_lam(vcpu);
 	}
 
 	if (update_guest_cr3)