
[v4,4/9] KVM: x86: MMU: Integrate LAM bits when building guest CR3

Message ID 20230209024022.3371768-5-robert.hu@linux.intel.com (mailing list archive)
State New, archived
Series Linear Address Masking (LAM) KVM Enabling

Commit Message

Robert Hoo Feb. 9, 2023, 2:40 a.m. UTC
When calculating the new CR3 value, include the LAM bits.

Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
Reviewed-by: Jingqi Liu <jingqi.liu@intel.com>
---
 arch/x86/kvm/mmu.h     | 5 +++++
 arch/x86/kvm/vmx/vmx.c | 3 ++-
 2 files changed, 7 insertions(+), 1 deletion(-)

Comments

Chao Gao Feb. 10, 2023, 2:04 p.m. UTC | #1
On Thu, Feb 09, 2023 at 10:40:17AM +0800, Robert Hoo wrote:
>When calculating the new CR3 value, include the LAM bits.

I prefer to merge this one into patch 2 because both are related to
CR3_LAM_U48/U57 handling. Merging them can give us the whole picture of
how the new LAM bits are handled:
* strip them from CR3 when allocating/finding a shadow root
* stitch them with other fields to form a shadow CR3
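
For reference, a rough sketch of those two halves as helpers (the helper names here are made up purely for illustration; only kvm_get_active_pcid()/kvm_get_active_lam() come from the series):

	/* 1) Strip: drop the LAM control bits before treating CR3 as a
	 *    page-table address, e.g. when allocating/finding a shadow
	 *    root (the patch 2 side of the handling).
	 */
	static inline unsigned long kvm_lam_strip_cr3(unsigned long cr3)
	{
		return cr3 & ~(X86_CR3_LAM_U48 | X86_CR3_LAM_U57);
	}

	/* 2) Stitch: put the LAM bits back when composing the CR3 value
	 *    the guest observes, mirroring the vmx_load_mmu_pgd() hunk
	 *    in this patch.
	 */
	static inline u64 kvm_lam_build_guest_cr3(struct kvm_vcpu *vcpu,
						  hpa_t root_hpa)
	{
		return root_hpa | kvm_get_active_pcid(vcpu) |
		       kvm_get_active_lam(vcpu);
	}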

I have a couple questions:
1. in kvm_set_cr3(), 

        /* PDPTRs are always reloaded for PAE paging. */
        if (cr3 == kvm_read_cr3(vcpu) && !is_pae_paging(vcpu))
                goto handle_tlb_flush;

Shouldn't we strip off CR3_LAM_U48/U57 and do the comparison?
It depends on whether toggling CR3_LAM_U48/U57 causes a TLB flush.

2. also in kvm_set_cr3(),

        if (cr3 != kvm_read_cr3(vcpu))
                kvm_mmu_new_pgd(vcpu, cr3);

is it necessary to use a new pgd if only CR3_LAM_U48/U57 were changed?
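
To make the two questions concrete, here is a hypothetical variant of those kvm_set_cr3() checks with the LAM bits masked off (illustration only, not code from the posted series):

	unsigned long lam_mask = X86_CR3_LAM_U48 | X86_CR3_LAM_U57;

	/* 1: ignore the LAM bits when deciding whether the PDPTR-reload /
	 *    TLB-flush shortcut can be taken.
	 */
	if ((cr3 & ~lam_mask) == (kvm_read_cr3(vcpu) & ~lam_mask) &&
	    !is_pae_paging(vcpu))
		goto handle_tlb_flush;

	/* 2: switch to a new pgd only when the address bits differ, i.e.
	 *    skip kvm_mmu_new_pgd() when just the LAM bits toggled.
	 */
	if ((cr3 & ~lam_mask) != (kvm_read_cr3(vcpu) & ~lam_mask))
		kvm_mmu_new_pgd(vcpu, cr3 & ~lam_mask);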

>
>Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
>Reviewed-by: Jingqi Liu <jingqi.liu@intel.com>
>---
> arch/x86/kvm/mmu.h     | 5 +++++
> arch/x86/kvm/vmx/vmx.c | 3 ++-
> 2 files changed, 7 insertions(+), 1 deletion(-)
>
>diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
>index 6bdaacb6faa0..866f2b7cb509 100644
>--- a/arch/x86/kvm/mmu.h
>+++ b/arch/x86/kvm/mmu.h
>@@ -142,6 +142,11 @@ static inline unsigned long kvm_get_active_pcid(struct kvm_vcpu *vcpu)
> 	return kvm_get_pcid(vcpu, kvm_read_cr3(vcpu));
> }
> 
>+static inline u64 kvm_get_active_lam(struct kvm_vcpu *vcpu)
>+{
>+	return kvm_read_cr3(vcpu) & (X86_CR3_LAM_U48 | X86_CR3_LAM_U57);
>+}
>+
> static inline void kvm_mmu_load_pgd(struct kvm_vcpu *vcpu)
> {
> 	u64 root_hpa = vcpu->arch.mmu->root.hpa;
>diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
>index fe5615fd8295..66edd091f145 100644
>--- a/arch/x86/kvm/vmx/vmx.c
>+++ b/arch/x86/kvm/vmx/vmx.c
>@@ -3289,7 +3289,8 @@ static void vmx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa,
> 			update_guest_cr3 = false;
> 		vmx_ept_load_pdptrs(vcpu);
> 	} else {
>-		guest_cr3 = root_hpa | kvm_get_active_pcid(vcpu);
>+		guest_cr3 = root_hpa | kvm_get_active_pcid(vcpu) |
>+			    kvm_get_active_lam(vcpu);
> 	}
> 
> 	if (update_guest_cr3)
>-- 
>2.31.1
>
Robert Hoo Feb. 11, 2023, 6:24 a.m. UTC | #2
On Fri, 2023-02-10 at 22:04 +0800, Chao Gao wrote:
> On Thu, Feb 09, 2023 at 10:40:17AM +0800, Robert Hoo wrote:
> > When calculating the new CR3 value, include the LAM bits.
> 
> I prefer to merge this one into patch 2 because both are related to
> CR3_LAM_U48/U57 handling. Merging them can give us the whole picture
> of
> how the new LAM bits are handled:
> * strip them from CR3 when allocating/finding a shadow root
> * stitch them with other fields to form a shadow CR3

OK
> 
> I have a couple questions:
> 1. in kvm_set_cr3(), 
> 
>         /* PDPTRs are always reloaded for PAE paging. */
>         if (cr3 == kvm_read_cr3(vcpu) && !is_pae_paging(vcpu))
>                 goto handle_tlb_flush;
> 
> Shouldn't we strip off CR3_LAM_U48/U57 and do the comparison?

Here the check is the stringent one, i.e. it includes the LAM bits.
Whether the LAM bits are allowed to be set at all is checked just below:

	if (!guest_cpuid_has(vcpu, X86_FEATURE_LAM) &&
	    (cr3 & (X86_CR3_LAM_U48 | X86_CR3_LAM_U57)))
		return	1;

The case where only the LAM bits toggle is handled in:

	if (cr3 != old_cr3) {
		if ((cr3 ^ old_cr3) & CR3_ADDR_MASK) {
			kvm_mmu_new_pgd(vcpu, cr3 & ~(X86_CR3_LAM_U48 |
					X86_CR3_LAM_U57));
		} else {
			/*
			 * Though the effective address is unchanged,
			 * mark the request so that the LAM bits take
			 * effect when entering the guest.
			 */
			kvm_make_request(KVM_REQ_LOAD_MMU_PGD, vcpu);
		}
	}

> It depends on whether toggling CR3_LAM_U48/U57 causes a TLB flush.

In v1, Kirill and I discussed this. He leaned toward being
conservative about TLB flushes.
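
A hypothetical sketch of that conservative option in kvm_set_cr3(); the specific flush request used here is an assumption, not code from the series:

	/* Conservatively request a guest TLB flush whenever the LAM bits
	 * toggle, even though the page-table address itself is unchanged.
	 */
	if ((cr3 ^ old_cr3) & (X86_CR3_LAM_U48 | X86_CR3_LAM_U57))
		kvm_make_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu);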
> 
> 2. also in kvm_set_cr3(),
> 
>         if (cr3 != kvm_read_cr3(vcpu))
>                 kvm_mmu_new_pgd(vcpu, cr3);
> 
> is it necessary to use a new pgd if only CR3_LAM_U48/U57 were
> changed?
> 
Haven't you applied my patch? It isn't like that; it's like below:
	if ((cr3 ^ old_cr3) & CR3_ADDR_MASK) {
		kvm_mmu_new_pgd(vcpu, cr3 & ~(X86_CR3_LAM_U48 |
				X86_CR3_LAM_U57));
	}

kvm_mmu_new_pgd() is called only if the effective pgd address bits change.
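
For completeness, the KVM_REQ_LOAD_MMU_PGD request made above is serviced on the next VM entry, where vmx_load_mmu_pgd() rebuilds the guest CR3 with the (possibly new) LAM bits ORed in; condensed from this patch:

	guest_cr3 = root_hpa | kvm_get_active_pcid(vcpu) |
		    kvm_get_active_lam(vcpu);
	...
	if (update_guest_cr3)
		vmcs_writel(GUEST_CR3, guest_cr3);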
Robert Hoo Feb. 11, 2023, 6:29 a.m. UTC | #3
On Sat, 2023-02-11 at 14:24 +0800, Robert Hoo wrote:
> In v1, Kirill and I discussed this. He leaned toward being
> conservative about TLB flushes.
> > 
https://lore.kernel.org/kvm/20221103024001.wtrj77ekycleq4vc@box.shutemov.name/

Patch

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 6bdaacb6faa0..866f2b7cb509 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -142,6 +142,11 @@  static inline unsigned long kvm_get_active_pcid(struct kvm_vcpu *vcpu)
 	return kvm_get_pcid(vcpu, kvm_read_cr3(vcpu));
 }
 
+static inline u64 kvm_get_active_lam(struct kvm_vcpu *vcpu)
+{
+	return kvm_read_cr3(vcpu) & (X86_CR3_LAM_U48 | X86_CR3_LAM_U57);
+}
+
 static inline void kvm_mmu_load_pgd(struct kvm_vcpu *vcpu)
 {
 	u64 root_hpa = vcpu->arch.mmu->root.hpa;
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index fe5615fd8295..66edd091f145 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -3289,7 +3289,8 @@  static void vmx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa,
 			update_guest_cr3 = false;
 		vmx_ept_load_pdptrs(vcpu);
 	} else {
-		guest_cr3 = root_hpa | kvm_get_active_pcid(vcpu);
+		guest_cr3 = root_hpa | kvm_get_active_pcid(vcpu) |
+			    kvm_get_active_lam(vcpu);
 	}
 
 	if (update_guest_cr3)
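
A brief note on the composed value, as an observation rather than something stated in the thread: for a guest that never sets the LAM bits, the new helper contributes nothing, so the loaded CR3 is unchanged by this patch.

	/* LAM not in use: neither X86_CR3_LAM_U48 nor X86_CR3_LAM_U57 is
	 * set in the guest's CR3, so kvm_get_active_lam(vcpu) == 0 and
	 * guest_cr3 == root_hpa | kvm_get_active_pcid(vcpu), as before.
	 */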